From tim.peters at gmail.com  Thu Jun  1 00:19:54 2006
From: tim.peters at gmail.com (Tim Peters)
Date: Wed, 31 May 2006 18:19:54 -0400
Subject: [Python-Dev] Segmentation fault of Python if build on Solaris 9
	or 10 with Sun Studio 11
In-Reply-To: <1149107900.447dfebc603d7@www.domainfactory-webmail.de>
References: <200605301953.38638.Andreas.Floeter@web.de>
	<1149107900.447dfebc603d7@www.domainfactory-webmail.de>
Message-ID: <1f7befae0605311519m36829c75p948661b43e31c7d2@mail.gmail.com>

[MvL, to Andreas Flöter]
> This strictly doesn't belong to python-dev: this is the list where
> you say "I want to help", not so much "I need your help".

LOL!  How true.

> If you want to resolve this yourself, we can guide you through
> that. I would start running the binary in a debugger to find
> out where it crashes. Maybe the bug in Python is easy to see
> from that. But then, maybe the bug is in the compiler, and not
> in Python...

The first or second thing to try is to recompile Python with C
optimization disabled, especially in a case like this where compiling
with gcc instead works fine (which certainly _suggests_ "C compiler
optimization bug").

From oliphant.travis at ieee.org  Thu Jun  1 02:27:50 2006
From: oliphant.travis at ieee.org (Travis E. Oliphant)
Date: Wed, 31 May 2006 18:27:50 -0600
Subject: [Python-Dev] Possible bug in complexobject.c (still in Python 2.5)
In-Reply-To: <8393fff0605310402l4d45101agedbb47b3d8d2c12a@mail.gmail.com>
References: <8393fff0605310402l4d45101agedbb47b3d8d2c12a@mail.gmail.com>
Message-ID: <e5lcaf$irm$1@sea.gmane.org>


I'm curious about the difference between

float_subtype_new  in floatobject.c
complex_subtype_from_c_complex in complexobject.c

The former uses type->tp_alloc(type, 0) to create memory for the object 
while the latter uses PyType_GenericAlloc(type, 0) to create memory for 
the sub-type (thereby by-passing the sub-type's own memory allocator).

It seems like this is a bug.   Shouldn't type->tp_alloc(type, 0) also be 
used in the case of allocating complex subtypes?

This is causing problems in NumPy because we have a complex type that is 
a sub-type of the Python complex scalar.  It sometimes uses the 
complex_new code to generate instances (so that the __new__ interface is 
the same),  but because complex_subtype_from_c_complex is using 
PyType_GenericAlloc this is causing errors.

I can work around this by not calling the __new__ method for the base 
type but this is not consistent.


Thanks for any feedback,

-Travis


From oliphant.travis at ieee.org  Thu Jun  1 02:35:21 2006
From: oliphant.travis at ieee.org (Travis E. Oliphant)
Date: Wed, 31 May 2006 18:35:21 -0600
Subject: [Python-Dev] Possible bug in complexobject.c (still in Python 2.5)
Message-ID: <e5lco9$jia$1@sea.gmane.org>

I'm curious about the difference between

float_subtype_new  in floatobject.c
complex_subtype_from_c_complex in complexobject.c

The former uses type->tp_alloc(type, 0) to create memory for the object
while the latter uses PyType_GenericAlloc(type, 0) to create memory for
the sub-type (thereby by-passing the sub-type's own memory allocator).

It seems like this is a bug.   Shouldn't type->tp_alloc(type, 0) also be
used in the case of allocating complex subtypes?

This is causing problems in NumPy because we have a complex type that is
a sub-type of the Python complex scalar.  It sometimes uses the
complex_new code to generate instances (so that the __new__ interface is
the same),  but because complex_subtype_from_c_complex is using
PyType_GenericAlloc this is causing errors.

I can work around this by not calling the __new__ method for the base
type but this is not consistent.


Thanks for any feedback,

P.S.  Sorry about the cross-posting to another thread.  I must have hit 
reply instead of compose.  Please forgive the noise.


-Travis


From guido at python.org  Thu Jun  1 02:36:10 2006
From: guido at python.org (Guido van Rossum)
Date: Wed, 31 May 2006 17:36:10 -0700
Subject: [Python-Dev] Possible bug in complexobject.c (still in Python
	2.5)
In-Reply-To: <e5lcaf$irm$1@sea.gmane.org>
References: <8393fff0605310402l4d45101agedbb47b3d8d2c12a@mail.gmail.com>
	<e5lcaf$irm$1@sea.gmane.org>
Message-ID: <ca471dc20605311736j762d94c0q81a5664b9adc79e0@mail.gmail.com>

I wouldn't be surprised if this is a genuine bug; the complex type
doesn't get a lot of love from core developers.

Could you come up with a proposed fix, and a unit test showing that it
works (and that the old version doesn't)? (Maybe a good unit test
would require writing a custom C extension; in that case just show
some sample code.)

--Guido

On 5/31/06, Travis E. Oliphant <oliphant.travis at ieee.org> wrote:
>
> I'm curious about the difference between
>
> float_subtype_new  in floatobject.c
> complex_subtype_from_c_complex in complexobject.c
>
> The former uses type->tp_alloc(type, 0) to create memory for the object
> while the latter uses PyType_GenericAlloc(type, 0) to create memory for
> the sub-type (thereby by-passing the sub-type's own memory allocator).
>
> It seems like this is a bug.   Shouldn't type->tp_alloc(type, 0) also be
> used in the case of allocating complex subtypes?
>
> This is causing problems in NumPy because we have a complex type that is
> a sub-type of the Python complex scalar.  It sometimes uses the
> complex_new code to generate instances (so that the __new__ interface is
> the same),  but because complex_subtype_from_c_complex is using
> PyType_GenericAlloc this is causing errors.
>
> I can work around this by not calling the __new__ method for the base
> type but this is not consistent.
>
>
> Thanks for any feedback,
>
> -Travis
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org
>


-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From tim.peters at gmail.com  Thu Jun  1 03:10:47 2006
From: tim.peters at gmail.com (Tim Peters)
Date: Wed, 31 May 2006 21:10:47 -0400
Subject: [Python-Dev] Let's stop eating exceptions in dict lookup
In-Reply-To: <8393fff0605301832s1776723dnfc63620631e7c5fd@mail.gmail.com>
References: <20060529171147.GA1717@code0.codespeak.net>
	<008f01c68354$fc01dd70$dc00000a@RaymondLaptop1>
	<20060529195453.GB14908@code0.codespeak.net>
	<00de01c68363$308d37c0$dc00000a@RaymondLaptop1>
	<20060529213428.GA20141@code0.codespeak.net>
	<447B7621.3040100@ewtllc.com>
	<8393fff0605301832s1776723dnfc63620631e7c5fd@mail.gmail.com>
Message-ID: <1f7befae0605311810i5996980dq313a8cae1725cf25@mail.gmail.com>

[Martin Blais]
> I'm still looking for a benchmark that is not amazingly uninformative
> and crappy.  I've been looking around all day, I even looked under the
> bed, I cannot find it.  I've also been looking around all day as well,
> even looked for it shooting out of the Iceland geysirs, of all
> places--it's always all day out here it seems, day and day-- and I
> still can't find it.  (In the process however, I found Thule beer and
> strangely dressed vikings, which makes it all worthwhile.)

For those who don't know, Martin stayed on in Iceland after the NFS
sprint.  He shows clear signs above of developing photon madness.

    http://www.timeanddate.com/worldclock/astronomy.html?n=211

Where that says "sunset", don't read "dark" -- it just means the top
of the sun dips a bit below the horizon for a few hours.  It never
gets dark this time of year.

If you haven't experienced this, no explanation can convey the
other-worldly sense of it.  Combined with Iceland's astonishing and
beautiful geography, a North American boy (like Martin or me) could
swear they were transported to a different planet.  It's one I'd love
to visit again, but back home for a few days now I still turn the
lights off for about half an hour each night and just sit here
cherishing darkness :-)

From tim.peters at gmail.com  Thu Jun  1 03:20:02 2006
From: tim.peters at gmail.com (Tim Peters)
Date: Wed, 31 May 2006 21:20:02 -0400
Subject: [Python-Dev] Search for empty substrings (was Re: Let's stop eating
	exceptions in dict lookup)
Message-ID: <1f7befae0605311820i42cfae00xb37e2d18e85ce92a@mail.gmail.com>

[Fredrik Lundh]
> would "abc".find("", 100) == 3 be okay?  or should we switch to treating the
> optional start and end positions as "return value boundaries" (used to filter the
> result) rather than "slice directives" (used to process the source string before
> the operation)?  it's all trivial to implement, and has no performance implications,
> but I'm not sure what the consensus really is...

FWIW, I like what you eventually did:

>>> "ab".find("")
0
>>> "ab".find("", 1)
1
>>> "ab".find("", 2)
2
>>> "ab".find("", 3)
-1
>>> "ab".rfind("")
2
>>> "ab".rfind("", 1)
2
>>> "ab".rfind("", 2)
2
>>> "ab".rfind("", 3)
-1

I don't know that a compelling argument can be made for such a
seemingly senseless operation, but the behavior above is at least
consistent with the rule that a string of length n has exactly n+1
empty substrings, at 0:0, 1:1, ..., and n:n.
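
The same rule, restated as a quick interactive check (illustrative only):

>>> s = "ab"
>>> [s.find("", i) for i in range(len(s) + 2)]
[0, 1, 2, -1]
>>> [s.rfind("", i) for i in range(len(s) + 2)]
[2, 2, 2, -1]
>>> [s[i:i] for i in range(len(s) + 1)]   # the n+1 empty substrings
['', '', '']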

From blais at furius.ca  Thu Jun  1 04:24:08 2006
From: blais at furius.ca (Martin Blais)
Date: Wed, 31 May 2006 22:24:08 -0400
Subject: [Python-Dev] [Python-checkins] r46300 - in python/trunk:
	Lib/socket.py Lib/test/test_socket.py Lib/test/test_struct.py
	Modules/_struct.c Modules/arraymodule.c Modules/socketmodule.c
In-Reply-To: <ca471dc20605310710s197e412bv9bd21ff62a829548@mail.gmail.com>
References: <20060526120329.9A9671E400C@bag.python.org>
	<ca471dc20605260956t35aafa71tcad19b926c0c4b41@mail.gmail.com>
	<CD67B679-50C4-4398-81BF-40F3ED6B106B@redivi.com>
	<ca471dc20605291259h561526cek9dad74288ae0976b@mail.gmail.com>
	<8393fff0605310335o12b2b26ew43ba63cfd69c26e9@mail.gmail.com>
	<ca471dc20605310710s197e412bv9bd21ff62a829548@mail.gmail.com>
Message-ID: <8393fff0605311924n23bf5971ve39669cfffa9bdfc@mail.gmail.com>

On 5/31/06, Guido van Rossum <guido at python.org> wrote:
> On 5/31/06, Martin Blais <blais at furius.ca> wrote:
> > So I assume you're suggesting the following renames:
> >
> >   pack_to -> packinto
> >   recv_buf -> recvinto
> >   recvfrom_buf -> recvfrominto
> >
> > (I don't like that last one very much.
> > I'll go ahead and make those renames once I return.)
>
> You could add an underscore before _into.

Will do!
cheers,

From guido at python.org  Thu Jun  1 04:30:35 2006
From: guido at python.org (Guido van Rossum)
Date: Wed, 31 May 2006 19:30:35 -0700
Subject: [Python-Dev] Search for empty substrings (was Re: Let's stop
	eating exceptions in dict lookup)
In-Reply-To: <1f7befae0605311820i42cfae00xb37e2d18e85ce92a@mail.gmail.com>
References: <1f7befae0605311820i42cfae00xb37e2d18e85ce92a@mail.gmail.com>
Message-ID: <ca471dc20605311930g522ce8e1m6db8d46cd0b4e391@mail.gmail.com>

On 5/31/06, Tim Peters <tim.peters at gmail.com> wrote:
> [Fredrik Lundh]
> > would "abc".find("", 100) == 3 be okay?  or should we switch to treating the
> > optional start and end positions as "return value boundaries" (used to filter the
> > result) rather than "slice directives" (used to process the source string before
> > the operation)?  it's all trivial to implement, and has no performance implications,
> > but I'm not sure what the consensus really is...
>
> FWIW, I like what you eventually did:
>
> >>> "ab".find("")
> 0
> >>> "ab".find("", 1)
> 1
> >>> "ab".find("", 2)
> 2
> >>> "ab".find("", 3)
> -1
> >>> "ab".rfind("")
> 2
> >>> "ab".rfind("", 1)
> 2
> >>> "ab".rfind("", 2)
> 2
> >>> "ab".rfind("", 3)
> -1
>
> I don't know that a compelling argument can be made for such a
> seemingly senseless operation, but the behavior above is at least
> consistent with the rule that a string of length n has exactly n+1
> empty substrings, at 0:0, 1:1, ..., and n:n.

Yes. Bravo!

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From skip at pobox.com  Thu Jun  1 04:46:29 2006
From: skip at pobox.com (skip at pobox.com)
Date: Wed, 31 May 2006 21:46:29 -0500
Subject: [Python-Dev] Possible bug in complexobject.c (still in Python
	2.5)
In-Reply-To: <ca471dc20605311736j762d94c0q81a5664b9adc79e0@mail.gmail.com>
References: <8393fff0605310402l4d45101agedbb47b3d8d2c12a@mail.gmail.com>
	<e5lcaf$irm$1@sea.gmane.org>
	<ca471dc20605311736j762d94c0q81a5664b9adc79e0@mail.gmail.com>
Message-ID: <17534.21765.170409.362693@montanaro.dyndns.org>


    Guido> (Maybe a good unit test would require writing a custom C
    Guido> extension; in that case just show some sample code.)

Isn't that what Modules/_testcapimodule.c is for?

Skip

From oliphant.travis at ieee.org  Thu Jun  1 05:59:03 2006
From: oliphant.travis at ieee.org (Travis E. Oliphant)
Date: Wed, 31 May 2006 21:59:03 -0600
Subject: [Python-Dev] Possible bug in complexobject.c (still in Python
	2.5)
In-Reply-To: <ca471dc20605311736j762d94c0q81a5664b9adc79e0@mail.gmail.com>
References: <8393fff0605310402l4d45101agedbb47b3d8d2c12a@mail.gmail.com>	<e5lcaf$irm$1@sea.gmane.org>
	<ca471dc20605311736j762d94c0q81a5664b9adc79e0@mail.gmail.com>
Message-ID: <e5lomn$n0u$1@sea.gmane.org>

Guido van Rossum wrote:
> I wouldn't be surprised if this is a genuine bug; the complex type
> doesn't get a lot of love from core developers.
> 
> Could you come up with a proposed fix, and a unit test showing that it
> works (and that the old version doesn't)? (Maybe a good unit test
> would require writing a custom C extension; in that case just show
> some sample code.)


Attached is a sample module that exposes the problem.  The problem goes 
away by replacing

op = PyType_GenericAlloc(type, 0);

with

op = type->tp_alloc(type, 0);

in the function

complex_subtype_from_c_complex

in the file complexobject.c  (about line #191).



The problem with a unit test is that it might not fail.  On my Linux 
system, it doesn't complain about the problem unless I first use strict 
pointer checking with

export MALLOC_CHECK_=2

Then the code

import bugtest
a = bugtest.newcomplex(3.0)
del a

Aborts

Valgrind also shows the error when running the simple code. It seems 
pretty clear to me that the subtype code should be calling the sub-type's
tp_alloc code instead of the generic one.


Best regards,

-Travis










-------------- next part --------------
A non-text attachment was scrubbed...
Name: bugtest.c
Type: text/x-csrc
Size: 1035 bytes
Desc: not available
Url : http://mail.python.org/pipermail/python-dev/attachments/20060531/db6f3342/attachment-0001.c 

From oliphant.travis at ieee.org  Thu Jun  1 06:27:40 2006
From: oliphant.travis at ieee.org (Travis E. Oliphant)
Date: Wed, 31 May 2006 22:27:40 -0600
Subject: [Python-Dev] Possible bug in complexobject.c (still in Python
	2.5)
In-Reply-To: <e5lco9$jia$1@sea.gmane.org>
References: <e5lco9$jia$1@sea.gmane.org>
Message-ID: <e5lqbv$ql0$1@sea.gmane.org>

Travis E. Oliphant wrote:
> I'm curious about the difference between
> 
> float_subtype_new  in floatobject.c
> complex_subtype_from_c_complex in complexobject.c
> 
> The former uses type->tp_alloc(type, 0) to create memory for the object
> while the latter uses PyType_GenericAlloc(type, 0) to create memory for
> the sub-type (thereby by-passing the sub-type's own memory allocator).
> 
> It seems like this is a bug.   Shouldn't type->tp_alloc(type, 0) also be
> used in the case of allocating complex subtypes?

I submitted an entry and a patch for this on SourceForge Tracker (#1498638)

http://sourceforge.net/tracker/index.php?func=detail&aid=1498638&group_id=5470&atid=105470


-Travis


From nnorwitz at gmail.com  Thu Jun  1 07:11:58 2006
From: nnorwitz at gmail.com (Neal Norwitz)
Date: Wed, 31 May 2006 22:11:58 -0700
Subject: [Python-Dev] test_struct failure on 64 bit platforms
In-Reply-To: <1f7befae0605310605l71cddd63hecba38dc92478895@mail.gmail.com>
References: <ee2a432c0605310049r16ffb3f3j1a7c5b0ceba31401@mail.gmail.com>
	<4ED0CBC7-B364-4C6C-9E59-5A476582737D@redivi.com>
	<ee2a432c0605310204v2baebb1w38a894b8dbda3b1@mail.gmail.com>
	<1f7befae0605310605l71cddd63hecba38dc92478895@mail.gmail.com>
Message-ID: <ee2a432c0605312211y46043f07mb9363f4e7d9c2f2c@mail.gmail.com>

On 5/31/06, Tim Peters <tim.peters at gmail.com> wrote:
>
> "standard" is a technical word with precise meaning here, and is
> defined by the struct module docs, in contrast to "native".  It means
> whatever they say it means :-)  "Portable" may have been a more
> intuitive word than "standard" here -- read "standard" in the struct
> context in the sense of "standardized, regardless of native platform
> size or alignment or endian quirks".

:-)

> > Would someone augment the warnings module to make testing
> > more reasonable?
>
> What's required?  I know of two things:
>
> 1. There's no advertised way to save+restore the internal
>    filter list, or to remove a filter entry, so tests that want
>    to make a temporary change have to break into the internals.
>
> 2. There's no advertised way to disable "only gripe once per source
>    line" behavior.  This gets in the way of testing that warnings get
>    raised when running tests more than once, or using a common
>    function to raise warnings from multiple call sites.
>
> These get in the way of Zope and ZODB testing too, BTW.
> Unfortunately, looks like the new test_struct code bumped into both of
> them at once.

Right.  The two you list above are the only ones I know of.

You fixed one of them.  I find the __warningregistry__ fix extremely
obscure.  I remember working on it wrt test_warnings (and -R maybe?).  I
don't think I fixed it; someone else eventually figured it out, probably
you. :-)
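
For reference, the sort of helpers such tests end up hand-rolling today
(a rough sketch that pokes at warnings internals; the names are made up):

    import warnings

    def save_filters():
        # snapshot the internal filter list so a test can restore it later
        return warnings.filters[:]

    def restore_filters(saved):
        warnings.filters[:] = saved

    def reset_warning_registry(module):
        # clear the per-module cache behind "only gripe once per source
        # line", so a repeated test run sees the warning again
        if hasattr(module, '__warningregistry__'):
            module.__warningregistry__.clear()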

n

From nnorwitz at gmail.com  Thu Jun  1 08:20:40 2006
From: nnorwitz at gmail.com (Neal Norwitz)
Date: Wed, 31 May 2006 23:20:40 -0700
Subject: [Python-Dev] string inconsistency
Message-ID: <ee2a432c0605312320l4c7f1145q93fd19610bbcf591@mail.gmail.com>

This is still in Lib/test/string_tests.py:

        #EQ("A", "", "replace", "", "A")
        # That was the correct result; this is the result we actually get
        # now (for str, but not for unicode):
        #EQ("", "", "replace", "", "A")

Is this going to be fixed?

n

From michele.simionato at gmail.com  Thu Jun  1 09:14:16 2006
From: michele.simionato at gmail.com (Michele Simionato)
Date: Thu, 1 Jun 2006 07:14:16 +0000 (UTC)
Subject: [Python-Dev] feature request: inspect.isgenerator
References: <loom.20060529T092512-825@post.gmane.org>
	<e5fidd$aod$1@sea.gmane.org> <e5fjnq$f8j$1@sea.gmane.org>
	<e5fkdh$gps$1@sea.gmane.org>
	<ee2a432c0605292148l1acd5d87t903261106b3eb7c9@mail.gmail.com>
Message-ID: <loom.20060601T090603-423@post.gmane.org>

Neal Norwitz <nnorwitz <at> gmail.com> writes:
> 
> > I wonder whether a check shouldn't just return (co_flags & 0x20), which
> > is CO_GENERATOR.
> 
> Makes more sense.

Okay, but my point is that the end user should not be expected to know
about those implementation details. The one obvious way, to me, is to have an
inspect.isgenerator, analogous to inspect.isfunction, inspect.ismethod, etc.
The typical use case is in writing a documentation/debugging tool. In my
case, I was writing a decorator that needed to discriminate between
decorating a regular function and decorating a generator.
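
For concreteness, the kind of check I have in mind -- a rough sketch
only, using the CO_GENERATOR flag mentioned above (the name and exact
spelling are of course open):

    import inspect

    CO_GENERATOR = 0x20   # the code flag discussed earlier in the thread

    def isgenerator(obj):
        # hypothetical inspect.isgenerator(): true for functions whose
        # body contains a yield, i.e. functions compiled as generators
        return bool(inspect.isfunction(obj) and
                    obj.func_code.co_flags & CO_GENERATOR)

    def g(): yield None
    def f(): return None

    assert isgenerator(g) and not isgenerator(f)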

    Michele Simionato 


From g.brandl at gmx.net  Thu Jun  1 09:28:54 2006
From: g.brandl at gmx.net (Georg Brandl)
Date: Thu, 01 Jun 2006 09:28:54 +0200
Subject: [Python-Dev] feature request: inspect.isgenerator
In-Reply-To: <loom.20060601T090603-423@post.gmane.org>
References: <loom.20060529T092512-825@post.gmane.org>	<e5fidd$aod$1@sea.gmane.org>
	<e5fjnq$f8j$1@sea.gmane.org>	<e5fkdh$gps$1@sea.gmane.org>	<ee2a432c0605292148l1acd5d87t903261106b3eb7c9@mail.gmail.com>
	<loom.20060601T090603-423@post.gmane.org>
Message-ID: <e5m4vm$me8$1@sea.gmane.org>

Michele Simionato wrote:
> Neal Norwitz <nnorwitz <at> gmail.com> writes:
>> 
>> > I wonder whether a check shouldn't just return (co_flags & 0x20), which
>> > is CO_GENERATOR.
>> 
>> Makes more sense.
> 
> Okay, but my point is that the final user should not be expected to know
> about those implementation details. The one obvious way to me is to have an
> inspect.isgenerator, analogous to inspect.isfunction, inspect.ismethod, etc.
> The typical use case is in writing a documentation/debugging tool. Now I
> was writing a decorator that needed to discriminate in the case it was
> decorating a regular function versus a generator. 

I'd say, go ahead and write a patch including docs, and I think there's no
problem with accepting it (as long as it comes before beta1).

Georg


From tjreedy at udel.edu  Thu Jun  1 10:50:27 2006
From: tjreedy at udel.edu (Terry Reedy)
Date: Thu, 1 Jun 2006 04:50:27 -0400
Subject: [Python-Dev] feature request: inspect.isgenerator
References: <loom.20060529T092512-825@post.gmane.org><e5fidd$aod$1@sea.gmane.org>
	<e5fjnq$f8j$1@sea.gmane.org><e5fkdh$gps$1@sea.gmane.org><ee2a432c0605292148l1acd5d87t903261106b3eb7c9@mail.gmail.com>
	<loom.20060601T090603-423@post.gmane.org>
Message-ID: <e5m9pk$cgl$1@sea.gmane.org>


"Michele Simionato" <michele.simionato at gmail.com> wrote in message 
news:loom.20060601T090603-423 at post.gmane.org...
> Neal Norwitz <nnorwitz <at> gmail.com> writes:
>>
>> > I wonder whether a check shouldn't just return (co_flags & 0x20), 
>> > which
>> > is CO_GENERATOR.
>>
>> Makes more sense.
>
> Okay, but my point is that the final user should not be expected to know
> about those implementation details. The one obvious way to me is to have 
> an
> inspect.isgenerator, analogous to inspect.isfunction, inspect.ismethod, 
> etc.
> The typical use case is in writing a documentation/debugging tool. Now I
> was writing a decorator that needed to discriminate in the case it was
> decorating a regular function versus a generator.

To me, another obvious way is isinstance(object, gentype) where
gentype = type(i for i in []) # for instance
which should also be in the types module.
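
Spelled out (just a sketch):

    gentype = type(i for i in [])    # the generator(-iterator) type

    def g(): yield 1

    assert isinstance(g(), gentype)                # a generator matches
    assert isinstance((x for x in "ab"), gentype)  # so does a genexp result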

tjr




From michele.simionato at gmail.com  Thu Jun  1 11:26:15 2006
From: michele.simionato at gmail.com (Michele Simionato)
Date: Thu, 1 Jun 2006 09:26:15 +0000 (UTC)
Subject: [Python-Dev] feature request: inspect.isgenerator
References: <loom.20060529T092512-825@post.gmane.org><e5fidd$aod$1@sea.gmane.org>
	<e5fjnq$f8j$1@sea.gmane.org><e5fkdh$gps$1@sea.gmane.org><ee2a432c0605292148l1acd5d87t903261106b3eb7c9@mail.gmail.com>
	<loom.20060601T090603-423@post.gmane.org>
	<e5m9pk$cgl$1@sea.gmane.org>
Message-ID: <loom.20060601T112347-75@post.gmane.org>

Terry Reedy <tjreedy <at> udel.edu> writes:
> To me, another obvious way is isinstance(object, gentype) where
> gentype = type(i for i in []) # for instance
> which should also be in types module.

No, that check would match generator objects, not generators tout court.
On a related note, inspect.isfunction gives True on a generator, such
as

def g(): yield None

This could confuse people; however, I am inclined to leave things as they are.
Any thoughts?

     Michele Simionato


From michele.simionato at gmail.com  Thu Jun  1 13:47:54 2006
From: michele.simionato at gmail.com (Michele Simionato)
Date: Thu, 1 Jun 2006 11:47:54 +0000 (UTC)
Subject: [Python-Dev] feature request: inspect.isgenerator
References: <loom.20060529T092512-825@post.gmane.org>	<e5fidd$aod$1@sea.gmane.org>
	<e5fjnq$f8j$1@sea.gmane.org>	<e5fkdh$gps$1@sea.gmane.org>	<ee2a432c0605292148l1acd5d87t903261106b3eb7c9@mail.gmail.com>
	<loom.20060601T090603-423@post.gmane.org>
	<e5m4vm$me8$1@sea.gmane.org>
Message-ID: <loom.20060601T134221-536@post.gmane.org>

Georg Brandl <g.brandl <at> gmx.net> writes:

> I'd say, go ahead and write a patch including docs, and I think there's no
> problem with accepting it (as long as it comes before beta1).

I was having a look at http://docs.python.org/dev/lib/inspect-types.html
and it would seem that adding isgenerator would imply changing
inspect.getmembers() and its documentation. Also, should one add
a GeneratorType, perhaps as a subclass of FunctionType?


From g.brandl at gmx.net  Thu Jun  1 14:20:45 2006
From: g.brandl at gmx.net (Georg Brandl)
Date: Thu, 01 Jun 2006 14:20:45 +0200
Subject: [Python-Dev] feature request: inspect.isgenerator
In-Reply-To: <loom.20060601T134221-536@post.gmane.org>
References: <loom.20060529T092512-825@post.gmane.org>	<e5fidd$aod$1@sea.gmane.org>	<e5fjnq$f8j$1@sea.gmane.org>	<e5fkdh$gps$1@sea.gmane.org>	<ee2a432c0605292148l1acd5d87t903261106b3eb7c9@mail.gmail.com>	<loom.20060601T090603-423@post.gmane.org>	<e5m4vm$me8$1@sea.gmane.org>
	<loom.20060601T134221-536@post.gmane.org>
Message-ID: <e5mm2m$omn$1@sea.gmane.org>

Michele Simionato wrote:
> Georg Brandl <g.brandl <at> gmx.net> writes:
> 
>> I'd say, go ahead and write a patch including docs, and I think there's no
>> problem with accepting it (as long as it comes before beta1).
> 
> I was having a look at http://docs.python.org/dev/lib/inspect-types.html
> and it would seem that adding isgenerator would imply changing
> inspect.getmembers() and its documentation.

Yep.

> Also, should one add
> a GeneratorType, perhaps as a subclass of FunctionType?

Add GeneratorType where? There is already one in the types module.

Georg


From michele.simionato at gmail.com  Thu Jun  1 15:02:20 2006
From: michele.simionato at gmail.com (Michele Simionato)
Date: Thu, 1 Jun 2006 13:02:20 +0000 (UTC)
Subject: [Python-Dev] feature request: inspect.isgenerator
References: <loom.20060529T092512-825@post.gmane.org>	<e5fidd$aod$1@sea.gmane.org>	<e5fjnq$f8j$1@sea.gmane.org>	<e5fkdh$gps$1@sea.gmane.org>	<ee2a432c0605292148l1acd5d87t903261106b3eb7c9@mail.gmail.com>	<loom.20060601T090603-423@post.gmane.org>	<e5m4vm$me8$1@sea.gmane.org>
	<loom.20060601T134221-536@post.gmane.org>
	<e5mm2m$omn$1@sea.gmane.org>
Message-ID: <loom.20060601T145807-833@post.gmane.org>

Georg Brandl <g.brandl <at> gmx.net> writes:
> 
> > Also, should one add
> > a GeneratorType, perhaps as a subclass of FunctionType?
> 
> Add GeneratorType where? There is already one in the types module.

Yep, this is the crux. types.GeneratorType refers to generator objects,
which in an improper sense are "instances" of a "generator function".
I.e.

def g(): yield 1 # this is a generator

go = g() # this is a generator object

I want isgenerator(g) == True, but isgenerator(go) == False.

So, what should be the class of g?  Maybe we can keep FunctionType
and not bother.

         Michele Simionato






From g.brandl at gmx.net  Thu Jun  1 15:06:31 2006
From: g.brandl at gmx.net (Georg Brandl)
Date: Thu, 01 Jun 2006 15:06:31 +0200
Subject: [Python-Dev] feature request: inspect.isgenerator
In-Reply-To: <loom.20060601T145807-833@post.gmane.org>
References: <loom.20060529T092512-825@post.gmane.org>	<e5fidd$aod$1@sea.gmane.org>	<e5fjnq$f8j$1@sea.gmane.org>	<e5fkdh$gps$1@sea.gmane.org>	<ee2a432c0605292148l1acd5d87t903261106b3eb7c9@mail.gmail.com>	<loom.20060601T090603-423@post.gmane.org>	<e5m4vm$me8$1@sea.gmane.org>	<loom.20060601T134221-536@post.gmane.org>	<e5mm2m$omn$1@sea.gmane.org>
	<loom.20060601T145807-833@post.gmane.org>
Message-ID: <e5moon$271$2@sea.gmane.org>

Michele Simionato wrote:
> Georg Brandl <g.brandl <at> gmx.net> writes:
>> 
>> > Also, should one add
>> > a GeneratorType, perhaps as a subclass of FunctionType?
>> 
>> Add GeneratorType where? There is already one in the types module.
> 
> Yep, this is the crux. types.GeneratorType refers to generator objects,
> which in an improper sense are "instances" of a "generator function".
> I.e.
> 
> def g(): yield 1 # this is a generator
> 
> go = g() # this is a generator object
> 
> I want isgenerator(g) == True, but isgenerator(go) == False.

Ah, ok. But then I would name the function differently, perhaps
isgeneratorfunc().

> So, what should be the class of g ? Maybe we can keep FunctionType
> and don't bother.

I would say, keep FunctionType. There's no real need to know the exact
type except for inspecting, and for that, the new function in inspect
can be used.

Georg


From tim.peters at gmail.com  Thu Jun  1 15:59:30 2006
From: tim.peters at gmail.com (Tim Peters)
Date: Thu, 1 Jun 2006 09:59:30 -0400
Subject: [Python-Dev] string inconsistency
In-Reply-To: <ee2a432c0605312320l4c7f1145q93fd19610bbcf591@mail.gmail.com>
References: <ee2a432c0605312320l4c7f1145q93fd19610bbcf591@mail.gmail.com>
Message-ID: <1f7befae0606010659t697ce9f3n5cdcc66a05bc421a@mail.gmail.com>

[Neal]
> This is still in Lib/test/string_tests.py:
>
>         #EQ("A", "", "replace", "", "A")
>         # That was the correct result; this is the result we actually get
>         # now (for str, but not for unicode):
>         #EQ("", "", "replace", "", "A")
>
> Is this going to be fixed?

Done.  I had to comment out that new test during the NFS sprint
because the str and unicode implementations gave different results (a
pre-existing bug discovered during the sprint).  str.replace() was
repaired later during the sprint, but the new test remained commented
out.
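
For the record, the behaviour the re-enabled test checks, spelled out
(EQ(result, obj, method, *args) asserts obj.method(*args) == result;
just a sketch of the expected semantics):

>>> "".replace("", "A")    # an empty string has one empty substring, at 0:0
'A'
>>> u"".replace(u"", u"A")
u'A'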

From g.brandl at gmx.net  Thu Jun  1 17:47:40 2006
From: g.brandl at gmx.net (Georg Brandl)
Date: Thu, 01 Jun 2006 17:47:40 +0200
Subject: [Python-Dev] S/390 buildbot URLs problematic
Message-ID: <e5n26s$bt2$1@sea.gmane.org>

The S/390 buildbot should be renamed. While the URLs
buildbot generates in its email messages work, the
ones that are on the overview page don't.

Georg


From tjreedy at udel.edu  Thu Jun  1 17:48:35 2006
From: tjreedy at udel.edu (Terry Reedy)
Date: Thu, 1 Jun 2006 11:48:35 -0400
Subject: [Python-Dev] feature request: inspect.isgenerator
References: <loom.20060529T092512-825@post.gmane.org><e5fidd$aod$1@sea.gmane.org><e5fjnq$f8j$1@sea.gmane.org><e5fkdh$gps$1@sea.gmane.org><ee2a432c0605292148l1acd5d87t903261106b3eb7c9@mail.gmail.com><loom.20060601T090603-423@post.gmane.org><e5m9pk$cgl$1@sea.gmane.org>
	<loom.20060601T112347-75@post.gmane.org>
Message-ID: <e5n28j$cld$1@sea.gmane.org>


"Michele Simionato" <michele.simionato at gmail.com> wrote in message 
news:loom.20060601T112347-75 at post.gmane.org...
> No, that check would match generator objects, not generators tout court.

"tout court"?? That is not English, nor commonly used, at least in America.

> On a related notes, inspect.isfunction gives True on a generator, such
> as
>
> def g(): yield None

Ok, you mean a generator function, which produces generators, not generators
themselves.  So what you want is a new isgenfunc function.  That makes more
sense, in a sense, since I can see that you would want to wrap genfuncs 
differently from regular funcs.  But then I wonder why you don't use a 
different decorator since you know when you are writing a generator 
function.

tjr





From tjreedy at udel.edu  Thu Jun  1 17:57:15 2006
From: tjreedy at udel.edu (Terry Reedy)
Date: Thu, 1 Jun 2006 11:57:15 -0400
Subject: [Python-Dev] feature request: inspect.isgenerator
References: <loom.20060529T092512-825@post.gmane.org>	<e5fidd$aod$1@sea.gmane.org>	<e5fjnq$f8j$1@sea.gmane.org>	<e5fkdh$gps$1@sea.gmane.org>	<ee2a432c0605292148l1acd5d87t903261106b3eb7c9@mail.gmail.com>	<loom.20060601T090603-423@post.gmane.org>	<e5m4vm$me8$1@sea.gmane.org><loom.20060601T134221-536@post.gmane.org><e5mm2m$omn$1@sea.gmane.org>
	<loom.20060601T145807-833@post.gmane.org>
Message-ID: <e5n2or$ei8$1@sea.gmane.org>


"Michele Simionato" <michele.simionato at gmail.com> wrote in message 
news:loom.20060601T145807-833 at post.gmane.org...

> Yep, this is the crux. types.GeneratorType refers to generator objects,
> which in an improper sense are "instances" of a "generator function".
> I.e.
>
> def g(): yield 1 # this is a generator
>
> go = g() # this is a generator object

That terminology does not work.  For every other type, an x is an x object 
and vice versa.  I think most of us call a function which returns generators
a generator function when the distinction needs to be made.  A generator is
a type in the conceptual class 'iterator'.

Terry Jan Reedy




From pje at telecommunity.com  Thu Jun  1 18:07:09 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Thu, 01 Jun 2006 12:07:09 -0400
Subject: [Python-Dev] feature request: inspect.isgenerator
In-Reply-To: <loom.20060601T112347-75@post.gmane.org>
References: <loom.20060529T092512-825@post.gmane.org>
	<e5fidd$aod$1@sea.gmane.org> <e5fjnq$f8j$1@sea.gmane.org>
	<e5fkdh$gps$1@sea.gmane.org>
	<ee2a432c0605292148l1acd5d87t903261106b3eb7c9@mail.gmail.com>
	<loom.20060601T090603-423@post.gmane.org>
	<e5m9pk$cgl$1@sea.gmane.org>
Message-ID: <5.1.1.6.0.20060601115712.03c01008@mail.telecommunity.com>

At 09:26 AM 6/1/2006 +0000, Michele Simionato wrote:
>Terry Reedy <tjreedy <at> udel.edu> writes:
> > To me, another obvious way is isinstance(object, gentype) where
> > gentype = type(i for i in []) # for instance
> > which should also be in types module.
>
>No, that check would match generator objects, not generators tout court.
>On a related notes, inspect.isfunction gives True on a generator, such
>as
>
>def g(): yield None
>
>This could confuse people, however I am inclined to leave things as they are.
>Any thoughts?

Yes, I think the whole concept of inspecting for this is broken.  *Any* 
function can return a generator-iterator.  A generator function is just a 
function that happens to always return one.

In other words, the confusion is in the idea of introspecting for this in 
the first place, not that generator functions are of FunctionType.  The 
best way to avoid the confusion is to avoid thinking that you can 
distinguish one type of function from another without explicit guidance 
from the function's author.

I'm -0 on having an isgenfunc(), but -1 on changing isfunction.  +1 on 
making the code flags available.  -1 on changing any other inspect stuff to 
handle generators specially.  They are not special and should not be 
treated specially - they are just functions that happen to always return 
generator-iterators -- and that is an *implementation detail* of the 
function.  Pushing that information out to introspection or doc is a bad 
idea in the general case.


From jack at performancedrivers.com  Thu Jun  1 18:21:34 2006
From: jack at performancedrivers.com (Jack Diederich)
Date: Thu, 1 Jun 2006 12:21:34 -0400
Subject: [Python-Dev] Let's stop eating exceptions in dict lookup
In-Reply-To: <1f7befae0605311810i5996980dq313a8cae1725cf25@mail.gmail.com>
References: <20060529171147.GA1717@code0.codespeak.net>
	<008f01c68354$fc01dd70$dc00000a@RaymondLaptop1>
	<20060529195453.GB14908@code0.codespeak.net>
	<00de01c68363$308d37c0$dc00000a@RaymondLaptop1>
	<20060529213428.GA20141@code0.codespeak.net>
	<447B7621.3040100@ewtllc.com>
	<8393fff0605301832s1776723dnfc63620631e7c5fd@mail.gmail.com>
	<1f7befae0605311810i5996980dq313a8cae1725cf25@mail.gmail.com>
Message-ID: <20060601162134.GB5802@performancedrivers.com>

On Wed, May 31, 2006 at 09:10:47PM -0400, Tim Peters wrote:
> [Martin Blais]
> > I'm still looking for a benchmark that is not amazingly uninformative
> > and crappy.  I've been looking around all day, I even looked under the
> > bed, I cannot find it.  I've also been looking around all day as well,
> > even looked for it shooting out of the Iceland geysirs, of all
> > places--it's always all day out here it seems, day and day-- and I
> > still can't find it.  (In the process however, I found Thule beer and
> > strangely dressed vikings, which makes it all worthwhile.)
> 
> For those who don't know, Martin stayed on in Iceland after the NFS
> sprint.  He shows clear signs above of developing photon madness.
> 
>     http://www.timeanddate.com/worldclock/astronomy.html?n=211
> 
> Where that says "sunset", don't read "dark" -- it just means the top
> of the sun dips a bit below the horizon for a few hours.  It never
> gets dark this time of year.
> 
> If you haven't experienced this, no explanation can convey the
> other-worldly sense of it.

The CCP Games CEO said they have trouble retaining talent from more
moderate latitudes for this reason.  18 hours of daylight makes them a
bit goofy and when the Winter Solstice rolls around they are apt to go
quite mad.

-Jack

From mcherm at mcherm.com  Thu Jun  1 19:18:49 2006
From: mcherm at mcherm.com (Michael Chermside)
Date: Thu, 01 Jun 2006 10:18:49 -0700
Subject: [Python-Dev] feature request: inspect.isgenerator
Message-ID: <20060601101849.4sscuh0gpi0cg0ww@login.werra.lunarpages.com>

Phillip J. Eby writes:
> Yes, I think the whole concept of inspecting for this is broken.   
> *Any* function can return a generator-iterator.  A generator  
> function is just a function that happens to always return one.

Just following up on Phillip's comments, consider the following functions:


     def foo(x):
         while still_going(x):
            yield some_func(x)

     def bar(x):
         while still_going(x):
             yield other_func(x)

     def foo_or_bar(x):
         if some_condition(x):
             return foo(x)
         else:
             return bar(x)

I presume that Michele's proposal is that inspect.isgenerator() (or
perhaps "inspect.isgenfunc()") would return True for "foo" and "bar"
but false for "foo_or_bar". Can you give a single use case for which
that distinction is desirable?

-- Michael Chermside


From anthony at interlink.com.au  Fri Jun  2 01:36:03 2006
From: anthony at interlink.com.au (Anthony Baxter)
Date: Fri, 2 Jun 2006 09:36:03 +1000
Subject: [Python-Dev] Let's stop eating exceptions in dict lookup
In-Reply-To: <20060601162134.GB5802@performancedrivers.com>
References: <20060529171147.GA1717@code0.codespeak.net>
	<1f7befae0605311810i5996980dq313a8cae1725cf25@mail.gmail.com>
	<20060601162134.GB5802@performancedrivers.com>
Message-ID: <200606020936.08610.anthony@interlink.com.au>

On Friday 02 June 2006 02:21, Jack Diederich wrote:
> The CCP Games CEO said they have trouble retaining talent from more
> moderate latitudes for this reason.  18 hours of daylight makes
> them a bit goofy and when the Winter Solstice rolls around they are
> apt to go quite mad.

Obviously they need to hire people who are already crazy.

not-naming-any-names-ly,
Anthony

-- 
Anthony Baxter     <anthony at interlink.com.au>
It's never too late to have a happy childhood.

From pje at telecommunity.com  Fri Jun  2 05:11:35 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Thu, 01 Jun 2006 23:11:35 -0400
Subject: [Python-Dev] SF patch #1473257: "Add a gi_code attr to
	generators"
In-Reply-To: <43aa6ff70605311253t365c2e90g599ae191e9a525e9@mail.gmail.com>
Message-ID: <5.1.1.6.0.20060601230800.01f286a8@mail.telecommunity.com>

At 09:53 PM 5/31/2006 +0200, Collin Winter wrote:
>Hi Phillip,
>
>Do you have any opinion on this patch (http://python.org/sf/1473257),
>which is assigned to you?

I didn't know it was assigned to me.  I guess SF doesn't send any 
notifications, and neither did Georg, so your email is the very first time 
that I've heard of it.

I don't have any opinion, but perhaps Python-Dev does?


From fredrik at pythonware.com  Fri Jun  2 05:10:38 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Fri, 02 Jun 2006 05:10:38 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <447DE055.4040105@egenix.com>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>	<17533.48558.364179.717911@montanaro.dyndns.org>	<447DC377.4000504@egenix.com>	<17533.51096.462552.451772@montanaro.dyndns.org>	<447DD153.8080202@egenix.com>	<e5kkr6$14a$1@sea.gmane.org>
	<447DE055.4040105@egenix.com>
Message-ID: <e5oa7b$kvl$1@sea.gmane.org>

M.-A. Lemburg wrote:

> Seriously, I've been using and running pybench for years
> and even though tweaks to the interpreter do sometimes
> result in speedups or slow-downs where you wouldn't expect
> them (due to the interpreter using the Python objects),
> they are reproducable and often enough have uncovered
> that optimizations in one area may well result in slow-downs
> in other areas.

 > Often enough the results are related to low-level features
 > of the architecture you're using to run the code such as
 > cache size, cache lines, number of registers in the CPU or
 > on the FPU stack, etc. etc.

and that observation has never made you stop and think about whether 
there might be some problem with the benchmarking approach you're using? 
  after all, if a change to e.g. the try/except code slows things down 
or speed things up, is it really reasonable to expect that the time it 
takes to convert Unicode strings to uppercase should suddenly change due 
to cache effects or a changing number of registers in the CPU?  real 
hardware doesn't work that way...

is PyBench perhaps using the following approach:

     T = set of tests
     for N in range(number of test runs):
         for t in T:
             t0 = get_process_time()
             t()
             t1 = get_process_time()
             assign t1 - t0 to test t
             print assigned time

where t1 - t0 is very short?

that's not a very good idea, given how get_process_time tends to be 
implemented on current-era systems (google for "jiffies")...  but it 
definitely explains the bogus subtest results I'm seeing, and the "magic 
hardware" behaviour you're seeing.

</F>


From nnorwitz at gmail.com  Fri Jun  2 05:31:01 2006
From: nnorwitz at gmail.com (Neal Norwitz)
Date: Thu, 1 Jun 2006 20:31:01 -0700
Subject: [Python-Dev] Removing Mac OS 9 cruft
Message-ID: <ee2a432c0606012031u7c35c50dxf8623ce4633510bf@mail.gmail.com>

I was about to remove Mac/IDE scripts, but it looks like there might
be more stuff that is OS 9 related and should be removed.  Other
possibilities look like (everything under Mac/):

  Demo/*  this is a bit more speculative
  IDE scripts/*
  MPW/*
  Tools/IDE/*  this references IDE scripts, so presumably it should be toast?
  Tools/macfreeze/*
  Unsupported/mactcp/dnrglue.c
  Wastemods/*

I'm going mostly based on what has been modified somewhat recently.
Can someone confirm/reject these?  I'll remove them.

n

From guido at python.org  Fri Jun  2 05:58:29 2006
From: guido at python.org (Guido van Rossum)
Date: Thu, 1 Jun 2006 20:58:29 -0700
Subject: [Python-Dev] SF patch #1473257: "Add a gi_code attr to
	generators"
In-Reply-To: <5.1.1.6.0.20060601230800.01f286a8@mail.telecommunity.com>
References: <5.1.1.6.0.20060601230800.01f286a8@mail.telecommunity.com>
Message-ID: <ca471dc20606012058h563167b5p56515146fd68f20d@mail.gmail.com>

On 6/1/06, Phillip J. Eby <pje at telecommunity.com> wrote:
> I didn't know it was assigned to me.  I guess SF doesn't send any
> notifications, and neither did Georg, so your email is the very first time
> that I've heard of it.

This is a longstanding SF bug. (One of the reasons why we should move
away from it ASAP IMO.)

While we're still using SF, developers should probably get in the
habit of sending an email to the assignee when assigning a bug...

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From ronaldoussoren at mac.com  Fri Jun  2 06:44:55 2006
From: ronaldoussoren at mac.com (Ronald Oussoren)
Date: Fri, 2 Jun 2006 06:44:55 +0200
Subject: [Python-Dev] Removing Mac OS 9 cruft
In-Reply-To: <ee2a432c0606012031u7c35c50dxf8623ce4633510bf@mail.gmail.com>
References: <ee2a432c0606012031u7c35c50dxf8623ce4633510bf@mail.gmail.com>
Message-ID: <2BE4984D-CC79-4F47-B28F-DE32DD5D1478@mac.com>


On 2-jun-2006, at 5:31, Neal Norwitz wrote:

> I was about to remove Mac/IDE scripts, but it looks like there might
> be more stuff that is OS 9 related and should be removed.  Other
> possibilities look like (everything under Mac/):
>
>   Demo/*  this is a bit more speculative
>   IDE scripts/*
>   MPW/*
>   Tools/IDE/*  this references IDE scripts, so presumably it should  
> be toast?
>   Tools/macfreeze/*
>   Unsupported/mactcp/dnrglue.c
>   Wastemods/*
>
> I'm going mostly based on what has been modified somewhat recently.
> Can someone confirm/reject these?  I'll remove them.

Demo is still needed; it contains example code. Some of it is stale,
but not all. IMHO Modules/wastemodule.c and the demos for it are also
toast. I don't include support for waste in the universal binaries and
so far nobody has complained. There's also no universal binary of the
version of waste we need.

I'll be working on the structure of the Mac tree this weekend; besides
removing cruft like the stuff you mention, I want to move the files in
Mac/OSX one level up in the hierarchy.  Well, actually I wasn't planning
on removing stuff, just moving it to Mac/Unsupported, but that's because
I'm a wuss ;-).  Feel free to remove these files (except Demo) and feel
the satisfaction of removing unnecessary code.

BTW. Bgen is a lot more challenging than the removal of old cruft. A  
lot of the modules in Mac/Modules are generated using bgen. Sadly  
enough they are/were generated from the OS9 SDK header files instead  
of the current system header files. This makes updating these modules  
harder than necessary. I'd like to fix this before 2.5b1 is out, but  
don't know if I'll be able to understand bgen well enough to actually
make that deadline.

Ronald
>
> n
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/ 
> ronaldoussoren%40mac.com


From nnorwitz at gmail.com  Fri Jun  2 07:35:41 2006
From: nnorwitz at gmail.com (Neal Norwitz)
Date: Thu, 1 Jun 2006 22:35:41 -0700
Subject: [Python-Dev] test_unicode failure on MIPS
Message-ID: <ee2a432c0606012235m1bfb8b12x367e87d4fb72436c@mail.gmail.com>

Any ideas?

http://www.python.org/dev/buildbot/all/MIPS%20Debian%20trunk/builds/176/step-test/0

======================================================================
FAIL: test_count (test.test_unicode.UnicodeTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/pybot/buildarea/trunk.klose-debian-mips/build/Lib/test/test_unicode.py",
line 97, in test_count
    string_tests.CommonTest.test_count(self)
  File "/home/pybot/buildarea/trunk.klose-debian-mips/build/Lib/test/string_tests.py",
line 119, in test_count
    self.checkequal(0, 'aaa', 'count', '', 10)
  File "/home/pybot/buildarea/trunk.klose-debian-mips/build/Lib/test/string_tests.py",
line 56, in checkequal
    realresult
AssertionError: 0 != 1

======================================================================
FAIL: test_rfind (test.test_unicode.UnicodeTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/pybot/buildarea/trunk.klose-debian-mips/build/Lib/test/test_unicode.py",
line 118, in test_rfind
    string_tests.CommonTest.test_rfind(self)
  File "/home/pybot/buildarea/trunk.klose-debian-mips/build/Lib/test/string_tests.py",
line 198, in test_rfind
    self.checkequal(-1, 'abc', 'rfind', '', 4)
  File "/home/pybot/buildarea/trunk.klose-debian-mips/build/Lib/test/string_tests.py",
line 56, in checkequal
    realresult
AssertionError: -1 != 3

From g.brandl at gmx.net  Fri Jun  2 07:47:14 2006
From: g.brandl at gmx.net (Georg Brandl)
Date: Fri, 02 Jun 2006 07:47:14 +0200
Subject: [Python-Dev] SF patch #1473257: "Add a gi_code attr to
	generators"
In-Reply-To: <5.1.1.6.0.20060601230800.01f286a8@mail.telecommunity.com>
References: <43aa6ff70605311253t365c2e90g599ae191e9a525e9@mail.gmail.com>
	<5.1.1.6.0.20060601230800.01f286a8@mail.telecommunity.com>
Message-ID: <e5ojd2$i2e$1@sea.gmane.org>

Phillip J. Eby wrote:
> At 09:53 PM 5/31/2006 +0200, Collin Winter wrote:
>>Hi Phillip,
>>
>>Do you have any opinion on this patch (http://python.org/sf/1473257),
>>which is assigned to you?
> 
> I didn't know it was assigned to me.  I guess SF doesn't send any 
> notifications, and neither did Georg, so your email is the very first time 
> that I've heard of it.

BTW, there's another one: #1483133.

Georg


From fredrik at pythonware.com  Fri Jun  2 08:03:53 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Fri, 02 Jun 2006 08:03:53 +0200
Subject: [Python-Dev] test_unicode failure on MIPS
In-Reply-To: <ee2a432c0606012235m1bfb8b12x367e87d4fb72436c@mail.gmail.com>
References: <ee2a432c0606012235m1bfb8b12x367e87d4fb72436c@mail.gmail.com>
Message-ID: <e5okc0$kbc$2@sea.gmane.org>

Neal Norwitz wrote:

> Any ideas?

this is a recent change, so it looks like the box simply didn't get
around to rebuilding the unicodeobject module.

(I'm beginning to wonder if I didn't forget to add some header file 
dependencies somewhere during the stringlib refactoring, but none of the 
other buildbots seem to have a problem with this...)

</F>


From nnorwitz at gmail.com  Fri Jun  2 08:29:40 2006
From: nnorwitz at gmail.com (Neal Norwitz)
Date: Thu, 1 Jun 2006 23:29:40 -0700
Subject: [Python-Dev] valgrind report
Message-ID: <ee2a432c0606012329n6288d84ao551d0d495ac18e76@mail.gmail.com>

Looks pretty good, except for 1 cjk problem:

test_codecencodings_jp

  Invalid read of size 1
     at 0x110AEBC3: shift_jis_2004_decode (_codecs_jp.c:642)
     by 0xBFCBDB7: mbidecoder_decode (multibytecodec.c:839)
   Address 0xAEC376B is 0 bytes after a block of size 3 alloc'd
     at 0x4A19B7E: malloc (vg_replace_malloc.c:149)
     by 0xBFCBF54: mbidecoder_decode (multibytecodec.c:1023)

n

From nnorwitz at gmail.com  Fri Jun  2 08:35:41 2006
From: nnorwitz at gmail.com (Neal Norwitz)
Date: Thu, 1 Jun 2006 23:35:41 -0700
Subject: [Python-Dev] test_unicode failure on MIPS
In-Reply-To: <e5okc0$kbc$2@sea.gmane.org>
References: <ee2a432c0606012235m1bfb8b12x367e87d4fb72436c@mail.gmail.com>
	<e5okc0$kbc$2@sea.gmane.org>
Message-ID: <ee2a432c0606012335q454aaaeco4a0fd984102fe2f1@mail.gmail.com>

On 6/1/06, Fredrik Lundh <fredrik at pythonware.com> wrote:
> Neal Norwitz wrote:
>
> > Any ideas?
>
> this is a recent change, so it looks like the box simply didn't get
> around to rebuild the unicodeobject module.

That shouldn't be.  make distclean should be called (it was make clean
until recently).  However,

http://www.python.org/dev/buildbot/all/MIPS%20Debian%20trunk/builds/176/step-compile/0

seems to indicate unicodeobject was in fact not built.  I also don't
see any previous record of any builds (or make cleans).  That
buildslave is new and it had some connectivity problems I think.  So
maybe something was whacky on it.

The current build (still running) definitely did compile
unicodeobject.  So let's wait and see if that finishes successfully.

n

From mal at egenix.com  Fri Jun  2 10:40:06 2006
From: mal at egenix.com (M.-A. Lemburg)
Date: Fri, 02 Jun 2006 10:40:06 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <e5oa7b$kvl$1@sea.gmane.org>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>	<17533.48558.364179.717911@montanaro.dyndns.org>	<447DC377.4000504@egenix.com>	<17533.51096.462552.451772@montanaro.dyndns.org>	<447DD153.8080202@egenix.com>	<e5kkr6$14a$1@sea.gmane.org>	<447DE055.4040105@egenix.com>
	<e5oa7b$kvl$1@sea.gmane.org>
Message-ID: <447FF966.5050807@egenix.com>

Fredrik Lundh wrote:
> M.-A. Lemburg wrote:
> 
>> Seriously, I've been using and running pybench for years
>> and even though tweaks to the interpreter do sometimes
>> result in speedups or slow-downs where you wouldn't expect
>> them (due to the interpreter using the Python objects),
>> they are reproducable and often enough have uncovered
>> that optimizations in one area may well result in slow-downs
>> in other areas.
> 
>  > Often enough the results are related to low-level features
>  > of the architecture you're using to run the code such as
>  > cache size, cache lines, number of registers in the CPU or
>  > on the FPU stack, etc. etc.
> 
> and that observation has never made you stop and think about whether 
> there might be some problem with the benchmarking approach you're using? 

The approach pybench is using is as follows:

* Run a calibration step which does the same as the actual
  test without the operation being tested (ie. call the
  function running the test, setup the for-loop, constant
  variables, etc.)

  The calibration step is run multiple times and is used
  to calculate an average test overhead time.

* Run the actual test which runs the operation multiple
  times.

  The test is then adjusted to make sure that the
  test overhead / test run ratio remains within
  reasonable bounds.

  If needed, the operation code is repeated verbatim in
  the for-loop, to decrease the ratio.

* Repeat the above for each test in the suite

* Repeat the suite N number of rounds

* Calculate the average run time of all test runs in all rounds.
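
In simplified form, the measurement loop looks roughly like this (a
sketch only; it leaves out the warp factor and the overhead-ratio
adjustment, and the real pybench code differs in detail):

    import time

    def measure(test, calibration, rounds, loops):
        overhead = []
        for i in range(rounds):
            t0 = time.clock()
            calibration(loops)          # same loop, without the operation
            overhead.append(time.clock() - t0)
        avg_overhead = sum(overhead) / len(overhead)

        times = []
        for i in range(rounds):
            t0 = time.clock()
            test(loops)                 # loop repeating the tested operation
            times.append(time.clock() - t0 - avg_overhead)
        return sum(times) / len(times)  # average per-round time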

>   after all, if a change to e.g. the try/except code slows things down 
> or speed things up, is it really reasonable to expect that the time it 
> takes to convert Unicode strings to uppercase should suddenly change due 
> to cache effects or a changing number of registers in the CPU?  real 
> hardware doesn't work that way...

Of course, but then changes to try-except logic can interfere
with the performance of setting up method calls. This is what
pybench then uncovers.

The only problem I see in the above approach is the way
calibration is done. The run-time of the calibration code
may be too small relative to the resolution of the timers used.

Again, please provide the parameters you've used to run the
test case and the output. Things like warp factor, overhead,
etc. could hint to the problem you're seeing.

> is PyBench perhaps using the following approach:
> 
>      T = set of tests
>      for N in range(number of test runs):
>          for t in T:
>              t0 = get_process_time()
>              t()
>              t1 = get_process_time()
>              assign t1 - t0 to test t
>              print assigned time
> 
> where t1 - t0 is very short?

See above (or the code in pybench.py). t1-t0 is usually
around 20-50 seconds:

"""
        The tests must set .rounds to a value high enough to let the
        test run between 20-50 seconds. This is needed because
        clock()-timing only gives rather inaccurate values (on Linux,
        for example, it is accurate to a few hundredths of a
        second). If you don't want to wait that long, use a warp
        factor larger than 1.
"""

> that's not a very good idea, given how get_process_time tends to be 
> implemented on current-era systems (google for "jiffies")...  but it 
> definitely explains the bogus subtest results I'm seeing, and the "magic 
> hardware" behaviour you're seeing.

That's exactly the reason why tests run for a relatively long
time - to minimize these effects. Of course, using wall time
makes this approach vulnerable to other effects such as the current
load of the system, other processes having a higher priority
interfering with the timed process, etc.

For this reason, I'm currently looking for ways to measure the
process time on Windows.

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 02 2006)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________
2006-07-03: EuroPython 2006, CERN, Switzerland              30 days left

::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! ::::

From fredrik at pythonware.com  Fri Jun  2 11:25:37 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Fri, 2 Jun 2006 11:25:37 +0200
Subject: [Python-Dev] Python Benchmarks
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>	<17533.48558.364179.717911@montanaro.dyndns.org>	<447DC377.4000504@egenix.com>	<17533.51096.462552.451772@montanaro.dyndns.org>	<447DD153.8080202@egenix.com>	<e5kkr6$14a$1@sea.gmane.org>	<447DE055.4040105@egenix.com><e5oa7b$kvl$1@sea.gmane.org>
	<447FF966.5050807@egenix.com>
Message-ID: <e5p06h$tp1$1@sea.gmane.org>

M.-A. Lemburg wrote:

> Of course, but then changes to try-except logic can interfere
> with the performance of setting up method calls. This is what
> pybench then uncovers.

I think the only thing PyBench has uncovered is that you're convinced that it's
always right, and everybody else is always wrong, including people who've
spent decades measuring performance, and the hardware in your own computer.

> See above (or the code in pybench.py). t1-t0 is usually
> around 20-50 seconds:

what machines are you using?  using the default parameters, the entire run takes
about 50 seconds on the slowest machine I could find...

>> that's not a very good idea, given how get_process_time tends to be
>> implemented on current-era systems (google for "jiffies")...  but it
>> definitely explains the bogus subtest results I'm seeing, and the "magic
>> hardware" behaviour you're seeing.
>
> That's exactly the reason why tests run for a relatively long
> time - to minimize these effects. Of course, using wall time
> makes this approach vulnerable to other effects such as current
> load of the system, other processes having a higher priority
> interfering with the timed process, etc.

since process time is *sampled*, not measured, process time isn't exactly in-
vulnerable either.  it's not hard to imagine scenarios where you end up being
assigned only a small part of the process time you're actually using, or cases
where you're assigned more time than you've had a chance to use.

afaik, if you want true performance counters on Linux, you need to patch the
operating system (unless something's changed in very recent versions).

I don't think that sampling errors can explain all the anomalies we've been seeing,
but I wouldn't be surprised if a high-resolution wall time clock on a lightly loaded
multiprocess system was, in practice, *more* reliable than sampled process time
on an equally loaded system.

</F> 




From mal at egenix.com  Fri Jun  2 12:26:02 2006
From: mal at egenix.com (M.-A. Lemburg)
Date: Fri, 02 Jun 2006 12:26:02 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <e5p06h$tp1$1@sea.gmane.org>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>	<17533.48558.364179.717911@montanaro.dyndns.org>	<447DC377.4000504@egenix.com>	<17533.51096.462552.451772@montanaro.dyndns.org>	<447DD153.8080202@egenix.com>	<e5kkr6$14a$1@sea.gmane.org>	<447DE055.4040105@egenix.com><e5oa7b$kvl$1@sea.gmane.org>	<447FF966.5050807@egenix.com>
	<e5p06h$tp1$1@sea.gmane.org>
Message-ID: <4480123A.1090109@egenix.com>

Fredrik Lundh wrote:
> M.-A. Lemburg wrote:
> 
>> Of course, but then changes to try-except logic can interfere
>> with the performance of setting up method calls. This is what
>> pybench then uncovers.
> 
> I think the only thing PyBench has uncovered is that you're convinced that it's
> always right, and everybody else is always wrong, including people who've
> spent decades measuring performance, and the hardware in your own computer.

Oh, come on. You know that's not true and I'm trying to
understand what is causing your findings, but this is
difficult, since you're not providing enough details.
E.g. the output of pybench showing the timing results
would help a lot.

I would also like to reproduce your findings. Do you have
two revision numbers in svn which I could use for this ?

>> See above (or the code in pybench.py). t1-t0 is usually
>> around 20-50 seconds:
> 
> what machines are you using?  using the default parameters, the entire run takes
> about 50 seconds on the slowest machine I could find...

If the whole suite runs in 50 seconds, the per-test
run-times are far too small to be accurate. I usually
adjust the warp factor so that each *round* takes
50 seconds.

Looks like I have to revisit the default parameters and
update the doc-strings. I'll do that when I add the new
timers.

Could you check whether you still see the same results when
running with "pybench.py -w 1" ?

>>> that's not a very good idea, given how get_process_time tends to be
>>> implemented on current-era systems (google for "jiffies")...  but it
>>> definitely explains the bogus subtest results I'm seeing, and the "magic
>>> hardware" behaviour you're seeing.
>> That's exactly the reason why tests run for a relatively long
>> time - to minimize these effects. Of course, using wall time
>> makes this approach vulnerable to other effects such as current
>> load of the system, other processes having a higher priority
>> interfering with the timed process, etc.
> 
> since process time is *sampled*, not measured, process time isn't exactly in-
> vulnerable either.  it's not hard to imagine scenarios where you end up being
> assigned only a small part of the process time you're actually using, or cases
> where you're assigned more time than you've had a chance to use.
> 
> afaik, if you want true performance counters on Linux, you need to patch the
> operating system (unless something's changed in very recent versions).
> 
> I don't think that sampling errors can explain all the anomalies we've been seeing,
> but I wouldn't be surprised if a high-resolution wall time clock on a lightly loaded
> multiprocess system was, in practice, *more* reliable than sampled process time
> on an equally loaded system.

That's why the timers being used by pybench will become a
parameter that you can then select to adapt pybench to
the OS you're running it on.

Note that time.clock, the current default timer in pybench,
is a high accuracy wall-clock timer on Windows, so it should
demonstrate similar behavior to timeit.py, even more so,
since you're using warp 20 and thus a similar timing strategy
as that of timeit.py.

I suspect that the calibration step is causing problems.

Steve added a parameter to change the number of calibration
runs done per test: -C n. The default is 20.

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 02 2006)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________
2006-07-03: EuroPython 2006, CERN, Switzerland              30 days left

::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! ::::

From andrewdalke at gmail.com  Fri Jun  2 14:06:23 2006
From: andrewdalke at gmail.com (Andrew Dalke)
Date: Fri, 2 Jun 2006 14:06:23 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <4480123A.1090109@egenix.com>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>
	<447DC377.4000504@egenix.com>
	<17533.51096.462552.451772@montanaro.dyndns.org>
	<447DD153.8080202@egenix.com> <e5kkr6$14a$1@sea.gmane.org>
	<447DE055.4040105@egenix.com> <e5oa7b$kvl$1@sea.gmane.org>
	<447FF966.5050807@egenix.com> <e5p06h$tp1$1@sea.gmane.org>
	<4480123A.1090109@egenix.com>
Message-ID: <d78db4cd0606020506n608a880cy8c2017b1e8beca79@mail.gmail.com>

M.-A. Lemburg:
> The approach pybench is using is as follows:
 ...
>  The calibration step is run multiple times and is used
>  to calculate an average test overhead time.

One of the changes that occurred during the sprint was to change this
algorithm to use the best time rather than the average.  Using the average
assumes a Gaussian distribution, which timing results are not.  There is an
absolute best time, but it's rarely reached due to background noise.  The
distribution looks more like a gamma distribution plus the minimum time.

To show the distribution is non-Gaussian I ran the following

import time

def compute():
    x = 0
    for i in range(1000):
        for j in range(1000):
            x += 1

def bench():
    t1 = time.time()
    compute()
    t2 = time.time()
    return t2-t1

times = []
for i in range(1000):
    times.append(bench())

print times

The full distribution is attached as 'plot1.png' and the close up
(range 0.45-0.65)
as 'plot2.png'.  Not a clean gamma function, but that's a closer match than an
exponential.

The gamma distribution looks more like an exponential function when the shape
parameter is large.  This corresponds to a large amount of noise in the system,
so the run time is not close to the best time.  This means the average approach
works better when there is a lot of random background activity, which is not the
usual case when I try to benchmark.

When averaging a gamma distribution you'll end up with a bit of a
skew, and I think
the skew depends on the number of samples, reaching a limit point.

Using the minimum time should be more precise because there is a
definite lower bound and the machine should be stable.  In my test
above the first few results are

0.472838878632
0.473038911819
0.473326921463
0.473494052887
0.473829984665

I'm pretty certain the best time is 0.4725, or very close to that.
But the average
time is 0.58330151391 because of the long tail.  Here are the last 6 results in
my population of 1000

1.76353311539
1.79937505722
1.82750201225
2.01710510254
2.44861507416
2.90868496895

Granted, I hit a couple of web pages while doing this and my spam
filter processed
my mailbox in the background...

There's probably some Markov modeling which would look at the number
and distribution of samples so far and assuming a gamma distribution
determine how many more samples are needed to get a good estimate of
the absolute minimum time.  But min(large enough samples) should work
fine.
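
To make that concrete with the 'times' list collected above (a sketch,
the exact numbers will obviously vary from run to run):

times.sort()
best = times[0]
mean = sum(times) / len(times)
print "best :", best          # close to the true run-time (~0.4725 here)
print "mean :", mean          # inflated by the long tail (~0.583 here)
print "noise:", mean - best   # roughly how much the background added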

> If the whole suite runs in 50 seconds, the per-test
> run-times are far too small to be accurate. I usually
> adjust the warp factor so that each *round* takes
> 50 seconds.

The stringbench.py I wrote uses the timeit algorithm which
dynamically adjusts the test to run between 0.2 and 2 seconds.
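
Roughly this idea, as a simplified sketch (not the actual stringbench or
timeit code): keep increasing the repeat count until one run takes at
least 0.2 seconds, then report the per-call time.

import time

def autorange(func, min_time=0.2):
    number = 1
    while 1:
        t0 = time.time()
        for i in xrange(number):
            func()
        elapsed = time.time() - t0
        if elapsed >= min_time:
            return elapsed / number    # seconds per call
        number *= 10
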

> That's why the timers being used by pybench will become a
> parameter that you can then select to adapt pybench to
> the OS you're running it on.

Wasn't that decision a consequence of the problems found during
the sprint?

                                Andrew
                                dalke at dalkescientific.com
-------------- next part --------------
A non-text attachment was scrubbed...
Name: plot1.png
Type: image/png
Size: 2683 bytes
Desc: not available
Url : http://mail.python.org/pipermail/python-dev/attachments/20060602/be8bb51c/attachment.png 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: plot2.png
Type: image/png
Size: 2916 bytes
Desc: not available
Url : http://mail.python.org/pipermail/python-dev/attachments/20060602/be8bb51c/attachment-0001.png 

From guido at python.org  Fri Jun  2 15:26:52 2006
From: guido at python.org (Guido van Rossum)
Date: Fri, 2 Jun 2006 06:26:52 -0700
Subject: [Python-Dev] Removing Mac OS 9 cruft
In-Reply-To: <ee2a432c0606012031u7c35c50dxf8623ce4633510bf@mail.gmail.com>
References: <ee2a432c0606012031u7c35c50dxf8623ce4633510bf@mail.gmail.com>
Message-ID: <ca471dc20606020626t7c9ce8afmf3be6be942bd2ef@mail.gmail.com>

Just and Jack have confirmed that you can throw away everything except
possibly Demo/*. (Just even speculated that some cruft may have been
accidentally revived by the cvs -> svn transition?)

--Guido

On 6/1/06, Neal Norwitz <nnorwitz at gmail.com> wrote:
> I was about to remove Mac/IDE scripts, but it looks like there might
> be more stuff that is OS 9 related and should be removed.  Other
> possibilities look like (everything under Mac/):
>
>   Demo/*  this is a bit more speculative
>   IDE scripts/*
>   MPW/*
>   Tools/IDE/*  this references IDE scripts, so presumably it should be toast?
>   Tools/macfreeze/*
>   Unsupported/mactcp/dnrglue.c
>   Wastemods/*
>
> I'm going mostly based on what has been modified somewhat recently.
> Can someone confirm/reject these?  I'll remove them.
>
> n
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org
>


-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From mal at egenix.com  Fri Jun  2 15:29:14 2006
From: mal at egenix.com (M.-A. Lemburg)
Date: Fri, 02 Jun 2006 15:29:14 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <d78db4cd0606020506n608a880cy8c2017b1e8beca79@mail.gmail.com>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>	<447DC377.4000504@egenix.com>	<17533.51096.462552.451772@montanaro.dyndns.org>	<447DD153.8080202@egenix.com>
	<e5kkr6$14a$1@sea.gmane.org>	<447DE055.4040105@egenix.com>
	<e5oa7b$kvl$1@sea.gmane.org>	<447FF966.5050807@egenix.com>
	<e5p06h$tp1$1@sea.gmane.org>	<4480123A.1090109@egenix.com>
	<d78db4cd0606020506n608a880cy8c2017b1e8beca79@mail.gmail.com>
Message-ID: <44803D2A.5010108@egenix.com>

Andrew Dalke wrote:
> M.-A. Lemburg:
>> The approach pybench is using is as follows:
> ...
>>  The calibration step is run multiple times and is used
>>  to calculate an average test overhead time.
> 
> One of the changes that occurred during the sprint was to change this
> algorithm
> to use the best time rather than the average.  Using the average assumes a
> Gaussian distribution.  Timing results are not.  There is an absolute
> best but
> that's rarely reached due to background noise.  It's more like a gamma
> distribution
> plus the minimum time.
> 
> To show the distribution is non-Gaussian I ran the following
> 
> def compute():
>    x = 0
>    for i in range(1000):
>        for j in range(1000):
>            x += 1
> 
> def bench():
>    t1 = time.time()
>    compute()
>    t2 = time.time()
>    return t2-t1
> 
> times = []
> for i in range(1000):
>    times.append(bench())
> 
> print times
> 
> The full distribution is attached as 'plot1.png' and the close up
> (range 0.45-0.65)
> as 'plot2.png'.  Not a clean gamma function, but that's a closer match
> than an
> exponential.
> 
> The gamma distribution looks more like an exponential function when the
> shape
> parameter is large.  This corresponds to a large amount of noise in the
> system,
> so the run time is not close to the best time.  This means the average
> approach
> works better when there is a lot of random background activity, which is
> not the
> usual case when I try to benchmark.
> 
> When averaging a gamma distribution you'll end up with a bit of a
> skew, and I think
> the skew depends on the number of samples, reaching a limit point.
> 
> Using the minimum time should be more precise because there is a
> definite lower bound and the machine should be stable.  In my test
> above the first few results are
> 
> 0.472838878632
> 0.473038911819
> 0.473326921463
> 0.473494052887
> 0.473829984665
> 
> I'm pretty certain the best time is 0.4725, or very close to that.
> But the average
> time is 0.58330151391 because of the long tail.  Here are the last 6
> results in
> my population of 1000
> 
> 1.76353311539
> 1.79937505722
> 1.82750201225
> 2.01710510254
> 2.44861507416
> 2.90868496895
> 
> Granted, I hit a couple of web pages while doing this and my spam
> filter processed
> my mailbox in the background...
> 
> There's probably some Markov modeling which would look at the number
> and distribution of samples so far and assuming a gamma distribution
> determine how many more samples are needed to get a good estimate of
> the absolute minimum time.  But min(large enough samples) should work
> fine.

Thanks for the great analysis !

Using the minimum looks like the way to go for calibration.

I wonder whether the same is true for the actual tests; since
you're looking for the expected run-time, the minimum may
not necessarily be the right choice. Then again, in both cases
you are only looking at a small number of samples (20 for
the calibration, 10 for the number of rounds), so this
may be irrelevant.

BTW, did you run this test on Windows or a Unix machine ?

There's also an interesting second peak at around 0.53.
What could be causing this ?

>> If the whole suite runs in 50 seconds, the per-test
>> run-times are far too small to be accurate. I usually
>> adjust the warp factor so that each *round* takes
>> 50 seconds.
> 
> The stringbench.py I wrote uses the timeit algorithm which
> dynamically adjusts the test to run between 0.2 and 2 seconds.
>
>> That's why the timers being used by pybench will become a
>> parameter that you can then select to adapt pybench to
>> the OS you're running it on.
> 
> Wasn't that decision a consequence of the problems found during
> the sprint?

It's a consequence of a discussion I had with Steve Holden
and Tim Peters:

I believe that using wall-clock timers
for benchmarking is not a good approach due to the high
noise level. Process time timers typically have a lower
resolution, but give a better picture of the actual
run-time of your code and also don't exhibit as much noise
as the wall-clock timer approach. Of course, you have
to run the tests somewhat longer to get reasonable
accuracy of the timings.

Tim thinks that it's better to use short running tests and
an accurate timer, accepting the added noise and counting
on the user making sure that the noise level is at a
minimum.

Since I like to give users the option of choosing for
themselves, I'm going to make the choice of timer an
option.

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 02 2006)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________
2006-07-03: EuroPython 2006, CERN, Switzerland              30 days left

::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! ::::

From mwh at python.net  Fri Jun  2 15:37:42 2006
From: mwh at python.net (Michael Hudson)
Date: Fri, 02 Jun 2006 14:37:42 +0100
Subject: [Python-Dev] Let's stop eating exceptions in dict lookup
In-Reply-To: <200606020936.08610.anthony@interlink.com.au> (Anthony Baxter's
	message of "Fri, 2 Jun 2006 09:36:03 +1000")
References: <20060529171147.GA1717@code0.codespeak.net>
	<1f7befae0605311810i5996980dq313a8cae1725cf25@mail.gmail.com>
	<20060601162134.GB5802@performancedrivers.com>
	<200606020936.08610.anthony@interlink.com.au>
Message-ID: <2mpshrvftl.fsf@starship.python.net>

Anthony Baxter <anthony at interlink.com.au> writes:

> On Friday 02 June 2006 02:21, Jack Diederich wrote:
>> The CCP Games CEO said they have trouble retaining talent from more
>> moderate latitudes for this reason.  18 hours of daylight makes
>> them a bit goofy and when the Winter Solstice rolls around they are
>> apt to go quite mad.
>
> Obviously they need to hire people who are already crazy.

I think they already did! :)

> not-naming-any-names-ly,
> Anthony

me-neither-ly y'rs
mwh

-- 
  > Look I don't know.  Thankyou everyone for arguing me round in
  > circles.
  No need for thanks, ma'am; that's what we're here for.
                                    -- LNR & Michael M Mason, cam.misc

From fredrik at pythonware.com  Fri Jun  2 15:49:46 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Fri, 2 Jun 2006 15:49:46 +0200
Subject: [Python-Dev] Python Benchmarks
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>	<447DC377.4000504@egenix.com>	<17533.51096.462552.451772@montanaro.dyndns.org>	<447DD153.8080202@egenix.com><e5kkr6$14a$1@sea.gmane.org>	<447DE055.4040105@egenix.com><e5oa7b$kvl$1@sea.gmane.org>	<447FF966.5050807@egenix.com><e5p06h$tp1$1@sea.gmane.org>	<4480123A.1090109@egenix.com><d78db4cd0606020506n608a880cy8c2017b1e8beca79@mail.gmail.com>
	<44803D2A.5010108@egenix.com>
Message-ID: <e5pflr$pcm$1@sea.gmane.org>

M.-A. Lemburg wrote:

> I believe that using wall-clock timers
> for benchmarking is not a good approach due to the high
> noise level. Process time timers typically have a lower
> resolution, but give a better picture of the actual
> run-time of your code and also don't exhibit as much noise
> as the wall-clock timer approach.

please stop repeating this nonsense.  there are no "process time timers" in
contemporary operating systems; only tick counters.

there are patches for linux and commercial add-ons to most platforms that let
you use hardware performance counters for process stuff, but there's no way to
emulate that by playing with different existing Unix or Win32 API:s; the thing
you think you're using simply isn't there.

</F> 




From mal at egenix.com  Fri Jun  2 16:50:03 2006
From: mal at egenix.com (M.-A. Lemburg)
Date: Fri, 02 Jun 2006 16:50:03 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <44803D2A.5010108@egenix.com>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>	<447DC377.4000504@egenix.com>	<17533.51096.462552.451772@montanaro.dyndns.org>	<447DD153.8080202@egenix.com>	<e5kkr6$14a$1@sea.gmane.org>	<447DE055.4040105@egenix.com>	<e5oa7b$kvl$1@sea.gmane.org>	<447FF966.5050807@egenix.com>	<e5p06h$tp1$1@sea.gmane.org>	<4480123A.1090109@egenix.com>	<d78db4cd0606020506n608a880cy8c2017b1e8beca79@mail.gmail.com>
	<44803D2A.5010108@egenix.com>
Message-ID: <4480501B.9080709@egenix.com>

M.-A. Lemburg wrote:
>>> That's why the timers being used by pybench will become a
>>> parameter that you can then select to adapt pybench to
>>> the OS you're running it on.
>> Wasn't that decision a consequence of the problems found during
>> the sprint?
> 
> It's a consequence of a discussion I had with Steve Holden
> and Tim Peters:
> 
> I believe that using wall-clock timers
> for benchmarking is not a good approach due to the high
> noise level. Process time timers typically have a lower
> resolution, but give a better picture of the actual
> run-time of your code and also don't exhibit as much noise
> as the wall-clock timer approach. Of course, you have
> to run the tests somewhat longer to get reasonable
> accuracy of the timings.
> 
> Tim thinks that it's better to use short running tests and
> an accurate timer, accepting the added noise and counting
> on the user making sure that the noise level is at a
> minimum.

I just had an idea: if we could get each test to run
inside a single time slice assigned by the OS scheduler,
then we could benefit from the better resolution of the
hardware timers while still keeping the noise to a
minimum.

I suppose this could be achieved by:

* making sure that each test needs less than 10ms to run

* calling time.sleep(0) after each test run

Here's some documentation on the Linux scheduler:

http://www.samspublishing.com/articles/article.asp?p=101760&seqNum=2&rl=1

Table 3.1 has the minimum time slice: 10ms.

What do you think ? Would this work ?
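
Here's a rough sketch of how one could check the idea on Unix (illustration
only; it relies on the resource module's involuntary context switch counter,
which the platform has to fill in, so it won't work on Windows):

import time
import resource

def forced_switches():
    return resource.getrusage(resource.RUSAGE_SELF).ru_nivcsw

def count_preempted(test, runs=100):
    # count how many runs were interrupted by a forced context switch
    preempted = 0
    for i in range(runs):
        before = forced_switches()
        test()
        if forced_switches() != before:
            preempted += 1
        time.sleep(0)    # yield to the scheduler, as proposed above
    return preempted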

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 02 2006)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________
2006-07-03: EuroPython 2006, CERN, Switzerland              30 days left

::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! ::::

From mcherm at mcherm.com  Fri Jun  2 17:10:21 2006
From: mcherm at mcherm.com (Michael Chermside)
Date: Fri, 02 Jun 2006 08:10:21 -0700
Subject: [Python-Dev] Python Benchmarks
Message-ID: <20060602081021.jio6uf0wn0okc8kw@login.werra.lunarpages.com>

Marc-Andre Lemburg writes:
> Using the minimum looks like the way to go for calibration.
>
> I wonder whether the same is true for the actual tests; since
> you're looking for the expected run-time, the minimum may
> not necessarily be the choice.

No, you're not looking for the expected run-time. The expected
run-time is a function of the speed of the CPU, the architecture
of same, what else is running simultaneously -- perhaps even
what music you choose to listen to that day. It is NOT a
constant for a given piece of code, and is NOT what you are
looking for.

What you really want to do in benchmarking is to *compare* the
performance of two (or more) different pieces of code. You do,
of course, care about the real-world performance. So if you
had two algorithms and one ran twice as fast when there were no
context switches and 10 times slower when there was background
activity on the machine, then you'd want to prefer the algorithm
that supports context switches. But that's not a realistic
situation. What is far more common is that you run one test
while listening to the Grateful Dead and another test while
listening to Bach, and that (plus other random factors and the
phase of the moon) causes one test to run faster than the
other.

Taking the minimum time clearly subtracts some noise, which is
a good thing when comparing performance for two or more pieces
of code. It fails to account for the distribution of times, so
if one piece of code occasionally gets lucky and takes far less
time, then minimum time won't be a good choice... but it would
be tricky to design code that would be affected by the scheduler
in this fashion even if you were explicitly trying!


Later he continues:
> Tim thinks that it's better to use short running tests and
> an accurate timer, accepting the added noise and counting
> on the user making sure that the noise level is at a
> minimum.
>
> Since I like to give users the option of choosing for
> themselves, I'm going to make the choice of timer an
> option.

I'm generally a fan of giving programmers choices. However,
this is an area where we have demonstrated that even very
competent programmers often have misunderstandings (read this
thread for evidence!). So be very careful about giving such
a choice: the default behavior should be chosen by people
who think carefully about such things, and the documentation
on the option should give a good explanation of the tradeoffs
or at least a link to such an explanation.

-- Michael Chermside


From theller at python.net  Fri Jun  2 18:27:47 2006
From: theller at python.net (Thomas Heller)
Date: Fri, 02 Jun 2006 18:27:47 +0200
Subject: [Python-Dev] test_ctypes failures on ppc64 debian
Message-ID: <e5pou4$tis$1@sea.gmane.org>

test_ctypes fails on the ppc64 machine.  I don't have access to such
a machine myself, so I would have to do some trial and error, or try
to print some diagnostic information.

This should not be done in the trunk, so the question is: can the buildbots
build branches?

I assume I just have to enter a revision number and press the force-build button,
is this correct?  Or would someone consider this abuse?

Thomas


From tim.peters at gmail.com  Fri Jun  2 18:37:13 2006
From: tim.peters at gmail.com (Tim Peters)
Date: Fri, 2 Jun 2006 12:37:13 -0400
Subject: [Python-Dev] test_ctypes failures on ppc64 debian
In-Reply-To: <e5pou4$tis$1@sea.gmane.org>
References: <e5pou4$tis$1@sea.gmane.org>
Message-ID: <1f7befae0606020937g9316c74t49b59758a2d0f4b1@mail.gmail.com>

[Thomas Heller]
> test_ctypes fails on the ppc64 machine.  I don't have access to such
> a machine myself, so I would have to do some trial and error, or try
> to print some diagnostic information.
>
> This should not be done in the trunk, so the question is: can the buildbots
> build branches?

Yes.  For example, that's how the buildbots run 2.4 tests.

> I assume I just have to enter a revision number and press the force-build
> button, is this correct?

No, you need to enter the "tail end" of the branch path in the "Branch
to build:" box.  You probably want to leave the "Revision to build:"
box empty.  Examples I know work because I've tried them in the past:
entering "trunk" in "Branch to build:" builds the current trunk, and
entering "branches/release24-maint" in "Branch to build:" builds the
current 2.4 branch.  I'm not certain that paths other than those work.

> Or would someone consider this abuse?

In this case, it only matters whether Matthias Klose thinks it's abuse
(since klose-debian-ppc64 is his box), so I've copied him on this
reply.  Matthias, I hope you don't mind some extra activity on that
box, since it may be the only way test_ctypes will ever pass on your
box :-)

From theller at python.net  Fri Jun  2 18:42:43 2006
From: theller at python.net (Thomas Heller)
Date: Fri, 02 Jun 2006 18:42:43 +0200
Subject: [Python-Dev] test_ctypes failures on ppc64 debian
In-Reply-To: <1f7befae0606020937g9316c74t49b59758a2d0f4b1@mail.gmail.com>
References: <e5pou4$tis$1@sea.gmane.org>
	<1f7befae0606020937g9316c74t49b59758a2d0f4b1@mail.gmail.com>
Message-ID: <44806A83.2020203@python.net>

Tim Peters wrote:
> [Thomas Heller]
>> test_ctypes fails on the ppc64 machine.  I don't have access to such
>> a machine myself, so I would have to do some trial and error, or try
>> to print some diagnostic information.
>>
>> This should not be done in the trunk, so the question is: can the 
>> buildbots
>> build branches?
> 
> Yes.  For example, that's how the buildbots run 2.4 tests.
> 
>> I assume I just have to enter a revision number and press the force-build
>> button, is this correct?
> 
> No, you need to enter the "tail end" of the branch path in the "Branch
> to build:" box.  You probably want to leave the "Revision to build:"
> box empty.  Examples I know work because I've tried them in the past:
> entering "trunk" in "Branch to build:" builds the current trunk, and
> entering "branches/release24-maint" in "Branch to build:" builds the
> current 2.4 branch.  I'm not certain that paths other than those work.
> 
>> Or would someone consider this abuse?
> 
> In this case, it only matters whether Matthias Klose thinks it's abuse
> (since klose-debian-ppc64 is his box), so I've copied him on this
> reply.  Matthias, I hope you don't mind some extra activity on that
> box, since it may be the only way test_ctypes will ever pass on your
> box :-)

I have already mailed him asking if he can give me interactive access
to this machine ;-).  He has not yet replied - I'm not sure if this is because
he's been shocked to see such a request, or if he is already on holiday.

Thomas


From brett at python.org  Fri Jun  2 18:51:51 2006
From: brett at python.org (Brett Cannon)
Date: Fri, 2 Jun 2006 09:51:51 -0700
Subject: [Python-Dev] SF patch #1473257: "Add a gi_code attr to
	generators"
In-Reply-To: <ca471dc20606012058h563167b5p56515146fd68f20d@mail.gmail.com>
References: <5.1.1.6.0.20060601230800.01f286a8@mail.telecommunity.com>
	<ca471dc20606012058h563167b5p56515146fd68f20d@mail.gmail.com>
Message-ID: <bbaeab100606020951m3ab28659t6e97688d5bb94a49@mail.gmail.com>

On 6/1/06, Guido van Rossum <guido at python.org> wrote:
>
> On 6/1/06, Phillip J. Eby <pje at telecommunity.com> wrote:
> > I didn't know it was assigned to me.  I guess SF doesn't send any
> > notifications, and neither did Georg, so your email is the very first
> time
> > that I've heard of it.
>
> This is a longstanding SF bug. (One of the reasons why we should move
> away from it ASAP IMO.)


The Request for Trackers should go out this weekend, putting the worst-case
timeline for choosing a tracker at three months from this weekend.  Once that
is done hopefully switching over won't take very long.  In other words,
hopefully this can get done before October.

-Brett

> While we're still using SF, developers should probably get in the
> habit of sending an email to the assignee when assigning a bug...
>
> --
> --Guido van Rossum (home page: http://www.python.org/~guido/)
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/brett%40python.org
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.python.org/pipermail/python-dev/attachments/20060602/be12e143/attachment.html 

From tjreedy at udel.edu  Fri Jun  2 20:20:05 2006
From: tjreedy at udel.edu (Terry Reedy)
Date: Fri, 2 Jun 2006 14:20:05 -0400
Subject: [Python-Dev] Python Benchmarks
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>	<447DC377.4000504@egenix.com>	<17533.51096.462552.451772@montanaro.dyndns.org>	<447DD153.8080202@egenix.com><e5kkr6$14a$1@sea.gmane.org>	<447DE055.4040105@egenix.com><e5oa7b$kvl$1@sea.gmane.org>	<447FF966.5050807@egenix.com><e5p06h$tp1$1@sea.gmane.org>	<4480123A.1090109@egenix.com><d78db4cd0606020506n608a880cy8c2017b1e8beca79@mail.gmail.com>
	<44803D2A.5010108@egenix.com>
Message-ID: <e5pvgm$mqv$1@sea.gmane.org>


"M.-A. Lemburg" <mal at egenix.com> wrote in message 
news:44803D2A.5010108 at egenix.com...
>> Granted, I hit a couple of web pages while doing this and my spam
>> filter processed my mailbox in the background...

Hardly a setting in which to run comparison tests, seems to me.

> Using the minimum looks like the way to go for calibration.

Or possibly the median.

But even better, the way to go to run comparison timings is to use a system 
with as little other stuff going on as possible.  For Windows, this means 
rebooting in safe mode, waiting until the system is quiescent, and then running
the timing test with *nothing* else active that can be avoided.

Even then, I would look at the distribution of times for a given test to 
check for anomalously high values that should be tossed.  (This can be 
automated somewhat.)
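
For instance, something as simple as this would do as a first cut (a sketch
only - the 10% threshold is an arbitrary choice):

def toss_outliers(times, tolerance=1.10):
    # discard runs more than 10% slower than the best observed run
    best = min(times)
    return [t for t in times if t <= best * tolerance]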

Terry Jan Reedy




From fredrik at pythonware.com  Fri Jun  2 20:41:06 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Fri, 02 Jun 2006 20:41:06 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <e5pvgm$mqv$1@sea.gmane.org>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>	<447DC377.4000504@egenix.com>	<17533.51096.462552.451772@montanaro.dyndns.org>	<447DD153.8080202@egenix.com><e5kkr6$14a$1@sea.gmane.org>	<447DE055.4040105@egenix.com><e5oa7b$kvl$1@sea.gmane.org>	<447FF966.5050807@egenix.com><e5p06h$tp1$1@sea.gmane.org>	<4480123A.1090109@egenix.com><d78db4cd0606020506n608a880cy8c2017b1e8beca79@mail.gmail.com>	<44803D2A.5010108@egenix.com>
	<e5pvgm$mqv$1@sea.gmane.org>
Message-ID: <e5q0o2$r2o$1@sea.gmane.org>

Terry Reedy wrote:

> But even better, the way to go to run comparison timings is to use a system 
> with as little other stuff going on as possible.  For Windows, this means 
> rebooting in safe mode, waiting until the system is quiescent, and then running
> the timing test with *nothing* else active that can be avoided.

sigh.

</F>


From fredrik at pythonware.com  Fri Jun  2 20:52:55 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Fri, 02 Jun 2006 20:52:55 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <4480501B.9080709@egenix.com>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>	<447DC377.4000504@egenix.com>	<17533.51096.462552.451772@montanaro.dyndns.org>	<447DD153.8080202@egenix.com>	<e5kkr6$14a$1@sea.gmane.org>	<447DE055.4040105@egenix.com>	<e5oa7b$kvl$1@sea.gmane.org>	<447FF966.5050807@egenix.com>	<e5p06h$tp1$1@sea.gmane.org>	<4480123A.1090109@egenix.com>	<d78db4cd0606020506n608a880cy8c2017b1e8beca79@mail.gmail.com>	<44803D2A.5010108@egenix.com>
	<4480501B.9080709@egenix.com>
Message-ID: <e5q1e7$tks$1@sea.gmane.org>

M.-A. Lemburg wrote:

> I just had an idea: if we could get each test to run
> inside a single time slice assigned by the OS scheduler,
> then we could benefit from the better resolution of the
> hardware timers while still keeping the noise to a
> minimum.
> 
> I suppose this could be achieved by:
> 
> * making sure that each test needs less than 10ms to run

iirc, very recent linux kernels have a 1 millisecond tick.  so do 
alphas, and probably some other platforms.

> * calling time.sleep(0) after each test run

so some higher priority process can get a chance to run, and spend 9.5 
milliseconds shuffling data to a slow I/O device before blocking? ;-)

I'm not sure this problem can be solved, really, at least not as long as 
you're constrained to portable API:s.

(talking of which, if someone has some time and a linux box to spare, 
and wants to do some serious hacking on precision benchmarks, using

     http://user.it.uu.se/~mikpe/linux/perfctr/2.6/

to play with the TSC might be somewhat interesting.)

</F>


From mal at egenix.com  Fri Jun  2 23:20:06 2006
From: mal at egenix.com (M.-A. Lemburg)
Date: Fri, 02 Jun 2006 23:20:06 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <4480501B.9080709@egenix.com>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>	<447DC377.4000504@egenix.com>	<17533.51096.462552.451772@montanaro.dyndns.org>	<447DD153.8080202@egenix.com>	<e5kkr6$14a$1@sea.gmane.org>	<447DE055.4040105@egenix.com>	<e5oa7b$kvl$1@sea.gmane.org>	<447FF966.5050807@egenix.com>	<e5p06h$tp1$1@sea.gmane.org>	<4480123A.1090109@egenix.com>	<d78db4cd0606020506n608a880cy8c2017b1e8beca79@mail.gmail.com>	<44803D2A.5010108@egenix.com>
	<4480501B.9080709@egenix.com>
Message-ID: <4480AB86.8080606@egenix.com>

M.-A. Lemburg wrote:
>>>> That's why the timers being used by pybench will become a
>>>> parameter that you can then select to adapt pybench to
>>>> the OS you're running it on.
>>> Wasn't that decision a consequence of the problems found during
>>> the sprint?
>> It's a consequence of a discussion I had with Steve Holden
>> and Tim Peters:
>>
>> I believe that using wall-clock timers
>> for benchmarking is not a good approach due to the high
>> noise level. Process time timers typically have a lower
>> resolution, but give a better picture of the actual
>> run-time of your code and also don't exhibit as much noise
>> as the wall-clock timer approach. Of course, you have
>> to run the tests somewhat longer to get reasonable
>> accuracy of the timings.
>>
>> Tim thinks that it's better to use short running tests and
>> an accurate timer, accepting the added noise and counting
>> on the user making sure that the noise level is at a
>> minimum.
> 
> I just had an idea: if we could get each test to run
> inside a single time slice assigned by the OS scheduler,
> then we could benefit from the better resolution of the
> hardware timers while still keeping the noise to a
> minimum.
> 
> I suppose this could be achieved by:
> 
> * making sure that each test needs less than 10ms to run
> 
> * calling time.sleep(0) after each test run
> 
> Here's some documentation on the Linux scheduler:
> 
> http://www.samspublishing.com/articles/article.asp?p=101760&seqNum=2&rl=1
> 
> Table 3.1 has the minimum time slice: 10ms.
> 
> What do you think ? Would this work ?

I ran some tests related to this and it appears that, provided
the test itself uses less than 1ms, chances are
high that you don't get any forced context switches in your
way while running the test.

It also appears that you have to use a small non-zero sleep such
as time.sleep(10e-6) to get the desired behavior. time.sleep(0)
seems to receive some extra care, so it doesn't have the intended
effect - at least not on Linux.

I've checked this on AMD64 and Intel Pentium M. The script is
attached - it will run until you get more than 10 forced
context switches in 100 runs of the test, incrementing the
runtime of the test in each round.

It's also interesting that the difference between max and min
run-time of the tests can be as low as 0.2% on the Pentium,
whereas the AMD64 always stays around 4-5%. On an old AMD Athlon,
the difference rarely goes below 50% - this might also have
to do with the kernel version running on that machine, which
is 2.4, whereas the AMD64 and Pentium M are running 2.6.

Note that it needs the resource module, so it won't work
on Windows.

It's interesting that even pressing a key on your keyboard
will cause forced context switches.

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 02 2006)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________

::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! ::::
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: microbench.py
Url: http://mail.python.org/pipermail/python-dev/attachments/20060602/f8671cce/attachment.asc 

From jjl at pobox.com  Sat Jun  3 01:19:01 2006
From: jjl at pobox.com (John J Lee)
Date: Fri, 2 Jun 2006 23:19:01 +0000 (UTC)
Subject: [Python-Dev] Some more comments re new uriparse module,
	patch 1462525
Message-ID: <Pine.LNX.4.64.0606022059340.8454@localhost>

[Not sure whether this kind of thing is best posted as tracker comments 
(but then the tracker gets terribly long and is mailed out every time a 
change happens) or posted here.  Feel free to tell me I'm posting in the 
wrong place...]

Some comments on this patch (a new module, submitted by Paul Jimenez, 
implementing the rules set out in RFC 3986 for URI parsing, joining URI 
references with a base URI etc.)

http://python.org/sf/1462525


Sorry for the pause, Paul.  I finally read RFC 3986 -- which I must say is 
probably the best-written RFC I've read (and there was much rejoicing).

I still haven't read 3987 and got to grips with the unicode issues 
(whatever they are), but I have just implemented the same stuff you did, 
so have some comments on non-unicode aspects of your implementation (the 
version labelled "v23" on the tracker):


Your urljoin implementation seems to pass the tests (the tests taken from 
the RFC), but I have to admit I don't understand it :-)  It doesn't seem 
to take account of the distinction between undefined and empty URI 
components.  For example, the authority of the URI reference may be empty 
but still defined.  Anyway, if you're taking advantage of some subtle 
identity that implies that you can get away with truth-testing in place of 
"is None" tests, please don't ;-) It's slower than "is [not] None" tests 
both for the computer and (especially!) the reader.

I don't like the use of module posixpath to implement the algorithm 
labelled "remove_dot_segments".  URIs are not POSIX filesystem paths, and 
shouldn't depend on code meant to implement the latter.  But my own 
implementation is exceedingly ugly ATM, so I'm in no position to grumble 
too much :-)
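
For what it's worth, the RFC's remove_dot_segments algorithm (section 5.2.4)
is short enough to implement directly.  A sketch along these lines (not
checked against your test suite) avoids the posixpath dependency:

def remove_dot_segments(path):
    output = []
    while path:
        if path.startswith('../'):
            path = path[3:]
        elif path.startswith('./'):
            path = path[2:]
        elif path.startswith('/./'):
            path = '/' + path[3:]
        elif path == '/.':
            path = '/'
        elif path.startswith('/../'):
            path = '/' + path[4:]
            if output:
                output.pop()
        elif path == '/..':
            path = '/'
            if output:
                output.pop()
        elif path in ('.', '..'):
            path = ''
        else:
            # move the first segment (up to, but not including, the next
            # '/' after position 0) from the input buffer to the output
            if path.startswith('/'):
                i = path.find('/', 1)
            else:
                i = path.find('/')
            if i == -1:
                output.append(path)
                path = ''
            else:
                output.append(path[:i])
                path = path[i:]
    return ''.join(output)

E.g. remove_dot_segments('/a/b/c/./../../g') gives '/a/g', as in the RFC's
own example.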

Normalisation of the base URI is optional, and your urljoin function
never normalises.  Instead, it parses the base and reference, then
follows the algorithm of section 5.2 of the RFC.  Parsing is required
before normalisation takes place.  So urljoin forces people who need
to normalise the URI beforehand to parse it twice, which is annoying.
There should be some way to pass 5-tuples in instead of URIs.  E.g.,
from my implementation:

def urljoin(base_uri, uri_reference):
     return urlunsplit(urljoin_parts(urlsplit(base_uri),
                                     urlsplit(uri_reference)))


It would be nice to have a 5-tuple-like class (I guess implemented as a 
subclass of tuple) that also exposes attributes (.authority, .path, etc.) 
-- the same way module time does it.
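
Something along these lines, say (a sketch of one possible shape; the names
here are made up):

class SplitURI(tuple):
    """A (scheme, authority, path, query, fragment) tuple that also
    allows attribute access, like the struct-ish tuples in module time."""
    __slots__ = ()

    def __new__(cls, scheme, authority, path, query, fragment):
        return tuple.__new__(cls, (scheme, authority, path, query, fragment))

    scheme    = property(lambda self: self[0])
    authority = property(lambda self: self[1])
    path      = property(lambda self: self[2])
    query     = property(lambda self: self[3])
    fragment  = property(lambda self: self[4])

That way the parser could keep returning something that unpacks like a plain
5-tuple while still allowing result.authority and friends.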

The path component is required, though it may be empty.  Your parser
returns None (meaning "undefined") where it should return an empty
string.

Nit: Your tests involving ports contain non-digit characters in the
port (viz. "port"), which is not valid by section 3.2.3 of the RFC.

Smaller nit: the userinfo component was never allowed in http URLs,
but you use them in your tests.  This issue is outside of RFC 3986, of
course.

Particularly because the userinfo component is deprecated, I'd rather
that userinfo-splitting and joining were separate functions, with the
other functions dealing only with the standard RFC 3986 5-tuples.

DefaultSchemes should be a class attribute of URIParser

The naming of URLParser / URIParser is still insane :-)  I suggest
naming them _URIParser and URIParser respectively.

I guess there should be no mention of "URL" anywhere in the module --
only "URI" (even though I hate "URI", as a mostly-worthless
distinction from "URL", consistency inside the module is more
important, and URI is technically correct and fits with all the
terminology used in the RFC).  I'm still heavily -1 on calling it
"uriparse" though, because of the highly misleading comparison with
the name "urlparse" (the difference between the modules isn't the
difference between URIs and URLs).

Re your comment on "mailto:" in the tracker: sure, I understand it's not 
meant to be public, but the interface is!  .parse() will return a 4-tuple 
for mailto: URLs.  For everything else, it will return a 7-tuple.  That's 
silly.

The documentation should explain that the function of URIParser is
hiding scheme-dependent URI normalisation.

Method names and locals are still likeThis, contrary to PEP 8.

docstrings and other whitespace are still non-standard -- follow PEP 8
(and PEP 257, which PEP 8 references).  Doesn't have to be totally rigid
of course -- e.g. lining up the ":" characters in the tests is fine.

Standard stdlib form documentation is still missing.  I'll be told off
if I don't read you your rights: you don't have to submit in LaTeX
markup -- apparently there are hordes of eager LaTeX markers-up
lurking ready to pounce on poorly-formatted documentation <wink>

Test suite still needs tweaking to put it in standard stdlib form


John


From andrewdalke at gmail.com  Sat Jun  3 01:19:15 2006
From: andrewdalke at gmail.com (Andrew Dalke)
Date: Sat, 3 Jun 2006 01:19:15 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <e5pvgm$mqv$1@sea.gmane.org>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>
	<e5kkr6$14a$1@sea.gmane.org> <447DE055.4040105@egenix.com>
	<e5oa7b$kvl$1@sea.gmane.org> <447FF966.5050807@egenix.com>
	<e5p06h$tp1$1@sea.gmane.org> <4480123A.1090109@egenix.com>
	<d78db4cd0606020506n608a880cy8c2017b1e8beca79@mail.gmail.com>
	<44803D2A.5010108@egenix.com> <e5pvgm$mqv$1@sea.gmane.org>
Message-ID: <d78db4cd0606021619q3c75495taa06f276d01153fb@mail.gmail.com>

On 6/2/06, Terry Reedy <tjreedy at udel.edu> wrote:
> Hardly a setting in which to run comparison tests, seems to me.

The point though was to show that the time distribution is non-Gaussian,
so intuition based on that doesn't help.

> > Using the minimum looks like the way to go for calibration.
>
> Or possibly the median.

Why?  I can't think of why that's more useful than the minimum time.

Given a large number of samples the difference between the
minimum and the median/average/whatever is mostly providing
information about the background noise, which is pretty irrelevant
to most benchmarks.

> But even better, the way to go to run comparison timings is to use a system
> with as little other stuff going on as possible.  For Windows, this means
> rebooting in safe mode, waiting until the system is quiescent, and then running
> the timing test with *nothing* else active that can be avoided.

A reason I program in Python is because I want to get work done and not
deal with stoic purity.  I'm not going to waste all that time (or money to buy
a new machine) just to run a benchmark.

Just how much more accurate would that be over the numbers we get
now?  Have you tried it?  What additional sensitivity did you get and was
the extra effort worthwhile?

> Even then, I would look at the distribution of times for a given test to
> check for anomalously high values that should be tossed.  (This can be
> automated somewhat.)

I say it can be automated completely.  Toss all but the lowest.
It's the one with the least noise overhead.

I think fitting the smaller data points to a gamma distribution might
yield better (more reproducible and useful) numbers but I know my
stats ability is woefully decayed so I'm not going to try.  My observation
is that the shape factor is usually small so in a few dozen to a hundred
samples there's a decent chance of getting a time with minimal noise
overhead.

                                Andrew
                                dalke at dalkescientific.com

From andrewdalke at gmail.com  Sat Jun  3 01:25:06 2006
From: andrewdalke at gmail.com (Andrew Dalke)
Date: Sat, 3 Jun 2006 01:25:06 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <4480AB86.8080606@egenix.com>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>
	<447DE055.4040105@egenix.com> <e5oa7b$kvl$1@sea.gmane.org>
	<447FF966.5050807@egenix.com> <e5p06h$tp1$1@sea.gmane.org>
	<4480123A.1090109@egenix.com>
	<d78db4cd0606020506n608a880cy8c2017b1e8beca79@mail.gmail.com>
	<44803D2A.5010108@egenix.com> <4480501B.9080709@egenix.com>
	<4480AB86.8080606@egenix.com>
Message-ID: <d78db4cd0606021625j3748fc6fhd0539a355088caed@mail.gmail.com>

On 6/2/06, M.-A. Lemburg <mal at egenix.com> wrote:
> It's interesting that even pressing a key on your keyboard
> will cause forced context switches.

When niceness was first added to multiprocessing OSes people found their
CPU intensive jobs would go faster by pressing enter a lot.

                                Andrew
                                dalke at dalkescientific.com

From tim.peters at gmail.com  Sat Jun  3 01:44:07 2006
From: tim.peters at gmail.com (Tim Peters)
Date: Fri, 2 Jun 2006 19:44:07 -0400
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <d78db4cd0606021619q3c75495taa06f276d01153fb@mail.gmail.com>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>
	<447DE055.4040105@egenix.com> <e5oa7b$kvl$1@sea.gmane.org>
	<447FF966.5050807@egenix.com> <e5p06h$tp1$1@sea.gmane.org>
	<4480123A.1090109@egenix.com>
	<d78db4cd0606020506n608a880cy8c2017b1e8beca79@mail.gmail.com>
	<44803D2A.5010108@egenix.com> <e5pvgm$mqv$1@sea.gmane.org>
	<d78db4cd0606021619q3c75495taa06f276d01153fb@mail.gmail.com>
Message-ID: <1f7befae0606021644m3f9070d5q9048c67f777b8828@mail.gmail.com>

[MAL]
>>> Using the minimum looks like the way to go for calibration.

[Terry Reedy]
>> Or possibly the median.

[Andrew Dalke]
> Why?  I can't think of why that's more useful than the minimum time.

A lot of things get mixed up here ;-)  The _mean_ is actually useful
if you're using a poor-resolution timer with a fast test.  For
example, suppose a test takes 1/10th the time of the span between
counter ticks.  Then, "on average", in 9 runs out of 10 the reported
elapsed time is 0 ticks, and in 1 run out of 10 the reported time is 1
tick.  0 and 1 are both wrong, but the mean (1/10) is correct.
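
To make that concrete, a quick simulation (a sketch, not code from any
benchmark): the test really takes 0.1 tick, the timer only reports whole
ticks, and only the mean recovers the true value.

import random

TRUE_TIME = 0.1   # the test really takes a tenth of a tick

def reported_time():
    # the test starts at a random offset within a tick; a tick-based
    # timer can only report how many tick boundaries were crossed
    start = random.random()
    return float(int(start + TRUE_TIME) - int(start))

samples = [reported_time() for i in range(100000)]
print sum(samples) / len(samples)   # ~0.1 - the mean is about right
print min(samples)                  # 0.0 - the minimum is useless here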

So there _can_ be sense to that.  Then people vaguely recall that the
median is more robust than the mean, and all sense goes out the window
;-)

My answer is to use the timer with the best resolution the machine
has.  Using the mean is a way to worm around timer quantization
artifacts, but it's easier and clearer to use a timer with resolution
so fine that quantization doesn't make a lick of real difference.
Forcing a test to run for a long time is another way to make timer
quantization irrelevant, but then you're also vastly increasing
chances for other processes to disturb what you're testing.

I liked benchmarking on Crays in the good old days.  No time-sharing,
no virtual memory, and the OS believed to its core that its primary
purpose was to set the base address once at the start of a job so the
Fortran code could scream.  Test times were reproducible to the
nanosecond with no effort.  Running on a modern box for a few
microseconds at a time is a way to approximate that, provided you
measure the minimum time with a high-resolution timer :-)

From amk at amk.ca  Sat Jun  3 02:26:33 2006
From: amk at amk.ca (A.M. Kuchling)
Date: Fri, 2 Jun 2006 20:26:33 -0400
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <1f7befae0606021644m3f9070d5q9048c67f777b8828@mail.gmail.com>
References: <447DE055.4040105@egenix.com> <e5oa7b$kvl$1@sea.gmane.org>
	<447FF966.5050807@egenix.com> <e5p06h$tp1$1@sea.gmane.org>
	<4480123A.1090109@egenix.com>
	<d78db4cd0606020506n608a880cy8c2017b1e8beca79@mail.gmail.com>
	<44803D2A.5010108@egenix.com> <e5pvgm$mqv$1@sea.gmane.org>
	<d78db4cd0606021619q3c75495taa06f276d01153fb@mail.gmail.com>
	<1f7befae0606021644m3f9070d5q9048c67f777b8828@mail.gmail.com>
Message-ID: <20060603002633.GA569@Andrew-iBook2.local>

On Fri, Jun 02, 2006 at 07:44:07PM -0400, Tim Peters wrote:
> Fortran code could scream.  Test times were reproducible to the
> nanosecond with no effort.  Running on a modern box for a few
> microseconds at a time is a way to approximate that, provided you
> measure the minimum time with a high-resolution timer :-)

On Linux with a multi-CPU machine, you could probably boot up the
system to use N-1 CPUs, and then start the Python process on CPU N.
That should avoid the process being interrupted by other processes,
though I guess there would still be some noise from memory bus and
kernel lock contention.

(At work we're trying to move toward this approach for doing realtime
audio: devote one CPU to the audio computation and use other CPUs for
I/O, web servers, and whatnot.)

--amk

From greg.ewing at canterbury.ac.nz  Sat Jun  3 03:20:48 2006
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Sat, 03 Jun 2006 13:20:48 +1200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <1f7befae0606021644m3f9070d5q9048c67f777b8828@mail.gmail.com>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>
	<447DE055.4040105@egenix.com> <e5oa7b$kvl$1@sea.gmane.org>
	<447FF966.5050807@egenix.com> <e5p06h$tp1$1@sea.gmane.org>
	<4480123A.1090109@egenix.com>
	<d78db4cd0606020506n608a880cy8c2017b1e8beca79@mail.gmail.com>
	<44803D2A.5010108@egenix.com> <e5pvgm$mqv$1@sea.gmane.org>
	<d78db4cd0606021619q3c75495taa06f276d01153fb@mail.gmail.com>
	<1f7befae0606021644m3f9070d5q9048c67f777b8828@mail.gmail.com>
Message-ID: <4480E3F0.8050801@canterbury.ac.nz>

Tim Peters wrote:

> I liked benchmarking on Crays in the good old days.  ...  
 > Test times were reproducible to the
> nanosecond with no effort.  Running on a modern box for a few
> microseconds at a time is a way to approximate that, provided you
> measure the minimum time with a high-resolution timer :-)

Obviously what we need here is a stand-alone Python interpreter
that runs on the bare machine, so there's no pesky operating
system around to mess up our times.

--
Greg

From greg.ewing at canterbury.ac.nz  Sat Jun  3 03:25:28 2006
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Sat, 03 Jun 2006 13:25:28 +1200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <20060603002633.GA569@Andrew-iBook2.local>
References: <447DE055.4040105@egenix.com> <e5oa7b$kvl$1@sea.gmane.org>
	<447FF966.5050807@egenix.com> <e5p06h$tp1$1@sea.gmane.org>
	<4480123A.1090109@egenix.com>
	<d78db4cd0606020506n608a880cy8c2017b1e8beca79@mail.gmail.com>
	<44803D2A.5010108@egenix.com> <e5pvgm$mqv$1@sea.gmane.org>
	<d78db4cd0606021619q3c75495taa06f276d01153fb@mail.gmail.com>
	<1f7befae0606021644m3f9070d5q9048c67f777b8828@mail.gmail.com>
	<20060603002633.GA569@Andrew-iBook2.local>
Message-ID: <4480E508.3060508@canterbury.ac.nz>

A.M. Kuchling wrote:

> (At work we're trying to move toward this approach for doing realtime
> audio: devote one CPU to the audio computation and use other CPUs for
> I/O, web servers, and whatnot.)

Speaking of creative uses for multiple CPUs, I was thinking
about dual-core Intel Macs the other day, and I wondered
whether it would be possible to configure it so that one
core was running MacOSX and the other was running Windows
at the same time.

It would give the term "dual booting" a whole new
meaning...

--
Greg

From jcarlson at uci.edu  Sat Jun  3 05:52:10 2006
From: jcarlson at uci.edu (Josiah Carlson)
Date: Fri, 02 Jun 2006 20:52:10 -0700
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <4480E3F0.8050801@canterbury.ac.nz>
References: <1f7befae0606021644m3f9070d5q9048c67f777b8828@mail.gmail.com>
	<4480E3F0.8050801@canterbury.ac.nz>
Message-ID: <20060602204633.69C0.JCARLSON@uci.edu>


Greg Ewing <greg.ewing at canterbury.ac.nz> wrote:
> 
> Tim Peters wrote:
> 
> > I liked benchmarking on Crays in the good old days.  ...  
>  > Test times were reproducible to the
> > nanosecond with no effort.  Running on a modern box for a few
> > microseconds at a time is a way to approximate that, provided you
> > measure the minimum time with a high-resolution timer :-)
> 
> Obviously what we need here is a stand-alone Python interpreter
> that runs on the bare machine, so there's no pesky operating
> system around to mess up our times.

An early version of unununium would do that (I don't know if much
progress has been made since I last checked their site).

 - Josiah


From martin at v.loewis.de  Sat Jun  3 10:01:58 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Sat, 03 Jun 2006 10:01:58 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <e5p06h$tp1$1@sea.gmane.org>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>	<17533.48558.364179.717911@montanaro.dyndns.org>	<447DC377.4000504@egenix.com>	<17533.51096.462552.451772@montanaro.dyndns.org>	<447DD153.8080202@egenix.com>	<e5kkr6$14a$1@sea.gmane.org>	<447DE055.4040105@egenix.com><e5oa7b$kvl$1@sea.gmane.org>	<447FF966.5050807@egenix.com>
	<e5p06h$tp1$1@sea.gmane.org>
Message-ID: <448141F6.7030806@v.loewis.de>

Fredrik Lundh wrote:
> since process time is *sampled*, not measured, process time isn't exactly in-
> vulnerable either.

I can't share that view. The scheduler knows *exactly* what thread is
running on the processor at any time, and that thread won't change
until the scheduler makes it change. So if you discount time spent
in interrupt handlers (which might be falsely accounted to the
thread that happens to run at the point of the interrupt), then
process time *is* measured, not sampled, on any modern operating system:
it is updated whenever the scheduler schedules a different thread.

Of course, the question still is what the resolution of the clock is
that makes these measurements. For Windows NT+, I would expect it to
be "quantum units", but I'm uncertain whether it could measure also
fractions of a quantum unit if the process does a blocking call.

> I don't think that sampling errors can explain all the anomalies we've been seeing,
> but I wouldn't be surprised if a high-resolution wall time clock on a lightly loaded
> multiprocess system was, in practice, *more* reliable than sampled process time
> on an equally loaded system.

On Linux, process time is accounted in jiffies. Unfortunately, for
compatibility, times(2) converts that to clock_t, losing precision.
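
For what it's worth, the granularity is easy to see from Python on Unix
(a rough sketch only; the step size you observe depends on HZ and on the
clock_t conversion):

    import os

    seen = set()
    while len(seen) < 5:
        seen.add(os.times()[0])      # user CPU time of this process
    steps = sorted(seen)
    # successive distinct values differ by one accounting tick
    print [round(b - a, 4) for a, b in zip(steps, steps[1:])]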

Regards,
Martin

From martin at v.loewis.de  Sat Jun  3 10:08:28 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Sat, 03 Jun 2006 10:08:28 +0200
Subject: [Python-Dev] Removing Mac OS 9 cruft
In-Reply-To: <ca471dc20606020626t7c9ce8afmf3be6be942bd2ef@mail.gmail.com>
References: <ee2a432c0606012031u7c35c50dxf8623ce4633510bf@mail.gmail.com>
	<ca471dc20606020626t7c9ce8afmf3be6be942bd2ef@mail.gmail.com>
Message-ID: <4481437C.3030304@v.loewis.de>

Guido van Rossum wrote:
> Just and Jack have confirmed that you can throw away everything except
> possibly Demo/*. (Just even speculated that some cruft may have been
> accidentally revived by the cvs -> svn transition?)

No, they had been present when cvs was converted:

http://python.cvs.sourceforge.net/python/python/dist/src/Mac/IDE%20scripts/

These had caused ongoing problems for Windows, which could not stand
files with trailing dots.

Regards,
Martin

From martin at v.loewis.de  Sat Jun  3 10:13:47 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Sat, 03 Jun 2006 10:13:47 +0200
Subject: [Python-Dev] test_ctypes failures on ppc64 debian
In-Reply-To: <44806A83.2020203@python.net>
References: <e5pou4$tis$1@sea.gmane.org>	<1f7befae0606020937g9316c74t49b59758a2d0f4b1@mail.gmail.com>
	<44806A83.2020203@python.net>
Message-ID: <448144BB.7010707@v.loewis.de>

Thomas Heller wrote:
> I have already mailed him asking if he can give me interactive access
> to this machine ;-).  He has not yet replied - I'm not sure if this is because
> he's been shocked to see such a request, or if he already is in holidays.

I believe it's a machine donated to Debian. They are quite hesitant to
hand out shell accounts to people who aren't Debian Developers.

OTOH, running a build through buildbot should be fine if you have some
"legitimate" use. It would be bad if the build were triggered by people
who are not contributing to Python (this hasn't happened so far).

Regards,
Martin

From mwh at python.net  Sat Jun  3 11:00:14 2006
From: mwh at python.net (Michael Hudson)
Date: Sat, 03 Jun 2006 10:00:14 +0100
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <4480E3F0.8050801@canterbury.ac.nz> (Greg Ewing's message of
	"Sat, 03 Jun 2006 13:20:48 +1200")
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>
	<447DE055.4040105@egenix.com> <e5oa7b$kvl$1@sea.gmane.org>
	<447FF966.5050807@egenix.com> <e5p06h$tp1$1@sea.gmane.org>
	<4480123A.1090109@egenix.com>
	<d78db4cd0606020506n608a880cy8c2017b1e8beca79@mail.gmail.com>
	<44803D2A.5010108@egenix.com> <e5pvgm$mqv$1@sea.gmane.org>
	<d78db4cd0606021619q3c75495taa06f276d01153fb@mail.gmail.com>
	<1f7befae0606021644m3f9070d5q9048c67f777b8828@mail.gmail.com>
	<4480E3F0.8050801@canterbury.ac.nz>
Message-ID: <2md5dqvckh.fsf@starship.python.net>

Greg Ewing <greg.ewing at canterbury.ac.nz> writes:

> Tim Peters wrote:
>
>> I liked benchmarking on Crays in the good old days.  ...  
>  > Test times were reproducible to the
>> nanosecond with no effort.  Running on a modern box for a few
>> microseconds at a time is a way to approximate that, provided you
>> measure the minimum time with a high-resolution timer :-)
>
> Obviously what we need here is a stand-alone Python interpreter
> that runs on the bare machine, so there's no pesky operating
> system around to mess up our times.

I'm sure we can write a PyPy backend that targets Open Firmware :)

Cheers,
mwh

-- 
  <exarkun> speak of the devil
  <moshez> exarkun: froor
  <exarkun> not you                             -- from Twisted.Quotes

From fredrik at pythonware.com  Sat Jun  3 11:09:42 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Sat, 03 Jun 2006 11:09:42 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <448141F6.7030806@v.loewis.de>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>	<17533.48558.364179.717911@montanaro.dyndns.org>	<447DC377.4000504@egenix.com>	<17533.51096.462552.451772@montanaro.dyndns.org>	<447DD153.8080202@egenix.com>	<e5kkr6$14a$1@sea.gmane.org>	<447DE055.4040105@egenix.com><e5oa7b$kvl$1@sea.gmane.org>	<447FF966.5050807@egenix.com>	<e5p06h$tp1$1@sea.gmane.org>
	<448141F6.7030806@v.loewis.de>
Message-ID: <e5rjkj$rph$1@sea.gmane.org>

Martin v. Löwis wrote:

>> since process time is *sampled*, not measured, process time isn't exactly in-
>> vulnerable either.
> 
> I can't share that view. The scheduler knows *exactly* what thread is
> running on the processor at any time, and that thread won't change
> until the scheduler makes it change. So if you discount time spent
> in interrupt handlers (which might be falsely accounted for the
> thread that happens to run at the point of the interrupt), then
> process time *is* measured, not sampled, on any modern operating system:
> it is updated whenever the scheduler schedules a different thread.

updated with what?  afaik, the scheduler doesn't have to wait for a 
timer interrupt to reschedule things (think blocking, or interrupts that 
request rescheduling, or new processes, or...) -- but it's always the 
thread that runs when the timer interrupt arrives that gets the entire 
jiffy time.  for example, this script runs for ten seconds, usually 
without using any process time at all:

     import time
     for i in range(1000):
         for i in range(1000):
             i+i+i+i
         time.sleep(0.005)

while the same program, without the sleep, will run for a second or two, 
most of which is assigned to the process.

if the scheduler used the TSC to keep track of times, it would be 
*measuring* process time.  but unless something changed very recently, 
it doesn't.  it's all done by sampling, typically 100 or 1000 times per 
second.

> On Linux, process time is accounted in jiffies. Unfortunately, for
> compatibility, times(2) converts that to clock_t, losing precision.

times(2) reports time in 1/CLOCKS_PER_SEC second units, while jiffies 
are counted in 1/HZ second units.  on my machine, CLOCKS_PER_SEC is a 
thousand times larger than HZ.  what does this code print on your machine?

#include <stdio.h>
#include <time.h>
#include <sys/param.h>

int main(void)
{
     printf("CLOCKS_PER_SEC=%ld, HZ=%d\n", (long) CLOCKS_PER_SEC, HZ);
     return 0;
}

?

</F>


From martin at v.loewis.de  Sat Jun  3 12:33:16 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Sat, 03 Jun 2006 12:33:16 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <e5rjkj$rph$1@sea.gmane.org>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>	<17533.48558.364179.717911@montanaro.dyndns.org>	<447DC377.4000504@egenix.com>	<17533.51096.462552.451772@montanaro.dyndns.org>	<447DD153.8080202@egenix.com>	<e5kkr6$14a$1@sea.gmane.org>	<447DE055.4040105@egenix.com><e5oa7b$kvl$1@sea.gmane.org>	<447FF966.5050807@egenix.com>	<e5p06h$tp1$1@sea.gmane.org>	<448141F6.7030806@v.loewis.de>
	<e5rjkj$rph$1@sea.gmane.org>
Message-ID: <4481656C.5040803@v.loewis.de>

Fredrik Lundh wrote:
>> it is updated whenever the scheduler schedules a different thread.
> 
> updated with what?  afaik, the scheduler doesn't have to wait for a 
> timer interrupt to reschedule things (think blocking, or interrupts that 
> request rescheduling, or new processes, or...) -- but it's always the 
> thread that runs when the timer interrupt arrives that gets the entire 
> jiffy time.

Sure: when a thread doesn't consume its entire quantum, accounting
becomes difficult. Still, if the scheduler reads the current time
when scheduling, it measures the time consumed.

> if the scheduler used the TSC to keep track of times, it would be 
> *measuring* process time.  but unless something changed very recently, 
> it doesn't.

You mean, "unless something changed very recently" *on Linux*, right?
Or when did you last read the sources of Windows XP?

It would still be measuring if the scheduler reads the latest value
of some system clock, although that would be much less accurate than
reading the TSC.

> times(2) reports time in 1/CLOCKS_PER_SEC second units, while jiffies 
> are counted in 1/HZ second units.  on my machine, CLOCKS_PER_SEC is a 
> thousand times larger than HZ.  what does this code print on your machine?

You are right; clock_t allows for higher precision than jiffies.

Regards,
Martin

From andrewdalke at gmail.com  Sat Jun  3 14:40:28 2006
From: andrewdalke at gmail.com (Andrew Dalke)
Date: Sat, 3 Jun 2006 14:40:28 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <1f7befae0606021644m3f9070d5q9048c67f777b8828@mail.gmail.com>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>
	<e5oa7b$kvl$1@sea.gmane.org> <447FF966.5050807@egenix.com>
	<e5p06h$tp1$1@sea.gmane.org> <4480123A.1090109@egenix.com>
	<d78db4cd0606020506n608a880cy8c2017b1e8beca79@mail.gmail.com>
	<44803D2A.5010108@egenix.com> <e5pvgm$mqv$1@sea.gmane.org>
	<d78db4cd0606021619q3c75495taa06f276d01153fb@mail.gmail.com>
	<1f7befae0606021644m3f9070d5q9048c67f777b8828@mail.gmail.com>
Message-ID: <d78db4cd0606030540w5cd5b34p30e93e4004c2230b@mail.gmail.com>

Tim:
> A lot of things get mixed up here ;-)  The _mean_ is actually useful
> if you're using a poor-resolution timer with a fast test.

In which case discrete probability distributions are better than my assumption
of a continuous distribution.

I looked at the distribution of times for 1,000 repeats of the basic
timer-overhead measurement:

   import time
   times = []
   for i in range(1000):
       t1 = time.time()
       t2 = time.time()
       times.append(t2-t1)

The times and counts I found were

9.53674316406e-07 388
1.19209289551e-06 95
1.90734863281e-06 312
2.14576721191e-06 201
2.86102294922e-06 2
1.90734863281e-05 1
3.00407409668e-05 1

This implies my Mac's time.time() has a resolution of 2.3841857910000015e-07 s
(0.2µs or about 4.2MHz.)  Or possibly a small integer fraction thereof.  The
timer overhead takes between 4 and 9 ticks.  Ignoring the outliers, assuming I
have the CPU all to my benchmark for the timeslice then I expect about +/- 3
ticks of noise per test.
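
A tally like the one above can be produced from the collected deltas
with something along these lines (illustrative code only):

   counts = {}
   for dt in times:
       counts[dt] = counts.get(dt, 0) + 1
   for dt in sorted(counts):
       print dt, counts[dt]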

To measure 1% speedup reliably I'll need to run, what, 300-600 ticks?  That's
a millisecond, and with a time quantum of 10 ms there's a 1 in 10 chance that
I'll incur that overhead.

In other words, I don't think my high-resolution timer is high enough.  Got
a spare Cray I can use, and will you pay for the power bill?

                                Andrew
                                dalke at dalkescientific.com

From fredrik at pythonware.com  Sat Jun  3 15:02:27 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Sat, 03 Jun 2006 15:02:27 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <4481656C.5040803@v.loewis.de>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>	<17533.48558.364179.717911@montanaro.dyndns.org>	<447DC377.4000504@egenix.com>	<17533.51096.462552.451772@montanaro.dyndns.org>	<447DD153.8080202@egenix.com>	<e5kkr6$14a$1@sea.gmane.org>	<447DE055.4040105@egenix.com><e5oa7b$kvl$1@sea.gmane.org>	<447FF966.5050807@egenix.com>	<e5p06h$tp1$1@sea.gmane.org>	<448141F6.7030806@v.loewis.de>	<e5rjkj$rph$1@sea.gmane.org>
	<4481656C.5040803@v.loewis.de>
Message-ID: <e5s193$2dn$1@sea.gmane.org>

Martin v. Löwis wrote:

> Sure: when a thread doesn't consume its entire quantum, accounting
> becomes difficult. Still, if the scheduler reads the current time
> when scheduling, it measures the time consumed.

yeah, but the point is that it *doesn't* read the current time: all the 
system does is note that "alright, we've reached the end of another 
jiffy, and this thread was running at that point.  now, was it running 
in user space or in kernel space when we interrupted it?".  here's the 
relevant code, from kernel/timer.c and kernel/sched.c:

     #define jiffies_to_cputime(__hz) (__hz)

     void update_process_times(int user_tick)
     {
         struct task_struct *p = current;
         int cpu = smp_processor_id();

         if (user_tick)
             account_user_time(p, jiffies_to_cputime(1));
         else
             account_system_time(p, HARDIRQ_OFFSET,
		jiffies_to_cputime(1));
         run_local_timers();
         if (rcu_pending(cpu))
             rcu_check_callbacks(cpu, user_tick);
         scheduler_tick();
         run_posix_cpu_timers(p);
     }

     void account_user_time(struct task_struct *p, cputime_t cputime)
     {
         struct cpu_usage_stat *cpustat = &kstat_this_cpu.cpustat;
         cputime64_t tmp;

         p->utime = cputime_add(p->utime, cputime);

         tmp = cputime_to_cputime64(cputime);
         if (TASK_NICE(p) > 0)
             cpustat->nice = cputime64_add(cpustat->nice, tmp);
         else
             cpustat->user = cputime64_add(cpustat->user, tmp);
     }

(update_process_times is called by the hardware timer interrupt handler, 
once per jiffy, HZ times per second.  task_struct contains information 
about a single thread, cpu_usage_stat is global stats for a CPU)

for the benchmarks, the problem is of course not that the benchmarking 
thread gives up too early; it's when other processes give up early, and 
the benchmark process is next in line.  in that case, the benchmark 
won't use a whole jiffy, but it's still charged for a full jiffy 
interval by the interrupt handler (in my sleep test, *other processes* 
got charged for the time the program spent running that inner loop).

a modern computer can do *lots of stuff* in a single jiffy interval 
(whether it's 15 ms, 10 ms, 4 ms, or 1 ms), and even more in a single 
scheduler quantum (=a number of jiffies).

> You mean, "unless something changed very recently" *on Linux*, right?

on any system involved in this discussion.  they all worked the same 
way, last time I checked ;-)

> Or when did you last read the sources of Windows XP?

afaik, all Windows versions based on the current NT kernel (up to and 
including XP) use tick-based sampling.  I don't know about Vista; given 
the platform requirements for Vista, it's perfectly possible that 
they've switched to TSC-based accounting.

> It would still be measuring if the scheduler reads the latest value
> of some system clock, although that would be much less accurate than
> reading the TSC.

hopefully, this is the last time I will have to repeat this, but on both 
Windows and Linux, the "system clock" used for process timing is a jiffy 
counter.

</F>


From tim.peters at gmail.com  Sat Jun  3 15:16:04 2006
From: tim.peters at gmail.com (Tim Peters)
Date: Sat, 3 Jun 2006 09:16:04 -0400
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <448141F6.7030806@v.loewis.de>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>
	<447DC377.4000504@egenix.com>
	<17533.51096.462552.451772@montanaro.dyndns.org>
	<447DD153.8080202@egenix.com> <e5kkr6$14a$1@sea.gmane.org>
	<447DE055.4040105@egenix.com> <e5oa7b$kvl$1@sea.gmane.org>
	<447FF966.5050807@egenix.com> <e5p06h$tp1$1@sea.gmane.org>
	<448141F6.7030806@v.loewis.de>
Message-ID: <1f7befae0606030616m73b6755dp241fda729e442d9a@mail.gmail.com>

[Fredrik Lundh]
>> ...
>> since process time is *sampled*, not measured, process time isn't exactly in-
>> vulnerable either.

[Martin v. Löwis]
> I can't share that view. The scheduler knows *exactly* what thread is
> running on the processor at any time, and that thread won't change
> until the scheduler makes it change. So if you discount time spent
> in interrupt handlers (which might be falsely accounted for the
> thread that happens to run at the point of the interrupt), then
> process time *is* measured, not sampled, on any modern operating
> system:  it is updated whenever the scheduler schedules a different
> thread.

That doesn't seem to agree with, e.g.,

    http://lwn.net/2001/0412/kernel.php3

under "No more jiffies?":

    ...
    Among other things, it imposes a 10ms resolution on most timing-
    related activities, which can make it hard for user-space programs
    that need a tighter control over time. It also guarantees that
    process accounting will be inaccurate. Over the course of one
    10ms jiffy, several processes might have run, but the one actually
    on the CPU when the timer interrupt happens gets charged for the
    entire interval.

Maybe this varies by Linux flavor or version?  While the article above
was published in 2001, Googling didn't turn up any hint that Linux
jiffies have actually gone away, or become better loved, since then.

From fredrik at pythonware.com  Sat Jun  3 15:29:26 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Sat, 03 Jun 2006 15:29:26 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <1f7befae0606030616m73b6755dp241fda729e442d9a@mail.gmail.com>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>	<447DC377.4000504@egenix.com>	<17533.51096.462552.451772@montanaro.dyndns.org>	<447DD153.8080202@egenix.com>
	<e5kkr6$14a$1@sea.gmane.org>	<447DE055.4040105@egenix.com>
	<e5oa7b$kvl$1@sea.gmane.org>	<447FF966.5050807@egenix.com>
	<e5p06h$tp1$1@sea.gmane.org>	<448141F6.7030806@v.loewis.de>
	<1f7befae0606030616m73b6755dp241fda729e442d9a@mail.gmail.com>
Message-ID: <e5s2rn$6dp$1@sea.gmane.org>

Tim Peters wrote:

> Maybe this varies by Linux flavor or version?  While the article above
> was published in 2001, Googling didn't turn up any hint that Linux
> jiffies have actually gone away, or become better loved, since then.

well, on x86, they have changed from 10 ms in 2.4 to 1 ms in early 2.6 
releases and 4 ms in later 2.6 releases, but that's about it.

(the code in my previous post was from a 2.6.17 development version, 
which, afaict, is about as up to date as you can be).

note that the jiffy interrupt handler does use the TSC (or similar 
mechanism) to update the wall clock time, so it wouldn't be that hard to 
refactor the code to use it also for process accounting.  but I suppose 
the devil is in the backwards-compatibility details.  just setting the 
HZ value to something very large will probably not work very well...

</F>


From tim.peters at gmail.com  Sat Jun  3 15:43:38 2006
From: tim.peters at gmail.com (Tim Peters)
Date: Sat, 3 Jun 2006 09:43:38 -0400
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <e5rjkj$rph$1@sea.gmane.org>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>
	<17533.51096.462552.451772@montanaro.dyndns.org>
	<447DD153.8080202@egenix.com> <e5kkr6$14a$1@sea.gmane.org>
	<447DE055.4040105@egenix.com> <e5oa7b$kvl$1@sea.gmane.org>
	<447FF966.5050807@egenix.com> <e5p06h$tp1$1@sea.gmane.org>
	<448141F6.7030806@v.loewis.de> <e5rjkj$rph$1@sea.gmane.org>
Message-ID: <1f7befae0606030643yea5fe90p7da6bb60e9d4b94f@mail.gmail.com>

[Fredrik Lundh]
> .... but it's always the thread that runs when the timer interrupt
> arrives that gets the entire jiffy time.  for example, this script runs
> for ten seconds, usually without using any process time at all:
>
>      import time
>      for i in range(1000):
>          for i in range(1000):
>              i+i+i+i
>          time.sleep(0.005)
>
> while the same program, without the sleep, will run for a second or two,
> most of which is assigned to the process.

Nice example!  On my desktop box (WinXP, 3.4GHz), I had to make it
nastier to see it consume any "time" without the sleep:

import time
for i in range(1000):
    for i in range(10000): # 10x bigger
        i+i+i+i*(i+i+i+i) # more work
    time.sleep(0.005)
raw_input("done")

The raw_input is there so I can see Task Manager's idea of elapsed
"CPU Time" (sum of process "user time" and "kernel time") when it's
done.

Without the sleep, it gets charged 6 CPU seconds.  With the sleep, 0
CPU seconds.

But life would be more boring if people believed you the first time ;-)

From martin at v.loewis.de  Sat Jun  3 16:37:09 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Sat, 03 Jun 2006 16:37:09 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <1f7befae0606030616m73b6755dp241fda729e442d9a@mail.gmail.com>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>	
	<447DC377.4000504@egenix.com>	
	<17533.51096.462552.451772@montanaro.dyndns.org>	
	<447DD153.8080202@egenix.com> <e5kkr6$14a$1@sea.gmane.org>	
	<447DE055.4040105@egenix.com> <e5oa7b$kvl$1@sea.gmane.org>	
	<447FF966.5050807@egenix.com> <e5p06h$tp1$1@sea.gmane.org>	
	<448141F6.7030806@v.loewis.de>
	<1f7befae0606030616m73b6755dp241fda729e442d9a@mail.gmail.com>
Message-ID: <44819E95.3040804@v.loewis.de>

Tim Peters wrote:
>> then
>> process time *is* measured, not sampled, on any modern operating
>> system:  it is updated whenever the scheduler schedules a different
>> thread.
> 
> That doesn't seem to agree with, e.g.,
> 
>    http://lwn.net/2001/0412/kernel.php3
> 
> under "No more jiffies?":
[...]
> 
> Maybe this varies by Linux flavor or version?

No, Fredrik is right: Linux samples process time, instead of measuring
it. That only proves it is not a modern operating system :-)

I would still hope that Windows measures instead of sampling.

Regards,
Martin

From martin at v.loewis.de  Sat Jun  3 16:44:46 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Sat, 03 Jun 2006 16:44:46 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <1f7befae0606030643yea5fe90p7da6bb60e9d4b94f@mail.gmail.com>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>	<17533.51096.462552.451772@montanaro.dyndns.org>	<447DD153.8080202@egenix.com>
	<e5kkr6$14a$1@sea.gmane.org>	<447DE055.4040105@egenix.com>
	<e5oa7b$kvl$1@sea.gmane.org>	<447FF966.5050807@egenix.com>
	<e5p06h$tp1$1@sea.gmane.org>	<448141F6.7030806@v.loewis.de>
	<e5rjkj$rph$1@sea.gmane.org>
	<1f7befae0606030643yea5fe90p7da6bb60e9d4b94f@mail.gmail.com>
Message-ID: <4481A05E.50003@v.loewis.de>

Tim Peters wrote:
> Without the sleep, it gets charged 6 CPU seconds.  With the sleep, 0
> CPU seconds.
> 
> But life would be more boring if people believed you the first time ;-)

This only proves that it uses clock ticks for the accounting, and not
something with higher resolution. To find out whether it samples or
measures CPU usage, you really have to read the source code of the
operating system (or find some documentation of somebody who has seen
the source code).

Regards,
Martin

From john.m.camara at comcast.net  Sat Jun  3 17:25:12 2006
From: john.m.camara at comcast.net (john.m.camara at comcast.net)
Date: Sat, 03 Jun 2006 15:25:12 +0000
Subject: [Python-Dev] Python Benchmarks
Message-ID: <060320061525.18733.4481A9D8000534760000492D22007503300E9D0E030E0CD203D202080106@comcast.net>

Here are my suggestions:

- While running benchmarks don't listen to music, watch videos, use the keyboard/mouse, or run anything other than the benchmark code.  Seems like common sense to me.

- I would average the timings of runs instead of taking the minimum value as sometimes benchmarks could be running code that is not deterministic in its calculations (could be using random numbers that affect convergence).

- Before calculating the average number I would throw out samples outside 3 sigmas (the outliers).  This would eliminate the samples that are out of whack due to events that are out of our control.  To use this approach it would be necessary to run some minimum number of times.  I believe 30-40 samples would be necessary but I'm no expert in statistics.  I base this on my recollection of a study on this I did some time in the late 90s.  I used to have a better feel for the number of samples that is required based on the number of sigmas that is used to determine the outliers but I have to confess that I just normally use a minimum of 100 samples to play it safe.  I'm sure with a little experimentation with benchmarks the proper number of samples could be determined.
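
A minimal sketch of the kind of trimming I have in mind (illustrative code only):

def trimmed_mean(samples, k=3.0):
    # Drop samples more than k standard deviations from the mean,
    # then average whatever is left.
    n = len(samples)
    mean = sum(samples) / float(n)
    sigma = (sum((s - mean) ** 2 for s in samples) / float(n)) ** 0.5
    kept = [s for s in samples if abs(s - mean) <= k * sigma]
    if not kept:
        return mean
    return sum(kept) / float(len(kept))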

Here is a passage I found at http://www.statsoft.com/textbook/stbasic.html#Correlationsf that is related.

'''Quantitative Approach to Outliers. Some researchers use quantitative methods to exclude outliers. For example, they exclude observations that are outside the range of ±2 standard deviations (or even ±1.5 sd's) around the group or design cell mean. In some areas of research, such "cleaning" of the data is absolutely necessary. For example, in cognitive psychology research on reaction times, even if almost all scores in an experiment are in the range of 300-700 milliseconds, just a few "distracted reactions" of 10-15 seconds will completely change the overall picture. Unfortunately, defining an outlier is subjective (as it should be), and the decisions concerning how to identify them must be made on an individual basis (taking into account specific experimental paradigms and/or "accepted practice" and general research experience in the respective area). It should also be noted that in some rare cases, the relative frequency of outliers across a number of groups or cells of a
design can be subjected to analysis and provide interpretable results. For example, outliers could be indicative of the occurrence of a phenomenon that is qualitatively different than the typical pattern observed or expected in the sample, thus the relative frequency of outliers could provide evidence of a relative frequency of departure from the process or phenomenon that is typical for the majority of cases in a group.'''

Now I personally feel that using a 1.5 or 2 sigma approach is rather loose for the case of benchmarks and the suggestion I gave of 3 might be too tight.  From experimentation we might find that 2.5 is more appropriate. I usually use this approach while reviewing data obtained by fairly accurate sensors so being conservative using 3 sigmas works well for these cases.

The last statement in the passage is worth noting, as a high ratio of outliers could be used as an indication that the benchmark results for a particular run are invalid.  This could be used to throw out bad results due to someone starting to listen to music while the benchmarks are running, anti-virus software starting to run, etc.

- Another improvement to benchmarks can be obtained when both the old and new code are available to be benchmarked together.  By running the benchmarks of both versions together we could eliminate the effects of noise, if we assume noise at a given point in time would be applied to both sets of code.  Here is a modified version of the code that Andrew wrote previously to show this more clearly than my words can.

import time

def compute_old():
    x = 0
    for i in range(1000):
        for j in range(1000):
            x = x + 1

def compute_new():
    x = 0
    for i in range(1000):
        for j in range(1000):
            x += 1

def bench():
    t1 = time.clock()
    compute_old()
    t2 = time.clock()
    compute_new()
    t3 = time.clock()
    return t2-t1, t3-t2

times_old = []
times_new = []
for i in range(1000):
    time_old, time_new = bench()
    times_old.append(time_old)
    times_new.append(time_new)
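
One way to use the paired timings, sketched here just for illustration, is to look at per-run ratios so that noise common to both runs cancels out, e.g. their median:

ratios = [new / old for old, new in zip(times_old, times_new) if old > 0]
ratios.sort()
print "median new/old ratio:", ratios[len(ratios) // 2]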

John

From nnorwitz at gmail.com  Sat Jun  3 18:48:45 2006
From: nnorwitz at gmail.com (Neal Norwitz)
Date: Sat, 3 Jun 2006 09:48:45 -0700
Subject: [Python-Dev] ssize_t question: longs in header files
In-Reply-To: <1f7befae0605291046v65a11862w70c000f5e9cceabf@mail.gmail.com>
References: <ee2a432c0605282315s6cb5743bj4a37009a03ebbca2@mail.gmail.com>
	<1f7befae0605291046v65a11862w70c000f5e9cceabf@mail.gmail.com>
Message-ID: <ee2a432c0606030948w70ebdc51h391b9b92129b8cff@mail.gmail.com>

On 5/29/06, Tim Peters <tim.peters at gmail.com> wrote:
> [Neal Norwitz]
> >  * hash values
> > Include/abstract.h:     long PyObject_Hash(PyObject *o);  // also in object.h
> > Include/object.h:typedef long (*hashfunc)(PyObject *);
>
> We should leave these alone for now.  There's no real connection
> between the width of a hash value and the number of elements in a
> container, and Py_ssize_t is conceptually only related to the latter.

True.  Though it might be easier to have one big type changing than
two.  If this is likely to change in the future (and I think it should
to avoid hash collisions and provide better consistency on 64-bit
archs), would it be good to add:

   typedef long Py_hash_t;

This will not change the type, but will make it easy to change in the
future.  I'm uncertain about doing this in 2.5.  I think it would help
me port code, but I'm only familiar with the Python base, not wild and
crazy third party C extensions.

The reason why it's easier for me is that grep can help me find and
fix just about everything.  There are fewer exceptions (longs left).
It would also help mostly from a doc standpoint to have typedefs for
Py_visit_t and other ints as well.  But this also seems like
diminishing returns.

n

From andrewdalke at gmail.com  Sat Jun  3 19:10:52 2006
From: andrewdalke at gmail.com (Andrew Dalke)
Date: Sat, 3 Jun 2006 19:10:52 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <060320061525.18733.4481A9D8000534760000492D22007503300E9D0E030E0CD203D202080106@comcast.net>
References: <060320061525.18733.4481A9D8000534760000492D22007503300E9D0E030E0CD203D202080106@comcast.net>
Message-ID: <d78db4cd0606031010m791a33a9jfb4bd746ed7027a@mail.gmail.com>

On 6/3/06, john.m.camara at comcast.net <john.m.camara at comcast.net> wrote:
> - I would average the timings of runs instead of taking the minimum value as
> sometimes bench marks could be running code that is not deterministic in its
> calculations (could be using random numbers that effect convergence).

I would rewrite those to be deterministic.  Any benchmarks of mine which
use random numbers initialize the generator with a fixed seed and do
it in such a way that the order or choice of subbenchmarks does not affect
the individual results.  Any other way is madness.
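
For example (illustrative code, arbitrary seed value):

    import random

    rng = random.Random(12345)    # fixed seed: identical sequence every run
    data = [rng.random() for i in range(1000)]
    # give each subbenchmark its own seeded Random instance, so adding or
    # reordering benchmarks can't change the numbers any one of them sees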

> - Before calculating the average number I would throw out samples
> outside 3 sigmas (the outliers).

As I've insisted, talking about sigmas assumes a Gaussian distribution.
It's more likely that the timing variations (at least in stringbench) are
closer to a gamma distribution.

> Here is a passage I found ...
 ...
>   Unfortunately, defining an outlier is subjective (as it should be), and the
>   decisions concerning how to identify them must be made on an individual
>   basis (taking into account specific experimental paradigms

The experimental paradigm I've been using is:
  - precise and accurate clock on timescales much smaller than the
       benchmark (hence continuous distributions)
  - rare, random, short and uncorrelated interruptions

This leads to a gamma distribution (plus constant offset for minimum
compute time)

Gamma distributions have longer tails than Gaussians and hence
more "outliers".  If you think that averaging is useful then throwing
those outliers away will artificially lower the average value.

To me, using the minimum time, given the paradigm, makes sense.
How fast is the fastest runner in the world?  Do you have him run
a dozen times and get the average, or use the shortest time?

> I usually use this approach while reviewing data obtained by fairly
> accurate sensors so being being conservative using 3 sigmas works
> well for these cases.

That uses a different underlying physical process which is better
modeled by Gaussians.

Consider this.  For a given benchmark there is an absolute minimum time
for it to run on a given machine.  Suppose this is 10 seconds and the
benchmark timing comes out 10.2 seconds.  The +0.2 comes from
background overhead, though you don't know exactly what's due to overhead
and what's real.

If the times were Gaussian then there's as much chance of getting
benchmark times of 10.5 seconds as of 9.9 seconds.  But repeat the
benchmark as many times as you want and you'll never see 9.9 seconds,
though you will see 10.5.
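
A toy simulation of that model (parameters picked out of thin air) makes
the point: with a fixed true cost plus non-negative noise, the minimum
converges on the true cost while the mean is biased upward.

    import random

    true_cost = 10.0
    samples = [true_cost + random.gammavariate(2.0, 0.1) for i in range(100)]
    print "min  =", min(samples)                   # approaches 10.0
    print "mean =", sum(samples) / len(samples)    # 10.0 plus the mean noise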

> - Another improvement to bench marks can be obtained when both
> the old and new code is available to be benched mark together.

That's what stringbench does, comparing unicode and 8-bit strings.
However, how do you benchmark changes which are more complicated
than that?  For example, benchmark changes to the exception
mechanism, or builds under gcc 3.x and 4.x.

                                Andrew
                                dalke at dalkescientific.com

From doug.fort at gmail.com  Fri Jun  2 21:33:41 2006
From: doug.fort at gmail.com (Doug Fort)
Date: Fri, 2 Jun 2006 15:33:41 -0400
Subject: [Python-Dev] wsgiref documentation
Message-ID: <5db086ad0606021233j4e8c2ffvc386919b8808db9b@mail.gmail.com>

Hi,

I'm going over the possible tasks for the Arlington Sprint.
Documentation for wsgiref looks like somethng I could handle. My
friend Joe Griffin and I did something similar for Tim Peters'
FixedPoint module.

Is anyone already working on this?
-- 
Doug Fort, Consulting Programmer
http://www.dougfort.com

From collinw at gmail.com  Sun Jun  4 00:25:07 2006
From: collinw at gmail.com (Collin Winter)
Date: Sun, 4 Jun 2006 00:25:07 +0200
Subject: [Python-Dev] Unhashable objects and __contains__()
Message-ID: <43aa6ff70606031525k4c70737ep6b7b89fdfd80bc7e@mail.gmail.com>

I recently submitted a patch that would optimise "in (5, 6, 7)" (ie,
"in" ops on constant tuples) to "in frozenset([5, 6, 7])". Raymond
Hettinger rejected (rightly) the patch since it's not semantically
consistent. Quoth:

>> Sorry, this enticing idea has already been explored and
>> rejected.  This is issue is that the transformation is not
>> semanatically neutral.  Currently, writing "{} in (1,2,3)"
>> returns False, but after the transformation would raise an
>> exception, "TypeError: dict objects are unhashable".

My question is this: maybe set/frozenset.__contains__ (as well as
dict.__contains__, etc) should catch such TypeErrors and convert them
to a return value of False? It makes sense that "{} in frozenset([1,
2, 3])" should be False, since unhashable objects (like {}) clearly
can't be part of the set/dict/whatever.
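
For concreteness, the current behaviour (Python 2.x; the TypeError text
is the one Raymond quoted):

    >>> {} in (1, 2, 3)              # tuple membership: a linear == scan
    False
    >>> {} in frozenset([1, 2, 3])   # set membership hashes the left operand
    Traceback (most recent call last):
      ...
    TypeError: dict objects are unhashable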

I am, however, a bit unsure as to how __contains__() would be sure it
was only catching the "this object can't be hash()ed" TypeErrors, as
opposed to other TypeErrors that might legitimately arise from a call to
some __hash__() method.
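
In other words, a naive wrapper like this (sketch only, not what I'm
proposing verbatim) would swallow unrelated TypeErrors raised by a buggy
__hash__():

    def contains_or_false(container, item):
        try:
            return item in container
        except TypeError:   # can't tell "unhashable" from a broken __hash__
            return False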

Idea: what if Python's -O option caused PySequence_Contains() to
convert all errors into False return values?

Collin Winter

From g.brandl at gmx.net  Sun Jun  4 00:52:02 2006
From: g.brandl at gmx.net (Georg Brandl)
Date: Sun, 04 Jun 2006 00:52:02 +0200
Subject: [Python-Dev] Unhashable objects and __contains__()
In-Reply-To: <43aa6ff70606031525k4c70737ep6b7b89fdfd80bc7e@mail.gmail.com>
References: <43aa6ff70606031525k4c70737ep6b7b89fdfd80bc7e@mail.gmail.com>
Message-ID: <e5t3qi$82t$1@sea.gmane.org>

Collin Winter wrote:
> I recently submitted a patch that would optimise "in (5, 6, 7)" (ie,
> "in" ops on constant tuples) to "in frozenset([5, 6, 7])". Raymond
> Hettinger rejected (rightly) the patch since it's not semantically
> consistent. Quoth:
> 
>>> Sorry, this enticing idea has already been explored and
>>> rejected.  The issue is that the transformation is not
>>> semantically neutral.  Currently, writing "{} in (1,2,3)"
>>> returns False, but after the transformation would raise an
>>> exception, "TypeError: dict objects are unhashable".
> 
> My question is this: maybe set/frozenset.__contains__ (as well as
> dict.__contains__, etc) should catch such TypeErrors and convert them
> to a return value of False? It makes sense that "{} in frozenset([1,
> 2, 3])" should be False, since unhashable objects (like {}) clearly
> can't be part of the set/dict/whatever.
> 
> I am, however, a bit unsure as to how __contains__() would be sure it
> was only catching the "this object can't be hash()ed" TypeErrors, as
> opposed to other TypeErrors that might legitimately arise from a call to
> some __hash__() method.
> 
> Idea: what if Python's -O option caused PySequence_Contains() to
> convert all errors into False return values?

It would certainly give me an uneasy feeling if a command-line switch
caused such a change in semantics.

Georg


From guido at python.org  Sun Jun  4 00:57:18 2006
From: guido at python.org (Guido van Rossum)
Date: Sat, 3 Jun 2006 15:57:18 -0700
Subject: [Python-Dev] Unhashable objects and __contains__()
In-Reply-To: <43aa6ff70606031525k4c70737ep6b7b89fdfd80bc7e@mail.gmail.com>
References: <43aa6ff70606031525k4c70737ep6b7b89fdfd80bc7e@mail.gmail.com>
Message-ID: <ca471dc20606031557g1a918738o1a32f39711c7bb16@mail.gmail.com>

On 6/3/06, Collin Winter <collinw at gmail.com> wrote:
> My question is this: maybe set/frozenset.__contains__ (as well as
> dict.__contains__, etc) should catch such TypeErrors and convert them
> to a return value of False? It makes sense that "{} in frozenset([1,
> 2, 3])" should be False, since unhashable objects (like {}) clearly
> can't be part of the set/dict/whatever.

Sounds like a bad idea. You already pointed out that it's tricky to
catch exceptions and turn them into values without the risk of masking
bugs that would cause those same exceptions.

In addition, IMO it's a good idea to point out that "{} in {}" is a
type error by raising an exception. It's just like "1 in 'abc'" -- the
'in' operation has an implementation that doesn't support all types,
and if you try a type that's not supported, you expect a type error. I
expect that this is more likely to help catch bugs than it is an
obstacle. (I do understand your use case -- I just don't believe it's
as important as the bug-catching property you'd be throwing away by
supporting that use case.)

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Sun Jun  4 00:58:15 2006
From: guido at python.org (Guido van Rossum)
Date: Sat, 3 Jun 2006 15:58:15 -0700
Subject: [Python-Dev] Unhashable objects and __contains__()
In-Reply-To: <e5t3qi$82t$1@sea.gmane.org>
References: <43aa6ff70606031525k4c70737ep6b7b89fdfd80bc7e@mail.gmail.com>
	<e5t3qi$82t$1@sea.gmane.org>
Message-ID: <ca471dc20606031558m230cc8acp77223df8dd3ca22e@mail.gmail.com>

On 6/3/06, Georg Brandl <g.brandl at gmx.net> wrote:
> Collin Winter wrote:
> > Idea: what if Python's -O option caused PySequence_Contains() to
> > convert all errors into False return values?
>
> It would certainly give me an uneasy feeling if a command-line switch
> caused such a change in semantics.

I missed that. Collin must be suffering from a heat stroke. :-)

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From g.brandl at gmx.net  Sun Jun  4 01:04:49 2006
From: g.brandl at gmx.net (Georg Brandl)
Date: Sun, 04 Jun 2006 01:04:49 +0200
Subject: [Python-Dev] Request for patch review
In-Reply-To: <e56plp$jel$1@sea.gmane.org>
References: <e56plp$jel$1@sea.gmane.org>
Message-ID: <e5t4ih$9lv$1@sea.gmane.org>

Georg Brandl wrote:
> I've worked on two patches for NeedForSpeed, and would like someone
> familiar with the areas they touch to review them before I check them
> in, breaking all the buildbots which aren't broken yet ;)
> 
> They are:
> 
> http://python.org/sf/1346214
>     Better dead code elimination for the AST compiler

No one wants to look at this? It's not too complicated, I promise.

> http://python.org/sf/921466
>     Reduce number of open calls on startup GB

That's now committed.

Georg


From brett at python.org  Sun Jun  4 01:50:20 2006
From: brett at python.org (Brett Cannon)
Date: Sat, 3 Jun 2006 16:50:20 -0700
Subject: [Python-Dev] Request for patch review
In-Reply-To: <e5t4ih$9lv$1@sea.gmane.org>
References: <e56plp$jel$1@sea.gmane.org> <e5t4ih$9lv$1@sea.gmane.org>
Message-ID: <bbaeab100606031650n50c9770bn366ed7f4cc7c81da@mail.gmail.com>

On 6/3/06, Georg Brandl <g.brandl at gmx.net> wrote:
>
> Georg Brandl wrote:
> > I've worked on two patches for NeedForSpeed, and would like someone
> > familiar with the areas they touch to review them before I check them
> > in, breaking all the buildbots which aren't broken yet ;)
> >
> > They are:
> >
> > http://python.org/sf/1346214
> >     Better dead code elimination for the AST compiler
>
> No one wants to look at this? It's not too complicated, I promise.



Well, "wants' is a strong word.  =)

Code looks fine (didn't apply it, but looked at the patch file itself).  I
would break the detection for 'return' in generators into a separate patch
since it has nothing to do with detection of dead code.

-Brett

> http://python.org/sf/921466
> >     Reduce number of open calls on startup GB
>
> That's now committed.
>
> Georg
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.python.org/pipermail/python-dev/attachments/20060603/eb29f6b9/attachment.html 

From pj at place.org  Sun Jun  4 03:09:38 2006
From: pj at place.org (Paul Jimenez)
Date: Sat, 03 Jun 2006 20:09:38 -0500
Subject: [Python-Dev] Some more comments re new uriparse module,
	patch 1462525
In-Reply-To: <Pine.LNX.4.64.0606022059340.8454@localhost> 
References: <Pine.LNX.4.64.0606022059340.8454@localhost>
Message-ID: <20060604010938.DDA4E179C66@place.org>

On Friday, Jun 2, 2006, John J Lee writes:
>[Not sure whether this kind of thing is best posted as tracker comments 
>(but then the tracker gets terribly long and is mailed out every time a 
>change happens) or posted here.  Feel free to tell me I'm posting in the 
>wrong place...]

I think this is a fine place - more googleable, still archived, etc.

>Some comments on this patch (a new module, submitted by Paul Jimenez, 
>implementing the rules set out in RFC 3986 for URI parsing, joining URI 
>references with a base URI etc.)
>
>http://python.org/sf/1462525

Note that like many opensource authors, I wrote this to 'scratch an
itch' that I had... and am submitting it in hopes of saving someone else
somewhere some essentially identical work. I'm not married to it; I just
want something *like* it to end up in the stdlib so that I can use it.

>Sorry for the pause, Paul.  I finally read RFC 3986 -- which I must say is 
>probably the best-written RFC I've read (and there was much rejoicing).

No worries.  Yeah, the RFC is pretty clear (for once) :)

>I still haven't read 3987 and got to grips with the unicode issues 
>(whatever they are), but I have just implemented the same stuff you did, 
>so have some comments on non-unicode aspects of your implementation (the 
>version labelled "v23" on the tracker):
>
>
>Your urljoin implementation seems to pass the tests (the tests taken from 
>the RFC), but I have to admit I don't understand it :-)  It doesn't seem 
>to take account of the distinction between undefined and empty URI 
>components.  For example, the authority of the URI reference may be empty 
>but still defined.  Anyway, if you're taking advantage of some subtle 
>identity that implies that you can get away with truth-testing in place of 
>"is None" tests, please don't ;-) It's slower than "is [not] None" tests 
>both for the computer and (especially!) the reader.

First of all, I must say that urljoin is my least favorite part of this
module; I include it only so as not to break backward compatibility -
I don't have any personal use-cases for such. That said, some of the
'join' semantics are indeed a bit subtle; it took a bit of tinkering to
make all the tests work. I was indeed using 'if foo:' instead of 'if
foo is not None:', but that can be easily fixed; I didn't know there
was a performance issue there. Stylistically I find them about the same
clarity-wise.
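
For the record, the semantic difference John is talking about is easy to
show (illustrative snippet, not code from the patch):

    authority = ""              # present but empty, as in "file:///etc/passwd"
    if authority:               # truth test: treats "" the same as None
        print "authority given"
    if authority is not None:   # identity test: "" still counts as defined
        print "authority defined (possibly empty)"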

>I don't like the use of module posixpath to implement the algorithm 
>labelled "remove_dot_segments".  URIs are not POSIX filesystem paths, and 
>shouldn't depend on code meant to implement the latter.  But my own 
>implementation is exceedingly ugly ATM, so I'm in no position to grumble 
>too much :-)

While URIs themselves are not, of course, POSIX filesystem paths, I believe
there's a strong case that their path components are semantically identical
in this usage.  I see no need to duplicate code that I know can be fairly
tricky to get right; better to let someone else worry about the corner cases
and take advantage of their work when I can.

>Normalisation of the base URI is optional, and your urljoin function
>never normalises.  Instead, it parses the base and reference, then
>follows the algorithm of section 5.2 of the RFC.  Parsing is required
>before normalisation takes place.  So urljoin forces people who need
>to normalise the URI before to parse it twice, which is annoying.
>There should be some way to parse 5-tuples in instead of URIs.  E.g.,
>from my implementation:
>
>def urljoin(base_uri, uri_reference):
>     return urlunsplit(urljoin_parts(urlsplit(base_uri),
>                                     urlsplit(uri_reference)))
>

It would certainly be easy to add a version which took tuples instead
of strings, but I was attempting, as previously stated, to conform to the
extant urlparse.urljoin API for backward compatibility.  Also as I previously
stated, I have no personal use-cases for urljoin so the issue of having to
double-parse if you do normalization never came to my attention.

>It would be nice to have a 5-tuple-like class (I guess implemented as a 
>subclass of tuple) that also exposes attributes (.authority, .path, etc.) 
>-- the same way module time does it.

That starts to edge over into a 'generic URI' class, which I'm uncomfortable
with due to the possibility of opaque URIs that don't conform to that spec.
The fallback of putting everything other than the scheme into 'path' doesn't
appeal to me.
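
For reference, the kind of thing being suggested is roughly this (a sketch
only; the class and field names here are illustrative, not taken from my
patch or John's code):

    class SplitURI(tuple):
        """(scheme, authority, path, query, fragment) with named attributes."""
        def __new__(cls, scheme, authority, path, query, fragment):
            return tuple.__new__(cls, (scheme, authority, path, query, fragment))
        scheme    = property(lambda self: self[0])
        authority = property(lambda self: self[1])
        path      = property(lambda self: self[2])
        query     = property(lambda self: self[3])
        fragment  = property(lambda self: self[4])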

>The path component is required, though may be empty.  Your parser
>returns None (meaning "undefined") where it should return an empty
>string.

Indeed.  Fixed now; a fresh look at the code showed me where the mistakes
that made that seem necessary lay.

>Nit: Your tests involving ports contain non-digit characters in the
>port (viz. "port"), which is not valid by section 3.2.3 of the RFC.

Indeed.  Nit fixed.

>Smaller nit: the userinfo component was never allowed in http URLs,
>but you use them in your tests.  This issue is outside of RFC 3986, of
>course.

Actually it was allowed in http URLs as an alternate method of
specifying HTTP AUTH, but is indeed now deprecated, I suspect due to
the prevalence of semantic attacks.

>Particularly because the userinfo component is deprecated, I'd rather
>that userinfo-splitting and joining were separate functions, with the
>other functions dealing only with the standard RFC 3986 5-tuples.

It's only deprecated for http; it's quite the convenience for other
protocols.

>DefaultSchemes should be a class attribute of URIParser

Got any good reasoning behind that?  I don't have any real
reason to keep it a module variable other than 
it vaguely feels too 'heavy' for a class attribute to me.
Maybe we can get a second opinion?

>The naming of URLParser / URIParser is still insane :-)  I suggest
>naming them _URIParser and URIParser respectively.

I dunno about insane; as I say in the code:

  URI generally refers to generic URIs and URL refers
  to URIs that match scheme://user:password@host:port/path?query#fragment.

I suppose another way to say it is that I consider a URI to be more
opaque than a URL - my interpretation of section 1.1.3 of rfc3986 which
says:

   A URI can be further classified as a locator, a name, or both.  The
   term "Uniform Resource Locator" (URL) refers to the subset of URIs
   that, in addition to identifying a resource, provide a means of
   locating the resource by describing its primary access mechanism
   (e.g., its network "location"). 

>I guess there should be no mention of "URL" anywhere in the module --
>only "URI" (even though I hate "URI", as a mostly-worthless
>distinction from "URL", consistency inside the module is more
>important, and URI is technically correct and fits with all the
>terminology used in the RFC).  I'm still heavily -1 on calling it
>"uriparse" though, because of the highly misleading comparison with
>the name "urlparse" (the difference between the modules isn't the
>difference between URIs and URLs).

uriparse is more generic and extensible than urlparse is; just as uris
are to urls.  Honestly I don't care, I was just trying to come up with a
name that would be distinct enough to not cause confusion, wasn't too
inelegant (eg. urlparse2), and was appropriate enough that a future user
would look at it.  I'm open to suggestions.

>Re your comment on "mailto:" in the tracker: sure, I understand it's not 
>meant to be public, but the interface is!  .parse() will return a 4-tuple 
>for mailto: URLs.  For everything else, it will return a 7-tuple.  That's 
>silly.

That's URIs: the length of the tuple returned is scheme-dependent.  The
alternative is to provide less functionality by lumping everything after
the : into a single 'path' component that the programmer will then have to
parse again anyway.  This way they don't have to do that extra work.

>The documentation should explain that the function of URIParser is
>hiding scheme-dependent URI normalisation.

Its function is twofold: to allow hiding and to provide an extensible
framework with a uniform interface.  But yes, the documentation should
be better.

>Method names and locals are still likeThis, contrary to PEP 8.

Fixed most places, I think.

>docstrings and other whitespace are still non-standard -- follow PEP 8
>(and PEP 257, which PEP 8 references) Doesn't have to be totally rigid
>of course -- e.g. lining up the ":" characters in the tests is fine.

I've given the whole thing a once-over to try and come up to spec, but 
I'm sure I missed something, so feel free to point out anything I missed.

>Standard stdlib form documentation is still missing.  I'll be told off
>if I don't read you your rights: you don't have to submit in LaTeX
>markup -- apparently there are hordes of eager LaTeX markers-up
>lurking ready to pounce on poorly-formatted documentation <wink>
>
>Test suite still needs tweaking to put it in standard stdlib form

Excuse my ignorance, but is there a PEP somewhere for 'standard stdlib
form' ?  What do I need to do to bring it up to snuff?

  --pj


From thomas at python.org  Sun Jun  4 11:13:21 2006
From: thomas at python.org (Thomas Wouters)
Date: Sun, 4 Jun 2006 11:13:21 +0200
Subject: [Python-Dev] [Python-checkins] r46603 -
	python/trunk/Lib/test/test_struct.py
In-Reply-To: <2m8xoduxy0.fsf@starship.python.net>
References: <20060602130346.9071C1E400B@bag.python.org>
	<ee2a432c0606030950t11d8e740wa409607643e04169@mail.gmail.com>
	<9e804ac0606031651q56199989ra8662d9e57ba8660@mail.gmail.com>
	<1f7befae0606031839t32ca5ddch361c2074fc81f20f@mail.gmail.com>
	<2m8xoduxy0.fsf@starship.python.net>
Message-ID: <9e804ac0606040213i45aabb3hfc1205d1d3b406ed@mail.gmail.com>

On 6/4/06, Michael Hudson <mwh at python.net> wrote:
[ For non-checkins readers: Martin Blais checked in un-unittestification of
test_struct, which spawned questions from Neal and me about whether that's
really the right thing to do. I also foolishly<0.5 wink> suggested that, if
we switch away from unittest, we switch to py.test instead of the old
unstructured tests ]

"Tim Peters" <tim.peters at gmail.com> writes:
> > unittest, and especially doctest, encourage breaking tests into small
> > units.  An example of neither is test_descr.py, which can be a real
> > bitch to untangle when it fails.
>
> Also, there is an advantage to have more structure to the tests; if
> all of python's tests used unittest, my regrtest -R gimmickery would
> be able to identify tests, rather than test files, that leaked and I'm
> pretty sure that this would have saved me a few hours in the last
> couple of years.  Also, you can more easily identify particular tests
> that fail intermittently.  Etc.


I'm not arguing against structure, just against all the unittest cumber. For
example, py.test doesn't do the output-comparing, and it does require you to
put tests in separate functions. However, it doesn't require (but does
allow) test classes. Test-generators are generators that *return* tests,
which are then run, so that you can have separate tests for
runtime-calculated tasks, and yet still have them be separate tests for
error reporting and such. py.test also allows tests to print during
execution, and that output is kept around as debug output: it's only shown
when the test fails. It also comes with a convenient command-line tool that
can run directories, modules, individual tests, etc -- which, for unittest,
I *always* have to copy-paste select bits out of regrtest and test_support
for. My own project testing has gotten much more exhaustive since I started
using py.test, it's just much, much more convenient.

I'm not arguing for inclusion of py.test so much as unittest taking over
most of its features (obviously not for 2.5, though.) I really don't see the
point of test-classes the way we currently have them. When I look at the
stdlib tests, most of the TestCases have methods that don't really need to
be in a single TestCase (and I believe the 'normal' JUnit style has them as
separate classes in those cases. yikes.) I don't really see the point in
using all the variations of 'assert' as methods, except for 'assertRaises'.
Et cetera. py.test's approach is simpler, more direct, easier to read and
write, and just as powerful (and in the case of test-generators, more so.)
And you can very easily emulate the unittest API with py.test ;-)

Since there's time for 2.6, I'd encourage everyone to look at py.test, maybe
we can just merge it with unittest ;P

-- 
Thomas Wouters <thomas at python.org>

Hi! I'm a .signature virus! copy me into your .signature file to help me
spread!
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.python.org/pipermail/python-dev/attachments/20060604/54a653ea/attachment.html 

From mwh at python.net  Sun Jun  4 11:41:50 2006
From: mwh at python.net (Michael Hudson)
Date: Sun, 04 Jun 2006 10:41:50 +0100
Subject: [Python-Dev] [Python-checkins] r46603 -
 python/trunk/Lib/test/test_struct.py
In-Reply-To: <9e804ac0606040213i45aabb3hfc1205d1d3b406ed@mail.gmail.com>
	(Thomas Wouters's message of "Sun, 4 Jun 2006 11:13:21 +0200")
References: <20060602130346.9071C1E400B@bag.python.org>
	<ee2a432c0606030950t11d8e740wa409607643e04169@mail.gmail.com>
	<9e804ac0606031651q56199989ra8662d9e57ba8660@mail.gmail.com>
	<1f7befae0606031839t32ca5ddch361c2074fc81f20f@mail.gmail.com>
	<2m8xoduxy0.fsf@starship.python.net>
	<9e804ac0606040213i45aabb3hfc1205d1d3b406ed@mail.gmail.com>
Message-ID: <2m3beluujl.fsf@starship.python.net>

"Thomas Wouters" <thomas at python.org> writes:

> On 6/4/06, Michael Hudson <mwh at python.net> wrote:
> [ For non-checkins readers: Martin Blais checked in un-unittestification
> of test_struct, which spawned questions form Neal and me about whether
> that's really the right thing to do. I also foolishly< 0.5 wink> siggested
> that, if we switch away from unittest, we switch to py.test instead of the
> old unstructured tests ] 
>
>   "Tim Peters" <tim.peters at gmail.com> writes:
>   > unittest, and especially doctest, encourage breaking tests into small
>   > units.  An example of neither is test_descr.py, which can be a real
>   > bitch to untangle when it fails.
>
>   Also, there is an advantage to have more structure to the tests; if
>   all of python's tests used unittest, my regrtest -R gimmickery would
>   be able to identify tests, rather than test files, that leaked and I'm
>   pretty sure that this would have saved me a few hours in the last
>   couple of years.  Also, you can more easily identify particular tests
>   that fail intermittently.  Etc.
>
> I'm not arguing against structure, just against all the unittest cumber.
> For example, py.test doesn't do the output-comparing, and it does require
> you to put tests in separate functions. However, it doesn't require (but
> does allow) test classes. Test-generators are generators that *return*
> tests, which are then run, so that you can have separate tests for
> runtime-calculated tasks, and yet still have them be separate tests for
> error reporting and such. py.test also allows tests to print during
> execution, and that output is kept around as debug output: it's only shown
> when the test fails. It also comes with a convenient command-line tool
> that can run directories, modules, individual tests, etc -- which, for
> unittest, I *always* have to copy-paste select bits out of regrtest and
> test_support for. My own project testing has gotten much more exhaustive
> since I started using py.test, it's just much, much more convenient.

I don't want to pull the 'do you know who I am?' routine, and I know
you're addressing python-dev rather than just me, but I'm currently
sitting in the same room as the guy who wrote py.test :-)

I'm also not sure what point you're trying to make: I *know* py.test
is better than unittest, that's not what I was saying.  But unittest
is better than old-skool output comparison tests.

I guess you're not really replying to my mail, in fact... :)

Cheers,
mwh

-- 
  <glyph> we need PB for C#
  * moshez squishes glyph
  <moshez> glyph: squishy insane person
                                                -- from Twisted.Quotes

From thomas at python.org  Sun Jun  4 11:55:36 2006
From: thomas at python.org (Thomas Wouters)
Date: Sun, 4 Jun 2006 11:55:36 +0200
Subject: [Python-Dev] [Python-checkins] r46603 -
	python/trunk/Lib/test/test_struct.py
In-Reply-To: <2m3beluujl.fsf@starship.python.net>
References: <20060602130346.9071C1E400B@bag.python.org>
	<ee2a432c0606030950t11d8e740wa409607643e04169@mail.gmail.com>
	<9e804ac0606031651q56199989ra8662d9e57ba8660@mail.gmail.com>
	<1f7befae0606031839t32ca5ddch361c2074fc81f20f@mail.gmail.com>
	<2m8xoduxy0.fsf@starship.python.net>
	<9e804ac0606040213i45aabb3hfc1205d1d3b406ed@mail.gmail.com>
	<2m3beluujl.fsf@starship.python.net>
Message-ID: <9e804ac0606040255h57c1998gf9bdb15ba78d899f@mail.gmail.com>

On 6/4/06, Michael Hudson <mwh at python.net> wrote:
>
> "Thomas Wouters" <thomas at python.org> writes:
>
> > On 6/4/06, Michael Hudson <mwh at python.net> wrote:
> > [ For non-checkins readers: Martin Blais checked in un-unittestification
> > of test_struct, which spawned questions from Neal and me about whether
> > that's really the right thing to do. I also foolishly <0.5 wink>
> suggested
> > that, if we switch away from unittest, we switch to py.test instead of
> the
> > old unstructured tests ]
> >
> >   "Tim Peters" <tim.peters at gmail.com> writes:
> >   > unittest, and especially doctest, encourage breaking tests into
> small
> >   > units.  An example of neither is test_descr.py, which can be a real
> >   > bitch to untangle when it fails.
> >
> >   Also, there is an advantage to have more structure to the tests; if
> >   all of python's tests used unittest, my regrtest -R gimmickery would
> >   be able to identify tests, rather than test files, that leaked and I'm
> >   pretty sure that this would have saved me a few hours in the last
> >   couple of years.  Also, you can more easily identify particular tests
> >   that fail intermittently.  Etc.
> >
> > I'm not arguing against structure, just against all the unittest cumber.
> > For example, py.test doesn't do the output-comparing, and it does
> require
> > you to put tests in separate functions. However, it doesn't require (but
> > does allow) test classes. Test-generators are generators that *return*
> > tests, which are then run, so that you can have separate tests for
> > runtime-calculated tasks, and yet still have them be separate tests for
> > error reporting and such. py.test also allows tests to print during
> > execution, and that output is kept around as debug output: it's only
> shown
> > when the test fails. It also comes with a convenient command-line tool
> > that can run directories, modules, individual tests, etc -- which, for
> > unittest, I *always* have to copy-paste select bits out of regrtest and
> > test_support for. My own project testing has gotten much more exhaustive
> > since I started using py.test, it's just much, much more convenient.
>
> I don't want to pull the 'do you know who I am?' routine, and I know
> you're addressing python-dev rather than just me, but I'm currently
> sitting in the same room as the guy who wrote py.test :-)
>
> I'm also not sure what point you're trying to make: I *know* py.test
> is better than unittest, that's not what I was saying.  But unittest
> is better than old-skool output comparison tests.
>
> I guess you're not really replying to my mail, in fact... :)


I'm sorry, I guess I was misunderstanding your mail. I thought Tim's
reaction was "we want unittest because we want structure", and your reaction
was "yes, we need more structure", both of which I took as "I don't really
know anything about py.test" :) Since no one argued *against* structure, I'm
not sure where the structure argument comes from. As for not knowing about
your "involvement" with py.test, well, how could I? py.test doesn't list an
'author' anywhere I could find, the webpage just says "last edited by
Holger", and the debian package came with no CREDITS file other than the
'copyright' file, which doesn't list you ;-P

Credit-+=-mwh-where-credit-is-due--now-please-merge-with-unittest-already<wink>'ly
y'rs,
-- 
Thomas Wouters <thomas at python.org>

Hi! I'm a .signature virus! copy me into your .signature file to help me
spread!

From thomas at python.org  Sun Jun  4 11:56:48 2006
From: thomas at python.org (Thomas Wouters)
Date: Sun, 4 Jun 2006 11:56:48 +0200
Subject: [Python-Dev] [Python-checkins] r46603 -
	python/trunk/Lib/test/test_struct.py
In-Reply-To: <9e804ac0606040255h57c1998gf9bdb15ba78d899f@mail.gmail.com>
References: <20060602130346.9071C1E400B@bag.python.org>
	<ee2a432c0606030950t11d8e740wa409607643e04169@mail.gmail.com>
	<9e804ac0606031651q56199989ra8662d9e57ba8660@mail.gmail.com>
	<1f7befae0606031839t32ca5ddch361c2074fc81f20f@mail.gmail.com>
	<2m8xoduxy0.fsf@starship.python.net>
	<9e804ac0606040213i45aabb3hfc1205d1d3b406ed@mail.gmail.com>
	<2m3beluujl.fsf@starship.python.net>
	<9e804ac0606040255h57c1998gf9bdb15ba78d899f@mail.gmail.com>
Message-ID: <9e804ac0606040256qc6cb35dv3a4d854442e3d097@mail.gmail.com>

On 6/4/06, Thomas Wouters <thomas at python.org> wrote:

> On 6/4/06, Michael Hudson <mwh at python.net> wrote:
> >
> > I don't want to pull the 'do you know who I am?' routine, and I know
> > you're addressing python-dev rather than just me, but I'm currently
> > sitting in the same room as the guy who wrote py.test :-)
>
>
[me crediting mwh]

Oh, sitting in the same *room*. Sheesh, I should really learn to read my
mail. Sorry again :P

-- 
Thomas Wouters <thomas at python.org>

Hi! I'm a .signature virus! copy me into your .signature file to help me
spread!

From skip at pobox.com  Sun Jun  4 13:14:56 2006
From: skip at pobox.com (skip at pobox.com)
Date: Sun, 4 Jun 2006 06:14:56 -0500
Subject: [Python-Dev] Mac/wastemodule build failing
Message-ID: <17538.49328.860667.503218@montanaro.dyndns.org>

I saw a thread here a couple days ago about a bunch of old Mac cruft going
away.  I recall wastemodule.c being mentioned.  Now building it is failing
for me (Mac OSX 10.4.6).  Was it only wounded but not killed?  The first
couple of errors are the key I think:

  .../Mac/Modules/waste/wastemodule.c:19:30: error: WEObjectHandlers.h: No such file or directory
  .../Mac/Modules/waste/wastemodule.c:20:20: error: WETabs.h: No such file or directory

Skip

From ronaldoussoren at mac.com  Sun Jun  4 14:40:37 2006
From: ronaldoussoren at mac.com (Ronald Oussoren)
Date: Sun, 4 Jun 2006 14:40:37 +0200
Subject: [Python-Dev] Mac/wastemodule build failing
In-Reply-To: <17538.49328.860667.503218@montanaro.dyndns.org>
References: <17538.49328.860667.503218@montanaro.dyndns.org>
Message-ID: <61173E98-CB4C-4F74-B7D3-3E368A77ED1F@mac.com>


On 4-jun-2006, at 13:14, skip at pobox.com wrote:

> I saw a thread here a couple days ago about a bunch of old Mac  
> cruft going
> away.  I recall wastemodule.c being mentioned.  Now building it is  
> failing
> for me (Mac OSX 10.4.6).  Was it only wounded but not killed?  The  
> first
> couple of errors are the key I think:
>
>   .../Mac/Modules/waste/wastemodule.c:19:30: error:  
> WEObjectHandlers.h: No such file or directory
>   .../Mac/Modules/waste/wastemodule.c:20:20: error: WETabs.h: No  
> such file or directory
>

Is that a new failure? Waste is a wrapper for the waste library,
which is a third-party library that
must be at a specific location in your tree to build correctly.

Ronald


From aahz at pythoncraft.com  Sun Jun  4 15:36:41 2006
From: aahz at pythoncraft.com (Aahz)
Date: Sun, 4 Jun 2006 06:36:41 -0700
Subject: [Python-Dev] [Python-checkins] r46603 -
	python/trunk/Lib/test/test_struct.py
In-Reply-To: <9e804ac0606040256qc6cb35dv3a4d854442e3d097@mail.gmail.com>
References: <20060602130346.9071C1E400B@bag.python.org>
	<ee2a432c0606030950t11d8e740wa409607643e04169@mail.gmail.com>
	<9e804ac0606031651q56199989ra8662d9e57ba8660@mail.gmail.com>
	<1f7befae0606031839t32ca5ddch361c2074fc81f20f@mail.gmail.com>
	<2m8xoduxy0.fsf@starship.python.net>
	<9e804ac0606040213i45aabb3hfc1205d1d3b406ed@mail.gmail.com>
	<2m3beluujl.fsf@starship.python.net>
	<9e804ac0606040255h57c1998gf9bdb15ba78d899f@mail.gmail.com>
	<9e804ac0606040256qc6cb35dv3a4d854442e3d097@mail.gmail.com>
Message-ID: <20060604133641.GA5852@panix.com>

On Sun, Jun 04, 2006, Thomas Wouters wrote:
>On 6/4/06, Thomas Wouters <thomas at python.org> wrote:
>>On 6/4/06, Michael Hudson <mwh at python.net> wrote:
>>>
>>> I don't want to pull the 'do you know who I am?' routine, and I know
>>> you're addressing python-dev rather than just me, but I'm currently
>>> sitting in the same room as the guy who wrote py.test :-)
>>
>> [me crediting mwh]
> 
> Oh, sitting in the same *room*. Sheesh, I should really learn to read my
> mail. Sorry again :P

Don't feel too sorry -- I read that as British irony for "yes, I'm the
author".  (As in, "I'm sitting in the same room as the author, and
there's only one person in the room.")
-- 
Aahz (aahz at pythoncraft.com)           <*>         http://www.pythoncraft.com/

"I saw `cout' being shifted "Hello world" times to the left and stopped
right there."  --Steve Gonedes

From skip at pobox.com  Sun Jun  4 15:40:43 2006
From: skip at pobox.com (skip at pobox.com)
Date: Sun, 4 Jun 2006 08:40:43 -0500
Subject: [Python-Dev] Mac/wastemodule build failing
In-Reply-To: <61173E98-CB4C-4F74-B7D3-3E368A77ED1F@mac.com>
References: <17538.49328.860667.503218@montanaro.dyndns.org>
	<61173E98-CB4C-4F74-B7D3-3E368A77ED1F@mac.com>
Message-ID: <17538.58075.248198.509193@montanaro.dyndns.org>

    >> I recall wastemodule.c being mentioned.  Now building it is failing
    >> for me (Mac OSX 10.4.6).  Was it only wounded but not killed?

    Ronald> Is that a new failure?

Yes.  I svn up and rebuild at least once a week.

Skip

From andymac at bullseye.apana.org.au  Sun Jun  4 15:05:24 2006
From: andymac at bullseye.apana.org.au (Andrew MacIntyre)
Date: Mon, 05 Jun 2006 00:05:24 +1100
Subject: [Python-Dev] patch #1454481 vs buildbot
Message-ID: <4482DA94.8090707@bullseye.apana.org.au>

In reviewing the buildbot logs after committing this patch, I see 2
issues arising that I need advice about...

1.  The Solaris build failure in thread.c has me mystified as I can't
find any "_sysconf" symbol - is this in a system header?

2.  I don't know what to make of the failure of test_threading on Linux, 
as test_thread succeeds as far as I could see.  These tests succeed on my
FreeBSD box and also appear to be succeeding on the Windows buildbots.

Unfortunately I don't have access to either a Solaris box or a Linux box
so could use some hints about resolving these.

Thanks,
Andrew.

-------------------------------------------------------------------------
Andrew I MacIntyre                     "These thoughts are mine alone..."
E-mail: andymac at bullseye.apana.org.au  (pref) | Snail: PO Box 370
        andymac at pcug.org.au             (alt) |        Belconnen ACT 2616
Web:    http://www.andymac.org/               |        Australia

From ronaldoussoren at mac.com  Sun Jun  4 16:27:37 2006
From: ronaldoussoren at mac.com (Ronald Oussoren)
Date: Sun, 4 Jun 2006 16:27:37 +0200
Subject: [Python-Dev] Mac/wastemodule build failing
In-Reply-To: <17538.58075.248198.509193@montanaro.dyndns.org>
References: <17538.49328.860667.503218@montanaro.dyndns.org>
	<61173E98-CB4C-4F74-B7D3-3E368A77ED1F@mac.com>
	<17538.58075.248198.509193@montanaro.dyndns.org>
Message-ID: <2C828F71-07AB-4C25-BBA0-A0DBDE0F2F4A@mac.com>


On 4-jun-2006, at 15:40, skip at pobox.com wrote:

>>> I recall wastemodule.c being mentioned.  Now building it is failing
>>> for me (Mac OSX 10.4.6).  Was it only wounded but not killed?
>
>     Ronald> Is that a new failure?
>
> Yes.  I svn up and rebuild at least once a week.

The failure was the result of the removal of Mac/Wastemods; these
are needed for the waste wrappers. I've just checked in a patch that  
removes these wrappers (revision 46644), they won't work on Intel  
macs, aren't used by anything in the current tree and are undocumented.

Ronald


From ncoghlan at gmail.com  Sun Jun  4 17:01:29 2006
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 05 Jun 2006 01:01:29 +1000
Subject: [Python-Dev] Some more comments re new uriparse module,
 patch 1462525
In-Reply-To: <20060604010938.DDA4E179C66@place.org>
References: <Pine.LNX.4.64.0606022059340.8454@localhost>
	<20060604010938.DDA4E179C66@place.org>
Message-ID: <4482F5C9.20407@gmail.com>

Paul Jimenez wrote:
> On Friday, Jun 2, 2006, John J Lee writes:
>> [Not sure whether this kind of thing is best posted as tracker comments 
>> (but then the tracker gets terribly long and is mailed out every time a 
>> change happens) or posted here.  Feel free to tell me I'm posting in the 
>> wrong place...]
> 
> I think this is a fine place - more googleable, still archived, etc.
> 
>> Some comments on this patch (a new module, submitted by Paul Jimenez, 
>> implementing the rules set out in RFC 3986 for URI parsing, joining URI 
>> references with a base URI etc.)
>>
>> http://python.org/sf/1462525
> 
> Note that like many opensource authors, I wrote this to 'scratch an
> itch' that I had... and am submitting it in hopes of saving someone else
> somewhere some essentially identical work. I'm not married to it; I just
> want something *like* it to end up in the stdlib so that I can use it.

I started to write a reply to this with some comments on the API (including 
the internal subclassing API), but ended up with so many different suggestions 
it was easier to just post a variant of the module. I called it "urischemes" 
and posted it on SF:

http://python.org/sf/1500504

It takes more advantage of the strict hierarchy defined in RFC 3986 by always
treating parsed URIs as 5-tuples, and the authority component as a 4-tuple. A
parser is allowed to return any object it likes for the path, query or
fragment components, so long as invoking str() on the result gives an
appropriate string for use by make_uri.
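
(A rough sketch of what that convention might look like in practice; the
class and the exact tuple layout below are my own illustration, not code
from the patch:)

    class SegmentedPath(object):
        # any object will do for the path component, as long as str()
        # of it produces the right text for make_uri
        def __init__(self, segments):
            self.segments = list(segments)
        def __str__(self):
            return '/' + '/'.join(self.segments)

    parsed = ('http',                               # scheme
              (None, 'example.org', None, None),    # authority as a 4-tuple
              SegmentedPath(['docs', 'index.html']), # path
              'q=python',                           # query
              'intro')                              # fragment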

Additionally, since the semantics won't be the same as urlparse anyway, I 
haven't been as worried about keeping the APIs identical (although they're 
still similar).

In various places, it also makes more use of keyword arguments and 
dictionaries to specify default values, rather than relying on tuples padded
with lots of Nones.

There's more in the tracker item about the API and implementation differences. 
They're all about improving maintainability and extensibility rather than 
providing any additional functionality.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------
             http://www.boredomandlaziness.org

From tim.peters at gmail.com  Sun Jun  4 17:39:36 2006
From: tim.peters at gmail.com (Tim Peters)
Date: Sun, 4 Jun 2006 11:39:36 -0400
Subject: [Python-Dev] patch #1454481 vs buildbot
In-Reply-To: <4482DA94.8090707@bullseye.apana.org.au>
References: <4482DA94.8090707@bullseye.apana.org.au>
Message-ID: <1f7befae0606040839i4240eaeo343c352dd6a882af@mail.gmail.com>

[Andrew MacIntyre]
> In reviewing the buildbot logs after committing this patch, I see 2
> issues arising that I need advice about...
>
> 1.  The Solaris build failure in thread.c has me mystified as I can't
> find any "_sysconf" symbol - is this in a system header?

The patch's

#if THREAD_STACK_MIN < PTHREAD_STACK_MIN

assumes that the expansion of PTHREAD_STACK_MIN acts like a
compile-time constant expression, but there's no such guarantee.

    http://cvs.opensolaris.org/source/xref/on/usr/src/head/limits.h

shows that, on one version of Solaris, it's actually defined via

#define	PTHREAD_STACK_MIN ((size_t)_sysconf(_SC_THREAD_STACK_MIN))

That has a runtime value, but not a useful compile-time value.  The
only useful thing you can do with it in an #if expression is
defined(PTHREAD_STACK_MIN).

> 2.  I don't know what to make of the failure of test_threading on Linux,
> as test_thread succeeds as far as I could see.  These tests succeed on my
> FreeBSD box and also appear to be succeeding on the Windows buildbots.

Not all pthreads-using builds fail, and not all failing pthreads-using
builds fail in the same way.  Welcome to pthreads on Linux ;-)

BTW, this sucks:

test_thread
/home/buildbot/Buildbot/trunk.baxter-ubuntu/build/Lib/test/test_thread.py:138:
RuntimeWarning: thread stack size of 0x1000 bytes not supported
  thread.stack_size(tss)

That's from a successful run.  RuntimeWarning really doesn't make
sense for a failing operation.  This should raise an exception
(xyzError, not xyzWarning), or a failing stack_size() should return an
error value after ensuring the original stack size is still in effect.

> Unfortunately I don't have access to either a Solaris box or a Linux box
> so could use some hints about resolving these.

As above, they don't always fail in the same way across boxes.  The
most popular failure mode appears to be:

ERROR: test_various_ops_large_stack (test.test_threading.ThreadTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/buildslave/buildslave/python/trunk.norwitz-amd64/build/Lib/test/test_threading.py",
line 101, in test_various_ops_large_stack
    self.test_various_ops()
  File "/home/buildslave/buildslave/python/trunk.norwitz-amd64/build/Lib/test/test_threading.py",
line 77, in test_various_ops
    t.start()
  File "/home/buildslave/buildslave/python/trunk.norwitz-amd64/build/Lib/threading.py",
line 434, in start
    _start_new_thread(self.__bootstrap, ())
error: can't start new thread

While I don't know, best guess is that the system "ulimit -s" is set
to 8MB, so it's not actually possible to get a 16MB stack (as
test_various_ops_large_stack() asks for), and this error goes
undetected at the test's

         threading.stack_size(0x1000000)

call "for some reason".  Or maybe it is, but the RuntimeWarning got
lost -- if this were an exception instead, it would be easier to
reason about, and test_various_ops_large_stack() could disable itself
gracefully (by catching the exception and giving up) if the platform
didn't allow a 16MB stack ...
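
(Something along these lines, purely as a sketch of how the test could
guard itself *if* stack_size() raised instead of warning -- the exception
types here are assumptions, since today it only warns:)

    import threading

    def run_with_large_stack(target, size=0x1000000):
        try:
            old = threading.stack_size(size)   # returns the previous setting
        except (ValueError, RuntimeError):     # assumed exception types
            return False                       # platform refused; give up quietly
        try:
            t = threading.Thread(target=target)
            t.start()
            t.join()
            return True
        finally:
            threading.stack_size(old)          # restore the previous setting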

Ah, _pythread_pthread_set_stacksize doesn't do anything to verify that
the requested stack size is actually usable (it just ensures that it's
no less than THREAD_STACK_MIN).  pthread_attr_setstacksize() isn't
attempted until PyThread_start_new_thread() in thread_pthread.h:

#if defined(THREAD_STACK_SIZE)
	tss = (_pythread_stacksize != 0) ? _pythread_stacksize
					 : THREAD_STACK_SIZE;
	if (tss != 0) {
		if (pthread_attr_setstacksize(&attrs, tss) != 0) {
			pthread_attr_destroy(&attrs);
			return -1;
		}
	}
#endif

If PyThread_start_new_thread() fails in any way
(like pthread_attr_setstacksize() failing), "can't start new thread"
is the error we see.

The difference between test_thread and test_threading here is that
only test_threading asks for a 16MB stack; test_thread doesn't ask for
a stack larger than 4MB.

Until all this gets resolved, I strongly suggest reverting this patch
(if you don't, someone else will ...) and hammering out the problems
on a new branch instead.  See python-dev email from yesterday for how
to force a buildbot slave to build a branch.

From skip at pobox.com  Sun Jun  4 19:34:31 2006
From: skip at pobox.com (skip at pobox.com)
Date: Sun, 4 Jun 2006 12:34:31 -0500
Subject: [Python-Dev] patch #1454481 vs buildbot
In-Reply-To: <1f7befae0606040839i4240eaeo343c352dd6a882af@mail.gmail.com>
References: <4482DA94.8090707@bullseye.apana.org.au>
	<1f7befae0606040839i4240eaeo343c352dd6a882af@mail.gmail.com>
Message-ID: <17539.6567.511005.458660@montanaro.dyndns.org>


    Tim> Until all this gets resolved, I strongly suggest reverting this
    Tim> patch ...

So I won't check in changes to suppress compilation warnings on my Mac.
Andrew, look in your mail for a patch file.

Skip



From brett at python.org  Sun Jun  4 19:50:59 2006
From: brett at python.org (Brett Cannon)
Date: Sun, 4 Jun 2006 10:50:59 -0700
Subject: [Python-Dev] Request for trackers to evaluate as SF replacement for
	Python development
Message-ID: <bbaeab100606041050u2509f777p6577370bce2d9e54@mail.gmail.com>

The Python Software Foundation's Infrastructure committee has been charged
with finding a new tracker system to be used by the Python development team
as a replacement for SourceForge.  The development team is currently unhappy
with SF for several reasons which include:

* Bad interface
    Most obvious example is the "Check to Upload" button
* Lack of reliability
    SF has been known to go down during the day unexpectedly and stay down
for hours
* Lack of workflow controls
    For instance, you cannot delete a category once created

For these reasons and others, we are requesting the Python community help us
find a new tracker to use.  We are asking for test trackers to be set up to
allow us to test them to see which tracker we want to move the Python
development team to.  This is in order to allow the Infrastructure committee
to evaluate the various trackers to see which one meets our tracker needs
the best.

Because we are not sure exactly what our requirements for a tracker are, we
do not have a comprehensive requirements document.  But we do have a short
list of bare minimum needs:

* Can import SF data
    http://effbot.org/zone/sandbox-sourceforge.htm contains instructions on
how to access the data dump and work with the support tools (graciously
developed by Fredrik Lundh)
* Can export data
    To prevent the need to develop our own tools to get our data out of the
next tracker, there must be a way to get a dump of the data (formatted or
raw) that includes *all* information
* Has an email interface
    To facilitate participation in tracker item discussions, an email
interface is required to lower the barrier to add comments, files, etc.

If there is a tracker you wish to propose for Python development team use,
these are the steps you must follow:

* Install a test tracker
    If you do not have the server resources needed, you may contact the
Infrastructure committee at infrastructure at python.org, but our resources
are limited by both machine and manpower, so *please* do what you can to use
your own servers; we do not expect you to provide hosting for the final
installation of the tracker for use by python-dev, though, if your tracker
is chosen
* Import the SF data dump
    http://effbot.org/zone/sandbox-sourceforge.htm
* Make the Infrastructure committee members administrators of the tracker
    A list of the committee members can be found at
http://wiki.python.org/moin/PythonSoftwareFoundationCommittees#infrastructure-committee-ic
* Add your tracker to the wiki page at
http://wiki.python.org/moin/CallForTrackers
    This includes specifying the contact information for a *single* lead
person to contact for any questions about the tracker; this is to keep
communication simple and prevent us from having competing installations of
the same tracker software
* Email the Infrastructure committee that your test tracker is up and ready
to be viewed

We will accept new trackers for up to a maximum of two months starting
2006-06-05 (and thus ending 2006-08-07).  If trackers cease to be suggested,
we will close acceptance one month after the last tracker proposed (this
means the maximum timeframe for all of this is three months, ending
2006-09-04).  This allows us to avoid having this process carry on for three
months if there is no need for it to, thanks to people getting trackers up
quickly.

As the committee evaluates trackers we will add information about what we
like and dislike to the http://wiki.python.org/moin/GoodTrackerFeatures wiki
page so that the various trackers can change their settings and notify us of
such changes.  This prevents penalizing trackers that are set up quickly
(which could be taken as a sign of ease of maintenance) compared to trackers
that are set up later but possibly more tailored to what the Infrastructure
committee discovers they want from a tracker.

If you have any questions, feel free to email infrastructure at python.org .

- Brett Cannon
  Chairman, Python Software Foundation Infrastructure committee

From collinw at gmail.com  Sun Jun  4 22:25:54 2006
From: collinw at gmail.com (Collin Winter)
Date: Sun, 4 Jun 2006 22:25:54 +0200
Subject: [Python-Dev] Unhashable objects and __contains__()
In-Reply-To: <ca471dc20606031558m230cc8acp77223df8dd3ca22e@mail.gmail.com>
References: <43aa6ff70606031525k4c70737ep6b7b89fdfd80bc7e@mail.gmail.com>
	<e5t3qi$82t$1@sea.gmane.org>
	<ca471dc20606031558m230cc8acp77223df8dd3ca22e@mail.gmail.com>
Message-ID: <43aa6ff70606041325g5b56e24bq99f8f11b5ebad1fb@mail.gmail.com>

On 6/4/06, Guido van Rossum <guido at python.org> wrote:
> On 6/3/06, Georg Brandl <g.brandl at gmx.net> wrote:
> > Collin Winter wrote:
> > > Idea: what if Python's -O option caused PySequence_Contains() to
> > > convert all errors into False return values?
> >
> > It would certainly give me an uneasy feeling if a command-line switch
> > caused such a change in semantics.
>
> I missed that. Collin must be suffering from a heat stroke. :-)

I don't think Munich gets hot enough for heat stroke ; ) The more
likely culprit is a lack of coffee this morning and a bit too much
weissbier last night.

Consider the idea eagerly withdrawn.

Collin Winter

From g.brandl at gmx.net  Sun Jun  4 23:05:09 2006
From: g.brandl at gmx.net (Georg Brandl)
Date: Sun, 04 Jun 2006 23:05:09 +0200
Subject: [Python-Dev] Unhashable objects and __contains__()
In-Reply-To: <43aa6ff70606041325g5b56e24bq99f8f11b5ebad1fb@mail.gmail.com>
References: <43aa6ff70606031525k4c70737ep6b7b89fdfd80bc7e@mail.gmail.com>	<e5t3qi$82t$1@sea.gmane.org>	<ca471dc20606031558m230cc8acp77223df8dd3ca22e@mail.gmail.com>
	<43aa6ff70606041325g5b56e24bq99f8f11b5ebad1fb@mail.gmail.com>
Message-ID: <e5vh1q$7jd$1@sea.gmane.org>

Collin Winter wrote:
> On 6/4/06, Guido van Rossum <guido at python.org> wrote:
>> On 6/3/06, Georg Brandl <g.brandl at gmx.net> wrote:
>> > Collin Winter wrote:
>> > > Idea: what if Python's -O option caused PySequence_Contains() to
>> > > convert all errors into False return values?
>> >
>> > It would certainly give me an uneasy feeling if a command-line switch
>> > caused such a change in semantics.
>>
>> I missed that. Collin must be suffering from a heat stroke. :-)
> 
> I don't think Munich gets hot enough for heat stroke ; )

I can confirm that, unfortunately.
The best thing you can say about our weather at the moment is that
the winter is relatively mild :(

> The more likely culprit is a lack of coffee this morning and a bit
 > too much weissbier last night.

I'd blame it on the coffee.

Georg


From paul at prescod.net  Mon Jun  5 02:18:35 2006
From: paul at prescod.net (Paul Prescod)
Date: Sun, 4 Jun 2006 20:18:35 -0400
Subject: [Python-Dev] Seeking Core Developers for Vancouver Python Workshop
Message-ID: <1cb725390606041718x51bab772lcfb8490ad03b10b1@mail.gmail.com>

We would like to continue the tradition of having several Python core
developers at the Vancouver Python Workshop. This fun conference is small
enough to be intimate but consistently attracts top-notch speakers and
attendees. We would like YOU (core developer or not) on that list. Although
we haven't selected all of our speakers, we've got three great keynotes
lined up: Guido van Rossum, Jim Hugunin and Ian Caven (a local
Python-trepreneur).

http://www.vanpyz.org/conference

The conference will be very affordable, with cut-rate registration fees for
speakers, prices in Canadian dollars and carefully negotiated hotel rates.

Please contact me with any questions or proceed directly to the talk
submission form!

 Paul Prescod

From jimjjewett at gmail.com  Mon Jun  5 03:27:09 2006
From: jimjjewett at gmail.com (Jim Jewett)
Date: Sun, 4 Jun 2006 21:27:09 -0400
Subject: [Python-Dev] Stdlib Logging questions (PEP 337 SoC)
Message-ID: <fb6fbf560606041827i41a1a4abxce9563fbf91b7ed1@mail.gmail.com>

Jackilyn is adding logging to several stdlib modules for the Google
Summer of Code (PEP 337), and asked me to review her first few
changes.

There were a few comments that I felt I should double-check with
Python-dev first, in case my own intuition is wrong.

For reference, she is adding the following prologue to several modules:

    import logging
    _log = logging.getLogger('py.NAME')

where NAME is the module name


(1)  Should we ask Vinay Sajip to add a convenience function (for
getting the stdlib logger) to the logging module itself?

It seems like it might be overkill, and it wouldn't save even a single line.

On the other hand, I wonder if we will eventually end up wanting a
common choke point for hooks.  PEP 337 did not recommend any such
function, and recommended setting the handlers and level for py.* as
sufficient control.

So I'm inclined to say "no changes to the logging package itself".

(2)  Should NAME be module.__name__?

Note that this means the log messages would be different when the
module is run as a script (and the name is therefore __main__).

I'm not thrilled with repeating the module name as a string, but I
think that may be the lesser of evils.
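
(For concreteness, the two options look like this; 'shlex' is just an
example module name:)

    import logging

    # (a) repeat the module name as a string literal:
    _log = logging.getLogger('py.shlex')

    # (b) derive it from __name__ -- but the logger becomes
    #     'py.__main__' when the module is run as a script:
    _log = logging.getLogger('py.' + __name__)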

(3)  Should she put out a message when a (now logged) module is
loaded?  If so, at what level?

I'm leaning towards

    _log.debug("Stdlib NAME loaded as ", __name__)


(4)  Should she add (other) debugging information that is not already present?

I think that "if __debug__: print ..." or "if debuglevel>0:
sys.stdout.write..." should be changed, and things like set_debuglevel
should be translated to calls to change the level of the appropriate
logger.
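
(One possible translation, purely as a sketch -- the level mapping and the
module-level form are my guesses, not anything Jackilyn has committed to:)

    import logging
    _log = logging.getLogger('py.ftplib')      # example name only

    def set_debuglevel(level):
        # route the old debuglevel knob onto the module's logger
        _log.setLevel(logging.DEBUG if level > 0 else logging.WARNING)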

What about print lines that are currently commented out?  Should these
be displayed at debug level?  (I think so.)

What about useful information that isn't currently displayed?  For
instance, the asyncore module is careful to reuse a global mapping if
it already exists (presumably from a reload), but that seems rare
enough that it should be worth at least an info message.

What about normal config info, such as the typical case of creating an
empty mapping?  In Java, I would expect a CONFIG message either way.
In python, that seems like too much of a change, but ... at debug
level ... maybe not doing it is just a bad habit.

Unless convinced otherwise, I'll ask her to use debug for commented-out
lines that were still important enough to leave in the code, an info
message for unexpected choices, and not to bother with logging "yup,
did the normal thing".


(5)  Should she clean up other issues when touching a module?

In general, stdlib code isn't updated just for style reasons, unless
there is a deprecation -- but in these cases, the module is already
changing slightly.

My inclination would be that she doesn't have to fix everything, but
should generally do so when either it is "easy" or the old code is
arguably a bug (such as bare excepts).  (And "easy" might change as
the summer wears on.)

-jJ

From skip at pobox.com  Mon Jun  5 04:34:27 2006
From: skip at pobox.com (skip at pobox.com)
Date: Sun, 4 Jun 2006 21:34:27 -0500
Subject: [Python-Dev] Stdlib Logging questions (PEP 337 SoC)
In-Reply-To: <fb6fbf560606041827i41a1a4abxce9563fbf91b7ed1@mail.gmail.com>
References: <fb6fbf560606041827i41a1a4abxce9563fbf91b7ed1@mail.gmail.com>
Message-ID: <17539.38963.154864.27536@montanaro.dyndns.org>


    Jim> (1)  Should we ask Vinay Sajip to add a convenience function (for
    Jim>      getting the stdlib logger) to the logging module itself?

-1.

    Jim> (2)  Should NAME be module.__name__?

Seems reasonable.

    Jim> (3)  Should she put out a message when a (now logged) module is
    Jim>      loaded?  If so, at what level?

-1.  I don't think it buys you anything.  If you need it, the -v command
line flag should be sufficient.

    Jim> (4)  Should she add (other) debugging information that is not
    Jim>      already present?

I'd say "no".  Behavior changes should be kept to a bare minimum.

    Jim> (5)  Should she clean up other issues when touching a module?

I suggest they be submitted as separate patches so as not to confuse
distinct issues (say, if one part needs to be retracted).

Skip

From pje at telecommunity.com  Mon Jun  5 05:42:04 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Sun, 04 Jun 2006 23:42:04 -0400
Subject: [Python-Dev] Stdlib Logging questions (PEP 337 SoC)
In-Reply-To: <fb6fbf560606041827i41a1a4abxce9563fbf91b7ed1@mail.gmail.co
 m>
Message-ID: <5.1.1.6.0.20060604231709.02f07700@mail.telecommunity.com>

At 09:27 PM 6/4/2006 -0400, Jim Jewett wrote:
>Jackilyn is adding logging to several stdlib modules for the Google
>Summer of Code (PEP 337), and asked me to review her first few
>changes.

That PEP doesn't appear to have been approved, and I don't recall any 
discussion on Python-Dev.  I also couldn't find any in the archives, except 
for some brief discussion regarding a *small fraction* of the huge list of 
modules in PEP 337.

I personally don't see the value in adding this to anything but modules 
that already do some kind of logging.  And even for some of the modules listed
in the PEP that do some kind of output, I don't really see what the use
case for using the logging module is.  (Why does timeit need a logger, for 
example?)


>There were a few comments that I felt I should double-check with
>Python-dev first, in case my own intuition is wrong.
>
>For reference, she is adding the following prologue to several modules:
>
>     import logging
>     _log = logging.getLogger('py.NAME')
>
>where NAME is the module name

If this *has* to be added to the modules that don't currently do any 
logging, can we please delay the import until it's actually needed?  i.e., 
until after some logging option is enabled?  I don't really like the 
logging module myself and would rather it were not imported as a side 
effect of merely using shlex or pkgutil!
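
(For what it's worth, the deferral is only a few lines; a minimal sketch,
with 'py.shlex' standing in for whatever module is involved:)

    _log = None

    def _get_log():
        global _log
        if _log is None:
            import logging                     # deferred until first use
            _log = logging.getLogger('py.shlex')
        return _log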


>(5)  Should she clean up other issues when touching a module?
>
>In general, stdlib code isn't updated just for style reasons,

Which is a good enough reason, IMO, to vote -1 on the PEP if it's not pared 
back to reflect *only* modules with a valid use case for logging.

I think it would be a good idea to revisit the module list.  I can see a 
reasonable case for the BaseHTTP stuff and asyncore needing a logging 
framework, if you plan to make them part of some larger framework -- the 
configurability would be a plus, even if I personally don't like the way 
the logging module does configuration.  But most of the other modules, I 
just don't see why something more complex than prints is desirable.  As of
Python 2.5, if you want stdout or stderr temporarily redirected, it's easy 
enough to wrap your calls in a with: block.
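
(Something like this hand-rolled helper, say -- nothing with this exact
name ships in the stdlib, it's just a sketch of the with: block idea:)

    from __future__ import with_statement      # needed on 2.5 itself
    from contextlib import contextmanager
    import sys

    @contextmanager
    def stdout_redirected(target):
        # swap sys.stdout for the duration of the with: block
        old_stdout = sys.stdout
        sys.stdout = target
        try:
            yield target
        finally:
            sys.stdout = old_stdout

    # usage:
    #     with stdout_redirected(open('capture.log', 'w')):
    #         noisy_call()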



From fredrik at pythonware.com  Mon Jun  5 09:42:42 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Mon, 05 Jun 2006 09:42:42 +0200
Subject: [Python-Dev] Stdlib Logging questions (PEP 337 SoC)
In-Reply-To: <5.1.1.6.0.20060604231709.02f07700@mail.telecommunity.com>
References: <fb6fbf560606041827i41a1a4abxce9563fbf91b7ed1@mail.gmail.co m>
	<5.1.1.6.0.20060604231709.02f07700@mail.telecommunity.com>
Message-ID: <e60n9j$7t8$1@sea.gmane.org>

Phillip J. Eby wrote:

> If this *has* to be added to the modules that don't currently do any 
> logging, can we please delay the import until it's actually needed?

now that we've had a needforspeed sprint, it's clear that it's time to 
start working on slowing things down again ;-)

> I think it would be a good idea to revisit the module list.  I can see a 
> reasonable case for the BaseHTTP stuff and asyncore needing a logging 
> framework, if you plan to make them part of some larger framework -- the 
> configurability would be a plus

asyncore's logging is configurable through subclassing, as is everything 
else in asyncore (the components don't do anything if you don't subclass 
them).  mapping to the logger module by default will save 5-10 lines of 
code in an application that wants to use the logger module.  *replacing* 
the existing logging hooks with the logger module will break tons of stuff.
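
(a sketch of the kind of 5-10 line subclass being described; the logger
name and the level mapping are arbitrary choices, not asyncore's defaults:)

    import asyncore
    import logging

    class LoggingDispatcher(asyncore.dispatcher):
        logger = logging.getLogger('py.asyncore')   # name is arbitrary

        def log(self, message):
            # asyncore's default log() writes to stderr; route it instead
            self.logger.error(message)

        def log_info(self, message, type='info'):
            # map asyncore's 'info'/'warning'/'error' strings onto logger methods
            getattr(self.logger, type, self.logger.info)(message)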

BaseHTTP only does webserver-style logging, as far as I can tell.

</F>


From skip at pobox.com  Mon Jun  5 13:25:50 2006
From: skip at pobox.com (skip at pobox.com)
Date: Mon, 5 Jun 2006 06:25:50 -0500
Subject: [Python-Dev] Mac/wastemodule build failing
In-Reply-To: <2C828F71-07AB-4C25-BBA0-A0DBDE0F2F4A@mac.com>
References: <17538.49328.860667.503218@montanaro.dyndns.org>
	<61173E98-CB4C-4F74-B7D3-3E368A77ED1F@mac.com>
	<17538.58075.248198.509193@montanaro.dyndns.org>
	<2C828F71-07AB-4C25-BBA0-A0DBDE0F2F4A@mac.com>
Message-ID: <17540.5310.111063.522857@montanaro.dyndns.org>


    Ronald> The failure was the result of the removal of Mac/Wastemods,
    Ronald> these are needed for the waste wrappers. I've just checked in a
    Ronald> patch that removes these wrappers (revision 46644), they won't
    Ronald> work on Intel macs, aren't used by anything in the current tree
    Ronald> and are undocumented.

Worked fine, thanks.

Skip



From amk at amk.ca  Mon Jun  5 14:08:52 2006
From: amk at amk.ca (A.M. Kuchling)
Date: Mon, 5 Jun 2006 08:08:52 -0400
Subject: [Python-Dev] wsgiref documentation
In-Reply-To: <5db086ad0606021233j4e8c2ffvc386919b8808db9b@mail.gmail.com>
References: <5db086ad0606021233j4e8c2ffvc386919b8808db9b@mail.gmail.com>
Message-ID: <20060605120852.GA7519@localhost.localdomain>

On Fri, Jun 02, 2006 at 03:33:41PM -0400, Doug Fort wrote:
> I'm going over the possible tasks for the Arlington Sprint.
> Documentation for wsgiref looks like somethng I could handle.
> 
> Is anyone already working on this?

I had the start of an outline in sandbox/wsgiref-docs, but am not
working on them at the moment because no one is willing to say if the
list of documented classes is complete (or includes too much).  Please
feel free to start writing actual content.  I can check in whatever
you produce; just e-mail me the new version of the file.

--amk


From andymac at bullseye.apana.org.au  Mon Jun  5 12:13:14 2006
From: andymac at bullseye.apana.org.au (Andrew MacIntyre)
Date: Mon, 05 Jun 2006 21:13:14 +1100
Subject: [Python-Dev] patch #1454481 vs buildbot
In-Reply-To: <1f7befae0606040839i4240eaeo343c352dd6a882af@mail.gmail.com>
References: <4482DA94.8090707@bullseye.apana.org.au>
	<1f7befae0606040839i4240eaeo343c352dd6a882af@mail.gmail.com>
Message-ID: <448403BA.4040500@bullseye.apana.org.au>

Tim Peters wrote:

> #if THREAD_STACK_MIN < PTHREAD_STACK_MIN
> 
> assumes that the expansion of PTHREAD_STACK_MIN acts like a
> compile-time constant expression, but there's no such guarantee.
> 
>    http://cvs.opensolaris.org/source/xref/on/usr/src/head/limits.h
> 
> shows that, on one version of Solaris, it's actually defined via
> 
> #define    PTHREAD_STACK_MIN ((size_t)_sysconf(_SC_THREAD_STACK_MIN))
> 
> That has a runtime value, but not a useful compile-time value.  The
> only useful thing you can do with it in an #if expression is
> defined(PTHREAD_STACK_MIN).

Ok.

>> 2.  I don't know what to make of the failure of test_threading on Linux,
>> as test_thread succeeds as far as I could see.  These tests succeed on my
>> FreeBSD box and also appear to be succeeding on the Windows buildbots.
> 
> Not all pthreads-using builds fail, and not all failing pthreads-using
> builds fail in the same way.  Welcome to pthreads on Linux ;-)
> 
> BTW, this sucks:
> 
> test_thread
> /home/buildbot/Buildbot/trunk.baxter-ubuntu/build/Lib/test/test_thread.py:138: 
> 
> RuntimeWarning: thread stack size of 0x1000 bytes not supported
>  thread.stack_size(tss)
> 
> That's from a successful run.  RuntimeWarning really doesn't make
> sense for a failing operation.  This should raise an exception
> (xyzError, not xyzWarning), or a failing stack_size() should return an
> error value after ensuring the original stack size is still in effect.

Fair enough.

{...}

> If PyThread_start_new_thread() fails in any way
> (like,pthread_attr_setstacksize() failing), ""can't start new thread"
> is the error we see.
> 
> The difference between test_thread and test_threading here is that
> only test_threading asks for a 16MB stack; test_thread doesn't ask for
> a stack larger than 4MB.

Thanks for the analysis!

> Until all this gets resolved, I strongly suggest reverting this patch
> (if you don't, someone else will ...) and hammering out the problems
> on a new branch instead.  See python-dev email from yesterday for how
> to force a buildbot slave to build a branch.

I see that you've already reverted this - Thanks & sorry I couldn't get
to it quickly.

Regards,
Andrew.

-------------------------------------------------------------------------
Andrew I MacIntyre                     "These thoughts are mine alone..."
E-mail: andymac at bullseye.apana.org.au  (pref) | Snail: PO Box 370
        andymac at pcug.org.au             (alt) |        Belconnen ACT 2616
Web:    http://www.andymac.org/               |        Australia

From andymac at bullseye.apana.org.au  Mon Jun  5 12:14:33 2006
From: andymac at bullseye.apana.org.au (Andrew MacIntyre)
Date: Mon, 05 Jun 2006 21:14:33 +1100
Subject: [Python-Dev] patch #1454481 vs buildbot
In-Reply-To: <17539.6567.511005.458660@montanaro.dyndns.org>
References: <4482DA94.8090707@bullseye.apana.org.au>
	<1f7befae0606040839i4240eaeo343c352dd6a882af@mail.gmail.com>
	<17539.6567.511005.458660@montanaro.dyndns.org>
Message-ID: <44840409.1060506@bullseye.apana.org.au>

skip at pobox.com wrote:

> Andrew, look in your mail for a patch file.

Received, thanks.
Andrew.

-------------------------------------------------------------------------
Andrew I MacIntyre                     "These thoughts are mine alone..."
E-mail: andymac at bullseye.apana.org.au  (pref) | Snail: PO Box 370
        andymac at pcug.org.au             (alt) |        Belconnen ACT 2616
Web:    http://www.andymac.org/               |        Australia

From steve at holdenweb.com  Mon Jun  5 16:42:52 2006
From: steve at holdenweb.com (Steve Holden)
Date: Mon, 05 Jun 2006 15:42:52 +0100
Subject: [Python-Dev] Let's stop eating exceptions in dict lookup
In-Reply-To: <447B7621.3040100@ewtllc.com>
References: <20060529171147.GA1717@code0.codespeak.net>	<008f01c68354$fc01dd70$dc00000a@RaymondLaptop1>	<20060529195453.GB14908@code0.codespeak.net>	<00de01c68363$308d37c0$dc00000a@RaymondLaptop1>	<20060529213428.GA20141@code0.codespeak.net>
	<447B7621.3040100@ewtllc.com>
Message-ID: <448442EC.9030003@holdenweb.com>

Raymond Hettinger wrote:
> Armin Rigo wrote:
[...]
>>At the moment, I'm trying to, but 2.5 HEAD keeps failing mysteriously on
>>the tests I try to time, and even going into an infinite loop consuming
>>all my memory - since the NFS sprint.  Am I allowed to be grumpy here,
>>and repeat that speed should not be used to justify bugs?  I'm proposing
>>a bug fix, I honestly don't care about 0.5% of speed.
>>  
>>
> If it is really 0.5%, then we're fine.  Just remember that PyStone is an 
> amazingly uninformative and crappy benchmark.
> 
Which sadly doesn't distinguish it particularly, since all benchmarks 
tend towards the uninformative and crappy.

> The "justify bugs" terminology is pejorative and inaccurate.  It is 
> clear that the current dict behavior was a conscious design decision and
> documented as such.  Perhaps the decision sucked and should be changed, 
> but it is not a bug.
> 
> 
>> and I consider
>>myself an everyday Python user,
>>  
>>
> 
> Something may have been lost in translation.  Using it every day is not 
> the same as being an everyday user ;-)  There is no doubt that you 
> routinely stress the language in ways that are not at all commonplace.
> 
Just the same, I think Armin's point that the change in behavior might 
induce breakage in "working" programs is one we need to consider 
carefully, even though programs relying on the current behaviour might 
reasonably be considered broken (for some value of "broken").
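
(To make that concrete, this is the sort of code whose behaviour changes --
as I read the thread, lookup currently swallows the comparison error, while
the fix lets it propagate; the class below is purely illustrative:)

    class BadKey(object):
        def __hash__(self):
            return hash(1)          # deliberately collide with the key 1
        def __eq__(self, other):
            raise RuntimeError('comparison blew up')

    d = {1: 'one'}
    # current behaviour: quietly evaluates to False
    # proposed behaviour: the RuntimeError escapes to the caller
    print BadKey() in d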

> All I'm asking is that there be a well thought-out assessment of whether 
> the original design decision struck the appropriate balance between
> practicality
> 
I think the discussion has touched all the major points, but this won't 
stave off the squeals of anguish from those running programs that break
under 2.5.

There are some similarities between this change and the (2.0?) change 
that stopped socket.socket() from accepting two arguments. IMHO we should
accept that the behaviour needs to change and be prepared for
a few anguished squeals. FWIW I suspect they will be even fewer than 
anticipated.

regards
  Steve
-- 
Steve Holden       +44 150 684 7255  +1 800 494 3119
Holden Web LLC/Ltd          http://www.holdenweb.com
Love me, love my blog  http://holdenweb.blogspot.com
Recent Ramblings     http://del.icio.us/steve.holden

From steve at holdenweb.com  Mon Jun  5 17:20:44 2006
From: steve at holdenweb.com (Steve Holden)
Date: Mon, 05 Jun 2006 16:20:44 +0100
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <447FF966.5050807@egenix.com>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>	<17533.48558.364179.717911@montanaro.dyndns.org>	<447DC377.4000504@egenix.com>	<17533.51096.462552.451772@montanaro.dyndns.org>	<447DD153.8080202@egenix.com>	<e5kkr6$14a$1@sea.gmane.org>	<447DE055.4040105@egenix.com>	<e5oa7b$kvl$1@sea.gmane.org>
	<447FF966.5050807@egenix.com>
Message-ID: <44844BCC.2030406@holdenweb.com>

M.-A. Lemburg wrote:
> Fredrik Lundh wrote:
> 
>>M.-A. Lemburg wrote:
>>
>>
>>>Seriously, I've been using and running pybench for years
>>>and even though tweaks to the interpreter do sometimes
>>>result in speedups or slow-downs where you wouldn't expect
>>>them (due to the interpreter using the Python objects),
>>>they are reproducible and often enough have uncovered
>>>that optimizations in one area may well result in slow-downs
>>>in other areas.
>>
>> > Often enough the results are related to low-level features
>> > of the architecture you're using to run the code such as
>> > cache size, cache lines, number of registers in the CPU or
>> > on the FPU stack, etc. etc.
>>
>>and that observation has never made you stop and think about whether 
>>there might be some problem with the benchmarking approach you're using? 
> 
> 
> The approach pybench is using is as follows:
> 
> * Run a calibration step which does the same as the actual
>   test without the operation being tested (ie. call the
>   function running the test, setup the for-loop, constant
>   variables, etc.)
> 
>   The calibration step is run multiple times and is used
>   to calculate an average test overhead time.
> 
I believe my recent changes now take the minimum time rather than 
computing an average, since the minimum seems to be the best reflection 
of achievable speed. I assumed that we wanted to measure achievable 
speed rather than average speed as our benchmark of performance.

> * Run the actual test which runs the operation multiple
>   times.
> 
>   The test is then adjusted to make sure that the
>   test overhead / test run ratio remains within
>   reasonable bounds.
> 
>   If needed, the operation code is repeated verbatim in
>   the for-loop, to decrease the ratio.
> 
> * Repeat the above for each test in the suite
> 
> * Repeat the suite N number of rounds
> 
> * Calculate the average run time of all test runs in all rounds.
> 
Again, we are now using the minimum value. The reasons are similar: if 
extraneous processes interfere with timings then we don't want that to 
be reflected in the given timings. That's why we now report "notional 
minimum round time", since it's highly unlikely that any specific test 
round will give the minimum time for all tests.
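
(Schematically, the measurement now looks something like this -- a minimal
sketch of the min-of-several-rounds idea, not pybench's actual code:)

    import time

    def measure(func, loops=100000, timer=time.clock):
        # time `loops` calls of func with the chosen timer (which timer to
        # use -- clock() vs. wall time -- is exactly the open question here)
        t0 = timer()
        for _ in range(loops):
            func()
        return timer() - t0

    def benchmark(test, rounds=10):
        def _noop():
            pass
        # subtract calibration overhead and keep the minimum over all rounds
        overhead = min(measure(_noop) for _ in range(rounds))
        best = min(measure(test) for _ in range(rounds))
        return best - overhead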

Even with these changes we still see some disturbing variations in 
timing both on Windows and on Unix-like platforms.
> 
>>  after all, if a change to e.g. the try/except code slows things down 
>>or speed things up, is it really reasonable to expect that the time it
>>takes to convert Unicode strings to uppercase should suddenly change due 
>>to cache effects or a changing number of registers in the CPU?  real 
>>hardware doesn't work that way...
> 
> 
> Of course, but then changes to try-except logic can interfere
> with the performance of setting up method calls. This is what
> pybench then uncovers.
> 
> The only problem I see in the above approach is the way
> calibration is done. The run-time of the calibration code
> may be too small w/r to the resolution of the timers used.
> 
> Again, please provide the parameters you've used to run the
> test case and the output. Things like warp factor, overhead,
> etc. could hint to the problem you're seeing.
> 
> 
>>is PyBench perhaps using the following approach:
>>
>>     T = set of tests
>>     for N in range(number of test runs):
>>         for t in T:
>>             t0 = get_process_time()
>>             t()
>>             t1 = get_process_time()
>>             assign t1 - t0 to test t
>>             print assigned time
>>
>>where t1 - t0 is very short?
> 
> 
> See above (or the code in pybench.py). t1-t0 is usually
> around 20-50 seconds:
> 
> """
>         The tests must set .rounds to a value high enough to let the
>         test run between 20-50 seconds. This is needed because
>         clock()-timing only gives rather inaccurate values (on Linux,
>         for example, it is accurate to a few hundredths of a
>         second). If you don't want to wait that long, use a warp
>         factor larger than 1.
> """
> 
First, I'm not sure that this is the case for the default test 
parameters on modern machines. On my current laptop, for example, I see 
a round time of roughly four seconds and a notional minimum round time 
of 3.663 seconds.

Secondly, while this recommendation may be very sensible, with 50 
individual tests a decrease in the warp factor to 1 (the default is 
currently 20) isn't sufficient to increase individual test times to your 
recommended value, and decreasing the warp factor tends also to decrease 
reliability and repeatability.

Thirdly, since each round of the suite at warp factor 1 takes between 80 
and 90 seconds, pybench run this way isn't something one can usefully 
use to quickly evaluate the impact of a single change - particularly 
since even continuing development work on the benchmark machine 
potentially affects the benchmark results in unknown ways.

> 
>>that's not a very good idea, given how get_process_time tends to be 
>>implemented on current-era systems (google for "jiffies")...  but it 
>>definitely explains the bogus subtest results I'm seeing, and the "magic 
>>hardware" behaviour you're seeing.
> 
> 
> That's exactly the reason why tests run for a relatively long
> time - to minimize these effects. Of course, using wall time
> makes this approach vulnerable to other effects such as current
> load of the system, other processes having a higher priority
> interfering with the timed process, etc.
> 
> For this reason, I'm currently looking for ways to measure the
> process time on Windows.
> 
I wish you luck with this search, as we clearly do need to improve 
repeatability of pybench results across all platforms, and particularly 
on Windows.

regards
  Steve
-- 
Steve Holden       +44 150 684 7255  +1 800 494 3119
Holden Web LLC/Ltd          http://www.holdenweb.com
Love me, love my blog  http://holdenweb.blogspot.com
Recent Ramblings     http://del.icio.us/steve.holden

From steve at holdenweb.com  Mon Jun  5 17:20:44 2006
From: steve at holdenweb.com (Steve Holden)
Date: Mon, 05 Jun 2006 16:20:44 +0100
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <447FF966.5050807@egenix.com>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>	<17533.48558.364179.717911@montanaro.dyndns.org>	<447DC377.4000504@egenix.com>	<17533.51096.462552.451772@montanaro.dyndns.org>	<447DD153.8080202@egenix.com>	<e5kkr6$14a$1@sea.gmane.org>	<447DE055.4040105@egenix.com>	<e5oa7b$kvl$1@sea.gmane.org>
	<447FF966.5050807@egenix.com>
Message-ID: <44844BCC.2030406@holdenweb.com>

M.-A. Lemburg wrote:
> Fredrik Lundh wrote:
> 
>>M.-A. Lemburg wrote:
>>
>>
>>>Seriously, I've been using and running pybench for years
>>>and even though tweaks to the interpreter do sometimes
>>>result in speedups or slow-downs where you wouldn't expect
>>>them (due to the interpreter using the Python objects),
>>>they are reproducable and often enough have uncovered
>>>that optimizations in one area may well result in slow-downs
>>>in other areas.
>>
>> > Often enough the results are related to low-level features
>> > of the architecture you're using to run the code such as
>> > cache size, cache lines, number of registers in the CPU or
>> > on the FPU stack, etc. etc.
>>
>>and that observation has never made you stop and think about whether 
>>there might be some problem with the benchmarking approach you're using? 
> 
> 
> The approach pybench is using is as follows:
> 
> * Run a calibration step which does the same as the actual
>   test without the operation being tested (ie. call the
>   function running the test, setup the for-loop, constant
>   variables, etc.)
> 
>   The calibration step is run multiple times and is used
>   to calculate an average test overhead time.
> 
I believe my recent changes now take the minimum time rather than 
computing an average, since the minimum seems to be the best reflection 
of achievable speed. I assumed that we wanted to measure achievable 
speed rather than average speed as our benchmark of performance.

> * Run the actual test which runs the operation multiple
>   times.
> 
>   The test is then adjusted to make sure that the
>   test overhead / test run ratio remains within
>   reasonable bounds.
> 
>   If needed, the operation code is repeated verbatim in
>   the for-loop, to decrease the ratio.
> 
> * Repeat the above for each test in the suite
> 
> * Repeat the suite N number of rounds
> 
> * Calculate the average run time of all test runs in all rounds.
> 
Again, we are now using the minimum value. The reasons are similar: if 
extraneous processes interfere with timings then we don't want that to 
be reflected in the given timings. That's why we now report "notional 
minimum round time", since it's highly unlikely that any specific test 
round will give the minimum time for all tests.
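
In outline, the timing logic now amounts to something like the following
(a rough sketch with made-up names, not the actual pybench code):

    import time

    def timed(func, timer=time.clock):
        t0 = timer()
        func()
        return timer() - t0

    def bench(test, calibration, rounds=10, timer=time.clock):
        # overhead is the *minimum* time for the empty calibration loop
        overhead = min(timed(calibration, timer) for i in range(rounds))
        # report the minimum over all rounds rather than the average
        return min(timed(test, timer) - overhead for i in range(rounds))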

Even with these changes we still see some disturbing variations in 
timing both on Windows and on Unix-like platforms.
> 
>>  after all, if a change to e.g. the try/except code slows things down 
>>or speeds things up, is it really reasonable to expect that the time it
>>takes to convert Unicode strings to uppercase should suddenly change due 
>>to cache effects or a changing number of registers in the CPU?  real 
>>hardware doesn't work that way...
> 
> 
> Of course, but then changes to try-except logic can interfere
> with the performance of setting up method calls. This is what
> pybench then uncovers.
> 
> The only problem I see in the above approach is the way
> calibration is done. The run-time of the calibration code
> may be too small w/r to the resolution of the used timers.
> 
> Again, please provide the parameters you've used to run the
> test case and the output. Things like warp factor, overhead,
> etc. could hint to the problem you're seeing.
> 
> 
>>is PyBench perhaps using the following approach:
>>
>>     T = set of tests
>>     for N in range(number of test runs):
>>         for t in T:
>>             t0 = get_process_time()
>>             t()
>>             t1 = get_process_time()
>>             assign t1 - t0 to test t
>>             print assigned time
>>
>>where t1 - t0 is very short?
> 
> 
> See above (or the code in pybench.py). t1-t0 is usually
> around 20-50 seconds:
> 
> """
>         The tests must set .rounds to a value high enough to let the
>         test run between 20-50 seconds. This is needed because
>         clock()-timing only gives rather inaccurate values (on Linux,
>         for example, it is accurate to a few hundredths of a
>         second). If you don't want to wait that long, use a warp
>         factor larger than 1.
> """
> 
First, I'm not sure that this is the case for the default test 
parameters on modern machines. On my current laptop, for example, I see 
a round time of roughly four seconds and a notional minimum round time 
of 3.663 seconds.

Secondly, while this recommendation may be very sensible, with 50 
individual tests a decrease in the warp factor to 1 (the default is 
currently 20) isn't sufficient to increase individual test times to your 
recommended value, and decreasing the warp factor tends also to decrease 
reliability and repeatability.

Thirdly, since each round of the suite at warp factor 1 takes between 80 
and 90 seconds, pybench run this way isn't something one can usefully 
use to quickly evaluate the impact of a single change - particularly 
since even continuing development work on the benchmark machine 
potentially affects the benchmark results in unknown ways.

> 
>>that's not a very good idea, given how get_process_time tends to be 
>>implemented on current-era systems (google for "jiffies")...  but it 
>>definitely explains the bogus subtest results I'm seeing, and the "magic 
>>hardware" behaviour you're seeing.
> 
> 
> That's exactly the reason why tests run for a relatively long
> time - to minimize these effects. Of course, using wall time
> makes this approach vulnerable to other effects such as current
> load of the system, other processes having a higher priority
> interfering with the timed process, etc.
> 
> For this reason, I'm currently looking for ways to measure the
> process time on Windows.
> 
I wish you luck with this search, as we clearly do need to improve 
repeatability of pybench results across all platforms, and particularly 
on Windows.

regards
  Steve
-- 
Steve Holden       +44 150 684 7255  +1 800 494 3119
Holden Web LLC/Ltd          http://www.holdenweb.com
Love me, love my blog  http://holdenweb.blogspot.com
Recent Ramblings     http://del.icio.us/steve.holden


From tim.peters at gmail.com  Mon Jun  5 19:30:52 2006
From: tim.peters at gmail.com (Tim Peters)
Date: Mon, 5 Jun 2006 13:30:52 -0400
Subject: [Python-Dev] [Python-checkins] Python Regression Test Failures
	refleak (1)
In-Reply-To: <4483301C.8000601@v.loewis.de>
References: <20060604090704.GA17397@python.psfb.org>
	<43E59BC0-9CD0-4524-9DB3-6CFD5E155721@commonground.com.au>
	<1f7befae0606040947i7b3fdbc3p23ba363a4103cd24@mail.gmail.com>
	<ee2a432c0606041037j1200de61jdb6204186173a55c@mail.gmail.com>
	<4483301C.8000601@v.loewis.de>
Message-ID: <1f7befae0606051030pc36fb25x95e9a7e55085c460@mail.gmail.com>

[moving to python-dev]

[Tim, gets different results across whole runs of
    python_d  ../Lib/test/regrtest.py -R 2:40: test_filecmp test_exceptions
]
>>> Does that make any sense?  Not to me -- I don't know of a clear reason
>>> other than wild loads/stores for why such runs should ever differ.

[Neal]
>> The problem generally has to do with modules cacheing things.

[Martin]
> Then the numbers should repeat on each run.

Right.  The point wasn't that I saw a variety of different integers in
the "leak vector" in a single run, it was that I got different results
_across_ runs:  no leaks in either test, a leak in one but not the
other, or (very rarely) leaks in both.
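
For reference, the -R machinery boils down to something like the sketch
below (not the actual regrtest code); the reported "leak vector" is just
a list of total-refcount deltas, one per tracked run:

    import gc, sys

    def leak_vector(testfunc, nwarmup=2, ntracked=40):
        # needs a debug build for sys.gettotalrefcount()
        deltas = []
        gc.collect()
        before = sys.gettotalrefcount()
        for i in range(nwarmup + ntracked):
            testfunc()
            gc.collect()
            after = sys.gettotalrefcount()
            if i >= nwarmup:
                deltas.append(after - before)
            before = after
        return deltas   # all zeros if nothing appears to leak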

> So wild loads/stores are a more compelling explanation. Of course, it
> *should* even be repeatable with wild loads/stores, unless the OS
> shuffles the address space on each run (which at least Linux,
> unfortunately, does).

Well, I just tried this (illegal) C program under VC 7.1:

#include <stdio.h>
#include <stdlib.h>

int main() {
    int *p;
    int i, sum;

    p = (int *)malloc(sizeof(int));
    printf("%p %d ...", p, sum);
    for (sum = 0, i = -12; i < 29; ++i)
        sum += p[i];
    printf("%d\n", sum);

    return 0;
}

Here are several runs.  Note that the address malloc() returns is
always the same, but adding the junk "around" that address often gives
a different result, and the stack trash `sum` contains at first also
varies.  Put those together, and they show that wild loads from stack
trash _and_ heap trash can vary across runs:

C:\Code>cl boogle.c
Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 13.10.3077 for 80x86
Copyright (C) Microsoft Corporation 1984-2002. All rights reserved.

boogle.c
c:\code\boogle.c(9) : warning C4700: local variable 'sum' used without
having been initialized
Microsoft (R) Incremental Linker Version 7.10.3077
Copyright (C) Microsoft Corporation.  All rights reserved.

/out:boogle.exe
boogle.obj

C:\Code>for %i in (1 2 3 4 5 6 7 8 9) do boogle.exe

C:\Code>boogle.exe
00322EA8 315894624 ...75845519

C:\Code>boogle.exe
00322EA8 316050874 ...125913836

C:\Code>boogle.exe
00322EA8 316050874 ...125913836

C:\Code>boogle.exe
00322EA8 316207124 ...8930763

C:\Code>boogle.exe
00322EA8 316207124 ...8930763

C:\Code>boogle.exe
00322EA8 316207124 ...8930763

C:\Code>boogle.exe
00322EA8 316363374 ...42224689

C:\Code>boogle.exe
00322EA8 316363374 ...42224689

C:\Code>boogle.exe
00322EA8 316519624 ...92948238

How did I pick -12 and 29 for i's bounds?  Easy:  I started with much
larger bounds, and reduced them haphazardly until the program stopped
segfaulting :-)

Now I hate to think this is "the cause" for regrtest -R varying across
identical runs, but I don't have many other suspects in mind.  For
example, I tried forcing random.seed() at the start of
regrtest.main(), but that didn't make any visible difference.

From pje at telecommunity.com  Tue Jun  6 00:33:29 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Mon, 05 Jun 2006 18:33:29 -0400
Subject: [Python-Dev] wsgiref documentation
In-Reply-To: <20060605120852.GA7519@localhost.localdomain>
References: <5db086ad0606021233j4e8c2ffvc386919b8808db9b@mail.gmail.com>
	<5db086ad0606021233j4e8c2ffvc386919b8808db9b@mail.gmail.com>
Message-ID: <5.1.1.6.0.20060605110509.034a72c8@mail.telecommunity.com>

At 08:08 AM 6/5/2006 -0400, A.M. Kuchling wrote:
>I had the start of an outline in sandbox/wsgiref-docs, but am not
>working on them at the moment because no one is willing to say if the
>list of documented classes is complete (or includes too much).

Huh?  This is the first I've heard of it.

I was already working on some documentation in my local tree, though, so 
I've now started merging your work into it and checked in a snapshot at:

     http://svn.eby-sarna.com/wsgiref/docs/

I'll merge more into it later.  If anyone has any part of the remaining 
stuff that they'd like to volunteer to document, please let me know so I 
don't duplicate your work.  Thanks.


From amk at amk.ca  Tue Jun  6 02:19:29 2006
From: amk at amk.ca (A.M. Kuchling)
Date: Mon, 5 Jun 2006 20:19:29 -0400
Subject: [Python-Dev] wsgiref documentation
In-Reply-To: <5.1.1.6.0.20060605110509.034a72c8@mail.telecommunity.com>
References: <5db086ad0606021233j4e8c2ffvc386919b8808db9b@mail.gmail.com>
	<5db086ad0606021233j4e8c2ffvc386919b8808db9b@mail.gmail.com>
	<5.1.1.6.0.20060605110509.034a72c8@mail.telecommunity.com>
Message-ID: <20060606001929.GA14289@Andrew-iBook2.local>

On Mon, Jun 05, 2006 at 06:33:29PM -0400, Phillip J. Eby wrote:
> At 08:08 AM 6/5/2006 -0400, A.M. Kuchling wrote:
> >I had the start of an outline in sandbox/wsgiref-docs, but am not
> >working on them at the moment because no one is willing to say if the
> >list of documented classes is complete (or includes too much).
> 
> Huh?  This is the first I've heard of it.

<http://mail.python.org/pipermail/python-dev/2006-April/064536.html>,
and the checkins to sandbox/wsgiref-docs/ would also have been an
indicator.  But if Doug wants to write the docs, he certainly should;
we can always use more contributors.

--amk

From jimjjewett at gmail.com  Tue Jun  6 02:49:47 2006
From: jimjjewett at gmail.com (Jim Jewett)
Date: Mon, 5 Jun 2006 20:49:47 -0400
Subject: [Python-Dev] Stdlib Logging questions (PEP 337 SoC)
In-Reply-To: <17539.38963.154864.27536@montanaro.dyndns.org>
References: <fb6fbf560606041827i41a1a4abxce9563fbf91b7ed1@mail.gmail.com>
	<17539.38963.154864.27536@montanaro.dyndns.org>
Message-ID: <fb6fbf560606051749x54337035l2a1a40b55a09667a@mail.gmail.com>

On 6/4/06, skip at pobox.com <skip at pobox.com> wrote:
>     Jim> (2)  Should NAME be module.__name__?

> Seems reasonable.

(The clipped part was that the output will look a bit different when,
say, the module is run as a script and the name is __main__).

But if no one objects, I'll take this as a "good enough", since I
really don't like repeating the module name.

>     Jim> (3)  Should she put out a message when a (now logged) module is
>     Jim>      loaded?  If so, at what level?

> -1.  I don't think it buys you anything.  If you need it, the -v command
> line flag should be sufficient.

The timestamp could be useful, though it doesn't show up in the
default basicformat.

Realistically, I think the main reason is consistency with JSPs, which
(at high enough logging levels) produce huge amounts of trace
information.  If no one else chimes in, I'll consider the parallel
uncompelling.

(6)  And a new issue -- the PEP says py.modulename -- is it reasonable
to use something more specific?

The use case is again from asyncore; the existing API has separate
methods for "logging" a hit and "logging" instrumentation messages.
Apache would send these to two separate files (access_log and
error_log); it seems reasonable to support at least the possibility of
handlers treating the two types of messages separately.

If no explicit changes are made locally,

    py.asyncore.dispatcher.hits
    py.asyncore.dispatcher.messages

would both roll up to (PEP 337) py.asyncore (and then to the root logger).
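
In logging terms that's just the normal propagation up the dotted name --
roughly this (a sketch, with a made-up message):

    import logging

    hits = logging.getLogger('py.asyncore.dispatcher.hits')
    # propagates up through py.asyncore.dispatcher, py.asyncore, py, root
    hits.info('GET /index.html 200')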

-jJ

From jimjjewett at gmail.com  Tue Jun  6 03:04:12 2006
From: jimjjewett at gmail.com (Jim Jewett)
Date: Mon, 5 Jun 2006 21:04:12 -0400
Subject: [Python-Dev] Stdlib Logging questions (PEP 337 SoC)
In-Reply-To: <fb6fbf560606051757x6ea829e2r88dd9de3c9717a12@mail.gmail.com>
References: <5.1.1.6.0.20060604231709.02f07700@mail.telecommunity.com>
	<fb6fbf560606051757x6ea829e2r88dd9de3c9717a12@mail.gmail.com>
Message-ID: <fb6fbf560606051804n78624f08wc23af049d7b16d6e@mail.gmail.com>

On 6/4/06, Phillip J. Eby <pje at telecommunity.com> wrote:
> can we please delay the import until it's actually needed?  i.e.,
> until after some logging option is enabled?

I have asked her to make this change.

I don't like the extra conditional dance it causes, but I agree that
not wanting to log is a valid use case.
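
The dance in question is roughly this (hypothetical names, not her actual
patch):

    _logger = None

    def _get_logger():
        # defer importing logging until a message is actually emitted
        global _logger
        if _logger is None:
            import logging
            _logger = logging.getLogger('py.modulename')
        return _logger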

On the other hand, the one-time import cost is pretty low for a
long-running process, and eventually gets paid if any other module
calls logging.  Would it make more sense to offer a null package that
can be installed earlier in the search path if you want to truly
disable logging?

-jJ

From pje at telecommunity.com  Tue Jun  6 03:40:35 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Mon, 05 Jun 2006 21:40:35 -0400
Subject: [Python-Dev] Stdlib Logging questions (PEP 337 SoC)
In-Reply-To: <fb6fbf560606051804n78624f08wc23af049d7b16d6e@mail.gmail.com>
References: <fb6fbf560606051757x6ea829e2r88dd9de3c9717a12@mail.gmail.com>
	<5.1.1.6.0.20060604231709.02f07700@mail.telecommunity.com>
	<fb6fbf560606051757x6ea829e2r88dd9de3c9717a12@mail.gmail.com>
Message-ID: <5.1.1.6.0.20060605213503.04812ba0@mail.telecommunity.com>

At 09:04 PM 6/5/2006 -0400, Jim Jewett wrote:
>On 6/4/06, Phillip J. Eby <pje at telecommunity.com> wrote:
> > can we please delay the import until it's actually needed?  i.e.,
> > until after some logging option is enabled?
>
>I have asked her to make this change.
>
>I don't like the extra conditional dance it causes, but I agree that
>not wanting to log is a valid use case.
>
>On the other hand, the one-time import cost is pretty low for a
>long-running process, and eventually gets paid if any other module
>calls logging.  Would it make more sense to offer a null package that
>can be installed earlier in the search path if you want to truly
>disable logging?

I notice you've completely avoided the question of whether this should be 
being done at all.  It sounds like Fredrik is -1 on this even for the 
modules that I'm not -1 on.

As far as I can tell, this PEP hasn't actually been discussed.  Please 
don't waste time changing modules for which there is no consensus that this 
*should* be done.

The original discussion that occurred prior to PEP 337's creation discussed 
only modules that *already* do some kind of logging.  There was no 
discussion of changing *all* debugging output to use the logging module, 
nor of adding logging to modules that do not even have any debugging output 
(e.g. pkgutil).


From theller at python.net  Tue Jun  6 12:22:19 2006
From: theller at python.net (Thomas Heller)
Date: Tue, 06 Jun 2006 12:22:19 +0200
Subject: [Python-Dev] Include/structmember.h, Py_ssize_t
Message-ID: <e63l0l$n24$1@sea.gmane.org>

In Include/structmember.h, there is no T_... constant for Py_ssize_t
member fields.  Should there be one?

Thomas


From amk at amk.ca  Tue Jun  6 13:20:44 2006
From: amk at amk.ca (A.M. Kuchling)
Date: Tue, 6 Jun 2006 07:20:44 -0400
Subject: [Python-Dev] Stdlib Logging questions (PEP 337 SoC)
In-Reply-To: <fb6fbf560606051749x54337035l2a1a40b55a09667a@mail.gmail.com>
References: <fb6fbf560606041827i41a1a4abxce9563fbf91b7ed1@mail.gmail.com>
	<17539.38963.154864.27536@montanaro.dyndns.org>
	<fb6fbf560606051749x54337035l2a1a40b55a09667a@mail.gmail.com>
Message-ID: <20060606112044.GA14316@Andrew-iBook2.local>

On Mon, Jun 05, 2006 at 08:49:47PM -0400, Jim Jewett wrote:
> If no explicit changes are made locally,
> 
>    py.asyncore.dispatcher.hits
>    py.asyncore.dispatcher.messages

These handler names seem really specific, though.  Why have
'dispatcher' in there?

Part of Jackilyn's task should be to refine and improve the PEP.
Logging is probably irrelevant for many modules, but which ones are
those?  What conventions should be followed for handler names?  Etc...

--amk

From fredrik at pythonware.com  Tue Jun  6 13:37:34 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Tue, 06 Jun 2006 13:37:34 +0200
Subject: [Python-Dev] Include/structmember.h, Py_ssize_t
In-Reply-To: <e63l0l$n24$1@sea.gmane.org>
References: <e63l0l$n24$1@sea.gmane.org>
Message-ID: <e63pdt$6f8$1@sea.gmane.org>

Thomas Heller wrote:

> In Include/structmember.h, there is no T_... constant for Py_ssize_t
> member fields.  Should there be one?

do you need one?  if so, I see no reason why you cannot add it...

</F>


From michele.simionato at gmail.com  Tue Jun  6 15:17:25 2006
From: michele.simionato at gmail.com (Michele Simionato)
Date: Tue, 6 Jun 2006 13:17:25 +0000 (UTC)
Subject: [Python-Dev] feature request: inspect.isgenerator
References: <loom.20060529T092512-825@post.gmane.org>
	<e5fidd$aod$1@sea.gmane.org> <e5fjnq$f8j$1@sea.gmane.org>
	<e5fkdh$gps$1@sea.gmane.org>
	<ee2a432c0605292148l1acd5d87t903261106b3eb7c9@mail.gmail.com>
	<loom.20060601T090603-423@post.gmane.org>
	<e5m9pk$cgl$1@sea.gmane.org>
	<loom.20060601T112347-75@post.gmane.org>
	<5.1.1.6.0.20060601115712.03c01008@mail.telecommunity.com>
Message-ID: <loom.20060606T150007-239@post.gmane.org>

Phillip J. Eby <pje <at> telecommunity.com> writes:

> I think the whole concept of inspecting for this is broken.  *Any* 
> function can return a generator-iterator.  A generator function is just a 
> function that happens to always return one.
> In other words, the confusion is in the idea of introspecting for this in 
> the first place, not that generator functions are of FunctionType.  The 
> best way to avoid the confusion is to avoid thinking that you can 
> distinguish one type of function from another without explicit guidance 
> from the function's author.
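
Indeed, nothing stops an ordinary function from returning someone else's
generator-iterator, so inspecting the function itself proves little
(made-up example):

    def pages():            # a generator function
        yield 'Hello '
        yield 'World!'

    def page():             # a plain function that nevertheless
        return pages()      # returns a generator-iterator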

Nolo contendere.

I am convinced and I am taking back my feature request.


     Michele Simionato


From michele.simionato at gmail.com  Tue Jun  6 15:25:23 2006
From: michele.simionato at gmail.com (Michele Simionato)
Date: Tue, 6 Jun 2006 13:25:23 +0000 (UTC)
Subject: [Python-Dev] feature request: inspect.isgenerator
References: <loom.20060529T092512-825@post.gmane.org><e5fidd$aod$1@sea.gmane.org><e5fjnq$f8j$1@sea.gmane.org><e5fkdh$gps$1@sea.gmane.org><ee2a432c0605292148l1acd5d87t903261106b3eb7c9@mail.gmail.com><loom.20060601T090603-423@post.gmane.org><e5m9pk$cgl$1@sea.gmane.org>
	<loom.20060601T112347-75@post.gmane.org>
	<e5n28j$cld$1@sea.gmane.org>
Message-ID: <loom.20060606T151809-0@post.gmane.org>

Terry Reedy <tjreedy <at> udel.edu> writes:
> tout court?? is not English or commonly used at least in America

It is French:

http://encarta.msn.com/dictionary_561508877/tout_court.html

I thought it was common in English too, but clearly I was mistaken.
 
> Ok, you mean generator function, which produces generators, not generators 
> themselves.  So what you want is a new isgenfunc function.  That makes more 
> sense, in a sense, since I can see that you would want to wrap genfuncs 
> differently from regular funcs.  But then I wonder why you don't use a 
> different decorator since you know when you are writing a generator 
> function.

Because in a later refactoring I may want to replace a function with a
generator function or vice versa, and I don't want to use a different
decorator. The application I had in mind was a Web framework 
where you can write something like

@expose
def page(self):
   return 'Hello World!'

or

@expose
def page(self):
   yield 'Hello '
   yield 'World!'

indifferently. I seem to remember CherryPy has something like that.
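
And of course the framework can simply look at what the call returns rather
than inspecting the function -- a sketch, with render/handler standing in
for whatever the framework really uses:

    import types

    def render(handler, obj):
        result = handler(obj)
        if isinstance(result, types.GeneratorType):
            return ''.join(result)   # generator case: join the yielded chunks
        return result                # plain-string case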

          Michele Simionato


From jimjjewett at gmail.com  Tue Jun  6 15:41:27 2006
From: jimjjewett at gmail.com (Jim Jewett)
Date: Tue, 6 Jun 2006 09:41:27 -0400
Subject: [Python-Dev] ssize_t: ints in header files
Message-ID: <fb6fbf560606060641i76eb4207qcb41f4b5631d6b1@mail.gmail.com>

(Neal Norwitz asked about changing some additional ints and longs to ssize_t)

Martin v. Löwis replied:

> ... column numbers shouldn't exceed 16-bits, and line #s
> shouldn't exceed 31 bits.

Why these particular numbers?

As nearly as I can tell, 8 bits is already too many columns for human
readability.

If python is being used as an intermediate language (code is
automatically generated, and not read by humans), then I don't see any
justification for any particular limits, except as an implementation
detail driven by convenience.

Similar arguments apply to row count, #args, etc.

With the exception of row count and possibly instruction count, the
only readability reason to allow even 256 is that we don't want to
accidentally encourage people to aim for the limit.  (I really don't
want people to answer the challenge and start inventing cases where a
huge function might be justified, just so that they can blog about
their workarounds; I would prefer that obfuscated python contests be
clearly labeled so that beginners aren't turned off.)

-jJ

From amk at amk.ca  Tue Jun  6 16:02:10 2006
From: amk at amk.ca (A.M. Kuchling)
Date: Tue, 6 Jun 2006 10:02:10 -0400
Subject: [Python-Dev] DC Python sprint on July 29th
Message-ID: <20060606140210.GB25434@rogue.amk.ca>

The Arlington sprint this past Saturday went well, though the number
of Python developers was small and people mostly worked on other
projects.  

The CanDo group, the largest at the sprint with about 10 people, will
be having a three-day sprint July 28-30 (Fri-Sun) at the same
location.  We should take advantage of the opportunity to have another
Python sprint.  Let's schedule it for Saturday July 29th (the day
after OSCON ends in Oregon).

--amk


From jimjjewett at gmail.com  Tue Jun  6 16:13:55 2006
From: jimjjewett at gmail.com (Jim Jewett)
Date: Tue, 6 Jun 2006 10:13:55 -0400
Subject: [Python-Dev] Stdlib Logging questions (PEP 337 SoC)
In-Reply-To: <5.1.1.6.0.20060605213503.04812ba0@mail.telecommunity.com>
References: <5.1.1.6.0.20060604231709.02f07700@mail.telecommunity.com>
	<fb6fbf560606051757x6ea829e2r88dd9de3c9717a12@mail.gmail.com>
	<5.1.1.6.0.20060605213503.04812ba0@mail.telecommunity.com>
Message-ID: <fb6fbf560606060713l4466cafx904c5b52adaa6b79@mail.gmail.com>

On 6/5/06, Phillip J. Eby <pje at telecommunity.com> wrote:

> I notice you've completely avoided the question of whether this should be
> being done at all.

> As far as I can tell, this PEP hasn't actually been discussed.  Please
> don't waste time changing modules for which there is no consensus that this
> *should* be done.

Under a specific PEP number, no.  The concept of adding logging to the
stdlib, yes, periodically.  The typical outcome is that some people
say "why bother, besides it would slow things down" and others say
"yes, please."

I certainly agree that the PEP as written should not be treated as
fully pronounced.

I do think the discussion was stalled until we have a specific
implementation to discuss.  Google is generously funding one, and
Jackilyn is providing it.  I'm checking in here because when changes
are needed, I would prefer that she know as soon as possible.
Jackilyn has made it quite clear that she is willing to change her
direction if we ask her to, she just needs to know what the goals are.

> The original discussion that occurred prior to PEP 337's creation discussed
> only modules that *already* do some kind of logging.  There was no
> discussion of changing *all* debugging output to use the logging module,
> nor of adding logging to modules that do not even have any debugging output
> (e.g. pkgutil).

You may be reading too much ambition into the proposal.

For pkgutil in particular, the change is that instead of writing to
stderr (which can scroll off and get lost), it will write to the
errorlog.  In a truly default setup, that still ends up writing to
stderr.

The difference is that if a sysadmin does want to track problems, the
change can now be made in one single place.  Today, turning on that
instrumentation would require separate changes to every relevant
module, and requires you to already know what/where they are.
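
Concretely, the change is on the order of this sketch (message text and
helper made up for illustration):

    import logging, sys

    log = logging.getLogger('py.pkgutil')

    def report(filename, msg):
        # before: print >> sys.stderr, '%s: %s' % (filename, msg)
        log.error('%s: %s', filename, msg)

    # the single place a sysadmin touches to route everything under py.*:
    logging.getLogger('py').addHandler(logging.StreamHandler(sys.stderr))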

I did ask whether extra debugging/instrumentation information should
be added where it isn't already present.  I personally think the
answer is yes, but it sounds like the consensus answer is "not now" --
so she generally won't.

-jJ

From fredrik at pythonware.com  Tue Jun  6 16:27:46 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Tue, 06 Jun 2006 16:27:46 +0200
Subject: [Python-Dev] Stdlib Logging questions (PEP 337 SoC)
In-Reply-To: <fb6fbf560606060713l4466cafx904c5b52adaa6b79@mail.gmail.com>
References: <5.1.1.6.0.20060604231709.02f07700@mail.telecommunity.com>	<fb6fbf560606051757x6ea829e2r88dd9de3c9717a12@mail.gmail.com>	<5.1.1.6.0.20060605213503.04812ba0@mail.telecommunity.com>
	<fb6fbf560606060713l4466cafx904c5b52adaa6b79@mail.gmail.com>
Message-ID: <e643d2$d7j$1@sea.gmane.org>

Jim Jewett wrote:

> For pkgutil in particular, the change is that instead of writing to
> stderr (which can scroll off and get lost), it will write to the
> errorlog.  In a truly default setup, that still ends up writing to
> stderr.

umm.  if pkgutil fails to open a pkg file, isn't it rather likely that 
the program will terminate with an ImportError a few milliseconds later?

</F>


From jimjjewett at gmail.com  Tue Jun  6 16:36:06 2006
From: jimjjewett at gmail.com (Jim Jewett)
Date: Tue, 6 Jun 2006 10:36:06 -0400
Subject: [Python-Dev] Stdlib Logging questions (PEP 337 SoC)
Message-ID: <fb6fbf560606060736j743edf18j2860b40f012aefea@mail.gmail.com>

>>    py.asyncore.dispatcher.hits
>>    py.asyncore.dispatcher.messages

> These handler names seem really specific, though.  Why have
> 'dispatcher' in there?

The existing logging that she is replacing is done through methods of
the dispatcher class.  The dispatcher class is only a portion of the
whole module.

> Part of Jackilyn's task should be to refine and improve the PEP.

Agreed.

> Logging is probably irrelevant for many modules, but which ones are
> those?  What conventions should be followed for handler names?  Etc...

Are you suggesting that the logging module should ship with a standard
configuration that does something specific for py.* loggers?  Or even
one that has different handlers for different stdlib modules?

I had assumed this would be considered too intrusive a change.  If no
one chimes in, then I'll ask her to put at least investigating this
into at least the second half of the summer.

-jJ

From jeremy at alum.mit.edu  Tue Jun  6 16:36:59 2006
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Tue, 6 Jun 2006 10:36:59 -0400
Subject: [Python-Dev] Stdlib Logging questions (PEP 337 SoC)
In-Reply-To: <e643d2$d7j$1@sea.gmane.org>
References: <5.1.1.6.0.20060604231709.02f07700@mail.telecommunity.com>
	<fb6fbf560606051757x6ea829e2r88dd9de3c9717a12@mail.gmail.com>
	<5.1.1.6.0.20060605213503.04812ba0@mail.telecommunity.com>
	<fb6fbf560606060713l4466cafx904c5b52adaa6b79@mail.gmail.com>
	<e643d2$d7j$1@sea.gmane.org>
Message-ID: <e8bf7a530606060736g71d25891n7958f923ff48fe42@mail.gmail.com>

On 6/6/06, Fredrik Lundh <fredrik at pythonware.com> wrote:
> Jim Jewett wrote:
>
> > For pkgutil in particular, the change is that instead of writing to
> > stderr (which can scroll off and get lost), it will write to the
> > errorlog.  In a truly default setup, that still ends up writing to
> > stderr.
>
> umm.  if pkgutil fails to open a pkg file, isn't it rather likely that
> the program will terminate with an ImportError a few milliseconds later?

Maybe a mean time of a few milliseconds later.  It really depends on
the operating system's scheduler.  If the failure occurs just at the
end of a scheduler quantum, the process may not run again for some
time.  This would happen regardless of whether the operating system
was modern.

Jeremy

From skip at pobox.com  Tue Jun  6 16:40:24 2006
From: skip at pobox.com (skip at pobox.com)
Date: Tue, 6 Jun 2006 09:40:24 -0500
Subject: [Python-Dev] Stdlib Logging questions (PEP 337 SoC)
In-Reply-To: <fb6fbf560606060713l4466cafx904c5b52adaa6b79@mail.gmail.com>
References: <5.1.1.6.0.20060604231709.02f07700@mail.telecommunity.com>
	<fb6fbf560606051757x6ea829e2r88dd9de3c9717a12@mail.gmail.com>
	<5.1.1.6.0.20060605213503.04812ba0@mail.telecommunity.com>
	<fb6fbf560606060713l4466cafx904c5b52adaa6b79@mail.gmail.com>
Message-ID: <17541.37848.951188.701281@montanaro.dyndns.org>


    >> As far as I can tell, this PEP hasn't actually been discussed.
    >> Please don't waste time changing modules for which there is no
    >> consensus that this *should* be done.

    Jim> Under a specific PEP number, no.  The concept of adding logging to
    Jim> the stdlib, yes, periodically.  The typical outcome is that some
    Jim> people say "why bother, besides it would slow things down" and
    Jim> others say "yes, please."

I'll chime in and suggest that any checkins be done on a branch for now.  I
have a distinct love/hate relationship with the logging module, so I'm
ambivalent about whether or not

    print >> sys.stderr, ...

should be replaced with

    stderr_logger.debug("...")

I'd have to see it in action before deciding.

I notice in the PEP that BaseHTTPServer is on the list of candidate modules.
Please don't mess with anything that logs in the common Apache log format.
There are lots of tools out there that munch on that sort of output.
Changing it would just break them.

Skip


From fredrik at pythonware.com  Tue Jun  6 16:41:14 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Tue, 06 Jun 2006 16:41:14 +0200
Subject: [Python-Dev] Stdlib Logging questions (PEP 337 SoC)
In-Reply-To: <fb6fbf560606060736j743edf18j2860b40f012aefea@mail.gmail.com>
References: <fb6fbf560606060736j743edf18j2860b40f012aefea@mail.gmail.com>
Message-ID: <e64469$h94$1@sea.gmane.org>

Jim Jewett wrote:

> The existing logging that she is replacing is done through methods of
> the dispatcher class.  The dispatcher class is only a portion of the
> whole module.

the dispatcher class is never used on its own; it's a base class for 
user-defined communication classes.

asyncore users don't think in terms of instances of a single dispatch 
class; they think in terms of their own communication classes, which 
inherit from asyncore.dispatch or some subclass thereof.

using a single handler name for all subclasses doesn't strike me as 
especially useful.

</F>


From mal at egenix.com  Tue Jun  6 16:59:43 2006
From: mal at egenix.com (M.-A. Lemburg)
Date: Tue, 06 Jun 2006 16:59:43 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <e5q1e7$tks$1@sea.gmane.org>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>	<447DC377.4000504@egenix.com>	<17533.51096.462552.451772@montanaro.dyndns.org>	<447DD153.8080202@egenix.com>	<e5kkr6$14a$1@sea.gmane.org>	<447DE055.4040105@egenix.com>	<e5oa7b$kvl$1@sea.gmane.org>	<447FF966.5050807@egenix.com>	<e5p06h$tp1$1@sea.gmane.org>	<4480123A.1090109@egenix.com>	<d78db4cd0606020506n608a880cy8c2017b1e8beca79@mail.gmail.com>	<44803D2A.5010108@egenix.com>	<4480501B.9080709@egenix.com>
	<e5q1e7$tks$1@sea.gmane.org>
Message-ID: <4485985F.5080102@egenix.com>

Fredrik Lundh wrote:
> M.-A. Lemburg wrote:
> 
>> I just had an idea: if we could get each test to run
>> inside a single time slice assigned by the OS scheduler,
>> then we could benefit from the better resolution of the
>> hardware timers while still keeping the noise to a
>> minimum.
>>
>> I suppose this could be achieved by:
>>
>> * making sure that each tests needs less than 10ms to run
> 
> iirc, very recent linux kernels have a 1 millisecond tick.  so does 
> alphas, and probably some other platforms.

Indeed, that's also what the microbench.py example that I posted
demonstrates.

And of course, you have to call time.sleep() *before* running
the test (which microbench.py does).
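
That is, something like this (a sketch, not the actual microbench.py code):

    import time

    def timed_call(func, timer=time.time):
        time.sleep(0.001)   # yield first, so the test starts on a fresh slice
        t0 = timer()
        func()
        return timer() - t0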

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 06 2006)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________

::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! ::::

From jimjjewett at gmail.com  Tue Jun  6 17:04:41 2006
From: jimjjewett at gmail.com (Jim Jewett)
Date: Tue, 6 Jun 2006 11:04:41 -0400
Subject: [Python-Dev] Stdlib Logging questions (PEP 337 SoC)
In-Reply-To: <17541.37848.951188.701281@montanaro.dyndns.org>
References: <5.1.1.6.0.20060604231709.02f07700@mail.telecommunity.com>
	<fb6fbf560606051757x6ea829e2r88dd9de3c9717a12@mail.gmail.com>
	<5.1.1.6.0.20060605213503.04812ba0@mail.telecommunity.com>
	<fb6fbf560606060713l4466cafx904c5b52adaa6b79@mail.gmail.com>
	<17541.37848.951188.701281@montanaro.dyndns.org>
Message-ID: <fb6fbf560606060804h6fa5b102t4d2327d1eb3d1974@mail.gmail.com>

On 6/6/06, skip at pobox.com <skip at pobox.com> wrote:
> I notice in the PEP that BaseHTTPServer is on the list of candidate modules.
> Please don't mess with anything that logs in the common Apache log format.
> There are lots of tools out there that munch on that sort of output.
> Changing it would just break them.

In general, the format of the messages shouldn't change; it is just
that there should be a common choke point for controlling them.

So by default, BaseHttpServer would still put out Apache log format,
and it would still be occasionally interrupted by output from other
modules.

This does argue in favor of allowing the more intrusive additions to
handlers and default configuration.  It would be useful to have a
handler that emitted only Apache log format records, and saved them
(by default) to a rotating file rather than stderr. (And it *might*
make sense to switch asyncore's hitlog default output to this format.)
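
Something along these lines, say (a sketch; the 'py.BaseHTTPServer' name
just follows the PEP's proposed py.* convention):

    import logging, logging.handlers

    handler = logging.handlers.RotatingFileHandler('access.log',
                                                   maxBytes=10 * 1024 * 1024,
                                                   backupCount=5)
    # BaseHTTPServer already formats its records in Apache style,
    # so the handler writes just the bare message
    handler.setFormatter(logging.Formatter('%(message)s'))
    logging.getLogger('py.BaseHTTPServer').addHandler(handler)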

-jJ

From fredrik at pythonware.com  Tue Jun  6 17:11:22 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Tue, 06 Jun 2006 17:11:22 +0200
Subject: [Python-Dev] Stdlib Logging questions (PEP 337 SoC)
In-Reply-To: <fb6fbf560606060804h6fa5b102t4d2327d1eb3d1974@mail.gmail.com>
References: <5.1.1.6.0.20060604231709.02f07700@mail.telecommunity.com>	<fb6fbf560606051757x6ea829e2r88dd9de3c9717a12@mail.gmail.com>	<5.1.1.6.0.20060605213503.04812ba0@mail.telecommunity.com>	<fb6fbf560606060713l4466cafx904c5b52adaa6b79@mail.gmail.com>	<17541.37848.951188.701281@montanaro.dyndns.org>
	<fb6fbf560606060804h6fa5b102t4d2327d1eb3d1974@mail.gmail.com>
Message-ID: <e645uq$mv4$2@sea.gmane.org>

Jim Jewett wrote:

> This does argue in favor of allowing the more intrusive additions to
> handlers and default configuration.  It would be useful to have a
> handler that emitted only Apache log format records, and saved them
> (by default) to a rotating file rather than stderr.(And it *might*
> make sense to switch asyncore's hitlog default output to this format.)

argh! can you please stop suggesting changes to API:s that you have 
never used ?

</F>


From mal at egenix.com  Tue Jun  6 17:15:30 2006
From: mal at egenix.com (M.-A. Lemburg)
Date: Tue, 06 Jun 2006 17:15:30 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <e5rjkj$rph$1@sea.gmane.org>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>	<17533.48558.364179.717911@montanaro.dyndns.org>	<447DC377.4000504@egenix.com>	<17533.51096.462552.451772@montanaro.dyndns.org>	<447DD153.8080202@egenix.com>	<e5kkr6$14a$1@sea.gmane.org>	<447DE055.4040105@egenix.com><e5oa7b$kvl$1@sea.gmane.org>	<447FF966.5050807@egenix.com>	<e5p06h$tp1$1@sea.gmane.org>	<448141F6.7030806@v.loewis.de>
	<e5rjkj$rph$1@sea.gmane.org>
Message-ID: <44859C12.8080306@egenix.com>

Fredrik Lundh wrote:
Martin v. Löwis wrote:
> 
>>> since process time is *sampled*, not measured, process time isn't exactly
>>> invulnerable either.
>> I can't share that view. The scheduler knows *exactly* what thread is
>> running on the processor at any time, and that thread won't change
>> until the scheduler makes it change. So if you discount time spent
>> in interrupt handlers (which might be falsely accounted for the
>> thread that happens to run at the point of the interrupt), then
>> process time *is* measured, not sampled, on any modern operating system:
>> it is updated whenever the scheduler schedules a different thread.
> 
> updated with what?  afaik, the scheduler doesn't have to wait for a 
> timer interrupt to reschedule things (think blocking, or interrupts that 
> request rescheduling, or new processes, or...) -- but it's always the 
> thread that runs when the timer interrupt arrives that gets the entire 
> jiffy time.  for example, this script runs for ten seconds, usually 
> without using any process time at all:
> 
>      import time
>      for i in range(1000):
>          for i in range(1000):
>              i+i+i+i
>          time.sleep(0.005)
> 
> while the same program, without the sleep, will run for a second or two, 
> most of which is assigned to the process.
> 
> if the scheduler used the TSC to keep track of times, it would be 
> *measuring* process time.  but unless something changed very recently, 
> it doesn't.  it's all done by sampling, typically 100 or 1000 times per 
> second.

This example is a bit misleading, since chances are high that
the benchmark will get a good priority bump by the scheduler.

>> On Linux, process time is accounted in jiffies. Unfortunately, for
>> compatibility, times(2) converts that to clock_t, losing precision.
>
> times(2) reports time in 1/CLOCKS_PER_SEC second units, while jiffies 
> are counted in 1/HZ second units.  on my machine, CLOCKS_PER_SEC is a 
> thousand times larger than HZ.  what does this code print on your machine?

You should use getrusage() for user and system time or even
better clock_gettime() (the POSIX real-time APIs).

From the man-page of times:

RETURN VALUE
       The function times returns the number of clock ticks that have
       elapsed since an arbitrary point in the past.

...

       The number of clock ticks per second can be obtained using
              sysconf(_SC_CLK_TCK);

On my Linux system this returns 100.
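
A minimal sketch of the getrusage() route:

    import resource

    def process_time():
        usage = resource.getrusage(resource.RUSAGE_SELF)
        return usage.ru_utime + usage.ru_stime   # user + system CPU seconds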

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 06 2006)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________

::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! ::::

From fredrik at pythonware.com  Tue Jun  6 17:20:21 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Tue, 06 Jun 2006 17:20:21 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <44859C12.8080306@egenix.com>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>	<17533.48558.364179.717911@montanaro.dyndns.org>	<447DC377.4000504@egenix.com>	<17533.51096.462552.451772@montanaro.dyndns.org>	<447DD153.8080202@egenix.com>	<e5kkr6$14a$1@sea.gmane.org>	<447DE055.4040105@egenix.com><e5oa7b$kvl$1@sea.gmane.org>	<447FF966.5050807@egenix.com>	<e5p06h$tp1$1@sea.gmane.org>	<448141F6.7030806@v.loewis.de>	<e5rjkj$rph$1@sea.gmane.org>
	<44859C12.8080306@egenix.com>
Message-ID: <e646fk$qa8$1@sea.gmane.org>

M.-A. Lemburg wrote:

> This example is a bit misleading, since chances are high that
> the benchmark will get a good priority bump by the scheduler.

which makes it run infinitely fast ?  what planet are you buying your 
hardware on ? ;-)

</F>


From pje at telecommunity.com  Tue Jun  6 18:18:27 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Tue, 06 Jun 2006 12:18:27 -0400
Subject: [Python-Dev] Stdlib Logging questions (PEP 337 SoC)
In-Reply-To: <fb6fbf560606060713l4466cafx904c5b52adaa6b79@mail.gmail.com>
References: <5.1.1.6.0.20060605213503.04812ba0@mail.telecommunity.com>
	<5.1.1.6.0.20060604231709.02f07700@mail.telecommunity.com>
	<fb6fbf560606051757x6ea829e2r88dd9de3c9717a12@mail.gmail.com>
	<5.1.1.6.0.20060605213503.04812ba0@mail.telecommunity.com>
Message-ID: <5.1.1.6.0.20060606120527.01f90410@mail.telecommunity.com>

At 10:13 AM 6/6/2006 -0400, Jim Jewett wrote:
>On 6/5/06, Phillip J. Eby <pje at telecommunity.com> wrote:
>
>>I notice you've completely avoided the question of whether this should be
>>being done at all.
>
>>As far as I can tell, this PEP hasn't actually been discussed.  Please
>>don't waste time changing modules for which there is no consensus that this
>>*should* be done.
>
>Under a specific PEP number, no.  The concept of adding logging to the
>stdlib, yes, periodically.  The typical outcome is that some people
>say "why bother, besides it would slow things down" and others say
>"yes, please."

All the conversations I was able to find were limited to the topic of 
changing modules that *do logging*, not modules that have optional 
debugging output, nor adding debugging output to modules that do not have 
it now.  I'm +0 at best on changing modules that do logging now (not debug 
output or warnings, *logging*).  -1 on everything else.


>You may be reading too much ambition into the proposal.

Huh?  The packages are all listed right there in the PEP.


>For pkgutil in particular, the change is that instead of writing to
>stderr (which can scroll off and get lost), it will write to the
>errorlog.  In a truly default setup, that still ends up writing to
>stderr.

If anything, that pkgutil code should be replaced with a call to 
warnings.warn() instead.


>The difference is that if a sysadmin does want to track problems, the
>change can now be made in one single place.

Um, what?  You mean, one place per Python application instance, I 
presume.  Assuming that the application allows you to configure the logging 
system, and doesn't come preconfigured to do something else.


>   Today, turning on that
>instrumentation would require separate changes to every relevant
>module, and requires you to already know what/where they are.

And thus ensures that it won't be turned on by accident.


From amk at amk.ca  Tue Jun  6 19:18:11 2006
From: amk at amk.ca (A.M. Kuchling)
Date: Tue, 6 Jun 2006 13:18:11 -0400
Subject: [Python-Dev] Stdlib Logging questions (PEP 337 SoC)
In-Reply-To: <fb6fbf560606060736j743edf18j2860b40f012aefea@mail.gmail.com>
References: <fb6fbf560606060736j743edf18j2860b40f012aefea@mail.gmail.com>
Message-ID: <20060606171811.GB25534@rogue.amk.ca>

On Tue, Jun 06, 2006 at 10:36:06AM -0400, Jim Jewett wrote:
> Are you suggesting that the logging module should ship with a standard
> configuration that does something specific for py.* loggers?  Or even
> one that has different handlers for different stdlib modules?

No, I meant some modules don't need logging.  e.g. adding logging to
the string module would be silly.  It makes more sense for larger
systems and frameworks (the HTTP servers, asyncore, maybe some things
in Tools/ like webchecker).

--amk

From mal at egenix.com  Tue Jun  6 19:54:26 2006
From: mal at egenix.com (M.-A. Lemburg)
Date: Tue, 06 Jun 2006 19:54:26 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <44859C12.8080306@egenix.com>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>	<17533.48558.364179.717911@montanaro.dyndns.org>	<447DC377.4000504@egenix.com>	<17533.51096.462552.451772@montanaro.dyndns.org>	<447DD153.8080202@egenix.com>	<e5kkr6$14a$1@sea.gmane.org>	<447DE055.4040105@egenix.com><e5oa7b$kvl$1@sea.gmane.org>	<447FF966.5050807@egenix.com>	<e5p06h$tp1$1@sea.gmane.org>	<448141F6.7030806@v.loewis.de>
	<e5rjkj$rph$1@sea.gmane.org> <44859C12.8080306@egenix.com>
Message-ID: <4485C152.5050705@egenix.com>

FWIW, these are my findings on the various timing strategies:

* Windows:

  time.time()
    - not usable; I get timings with an error interval of roughly 30%

  GetProcessTimes()
    - not usable; I get timings with an error interval of up to 100%
      with differences in steps of 15.626ms

  time.clock()
    - error interval of less than 10%; overall < 0.5%

* Linux:

  time.clock()
    - not usable; I get timings with error interval of about 30%
      with differences in steps of 100ms

  time.time()
    - error interval of less than 10%; overall < 0.5%

  resource.getrusage()
    - error interval of less than 10%; overall < 0.5%
      with differences in steps of 10ms

  clock_gettime()
    - these don't appear to work on my box; even though
      clock_getres() returns a promising 1ns.

All measurements were done on AMD64 boxes, using Linux 2.6
and WinXP Pro with Python 2.4. pybench 2.0 was used (which is
not yet checked in) and the warp factor was set to a value that
gave benchmark round times of between 2.5 and 3.5 seconds,
ie. short test run-times.

Overall, time.clock() on Windows and time.time() on Linux appear
to give the best repeatability of tests, so I'll make those the
defaults in pybench 2.0.
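
In effect the default timer selection reduces to something like this
(a sketch, not the actual pybench 2.0 code):

    import sys, time

    if sys.platform == 'win32':
        default_timer = time.clock   # high-resolution wall clock on Windows
    else:
        default_timer = time.time    # best repeatability on Linux, per above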

In short: Tim wins, I lose.

Was a nice experiment, though ;-)

One interesting difference I found while testing on Windows
vs. Linux is that the StringMappings test have quite a different
run-time on both systems: around 2500ms on Windows vs. 590ms
on Linux (on Python 2.4). UnicodeMappings doesn't show such
a significant difference.

Perhaps the sprint changed this ?!

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 06 2006)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________

::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! ::::

From martin at v.loewis.de  Tue Jun  6 20:01:30 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Tue, 06 Jun 2006 20:01:30 +0200
Subject: [Python-Dev] Include/structmember.h, Py_ssize_t
In-Reply-To: <e63l0l$n24$1@sea.gmane.org>
References: <e63l0l$n24$1@sea.gmane.org>
Message-ID: <4485C2FA.6040707@v.loewis.de>

Thomas Heller wrote:
> In Include/structmember.h, there is no T_... constant for Py_ssize_t
> member fields.  Should there be one?

As Fredrik says: if you need it, feel free to add it.

Regards,
Martin

From martin at v.loewis.de  Tue Jun  6 20:04:56 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Tue, 06 Jun 2006 20:04:56 +0200
Subject: [Python-Dev] ssize_t: ints in header files
In-Reply-To: <fb6fbf560606060641i76eb4207qcb41f4b5631d6b1@mail.gmail.com>
References: <fb6fbf560606060641i76eb4207qcb41f4b5631d6b1@mail.gmail.com>
Message-ID: <4485C3C8.5020208@v.loewis.de>

Jim Jewett wrote:
> Martin v. L?wis replied:
> 
>> ... column numbers shouldn't exceed 16-bits, and line #s
>> shouldn't exceed 31 bits.
> 
> Why these particular numbers?
> 
> As nearly as I can tell, 8 bits is already too many columns for human
> readability.

There isn't a practical 8-bit integer type in C, so the smallest integer
you can get is "short", i.e. 15 (signed) or 16 (unsigned) bits. For line
numbers, 65536 seems a little too restrictive, so 31 bits is the
next-larger type.

> If python is being used as an intermediate language (code is
> automatically generated, and not read by humans), then I don't see any
> justification for any particular limits, except as an implementation
> detail driven by convenience.

Precisely so. The main point is that we should set a limit, and then
code according to that limit. There is no point to use a 64-bit integer
for code size constraints.

Regards,
Martin


From vinay_sajip at red-dove.com  Tue Jun  6 20:12:40 2006
From: vinay_sajip at red-dove.com (Vinay Sajip)
Date: Tue, 6 Jun 2006 19:12:40 +0100
Subject: [Python-Dev] Stdlib Logging questions (PEP 337 SoC)
References: <fb6fbf560606060736j743edf18j2860b40f012aefea@mail.gmail.com>
Message-ID: <001901c68994$d4eb66c0$0200a8c0@alpha>

> Are you suggesting that the logging module should ship with a standard
> configuration that does something specific for py.* loggers?  Or even
> one that has different handlers for different stdlib modules?

Sorry I'm a little late in to the discussion :-(

I can see people objecting to a "standard" configuration, as there will be
many differing interpretations of what the "standard" should be. Perhaps the
PEP should detail any proposed configuration. The configuration for py.*
loggers, if approved in the PEP, will need to be set up with some care and
probably need to be disabled by default. Once logging is introduced into the
stdlib, the logger hierarchy used by the stdlib modules (e.g.
"py.asyncore.dispatcher.hits", "py.asyncore.dispatcher.messages") will
become something of a backward-compatibility concern. For example, client
code might add handlers to specific portions of the hierarchy, and while
adding "child" loggers to existing levels will be OK, removing or renaming
parts of the hierarchy will cause client code to not produce the expected
logging behaviour. Having logger names follow package/subpackage/public
class should be OK since those couldn't change without breaking existing
code anyway.

One way of ring-fencing stdlib logging is to have the "py" logger created
with a level of (say) DEBUG and propagate = 0. This way, logging events
raised in stdlib code are not sent to the root logger's handlers, unless
client code explicitly sets the propagate flag to 1. The "py" logger could
be initialised with a bit-bucket handler which does nothing (and avoids the
"No handlers could be found for logger xxx" message). In my view it'd be
best to not add any other handlers in the stdlib itself, leaving that to
user code. With this approach, by default stdlib code will behave as it does
now. Even the verbose setting of DEBUG on the "py" logger will not produce
any output unless user code sets its propagate attribute to 1 or explicitly
adds a handler to it or any of its descendants.
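
In code, that suggestion amounts to roughly the following sketch:

    import logging

    class BitBucketHandler(logging.Handler):
        def emit(self, record):
            pass   # swallow records; avoids "No handlers could be found"

    _py = logging.getLogger('py')
    _py.setLevel(logging.DEBUG)
    _py.propagate = 0                  # keep stdlib records off the root handlers
    _py.addHandler(BitBucketHandler())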

My 2 cents...

Regards,

Vinay Sajip


From fredrik at pythonware.com  Tue Jun  6 20:50:26 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Tue, 06 Jun 2006 20:50:26 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <4485C152.5050705@egenix.com>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>	<17533.48558.364179.717911@montanaro.dyndns.org>	<447DC377.4000504@egenix.com>	<17533.51096.462552.451772@montanaro.dyndns.org>	<447DD153.8080202@egenix.com>	<e5kkr6$14a$1@sea.gmane.org>	<447DE055.4040105@egenix.com><e5oa7b$kvl$1@sea.gmane.org>	<447FF966.5050807@egenix.com>	<e5p06h$tp1$1@sea.gmane.org>	<448141F6.7030806@v.loewis.de>	<e5rjkj$rph$1@sea.gmane.org>
	<44859C12.8080306@egenix.com> <4485C152.5050705@egenix.com>
Message-ID: <e64ipi$gqd$1@sea.gmane.org>

M.-A. Lemburg wrote:

> * Linux:
> 
>   time.clock()
>     - not usable; I get timings with error interval of about 30%
>       with differences in steps of 100ms

>   resource.getrusage()
>     - error interval of less than 10%; overall < 0.5%
>       with differences in steps of 10ms

hmm.  I could have sworn clock() returned the sum of the utime and stime 
fields you get from getrusage() (which is the sum of the utime and stime 
tick counters for all tasks associated with the process, converted from 
jiffy count to the appropriate time unit), but glibc is one big maze of 
twisty little passages, so I'm probably looking at the wrong place.

oh, well.

</F>


From g.brandl at gmx.net  Tue Jun  6 21:26:16 2006
From: g.brandl at gmx.net (Georg Brandl)
Date: Tue, 06 Jun 2006 21:26:16 +0200
Subject: [Python-Dev] Stdlib Logging questions (PEP 337 SoC)
In-Reply-To: <fb6fbf560606041827i41a1a4abxce9563fbf91b7ed1@mail.gmail.com>
References: <fb6fbf560606041827i41a1a4abxce9563fbf91b7ed1@mail.gmail.com>
Message-ID: <e64kss$o6a$1@sea.gmane.org>

Jim Jewett wrote:
> Jackilyn is adding logging to several stdlib modules for the Google
> Summer of Code (PEP 337), and asked me to review her first few
> changes.

A related question: Will your student try to resolve the issues on SF
referring to logging, or is that not part of the project? There aren't
that many of them, and she'll certainly be quite acquainted with the code
base by that point.

Cheers,
Georg


From mal at egenix.com  Tue Jun  6 22:56:06 2006
From: mal at egenix.com (M.-A. Lemburg)
Date: Tue, 06 Jun 2006 22:56:06 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <4485C152.5050705@egenix.com>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>	<17533.48558.364179.717911@montanaro.dyndns.org>	<447DC377.4000504@egenix.com>	<17533.51096.462552.451772@montanaro.dyndns.org>	<447DD153.8080202@egenix.com>	<e5kkr6$14a$1@sea.gmane.org>	<447DE055.4040105@egenix.com><e5oa7b$kvl$1@sea.gmane.org>	<447FF966.5050807@egenix.com>	<e5p06h$tp1$1@sea.gmane.org>	<448141F6.7030806@v.loewis.de>	<e5rjkj$rph$1@sea.gmane.org>
	<44859C12.8080306@egenix.com> <4485C152.5050705@egenix.com>
Message-ID: <4485EBE6.2020700@egenix.com>

M.-A. Lemburg wrote:
> FWIW, these are my findings on the various timing strategies:

Correction (due to a bug in my pybench dev version):

> * Windows:
> 
>   time.time()
>     - not usable; I get timings with an error interval of roughly 30%
> 
>   GetProcessTimes()
>     - not usable; I get timings with an error interval of up to 100%
>       with differences in steps of 15.626ms
> 
>   time.clock()
>     - error interval of less than 10%; overall < 0.5%
>
> * Linux:
> 
>   time.clock()
>     - not usable; I get timings with error interval of about 30%
>       with differences in steps of 100ms

This should read: steps of 10ms.

time.clock() uses POSIX clock ticks which are hard-wired
to 100Hz.

>   time.time()
>     - error interval of less than 10%; overall < 0.5%
> 
>   resource.getrusage()
>     - error interval of less than 10%; overall < 0.5%
>       with differences in steps of 10ms

This should read: steps of 1ms.

The true clock tick frequency on the test machine is 1kHz.

>   clock_gettime()
>     - these don't appear to work on my box; even though
>       clock_getres() returns a promising 1ns.
> 
> All measurements were done on AMD64 boxes, using Linux 2.6
> and WinXP Pro with Python 2.4. pybench 2.0 was used (which is
> not yet checked in) and the warp factor was set to a value that
> gave benchmark round times of between 2.5 and 3.5 seconds,
> ie. short test run-times.
> 
> Overall, time.clock() on Windows and time.time() on Linux appear
> to give the best repeatability of tests, so I'll make those the
> defaults in pybench 2.0.
> 
> In short: Tim wins, I lose.
> 
> Was a nice experiment, though ;-)
> 
> One interesting difference I found while testing on Windows
> vs. Linux is that the StringMappings test have quite a different
> run-time on both systems: around 2500ms on Windows vs. 590ms
> on Linux (on Python 2.4). UnicodeMappings doesn't show such
> a significant difference.
> 
> Perhaps the sprint changed this ?!

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 06 2006)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________

::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! ::::

From brett at python.org  Wed Jun  7 00:29:40 2006
From: brett at python.org (Brett Cannon)
Date: Tue, 6 Jun 2006 15:29:40 -0700
Subject: [Python-Dev] How to fix the buffer object's broken char buffer
	support
Message-ID: <bbaeab100606061529t31bb8b90xc2a06cec2ea8d78b@mail.gmail.com>

If you run ``import array; int(buffer(array.array('c')))`` the
interpreter will segfault.  While investigating this I discovered that
buffer objects, for their tp_as_buffer->bf_getcharbuffer, return the
result by calling the wrapped object bf_getreadbuffer or
bf_getwritebuffer.  This is wrong since it is essentially redirecting
the expected call to the wrong tp_as_buffer slot for the wrapped
object.  Plus it doesn't have Py_TPFLAGS_HAVE_GETCHARBUFFER defined.

I see two options here.  One is to remove the bf_getcharbuffer slot
from the buffer object.  The other option is to fix it so that it only
returns bf_getcharbuffer and doesn't redirect improperly (this also
brings up the issue if Py_TPFLAGS_HAVE_GETCHARBUFFER should then also
be defined for buffer objects).

Since I don't use buffer objects I don't know if it is better to fix
this or just rip it out.

-Brett

From pje at telecommunity.com  Wed Jun  7 00:49:45 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Tue, 06 Jun 2006 18:49:45 -0400
Subject: [Python-Dev] wsgiref doc draft; reviews/patches wanted
Message-ID: <5.1.1.6.0.20060606184324.0200b360@mail.telecommunity.com>

I've finished my draft for the wsgiref documentation (including stuff I 
swiped from AMK's draft; thanks AMK!), and am looking for comments before I 
add it to the stdlib documentation.

Source: http://svn.eby-sarna.com/svnroot/wsgiref/docs
PDF:    http://peak.telecommunity.com/wsgiref.pdf
HTML:   http://peak.telecommunity.com/wsgiref_docs/

My current plan is to make a hopefully-final release of the standalone 
version of wsgiref on PyPI, then clone that version for inclusion in the 
stdlib.

The latest version of wsgiref in the eby-sarna SVN includes a new 
``make_server()`` convenience function (addressing Titus' concerns about 
the constructor signatures while retaining backward compatibility) and it 
adds a ``wsgiref.validate`` module based on paste.lint.

In addition to those two new features, tests were added for the new 
validate module and for WSGIServer.  The test suite and directory layout of 
the package were also simplified and consolidated to make merging to the 
stdlib easier.

Feedback welcomed.


From thomas at python.org  Wed Jun  7 02:07:48 2006
From: thomas at python.org (Thomas Wouters)
Date: Wed, 7 Jun 2006 02:07:48 +0200
Subject: [Python-Dev] 'fast locals' in Python 2.5
Message-ID: <9e804ac0606061707w64a5b90pddd62d31bce1e7d6@mail.gmail.com>

I just submitted http://python.org/sf/1501934 and assigned it to Neal so it
doesn't get forgotten before 2.5 goes out ;) It seems Python 2.5 compiles
the following code incorrectly:

>>> g = 1
>>> def f1():
...     g += 1
...
>>> f1()
>>> g
2

It looks like the compiler is not seeing augmented assignment as creating a
local name, as this fails properly:

>>> def f2():
...     g += 1
...     g = 5
...
>>> f2()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 2, in f2
UnboundLocalError: local variable 'g' referenced before assignment

The dis.dis output confirms this:
>>> dis.dis(f1)
  1           0 LOAD_GLOBAL              0 (g)
              3 LOAD_CONST               1 (1)
              6 INPLACE_ADD
              7 STORE_GLOBAL             0 (g)
             10 LOAD_CONST               0 (None)
             13 RETURN_VALUE
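
For comparison, a correct compile should treat g as a local and use the
fast-locals opcodes, so the call raises UnboundLocalError at runtime --
roughly like this (offsets illustrative, not copied from an actual 2.4 run):

  1           0 LOAD_FAST                0 (g)
              3 LOAD_CONST               1 (1)
              6 INPLACE_ADD
              7 STORE_FAST               0 (g)
             10 LOAD_CONST               0 (None)
             13 RETURN_VALUE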

If anyone feels like fixing it and happens to remember where the new
compiler does the fast-locals optimization (I recall a few people were
working on extra optimizations and all), please do :-) (I can probably look
at it before 2.5 if no one else does, though.) It may be a good idea to
check for more such cornercases while we're at it (but I couldn't find any
in the fast-locals bit.)

-- 
Thomas Wouters <thomas at python.org>

Hi! I'm a .signature virus! copy me into your .signature file to help me
spread!

From greg.ewing at canterbury.ac.nz  Wed Jun  7 02:32:02 2006
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Wed, 07 Jun 2006 12:32:02 +1200
Subject: [Python-Dev] Stdlib Logging questions (PEP 337 SoC)
In-Reply-To: <fb6fbf560606060713l4466cafx904c5b52adaa6b79@mail.gmail.com>
References: <5.1.1.6.0.20060604231709.02f07700@mail.telecommunity.com>
	<fb6fbf560606051757x6ea829e2r88dd9de3c9717a12@mail.gmail.com>
	<5.1.1.6.0.20060605213503.04812ba0@mail.telecommunity.com>
	<fb6fbf560606060713l4466cafx904c5b52adaa6b79@mail.gmail.com>
Message-ID: <44861E82.7070803@canterbury.ac.nz>

Jim Jewett wrote:

> For pkgutil in particular, the change is that instead of writing to
> stderr (which can scroll off and get lost), it will write to the
> errorlog.  In a truly default setup, that still ends up writing to
> stderr.

This might be better addressed by providing a centralised
way of redirecting stdout and/or stderr through the logging
module. That would fix the problem for all modules, even
if they know nothing about logging.
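
Something along these lines might do -- a minimal sketch (the class and
logger names are made up), assuming a simple file-like wrapper is good
enough for the modules involved:

    import logging
    import sys

    class LoggerWriter:
        """File-like object that forwards writes to a logger."""
        def __init__(self, logger, level):
            self.logger = logger
            self.level = level
        def write(self, message):
            message = message.rstrip()
            if message:
                self.logger.log(self.level, message)
        def flush(self):
            pass

    # e.g. route everything written to stderr through the logging package
    sys.stderr = LoggerWriter(logging.getLogger("stderr"), logging.ERROR)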

--
Greg

From tim.peters at gmail.com  Wed Jun  7 03:02:55 2006
From: tim.peters at gmail.com (Tim Peters)
Date: Tue, 6 Jun 2006 21:02:55 -0400
Subject: [Python-Dev] [Python-checkins] Python Regression Test Failures
	refleak (1)
In-Reply-To: <1f7befae0606051030pc36fb25x95e9a7e55085c460@mail.gmail.com>
References: <20060604090704.GA17397@python.psfb.org>
	<43E59BC0-9CD0-4524-9DB3-6CFD5E155721@commonground.com.au>
	<1f7befae0606040947i7b3fdbc3p23ba363a4103cd24@mail.gmail.com>
	<ee2a432c0606041037j1200de61jdb6204186173a55c@mail.gmail.com>
	<4483301C.8000601@v.loewis.de>
	<1f7befae0606051030pc36fb25x95e9a7e55085c460@mail.gmail.com>
Message-ID: <1f7befae0606061802v2c0b822ale28250d398b526c@mail.gmail.com>

[Tim, gets different results across whole runs of
     python_d  ../Lib/test/regrtest.py -R 2:40: test_filecmp test_exceptions
]

I think I found the cause for test_filecmp giving different results
across runs, at least on Windows.  It appears to be due to this test
line:

        self.failUnless(d.same_files == ['file'])

and you _still_ think I'm nuts ;-)  The skinny is that

        d = filecmp.dircmp(self.dir, self.dir_same)

and filecmp contains a module-level _cache with a funky scheme for
avoiding file comparisons if various os.stat() values haven't changed.
 But st_mtime on Windows doesn't necessarily change when a file is
modified -- it has limited resolution (2 seconds on FAT32, and I'm
having a hard time finding a believable answer for NTFS (which I'm
using)).

In any case, filecmp._cache _usually_ doesn't change during a run, but
sometimes it sprouts a new entry, like

 {('c:\\docume~1\\owner\\locals~1\\temp\\dir\\file',
   'c:\\docume~1\\owner\\locals~1\\temp\\dir-same\\file'):
     ((32768, 27L, 1149640843.78125),
      (32768, 27L, 1149640843.796875),
      True)
 }

and then that shows up as a small "leak".

That's easily repaired, and after doing so I haven't seen test_filecmp
report a leak again.
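
For the record, the repair amounts to something like this in the test's
setUp (a sketch of the idea -- the class name is illustrative, and it's
not necessarily the exact change that was checked in):

    import filecmp
    import unittest

    class DirCompareTestCase(unittest.TestCase):
        def setUp(self):
            # start each repetition from a clean slate, so a stale
            # stat-based cache entry can't change what same_files reports
            filecmp._cache.clear()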

test_exceptions is a different story.  My first 12 post-fix runs of:

    python_d ..\Lib\test\regrtest.py -R2:40: test_filecmp test_exceptions

gave leak-free:

    test_filecmp
    beginning 42 repetitions
    123456789012345678901234567890123456789012
    ..........................................
    test_exceptions
    beginning 42 repetitions
    123456789012345678901234567890123456789012
    ..........................................
    All 2 tests OK.
    [25878 refs]

output, but the 13th was unlucky:

    test_filecmp
    beginning 42 repetitions
    123456789012345678901234567890123456789012
    ..........................................
    test_exceptions
    beginning 42 repetitions
    123456789012345678901234567890123456789012
    ..........................................
    test_exceptions leaked [0, 203, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
       0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
       0, 0, 0, 0, 0] references
    All 2 tests OK.
    [25883 refs]

Running test_filecmp too isn't necessary for me to see this --
test_exceptions can be run by itself, although it typically takes me
about 15 runs before "a leak" is reported.  When a leak is reported,
it's always 203, and there's only one of those in the leak vector, but
I've seen it at index positions 0, 1, 2, and 3 (i.e., it moves around;
it was at index 1 in the output above).

Anyone bored enough to report what happens on Linux?  Anyone remember
adding a goofy cache to exception internals?

a-suitable-msg-for-6/6/6-ly y'rs  - tim

From guido at python.org  Wed Jun  7 04:13:43 2006
From: guido at python.org (Guido van Rossum)
Date: Tue, 6 Jun 2006 19:13:43 -0700
Subject: [Python-Dev] How to fix the buffer object's broken char buffer
	support
In-Reply-To: <bbaeab100606061529t31bb8b90xc2a06cec2ea8d78b@mail.gmail.com>
References: <bbaeab100606061529t31bb8b90xc2a06cec2ea8d78b@mail.gmail.com>
Message-ID: <ca471dc20606061913m3b8f0b31qd412582b6bf57a5d@mail.gmail.com>

On 6/6/06, Brett Cannon <brett at python.org> wrote:
> If you run ``import array; int(buffer(array.array('c')))`` the
> interpreter will segfault.  While investigating this I discovered that
> buffer objects, for their tp_as_buffer->bf_getcharbuffer, return the
> result by calling the wrapped object bf_getreadbuffer or
> bf_getwritebuffer.  This is wrong since it is essentially redirecting
> the expected call to the wrong tp_as_buffer slot for the wrapped
> object.  Plus it doesn't have Py_TPFLAGS_HAVE_GETCHARBUFFER defined.
>
> I see two options here.  One is to remove the bf_getcharbuffer slot
> from the buffer object.  The other option is to fix it so that it only
> returns bf_getcharbuffer and doesn't redirect improperly (this also
> brings up the issue if Py_TPFLAGS_HAVE_GETCHARBUFFER should then also
> be defined for buffer objects).
>
> Since I don't use buffer objects I don't know if it is better to fix
> this or just rip it out.

How ironic. The charbuffer slot was added late in the game -- now we'd
be ripping it out...

I suspect that there's a reason for it; but in Py3k it will
*definitely* be ripped out. Buffers will deal purely in bytes then,
never in characters; you won't be able to get a buffer from a
(unicode) string at all.

Unhelpfully,

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From brett at python.org  Wed Jun  7 06:00:35 2006
From: brett at python.org (Brett Cannon)
Date: Tue, 6 Jun 2006 21:00:35 -0700
Subject: [Python-Dev] How to fix the buffer object's broken char buffer
	support
In-Reply-To: <ca471dc20606061913m3b8f0b31qd412582b6bf57a5d@mail.gmail.com>
References: <bbaeab100606061529t31bb8b90xc2a06cec2ea8d78b@mail.gmail.com>
	<ca471dc20606061913m3b8f0b31qd412582b6bf57a5d@mail.gmail.com>
Message-ID: <bbaeab100606062100s3c0be7bdh31fe10ce460375c8@mail.gmail.com>

On 6/6/06, Guido van Rossum <guido at python.org> wrote:
> On 6/6/06, Brett Cannon <brett at python.org> wrote:
> > If you run ``import array; int(buffer(array.array('c')))`` the
> > interpreter will segfault.  While investigating this I discovered that
> > buffer objects, for their tp_as_buffer->bf_getcharbuffer, return the
> > result by calling the wrapped object bf_getreadbuffer or
> > bf_getwritebuffer.  This is wrong since it is essentially redirecting
> > the expected call to the wrong tp_as_buffer slot for the wrapped
> > object.  Plus it doesn't have Py_TPFLAGS_HAVE_GETCHARBUFFER defined.
> >
> > I see two options here.  One is to remove the bf_getcharbuffer slot
> > from the buffer object.  The other option is to fix it so that it only
> > returns bf_getcharbuffer and doesn't redirect improperly (this also
> > brings up the issue if Py_TPFLAGS_HAVE_GETCHARBUFFER should then also
> > be defined for buffer objects).
> >
> > Since I don't use buffer objects I don't know if it is better to fix
> > this or just rip it out.
>
> How ironic. the charbuffer slot was added late in the game -- now we'd
> be ripping it out...
>
> I suspect that there's a reason for it; but in Py3k it will
> *definitely* be ripped out. Buffers will purely deal in byte then,
> never in characters; you won't be able to get a buffer from a
> (unicode) string at all.
>
> Unhelpfully,

I actually figured out a reasonable way to integrate it into the
buffer object so that it won't be a huge issue.  It just took a while to
make sure there wasn't a ton of copy-and-paste, and to decipher the
docs (I have a separate patch going to clarify those).

So buffer objects will properly support charbuffer in 2.5 (won't
backport since it is a semantic change).  Hopefully it won't break too
much stuff.  =)

-Brett

From martin at v.loewis.de  Wed Jun  7 07:53:51 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Wed, 07 Jun 2006 07:53:51 +0200
Subject: [Python-Dev] [Python-checkins] Python Regression Test Failures
	refleak (1)
In-Reply-To: <1f7befae0606061802v2c0b822ale28250d398b526c@mail.gmail.com>
References: <20060604090704.GA17397@python.psfb.org>	
	<43E59BC0-9CD0-4524-9DB3-6CFD5E155721@commonground.com.au>	
	<1f7befae0606040947i7b3fdbc3p23ba363a4103cd24@mail.gmail.com>	
	<ee2a432c0606041037j1200de61jdb6204186173a55c@mail.gmail.com>	
	<4483301C.8000601@v.loewis.de>	
	<1f7befae0606051030pc36fb25x95e9a7e55085c460@mail.gmail.com>
	<1f7befae0606061802v2c0b822ale28250d398b526c@mail.gmail.com>
Message-ID: <448669EF.7010508@v.loewis.de>

Tim Peters wrote:
> and filecmp contains a module-level _cache with a funky scheme for
> avoiding file comparisons if various os.stat() values haven't changed.
> But st_mtime on Windows doesn't necessarily change when a file is
> modified -- it has limited resolution (2 seconds on FAT32, and I'm
> having a hard time finding a believable answer for NTFS (which I'm
> using)).

The time stamp itself has a precision of 100ns (it really is a
FILETIME). I don't know whether there is any documentation that
explains how often it is updated; I doubt it has a higher resolution
than the system clock :-)

> Anyone bored enough to report what happens on Linux? 

I had to run it 18 times to get

test_exceptions
beginning 42 repetitions
123456789012345678901234567890123456789012
..........................................
test_exceptions leaked [203, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0] references
1 test OK.

Regards,
Martin

From nnorwitz at gmail.com  Wed Jun  7 08:47:09 2006
From: nnorwitz at gmail.com (Neal Norwitz)
Date: Tue, 6 Jun 2006 23:47:09 -0700
Subject: [Python-Dev] [Python-checkins] Python Regression Test Failures
	refleak (1)
In-Reply-To: <448669EF.7010508@v.loewis.de>
References: <20060604090704.GA17397@python.psfb.org>
	<43E59BC0-9CD0-4524-9DB3-6CFD5E155721@commonground.com.au>
	<1f7befae0606040947i7b3fdbc3p23ba363a4103cd24@mail.gmail.com>
	<ee2a432c0606041037j1200de61jdb6204186173a55c@mail.gmail.com>
	<4483301C.8000601@v.loewis.de>
	<1f7befae0606051030pc36fb25x95e9a7e55085c460@mail.gmail.com>
	<1f7befae0606061802v2c0b822ale28250d398b526c@mail.gmail.com>
	<448669EF.7010508@v.loewis.de>
Message-ID: <ee2a432c0606062347n61c3cdb0ofe8e728085c8164@mail.gmail.com>

[Tim and Martin talking about leak tests when running regtest with -R]

I've disabled the LEAKY_TESTS exclusion in build.sh.  This means if
any test reports leaks when regrtest.py -R :: is run, mail will be sent
to python-checkins.  The next run should kick off in a few hours (4
and 16 ET).  We'll see what it reports.

n

From tim.peters at gmail.com  Wed Jun  7 09:16:15 2006
From: tim.peters at gmail.com (Tim Peters)
Date: Wed, 7 Jun 2006 03:16:15 -0400
Subject: [Python-Dev] [Python-checkins] Python Regression Test Failures
	refleak (1)
In-Reply-To: <448669EF.7010508@v.loewis.de>
References: <20060604090704.GA17397@python.psfb.org>
	<43E59BC0-9CD0-4524-9DB3-6CFD5E155721@commonground.com.au>
	<1f7befae0606040947i7b3fdbc3p23ba363a4103cd24@mail.gmail.com>
	<ee2a432c0606041037j1200de61jdb6204186173a55c@mail.gmail.com>
	<4483301C.8000601@v.loewis.de>
	<1f7befae0606051030pc36fb25x95e9a7e55085c460@mail.gmail.com>
	<1f7befae0606061802v2c0b822ale28250d398b526c@mail.gmail.com>
	<448669EF.7010508@v.loewis.de>
Message-ID: <1f7befae0606070016i7108828o51bd3ab2791eb870@mail.gmail.com>

[Tim]
>> and filecmp contains a module-level _cache with a funky scheme for
>> avoiding file comparisons if various os.stat() values haven't changed.
>> But st_mtime on Windows doesn't necessarily change when a file is
>> modified -- it has limited resolution (2 seconds on FAT32, and I'm
>> having a hard time finding a believable answer for NTFS (which I'm
>> using)).

[Martin]
> The time stamp itself has a precision of 100ns (it really is a
> FILETIME).

Right -- see "believable" above ;-)

> I don't know whether there is any documentation that
> explains how often it is updated; I doubt it has a higher resolution
> than the system clock :-)

Me too.

>> Anyone bored enough to report what happens on Linux?

> I had to run it 18 times to get
>
> test_exceptions
> beginning 42 repetitions
> 123456789012345678901234567890123456789012
> ..........................................
> test_exceptions leaked [203, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
> 0] references
> 1 test OK.

Thank you!  I'm sorry to hear you were so bored ;-)

So here's a fun mystery for someone less sleepy than I am at this
time:  patch 1501987 (which I checked in) appears to have cured this,
but neither I nor its author seem to know why.  test_exceptions was
picking a pickle protocol at random (WTF?!), and the patch makes it
try all pickle protocols instead.
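
In case anyone does care, the gist of the change is roughly this (a
sketch of the before/after shape, not the literal diff):

    import pickle

    # before: each run pickled with one randomly chosen protocol;
    # after: every protocol gets exercised on every run
    for proto in range(pickle.HIGHEST_PROTOCOL + 1):
        pickle.loads(pickle.dumps(ValueError("spam"), proto))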

Now that I typed that, I discovered I really don't care why it cured
it, so it's all yours :-)

From arigo at tunes.org  Wed Jun  7 10:39:41 2006
From: arigo at tunes.org (Armin Rigo)
Date: Wed, 7 Jun 2006 10:39:41 +0200
Subject: [Python-Dev] 'fast locals' in Python 2.5
In-Reply-To: <9e804ac0606061707w64a5b90pddd62d31bce1e7d6@mail.gmail.com>
References: <9e804ac0606061707w64a5b90pddd62d31bce1e7d6@mail.gmail.com>
Message-ID: <20060607083940.GA12003@code0.codespeak.net>

Hi,

On Wed, Jun 07, 2006 at 02:07:48AM +0200, Thomas Wouters wrote:
> I just submitted http://python.org/sf/1501934 and assigned it to Neal so it
> doesn't get forgotten before 2.5 goes out ;) It seems Python 2.5 compiles
> the following code incorrectly:

No, no, it's an underground move by Jeremy to allow assignment to
variables of enclosing scopes:

    [in 2.5]
    def f():
        x = 5
        def g():
            x += 1
        g()
        print x     # 6

The next move is to say that this is an expected feature of the
augmented assignment operators, from which it follows naturally that we
need a pseudo-augmented assignment that doesn't actually read the old
value:

    [in 2.6]
    def f():
        x = 5
        def g():
            x := 6
        g()
        print x    # 6

Credits to Samuele's evil side for the ideas.  His non-evil side doesn't
agree, and neither does mine, of course :-)

More seriously, a function with a variable that is only written to as
the target of augmented assignments cannot possibly be anything other
than a newcomer's mistake: the augmented assignments will always raise
UnboundLocalError.  Maybe this should be a SyntaxWarning?
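
For concreteness, the pattern in question looks like this (names made up):

    def count_lines(lines):
        for line in lines:
            total += 1      # 'total' is local but never bound first,
                            # so this raises UnboundLocalError
        return total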


A bientot,

Armin

From mal at egenix.com  Wed Jun  7 10:52:18 2006
From: mal at egenix.com (M.-A. Lemburg)
Date: Wed, 07 Jun 2006 10:52:18 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <20060602081021.jio6uf0wn0okc8kw@login.werra.lunarpages.com>
References: <20060602081021.jio6uf0wn0okc8kw@login.werra.lunarpages.com>
Message-ID: <448693C2.9010903@egenix.com>

Michael Chermside wrote:
> Marc-Andre Lemburg writes:
>> Using the minimum looks like the way to go for calibration.
>>
>> I wonder whether the same is true for the actual tests; since
>> you're looking for the expected run-time, the minimum may
>> not necessarily be the choice.
> 
> No, you're not looking for the expected run-time. The expected
> run-time is a function of the speed of the CPU, the architecture
> of same, what else is running simultaneously -- perhaps even
> what music you choose to listen to that day. It is NOT a
> constant for a given piece of code, and is NOT what you are
> looking for.

I was thinking of the statistical expected value of the run-time.
This would likely have better repeatability than e.g. the average
(see Andrew's analysis) or the minimum, which can be affected by
artifacts of the measurement method (see Fredrik's analysis).

The downside is that you need quite a few data points to
make a reasonable assumption on the value of the expected
value.

Another problem is that you sometimes end up with a distribution
of values which is in fact the overlap of two (or more) different
distributions (see Andrew's graphics).

In the end, the minimum is the best compromise, IMHO, since it is
easy to get a good estimate fast.

pybench stores all measured times in the test pickle, so
it is possible to apply different statistical methods later
on - even after the test was run.
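
For what it's worth, the minimum-of-several-short-runs idea is easy to
check with timeit as well; a quick sketch (statement and numbers are
arbitrary):

    import timeit

    timer = timeit.Timer("''.join(map(str, range(100)))")
    runs = timer.repeat(repeat=10, number=10000)
    best = min(runs)   # least-disturbed of the ten measurements
    print "best of 10: %.1f usec per loop" % (best * 1e6 / 10000)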

> What you really want to do in benchmarking is to *compare* the
> performance of two (or more) different pieces of code. You do,
> of course, care about the real-world performance. So if you
> had two algorithms and one ran twice as fast when there were no
> context switches and 10 times slower when there was background
> activity on the machine, then you'd want to prefer the algorithm
> that supports context switches. But that's not a realistic
> situation. What is far more common is that you run one test
> while listening to the Grateful Dead and another test while
> listening to Bach, and that (plus other random factors and the
> phase of the moon) causes one test to run faster than the
> other.

I wonder which one of the two ;-)

> Taking the minimum time clearly subtracts some noise, which is
> a good thing when comparing performance for two or more pieces
> of code. It fails to account for the distribution of times, so
> if one piece of code occasionally gets lucky and takes far less
> time then minimum time won't be a good choice... but it would
> be tricky to design code that would be affected by the scheduler
> in this fashion even if you were explicitly trying!

Tried that, and even though you can trick the scheduler
into running your code without a context switch, the time
left to do benchmarks boils down to a millisecond - there's
not a lot you can test in such a short interval.

What's worse: the available timers don't have good enough
resolution to make the timings useful.

> Later he continues:
>> Tim thinks that it's better to use short running tests and
>> an accurate timer, accepting the added noise and counting
>> on the user making sure that the noise level is at a
>> minimum.
>>
>> Since I like to give users the option of choosing for
>> themselves, I'm going to make the choice of timer an
>> option.
> 
> I'm generally a fan of giving programmers choices. However,
> this is an area where we have demonstrated that even very
> competent programmers often have misunderstandings (read this
> thread for evidence!). So be very careful about giving such
> a choice: the default behavior should be chosen by people
> who think carefully about such things, and the documentation
> on the option should give a good explanation of the tradeoffs
> or at least a link to such an explanation.

I'll use good defaults (see yesterday's posting), which
essentially means: use Tim's approach... until we all have
OSes with real-time APIs using hardware timers instead of
jiffy counters.

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 07 2006)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________

::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! ::::

From kristjan at ccpgames.com  Wed Jun  7 11:51:15 2006
From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_V=2E_J=F3nsson?=)
Date: Wed, 7 Jun 2006 09:51:15 -0000
Subject: [Python-Dev] [Python-checkins] Python Regression Test
	Failuresrefleak (1)
Message-ID: <129CEF95A523704B9D46959C922A28000282AE64@nemesis.central.ccp.cc>

Right, it is a FILETIME in the API, but the resolution stored on disk is limited to what the disk format provides.  FAT32 is particularly skinny.
I imagine that the value to store comes from GetSystemTimeAsFileTime which is updated with the clock interrupt.
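
For anyone who wants to poke at this, the clock in question can be read
directly; a Windows-only ctypes sketch:

    import ctypes
    from ctypes import wintypes

    ft = wintypes.FILETIME()
    ctypes.windll.kernel32.GetSystemTimeAsFileTime(ctypes.byref(ft))
    ticks = (ft.dwHighDateTime << 32) | ft.dwLowDateTime
    print ticks   # 100ns units since January 1, 1601 (UTC)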

K 

-----Original Message-----
From: python-dev-bounces+kristjan=ccpgames.com at python.org [mailto:python-dev-bounces+kristjan=ccpgames.com at python.org] On Behalf Of "Martin v. Löwis"
Sent: 7. júní 2006 05:54
To: Tim Peters
Cc: Neal Norwitz; Python Dev
Subject: Re: [Python-Dev] [Python-checkins] Python Regression Test Failuresrefleak (1)

Tim Peters wrote:
> and filecmp contains a module-level _cache with a funky scheme for 
> avoiding file comparisons if various os.stat() values haven't changed.
> But st_mtime on Windows doesn't necessarily change when a file is 
> modified -- it has limited resolution (2 seconds on FAT32, and I'm 
> having a hard time finding a believable answer for NTFS (which I'm 
> using)).

The time stamp itself has a precision of 100ns (it really is a FILETIME). I don't know whether there is any documentation that explains how often it is updated; I doubt it has a higher resolution than the system clock :-)

> Anyone bored enough to report what happens on Linux? 

I had to run it 18 times to get

test_exceptions
beginning 42 repetitions
123456789012345678901234567890123456789012
..........................................
test_exceptions leaked [203, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] references
1 test OK.

Regards,
Martin

From kristjan at ccpgames.com  Wed Jun  7 11:37:38 2006
From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_V=2E_J=F3nsson?=)
Date: Wed, 7 Jun 2006 09:37:38 -0000
Subject: [Python-Dev] How to fix the buffer object's broken char
	buffersupport
Message-ID: <129CEF95A523704B9D46959C922A28000282AE5B@nemesis.central.ccp.cc>

As a side note, it always seemed to me that the bf_getcharbuffer's semantics were poorly defined, at least in the 2.3 documentation.  Has that, and the need for it, changed recently?

Kristján

-----Original Message-----
From: python-dev-bounces+kristjan=ccpgames.com at python.org [mailto:python-dev-bounces+kristjan=ccpgames.com at python.org] On Behalf Of Brett Cannon
Sent: 6. júní 2006 22:30
To: Python Dev
Subject: [Python-Dev] How to fix the buffer object's broken char buffersupport

If you run ``import array; int(buffer(array.array('c')))`` the interpreter will segfault.  While investigating this I discovered that buffer objects, for their tp_as_buffer->bf_getcharbuffer, return the result by calling the wrapped object bf_getreadbuffer or bf_getwritebuffer.  This is wrong since it is essentially redirecting the expected call to the wrong tp_as_buffer slot for the wrapped object.  Plus it doesn't have Py_TPFLAGS_HAVE_GETCHARBUFFER defined.

I see two options here.  One is to remove the bf_getcharbuffer slot from the buffer object.  The other option is to fix it so that it only returns bf_getcharbuffer and doesn't redirect improperly (this also brings up the issue if Py_TPFLAGS_HAVE_GETCHARBUFFER should then also be defined for buffer objects).

Since I don't use buffer objects I don't know if it is better to fix this or just rip it out.

-Brett

From amk at amk.ca  Wed Jun  7 14:38:30 2006
From: amk at amk.ca (A.M. Kuchling)
Date: Wed, 7 Jun 2006 08:38:30 -0400
Subject: [Python-Dev] wsgiref doc draft; reviews/patches wanted
In-Reply-To: <5.1.1.6.0.20060606184324.0200b360@mail.telecommunity.com>
References: <5.1.1.6.0.20060606184324.0200b360@mail.telecommunity.com>
Message-ID: <20060607123830.GB9578@localhost.localdomain>

On Tue, Jun 06, 2006 at 06:49:45PM -0400, Phillip J. Eby wrote:
> Source: http://svn.eby-sarna.com/svnroot/wsgiref/docs

Minor correction: svn://svn.eby-sarna.com/svnroot/wsgiref/docs
(at least, http didn't work for me).

The docs look good, and I think they'd be ready to go in.

--amk

From p.f.moore at gmail.com  Wed Jun  7 15:52:09 2006
From: p.f.moore at gmail.com (Paul Moore)
Date: Wed, 7 Jun 2006 14:52:09 +0100
Subject: [Python-Dev] wsgiref doc draft; reviews/patches wanted
In-Reply-To: <20060607123830.GB9578@localhost.localdomain>
References: <5.1.1.6.0.20060606184324.0200b360@mail.telecommunity.com>
	<20060607123830.GB9578@localhost.localdomain>
Message-ID: <79990c6b0606070652s503126adm452dc59fc5e23bb2@mail.gmail.com>

On 6/7/06, A.M. Kuchling <amk at amk.ca> wrote:
> On Tue, Jun 06, 2006 at 06:49:45PM -0400, Phillip J. Eby wrote:
> > Source: http://svn.eby-sarna.com/svnroot/wsgiref/docs
>
> Minor correction: svn://svn.eby-sarna.com/svnroot/wsgiref/docs
> (at least, http didn't work for me).
>
> The docs look good, and I think they'd be ready to go in.

http://svn.eby-sarna.com/wsgiref/docs/ works for me.

Paul.

From steve at holdenweb.com  Wed Jun  7 16:23:47 2006
From: steve at holdenweb.com (Steve Holden)
Date: Wed, 07 Jun 2006 15:23:47 +0100
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <4485C152.5050705@egenix.com>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>	<17533.48558.364179.717911@montanaro.dyndns.org>	<447DC377.4000504@egenix.com>	<17533.51096.462552.451772@montanaro.dyndns.org>	<447DD153.8080202@egenix.com>	<e5kkr6$14a$1@sea.gmane.org>	<447DE055.4040105@egenix.com><e5oa7b$kvl$1@sea.gmane.org>	<447FF966.5050807@egenix.com>	<e5p06h$tp1$1@sea.gmane.org>	<448141F6.7030806@v.loewis.de>	<e5rjkj$rph$1@sea.gmane.org>
	<44859C12.8080306@egenix.com> <4485C152.5050705@egenix.com>
Message-ID: <4486E173.9040800@holdenweb.com>

M.-A. Lemburg wrote:
[...]
> Overall, time.clock() on Windows and time.time() on Linux appear
> to give the best repeatability of tests, so I'll make those the
> defaults in pybench 2.0.
> 
> In short: Tim wins, I lose.
> 
> Was a nice experiment, though ;-)
> 
Perhaps so, but it would have been nice if you could have come to this 
conclusion before asking me not to make this change, which would 
otherwise have been checked in two weeks ago.

Still, as long as we can all agree on this and move forward I suppose 
the intervening debate at least leaves us better-informed.

regards
  Steve
-- 
Steve Holden       +44 150 684 7255  +1 800 494 3119
Holden Web LLC/Ltd          http://www.holdenweb.com
Love me, love my blog  http://holdenweb.blogspot.com
Recent Ramblings     http://del.icio.us/steve.holden


From mal at egenix.com  Wed Jun  7 17:30:04 2006
From: mal at egenix.com (M.-A. Lemburg)
Date: Wed, 07 Jun 2006 17:30:04 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <4486E173.9040800@holdenweb.com>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>	<17533.48558.364179.717911@montanaro.dyndns.org>	<447DC377.4000504@egenix.com>	<17533.51096.462552.451772@montanaro.dyndns.org>	<447DD153.8080202@egenix.com>	<e5kkr6$14a$1@sea.gmane.org>	<447DE055.4040105@egenix.com><e5oa7b$kvl$1@sea.gmane.org>	<447FF966.5050807@egenix.com>	<e5p06h$tp1$1@sea.gmane.org>	<448141F6.7030806@v.loewis.de>	<e5rjkj$rph$1@sea.gmane.org>	<44859C12.8080306@egenix.com>
	<4485C152.5050705@egenix.com> <4486E173.9040800@holdenweb.com>
Message-ID: <4486F0FC.7090806@egenix.com>

Steve Holden wrote:
> M.-A. Lemburg wrote:
> [...]
>> Overall, time.clock() on Windows and time.time() on Linux appear
>> to give the best repeatability of tests, so I'll make those the
>> defaults in pybench 2.0.
>>
>> In short: Tim wins, I lose.
>>
>> Was a nice experiment, though ;-)
>>
> Perhaps so, but it would have been nice if you could have come to this 
> conclusion before asking me not to make this change, which would 
> otherwise have been checked in two weeks ago.

I still believe that measuring process time is better than
wall time and have tried hard to find suitable timers for
implementing this.

However, the tests using the various different timers have
shown that this approach doesn't work out due to the problems
with how process time is measured on the platforms in question
(Linux and Windows).

We should revisit this choice once suitable timers are
available on those platforms.
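
In the meantime, the defaults boil down to the obvious platform switch
(a sketch of the idea, not the actual pybench source):

    import sys
    import time

    if sys.platform == "win32":
        default_timer = time.clock   # high-resolution wall clock on Windows
    else:
        default_timer = time.time    # best repeatability observed on Linux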

Note that even with the wall-time timers and using the minimum
function as estimator you still get results which exhibit
random noise.

At least now we know that there's apparently no way to get it
removed.

> Still, as long as we can all agree on this and move forward I suppose 
> the intervening debate at least leaves us better-informed.

Isn't that the whole point of such a discussion ?

I'll check in pybench 2.0 once I've tested it enough.

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 07 2006)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________

::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! ::::

From pje at telecommunity.com  Wed Jun  7 17:41:59 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Wed, 07 Jun 2006 11:41:59 -0400
Subject: [Python-Dev] wsgiref doc draft; reviews/patches wanted
In-Reply-To: <20060607123830.GB9578@localhost.localdomain>
References: <5.1.1.6.0.20060606184324.0200b360@mail.telecommunity.com>
	<5.1.1.6.0.20060606184324.0200b360@mail.telecommunity.com>
Message-ID: <5.1.1.6.0.20060607114108.01e92b40@mail.telecommunity.com>

At 08:38 AM 6/7/2006 -0400, A.M. Kuchling wrote:
>On Tue, Jun 06, 2006 at 06:49:45PM -0400, Phillip J. Eby wrote:
> > Source: http://svn.eby-sarna.com/svnroot/wsgiref/docs
>
>Minor correction: svn://svn.eby-sarna.com/svnroot/wsgiref/docs
>(at least, http didn't work for me).

Oops...  I meant:

http://svn.eby-sarna.com/wsgiref/docs/

I kind of garbled the svn: and http: URLs; the HTTP one is for ViewCVS.


>The docs look good, and I think they'd be ready to go in.
>
>--amk


From brett at python.org  Wed Jun  7 19:01:24 2006
From: brett at python.org (Brett Cannon)
Date: Wed, 7 Jun 2006 10:01:24 -0700
Subject: [Python-Dev] How to fix the buffer object's broken char
	buffersupport
In-Reply-To: <129CEF95A523704B9D46959C922A28000282AE5B@nemesis.central.ccp.cc>
References: <129CEF95A523704B9D46959C922A28000282AE5B@nemesis.central.ccp.cc>
Message-ID: <bbaeab100606071001n3b8775acx8288c05d30c23b9a@mail.gmail.com>

On 6/7/06, Kristján V. Jónsson <kristjan at ccpgames.com> wrote:
> As a side note, it always seemed to me that the bf_getcharbuffer's semantics were poorly defined, at least in the 2.3 documentation.  Has that, and the need for it, changed recently?
>

I have tried to clean up the language a bit based on how I was
interpreting the docs.  Hopefully that isn't wrong since I am basing
my patch on it.  =)

But since I don't use buffers ever and Py3k is ditching buffers for
the bytes type, I am personally not going to worry about changing
their definition beyond clarifying the docs.

-Brett

> Kristján
>
> -----Original Message-----
> From: python-dev-bounces+kristjan=ccpgames.com at python.org [mailto:python-dev-bounces+kristjan=ccpgames.com at python.org] On Behalf Of Brett Cannon
> Sent: 6. júní 2006 22:30
> To: Python Dev
> Subject: [Python-Dev] How to fix the buffer object's broken char buffersupport
>
> If you run ``import array; int(buffer(array.array('c')))`` the interpreter will segfault.  While investigating this I discovered that buffer objects, for their tp_as_buffer->bf_getcharbuffer, return the result by calling the wrapped object bf_getreadbuffer or bf_getwritebuffer.  This is wrong since it is essentially redirecting the expected call to the wrong tp_as_buffer slot for the wrapped object.  Plus it doesn't have Py_TPFLAGS_HAVE_GETCHARBUFFER defined.
>
> I see two options here.  One is to remove the bf_getcharbuffer slot from the buffer object.  The other option is to fix it so that it only returns bf_getcharbuffer and doesn't redirect improperly (this also brings up the issue if Py_TPFLAGS_HAVE_GETCHARBUFFER should then also be defined for buffer objects).
>
> Since I don't use buffer objects I don't know if it is better to fix this or just rip it out.
>
> -Brett

From fredrik at pythonware.com  Wed Jun  7 19:20:53 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Wed, 07 Jun 2006 19:20:53 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <4485C152.5050705@egenix.com>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>	<17533.48558.364179.717911@montanaro.dyndns.org>	<447DC377.4000504@egenix.com>	<17533.51096.462552.451772@montanaro.dyndns.org>	<447DD153.8080202@egenix.com>	<e5kkr6$14a$1@sea.gmane.org>	<447DE055.4040105@egenix.com><e5oa7b$kvl$1@sea.gmane.org>	<447FF966.5050807@egenix.com>	<e5p06h$tp1$1@sea.gmane.org>	<448141F6.7030806@v.loewis.de>	<e5rjkj$rph$1@sea.gmane.org>
	<44859C12.8080306@egenix.com> <4485C152.5050705@egenix.com>
Message-ID: <e671tn$rhg$1@sea.gmane.org>

M.-A. Lemburg wrote:

> One interesting difference I found while testing on Windows
> vs. Linux is that the StringMappings test have quite a different
> run-time on both systems: around 2500ms on Windows vs. 590ms
> on Linux (on Python 2.4). UnicodeMappings doesn't show such
> a significant difference.
> 
> Perhaps the sprint changed this ?!

nope.

but stringbench revealed the same thing, of course.

the difference is most likely due to an inefficient implementation of 
locale-aware character lookups in Visual C (MSVC supports passing wide 
chars, but I don't think gcc bothers to do that; afaik, it's not part of 
the C standard).

solving this is straightforward (let the locale module set a global flag 
if Python runs under a non-C locale, and use a built-in table as long as 
that flag isn't set), but I haven't gotten around to dealing with that yet.

</F>


From martin at v.loewis.de  Wed Jun  7 22:06:27 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Wed, 07 Jun 2006 22:06:27 +0200
Subject: [Python-Dev] [Python-checkins] Python Regression Test
 Failuresrefleak (1)
In-Reply-To: <129CEF95A523704B9D46959C922A28000282AE64@nemesis.central.ccp.cc>
References: <129CEF95A523704B9D46959C922A28000282AE64@nemesis.central.ccp.cc>
Message-ID: <448731C3.9060209@v.loewis.de>

Kristján V. Jónsson wrote:
> Right, it is a FILETIME in the API, but the resolution stored on disk
> is limited to what the disk format provides.

When I said "it really is a FILETIME", I meant precisely this: it is a
file time on disk, too, for NTFS. Basically, the Win32 notion of
FILETIME *originates* from the way NTFS stores time stamps.

For FAT, the on-disk precision is worse, of course.

Regards,
Martin

From mal at egenix.com  Wed Jun  7 23:11:14 2006
From: mal at egenix.com (M.-A. Lemburg)
Date: Wed, 07 Jun 2006 23:11:14 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <e671tn$rhg$1@sea.gmane.org>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>	<17533.48558.364179.717911@montanaro.dyndns.org>	<447DC377.4000504@egenix.com>	<17533.51096.462552.451772@montanaro.dyndns.org>	<447DD153.8080202@egenix.com>	<e5kkr6$14a$1@sea.gmane.org>	<447DE055.4040105@egenix.com><e5oa7b$kvl$1@sea.gmane.org>	<447FF966.5050807@egenix.com>	<e5p06h$tp1$1@sea.gmane.org>	<448141F6.7030806@v.loewis.de>	<e5rjkj$rph$1@sea.gmane.org>	<44859C12.8080306@egenix.com>
	<4485C152.5050705@egenix.com> <e671tn$rhg$1@sea.gmane.org>
Message-ID: <448740F2.4020900@egenix.com>

Some more interesting results from comparing Python 2.4 (other) against
the current SVN snapshot (this):

Testnames                        minimum run-time        average  run-time
                                 this    other   diff    this    other   diff
-------------------------------------------------------------------------------
           BuiltinMethodLookup:   142ms   124ms  +14.5%   151ms   134ms  +13.1%
                 ConcatUnicode:    95ms   120ms  -20.7%   104ms   131ms  -20.5%
               CreateInstances:   102ms    92ms  +10.0%   107ms    96ms  +11.5%
       CreateUnicodeWithConcat:    98ms   122ms  -19.2%   103ms   129ms  -20.1%
             DictWithFloatKeys:   128ms   149ms  -14.1%   133ms   177ms  -24.8%
                NestedForLoops:   141ms   126ms  +11.8%   144ms   128ms  +12.3%
           PythonFunctionCalls:   131ms   108ms  +21.5%   133ms   109ms  +21.3%
                  SecondImport:   135ms   114ms  +18.3%   140ms   117ms  +20.0%
           SecondPackageImport:   136ms   122ms  +11.2%   144ms   124ms  +16.1%
         SecondSubmoduleImport:   166ms   146ms  +13.5%   171ms   148ms  +15.9%
       SimpleComplexArithmetic:   106ms   131ms  -19.1%   112ms   133ms  -16.2%
              StringPredicates:   109ms    96ms  +13.6%   114ms    99ms  +15.7%
                TryRaiseExcept:   119ms   137ms  -13.3%   122ms   140ms  -12.6%
               UnicodeMappings:   140ms   157ms  -10.8%   141ms   160ms  -11.4%
             UnicodePredicates:   111ms    98ms  +12.9%   115ms   100ms  +15.3%
                UnicodeSlicing:   101ms   114ms  -11.2%   108ms   116ms   -6.7%


It appears as if the import mechanism took a hit between the
versions.

The NFS sprint results are also visible.

A little disturbing is the slow-down for Python function calls
and the built-in method lookup. Did anything change in these parts
of the interpreter ?


This is the machine I used for running the pybench:
    Timer:  time.time
    Machine Details:
       Platform ID:  Linux-2.6.8-24.19-default-x86_64-with-SuSE-9.2-x86-64
       Processor:    x86_64

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 07 2006)
>>> Python/Zope Consulting and Support ...
>>> http://www.egenix.com/ mxODBC.Zope.Database.Adapter ...
>>> http://zope.egenix.com/ mxODBC, mxDateTime, mxTextTools ...
>>> http://python.egenix.com/
________________________________________________________________________

::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! ::::

From mal at egenix.com  Wed Jun  7 23:19:17 2006
From: mal at egenix.com (M.-A. Lemburg)
Date: Wed, 07 Jun 2006 23:19:17 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <448740F2.4020900@egenix.com>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>	<17533.48558.364179.717911@montanaro.dyndns.org>	<447DC377.4000504@egenix.com>	<17533.51096.462552.451772@montanaro.dyndns.org>	<447DD153.8080202@egenix.com>	<e5kkr6$14a$1@sea.gmane.org>	<447DE055.4040105@egenix.com><e5oa7b$kvl$1@sea.gmane.org>	<447FF966.5050807@egenix.com>	<e5p06h$tp1$1@sea.gmane.org>	<448141F6.7030806@v.loewis.de>	<e5rjkj$rph$1@sea.gmane.org>	<44859C12.8080306@egenix.com>	<4485C152.5050705@egenix.com>
	<e671tn$rhg$1@sea.gmane.org> <448740F2.4020900@egenix.com>
Message-ID: <448742D5.6040401@egenix.com>

M.-A. Lemburg wrote:
> Some more interesting results from comparing Python 2.4 (other) against
> the current SVN snapshot (this):

Here's the list again, this time without wrapping (sigh):

Testnames                        minimum run-time        average  run-time
                                 this    other   diff    this    other   diff
-------------------------------------------------------------------------------
           BuiltinMethodLookup:   141ms   124ms  +13.9%   148ms   134ms  +10.8%
                 ConcatUnicode:    97ms   120ms  -19.5%   104ms   131ms  -20.6%
               CreateInstances:   102ms    92ms  +10.3%   104ms    96ms   +8.0%
       CreateUnicodeWithConcat:    98ms   122ms  -19.1%   103ms   129ms  -20.6%
             DictWithFloatKeys:   128ms   149ms  -14.4%   130ms   177ms  -26.4%
                NestedForLoops:   140ms   126ms  +11.1%   143ms   128ms  +11.8%
           PythonFunctionCalls:   130ms   108ms  +21.3%   132ms   109ms  +20.9%
                  SecondImport:   136ms   114ms  +18.9%   138ms   117ms  +18.2%
           SecondPackageImport:   141ms   122ms  +15.4%   143ms   124ms  +15.3%
         SecondSubmoduleImport:   166ms   146ms  +13.3%   179ms   148ms  +21.3%
       SimpleComplexArithmetic:   107ms   131ms  -18.5%   121ms   133ms   -9.2%
              StringPredicates:   109ms    96ms  +13.5%   117ms    99ms  +18.7%
                TryRaiseExcept:   115ms   137ms  -16.2%   129ms   140ms   -7.6%
               UnicodeMappings:   140ms   157ms  -10.7%   142ms   160ms  -11.3%
             UnicodePredicates:   111ms    98ms  +13.3%   115ms   100ms  +15.6%
                UnicodeSlicing:   103ms   114ms  -10.1%   108ms   116ms   -6.7%

> It appears as if the import mechanism took a hit between the
> versions.
> 
> The NFS sprint results are also visible.
> 
> A little disturbing is the slow-down for Python function calls
> and the built-in method lookup. Did anything change in these parts
> of the interpreter ?
> 
> 
> This is the machine I used for running the pybench:
>     Timer:  time.time
>     Machine Details:
>        Platform ID:  Linux-2.6.8-24.19-default-x86_64-with-SuSE-9.2-x86-64
>        Processor:    x86_64
> 

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 07 2006)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________

::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! ::::

From fredrik at pythonware.com  Wed Jun  7 23:22:19 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Wed, 07 Jun 2006 23:22:19 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <448740F2.4020900@egenix.com>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>	<17533.48558.364179.717911@montanaro.dyndns.org>	<447DC377.4000504@egenix.com>	<17533.51096.462552.451772@montanaro.dyndns.org>	<447DD153.8080202@egenix.com>	<e5kkr6$14a$1@sea.gmane.org>	<447DE055.4040105@egenix.com><e5oa7b$kvl$1@sea.gmane.org>	<447FF966.5050807@egenix.com>	<e5p06h$tp1$1@sea.gmane.org>	<448141F6.7030806@v.loewis.de>	<e5rjkj$rph$1@sea.gmane.org>	<44859C12.8080306@egenix.com>	<4485C152.5050705@egenix.com>
	<e671tn$rhg$1@sea.gmane.org> <448740F2.4020900@egenix.com>
Message-ID: <e67g2b$o33$1@sea.gmane.org>

M.-A. Lemburg wrote:

> Some more interesting results from comparing Python 2.4 (other) against
> the current SVN snapshot (this):

been there, done that, found the results lacking.

we spent a large part of the first NFS day investigating all
reported slowdowns, and found that only one slowdown could be 
independently confirmed (the exception issue).

try running some of your subtests under timeit, and see if you
can repeat the results.
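
e.g. for the function-call case (the statement here is just a stand-in
for whatever the subtest actually does):

    import timeit

    t = timeit.Timer("f(1, 2, 3)", "def f(a, b, c): return a, b, c")
    print t.timeit(number=1000000), "seconds per 1000000 calls"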

</F>


From raymond.hettinger at verizon.net  Wed Jun  7 23:49:23 2006
From: raymond.hettinger at verizon.net (Raymond Hettinger)
Date: Wed, 07 Jun 2006 14:49:23 -0700
Subject: [Python-Dev] Is implicit underscore assignment buggy?
Message-ID: <001501c68a7c$4247b010$ea146b0a@RaymondLaptop1>

When the result of an expression is None, the interactive interpreter
correctly suppresses the display of the result.  However, it also
suppresses the underscore assignment.  I'm not sure if that is correct
or desirable because a subsequent statement has no way of knowing
whether the underscore assignment is current or whether it represents an
earlier non-None result.

Here's an example from a co-worker's regular expression experiments:

>>> import re, string
>>> re.search('lmnop', string.letters)
<_sre.SRE_Match object at 0xb6f2c480>
>>> re.search('pycon', string.letters)
>>> if _ is not None:
...     print _.group()
lmnop


Raymond


From fredrik at pythonware.com  Thu Jun  8 00:04:27 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Thu, 08 Jun 2006 00:04:27 +0200
Subject: [Python-Dev] Is implicit underscore assignment buggy?
In-Reply-To: <001501c68a7c$4247b010$ea146b0a@RaymondLaptop1>
References: <001501c68a7c$4247b010$ea146b0a@RaymondLaptop1>
Message-ID: <e67ihc$1hg$1@sea.gmane.org>

Raymond Hettinger wrote:

> When the result of an expression is None, the interactive interpreter 
> correctly suppresses the display of the result.  However, it also 
>  suppresses the underscore assignment.  I'm not sure if that is correct 
>  or desirable because a subsequent statement has no way of knowing 
>  whether the underscore assignment is current or whether it represents an 
>  earlier non-None result.

why would a subsequent statement need to know that ?  are you sure you 
didn't mean "user" instead of "subsequent statement" ?

for users, it's actually quite simple to figure out what's in the _ 
variable: it's the most recently *printed* result.  if you cannot see 
it, it's not in there.
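
(fwiw, the default sys.displayhook behaves roughly like the sketch below;
a session that wants _ to track None anyway could install a variant that
drops the None check:)

    import sys
    import __builtin__

    def displayhook(value):
        # roughly the default behaviour: None prints nothing and
        # leaves _ untouched
        if value is None:
            return
        __builtin__._ = None    # cleared first, rebound after printing
        print repr(value)
        __builtin__._ = value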

</F>


From raymond.hettinger at verizon.net  Thu Jun  8 00:35:58 2006
From: raymond.hettinger at verizon.net (Raymond Hettinger)
Date: Wed, 07 Jun 2006 15:35:58 -0700
Subject: [Python-Dev] Is implicit underscore assignment buggy?
References: <001501c68a7c$4247b010$ea146b0a@RaymondLaptop1>
	<e67ihc$1hg$1@sea.gmane.org>
Message-ID: <003501c68a82$c36908a0$ea146b0a@RaymondLaptop1>

> for users, it's actually quite simple to figure out what's in the _ 
> variable: it's the most recently *printed* result.  if you cannot see 
> it, it's not in there.

Of course, there's a pattern to it.  The question is whether it is the *right*
behavior.  Would the underscore assignment be more useful and intuitive
if it always contained the immediately preceding result, even if it was None?
In some cases (such as the regexp example), None is a valid and useful
possible result of a computation and you may want to access that result with _.

BTW, there is a trivial exception to the "most recently printed result" rule.

    >>> 13
    13
    >>> _ = None
    >>> _                   # _ is no longer the most recently printed result


Raymond




From aahz at pythoncraft.com  Thu Jun  8 00:41:22 2006
From: aahz at pythoncraft.com (Aahz)
Date: Wed, 7 Jun 2006 15:41:22 -0700
Subject: [Python-Dev] Is implicit underscore assignment buggy?
In-Reply-To: <003501c68a82$c36908a0$ea146b0a@RaymondLaptop1>
References: <001501c68a7c$4247b010$ea146b0a@RaymondLaptop1>
	<e67ihc$1hg$1@sea.gmane.org>
	<003501c68a82$c36908a0$ea146b0a@RaymondLaptop1>
Message-ID: <20060607224122.GC20875@panix.com>

On Wed, Jun 07, 2006, Raymond Hettinger wrote:
>Fredrik:
>>
>> for users, it's actually quite simple to figure out what's in the _ 
>> variable: it's the most recently *printed* result.  if you cannot see 
>> it, it's not in there.
> 
> Of course, there's a pattern to it.  The question is whether it is the
> *right* behavior.  Would the underscore assignment be more useful and
> intuitive if it always contained the immediately preceding result,
> even if it was None?  In some cases (such as the regexp example), None
> is a valid and useful possible result of a computation and you may
> want to access that result with _.

My take is that Fredrik is correct about the current behavior being most
generally useful even if it is slightly less consistent, as well as being
undesired in rare circumstances.  Consider that your message is the only
one I've seen in more than five years of monitoring python-dev and
c.l.py.
-- 
Aahz (aahz at pythoncraft.com)           <*>         http://www.pythoncraft.com/

"I saw `cout' being shifted "Hello world" times to the left and stopped
right there."  --Steve Gonedes

From jeremy at alum.mit.edu  Thu Jun  8 00:49:38 2006
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Wed, 7 Jun 2006 18:49:38 -0400
Subject: [Python-Dev] Is implicit underscore assignment buggy?
In-Reply-To: <003501c68a82$c36908a0$ea146b0a@RaymondLaptop1>
References: <001501c68a7c$4247b010$ea146b0a@RaymondLaptop1>
	<e67ihc$1hg$1@sea.gmane.org>
	<003501c68a82$c36908a0$ea146b0a@RaymondLaptop1>
Message-ID: <e8bf7a530606071549t3d956821kb91801577b257634@mail.gmail.com>

On 6/7/06, Raymond Hettinger <raymond.hettinger at verizon.net> wrote:
> > for users, it's actually quite simple to figure out what's in the _
> > variable: it's the most recently *printed* result.  if you cannot see
> > it, it's not in there.
>
> Of course, there's a pattern to it.  The question is whether it is the *right*
> behavior.  Would the underscore assignment be more useful and intuitive
> if it always contained the immediately preceding result, even if it was None?
> In some cases (such as the regexp example), None is a valid and useful
> possible result of a computation and you may want to access that result with _.

If you're using _ in an interactive environment, it's usually because
you don't want to re-type the value of the expression.  If the value
is None, it isn't hard to type.

> BTW, there is a trivial exception to the "most recently printed result" rule.
>
>     >>> 13
>     13
>     >>> _ = None
>     >>> _                   # _ is no longer the most recently printed result

If you want to assign to _, the results are your own fault.

Jeremy

From bob at redivi.com  Thu Jun  8 01:19:14 2006
From: bob at redivi.com (Bob Ippolito)
Date: Wed, 7 Jun 2006 16:19:14 -0700
Subject: [Python-Dev] Is implicit underscore assignment buggy?
In-Reply-To: <20060607224122.GC20875@panix.com>
References: <001501c68a7c$4247b010$ea146b0a@RaymondLaptop1>
	<e67ihc$1hg$1@sea.gmane.org>
	<003501c68a82$c36908a0$ea146b0a@RaymondLaptop1>
	<20060607224122.GC20875@panix.com>
Message-ID: <22D06A6E-9486-4A33-B390-0DEA3D205666@redivi.com>


On Jun 7, 2006, at 3:41 PM, Aahz wrote:

> On Wed, Jun 07, 2006, Raymond Hettinger wrote:
>> Fredrik:
>>>
>>> for users, it's actually quite simple to figure out what's in the _
>>> variable: it's the most recently *printed* result.  if you cannot  
>>> see
>>> it, it's not in there.
>>
>> Of course, there's a pattern to it.  The question is whether it is  
>> the
>> *right* behavior.  Would the underscore assignment be more useful and
>> intuitive if it always contained the immediately preceding result,
>> even if it was None?  In some cases (such as the regexp example),  
>> None
>> is a valid and useful possible result of a computation and you may
>> want to access that result with _.
>
> My take is that Fredrik is correct about the current behavior being  
> most
> generally useful even if it is slightly less consistent, as well as  
> being
> undesired in rare circumstances.  Consider that your message is the  
> only
> one I've seen in more than five years of monitoring python-dev and
> c.l.py.

I agree. I've definitely made use of the current behavior, e.g. for  
printing a different representation of _ before doing something else  
with it.

-bob


From guido at python.org  Thu Jun  8 01:25:34 2006
From: guido at python.org (Guido van Rossum)
Date: Wed, 7 Jun 2006 16:25:34 -0700
Subject: [Python-Dev] Is implicit underscore assignment buggy?
In-Reply-To: <001501c68a7c$4247b010$ea146b0a@RaymondLaptop1>
References: <001501c68a7c$4247b010$ea146b0a@RaymondLaptop1>
Message-ID: <ca471dc20606071625y55199a73r6ae369d5c8de4f52@mail.gmail.com>

This is by design.

The intent is that as long as you call something that returns no
value, your last result is not thrown away. IOW _ is the last result
that wasn't None.

Please don't change this.
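
For reference, the rule described above is essentially what the default
displayhook does; a minimal Python sketch of that behaviour (the real
implementation is in C, in sysmodule.c):

    import __builtin__

    def displayhook(value):
        # A None result is neither printed nor bound to _, so _ keeps
        # the last non-None result.
        if value is None:
            return
        __builtin__._ = None    # unbind _ first so repr() can't pick up a stale value
        print repr(value)
        __builtin__._ = value

    # The interactive loop effectively runs sys.displayhook(result) for
    # each expression statement.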

--Guido

On 6/7/06, Raymond Hettinger <raymond.hettinger at verizon.net> wrote:
> When the result of an expression is None, the interactive interpreter
> correctly suppresses the display of the result.  However, it also
>  suppresses the underscore assignment.  I'm not sure if that is correct
>  or desirable because a subsequent statement has no way of knowing
>  whether the underscore assignment is current or whether it represents an
>  earlier non-None result.
>
>  Here's an example from a co-worker's regular expression experiments:
>
> >>> import re, string
> >>> re.search('lmnop', string.letters)
> <_sre.SRE_Match object at 0xb6f2c480>
> >>> re.search('pycon', string.letters)
> >>> if _ is not None:
>  ...         print _.group()
> lmnop
>
>
>
>  Raymond
>
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org
>


-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From brett at python.org  Thu Jun  8 01:55:19 2006
From: brett at python.org (Brett Cannon)
Date: Wed, 7 Jun 2006 16:55:19 -0700
Subject: [Python-Dev] Is "t#" argument format meant to be char buffer,
	or just read-only?
Message-ID: <bbaeab100606071655u3cce5ff3ja950a40ad19065fb@mail.gmail.com>

I fixed the crasher for ``int(buffer(array.array('c')))`` by making
buffer objects operate properly.  Problem is that by doing so I broke
the ctypes tests with a bunch of similar errors::

======================================================================
ERROR: test_endian_double (ctypes.test.test_byteswap.Test)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/code/python/trunk/Lib/ctypes/test/test_byteswap.py", line
134, in test_endian_double
    self.failUnlessEqual(bin(struct.pack("d", math.pi)), bin(s))
  File "/code/python/trunk/Lib/ctypes/test/test_byteswap.py", line 7, in bin
    return hexlify(buffer(s)).upper()
TypeError: requested buffer type not available

Turns out the test does the following::

  binascii.hexlify(buffer(ctypes.c_double(math.pi)))

This is a problem because binascii.hexlify() uses "t#" as its argument
format string to PyArg_ParseTuple() and that fails now with a
TypeError since ctypes.c_double (which subclasses ctypes._SimpleCData
which defines the buffer interface) does not have a char buffer
defined.

Now this used to pass since buffer objects just used the read or write
buffer in place of the char buffer, regardless if the wrapped object
had a char buffer function defined.

But in checking out what "t#" did, I found a slight ambiguity in the
docs.  The docs say "read-only character buffer" for the short
description, but "read-only buffer" for the longer description.  Which
is it?

Plus, Thomas, you might want to change _SimpleCData if you want it to
truly support char buffers.

-Brett

From rhettinger at ewtllc.com  Wed Jun  7 21:23:44 2006
From: rhettinger at ewtllc.com (Raymond Hettinger)
Date: Wed, 07 Jun 2006 12:23:44 -0700
Subject: [Python-Dev] Is implicit underscore assignment buggy?
Message-ID: <448727C0.7000800@ewtllc.com>

When the result of an expression is None, the interactive interpreter 
correctly suppresses the display of the result.  However, it also 
suppresses the underscore assignment.  I'm not sure if that is correct 
or desirable because a subsequent statement has no way of knowing 
whether the underscore assignment is current or whether it represents an 
earlier non-None result.

Here's an example from a co-worker's regular expression experiments:

 >>> import re, string
 >>> re.search('lmnop', string.letters)
<_sre.SRE_Match object at 0xb6f2c480>
 >>> re.search('pycon', string.letters)
 >>> if _ is not None:
...         print _.group()
lmnop



Raymond

From tjreedy at udel.edu  Thu Jun  8 04:31:09 2006
From: tjreedy at udel.edu (Terry Reedy)
Date: Wed, 7 Jun 2006 22:31:09 -0400
Subject: [Python-Dev] Is implicit underscore assignment buggy?
References: <001501c68a7c$4247b010$ea146b0a@RaymondLaptop1>
	<ca471dc20606071625y55199a73r6ae369d5c8de4f52@mail.gmail.com>
Message-ID: <e6825d$esq$1@sea.gmane.org>


"Guido van Rossum" <guido at python.org> wrote in message 
news:ca471dc20606071625y55199a73r6ae369d5c8de4f52 at mail.gmail.com...
> This is by design.
>
> The intent is that as long as you call something that returns no
> value, your last result is not thrown away. IOW _ is the last result
> that wasn't None.
>
> Please don't change this.

What might be improved is the documentation of this, which I could not find 
in a few minutes of searching.  As of current 2.4 docs, the Tutorial, 
Language, and Library manuals all have index pages for identifiers 
beginning with '_' but none contain '_' itself.  (And none have entries for 
'underscore'.)

Language Reference 2.3.2 Reserved classes of identifiers lists all the 
other special identifier classes starting with '_' but not '_' itself.  I 
think this would be a good place for an entry for '_' and its special 
meaning in interactive mode.

Tutorial 2.1.2 Interactive Mode would be another place to mention the 
special use of '_'.

If there is something I missed (that is not indexed), then I missed it.

Terry Jan Reedy




From tjreedy at udel.edu  Thu Jun  8 06:18:23 2006
From: tjreedy at udel.edu (Terry Reedy)
Date: Thu, 8 Jun 2006 00:18:23 -0400
Subject: [Python-Dev] Symbol page for Language Reference Manual Index
Message-ID: <e688ef$5ip$1@sea.gmane.org>

Many math books have an index or glossary of symbols connecting the symbols 
used in the text with their meaning in that text.  I have often found these 
useful.

I believe that people learning and using Python would similarly benefit 
from an index to the non-alphabetic symbols (and multi-symbol syntactic 
units) used in Python code.

I am willing to do perhaps half the work needed to produce such in time for 
the 2.5 release.  In particular, I am willing to write a plain text file 
listing symbols (in ascii sort order) and section numbers, in an agreed-on 
format, if the idea is approved and someone else agrees to convert section 
numbers to page links and do the necessary latex/html formatting (with a 
Python script?).

The main formatting decision is between the two formats already used for 
identifiers versus topics.  For example, for the multiple meanings of '()' 
(which is one syntactic unit):

()  (calls) 5.3.4
()  (expressions) 5.2.3
()  (tuple literals) 5.2.3

with each line a link -- see '__dict__' for an example --  or

()
    calls 5.3.4
    expressions 5.2.3
    tuple 5.2.3

with each subline a link -- see arithmetic for an example.

[I just realized that some links need to be within-page rather than to the 
top of the page and that I can cut and paste additional info if I find the 
appropriate regular index entry, such as
http://docs.python.org/ref/parenthesized.html#l2h-342 for 5.2.3.  But I 
will work this sort of thing out with whoever formats.]
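
As a rough illustration of the conversion step, here is a short sketch
(the tab-separated input format and the LaTeX markup used are assumptions,
not an agreed format):

    import sys

    # Read "symbol<TAB>description<TAB>section" lines and emit one LaTeX
    # line per entry; a real script would emit whatever markup the
    # formatter settles on.
    for line in open(sys.argv[1]):
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        symbol, description, section = line.split('\t')
        print r'\item[\code{%s}] %s (section %s)' % (symbol, description, section)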

Terry Jan Reedy




From theller at python.net  Thu Jun  8 10:27:09 2006
From: theller at python.net (Thomas Heller)
Date: Thu, 08 Jun 2006 10:27:09 +0200
Subject: [Python-Dev] Is "t#" argument format meant to be char buffer,
	or just read-only?
In-Reply-To: <bbaeab100606071655u3cce5ff3ja950a40ad19065fb@mail.gmail.com>
References: <bbaeab100606071655u3cce5ff3ja950a40ad19065fb@mail.gmail.com>
Message-ID: <e68n0o$fl4$1@sea.gmane.org>

Brett Cannon wrote:
> I fixed the crasher for ``int(buffer(array.array('c')))`` by making
> buffer objects operate properly.  Problem is that by doing so I broke
> the ctypes tests with a bunch of similar errors::

You have not yet committed this fix, right?

> ======================================================================
> ERROR: test_endian_double (ctypes.test.test_byteswap.Test)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "/code/python/trunk/Lib/ctypes/test/test_byteswap.py", line
> 134, in test_endian_double
>     self.failUnlessEqual(bin(struct.pack("d", math.pi)), bin(s))
>   File "/code/python/trunk/Lib/ctypes/test/test_byteswap.py", line 7, in bin
>     return hexlify(buffer(s)).upper()
> TypeError: requested buffer type not available
> 
> Turns out the test does the following::
> 
>   binascii.hexlify(buffer(ctypes.c_double(math.pi)))
> 
> This is a problem because binascii.hexlify() uses "t#" as its argument
> format string to PyArg_ParseTuple() and that fails now with a
> TypeError since ctypes.c_double (which subclasses ctypes._SimpleCData
> which defines the buffer interface) does not have a char buffer
> defined.
> 
> Now this used to pass since buffer objects just used the read or write
> buffer in place of the char buffer, regardless if the wrapped object
> had a char buffer function defined.
> 
> But in checking out what "t#" did, I found a slight ambiguity in the
> docs.  The docs say "read-only character buffer" for the short
> description, but "read-only buffer" for the longer description.  Which
> is it?

I am using binascii.hexlify(buffer(obj)) as a simple way to look at the bytes of
the memory block.

I think that hexlify should be able to use any buffer object that has
a readable memory block, not only those with charbuffers.

The docs say that the binascii methods are used to "convert between binary
and various ASCII-encoded binary representations".
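
In the meantime, a minimal Python-level workaround sketch (not a proposal
for binascii itself) is to copy the readable memory block into a plain
string before hexlifying:

    import binascii, ctypes, math

    def bin(obj):
        # str(buffer(obj)) copies the object's readable memory block into
        # an ordinary byte string, which hexlify() accepts unconditionally.
        return binascii.hexlify(str(buffer(obj))).upper()

    print bin(ctypes.c_double(math.pi))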


> Plus, Thomas, you might want to change _SimpleCData if you want it to
> truly support char buffers.

I did not implement that because the memory block contains binary data,
not text.

Thomas


From mal at egenix.com  Thu Jun  8 10:37:51 2006
From: mal at egenix.com (M.-A. Lemburg)
Date: Thu, 08 Jun 2006 10:37:51 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <e67g2b$o33$1@sea.gmane.org>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>	<17533.48558.364179.717911@montanaro.dyndns.org>	<447DC377.4000504@egenix.com>	<17533.51096.462552.451772@montanaro.dyndns.org>	<447DD153.8080202@egenix.com>	<e5kkr6$14a$1@sea.gmane.org>	<447DE055.4040105@egenix.com><e5oa7b$kvl$1@sea.gmane.org>	<447FF966.5050807@egenix.com>	<e5p06h$tp1$1@sea.gmane.org>	<448141F6.7030806@v.loewis.de>	<e5rjkj$rph$1@sea.gmane.org>	<44859C12.8080306@egenix.com>	<4485C152.5050705@egenix.com>	<e671tn$rhg$1@sea.gmane.org>
	<448740F2.4020900@egenix.com> <e67g2b$o33$1@sea.gmane.org>
Message-ID: <4487E1DF.3030302@egenix.com>

Fredrik Lundh wrote:
> M.-A. Lemburg wrote:
> 
>> Some more interesting results from comparing Python 2.4 (other) against
>> the current SVN snapshot (this):
> 
> been there, done that, found the results lacking.
> 
> we spent a large part of the first NFS day investigating all
> reported slowdowns, and found that only one slowdown could be 
> independently confirmed (the exception issue).
> 
> try running some of your subtests under timeit, and see if you
> can repeat the results.

The results were produced by pybench 2.0 and use time.time
on Linux, plus a different calibration strategy. As a result
these timings are a lot more repeatable than with pybench 1.3
and I've confirmed the timings using several runs to make sure.

Still, here's the timeit.py measurement of the PythonFunctionCall
test (note that I've scaled down the test in terms of number
of rounds for timeit.py):

Python 2.4:
10 loops, best of 3: 21.9 msec per loop
10 loops, best of 3: 21.8 msec per loop
10 loops, best of 3: 21.8 msec per loop
10 loops, best of 3: 21.9 msec per loop
10 loops, best of 3: 21.9 msec per loop

Python 2.5 as of last night:
100 loops, best of 3: 18 msec per loop
100 loops, best of 3: 18.4 msec per loop
100 loops, best of 3: 18.4 msec per loop
100 loops, best of 3: 18.2 msec per loop

The pybench 2.0 result:

PythonFunctionCalls:   130ms   108ms  +21.3%   132ms   109ms  +20.9%

Looks about right, I'd say.

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 08 2006)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________
2006-07-03: EuroPython 2006, CERN, Switzerland              24 days left

::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! ::::

From greg.ewing at canterbury.ac.nz  Thu Jun  8 10:52:01 2006
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Thu, 08 Jun 2006 20:52:01 +1200
Subject: [Python-Dev] Is "t#" argument format meant to be char buffer,
 or just read-only?
In-Reply-To: <e68n0o$fl4$1@sea.gmane.org>
References: <bbaeab100606071655u3cce5ff3ja950a40ad19065fb@mail.gmail.com>
	<e68n0o$fl4$1@sea.gmane.org>
Message-ID: <4487E531.70504@canterbury.ac.nz>

Thomas Heller wrote:

> I think that hexlify should be able to use any buffer object that has
> a readable memory block, not only those with charbuffers.
> 
> The docs say that the binascii methods are used to "convert between binary
> and various ASCII-encoded binary representations".

So why the heck is hexlify looking for charbuffers and not
byte buffers in the first place?

--
Greg

From ncoghlan at gmail.com  Thu Jun  8 12:48:07 2006
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 08 Jun 2006 20:48:07 +1000
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <4487E1DF.3030302@egenix.com>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>	<17533.48558.364179.717911@montanaro.dyndns.org>	<447DC377.4000504@egenix.com>	<17533.51096.462552.451772@montanaro.dyndns.org>	<447DD153.8080202@egenix.com>	<e5kkr6$14a$1@sea.gmane.org>	<447DE055.4040105@egenix.com><e5oa7b$kvl$1@sea.gmane.org>	<447FF966.5050807@egenix.com>	<e5p06h$tp1$1@sea.gmane.org>	<448141F6.7030806@v.loewis.de>	<e5rjkj$rph$1@sea.gmane.org>	<44859C12.8080306@egenix.com>	<4485C152.5050705@egenix.com>	<e671tn$rhg$1@sea.gmane.org>	<448740F2.4020900@egenix.com>
	<e67g2b$o33$1@sea.gmane.org> <4487E1DF.3030302@egenix.com>
Message-ID: <44880067.70601@gmail.com>

M.-A. Lemburg wrote:
> Still, here's the timeit.py measurement of the PythonFunctionCall
> test (note that I've scaled down the test in terms of number
> of rounds for timeit.py):
> 
> Python 2.4:
> 10 loops, best of 3: 21.9 msec per loop
> 10 loops, best of 3: 21.8 msec per loop
> 10 loops, best of 3: 21.8 msec per loop
> 10 loops, best of 3: 21.9 msec per loop
> 10 loops, best of 3: 21.9 msec per loop
> 
> Python 2.5 as of last night:
> 100 loops, best of 3: 18 msec per loop
> 100 loops, best of 3: 18.4 msec per loop
> 100 loops, best of 3: 18.4 msec per loop
> 100 loops, best of 3: 18.2 msec per loop
> 
> The pybench 2.0 result:
> 
> PythonFunctionCalls:   130ms   108ms  +21.3%   132ms   109ms  +20.9%
> 
> Looks about right, I'd say.

If the pybench result is still 2.5 first, then the two results are 
contradictory - your timeit results are showing Python 2.5 as being faster 
(assuming the headings are on the right blocks of tests).

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------
             http://www.boredomandlaziness.org

From mal at egenix.com  Thu Jun  8 12:53:24 2006
From: mal at egenix.com (M.-A. Lemburg)
Date: Thu, 08 Jun 2006 12:53:24 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <44880067.70601@gmail.com>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>	<17533.48558.364179.717911@montanaro.dyndns.org>	<447DC377.4000504@egenix.com>	<17533.51096.462552.451772@montanaro.dyndns.org>	<447DD153.8080202@egenix.com>	<e5kkr6$14a$1@sea.gmane.org>	<447DE055.4040105@egenix.com><e5oa7b$kvl$1@sea.gmane.org>	<447FF966.5050807@egenix.com>	<e5p06h$tp1$1@sea.gmane.org>	<448141F6.7030806@v.loewis.de>	<e5rjkj$rph$1@sea.gmane.org>	<44859C12.8080306@egenix.com>	<4485C152.5050705@egenix.com>	<e671tn$rhg$1@sea.gmane.org>	<448740F2.4020900@egenix.com>	<e67g2b$o33$1@sea.gmane.org>
	<4487E1DF.3030302@egenix.com> <44880067.70601@gmail.com>
Message-ID: <448801A4.5070704@egenix.com>

Nick Coghlan wrote:
> M.-A. Lemburg wrote:
>> Still, here's the timeit.py measurement of the PythonFunctionCall
>> test (note that I've scaled down the test in terms of number
>> of rounds for timeit.py):
>>

Python 2.5 as of last night:

>> 10 loops, best of 3: 21.9 msec per loop
>> 10 loops, best of 3: 21.8 msec per loop
>> 10 loops, best of 3: 21.8 msec per loop
>> 10 loops, best of 3: 21.9 msec per loop
>> 10 loops, best of 3: 21.9 msec per loop

Python 2.4:

>> 100 loops, best of 3: 18 msec per loop
>> 100 loops, best of 3: 18.4 msec per loop
>> 100 loops, best of 3: 18.4 msec per loop
>> 100 loops, best of 3: 18.2 msec per loop
>>
>> The pybench 2.0 result:
>>
>> PythonFunctionCalls:   130ms   108ms  +21.3%   132ms   109ms  +20.9%
>>
>> Looks about right, I'd say.
> 
> If the pybench result is still 2.5 first, then the two results are
> contradictory - your timeit results are showing Python 2.5 as being
> faster (assuming the headings are on the right blocks of tests).

<sigh> I put the headings for the timeit.py output on the
wrong blocks. Thanks for pointing this out.

Anyway, try for yourself. Just add these lines to pybench/Calls.py
at the end and then run Calls.py using Python 2.4 vs. 2.5:

### Test to make Fredrik happy...

if __name__ == '__main__':
    import timeit
    timeit.TestClass = PythonFunctionCalls
    timeit.main(['-s', 'test = TestClass(); test.rounds = 1000',
                 'test.test()'])
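
By analogy, the same hook can be appended to pybench/Exceptions.py for the
TryRaiseExcept test (a sketch, assuming that's where the test class is
defined -- not the actual file):

    if __name__ == '__main__':
        import timeit
        timeit.TestClass = TryRaiseExcept
        timeit.main(['-s', 'test = TestClass(); test.rounds = 1000',
                     'test.test()'])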

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 08 2006)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________
2006-07-03: EuroPython 2006, CERN, Switzerland              24 days left

::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! ::::

From fredrik at pythonware.com  Thu Jun  8 13:07:23 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Thu, 08 Jun 2006 13:07:23 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <4487E1DF.3030302@egenix.com>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>	<17533.48558.364179.717911@montanaro.dyndns.org>	<447DC377.4000504@egenix.com>	<17533.51096.462552.451772@montanaro.dyndns.org>	<447DD153.8080202@egenix.com>	<e5kkr6$14a$1@sea.gmane.org>	<447DE055.4040105@egenix.com><e5oa7b$kvl$1@sea.gmane.org>	<447FF966.5050807@egenix.com>	<e5p06h$tp1$1@sea.gmane.org>	<448141F6.7030806@v.loewis.de>	<e5rjkj$rph$1@sea.gmane.org>	<44859C12.8080306@egenix.com>	<4485C152.5050705@egenix.com>	<e671tn$rhg$1@sea.gmane.org>	<448740F2.4020900@egenix.com>
	<e67g2b$o33$1@sea.gmane.org> <4487E1DF.3030302@egenix.com>
Message-ID: <e690dc$gat$1@sea.gmane.org>

M.-A. Lemburg wrote:

> Still, here's the timeit.py measurement of the PythonFunctionCall
> test (note that I've scaled down the test in terms of number
> of rounds for timeit.py):
> 
> Python 2.4:
> 10 loops, best of 3: 21.9 msec per loop
> 10 loops, best of 3: 21.8 msec per loop
> 10 loops, best of 3: 21.8 msec per loop
> 10 loops, best of 3: 21.9 msec per loop
> 10 loops, best of 3: 21.9 msec per loop
> 
> Python 2.5 as of last night:
> 100 loops, best of 3: 18 msec per loop
> 100 loops, best of 3: 18.4 msec per loop
> 100 loops, best of 3: 18.4 msec per loop
> 100 loops, best of 3: 18.2 msec per loop
> 
> The pybench 2.0 result:
> 
> PythonFunctionCalls:   130ms   108ms  +21.3%   132ms   109ms  +20.9%
> 
> Looks about right, I'd say.

except for the sign, I'd say.

pybench reported a slowdown from 108 to 130 ms, which prompted you to write

     > A little disturbing is the slow-down for Python function calls
     > and the built-in method lookup. Did anything change in these parts
     > of the interpreter ?

but timeit is reporting a ~20% speedup (21.8 to 18 ms).  on my machine, 
using the loop body from Calls.PythonFunctionCalls.test as a separate 
global function called by timeit, I get:

     25 usec per loop for Python 2.4.3
     22.5 usec per loop for Python 2.5 trunk

which seems to match your timeit results quite well.  and we *did* speed 
up frame handling on the reykjavik sprint.

another sprint optimization was exception handling, and pybench did 
notice this (137 to 115 ms).  here's what timeit says on my machine:

     15.1 usec per loop for Python 2.4.3
     23.5 usec per loop for Python 2.5 alpha 2
     11.6 usec per loop for Python 2.5 trunk

something's not quite right...

</F>


From fredrik at pythonware.com  Thu Jun  8 13:14:34 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Thu, 08 Jun 2006 13:14:34 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <448801A4.5070704@egenix.com>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>	<17533.48558.364179.717911@montanaro.dyndns.org>	<447DC377.4000504@egenix.com>	<17533.51096.462552.451772@montanaro.dyndns.org>	<447DD153.8080202@egenix.com>	<e5kkr6$14a$1@sea.gmane.org>	<447DE055.4040105@egenix.com><e5oa7b$kvl$1@sea.gmane.org>	<447FF966.5050807@egenix.com>	<e5p06h$tp1$1@sea.gmane.org>	<448141F6.7030806@v.loewis.de>	<e5rjkj$rph$1@sea.gmane.org>	<44859C12.8080306@egenix.com>	<4485C152.5050705@egenix.com>	<e671tn$rhg$1@sea.gmane.org>	<448740F2.4020900@egenix.com>	<e67g2b$o33$1@sea.gmane.org>	<4487E1DF.3030302@egenix.com>
	<44880067.70601@gmail.com> <448801A4.5070704@egenix.com>
Message-ID: <e690qq$hqs$1@sea.gmane.org>

M.-A. Lemburg wrote:

> <sigh> I put the headings for the timeit.py output on the
> wrong blocks. Thanks for pointing this out.

so how do you explain the Try/Except results, where timeit and pybench 
seem to agree?

</F>


From mal at egenix.com  Thu Jun  8 13:32:24 2006
From: mal at egenix.com (M.-A. Lemburg)
Date: Thu, 08 Jun 2006 13:32:24 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <e690qq$hqs$1@sea.gmane.org>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>	<17533.48558.364179.717911@montanaro.dyndns.org>	<447DC377.4000504@egenix.com>	<17533.51096.462552.451772@montanaro.dyndns.org>	<447DD153.8080202@egenix.com>	<e5kkr6$14a$1@sea.gmane.org>	<447DE055.4040105@egenix.com><e5oa7b$kvl$1@sea.gmane.org>	<447FF966.5050807@egenix.com>	<e5p06h$tp1$1@sea.gmane.org>	<448141F6.7030806@v.loewis.de>	<e5rjkj$rph$1@sea.gmane.org>	<44859C12.8080306@egenix.com>	<4485C152.5050705@egenix.com>	<e671tn$rhg$1@sea.gmane.org>	<448740F2.4020900@egenix.com>	<e67g2b$o33$1@sea.gmane.org>	<4487E1DF.3030302@egenix.com>	<44880067.70601@gmail.com>
	<448801A4.5070704@egenix.com> <e690qq$hqs$1@sea.gmane.org>
Message-ID: <44880AC8.5090501@egenix.com>

Fredrik Lundh wrote:
> M.-A. Lemburg wrote:
> 
>> <sigh> I put the headings for the timeit.py output on the
>> wrong blocks. Thanks for pointing this out.
> 
> so how do you explain the Try/Except results, where timeit and pybench 
> seem to agree?

The pybench results match those of timeit.py on my test machine
in both cases. I just mixed up the headers when I wrote the email.

Here's the console print-out:

Tools/pybench> ~/projects/Python/Installation/bin/python Calls.py
10 loops, best of 3: 21.8 msec per loop
Tools/pybench> ~/projects/Python/Installation/bin/python Exceptions.py
100 loops, best of 3: 15.4 msec per loop

Tools/pybench> ~/projects/Python/Installation/bin/python
Python 2.5a2 (trunk, Jun  8 2006, 01:51:06)
[GCC 3.3.4 (pre 3.3.5 20040809)] on linux2

Tools/pybench> python Calls.py
100 loops, best of 3: 18.2 msec per loop
Tools/pybench> python Exceptions.py
100 loops, best of 3: 17 msec per loop

Tools/pybench> python
Python 2.4.2 (#1, Oct  1 2005, 15:24:35)
[GCC 3.3.4 (pre 3.3.5 20040809)] on linux2

Calls.py is using timeit.py against the PythonFunctionCalls
test and Exceptions.py the TryRaiseExcept test.

Function calls are slower in 2.5, while try-except in 2.5 is
faster than 2.4.

I've attached the Calls.py file below.

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 08 2006)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________
2006-07-03: EuroPython 2006, CERN, Switzerland              24 days left

::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! ::::
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: Calls.py
Url: http://mail.python.org/pipermail/python-dev/attachments/20060608/dda00a33/attachment.pot 

From fredrik at pythonware.com  Thu Jun  8 13:54:24 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Thu, 08 Jun 2006 13:54:24 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <44880AC8.5090501@egenix.com>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>	<17533.48558.364179.717911@montanaro.dyndns.org>	<447DC377.4000504@egenix.com>	<17533.51096.462552.451772@montanaro.dyndns.org>	<447DD153.8080202@egenix.com>	<e5kkr6$14a$1@sea.gmane.org>	<447DE055.4040105@egenix.com><e5oa7b$kvl$1@sea.gmane.org>	<447FF966.5050807@egenix.com>	<e5p06h$tp1$1@sea.gmane.org>	<448141F6.7030806@v.loewis.de>	<e5rjkj$rph$1@sea.gmane.org>	<44859C12.8080306@egenix.com>	<4485C152.5050705@egenix.com>	<e671tn$rhg$1@sea.gmane.org>	<448740F2.4020900@egenix.com>	<e67g2b$o33$1@sea.gmane.org>	<4487E1DF.3030302@egenix.com>	<44880067.70601@gmail.com>	<448801A4.5070704@egenix.com>
	<e690qq$hqs$1@sea.gmane.org> <44880AC8.5090501@egenix.com>
Message-ID: <e6935g$p0n$2@sea.gmane.org>

M.-A. Lemburg wrote:

> The pybench results match those of timeit.py on my test machine
> in both cases. I just mixed up the headers when I wrote the email.

on a line by line basis ?

> Testnames                        minimum run-time        average  run-time
>                                  this    other   diff    this    other   diff
> -------------------------------------------------------------------------------
>            PythonFunctionCalls:   130ms   108ms  +21.3%   132ms   109ms  +20.9%
>                 TryRaiseExcept:   115ms   137ms  -16.2%   129ms   140ms   -7.6%

</F>


From mal at egenix.com  Thu Jun  8 14:15:35 2006
From: mal at egenix.com (M.-A. Lemburg)
Date: Thu, 08 Jun 2006 14:15:35 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <e6935g$p0n$2@sea.gmane.org>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>	<17533.48558.364179.717911@montanaro.dyndns.org>	<447DC377.4000504@egenix.com>	<17533.51096.462552.451772@montanaro.dyndns.org>	<447DD153.8080202@egenix.com>	<e5kkr6$14a$1@sea.gmane.org>	<447DE055.4040105@egenix.com><e5oa7b$kvl$1@sea.gmane.org>	<447FF966.5050807@egenix.com>	<e5p06h$tp1$1@sea.gmane.org>	<448141F6.7030806@v.loewis.de>	<e5rjkj$rph$1@sea.gmane.org>	<44859C12.8080306@egenix.com>	<4485C152.5050705@egenix.com>	<e671tn$rhg$1@sea.gmane.org>	<448740F2.4020900@egenix.com>	<e67g2b$o33$1@sea.gmane.org>	<4487E1DF.3030302@egenix.com>	<44880067.70601@gmail.com>	<448801A4.5070704@egenix.com>	<e690qq$hqs$1@sea.gmane.org>
	<44880AC8.5090501@egenix.com> <e6935g$p0n$2@sea.gmane.org>
Message-ID: <448814E7.1020904@egenix.com>

Fredrik Lundh wrote:
> M.-A. Lemburg wrote:
> 
>> The pybench results match those of timeit.py on my test machine
>> in both cases. I just mixed up the headers when I wrote the email.
> 
> on a line by line basis ?

No idea what you mean ? I posted the corrected version after Nick
told me about the apparent mistake.

>> Testnames                        minimum run-time        average  run-time
>>                                  this    other   diff    this    other   diff
>> -------------------------------------------------------------------------------
>>            PythonFunctionCalls:   130ms   108ms  +21.3%   132ms   109ms  +20.9%
>>                 TryRaiseExcept:   115ms   137ms  -16.2%   129ms   140ms   -7.6%

(this=Python 2.5; other=Python2.4)

timeit.py results (see my last email), converted to the pybench
output:

             PythonFunctionCalls:   21.8ms   18.2ms  +19.8%
                  TryRaiseExcept:   15.4ms   17.0ms   -9.4%

The timeit.py results for TryRaiseExcept on Python 2.4 vary
between 17.0 and 18.1 ms. With Python 2.5 this variation doesn't happen.

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 08 2006)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________
2006-07-03: EuroPython 2006, CERN, Switzerland              24 days left

::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! ::::

From amk at amk.ca  Thu Jun  8 14:24:37 2006
From: amk at amk.ca (A.M. Kuchling)
Date: Thu, 8 Jun 2006 08:24:37 -0400
Subject: [Python-Dev] Symbol page for Language Reference Manual Index
In-Reply-To: <e688ef$5ip$1@sea.gmane.org>
References: <e688ef$5ip$1@sea.gmane.org>
Message-ID: <20060608122437.GA27249@rogue.amk.ca>

On Thu, Jun 08, 2006 at 12:18:23AM -0400, Terry Reedy wrote:
> I am willing to do perhaps half the work needed to produce such in time for 
> the 2.5 release.  In particular, I am willing to write a plain text file 
> listing symbols (in ascii sort order) and section numbers, in an agreed-on 
> format, if the idea is approved and someone else agrees to convert section 
> numbers to page links and do the necessary latex/html formatting (with a 
> Python script?).

There's a pool of volunteers for LaTeX formatting, so someone will
certainly handle that step.

> [I just realized that some links need to be within-page rather than to the 
> top of the page and that I can cut and paste additional info if I find the 
> appropriate regular index entry, such as
> http://docs.python.org/ref/parenthesized.html#l2h-342 for 5.2.3.  But I 
> will work this sort of thing out with whoever formats.]

It's probably easiest to make a list of symbols and the corresponding
section names and optionally a snippet of the paragraph that should be
the target of the link. Section numbers like 5.2.3 and link anchors
like #l2h-342 are generated by LaTeX2HTML and not visible when you're
editing the source text.

--amk

From fredrik at pythonware.com  Thu Jun  8 14:35:47 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Thu, 08 Jun 2006 14:35:47 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <44880AC8.5090501@egenix.com>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>	<17533.48558.364179.717911@montanaro.dyndns.org>	<447DC377.4000504@egenix.com>	<17533.51096.462552.451772@montanaro.dyndns.org>	<447DD153.8080202@egenix.com>	<e5kkr6$14a$1@sea.gmane.org>	<447DE055.4040105@egenix.com><e5oa7b$kvl$1@sea.gmane.org>	<447FF966.5050807@egenix.com>	<e5p06h$tp1$1@sea.gmane.org>	<448141F6.7030806@v.loewis.de>	<e5rjkj$rph$1@sea.gmane.org>	<44859C12.8080306@egenix.com>	<4485C152.5050705@egenix.com>	<e671tn$rhg$1@sea.gmane.org>	<448740F2.4020900@egenix.com>	<e67g2b$o33$1@sea.gmane.org>	<4487E1DF.3030302@egenix.com>	<44880067.70601@gmail.com>	<448801A4.5070704@egenix.com>
	<e690qq$hqs$1@sea.gmane.org> <44880AC8.5090501@egenix.com>
Message-ID: <e695j6$37i$1@sea.gmane.org>

M.-A. Lemburg wrote:

> The pybench results match those of timeit.py on my test machine
> in both cases.

but they don't match the timeit results on similar machines, nor do they 
  reflect what was done at the sprint.

> Tools/pybench> ~/projects/Python/Installation/bin/python Calls.py
> 10 loops, best of 3: 21.8 msec per loop

10 loops ?

what happens if you run the actual test code (the stuff inside the for 
loop) inside timeit, instead of running your test loop inside timeit?
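
(Something along these lines, with the callables defined at module level
and only the calls being timed; the functions below are stand-ins, not
pybench's actual test body:)

    import timeit

    # Stand-in callables; pybench's real test defines its own set of
    # functions with various signatures.
    def f():
        pass

    def g(a, b, c):
        return a, b, c

    t = timeit.Timer("f(); f(); g(1, 2, 3)",
                     setup="from __main__ import f, g")
    print "%.2f usec per batch of calls" % (min(t.repeat(3, 100000)) / 100000 * 1e6)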

</F>


From fredrik at pythonware.com  Thu Jun  8 14:50:37 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Thu, 08 Jun 2006 14:50:37 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <4487E1DF.3030302@egenix.com>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>	<17533.48558.364179.717911@montanaro.dyndns.org>	<447DC377.4000504@egenix.com>	<17533.51096.462552.451772@montanaro.dyndns.org>	<447DD153.8080202@egenix.com>	<e5kkr6$14a$1@sea.gmane.org>	<447DE055.4040105@egenix.com><e5oa7b$kvl$1@sea.gmane.org>	<447FF966.5050807@egenix.com>	<e5p06h$tp1$1@sea.gmane.org>	<448141F6.7030806@v.loewis.de>	<e5rjkj$rph$1@sea.gmane.org>	<44859C12.8080306@egenix.com>	<4485C152.5050705@egenix.com>	<e671tn$rhg$1@sea.gmane.org>	<448740F2.4020900@egenix.com>
	<e67g2b$o33$1@sea.gmane.org> <4487E1DF.3030302@egenix.com>
Message-ID: <e696es$5hs$1@sea.gmane.org>

M.-A. Lemburg wrote:

> The results were produced by pybench 2.0 and use time.time
> on Linux, plus a different calibration strategy. As a result
> these timings are a lot more repeatable than with pybench 1.3
> and I've confirmed the timings using several runs to make sure.

can you check in 2.0 ?  (if it's not quite ready for public consumption, 
put it in the sandbox).

</F>


From engelbert.gruber at ssg.co.at  Thu Jun  8 14:52:07 2006
From: engelbert.gruber at ssg.co.at (engelbert.gruber at ssg.co.at)
Date: Thu, 8 Jun 2006 14:52:07 +0200 (CEST)
Subject: [Python-Dev] Symbol page for Language Reference Manual Index
In-Reply-To: <20060608122437.GA27249@rogue.amk.ca>
References: <e688ef$5ip$1@sea.gmane.org> <20060608122437.GA27249@rogue.amk.ca>
Message-ID: <Pine.LNX.4.64.0606081448500.7511@lx3.local>

On Thu, 8 Jun 2006, A.M. Kuchling wrote:

> On Thu, Jun 08, 2006 at 12:18:23AM -0400, Terry Reedy wrote:
>> I am willing to do perhaps half the work needed to produce such in time for
>> the 2.5 release.  In particular, I am willing to write a plain text file
>> listing symbols (in ascii sort order) and section numbers, in an agreed-on
>> format, if the idea is approved and someone else agrees to convert section
>> numbers to page links and do the necessary latex/html formatting (with a
>> Python script?).

> There's a pool of volunteers for LaTeX formatting, so someone will
> certainly handle that step.

would rst2docpy be of any help
(see http://docutils.sourceforge.net/sandbox/docpy-writer/)

although the text below doesn't sound like it could do it.

>> [I just realized that some links need to be within-page rather than to the
>> top of the page and that I can cut and paste additional info if I find the
>> appropriate regular index entry, such as
>> http://docs.python.org/ref/parenthesized.html#l2h-342 for 5.2.3.  But I
>> will work this sort of thing out with whoever formats.]
>
> It's probably easiest to make a list of symbols and the corresponding
> section names and optionally a snippet of the paragraph that should be
> the target of the link. Section numbers like 5.2.3 and link anchors
> like #l2h-342 are generated by LaTeX2HTML and not visible when you're
> editing the source text.


From mal at egenix.com  Thu Jun  8 14:54:36 2006
From: mal at egenix.com (M.-A. Lemburg)
Date: Thu, 08 Jun 2006 14:54:36 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <e695j6$37i$1@sea.gmane.org>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>	<17533.48558.364179.717911@montanaro.dyndns.org>	<447DC377.4000504@egenix.com>	<17533.51096.462552.451772@montanaro.dyndns.org>	<447DD153.8080202@egenix.com>	<e5kkr6$14a$1@sea.gmane.org>	<447DE055.4040105@egenix.com><e5oa7b$kvl$1@sea.gmane.org>	<447FF966.5050807@egenix.com>	<e5p06h$tp1$1@sea.gmane.org>	<448141F6.7030806@v.loewis.de>	<e5rjkj$rph$1@sea.gmane.org>	<44859C12.8080306@egenix.com>	<4485C152.5050705@egenix.com>	<e671tn$rhg$1@sea.gmane.org>	<448740F2.4020900@egenix.com>	<e67g2b$o33$1@sea.gmane.org>	<4487E1DF.3030302@egenix.com>	<44880067.70601@gmail.com>	<448801A4.5070704@egenix.com>	<e690qq$hqs$1@sea.gmane.org>
	<44880AC8.5090501@egenix.com> <e695j6$37i$1@sea.gmane.org>
Message-ID: <44881E0C.7090905@egenix.com>

Fredrik Lundh wrote:
> M.-A. Lemburg wrote:
> 
>> The pybench results match those of timeit.py on my test machine
>> in both cases.
> 
> but they don't match the timeit results on similar machines, nor do they 
> reflect what was done at the sprint.

Huh ? They do show the speedups you achieved at the sprint.

>> Tools/pybench> ~/projects/Python/Installation/bin/python Calls.py
>> 10 loops, best of 3: 21.8 msec per loop
> 
> 10 loops ?
> 
> what happens if you run the actual test code (the stuff inside the for 
> loop) inside timeit, instead of running your test loop inside timeit?

More or less the same results:

Python 2.4:

Tools/pybench> python Calls.py
100000 loops, best of 3: 18.9 usec per loop
Tools/pybench> python Calls.py
100000 loops, best of 3: 18.8 usec per loop
Tools/pybench> python Calls.py
100000 loops, best of 3: 18.7 usec per loop

Python 2.5 (trunk as-of 2006-06-08):

Tools/pybench> ~/projects/Python/Installation/bin/python Calls.py
10000 loops, best of 3: 22.9 usec per loop
Tools/pybench> ~/projects/Python/Installation/bin/python Calls.py
10000 loops, best of 3: 23.7 usec per loop
Tools/pybench> ~/projects/Python/Installation/bin/python Calls.py
10000 loops, best of 3: 23.4 usec per loop

I've attached the modified Calls.py below so that you can
run it as well.

All this on AMD64, Linux2.6, gcc3.3.

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 08 2006)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________

::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! ::::
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: Calls.py
Url: http://mail.python.org/pipermail/python-dev/attachments/20060608/734307c9/attachment.pot 

From fredrik at pythonware.com  Thu Jun  8 15:08:39 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Thu, 08 Jun 2006 15:08:39 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <44881E0C.7090905@egenix.com>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>	<17533.48558.364179.717911@montanaro.dyndns.org>	<447DC377.4000504@egenix.com>	<17533.51096.462552.451772@montanaro.dyndns.org>	<447DD153.8080202@egenix.com>	<e5kkr6$14a$1@sea.gmane.org>	<447DE055.4040105@egenix.com><e5oa7b$kvl$1@sea.gmane.org>	<447FF966.5050807@egenix.com>	<e5p06h$tp1$1@sea.gmane.org>	<448141F6.7030806@v.loewis.de>	<e5rjkj$rph$1@sea.gmane.org>	<44859C12.8080306@egenix.com>	<4485C152.5050705@egenix.com>	<e671tn$rhg$1@sea.gmane.org>	<448740F2.4020900@egenix.com>	<e67g2b$o33$1@sea.gmane.org>	<4487E1DF.3030302@egenix.com>	<44880067.70601@gmail.com>	<448801A4.5070704@egenix.com>	<e690qq$hqs$1@sea.gmane.org>	<44880AC8.5090501@egenix.com>
	<e695j6$37i$1@sea.gmane.org> <44881E0C.7090905@egenix.com>
Message-ID: <e697gn$am6$1@sea.gmane.org>

M.-A. Lemburg wrote:

> Huh ? They do show the speedups you achieved at the sprint.

the results you just posted appear to show a 20% slowdown for function 
calls, and a 10% speedup for exceptions.

both things were optimized at the sprint, and the improvements were 
confirmed on several machines.  on my machine, using timeit on the test 
bodies, I get noticeable speedups for both tests; from my earlier post:

function calls:

>      25 usec per loop for Python 2.4.3
>      22.5 usec per loop for Python 2.5 trunk

try/except:

>      15.1 usec per loop for Python 2.4.3
>      23.5 usec per loop for Python 2.5 alpha 2
>      11.6 usec per loop for Python 2.5 trunk

maybe the function call issue is an AMD64 thing?  or a compiler thing? 
do you see the same result on other hardware?

</F>


From thomas at python.org  Thu Jun  8 15:11:57 2006
From: thomas at python.org (Thomas Wouters)
Date: Thu, 8 Jun 2006 15:11:57 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <44881E0C.7090905@egenix.com>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>
	<448740F2.4020900@egenix.com> <e67g2b$o33$1@sea.gmane.org>
	<4487E1DF.3030302@egenix.com> <44880067.70601@gmail.com>
	<448801A4.5070704@egenix.com> <e690qq$hqs$1@sea.gmane.org>
	<44880AC8.5090501@egenix.com> <e695j6$37i$1@sea.gmane.org>
	<44881E0C.7090905@egenix.com>
Message-ID: <9e804ac0606080611r268129e0uc7bc36343a94f059@mail.gmail.com>

On 6/8/06, M.-A. Lemburg <mal at egenix.com> wrote:

> All this on AMD64, Linux2.6, gcc3.3.


FWIW, my AMD64, linux 2.6, gcc 4.0 machine reports 29.0-29.5 usec for 2.5,
30.0-31.0 for 2.4 and 30.5-31.5 for 2.3, using the code you attached. In
other words, 2.5 is definitely not slower here. At least, not if I use the
same compile options for 2.5 as for 2.4... ;-)

-- 
Thomas Wouters <thomas at python.org>

Hi! I'm a .signature virus! copy me into your .signature file to help me
spread!
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.python.org/pipermail/python-dev/attachments/20060608/f822a107/attachment.html 

From mal at egenix.com  Thu Jun  8 15:35:46 2006
From: mal at egenix.com (M.-A. Lemburg)
Date: Thu, 08 Jun 2006 15:35:46 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <9e804ac0606080611r268129e0uc7bc36343a94f059@mail.gmail.com>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>	
	<448740F2.4020900@egenix.com> <e67g2b$o33$1@sea.gmane.org>	
	<4487E1DF.3030302@egenix.com> <44880067.70601@gmail.com>	
	<448801A4.5070704@egenix.com> <e690qq$hqs$1@sea.gmane.org>	
	<44880AC8.5090501@egenix.com> <e695j6$37i$1@sea.gmane.org>	
	<44881E0C.7090905@egenix.com>
	<9e804ac0606080611r268129e0uc7bc36343a94f059@mail.gmail.com>
Message-ID: <448827B2.2080305@egenix.com>

Thomas Wouters wrote:
> On 6/8/06, M.-A. Lemburg <mal at egenix.com> wrote:
> 
>> All this on AMD64, Linux2.6, gcc3.3.
> 
> 
> FWIW, my AMD64, linux 2.6, gcc 4.0 machine reports 29.0-29.5 usec for 2.5,
> 30.0-31.0 for 2.4 and 30.5-31.5 for 2.3, using the code you attached. In
> other words, 2.5 is definately not slower here. At least, not if I use the
> same compile options for 2.5 as for 2.4... ;-)

I checked, both Python versions were compiled using these
options (and the same compiler):

# Compiler options
OPT=            -DNDEBUG -g -O3 -Wall -Wstrict-prototypes
BASECFLAGS=      -fno-strict-aliasing

Perhaps it's a new feature in gcc 4.0 that makes the slow-down I see
turn into a speedup :-)

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 08 2006)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________

::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! ::::

From brett at python.org  Thu Jun  8 15:52:03 2006
From: brett at python.org (Brett Cannon)
Date: Thu, 8 Jun 2006 06:52:03 -0700
Subject: [Python-Dev] Is "t#" argument format meant to be char buffer,
	or just read-only?
In-Reply-To: <e68n0o$fl4$1@sea.gmane.org>
References: <bbaeab100606071655u3cce5ff3ja950a40ad19065fb@mail.gmail.com>
	<e68n0o$fl4$1@sea.gmane.org>
Message-ID: <bbaeab100606080652i6e88990dh82b27c795d619ec5@mail.gmail.com>

On 6/8/06, Thomas Heller <theller at python.net> wrote:
>
> Brett Cannon wrote:
> > I fixed the crasher for ``int(buffer(array.array('c')))`` by making
> > buffer objects operate properly.  Problem is that by doing so I broke
> > the ctypes tests with a bunch of similar errors::
>
> You have not yet committed this fix, right?


Nope; I was waiting to hear back on all of this; no committing of code that
knowingly breaks tests and all.

> ======================================================================
> > ERROR: test_endian_double (ctypes.test.test_byteswap.Test)
> > ----------------------------------------------------------------------
> > Traceback (most recent call last):
> >   File "/code/python/trunk/Lib/ctypes/test/test_byteswap.py", line
> > 134, in test_endian_double
> >     self.failUnlessEqual(bin(struct.pack("d", math.pi)), bin(s))
> >   File "/code/python/trunk/Lib/ctypes/test/test_byteswap.py", line 7, in
> bin
> >     return hexlify(buffer(s)).upper()
> > TypeError: requested buffer type not available
> >
> > Turns out the test does the following::
> >
> >   binascii.hexlify(buffer(ctypes.c_double(math.pi)))
> >
> > This is a problem because binascii.hexlify() uses "t#" as its argument
> > format string to PyArg_ParseTuple() and that fails now with a
> > TypeError since ctypes.c_double (which subclasses ctypes._SimpleCData
> > which defines the buffer interface) does not have a char buffer
> > defined.
> >
> > Now this used to pass since buffer objects just used the read or write
> > buffer in place of the char buffer, regardless if the wrapped object
> > had a char buffer function defined.
> >
> > But in checking out what "t#" did, I found a slight ambiguity in the
> > docs.  The docs say "read-only character buffer" for the short
> > description, but "read-only buffer" for the longer description.  Which
> > is it?
>
> I am using binascii.hexlify(buffer(obj)) as a simple way to look at the
> bytes of
> the memory block.
>
> I think that hexlify should be able to use any buffer object that has
> a readable memory block, not only those with charbuffers.
>
> The docs say that the binascii methods are used to "convert between binary
> and various ASCII-encoded binary representations".



Perhaps s# should be used instead since the docs say a read buffer can be
used for that.

> Plus, Thomas, you might want to change _SimpleCData if you want it to
> > truly support char buffers.
>
> I did not implement that because the memory block contains binary data,
> not text.



OK.

-Brett
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.python.org/pipermail/python-dev/attachments/20060608/28987d78/attachment.htm 

From joe.gregorio at gmail.com  Wed Jun  7 20:56:27 2006
From: joe.gregorio at gmail.com (Joe Gregorio)
Date: Wed, 7 Jun 2006 14:56:27 -0400
Subject: [Python-Dev] [Web-SIG] wsgiref doc draft; reviews/patches wanted
In-Reply-To: <5.1.1.6.0.20060606184324.0200b360@mail.telecommunity.com>
References: <5.1.1.6.0.20060606184324.0200b360@mail.telecommunity.com>
Message-ID: <3f1451f50606071156h9e612e3y602918973349a61f@mail.gmail.com>

Phillip,

1. It's not really clear from the abstract 'what' this library
provides. You might want
   to consider moving the text from 1.1 up to the same level as the abstract.

2.  In section 1.1 you might want to consider dropping the sentence:
"Only authors
    of web servers and programming frameworks need to know every detail..."
    It doesn't offer any concrete information and just indirectly
     makes WSGI look complicated.

3. From the abstract:  "Having a standard interface makes it easy to use a
      WSGI-supporting application with a number of different web servers."

     is a little awkward; how about:

    "Having a standard interface makes it easy to use an application
    that supports WSGI with a number of different web servers."

4. I believe the order of submodules presented is important and think that
   they should be listed with 'handlers' and 'simple_server' first:

    wsgiref.handlers - server/gateway base classes
    wsgiref.simple_server - a simple WSGI HTTP server
    wsgiref.util - WSGI environment utilities
    wsgiref.headers - WSGI response header tools
    wsgiref.validate - WSGI conformance checker

5. You might consider moving 'headers' into 'util'. Of course, you could
    go all the way in simplifying and move 'validate' in there too.

    wsgiref.handlers - server/gateway base classes
    wsgiref.simple_server - a simple WSGI HTTP server
    wsgiref.util - WSGI environment utilities

Besides those nits it looks very good and will be a fine
addition to the core library.

   -joe


On 6/6/06, Phillip J. Eby <pje at telecommunity.com> wrote:
> I've finished my draft for the wsgiref documentation (including stuff I
> swiped from AMK's draft; thanks AMK!), and am looking for comments before I
> add it to the stdlib documentation.
>
> Source: http://svn.eby-sarna.com/svnroot/wsgiref/docs
> PDF:    http://peak.telecommunity.com/wsgiref.pdf
> HTML:   http://peak.telecommunity.com/wsgiref_docs/
>
> My current plan is to make a hopefully-final release of the standalone
> version of wsgiref on PyPI, then clone that version for inclusion in the
> stdlib.
>
> The latest version of wsgiref in the eby-sarna SVN includes a new
> ``make_server()`` convenience function (addressing Titus' concerns about
> the constructor signatures while retaining backward compatibility) and it
> adds a ``wsgiref.validate`` module based on paste.lint.
>
> In addition to those two new features, tests were added for the new
> validate module and for WSGIServer.  The test suite and directory layout of
> the package were also simplified and consolidated to make merging to the
> stdlib easier.
>
> Feedback welcomed.
>
> _______________________________________________
> Web-SIG mailing list
> Web-SIG at python.org
> Web SIG: http://www.python.org/sigs/web-sig
> Unsubscribe: http://mail.python.org/mailman/options/web-sig/joe.gregorio%40gmail.com
>
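
(For readers following along, a minimal usage sketch of the make_server()
convenience function mentioned above; the application shown is just an
illustration:)

    from wsgiref.simple_server import make_server

    def app(environ, start_response):
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return ['Hello from a WSGI application\n']

    httpd = make_server('', 8000, app)
    print "Serving on port 8000..."
    httpd.handle_request()    # serve a single request, then exit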


-- 
Joe Gregorio        http://bitworking.org

From thomas at python.org  Thu Jun  8 16:45:50 2006
From: thomas at python.org (Thomas Wouters)
Date: Thu, 8 Jun 2006 16:45:50 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <448827B2.2080305@egenix.com>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>
	<4487E1DF.3030302@egenix.com> <44880067.70601@gmail.com>
	<448801A4.5070704@egenix.com> <e690qq$hqs$1@sea.gmane.org>
	<44880AC8.5090501@egenix.com> <e695j6$37i$1@sea.gmane.org>
	<44881E0C.7090905@egenix.com>
	<9e804ac0606080611r268129e0uc7bc36343a94f059@mail.gmail.com>
	<448827B2.2080305@egenix.com>
Message-ID: <9e804ac0606080745o31a01777j106ac35b8d5d1a48@mail.gmail.com>

On 6/8/06, M.-A. Lemburg <mal at egenix.com> wrote:

> Perhaps it's a new feature in gcc 4.0 that makes the slow-down I see
> turn into a speedup :-)


It seems so. I tested with gcc 2.95, 3.3 and 4.0 on FreeBSD 4.10 (only
machine I had available with those gcc versions) and both 2.95 and 4.0 show
a 10-20% speedup of your testcase in 2.5 compared to 2.4. 3.3 showed a 10%
slowdown or so.

Test-more'ly y'rs,

-- 
Thomas Wouters <thomas at python.org>

Hi! I'm a .signature virus! copy me into your .signature file to help me
spread!
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.python.org/pipermail/python-dev/attachments/20060608/154e149b/attachment.htm 

From g.brandl at gmx.net  Thu Jun  8 16:53:43 2006
From: g.brandl at gmx.net (Georg Brandl)
Date: Thu, 08 Jun 2006 16:53:43 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <9e804ac0606080745o31a01777j106ac35b8d5d1a48@mail.gmail.com>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>	<4487E1DF.3030302@egenix.com>
	<44880067.70601@gmail.com>	<448801A4.5070704@egenix.com>
	<e690qq$hqs$1@sea.gmane.org>	<44880AC8.5090501@egenix.com>
	<e695j6$37i$1@sea.gmane.org>	<44881E0C.7090905@egenix.com>	<9e804ac0606080611r268129e0uc7bc36343a94f059@mail.gmail.com>	<448827B2.2080305@egenix.com>
	<9e804ac0606080745o31a01777j106ac35b8d5d1a48@mail.gmail.com>
Message-ID: <e69dln$2mj$1@sea.gmane.org>

Thomas Wouters wrote:
> 
> 
> On 6/8/06, *M.-A. Lemburg* <mal at egenix.com <mailto:mal at egenix.com>> wrote:
> 
>     Perhaps it's a new feature in gcc 4.0 that makes the slow-down I see
>     turn into a speedup :-)
> 
> 
> It seems so. I tested with gcc 2.95, 3.3 and 4.0 on FreeBSD 4.10 (only 
> machine I had available with those gcc versions) and both 2.95 and 4.0 
> show a 10-20% speedup of your testcase in 2.5 compared to 2.4. 3.3 
> showed a 10% slowdown or so.

Does 4.0 show a general slowdown on your test machines? I saw a drop
of average Pystones from 44000 to 40000 and from 42000 to 39000 on
my boxes switching from GCC 3.4.6 to 4.1.1.
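
(For reference, a quick way to reproduce such Pystone numbers; the
benchmark ships in Python's test package:)

    from test import pystone

    # pystones() returns (benchmark time in seconds, pystones per second)
    benchtime, stones = pystone.pystones()
    print "Pystone(%s): %.0f pystones/second" % (pystone.__version__, stones)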

Georg


From mal at egenix.com  Thu Jun  8 17:08:03 2006
From: mal at egenix.com (M.-A. Lemburg)
Date: Thu, 08 Jun 2006 17:08:03 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <9e804ac0606080745o31a01777j106ac35b8d5d1a48@mail.gmail.com>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>	
	<4487E1DF.3030302@egenix.com> <44880067.70601@gmail.com>	
	<448801A4.5070704@egenix.com> <e690qq$hqs$1@sea.gmane.org>	
	<44880AC8.5090501@egenix.com> <e695j6$37i$1@sea.gmane.org>	
	<44881E0C.7090905@egenix.com>	
	<9e804ac0606080611r268129e0uc7bc36343a94f059@mail.gmail.com>	
	<448827B2.2080305@egenix.com>
	<9e804ac0606080745o31a01777j106ac35b8d5d1a48@mail.gmail.com>
Message-ID: <44883D53.4020206@egenix.com>

Thomas Wouters wrote:
> On 6/8/06, M.-A. Lemburg <mal at egenix.com> wrote:
> 
>> Perhaps it's a new feature in gcc 4.0 that makes the slow-down I see
>> turn into a speedup :-)
> 
> 
> It seems so. I tested with gcc 2.95, 3.3 and 4.0 on FreeBSD 4.10 (only
> machine I had available with those gcc versions) and both 2.95 and 4.0 show
> a 10-20% speedup of your testcase in 2.5 compared to 2.4. 3.3 showed a 10%
> slowdown or so.
> 
> Test-more'ly y'rs,

Looks like it's time to upgrade to SuSE 10.1, then :-)

Thanks for checking.

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 08 2006)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________

::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! ::::

From thomas at python.org  Thu Jun  8 17:55:13 2006
From: thomas at python.org (Thomas Wouters)
Date: Thu, 8 Jun 2006 17:55:13 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <e69dln$2mj$1@sea.gmane.org>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>
	<448801A4.5070704@egenix.com> <e690qq$hqs$1@sea.gmane.org>
	<44880AC8.5090501@egenix.com> <e695j6$37i$1@sea.gmane.org>
	<44881E0C.7090905@egenix.com>
	<9e804ac0606080611r268129e0uc7bc36343a94f059@mail.gmail.com>
	<448827B2.2080305@egenix.com>
	<9e804ac0606080745o31a01777j106ac35b8d5d1a48@mail.gmail.com>
	<e69dln$2mj$1@sea.gmane.org>
Message-ID: <9e804ac0606080855i13077055q1a400a3ef515422c@mail.gmail.com>

On 6/8/06, Georg Brandl <g.brandl at gmx.net> wrote:

> Does 4.0 show a general slowdown on your test machines? I saw a drop
> of average Pystones from 44000 to 40000 and from 42000 to 39000 on
> my boxes switching from GCC 3.4.6 to 4.1.1.


Yep, looks like it does. Don't have time to run more extensive tests, though
(I'd have to make sure the machine is unloaded for a much longer period of
time, and I don't have the time :)

-- 
Thomas Wouters <thomas at python.org>

Hi! I'm a .signature virus! copy me into your .signature file to help me
spread!
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.python.org/pipermail/python-dev/attachments/20060608/69061b28/attachment.html 

From chris at atlee.ca  Thu Jun  8 19:29:25 2006
From: chris at atlee.ca (Chris AtLee)
Date: Thu, 8 Jun 2006 13:29:25 -0400
Subject: [Python-Dev] zlib module doesn't build - inflateCopy() not found
In-Reply-To: <ca471dc20605210834o435d0bbcybdc8c469a63df4b9@mail.gmail.com>
References: <ca471dc20605191431qaf6de33l8dac8342b2c14dbe@mail.gmail.com>
	<446EB648.90804@v.loewis.de> <e4mh9s$fgq$1@sea.gmane.org>
	<ca471dc20605200747i759c28d7v5cb0793093ada2ba@mail.gmail.com>
	<44703A88.3040800@v.loewis.de>
	<ca471dc20605210834o435d0bbcybdc8c469a63df4b9@mail.gmail.com>
Message-ID: <7790b6530606081029g29096evaa1640eff479b151@mail.gmail.com>

On 5/21/06, Guido van Rossum <guido at python.org> wrote:
> Then options 2 and 3 are both fine.
>
> Not compiling at all is *not*, so if nobody has time to implement 2 or
> 3, we'll have to do 4.
>
> --Guido

Is this thread still alive?

I've posted patch #1503046 to sourceforge which implements option #2
by checking for inflateCopy() in the system zlib during the configure
step.

I'm not sure whether this works when using the zlib library included
with Python, though.
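
For what it's worth, a quick runtime check of whether the feature ended
up enabled (a sketch only -- I'm assuming the new copy() methods on
(de)compression objects are exactly what the inflateCopy() check gates)
would be:

    import zlib

    # sketch: if configure found inflateCopy() in the system zlib, the
    # 2.5 module should expose copy() on decompression objects
    d = zlib.decompressobj()
    if hasattr(d, "copy"):
        print "zlib module built with inflateCopy() support"
    else:
        print "no copy(); system zlib too old or the check disabled it"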

Cheers,
Chris

From g.brandl at gmx.net  Thu Jun  8 19:51:57 2006
From: g.brandl at gmx.net (Georg Brandl)
Date: Thu, 08 Jun 2006 19:51:57 +0200
Subject: [Python-Dev] zlib module doesn't build - inflateCopy() not found
In-Reply-To: <7790b6530606081029g29096evaa1640eff479b151@mail.gmail.com>
References: <ca471dc20605191431qaf6de33l8dac8342b2c14dbe@mail.gmail.com>	<446EB648.90804@v.loewis.de>
	<e4mh9s$fgq$1@sea.gmane.org>	<ca471dc20605200747i759c28d7v5cb0793093ada2ba@mail.gmail.com>	<44703A88.3040800@v.loewis.de>	<ca471dc20605210834o435d0bbcybdc8c469a63df4b9@mail.gmail.com>
	<7790b6530606081029g29096evaa1640eff479b151@mail.gmail.com>
Message-ID: <e69o3t$djn$1@sea.gmane.org>

Chris AtLee wrote:
> On 5/21/06, Guido van Rossum <guido at python.org> wrote:
>> Then options 2 and 3 are both fine.
>>
>> Not compiling at all is *not*, so if nobody has time to implement 2 or
>> 3, we'll have to do 4.
>>
>> --Guido
> 
> Is this thread still alive?

At least I still have this on my todo list.

> I've posted patch #1503046 to sourceforge which implements option #2
> by checking for inflateCopy() in the system zlib during the configure
> step.

Thank you! As this involves autotools magic, I hope Martin (who assigned
the patch to himself) can look over it instead of me.

Georg


From jjl at pobox.com  Thu Jun  8 20:45:22 2006
From: jjl at pobox.com (John J Lee)
Date: Thu, 8 Jun 2006 18:45:22 +0000 (UTC)
Subject: [Python-Dev] Some more comments re new uriparse module,
 patch 1462525
In-Reply-To: <4482F5C9.20407@gmail.com>
References: <Pine.LNX.4.64.0606022059340.8454@localhost>
	<20060604010938.DDA4E179C66@place.org> <4482F5C9.20407@gmail.com>
Message-ID: <Pine.LNX.4.64.0606081842090.8417@localhost>

On Mon, 5 Jun 2006, Nick Coghlan wrote:
[...]
> I started to write a reply to this with some comments on the API (including 
> the internal subclassing API), but ended up with so many different 
> suggestions it was easier to just post a variant of the module. I called it 
> "urischemes" and posted it on SF:
>
> http://python.org/sf/1500504
[...]

At first glance, looks good.  I hope to review it properly later.

One point: I don't think there should be any mention of "URL" in the 
module -- we should use "URI" everywhere (see my comments on Paul's 
original version for a bit more on this).


John


From skip at pobox.com  Thu Jun  8 20:52:09 2006
From: skip at pobox.com (skip at pobox.com)
Date: Thu, 8 Jun 2006 13:52:09 -0500
Subject: [Python-Dev] Subversion repository question - back up to older
	versions
Message-ID: <17544.29145.93728.612908@montanaro.dyndns.org>

Maybe this belongs in the dev faq.  I didn't see anything there or in the
Subversion book.

I have three Python branches, trunk, release23-maint and release24-maint.
In the (for example) release24-maint, what svn up command would I use to get
to the 2.4.2 version?  In cvs I'd use something like 'cvs up -r r242'.  How
do I get a list of tags?  In cvs I'd do something like 'cvs log | less'.

Thx,

Skip

From facundobatista at gmail.com  Thu Jun  8 21:11:06 2006
From: facundobatista at gmail.com (Facundo Batista)
Date: Thu, 8 Jun 2006 16:11:06 -0300
Subject: [Python-Dev] tarfile and unicode filenames in windows
Message-ID: <e04bdf310606081211n186cc17dx714e1aacbc6096ae@mail.gmail.com>

I'm working on Windows 2K SP4. I have a directory with non-ASCII file
names in it (e.g. "camión.txt").

I'm trying to tar.bzip it:

    nomdir = sys.argv[1]
    tar = tarfile.open("prueba.tar.bz2", "w:bz2")
    tar.add(nomdir)
    tar.close()

This works OK, even considering that the "ó" in the filename is not
7-bit ASCII.

But then I put a file in that directory that has a stranger name
(one with an "o" with a dash above it): My?-?.txt

Here, tarfile can't find the file. This is the same limitation as with
listdir(), where I have to pass the directory name as unicode for the
system to be able to find it. So:

    nomdir = unicode(sys.argv[1])
    tar = tarfile.open("prueba.tar.bz2", "w:bz2")
    tar.add(nomdir)
    tar.close()

The problem is that when tarfile finds that name, it crashes:

Traceback (most recent call last):
  File "comprim.py", line 8, in ?
    tar.add(nomdir)
  File "C:\python24\lib\tarfile.py", line 1239, in add
    self.add(os.path.join(name, f), os.path.join(arcname, f))
  File "C:\python24\lib\tarfile.py", line 1232, in add
    self.addfile(tarinfo, f)
  File "C:\python24\lib\tarfile.py", line 1297, in addfile
    self.fileobj.write(tarinfo.tobuf())
UnicodeEncodeError: 'ascii' codec can't encode character u'\xf3' in
position 8: ordinal not in range(128)

This is because tarinfo.tobuf() creates a unicode object (because it
has the filename in it), and file.write() needs a plain byte string.

Is this a known problem? Shall I file a bug? I couldn't find any report
about this, and Google didn't help here.
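
In the meantime I can work around it with something like this (just a
sketch; the utf-8 arcname encoding is my own guess, since there doesn't
seem to be a standard encoding for tar member names):

    import os
    import sys
    import tarfile

    def add_tree(tar, top, encoding="utf-8"):
        # walk with unicode paths so Windows can find the files, but hand
        # tarfile byte-string arcnames so tarinfo.tobuf() never sees unicode
        for root, dirs, files in os.walk(top):
            for name in files:
                full = os.path.join(root, name)
                tar.add(full, arcname=full.encode(encoding))

    nomdir = unicode(sys.argv[1])
    tar = tarfile.open("prueba.tar.bz2", "w:bz2")
    add_tree(tar, nomdir)
    tar.close()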

Thank you very much!

-- 
.    Facundo

Blog: http://www.taniquetil.com.ar/plog/
PyAr: http://www.python.org/ar/

From phd at mail2.phd.pp.ru  Thu Jun  8 21:20:23 2006
From: phd at mail2.phd.pp.ru (Oleg Broytmann)
Date: Thu, 8 Jun 2006 23:20:23 +0400
Subject: [Python-Dev] Subversion repository question - back up to older
	versions
In-Reply-To: <17544.29145.93728.612908@montanaro.dyndns.org>
References: <17544.29145.93728.612908@montanaro.dyndns.org>
Message-ID: <20060608192023.GA10334@phd.pp.ru>

On Thu, Jun 08, 2006 at 01:52:09PM -0500, skip at pobox.com wrote:
> Maybe this belongs in the dev faq.  I didn't see anything there or in the
> Subversion book.

http://svnbook.red-bean.com/nightly/en/svn.ref.svn.c.checkout.html
http://svnbook.red-bean.com/nightly/en/svn.ref.svn.c.update.html

> I have three Python branches, trunk, release23-maint and release24-maint.
> In the (for example) release24-maint, what svn up command would I use to get
> to the 2.4.2 version?  In cvs I'd use something like 'cvs up -r r242'.  How

   You have to know the branch URL and the revision number. Then do
svn co -r 242 http://svn.example.org/svnroot/branch/
   or
svn up -r 242

> do I get a list of tags?  In cvs I'd do something like 'cvs log | less'.

   Tags and branches in Subversion are just directories in the virtual SVN
filesystem. So you can use svn ls,
http://svnbook.red-bean.com/nightly/en/svn.ref.svn.c.list.html

   or view the repository via ViewVC. These are python tags:
http://svn.python.org/view/python/tags/ . Here is the tag for 2.4.2:
http://svn.python.org/view/python/tags/r242/

Oleg.
-- 
     Oleg Broytmann            http://phd.pp.ru/            phd at phd.pp.ru
           Programmers don't die, they just GOSUB without RETURN.

From tim.peters at gmail.com  Thu Jun  8 21:26:40 2006
From: tim.peters at gmail.com (Tim Peters)
Date: Thu, 8 Jun 2006 15:26:40 -0400
Subject: [Python-Dev] Subversion repository question - back up to older
	versions
In-Reply-To: <17544.29145.93728.612908@montanaro.dyndns.org>
References: <17544.29145.93728.612908@montanaro.dyndns.org>
Message-ID: <1f7befae0606081226r43c88376lf09080ef8810c58d@mail.gmail.com>

[skip at pobox.com]
> Maybe this belongs in the dev faq.  I didn't see anything there or in the
> Subversion book.
>
> I have three Python branches, trunk, release23-maint and release24-maint.
> In the (for example) release24-maint, what svn up command would I use to get
> to the 2.4.2 version?   In cvs I'd use something like 'cvs up -r r242'.  How
> do I get a list of tags?  In cvs I'd do something like 'cvs log | less'.

Second question first:

    svn list svn+ssh://pythondev at svn.python.org/python/tags

s/tags/branches/ if you want a list of branches.  Note that there's
nothing special about tags or branches in SVN -- they're just
directories, with agreed-to-by-project-convention names.  That's why
you don't find any commands that treat tags or branches as distinct
concepts.

First question:

   cd to the root of your release24-maint checkout, then
   svn switch svn+ssh://pythondev at svn.python.org/python/tags/r242

From skip at pobox.com  Thu Jun  8 21:45:08 2006
From: skip at pobox.com (skip at pobox.com)
Date: Thu, 8 Jun 2006 14:45:08 -0500
Subject: [Python-Dev] Subversion repository question - back up to older
 versions
In-Reply-To: <20060608192023.GA10334@phd.pp.ru>
References: <17544.29145.93728.612908@montanaro.dyndns.org>
	<20060608192023.GA10334@phd.pp.ru>
Message-ID: <17544.32324.116605.3555@montanaro.dyndns.org>

Oleg,

Thanks for the help.  With the tags url I was able to identify the revision
I needed to update to.

Skip

From skip at pobox.com  Thu Jun  8 21:55:29 2006
From: skip at pobox.com (skip at pobox.com)
Date: Thu, 8 Jun 2006 14:55:29 -0500
Subject: [Python-Dev] Subversion repository question - back up to older
 versions
In-Reply-To: <1f7befae0606081226r43c88376lf09080ef8810c58d@mail.gmail.com>
References: <17544.29145.93728.612908@montanaro.dyndns.org>
	<1f7befae0606081226r43c88376lf09080ef8810c58d@mail.gmail.com>
Message-ID: <17544.32945.408351.767417@montanaro.dyndns.org>


    >> I have three Python branches, trunk, release23-maint and
    >> release24-maint.  In the (for example) release24-maint, what svn up
    >> command would I use to get to the 2.4.2 version?

    Tim> First question:

    Tim>    cd to the root of your release24-maint checkout, then
    Tim>    svn switch svn+ssh://pythondev at svn.python.org/python/tags/r242

How is that different than noting that r242 corresponds to revision 39619
and executing:

    svn up -r 39619

?

Thx,

Skip

From phd at mail2.phd.pp.ru  Thu Jun  8 22:03:54 2006
From: phd at mail2.phd.pp.ru (Oleg Broytmann)
Date: Fri, 9 Jun 2006 00:03:54 +0400
Subject: [Python-Dev] Subversion repository question - back up to older
	versions
In-Reply-To: <17544.32945.408351.767417@montanaro.dyndns.org>
References: <17544.29145.93728.612908@montanaro.dyndns.org>
	<1f7befae0606081226r43c88376lf09080ef8810c58d@mail.gmail.com>
	<17544.32945.408351.767417@montanaro.dyndns.org>
Message-ID: <20060608200354.GA14265@phd.pp.ru>

On Thu, Jun 08, 2006 at 02:55:29PM -0500, skip at pobox.com wrote:
>     Tim>    svn switch svn+ssh://pythondev at svn.python.org/python/tags/r242
> 
> How is that different than noting that r242 corresponds to revision 39619
> and executing:
> 
>     svn up -r 39619

   svn up updates the working directory without changing the base URL.
Commits will go to the trunk (or whatever tag/branch it was on before
updating). svn switch updates *and* switches the base URL - commits will
go to the tag/branch you are switching to.
   You can also merge some changes from another tag/branch:
svn merge -r 2112:2121 http://svn.example.org/svnroot/anotherbranch/
   (merge the diff between revisions 2121 and 2112 in anotherbranch into
the current working directory).

Oleg.
-- 
     Oleg Broytmann            http://phd.pp.ru/            phd at phd.pp.ru
           Programmers don't die, they just GOSUB without RETURN.

From phd at mail2.phd.pp.ru  Thu Jun  8 22:10:56 2006
From: phd at mail2.phd.pp.ru (Oleg Broytmann)
Date: Fri, 9 Jun 2006 00:10:56 +0400
Subject: [Python-Dev] Subversion repository question - back up to older
	versions
In-Reply-To: <20060608200354.GA14265@phd.pp.ru>
References: <17544.29145.93728.612908@montanaro.dyndns.org>
	<1f7befae0606081226r43c88376lf09080ef8810c58d@mail.gmail.com>
	<17544.32945.408351.767417@montanaro.dyndns.org>
	<20060608200354.GA14265@phd.pp.ru>
Message-ID: <20060608201055.GA14732@phd.pp.ru>

On Fri, Jun 09, 2006 at 12:03:54AM +0400, Oleg Broytmann wrote:
> svn switch updates *and* switch the base URL - commits will go to the
> tag/branch you are switching to.

   Another point of view: svn switch is a kind of optimized svn checkout.
svn co starts afresh and needs to transfer the entire tree; svn switch
already has a working tree, so it only needs to transfer the diff. Less
traffic...

Oleg.
-- 
     Oleg Broytmann            http://phd.pp.ru/            phd at phd.pp.ru
           Programmers don't die, they just GOSUB without RETURN.

From tjreedy at udel.edu  Thu Jun  8 23:17:30 2006
From: tjreedy at udel.edu (Terry Reedy)
Date: Thu, 8 Jun 2006 17:17:30 -0400
Subject: [Python-Dev] Symbol page for Language Reference Manual Index
References: <e688ef$5ip$1@sea.gmane.org> <20060608122437.GA27249@rogue.amk.ca>
Message-ID: <e6a45a$s22$1@sea.gmane.org>


"A.M. Kuchling" <amk at amk.ca> wrote in message 
news:20060608122437.GA27249 at rogue.amk.ca...
> On Thu, Jun 08, 2006 at 12:18:23AM -0400, Terry Reedy wrote:
>> [I just realized that some links need to be within-page rather than to 
>> the
>> top of the page and that I can cut and paste additional info if I find 
>> the
>> appropriate regular index entry, such as
>> http://docs.python.org/ref/parenthesized.html#l2h-342 for 5.2.3.  But I
>> will work this sort of thing out with whoever formats.]
>
> It's probably easiest to make a list of symbols and the corresponding
> section names and optionally a snippet of the paragraph that should be
> the target of the link. Section numbers like 5.2.3 and link anchors
> like #l2h-342 are generated by LaTeX2HTML and not visible when you're
> editing the source text.

I looked at the ref latex source and realized that the indexes have no 
source themselves but are autogenerated from index markup like in the 
following:

\section{Function definitions\label{function}}
\indexii{function}{definition}
\stindex{def}

A function definition defines a user-defined function object (see
section~\ref{types}):
\obindex{user-defined function}
\obindex{function}

Section numbers will allow the lines to be sorted by section insertion 
point so the formatter does not have to jump around within and between the 
multiple latex source files.  So I need to put all the info for each entry 
on one line.  I'll let the formatter worry about which index tag to use or 
whether a new \syindex tag should be added, and how to make \index{{}} 
work.

Terry Jan Reedy




From martin at v.loewis.de  Thu Jun  8 23:34:40 2006
From: martin at v.loewis.de (Martin v. Löwis)
Date: Thu, 08 Jun 2006 23:34:40 +0200
Subject: [Python-Dev] tarfile and unicode filenames in windows
In-Reply-To: <e04bdf310606081211n186cc17dx714e1aacbc6096ae@mail.gmail.com>
References: <e04bdf310606081211n186cc17dx714e1aacbc6096ae@mail.gmail.com>
Message-ID: <448897F0.6040107@v.loewis.de>

Facundo Batista wrote:
> This is because tarinfo.tobuf() creates a unicode object (because it
> has the filename on it), and file.write() must have a standard string.
> 
> This is a known problem? Shall I post a bug? Couldn't find any
> regarding this, and google didn't help here.

You could file a bug report, but I doubt that would help much. To get this
to work, somebody would have to research how precisely non-ASCII
file names are supposed to be encoded in a tarfile. I know Unix 2003
specifies something to this effect (in pax(1)), but somebody would have
to understand and implement that. As this is all fairly non-trivial,
and must also consider a lot of prior art, it is unlikely that anything
will be done about it in the next few years.

Regards,
Martin

From tim.peters at gmail.com  Fri Jun  9 04:05:49 2006
From: tim.peters at gmail.com (Tim Peters)
Date: Thu, 8 Jun 2006 22:05:49 -0400
Subject: [Python-Dev] [Python-checkins] buildbot warnings in hppa Ubuntu
	dapper trunk
In-Reply-To: <1f7befae0606081820p27c1d827n95a4a59f365cdfeb@mail.gmail.com>
References: <20060608202043.D5CEC1E4004@bag.python.org>
	<bbaeab100606081647j52337dc2na2f005c42568005e@mail.gmail.com>
	<1f7befae0606081820p27c1d827n95a4a59f365cdfeb@mail.gmail.com>
Message-ID: <1f7befae0606081905n32af04beqf37236a2bffe4034@mail.gmail.com>

FYI, here's the minimal set of failing tests:

$ python_d ../Lib/test/regrtest.py test_file test_optparse
test_file
test_optparse
test test_optparse failed -- Traceback (most recent call last):
  File "C:\Code\python\lib\test\test_optparse.py", line 1042, in
test_filetype_noexist
    test_support.TESTFN)
  File "C:\Code\python\lib\test\test_optparse.py", line 158, in assertParseFail
    self.assertFalse("expected parse failure")
AssertionError

1 test OK.
1 test failed:
    test_optparse
[23476 refs]

That also, using -w and -f, reproduces the bizarre HPPA behavior when
test_optparse is rerun in verbose mode (test_filetype_noexist passes
then, but test_version fails).

I have no idea why any of this is true, but there's good and bad news:
 reverting rev 46757 does _not_ make the problem go away.  So you're
off the hook, but we don't know who to crucify in your place ;-)

As to why the failure only showed up recently, I'm not sure, but
test_file must run before test_optparse, and it looks like the problem
goes away if "too many"(!) other tests intervene.  The Win2K buildbot
is unique in that test_file has been followed very soon by
test_optparse two builds in a row.

From brett at python.org  Fri Jun  9 04:30:55 2006
From: brett at python.org (Brett Cannon)
Date: Thu, 8 Jun 2006 19:30:55 -0700
Subject: [Python-Dev] [Python-checkins] buildbot warnings in hppa Ubuntu
	dapper trunk
In-Reply-To: <1f7befae0606081905n32af04beqf37236a2bffe4034@mail.gmail.com>
References: <20060608202043.D5CEC1E4004@bag.python.org>
	<bbaeab100606081647j52337dc2na2f005c42568005e@mail.gmail.com>
	<1f7befae0606081820p27c1d827n95a4a59f365cdfeb@mail.gmail.com>
	<1f7befae0606081905n32af04beqf37236a2bffe4034@mail.gmail.com>
Message-ID: <bbaeab100606081930u7de74c9ct4113f828502d264e@mail.gmail.com>

On 6/8/06, Tim Peters <tim.peters at gmail.com> wrote:
>
> FYI, here's the minimal set of failing tests:
>
> $ python_d ../Lib/test/regrtest.py test_file test_optparse
> test_file
> test_optparse
> test test_optparse failed -- Traceback (most recent call last):
>   File "C:\Code\python\lib\test\test_optparse.py", line 1042, in
> test_filetype_noexist
>     test_support.TESTFN)
>   File "C:\Code\python\lib\test\test_optparse.py", line 158, in
> assertParseFail
>     self.assertFalse("expected parse failure")
> AssertionError


Different type of failure as well; if you look at the original failure it
has to do with the help output having an extra newline.


> 1 test OK.
> 1 test failed:
>     test_optparse
> [23476 refs]
>
> That also, using -w and -f, reproduces the bizarre HPPA behavior when
> test_optparse is rerun in verbose mode (test_filetype_noexist passes
> then, but test_version fails).
>
> I have no idea why any of this is true, but there's good and bad news:
> reverting rev 46757 does _not_ make the problem go away.


Actually, that run had two checkins; there was also 46755.  But when I ``svn
update -r46754`` it still fails on my OS X laptop.  So still ain't my
fault.  =)


>   So you're
> off the hook, but we don't know who to crucify in your place ;-)


Oh, we live in America; we have a list to pull from.  =)

> As to why the failure only showed up recently, I'm not sure, but
> test_file must run before test_optparse, and it looks like the problem
> goes away if "too many"(!) other tests intervene.  The Win2K buildbot
> is unique in that test_file has been followed very soon by
> test_optparse two builds in a row.
>

We don't have any mechanism in place to record when we find tests failing in
a row to always run them in that order until we fix it, do we?  Nor do we
have a script to just continually check out older revisions in svn, compile,
and test until the tests do pass, huh?

-Brett
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.python.org/pipermail/python-dev/attachments/20060608/57f96165/attachment.htm 

From tim.peters at gmail.com  Fri Jun  9 04:56:27 2006
From: tim.peters at gmail.com (Tim Peters)
Date: Thu, 8 Jun 2006 22:56:27 -0400
Subject: [Python-Dev] [Python-checkins] buildbot warnings in hppa Ubuntu
	dapper trunk
In-Reply-To: <bbaeab100606081930u7de74c9ct4113f828502d264e@mail.gmail.com>
References: <20060608202043.D5CEC1E4004@bag.python.org>
	<bbaeab100606081647j52337dc2na2f005c42568005e@mail.gmail.com>
	<1f7befae0606081820p27c1d827n95a4a59f365cdfeb@mail.gmail.com>
	<1f7befae0606081905n32af04beqf37236a2bffe4034@mail.gmail.com>
	<bbaeab100606081930u7de74c9ct4113f828502d264e@mail.gmail.com>
Message-ID: <1f7befae0606081956u110433c0pda3179ba4bc35410@mail.gmail.com>

[Tim]
>> FYI, here's the minimal set of failing tests:
>>
>> $ python_d ../Lib/test/regrtest.py test_file test_optparse
>> test_file
>> test_optparse
>> test test_optparse failed -- Traceback (most recent call last):
>>   File "C:\Code\python\lib\test\test_optparse.py", line 1042, in
>> test_filetype_noexist
>>     test_support.TESTFN)
>>   File "C:\Code\python\lib\test\test_optparse.py", line 158, in
>> assertParseFail
>>     self.assertFalse("expected parse failure")
>> AssertionError

[Brett]
> Different type of failure as well;

Not so.

> if you look at the original failure it has to do with the help output
> having an extra newline.

While if you look at the original failure ;-), you'll see that _both_
failure modes occur.  The one I showed above occurs when test_optparse
runs the first time; the one you're thinking of occurs when regrtest
*re*runs test_optparse in verbose mode.  The original HPPA log
contained both failures.

>> ...
>> I have no idea why any of this is true, but there's good and bad news:
>> reverting rev 46757 does _not_ make the problem go away.

> Actually, that run had two checkins; there was also 46755.

Build 992 on the W2k trunk buildbot had only 46757 in its blamelist,
and was the first failing test run there.

> But when I ``svn update -r46754`` it still fails on my OS X laptop.

What revision was your laptop at before the update?  It could help a
lot to know the earliest revision at which this fails.

> So still ain't my fault.  =)

No, you're so argumentative today I'm starting to suspect it is ;-)

...

>> As to why the failure only showed up recently, I'm not sure, but
>> test_file must run before test_optparse, and it looks like the problem
>> goes away if "too many"(!) other tests intervene.  The Win2K buildbot
>> is unique in that test_file has been followed very soon by
>> test_optparse two builds in a row.

> We don't have any mechanism in place to record when we find tests failing in
> a row to always run them in that order until we fix it, do we?

That's right -- none.  It would be easy to check in a little temporary
tweak -- think I'll do that.

> Nor do we have a script to just continually check out older revisions
> in svn, compile, and test until the tests do pass, huh?

We don't, and I don't either.  IIRC, Neil did quite a bit of that some
time ago, and he may have a script for it.  Doing a binary search
under SVN should be very easy, given that a revision number identifies
the entire state of the repository.
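
A sketch of such a driver (the build and test commands here are only
illustrative; nothing like this exists in the tree):

    import subprocess

    def passes(rev):
        # rebuild the tree at the given revision and rerun the failing pair
        subprocess.call(["svn", "update", "-r", str(rev)])
        subprocess.call(["make"])
        rc = subprocess.call(["./python", "Lib/test/regrtest.py",
                              "test_file", "test_optparse"])
        return rc == 0

    def first_bad(good, bad):
        # invariant: `good` passes, `bad` fails; halve the gap each step
        while bad - good > 1:
            mid = (good + bad) // 2
            if passes(mid):
                good = mid
            else:
                bad = mid
        return bad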

From brett at python.org  Fri Jun  9 05:03:19 2006
From: brett at python.org (Brett Cannon)
Date: Thu, 8 Jun 2006 20:03:19 -0700
Subject: [Python-Dev] [Python-checkins] buildbot warnings in hppa Ubuntu
	dapper trunk
In-Reply-To: <1f7befae0606081956u110433c0pda3179ba4bc35410@mail.gmail.com>
References: <20060608202043.D5CEC1E4004@bag.python.org>
	<bbaeab100606081647j52337dc2na2f005c42568005e@mail.gmail.com>
	<1f7befae0606081820p27c1d827n95a4a59f365cdfeb@mail.gmail.com>
	<1f7befae0606081905n32af04beqf37236a2bffe4034@mail.gmail.com>
	<bbaeab100606081930u7de74c9ct4113f828502d264e@mail.gmail.com>
	<1f7befae0606081956u110433c0pda3179ba4bc35410@mail.gmail.com>
Message-ID: <bbaeab100606082003l1f5114a3n280c9a074975e589@mail.gmail.com>

On 6/8/06, Tim Peters <tim.peters at gmail.com> wrote:
>
> [Tim]
> >> FYI, here's the minimal set of failing tests:
> >>
> >> $ python_d ../Lib/test/regrtest.py test_file test_optparse
> >> test_file
> >> test_optparse
> >> test test_optparse failed -- Traceback (most recent call last):
> >>   File "C:\Code\python\lib\test\test_optparse.py", line 1042, in
> test_filetype_noexist
> >>     test_support.TESTFN)
> >>   File "C:\Code\python\lib\test\test_optparse.py", line 158, in
> assertParseFail
> >>     self.assertFalse("expected parse failure")
> >> AssertionError
>
> [Brett]
> > Different type of failure as well;
>
> Not so.
>
> > if you look at the original failure it has to do with the help output
> > having an extra newline.
>
> While if you look at the original failure ;-), you'll see that _both_
> failure modes occur.  The one I showed above occurs when test_optparse
> runs the first time; the one you're thinking of occurs when regrest
> *re*runs test_optparse in verbose mode.  The original HPPA log
> contained both failures.


Ah, my mistake.

>> ...
> >> I have no idea why any of this is true, but there's good and bad news:
> >> reverting rev 46757 does _not_ make the problem go away.
>
> > Actually, that run had two checkins; there was also 46755.
>
> Build 992 on the W2k trunk buildbot had only 46757 in its blamelist,
> and was the first failing test run there.
>
> > But when I ``svn update -r46754`` it still fails on my OS X laptop.
>
> What revision was your laptop at before the update?  It could help a
> lot to know the earliest revision at which this fails.


No clue.  I had not updated my local version in quite some time since most
of my dev as of late has been at work.

> So still ain't my fault.  =)
>
> No, you're so argumentative today I'm starting to suspect it is ;-)


Sorry, but at the moment Python is failing on ``make install`` when it runs
compileall .

> ...
>
> >> As to why the failure only showed up recently, I'm not sure, but
> >> test_file must run before test_optparse, and it looks like the problem
> >> goes away if "too many"(!) other tests intervene.  The Win2K buildbot
> >> is unique in that test_file has been followed very soon by
> >> test_optparse two builds in a row.
>
> > We don't have any mechanism in place to record when we find tests
> failing in
> > a row to always run them in that order until we fix it, do we?
>
> That's right -- none.  If would be easy to check in a little temporary
> tweak -- think I'll do that.
>
> > Nor do we have a script to just continually check out older revisions
> > in svn, compile, and test until the tests do pass, huh?
>
> We don't, and I don't either.  IIRC, Neil did quite a bit of that some
> time ago, and he may have a script for it.  Doing a binary search
> under SVN should be very easy, given that a revision number identifies
> the entire state of the repository.
>


That would be handy.  Question is do we just want a progressive backtrack or
an actual binary search that goes back a set number of revisions and then
begins to creep back up in rev. numbers when it realizes it has gone back
too far.

-Brett
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.python.org/pipermail/python-dev/attachments/20060608/c161a02b/attachment.html 

From tim.peters at gmail.com  Fri Jun  9 05:15:59 2006
From: tim.peters at gmail.com (Tim Peters)
Date: Thu, 8 Jun 2006 23:15:59 -0400
Subject: [Python-Dev] [Python-checkins] buildbot warnings in hppa Ubuntu
	dapper trunk
In-Reply-To: <bbaeab100606082003l1f5114a3n280c9a074975e589@mail.gmail.com>
References: <20060608202043.D5CEC1E4004@bag.python.org>
	<bbaeab100606081647j52337dc2na2f005c42568005e@mail.gmail.com>
	<1f7befae0606081820p27c1d827n95a4a59f365cdfeb@mail.gmail.com>
	<1f7befae0606081905n32af04beqf37236a2bffe4034@mail.gmail.com>
	<bbaeab100606081930u7de74c9ct4113f828502d264e@mail.gmail.com>
	<1f7befae0606081956u110433c0pda3179ba4bc35410@mail.gmail.com>
	<bbaeab100606082003l1f5114a3n280c9a074975e589@mail.gmail.com>
Message-ID: <1f7befae0606082015l63a85567lb4d5eeb4791deb2b@mail.gmail.com>

...

[Tim]
>> What revision was your laptop at before the update?  It could help a
>> lot to know the earliest revision at which this fails.

[Brett]
> No clue.  I had not updated my local version in quite some time since most
> of my dev as of late has been at work.

A good clue is to look at the "Revision: NNNNN" line from "svn info"
output executed from the root of your checkout.  Or if you have the
Python executable handy:

>>> import sys
>>> sys.subversion
('CPython', 'trunk', '46762')

No, I'm not making that up!

>> ...
>> Doing a binary search under SVN should be very easy, given that
>> a revision number identifies the entire state of the repository.

> That would be handy.  Question is do we just want a progressive backtrack or
> an actual binary search that goes back a set number of revisions and then
> begins to creep back up in rev. numbers when it realizes it has gone back
> too far.

What we really want to do is solve the problem.  If we're going to tie
up my machine doing it, I want as few builds as theoretically
possible.  If we're going to tie up your machine, it's fine by me if
it goes back one checkin at a time until 1991 :-)

From brett at python.org  Fri Jun  9 05:38:29 2006
From: brett at python.org (Brett Cannon)
Date: Thu, 8 Jun 2006 20:38:29 -0700
Subject: [Python-Dev] [Python-checkins] buildbot warnings in hppa Ubuntu
	dapper trunk
In-Reply-To: <1f7befae0606082015l63a85567lb4d5eeb4791deb2b@mail.gmail.com>
References: <20060608202043.D5CEC1E4004@bag.python.org>
	<bbaeab100606081647j52337dc2na2f005c42568005e@mail.gmail.com>
	<1f7befae0606081820p27c1d827n95a4a59f365cdfeb@mail.gmail.com>
	<1f7befae0606081905n32af04beqf37236a2bffe4034@mail.gmail.com>
	<bbaeab100606081930u7de74c9ct4113f828502d264e@mail.gmail.com>
	<1f7befae0606081956u110433c0pda3179ba4bc35410@mail.gmail.com>
	<bbaeab100606082003l1f5114a3n280c9a074975e589@mail.gmail.com>
	<1f7befae0606082015l63a85567lb4d5eeb4791deb2b@mail.gmail.com>
Message-ID: <bbaeab100606082038i52b4ced6w3c26d886cc9fc3b2@mail.gmail.com>

On 6/8/06, Tim Peters <tim.peters at gmail.com> wrote:

> ...
>
> [Tim]
> >> What revision was your laptop at before the update?  It could help a
> >> lot to know the earliest revision at which this fails.
>
> [Brett]
> > No clue.  I had not updated my local version in quite some time since
> most
> > of my dev as of late has been at work.
>
> A good clue is to look at the "Revsion: NNNNN" line from "svn info"
> output executed from the root of your checkout.  Or if you have the
> Python executable handy:
>
> >>> import sys
> >>> sys.subversion
> ('CPython', 'trunk', '46762')
>
> No, I'm not making that up!


Oh, I believe you.  The issue is that I did an svn update when I got home today.
I have another checkout that I never modify (to use as a reference checkout)
that I have not updated since rev. 43738 and it passes the tests.  That sure
helps narrow it down, doesn't it?  =)  A quick check of rev. 46750 has the
test passing as well.


>> ...
> >> Doing a binary search under SVN should be very easy, given that
> >> a revision number identifies the entire state of the repository.
>
> > That would be handy.  Question is do we just want a progressive
> backtrack or
> > an actual binary search that goes back a set number of revisions and
> then
> > begins to creep back up in rev. numbers when it realizes it has gone
> back
> > too far.
>
> What we really want to do is solve the problem.


Of course.

>   If we're going to tie
> up my machine doing it, I want as few builds as theoretically
> possible.  If we're going to tie up your machine, it's fine by me if
> it goes back one checkin at a time until 1991 :-)
>


=)  On my slow machine, it might be another 15 years before we get to
current on HEAD.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.python.org/pipermail/python-dev/attachments/20060608/039d9f85/attachment.htm 

From tim.peters at gmail.com  Fri Jun  9 05:42:28 2006
From: tim.peters at gmail.com (Tim Peters)
Date: Thu, 8 Jun 2006 23:42:28 -0400
Subject: [Python-Dev] [Python-checkins] buildbot warnings in hppa Ubuntu
	dapper trunk
In-Reply-To: <bbaeab100606082038i52b4ced6w3c26d886cc9fc3b2@mail.gmail.com>
References: <20060608202043.D5CEC1E4004@bag.python.org>
	<bbaeab100606081647j52337dc2na2f005c42568005e@mail.gmail.com>
	<1f7befae0606081820p27c1d827n95a4a59f365cdfeb@mail.gmail.com>
	<1f7befae0606081905n32af04beqf37236a2bffe4034@mail.gmail.com>
	<bbaeab100606081930u7de74c9ct4113f828502d264e@mail.gmail.com>
	<1f7befae0606081956u110433c0pda3179ba4bc35410@mail.gmail.com>
	<bbaeab100606082003l1f5114a3n280c9a074975e589@mail.gmail.com>
	<1f7befae0606082015l63a85567lb4d5eeb4791deb2b@mail.gmail.com>
	<bbaeab100606082038i52b4ced6w3c26d886cc9fc3b2@mail.gmail.com>
Message-ID: <1f7befae0606082042n7a178acdpb8bcfcd2de0aa787@mail.gmail.com>

Well, this sure sucks.  This is the earliest revision at which the tests fail:

"""
r46752 | georg.brandl | 2006-06-08 10:50:53 -0400 (Thu, 08 Jun 2006) | 3 lines
Changed paths:
   M /python/trunk/Lib/test/test_file.py

Convert test_file to unittest.
"""

If _that's_ not a reason for using doctest, I don't know what is ;-)

From brett at python.org  Fri Jun  9 06:05:22 2006
From: brett at python.org (Brett Cannon)
Date: Thu, 8 Jun 2006 21:05:22 -0700
Subject: [Python-Dev] [Python-checkins] buildbot warnings in hppa Ubuntu
	dapper trunk
In-Reply-To: <1f7befae0606082042n7a178acdpb8bcfcd2de0aa787@mail.gmail.com>
References: <20060608202043.D5CEC1E4004@bag.python.org>
	<bbaeab100606081647j52337dc2na2f005c42568005e@mail.gmail.com>
	<1f7befae0606081820p27c1d827n95a4a59f365cdfeb@mail.gmail.com>
	<1f7befae0606081905n32af04beqf37236a2bffe4034@mail.gmail.com>
	<bbaeab100606081930u7de74c9ct4113f828502d264e@mail.gmail.com>
	<1f7befae0606081956u110433c0pda3179ba4bc35410@mail.gmail.com>
	<bbaeab100606082003l1f5114a3n280c9a074975e589@mail.gmail.com>
	<1f7befae0606082015l63a85567lb4d5eeb4791deb2b@mail.gmail.com>
	<bbaeab100606082038i52b4ced6w3c26d886cc9fc3b2@mail.gmail.com>
	<1f7befae0606082042n7a178acdpb8bcfcd2de0aa787@mail.gmail.com>
Message-ID: <bbaeab100606082105v143bb02aq5fffd9dbec660e1f@mail.gmail.com>

On 6/8/06, Tim Peters <tim.peters at gmail.com> wrote:
>
> Well, this sure sucks.  This is the earliest revision at which the tests
> fail:
>
> """
> r46752 | georg.brandl | 2006-06-08 10:50:53 -0400 (Thu, 08 Jun 2006) | 3
> lines
> Changed paths:
>    M /python/trunk/Lib/test/test_file.py
>
> Convert test_file to unittest.
> """
>
> If _that's_ not a reason for using doctest, I don't know what is ;-)
>


That's hilarious!  OK, that's it.  I am definitely focusing my efforts for
2.6 on improving testing.

-Brett
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.python.org/pipermail/python-dev/attachments/20060608/074f232c/attachment.html 

From tim.peters at gmail.com  Fri Jun  9 06:21:45 2006
From: tim.peters at gmail.com (Tim Peters)
Date: Fri, 9 Jun 2006 00:21:45 -0400
Subject: [Python-Dev] [Python-checkins] buildbot warnings in hppa Ubuntu
	dapper trunk
In-Reply-To: <bbaeab100606082105v143bb02aq5fffd9dbec660e1f@mail.gmail.com>
References: <20060608202043.D5CEC1E4004@bag.python.org>
	<1f7befae0606081820p27c1d827n95a4a59f365cdfeb@mail.gmail.com>
	<1f7befae0606081905n32af04beqf37236a2bffe4034@mail.gmail.com>
	<bbaeab100606081930u7de74c9ct4113f828502d264e@mail.gmail.com>
	<1f7befae0606081956u110433c0pda3179ba4bc35410@mail.gmail.com>
	<bbaeab100606082003l1f5114a3n280c9a074975e589@mail.gmail.com>
	<1f7befae0606082015l63a85567lb4d5eeb4791deb2b@mail.gmail.com>
	<bbaeab100606082038i52b4ced6w3c26d886cc9fc3b2@mail.gmail.com>
	<1f7befae0606082042n7a178acdpb8bcfcd2de0aa787@mail.gmail.com>
	<bbaeab100606082105v143bb02aq5fffd9dbec660e1f@mail.gmail.com>
Message-ID: <1f7befae0606082121u913571es82cdcd8687bf3d3e@mail.gmail.com>

[Tim]
>> Well, this sure sucks.  This is the earliest revision at which the
>> tests fail:
>>
>> """
>> r46752 | georg.brandl | 2006-06-08 10:50:53 -0400 (Thu, 08 Jun
>> 2006) | 3 lines
>> Changed paths:
>>    M /python/trunk/Lib/test/test_file.py
>>
>> Convert test_file to unittest.
>> """
>>
>> If _that's_ not a reason for using doctest, I don't know what is ;-)

[Brett]
> That's hilarious!  OK, that's it.  I am definitely focusing my efforts for
> 2.6 in improving testing.

LOL.  Believe it or not, we wouldn't have had a problem here if
doctest had been used instead -- but the reason isn't particularly
sane ;-)

Before the conversion, test_file.py ended with:

try:
    bug801631()
finally:
    os.unlink(TESTFN)

so that TESTFN got removed _no matter what_.  Some of the individual
tests here are careless about cleaning up after themselves, and that
last clause used to hide a multitude of lazy sins.  A doctest would
naturally also have ended with a "clean up the mess" block.

After the conversion to unittest, there was no final cleanup block,
and it just so happened that "testUnicodeOpen" has the alphabetically
largest test name, so unittest runs it last, and testUnicodeOpen() was
one of the lazy tests that didn't clean up after itself.
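
A minimal way to get the old guarantee back under unittest (a sketch;
the class name is illustrative, assuming the tests keep using
test_support.TESTFN) is a tearDown that always removes the file:

    import os
    import unittest
    from test import test_support

    class FileTests(unittest.TestCase):
        def tearDown(self):
            # runs after every test, pass or fail, like the old finally: block
            if os.path.exists(test_support.TESTFN):
                os.unlink(test_support.TESTFN)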

What that has to do with test_optparse is left as an exercise for the reader ;-)

From mike at skew.org  Fri Jun  9 06:54:18 2006
From: mike at skew.org (Mike Brown)
Date: Thu, 8 Jun 2006 22:54:18 -0600 (MDT)
Subject: [Python-Dev] Some more comments re new uriparse module,
	patch 1462525
In-Reply-To: <Pine.LNX.4.64.0606081842090.8417@localhost>
Message-ID: <200606090454.k594sKXT021925@chilled.skew.org>

John J Lee wrote:
> > http://python.org/sf/1500504
> [...]
> 
> At first glance, looks good.  I hope to review it properly later.
> 
> One point: I don't think there should be any mention of "URL" in the 
> module -- we should use "URI" everywhere (see my comments on Paul's 
> original version for a bit more on this).

Agreed.

Although you've added the test cases from 4Suite and credited me for them,
only a few of the test cases were invented by me.  I'd rather you credited
them to their original sources, as I did.

Also, I believe Graham Klyne has been adding some new cases to his Haskell
tools, but hasn't been updating the other spreadsheet and RDF files in which 
he publishes them in a more usable form. My tests only use what's in the 
spreadsheet, so I've only got 88 out of 99 "testRelative" cases from
http://cvs.haskell.org/cgi-bin/cvsweb.cgi/fptools/libraries/network/tests/URITest.hs
So if you really want to be thorough, grab the missing cases from there.

-

It appears that Paul uploaded a new version of his library on June 3:
http://python.org/sf/1462525
I'm unclear on the relationship between the two now. Are they both up for 
consideration?

-

One thing I forgot to mention in private email is that I'm concerned that the
inclusion of URI reference resolution functionality has exceeded the scope of
this 'urischemes' module that purports to be for 'extensible URI parsing'.  It
is becoming a scheme-aware and general-purpose syntactic processing library
for URIs, and should be characterized as such in its name as well as in its
documentation. 

Even without a new name and more accurately documented scope, people are going
to see no reason not to add the rest of STD 66's functionality to it
(percent-encoding, normalization for testing equivalence, syntax
validation...). As you can see in Ft.Lib.Uri, the latter two are not at all
hard to implement, especially if you use regular expressions. These all fall 
under syntactic operations on URIs, just like reference-resolution.

Percent-encoding gets very hairy with its API details due to application-level
uses that don't jibe with STD 66 (e.g. the fuzzy specs and convoluted history
governing application/x-www-form-urlencoded), the nuances of character
encoding and Python string types, and widely varying expectations of users.


From nnorwitz at gmail.com  Fri Jun  9 08:23:40 2006
From: nnorwitz at gmail.com (Neal Norwitz)
Date: Thu, 8 Jun 2006 23:23:40 -0700
Subject: [Python-Dev] beta1 coming real soon
Message-ID: <ee2a432c0606082323t1ee013b6nd91771fb461acede@mail.gmail.com>

It's June 9 in most parts of the world.  The schedule calls for beta 1
on June 14.  That means there's less than 4 days until the expected
code freeze.  Please don't rush to get everything in at the last
minute.  The buildbots should remain green to keep Anthony happy and
me sane (or is it the other way around).

If you plan to make a checkin adding a feature (even a simple one),
you oughta let people know by responding to this message.  Please get
the bug fixes in ASAP.  Remember to add tests!

I would really like it if someone on Windows could test the performance of
patch http://python.org/sf/1479611 and see if it improves things.  I
will test again on Mac PPC, Linux x86, and Linux amd64 to verify the gains.

n

From nnorwitz at gmail.com  Fri Jun  9 08:33:32 2006
From: nnorwitz at gmail.com (Neal Norwitz)
Date: Thu, 8 Jun 2006 23:33:32 -0700
Subject: [Python-Dev] 2.5 issues need resolving in a few days
Message-ID: <ee2a432c0606082333o44b7465bk45b686ca83ccc802@mail.gmail.com>

The most important outstanding issue is the xmlplus/xmlcore issue.
It's not going to get fixed unless someone works on it.  There's only
a few days left before beta 1.  Can someone please address this?  If
that means reverting changes to maintain compatibility, so be it.

There is still the missing documentation for ctypes and element tree.
I know there's been progress on ctypes.  What are we going to do about
ElementTree?  Are we going to have another undocumented module in the
core or should we pull ET out (that would also fix the xml issue)?

Finally, there are the AST regressions.

If there are patches that address any of these issues, respond with a link here.

I'm guessing that all the possible features are not going to make it into 2.5.

n

From theller at python.net  Fri Jun  9 09:00:12 2006
From: theller at python.net (Thomas Heller)
Date: Fri, 09 Jun 2006 09:00:12 +0200
Subject: [Python-Dev] beta1 coming real soon
In-Reply-To: <ee2a432c0606082323t1ee013b6nd91771fb461acede@mail.gmail.com>
References: <ee2a432c0606082323t1ee013b6nd91771fb461acede@mail.gmail.com>
Message-ID: <e6b69s$pqf$1@sea.gmane.org>

Neal Norwitz wrote:
> It's June 9 in most parts of the world.  The schedule calls for beta 1
> on June 14.  That means there's less than 4 days until the expected
> code freeze.  Please don't rush to get everything in at the last
> minute.  The buildbots should remain green to keep Anthony happy and
> me sane (or is it the other way around).

oops - I still had June 24 marked in my calendar!

> If you plan to make a checkin adding a feature (even a simple one),
> you oughta let people know by responding to this message.  Please get
> the bug fixes in ASAP.  Remember to add tests!

I intend to merge in the current ctypes SF CVS, will try that today.

Thomas


From rasky at develer.com  Fri Jun  9 09:34:00 2006
From: rasky at develer.com (Giovanni Bajo)
Date: Fri, 9 Jun 2006 09:34:00 +0200
Subject: [Python-Dev] Subversion repository question - back up to older
	versions
References: <17544.29145.93728.612908@montanaro.dyndns.org><1f7befae0606081226r43c88376lf09080ef8810c58d@mail.gmail.com>
	<17544.32945.408351.767417@montanaro.dyndns.org>
Message-ID: <07ca01c68b97$137c65c0$3db72997@bagio>

skip at pobox.com wrote:

>     >> I have three Python branches, trunk, release23-maint and
>     >> release24-maint.  In the (for example) release24-maint, what
>     svn up >> command would I use to get to the 2.4.2 version?
>
>     Tim> First question:
>
>     Tim>    cd to the root of your release24-maint checkout, then
>     Tim>    svn switch
> svn+ssh://pythondev at svn.python.org/python/tags/r242
>
> How is that different than noting that r242 corresponds to revision
> 39619
> and executing:
>
>     svn up -r 39619


If you realize that each file/directory in Subversion is uniquely identified by
a two-dimensional coordinate system [url, revision] (given a checkout, you can
use "svn info" to get its coordinates), then we can say that "svn up -r 39619"
keeps the url unchanged and changes the revision to whatever number you
specified. In other words, you get the state that URL was in at that time. For
instance, if you execute it within the trunk working copy, you will get the
trunk at the moment 2.4.2 was released.

On the other hand, "svn switch" moves the url: it basically "moves" your
checkout from [whatever_url, whatever_rev] to [url_specified, HEAD],
downloading the minimal set of diffs to do so. Given that /tags/r242 is a tag,
it means that any revision is good, since nobody is going to commit into that
directory (it will stay unchanged forever). So [/tags/r242, HEAD] is the same
as any other [/tags/r242, REVNUM]  (assuming of course that /tags/r242 was
already created at the time of REVNUM).

So basically you want svn switch to [/tags/r242, HEAD] if you don't plan on
doing modifications, while you want [/branches/release24-maint, HEAD] if you
want to work on the 2.4 branch. Going to [/branches/release24-maint, 39619]
does not really serve much purpose: you have to find out and write 39619
manually, you still can't do commits since of course you want to work on the
top of the branch, and you get less meaningful information if you later run
"svn info" on the working copy (as you're probably going to forget what
[/branches/release24-maint, 39619] means pretty soon, while [/tags/r242, NNN]
is clearer).

Giovanni Bajo


From python-dev at zesty.ca  Fri Jun  9 09:45:15 2006
From: python-dev at zesty.ca (Ka-Ping Yee)
Date: Fri, 9 Jun 2006 02:45:15 -0500 (CDT)
Subject: [Python-Dev] UUID module
In-Reply-To: <5.1.1.6.0.20060307103122.042c4ae8@mail.telecommunity.com>
References: <5.1.1.6.0.20060307103122.042c4ae8@mail.telecommunity.com>
Message-ID: <Pine.LNX.4.58.0606090221330.5223@server1.LFW.org>

Quite a few people have expressed interest in having UUID
functionality in the standard library, and previously on this
list some suggested possibly using the uuid.py module i wrote:

    http://zesty.ca/python/uuid.py

This module provides a UUID class, functions for generating
version 1, 3, 4, and 5 UUIDs, and can convert UUIDs to and from
strings of 32 hex digits and strings of 16 bytes.
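
For the curious, basic usage looks roughly like this (a sketch against
the module as linked above; the exact str() format is part of what's
being discussed in item (b) below):

    import uuid

    u = uuid.uuid4()                                  # random, version-4 UUID
    n = uuid.uuid3(uuid.NAMESPACE_DNS, 'python.org')  # name-based, MD5
    s = str(u)                                        # 32 hex digits plus separators
    again = uuid.UUID(s)                              # round-trip from the string form
    raw = u.bytes                                     # the raw 16-byte form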

The thread of messages posted on python-dev begins here:

    http://mail.python.org/pipermail/python-dev/2006-March/062119.html

My reading of this is that there was a pretty good consensus on
the issues with this module that need to be addressed:

    (a) UUIDs should be immutable (and usable as dict keys).

    (b) The str() of a UUID should not contain curly braces.

    (c) The uuid1() function, which generates a version-1 UUID,
        uses a very slow method of getting the MAC address.

    (d) The retrieved MAC address should be cached.

    (e) Tests need to be written.

The big question seems to be item (c); all the other items are easy
to take care of.  Assuming (a), (b), (d), and (e) are done, i see a
few options for how to proceed from there:

    1.  Remove the uuid1() function, then put uuid.py in the
        standard library so at least we'll have the rest of the
        UUID functionality in 2.5b1.  Fill in uuid1() later.

    2.  Remove the MAC-address functionality from uuid.py; instead
        let the caller give the MAC address as an argument to uuid1().
        Put that in the standard library for 2.5b1 and fill in the
        function for retrieving the MAC address later.

    3.  Add uuid.py to the standard library with its current slow
        method of finding the MAC address (parsing the output of
        ifconfig or ipconfig), but cache the output so it's only
        slow the first time.

    4.  Figure out how to use ctypes to retrieve the MAC address.
        This would only work on certain platforms, but we could
        probably cover the major ones.  On the other hand, it seems
        unlikely that this would be fully hammered out before the
        code freeze.

    5.  Don't include any UUID functionality in 2.5b1; put it off
        until 2.6.

What are your thoughts on this?

Thanks!


-- ?!ng

From fredrik at pythonware.com  Fri Jun  9 10:11:14 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Fri, 09 Jun 2006 10:11:14 +0200
Subject: [Python-Dev] UUID module
In-Reply-To: <Pine.LNX.4.58.0606090221330.5223@server1.LFW.org>
References: <5.1.1.6.0.20060307103122.042c4ae8@mail.telecommunity.com>
	<Pine.LNX.4.58.0606090221330.5223@server1.LFW.org>
Message-ID: <e6baf3$8fl$1@sea.gmane.org>

Ka-Ping Yee wrote:

> Quite a few people have expressed interest in having UUID
> functionality in the standard library, and previously on this
> list some suggested possibly using the uuid.py module i wrote:
> 
>     http://zesty.ca/python/uuid.py

+1!

> This module provides a UUID class, functions for generating
> version 1, 3, 4, and 5 UUIDs, and can convert UUIDs to and from
> strings of 32 hex digits and strings of 16 bytes.
> 
> The thread of messages posted on python-dev begins here:
> 
>     http://mail.python.org/pipermail/python-dev/2006-March/062119.html
> 
> My reading of this is that there was a pretty good consensus on
> the issues with this module that need to be addressed:
> 
>     (a) UUIDs should be immutable (and usable as dict keys).
> 
>     (b) The str() of a UUID should not contain curly braces.
> 
>     (c) The uuid1() function, which generates a version-1 UUID,
>         uses a very slow method of getting the MAC address.
> 
>     (d) The retrieved MAC address should be cached.
> 
>     (e) Tests need to be written.

       (f) "import uuid" or "import uuidlib" ?

> The big question seems to be item (c); all the other items are easy
> to take care of.  Assuming (a), (b), (d), and (e) are done, i see a
> few options for how to proceed from there:
> 
>     1.  Remove the uuid1() function, then put uuid.py in the
>         standard library so at least we'll have the rest of the
>         UUID functionality in 2.5b1.  Fill in uuid1() later.
> 
>     2.  Remove the MAC-address functionality from uuid.py; instead
>         let the caller give the MAC address as an argument to uuid1().
>         Put that in the standard library for 2.5b1 and fill in the
>         function for retrieving the MAC address later.
> 
>     3.  Add uuid.py to the standard library with its current slow
>         method of finding the MAC address (parsing the output of
>         ifconfig or ipconfig), but cache the output so it's only
>         slow the first time.
> 
>     4.  Figure out how to use ctypes to retrieve the MAC address.
>         This would only work on certain platforms, but we could
>         probably cover the major ones.  On the other hand, it seems
>         unlikely that this would be fully hammered out before the
>         code freeze.
> 
>     5.  Don't include any UUID functionality in 2.5b1; put it off
>         until 2.6.

       6.  Combine 2 and 3: require the user to pass in a MAC argument
           to uuid1, but provide a SlowGetMacAddress helper that wraps
           the existing code.

</F>


From mal at egenix.com  Fri Jun  9 11:32:14 2006
From: mal at egenix.com (M.-A. Lemburg)
Date: Fri, 09 Jun 2006 11:32:14 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <e696es$5hs$1@sea.gmane.org>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>	<17533.48558.364179.717911@montanaro.dyndns.org>	<447DC377.4000504@egenix.com>	<17533.51096.462552.451772@montanaro.dyndns.org>	<447DD153.8080202@egenix.com>	<e5kkr6$14a$1@sea.gmane.org>	<447DE055.4040105@egenix.com><e5oa7b$kvl$1@sea.gmane.org>	<447FF966.5050807@egenix.com>	<e5p06h$tp1$1@sea.gmane.org>	<448141F6.7030806@v.loewis.de>	<e5rjkj$rph$1@sea.gmane.org>	<44859C12.8080306@egenix.com>	<4485C152.5050705@egenix.com>	<e671tn$rhg$1@sea.gmane.org>	<448740F2.4020900@egenix.com>	<e67g2b$o33$1@sea.gmane.org>
	<4487E1DF.3030302@egenix.com> <e696es$5hs$1@sea.gmane.org>
Message-ID: <4489401E.9040606@egenix.com>

Fredrik Lundh wrote:
> M.-A. Lemburg wrote:
> 
>> The results were produced by pybench 2.0 and use time.time
>> on Linux, plus a different calibration strategy. As a result
>> these timings are a lot more repeatable than with pybench 1.3
>> and I've confirmed the timings using several runs to make sure.
> 
> can you check in 2.0 ?  (if it's not quite ready for public consumption, 
> put it in the sandbox).

I'll check it in once it's ready for prime-time, either later
today or early next week.

You can download a current snapshot from:

http://www.egenix.com/files/python/pybench-2.0-2006-06-09.zip

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 09 2006)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________
2006-07-03: EuroPython 2006, CERN, Switzerland              23 days left

::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! ::::

From gh at ghaering.de  Fri Jun  9 11:00:19 2006
From: gh at ghaering.de (Gerhard Häring)
Date: Fri, 09 Jun 2006 11:00:19 +0200
Subject: [Python-Dev] beta1 coming real soon
In-Reply-To: <ee2a432c0606082323t1ee013b6nd91771fb461acede@mail.gmail.com>
References: <ee2a432c0606082323t1ee013b6nd91771fb461acede@mail.gmail.com>
Message-ID: <e6bd7b$iec$1@sea.gmane.org>

Neal Norwitz wrote:
> It's June 9 in most parts of the world.  The schedule calls for beta 1
> on June 14.  That means there's less than 4 days until the expected
> code freeze.  Please don't rush to get everything in at the last
> minute.  The buildbots should remain green to keep Anthony happy and
> me sane (or is it the other way around). [...]

I would have liked to implement the last major missing piece from 
pysqlite - the authorizer hook - then release pysqlite 2.3.0 and merge the 
changes into the Python core sqlite3 module.

This would be an additional feature and not require any changes to the 
existing code.

If you'd rather not have this merged because of new code, then I'll skip it.

There are other changes I did in pysqlite for pysqlite 2.3.0 that I'd 
more strongly like to integrate here:

http://initd.org/tracker/pysqlite/changeset/274

This changeset fixes a real bug that can lead to hard crashes and also 
makes the sqlite module more sane:

- converter names are looked up in a case-insensitive manner (the old 
behaviour is confusing to users)
- Errors in callbacks are not silently ignored any longer, but lead to 
the query being aborted
- Additionally, the error can be echoed to stderr if a debugging flag is 
set (default is off)

-- Gerhard


From skip at pobox.com  Fri Jun  9 12:26:01 2006
From: skip at pobox.com (skip at pobox.com)
Date: Fri, 9 Jun 2006 05:26:01 -0500
Subject: [Python-Dev] UUID module
In-Reply-To: <e6baf3$8fl$1@sea.gmane.org>
References: <5.1.1.6.0.20060307103122.042c4ae8@mail.telecommunity.com>
	<Pine.LNX.4.58.0606090221330.5223@server1.LFW.org>
	<e6baf3$8fl$1@sea.gmane.org>
Message-ID: <17545.19641.359024.818295@montanaro.dyndns.org>


    Fredrik>  6.  Combine 2 and 3: require the user to pass in a MAC argument
    Fredrik>      to uuid1, but provide a SlowGetMacAddress helper that wraps
    Fredrik>      the existing code.

Or make the MAC address an optional arg to uuid1.  If given, use it.  If
not, use the slow lookup (then cache the result).
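
Roughly like this (a sketch only; the lookup helper below is a stand-in
for the slow ifconfig/ipconfig parsing, not something uuid.py has today):

    _cached_node = None

    def _slow_get_mac_address():
        # stand-in for parsing ifconfig/ipconfig output
        raise NotImplementedError

    def get_node(node=None):
        # use the caller-supplied MAC if given; otherwise do the slow
        # lookup at most once and reuse the cached result afterwards
        global _cached_node
        if node is not None:
            return node
        if _cached_node is None:
            _cached_node = _slow_get_mac_address()
        return _cached_node

uuid1() would just call get_node(node) at the top and go on from there.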

Skip

From fredrik at pythonware.com  Fri Jun  9 12:28:01 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Fri, 09 Jun 2006 12:28:01 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <4489401E.9040606@egenix.com>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>	<17533.48558.364179.717911@montanaro.dyndns.org>	<447DC377.4000504@egenix.com>	<17533.51096.462552.451772@montanaro.dyndns.org>	<447DD153.8080202@egenix.com>	<e5kkr6$14a$1@sea.gmane.org>	<447DE055.4040105@egenix.com><e5oa7b$kvl$1@sea.gmane.org>	<447FF966.5050807@egenix.com>	<e5p06h$tp1$1@sea.gmane.org>	<448141F6.7030806@v.loewis.de>	<e5rjkj$rph$1@sea.gmane.org>	<44859C12.8080306@egenix.com>	<4485C152.5050705@egenix.com>	<e671tn$rhg$1@sea.gmane.org>	<448740F2.4020900@egenix.com>	<e67g2b$o33$1@sea.gmane.org>	<4487E1DF.3030302@egenix.com>
	<e696es$5hs$1@sea.gmane.org> <4489401E.9040606@egenix.com>
Message-ID: <e6bifh$5bv$1@sea.gmane.org>

M.-A. Lemburg wrote:

> You can download a current snapshot from:
> 
> http://www.egenix.com/files/python/pybench-2.0-2006-06-09.zip

believe it or not, but this hangs on my machine, under 2.5 trunk.  and 
it hangs hard; neither control-c, break, nor the task manager manages to 
kill it.

if it's any clue, it prints

> -------------------------------------------------------------------------------
> PYBENCH 2.0
> -------------------------------------------------------------------------------
> * using Python 2.5a2
> * disabled garbage collection
> * system check interval set to maximum: 2147483647
> * using timer: time.clock

and that's it; the process is just sitting there, using exactly 0% CPU.

</F>


From fredrik at pythonware.com  Fri Jun  9 12:33:25 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Fri, 09 Jun 2006 12:33:25 +0200
Subject: [Python-Dev] UUID module
In-Reply-To: <17545.19641.359024.818295@montanaro.dyndns.org>
References: <5.1.1.6.0.20060307103122.042c4ae8@mail.telecommunity.com>	<Pine.LNX.4.58.0606090221330.5223@server1.LFW.org>	<e6baf3$8fl$1@sea.gmane.org>
	<17545.19641.359024.818295@montanaro.dyndns.org>
Message-ID: <e6bipl$654$2@sea.gmane.org>

skip at pobox.com wrote:

>     Fredrik>  6.  Combine 2 and 3: require the user to pass in a MAC argument
>     Fredrik>      to uuid1, but provide a SlowGetMacAddress helper that wraps
>     Fredrik>      the existing code.
> 
> Or make the MAC address an optional arg to uuid1.  If given, use it.  If
> not, use the slow lookup (then cache the result).

the reason for making it a required argument is to make it clear that 
the code is using a "suboptimal" way to get at the MAC value.

explicit is better than implicit etc etc.

</F>


From ocean at m2.ccsnet.ne.jp  Fri Jun  9 12:16:45 2006
From: ocean at m2.ccsnet.ne.jp (H.Yamamoto)
Date: Fri, 9 Jun 2006 19:16:45 +0900
Subject: [Python-Dev] beta1 coming real soon
Message-ID: <005101c68bad$cfd9ac30$0400a8c0@whiterabc2znlh>

> If you plan to make a checkin adding a feature (even a simple one),
> you oughta let people know by responding to this message.  Please get
> the bug fixes in ASAP.  Remember to add tests!

Is there any chance to fix the mbcs bug? I think this is critical...
My patch is here: http://python.org/sf/1455898

# Maybe, no one is using this codec?


From mal at egenix.com  Fri Jun  9 12:44:34 2006
From: mal at egenix.com (M.-A. Lemburg)
Date: Fri, 09 Jun 2006 12:44:34 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <e6bifh$5bv$1@sea.gmane.org>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>	<17533.48558.364179.717911@montanaro.dyndns.org>	<447DC377.4000504@egenix.com>	<17533.51096.462552.451772@montanaro.dyndns.org>	<447DD153.8080202@egenix.com>	<e5kkr6$14a$1@sea.gmane.org>	<447DE055.4040105@egenix.com><e5oa7b$kvl$1@sea.gmane.org>	<447FF966.5050807@egenix.com>	<e5p06h$tp1$1@sea.gmane.org>	<448141F6.7030806@v.loewis.de>	<e5rjkj$rph$1@sea.gmane.org>	<44859C12.8080306@egenix.com>	<4485C152.5050705@egenix.com>	<e671tn$rhg$1@sea.gmane.org>	<448740F2.4020900@egenix.com>	<e67g2b$o33$1@sea.gmane.org>	<4487E1DF.3030302@egenix.com>	<e696es$5hs$1@sea.gmane.org>
	<4489401E.9040606@egenix.com> <e6bifh$5bv$1@sea.gmane.org>
Message-ID: <44895112.4040600@egenix.com>

Fredrik Lundh wrote:
> M.-A. Lemburg wrote:
> 
>> You can download a current snapshot from:
>>
>> http://www.egenix.com/files/python/pybench-2.0-2006-06-09.zip
> 
> believe it or not, but this hangs on my machine, under 2.5 trunk.  and 
> it hangs hard; neither control-c, break, nor the task manager manages to 
> kill it.

Weird.

> if it's any clue, it prints
> 
>> -------------------------------------------------------------------------------
>> PYBENCH 2.0
>> -------------------------------------------------------------------------------
>> * using Python 2.5a2
>> * disabled garbage collection
>> * system check interval set to maximum: 2147483647
>> * using timer: time.clock
> 
> and that's it; the process is just sitting there, using exactly 0% CPU.

This is the output to expect:

-------------------------------------------------------------------------------
PYBENCH 2.0
-------------------------------------------------------------------------------
* using Python 2.4.2
* disabled garbage collection
* system check interval set to maximum: 2147483647
* using timer: time.time

Calibrating tests. Please wait...

Running 10 round(s) of the suite at warp factor 10:

* Round 1 done in 6.627 seconds.
* Round 2 done in 7.307 seconds.
* Round 3 done in 7.180 seconds.
...

Note that the calibration step takes a while.

Looking at the code, the only place where it could
hang (because it's relying on a few external tools)
is when fetching the platform details:

def get_machine_details():

    import platform
    import sys  # sys.executable is used below
    buildno, builddate = platform.python_build()
    python = platform.python_version()
    if python > '2.0':
        try:
            unichr(100000)
        except ValueError:
            # UCS2 build (standard)
            unicode = 'UCS2'
        else:
            # UCS4 build (most recent Linux distros)
            unicode = 'UCS4'
    else:
        unicode = None
    bits, linkage = platform.architecture()
    return {
        'platform': platform.platform(),
        'processor': platform.processor(),
        'executable': sys.executable,
        'python': platform.python_version(),
        'compiler': platform.python_compiler(),
        'buildno': buildno,
        'builddate': builddate,
        'unicode': unicode,
        'bits': bits,
        }

It does run fine on my WinXP machine, both with the win32
package installed or not.

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 09 2006)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________
2006-07-03: EuroPython 2006, CERN, Switzerland              23 days left

::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! ::::

From walter at livinglogic.de  Fri Jun  9 12:56:42 2006
From: walter at livinglogic.de (=?ISO-8859-1?Q?Walter_D=F6rwald?=)
Date: Fri, 09 Jun 2006 12:56:42 +0200
Subject: [Python-Dev] beta1 coming real soon
In-Reply-To: <005101c68bad$cfd9ac30$0400a8c0@whiterabc2znlh>
References: <005101c68bad$cfd9ac30$0400a8c0@whiterabc2znlh>
Message-ID: <448953EA.9080006@livinglogic.de>

H.Yamamoto wrote:

>> If you plan to make a checkin adding a feature (even a simple one),
>> you oughta let people know by responding to this message.  Please get
>> the bug fixes in ASAP.  Remember to add tests!
> 
> Is there any chance to fix the mbcs bug? I think this is critical...
> My patch is here: http://python.org/sf/1455898

The best way to thoroughly test the patch is of course to check it in. ;)

I've tested the patch on Windows and there were no obvious bugs. Of
course to *really* test the patch a Windows installation with a
multibyte locale is required.

> # Maybe, no one is using this codec?

The audience is indeed limited.

Servus,
   Walter

From skip at pobox.com  Fri Jun  9 14:11:24 2006
From: skip at pobox.com (skip at pobox.com)
Date: Fri, 9 Jun 2006 07:11:24 -0500
Subject: [Python-Dev] UUID module
In-Reply-To: <e6bipl$654$2@sea.gmane.org>
References: <5.1.1.6.0.20060307103122.042c4ae8@mail.telecommunity.com>
	<Pine.LNX.4.58.0606090221330.5223@server1.LFW.org>
	<e6baf3$8fl$1@sea.gmane.org>
	<17545.19641.359024.818295@montanaro.dyndns.org>
	<e6bipl$654$2@sea.gmane.org>
Message-ID: <17545.25964.7890.354981@montanaro.dyndns.org>


    Fredrik> the reason for making it a required argument is to make it
    Fredrik> clear that the code is using a "suboptimal" way to get at the
    Fredrik> MAC value.

    Fredrik> explicit is better than implicit etc etc.

Can't we expect there will be a faster way to get at the MAC address at some
point in the future, maybe via a _uuid extension module that does all the
magic in C?  Or is there something inherently slow in discovering a
machine's MAC address (I realize such a task would probably be quite
platform-dependent)?

Skip


From skip at pobox.com  Fri Jun  9 14:54:36 2006
From: skip at pobox.com (skip at pobox.com)
Date: Fri, 9 Jun 2006 07:54:36 -0500
Subject: [Python-Dev] Subversion repository question - back up to older
	versions
In-Reply-To: <07ca01c68b97$137c65c0$3db72997@bagio>
References: <17544.29145.93728.612908@montanaro.dyndns.org>
	<1f7befae0606081226r43c88376lf09080ef8810c58d@mail.gmail.com>
	<17544.32945.408351.767417@montanaro.dyndns.org>
	<07ca01c68b97$137c65c0$3db72997@bagio>
Message-ID: <17545.28556.139183.195205@montanaro.dyndns.org>


    Giovanni> If you realize that each file/directory in Subversion is
    Giovanni> uniquely identified by a 2-space coordinate system [url,
    Giovanni> revision] ...

Thanks, I found this very helpful.  I found it so helpful that I added a
question to the dev faq with this as the answer.  Hope you don't mind...
It should show up on

    http://www.python.org/dev/faq/

as question 3.23 in a few minutes.

Skip


From aahz at pythoncraft.com  Fri Jun  9 15:10:03 2006
From: aahz at pythoncraft.com (Aahz)
Date: Fri, 9 Jun 2006 06:10:03 -0700
Subject: [Python-Dev] 2.5 issues need resolving in a few days
In-Reply-To: <ee2a432c0606082333o44b7465bk45b686ca83ccc802@mail.gmail.com>
References: <ee2a432c0606082333o44b7465bk45b686ca83ccc802@mail.gmail.com>
Message-ID: <20060609131003.GA17971@panix.com>

There was also discussion of a change to the way "quit" works in
interactive mode.  I see no record of it, so I guess that's not going in,
either.
-- 
Aahz (aahz at pythoncraft.com)           <*>         http://www.pythoncraft.com/

"I saw `cout' being shifted "Hello world" times to the left and stopped
right there."  --Steve Gonedes

From p.f.moore at gmail.com  Fri Jun  9 15:16:54 2006
From: p.f.moore at gmail.com (Paul Moore)
Date: Fri, 9 Jun 2006 14:16:54 +0100
Subject: [Python-Dev] 2.5 issues need resolving in a few days
In-Reply-To: <20060609131003.GA17971@panix.com>
References: <ee2a432c0606082333o44b7465bk45b686ca83ccc802@mail.gmail.com>
	<20060609131003.GA17971@panix.com>
Message-ID: <79990c6b0606090616t6cd8e1c1s5c07a8f77f030a7f@mail.gmail.com>

On 6/9/06, Aahz <aahz at pythoncraft.com> wrote:
> There was also discussion of a change to the way "quit" works in
> interactive mode.  I see no record of it, so I guess that's not going in,
> either.

It's already in 2.5a2, if I'm thinking of the same thing you are...
Paul.

From thomas at python.org  Fri Jun  9 15:26:36 2006
From: thomas at python.org (Thomas Wouters)
Date: Fri, 9 Jun 2006 15:26:36 +0200
Subject: [Python-Dev] UUID module
In-Reply-To: <e6baf3$8fl$1@sea.gmane.org>
References: <5.1.1.6.0.20060307103122.042c4ae8@mail.telecommunity.com>
	<Pine.LNX.4.58.0606090221330.5223@server1.LFW.org>
	<e6baf3$8fl$1@sea.gmane.org>
Message-ID: <9e804ac0606090626o20008650o93a211da5e6f6f64@mail.gmail.com>

On 6/9/06, Fredrik Lundh <fredrik at pythonware.com> wrote:

>        6.  Combine 2 and 3: require the user to pass in a MAC argument
>            to uuid1, but provide a SlowGetMacAddress helper that wraps
>            the existing code.


That sounds like the right thing to do, although I wouldn't call it "slow";
just let it be documented as 'might not always work and might be
inefficient', so there's room to make it the perfect and preferred way to
get a MAC address without having to rename it. (Not that I think that's a
reliable prospect, but hey ;)


-- 
Thomas Wouters <thomas at python.org>

Hi! I'm a .signature virus! copy me into your .signature file to help me
spread!
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.python.org/pipermail/python-dev/attachments/20060609/d29739fb/attachment.html 

From g.brandl at gmx.net  Fri Jun  9 15:33:48 2006
From: g.brandl at gmx.net (Georg Brandl)
Date: Fri, 09 Jun 2006 15:33:48 +0200
Subject: [Python-Dev] 2.5 issues need resolving in a few days
In-Reply-To: <79990c6b0606090616t6cd8e1c1s5c07a8f77f030a7f@mail.gmail.com>
References: <ee2a432c0606082333o44b7465bk45b686ca83ccc802@mail.gmail.com>	<20060609131003.GA17971@panix.com>
	<79990c6b0606090616t6cd8e1c1s5c07a8f77f030a7f@mail.gmail.com>
Message-ID: <e6btbs$efr$1@sea.gmane.org>

Paul Moore wrote:
> On 6/9/06, Aahz <aahz at pythoncraft.com> wrote:
>> There was also discussion of a change to the way "quit" works in
>> interactive mode.  I see no record of it, so I guess that's not going in,
>> either.
> 
> It's already in 2.5a2, if I'm thinking of the same thing you are...
> Paul.

It is, but it seems to disturb IDLE. That's no problem for me, but if someone
is using IDLE, he should look into it.

Georg


From ronaldoussoren at mac.com  Fri Jun  9 16:07:13 2006
From: ronaldoussoren at mac.com (Ronald Oussoren)
Date: Fri, 09 Jun 2006 16:07:13 +0200
Subject: [Python-Dev] 2.5 issues need resolving in a few days
In-Reply-To: <e6btbs$efr$1@sea.gmane.org>
References: <ee2a432c0606082333o44b7465bk45b686ca83ccc802@mail.gmail.com>
	<20060609131003.GA17971@panix.com>
	<79990c6b0606090616t6cd8e1c1s5c07a8f77f030a7f@mail.gmail.com>
	<e6btbs$efr$1@sea.gmane.org>
Message-ID: <8901577.1149862033151.JavaMail.ronaldoussoren@mac.com>

 
On Friday, June 09, 2006, at 03:34PM, Georg Brandl <g.brandl at gmx.net> wrote:

>Paul Moore wrote:
>> On 6/9/06, Aahz <aahz at pythoncraft.com> wrote:
>>> There was also discussion of a change to the way "quit" works in
>>> interactive mode.  I see no record of it, so I guess that's not going in,
>>> either.
>> 
>> It's already in 2.5a2, if I'm thinking of the same thing you are...
>> Paul.
>
>It is, but it seems to disturb IDLE. That's no problem for me, but if someone
>is using IDLE, he should look into it.

It doesn't disturb IDLE, but doesn't work in IDLE either. IDLE overrides the exit and quit builtins with different messages, hence IDLE users won't see the new behaviour.

Ronald


From kristjan at ccpgames.com  Fri Jun  9 16:15:58 2006
From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_V=2E_J=F3nsson?=)
Date: Fri, 9 Jun 2006 14:15:58 -0000
Subject: [Python-Dev] beta1 coming real soon
Message-ID: <129CEF95A523704B9D46959C922A280002A4C4E6@nemesis.central.ccp.cc>

Thanks for the reminder.
What I intend to add is to finalize the PCBuild8 directory, and fix CRT runtime error handling for VC8.
The change as proposed involves adding macros around a select few CRT calls (fopen, strftime, etc) where user supplied parameters are passed to the CRT innards.
Code would be added conditionally to errors.c and pyerrors.h, and macros defined in pyerrors.h.
Any objections?

Kristján


-----Original Message-----
From: python-dev-bounces+kristjan=ccpgames.com at python.org [mailto:python-dev-bounces+kristjan=ccpgames.com at python.org] On Behalf Of Neal Norwitz
Sent: 9. júní 2006 06:24
To: Python Dev
Subject: [Python-Dev] beta1 coming real soon

It's June 9 in most parts of the world.  The schedule calls for beta 1 on June 14.  That means there's less than 4 days until the expected code freeze.  Please don't rush to get everything in at the last minute.  The buildbots should remain green to keep Anthony happy and me sane (or is it the other way around).

If you plan to make a checkin adding a feature (even a simple one), you oughta let people know by responding to this message.  Please get the bug fixes in ASAP.  Remember to add tests!

I would really like it if someone on Windows could test the perf of patch http://python.org/sf/1479611 and see if it improves perf.  I will test again on Mac ppc, Linux x86, and Linux amd64 to verify gains

n
_______________________________________________
Python-Dev mailing list
Python-Dev at python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: http://mail.python.org/mailman/options/python-dev/kristjan%40ccpgames.com

From guido at python.org  Fri Jun  9 16:28:25 2006
From: guido at python.org (Guido van Rossum)
Date: Fri, 9 Jun 2006 07:28:25 -0700
Subject: [Python-Dev] 2.5 issues need resolving in a few days
In-Reply-To: <ee2a432c0606082333o44b7465bk45b686ca83ccc802@mail.gmail.com>
References: <ee2a432c0606082333o44b7465bk45b686ca83ccc802@mail.gmail.com>
Message-ID: <ca471dc20606090728o1690a78aj2c65ead6462b154e@mail.gmail.com>

On 6/8/06, Neal Norwitz <nnorwitz at gmail.com> wrote:
> The most important outstanding issue is the xmlplus/xmlcore issue.
> It's not going to get fixed unless someone works on it.  There's only
> a few days left before beta 1.  Can someone please address this?  If
> that means reverting changes to maintain compatibility, so be it.

Really? The old situation is really evil, and the new approach is at
least marginally better by giving users a way to migrate to a new
non-evil approach. What exactly is the backwards incompatibility you
speak of?

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From aahz at pythoncraft.com  Fri Jun  9 16:28:47 2006
From: aahz at pythoncraft.com (Aahz)
Date: Fri, 9 Jun 2006 07:28:47 -0700
Subject: [Python-Dev] 2.5 issues need resolving in a few days
In-Reply-To: <79990c6b0606090616t6cd8e1c1s5c07a8f77f030a7f@mail.gmail.com>
References: <ee2a432c0606082333o44b7465bk45b686ca83ccc802@mail.gmail.com>
	<20060609131003.GA17971@panix.com>
	<79990c6b0606090616t6cd8e1c1s5c07a8f77f030a7f@mail.gmail.com>
Message-ID: <20060609142847.GA25387@panix.com>

On Fri, Jun 09, 2006, Paul Moore wrote:
> On 6/9/06, Aahz <aahz at pythoncraft.com> wrote:
>>
>> There was also discussion of a change to the way "quit" works in
>> interactive mode.  I see no record of it, so I guess that's not going in,
>> either.
> 
> It's already in 2.5a2, if I'm thinking of the same thing you are...

Okay, I guess I mis-remembered what had been agreed to.  Should this go
into What's New?  This also disagrees with Misc/NEWS:

- Patch #1446372: quit and exit can now be called from the interactive
  interpreter to exit.

Here are my tests:

: python
Python 2.4 (#1, Jan 17 2005, 14:59:14)
[GCC 3.3.3 (NetBSD nb3 20040520)] on netbsd2
Type "help", "copyright", "credits" or "license" for more information.
>>> quit
'Use Ctrl-D (i.e. EOF) to exit.'

> ./python
Python 2.5a2 (trunk:46583, May 31 2006, 20:56:06)
[GCC 4.0.2 20050808 (prerelease) (Ubuntu 4.0.1-4ubuntu9)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> quit
Use quit() or Ctrl-D (i.e. EOF) to exit
-- 
Aahz (aahz at pythoncraft.com)           <*>         http://www.pythoncraft.com/

"I saw `cout' being shifted "Hello world" times to the left and stopped
right there."  --Steve Gonedes

From rasky at develer.com  Fri Jun  9 16:28:49 2006
From: rasky at develer.com (Giovanni Bajo)
Date: Fri, 9 Jun 2006 16:28:49 +0200
Subject: [Python-Dev] Subversion repository question - back up to older
	versions
References: <17544.29145.93728.612908@montanaro.dyndns.org>
	<1f7befae0606081226r43c88376lf09080ef8810c58d@mail.gmail.com>
	<17544.32945.408351.767417@montanaro.dyndns.org>
	<07ca01c68b97$137c65c0$3db72997@bagio>
	<17545.28556.139183.195205@montanaro.dyndns.org>
Message-ID: <01ed01c68bd1$06731970$bf03030a@trilan>

skip at pobox.com wrote:

>     Giovanni> If you realize that each file/directory in Subversion is
>     Giovanni> uniquely identified by a 2-space coordinate system [url,
>     Giovanni> revision] ...
>
> Thanks, I found this very helpful.  I found it so helpful that I
> added a question to the dev faq with this as the answer.  Hope you
> don't mind... It should show up on
>
>     http://www.python.org/dev/faq/
>
> as question 3.23 in a few minutes.

Sure, I'm glad to help. You may want to revise it a little since it wasn't
meant to be read out of context...
-- 
Giovanni Bajo


From thomas at python.org  Fri Jun  9 16:41:22 2006
From: thomas at python.org (Thomas Wouters)
Date: Fri, 9 Jun 2006 16:41:22 +0200
Subject: [Python-Dev] 2.5 issues need resolving in a few days
In-Reply-To: <20060609142847.GA25387@panix.com>
References: <ee2a432c0606082333o44b7465bk45b686ca83ccc802@mail.gmail.com>
	<20060609131003.GA17971@panix.com>
	<79990c6b0606090616t6cd8e1c1s5c07a8f77f030a7f@mail.gmail.com>
	<20060609142847.GA25387@panix.com>
Message-ID: <9e804ac0606090741o41366d05r6e67081bb4e6a39c@mail.gmail.com>

On 6/9/06, Aahz <aahz at pythoncraft.com> wrote:
>
> On Fri, Jun 09, 2006, Paul Moore wrote:
> > On 6/9/06, Aahz <aahz at pythoncraft.com> wrote:
> >>
> >> There was also discussion of a change to the way "quit" works in
> >> interactive mode.  I see no record of it, so I guess that's not going
> in,
> >> either.
> >
> > It's already in 2.5a2, if I'm thinking of the same thing you are...
>
> Okay, I guess I mis-remembered what had been agreed to.  Should this go
> into What's New?  This also disagrees with Misc/NEWS:
>
> - Patch #1446372: quit and exit can now be called from the interactive
>   interpreter to exit.
>
> Here are my tests:
>
> : python
> Python 2.4 (#1, Jan 17 2005, 14:59:14)
> [GCC 3.3.3 (NetBSD nb3 20040520)] on netbsd2
> Type "help", "copyright", "credits" or "license" for more information.
> >>> quit
> 'Use Ctrl-D (i.e. EOF) to exit.'
>
> > ./python
> Python 2.5a2 (trunk:46583, May 31 2006, 20:56:06)
> [GCC 4.0.2 20050808 (prerelease) (Ubuntu 4.0.1-4ubuntu9)] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
> >>> quit
> Use quit() or Ctrl-D (i.e. EOF) to exit


Note the magic word, "called":

 centurion:~/python/python/trunk > python2.4
Python 2.4.4c0 (#2, Jun  8 2006, 01:12:27)
[GCC 4.1.2 20060604 (prerelease) (Debian 4.1.1-2)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> quit()
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: 'str' object is not callable
>>>

centurion:~/python/python/trunk > ./python
Python 2.5a2 (trunk:46753, Jun  8 2006, 17:46:46)
[GCC 4.1.2 20060604 (prerelease) (Debian 4.1.1-2)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> quit()
centurion:~/python/python/trunk >



-- 
Thomas Wouters <thomas at python.org>

Hi! I'm a .signature virus! copy me into your .signature file to help me
spread!
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.python.org/pipermail/python-dev/attachments/20060609/e0477598/attachment-0001.html 

From amk at amk.ca  Fri Jun  9 16:55:03 2006
From: amk at amk.ca (A.M. Kuchling)
Date: Fri, 9 Jun 2006 10:55:03 -0400
Subject: [Python-Dev] 2.5 issues need resolving in a few days
In-Reply-To: <20060609142847.GA25387@panix.com>
References: <ee2a432c0606082333o44b7465bk45b686ca83ccc802@mail.gmail.com>
	<20060609131003.GA17971@panix.com>
	<79990c6b0606090616t6cd8e1c1s5c07a8f77f030a7f@mail.gmail.com>
	<20060609142847.GA25387@panix.com>
Message-ID: <20060609145503.GA28328@rogue.amk.ca>

On Fri, Jun 09, 2006 at 07:28:47AM -0700, Aahz wrote:
> Okay, I guess I mis-remembered what had been agreed to.  Should this go
> into What's New?

Already there:
<http://docs.python.org/dev/whatsnew/other-lang.html#SECTION0001310000000000000000>.

(Fred, is it possible to set the anchors used for *sub*sections?
\label{} sets the filename used for sections, which is great, but
subsection URLs are still annoyingly long.)

--amk


From fdrake at acm.org  Fri Jun  9 16:47:20 2006
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Fri, 9 Jun 2006 10:47:20 -0400
Subject: [Python-Dev] 2.5 issues need resolving in a few days
In-Reply-To: <ca471dc20606090728o1690a78aj2c65ead6462b154e@mail.gmail.com>
References: <ee2a432c0606082333o44b7465bk45b686ca83ccc802@mail.gmail.com>
	<ca471dc20606090728o1690a78aj2c65ead6462b154e@mail.gmail.com>
Message-ID: <200606091047.20917.fdrake@acm.org>

On Friday 09 June 2006 10:28, Guido van Rossum wrote:
 > Really? The old situation is really evil, and the new approach is at
 > least marginally better by giving users a way to migrate to a new
 > non-evil approach. What exactly is the backwards incompatibility you
 > speak of?

The "incompatibility" depends on your point of view for this one.  I don't 
think there is any for client code; you get the old behavior for the "xml" 
package, and predictable behavior for the "xmlcore" package.

Martin's objection is that the sources for the "xmlcore" package can no longer 
be shared with the PyXML project.  I understand that he wants to reduce the 
cost of maintaining two trees for what's essentially the same code.  I played 
with some ideas for making the tree more agnostic to where it "really" lives, 
but wasn't particularly successful.

When I was working on that, I found that the PyXML unit tests weren't passing.  
I didn't have time to pursue that, though.  On the whole, I'm unconvinced 
that there's value in continuing to worry about being able to copy the source 
tree between the two projects at this time.  There's almost no effort going 
into PyXML any more, as far as I can tell.  In that light, the maintenance 
cost seems irrelevant compared to not finally solving the fundamental problem 
of magic in the "xml" package import.

I must consider the problem a nice-to-solve issue rather than a particularly 
important issue.  All the benefit is for PyXML, and shouldn't really impact 
Python releases.


  -Fred

-- 
Fred L. Drake, Jr.   <fdrake at acm.org>

From pj at place.org  Fri Jun  9 16:56:41 2006
From: pj at place.org (Paul Jimenez)
Date: Fri, 09 Jun 2006 09:56:41 -0500
Subject: [Python-Dev] Some more comments re new uriparse module,
	patch 1462525
In-Reply-To: <200606090454.k594sKXT021925@chilled.skew.org> 
References: <200606090454.k594sKXT021925@chilled.skew.org>
Message-ID: <20060609145641.7F94E82C0@place.org>

On Thursday, Jun 8, 2006, Mike Brown writes:
>
>It appears that Paul uploaded a new version of his library on June 3:
>http://python.org/sf/1462525
>I'm unclear on the relationship between the two now. Are they both up for 
>consideration?

That version was in response to comments from JJ Lee.  Email also went to pydev
(archived at http://mail.python.org/pipermail/python-dev/2006-June/065583.html)
about it.

>One thing I forgot to mention in private email is that I'm concerned that the
>inclusion of URI reference resolution functionality has exceeded the scope of
>this 'urischemes' module that purports to be for 'extensible URI parsing'.  It
>is becoming a scheme-aware and general-purpose syntactic processing library
>for URIs, and should be characterized as such in its name as well as in its
>documentation. 

...which is why I called it 'uriparse'.

>Even without a new name and more accurately documented scope, people are going
>to see no reason not to add the rest of STD 66's functionality to it
>(percent-encoding, normalization for testing equivalence, syntax
>validation...). As you can see in Ft.Lib.Uri, the latter two are not at all
>hard to implement, especially if you use regular expressions. These all fall 
>under syntactic operations on URIs, just like reference-resolution.
>
>Percent-encoding gets very hairy with its API details due to application-level
>uses that don't jive with STD 66 (e.g. the fuzzy specs and convoluted history
>governing application/x-www-form-urlencoded), the nuances of character
>encoding and Python string types, and widely varying expectations of users.

...all of which I consider scope creep. If someone else wants to add
it, more power to them; I wrote this code to fix the deficiencies in
the existing urlparse library, not to be an all-singing all-dancing
STD 66 module. In fact, I don't care whether it's my code or someone
else's that goes into the library - I just want something better than
the existing urlparse library to go in, because the existing stuff has
been acknowledged as insufficient. I've even provided working code with
modifications to fix comments and criticism I've received. If you or
someone else want to extend what I've done to add features or other
functionality, that would be fine with me. If you want to rewrite the
entire thing in a different vein (as Nick Coghlan has done), be my guest.
I'm not married to my code or API or anything but getting an improved
library into the stdlib. To that end, if it's decided to go with my
code, I'll happily put in the work to bring it up to python community
standards. Additional functionality will have to come from someone else
though, as I'm not willing to try and scratch an itch I don't have - and
I've already got a day job.

  --pj



From jhylton at google.com  Fri Jun  9 16:53:19 2006
From: jhylton at google.com (Jeremy Hylton)
Date: Fri, 9 Jun 2006 10:53:19 -0400
Subject: [Python-Dev] 2.5 issues need resolving in a few days
In-Reply-To: <ee2a432c0606082333o44b7465bk45b686ca83ccc802@mail.gmail.com>
References: <ee2a432c0606082333o44b7465bk45b686ca83ccc802@mail.gmail.com>
Message-ID: <9f2b746a0606090753x10dd9a5bhb2d09800dc4236dc@mail.google.com>

I will be looking at the open AST issues today.

Jeremy

On 6/9/06, Neal Norwitz <nnorwitz at gmail.com> wrote:
> The most important outstanding issue is the xmlplus/xmlcore issue.
> It's not going to get fixed unless someone works on it.  There's only
> a few days left before beta 1.  Can someone please address this?  If
> that means reverting changes to maintain compatibility, so be it.
>
> There is still the missing documentation for ctypes and element tree.
> I know there's been progress on ctypes.  What are we going to do about
> ElementTree?  Are we going to have another undocumented module in the
> core or should we pull ET out (that would also fix the xml issue)?
>
> Finally, there are the AST regressions.
>
> If there are patches that address any of these issues, respond with a link here.
>
> I'm guessing that all the possible features are not going to make it into 2.5.
>
> n
>

From mwh at python.net  Fri Jun  9 17:30:18 2006
From: mwh at python.net (Michael Hudson)
Date: Fri, 09 Jun 2006 16:30:18 +0100
Subject: [Python-Dev] beta1 coming real soon
In-Reply-To: <ee2a432c0606082323t1ee013b6nd91771fb461acede@mail.gmail.com>
	(Neal Norwitz's message of "Thu, 8 Jun 2006 23:23:40 -0700")
References: <ee2a432c0606082323t1ee013b6nd91771fb461acede@mail.gmail.com>
Message-ID: <2mbqt2qrcl.fsf@starship.python.net>

"Neal Norwitz" <nnorwitz at gmail.com> writes:

> It's June 9 in most parts of the world.  The schedule calls for beta 1
> on June 14.  That means there's less than 4 days until the expected
> code freeze.  Please don't rush to get everything in at the last
> minute.  The buildbots should remain green to keep Anthony happy and
> me sane (or is it the other way around).

I think it must be that way around, it's not like Anthony is sane
now...

Cheers,
mwh

-- 
  > Or can I sweep that can of worms under the rug?
  Please shove them under the garage.
   -- Greg Ward and Guido van Rossum mix their metaphors on python-dev

From theller at python.net  Fri Jun  9 17:31:43 2006
From: theller at python.net (Thomas Heller)
Date: Fri, 09 Jun 2006 17:31:43 +0200
Subject: [Python-Dev] beta1 coming real soon
In-Reply-To: <ee2a432c0606082323t1ee013b6nd91771fb461acede@mail.gmail.com>
References: <ee2a432c0606082323t1ee013b6nd91771fb461acede@mail.gmail.com>
Message-ID: <4489945F.6080000@python.net>

Neal Norwitz wrote:
> It's June 9 in most parts of the world.  The schedule calls for beta 1
> on June 14.  That means there's less than 4 days until the expected
> code freeze.  Please don't rush to get everything in at the last
> minute.  The buildbots should remain green to keep Anthony happy and
> me sane (or is it the other way around).
> 
> If you plan to make a checkin adding a feature (even a simple one),
> you oughta let people know by responding to this message.  Please get
> the bug fixes in ASAP.  Remember to add tests!

Would it be possible to move the schedule for beta 1 a few days?

I'd like to merge ctypes CVS HEAD (with some fixes I still have to make)
into Python SVN, but unfortunately sourceforge CVS is down again ;-(,
and I'm running out of time.

Thanks,
Thomas


From theller at python.net  Fri Jun  9 17:34:30 2006
From: theller at python.net (Thomas Heller)
Date: Fri, 09 Jun 2006 17:34:30 +0200
Subject: [Python-Dev] beta1 coming real soon
In-Reply-To: <ee2a432c0606082323t1ee013b6nd91771fb461acede@mail.gmail.com>
References: <ee2a432c0606082323t1ee013b6nd91771fb461acede@mail.gmail.com>
Message-ID: <44899506.2070008@python.net>

Neal Norwitz wrote:
> It's June 9 in most parts of the world.  The schedule calls for beta 1
> on June 14.  That means there's less than 4 days until the expected
> code freeze.  Please don't rush to get everything in at the last
> minute.  The buildbots should remain green to keep Anthony happy and
> me sane (or is it the other way around).
> 
> If you plan to make a checkin adding a feature (even a simple one),
> you oughta let people know by responding to this message.  Please get
> the bug fixes in ASAP.  Remember to add tests!

The other question is about feature freeze on external libraries.
Is it strictly forbidden to add new features in ctypes during the
(Python) beta period?

Thomas


From noamraph at gmail.com  Fri Jun  9 17:53:00 2006
From: noamraph at gmail.com (Noam Raphael)
Date: Fri, 9 Jun 2006 18:53:00 +0300
Subject: [Python-Dev] Pre-PEP: Allow Empty Subscript List Without Parentheses
Message-ID: <b348a0850606090853n62a1d01fhfdef21463a06e37f@mail.gmail.com>

Hello,

Recently I discovered a small change to the Python grammar that
could help me a lot.

It's simply this: Currently, the expression "x[]" is a syntax error. I
suggest that it will be a valid syntax, and equivalent to "x[()]",
just as "x[a, b]" is equivalent to "x[(a, b)]" right now.

I discussed this in python-list, and Fredrik Lundh suggested that I
quickly write a pre-PEP if I want this to go into 2.5. Since I want
this, I wrote a pre-PEP.

It's available in the wiki, at
http://wiki.python.org/moin/EmptySubscriptListPEP and I also copied it
to this message.

I know that now is really close to 2.5b1, but I thought that perhaps
there was still a chance for this suggestion getting in, since:
 * It's a simple change and there's almost nothing to be decided
except whether to accept it or not.
 * It has a simple implementation (It was fairly easy for me to
implement it, and I know almost nothing about the AST).
 * It causes no backwards compatibility issues.

Ok, here's the pre-PEP. Please say what you think about it.

Have a good day,
Noam


PEP: XXX
Title: Allow Empty Subscript List Without Parentheses
Version: $Revision$
Last-Modified: $Date$
Author: Noam Raphael <spam.noam at gmail.com>
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 09-Jun-2006
Python-Version: 2.5?
Post-History: 30-Aug-2002

Abstract
========

This PEP suggests allowing the use of an empty subscript list, for
example ``x[]``, which is currently a syntax error. It is suggested
that in such a case, an empty tuple will be passed as an argument to
the __getitem__ and __setitem__ methods. This is consistent with the
current behaviour of passing a tuple with n elements to those methods
when a subscript list of length n is used, if it includes a comma.


Specification
=============

The Python grammar specifies that inside the square brackets trailing
an expression, a list of "subscripts", separated by commas, should be
given. If the list consists of a single subscript without a trailing
comma, a single object (an ellipsis, a slice or any other object) is
passed to the resulting __getitem__ or __setitem__ call. If the list
consists of many subscripts, or of a single subscript with a trailing
comma, a tuple is passed to the resulting __getitem__ or __setitem__
call, with an item for each subscript.

Here is the formal definition of the grammar:

::
    trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME
    subscriptlist: subscript (',' subscript)* [',']
    subscript: '.' '.' '.' | test | [test] ':' [test] [sliceop]
    sliceop: ':' [test]

This PEP suggests allowing an empty subscript list, with nothing
inside the square brackets. It will result in passing an empty tuple
to the resulting __getitem__ or __setitem__ call.

The change in the grammar is to make "subscriptlist" in the first
quoted line optional:

::
    trailer: '(' [arglist] ')' | '[' [subscriptlist] ']' | '.' NAME


Motivation
==========

This suggestion allows you to refer to zero-dimensional arrays elegantly. In
NumPy, you can have arrays with a different number of dimensions. In
order to refer to a value in a two-dimensional array, you write
``a[i, j]``. In order to refer to a value in a one-dimensional array,
you write ``a[i]``. You can also have a zero-dimensional array, which
holds a single value (a scalar). To refer to its value, you currently
need to write ``a[()]``, which is unexpected - the user may not even
know that when he writes ``a[i, j]`` he constructs a tuple, so he
won't guess the ``a[()]`` syntax. If the suggestion is accepted, the
user will be able to write ``a[]`` in order to refer to the value, as
expected. It will even work without changing the NumPy package at all!
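
For example, using NumPy (shown here only to illustrate the current
state of affairs; the exact scalar type returned may differ between
NumPy versions)::

    import numpy

    a = numpy.array(5)  # a zero-dimensional array holding a single value
    print(a.ndim)       # 0
    print(a[()])        # 5 -- currently the only way to index it
    # print(a[])        # what this PEP would make legal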

In the normal use of NumPy, you usually don't encounter
zero-dimensional arrays. However, the author of this PEP is designing
another library for managing multi-dimensional arrays of data. Its
purpose is similar to that of a spreadsheet - to analyze data and
preserve the relations between a source of a calculation and its
destination. In such an environment you may have many
multi-dimensional arrays - for example, the sales of several products
over several time periods. But you may also have several
zero-dimensional arrays, that is, single values - for example, the
income tax rate. It is desired that the access to the zero-dimensional
arrays will be consistent with the access to the multi-dimensional
arrays. Just using the name of the zero-dimensional array to obtain
its value isn't going to work - the array and the value it contains
have to be distinguished.


Rationale
=========

Passing an empty tuple to the __getitem__ or __setitem__ call was
chosen because it is consistent with passing a tuple of n elements
when a subscript list of n elements is used. Also, it will make NumPy
and similar packages work as expected for zero-dimensional arrays
without any changes.

Another hint for consistency: Currently, these equivalences hold:

::
    x[i, j, k]  <-->  x[(i, j, k)]
    x[i, j]     <-->  x[(i, j)]
    x[i, ]      <-->  x[(i, )]
    x[i]        <-->  x[(i)]

If this PEP is accepted, another equivalence will hold:

::
    x[]         <-->  x[()]


Backwards Compatibility
=======================

This change is fully backwards compatible, since it only assigns a
meaning to a previously illegal syntax.


Reference Implementation
========================

Available as SF Patch no. 1503556.
(and also in http://python.pastebin.com/768317 )

It passes the Python test suite, but currently doesn't provide
additional tests or documentation.


Copyright
=========

This document has been placed in the public domain.

From pje at telecommunity.com  Fri Jun  9 18:01:46 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Fri, 09 Jun 2006 12:01:46 -0400
Subject: [Python-Dev] [Web-SIG] wsgiref doc draft;
 reviews/patches  wanted
In-Reply-To: <3f1451f50606071156h9e612e3y602918973349a61f@mail.gmail.com
 >
References: <5.1.1.6.0.20060606184324.0200b360@mail.telecommunity.com>
	<5.1.1.6.0.20060606184324.0200b360@mail.telecommunity.com>
Message-ID: <5.1.1.6.0.20060609115435.00a04748@mail.telecommunity.com>

At 02:56 PM 6/7/2006 -0400, Joe Gregorio wrote:
>Phillip,
>
>1. It's not really clear from the abstract 'what' this library
>provides. You might want
>    to consider moving the text from 1.1 up to the same level as the abstract.

Done.


>2.  In section 1.1 you might want to consider dropping the sentence:
>"Only authors
>     of web servers and programming frameworks need to know every detail..."
>     It doesn't offer any concrete information and just indirectly
>      makes WSGI look complicated.

That bit was taken from AMK's draft; I'm going to trust his intuition here 
as to why he thought it was desirable to say this.


>3. From the abstract:  "Having a standard interface makes it easy to use a
>       WSGI-supporting application with a number of different web servers."
>
>      is a little awkward, how about:
>
>     "Having a standard interface makes it easy to use an application
>     that supports WSGI with a number of different web servers."

Done.


>4. I believe the order of submodules presented is important and think that
>    they should be listed with 'handlers' and 'simple_server' first:

I agree that the order is important, but I intentionally chose the current 
order to be a gentle slope of complexity, from the near-trivial functions 
on up to the server/handler framework last.  I'm not sure what ordering 
principle you're suggesting to use instead.


>5. You might consider moving 'headers' into 'util'. Of course, you could
>     go all the way in simplifying and move 'validate' in there too.

Not and maintain backward compatibility.  There is, after all, code in the 
field using these modules for about a year and a half now.


From ronaldoussoren at mac.com  Fri Jun  9 18:16:00 2006
From: ronaldoussoren at mac.com (Ronald Oussoren)
Date: Fri, 9 Jun 2006 18:16:00 +0200
Subject: [Python-Dev] beta1 coming real soon
In-Reply-To: <ee2a432c0606082323t1ee013b6nd91771fb461acede@mail.gmail.com>
References: <ee2a432c0606082323t1ee013b6nd91771fb461acede@mail.gmail.com>
Message-ID: <32DAAC49-D60C-4A81-8615-8BBBA7331DAE@mac.com>


On 9-jun-2006, at 8:23, Neal Norwitz wrote:

> It's June 9 in most parts of the world.  The schedule calls for beta 1
> on June 14.  That means there's less than 4 days until the expected
> code freeze.  Please don't rush to get everything in at the last
> minute.  The buildbots should remain green to keep Anthony happy and
> me sane (or is it the other way around).
>
> If you plan to make a checkin adding a feature (even a simple one),
> you oughta let people know by responding to this message.  Please get
> the bug fixes in ASAP.  Remember to add tests!

I want to checkin patch 1491759 "IDLE L&F on MacOSX". This makes the  
IDLE interface more palatable on OSX, although it still doesn't look  
very good due to issues with AquaTk. Unless someone tells me not to  
I'll check this in this weekend.

I also wouldn't mind if someone wants to review patch 1446489  
"zipfile: support for ZIP64". This adds support for the ZIP64  
extensions to zipfile which makes it possible to create huge  
zipfiles. I'm using this at work to manage zipfiles that are over  
6GByte in size and contain over 50K files each. The patch includes  
documentation updates and new tests ;-)
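
For reviewers, usage would look roughly like this (assuming the patch
exposes the ZIP64 extensions through an allowZip64 flag on ZipFile;
treat the exact spelling as an assumption):

import zipfile

zf = zipfile.ZipFile("huge.zip", "w", zipfile.ZIP_DEFLATED,
                     allowZip64=True)
try:
    # in real use the archive would grow past the old 2 GB / 64k-entry limits
    zf.writestr("member.txt", "payload")
finally:
    zf.close()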

Ronald

From ronaldoussoren at mac.com  Fri Jun  9 18:24:30 2006
From: ronaldoussoren at mac.com (Ronald Oussoren)
Date: Fri, 9 Jun 2006 18:24:30 +0200
Subject: [Python-Dev] beta1 coming real soon
In-Reply-To: <44899506.2070008@python.net>
References: <ee2a432c0606082323t1ee013b6nd91771fb461acede@mail.gmail.com>
	<44899506.2070008@python.net>
Message-ID: <83C75008-C9DC-4D0A-9D39-21347C36D0E2@mac.com>


On 9-jun-2006, at 17:34, Thomas Heller wrote:

>
>
> The other question is about feature freeze on external libraries.
> Is it strictly forbidden to add new features in ctypes during the
> (Python) beta period?

Now that you mention the feature freeze...

The tools that generate the Carbon bindings on OSX (that is most of  
Mac/Modules) currently process the Universal Headers, which are  
basically the OS9 SDK. I'm contemplating modifying them to use the  
system headers instead, which would make it possible to update the  
Carbon bindings and support API features that were introduced in OSX.  
It is however rather unlikely that I'll manage to finish this before  
beta1 is out.

How hard is the feature freeze? Would it be possible to update the  
Carbon bindings after beta1? Given Apple's focus on backward  
compatibility the update should only add new functionality, not  
remove existing functions/types.

Ronald

From guido at python.org  Fri Jun  9 18:44:59 2006
From: guido at python.org (Guido van Rossum)
Date: Fri, 9 Jun 2006 09:44:59 -0700
Subject: [Python-Dev] Pre-PEP: Allow Empty Subscript List Without
	Parentheses
In-Reply-To: <b348a0850606090853n62a1d01fhfdef21463a06e37f@mail.gmail.com>
References: <b348a0850606090853n62a1d01fhfdef21463a06e37f@mail.gmail.com>
Message-ID: <ca471dc20606090944p75aba6a9x8761d6872522d5b6@mail.gmail.com>

This is an elaborate joke, right?

On 6/9/06, Noam Raphael <noamraph at gmail.com> wrote:
> Hello,
>
> Recently I discovered a small change to the Python grammar that
> could help me a lot.
>
> It's simply this: Currently, the expression "x[]" is a syntax error. I
> suggest that it will be a valid syntax, and equivalent to "x[()]",
> just as "x[a, b]" is equivalent to "x[(a, b)]" right now.
>
> I discussed this in python-list, and Fredrik Lundh suggested that I
> quickly write a pre-PEP if I want this to go into 2.5. Since I want
> this, I wrote a pre-PEP.
>
> It's available in the wiki, at
> http://wiki.python.org/moin/EmptySubscriptListPEP and I also copied it
> to this message.
>
> I know that now is really close to 2.5b1, but I thought that perhaps
> there was still a chance for this suggestion getting in, since:
>  * It's a simple change and there's almost nothing to be decided
> except whether to accept it or not.
>  * It has a simple implementation (It was fairly easy for me to
> implement it, and I know almost nothing about the AST).
>  * It causes no backwards compatibility issues.
>
> Ok, here's the pre-PEP. Please say what you think about it.
>
> Have a good day,
> Noam
>
>
> PEP: XXX
> Title: Allow Empty Subscript List Without Parentheses
> Version: $Revision$
> Last-Modified: $Date$
> Author: Noam Raphael <spam.noam at gmail.com>
> Status: Draft
> Type: Standards Track
> Content-Type: text/x-rst
> Created: 09-Jun-2006
> Python-Version: 2.5?
> Post-History: 30-Aug-2002
>
> Abstract
> ========
>
> This PEP suggests allowing the use of an empty subscript list, for
> example ``x[]``, which is currently a syntax error. It is suggested
> that in such a case, an empty tuple will be passed as an argument to
> the __getitem__ and __setitem__ methods. This is consistent with the
> current behaviour of passing a tuple with n elements to those methods
> when a subscript list of length n is used, if it includes a comma.
>
>
> Specification
> =============
>
> The Python grammar specifies that inside the square brackets trailing
> an expression, a list of "subscripts", separated by commas, should be
> given. If the list consists of a single subscript without a trailing
> comma, a single object (an ellipsis, a slice or any other object) is
> passed to the resulting __getitem__ or __setitem__ call. If the list
> consists of many subscripts, or of a single subscript with a trailing
> comma, a tuple is passed to the resulting __getitem__ or __setitem__
> call, with an item for each subscript.
>
> Here is the formal definition of the grammar:
>
> ::
>     trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME
>     subscriptlist: subscript (',' subscript)* [',']
>     subscript: '.' '.' '.' | test | [test] ':' [test] [sliceop]
>     sliceop: ':' [test]
>
> This PEP suggests allowing an empty subscript list, with nothing
> inside the square brackets. It will result in passing an empty tuple
> to the resulting __getitem__ or __setitem__ call.
>
> The change in the grammar is to make "subscriptlist" in the first
> quoted line optional:
>
> ::
>     trailer: '(' [arglist] ')' | '[' [subscriptlist] ']' | '.' NAME
>
>
> Motivation
> ==========
>
> This suggestion allows you to refer to zero-dimensional arrays elegantly. In
> NumPy, you can have arrays with a different number of dimensions. In
> order to refer to a value in a two-dimensional array, you write
> ``a[i, j]``. In order to refer to a value in a one-dimensional array,
> you write ``a[i]``. You can also have a zero-dimensional array, which
> holds a single value (a scalar). To refer to its value, you currently
> need to write ``a[()]``, which is unexpected - the user may not even
> know that when he writes ``a[i, j]`` he constructs a tuple, so he
> won't guess the ``a[()]`` syntax. If the suggestion is accepted, the
> user will be able to write ``a[]`` in order to refer to the value, as
> expected. It will even work without changing the NumPy package at all!
>
> In the normal use of NumPy, you usually don't encounter
> zero-dimensional arrays. However, the author of this PEP is designing
> another library for managing multi-dimensional arrays of data. Its
> purpose is similar to that of a spreadsheet - to analyze data and
> preserve the relations between a source of a calculation and its
> destination. In such an environment you may have many
> multi-dimensional arrays - for example, the sales of several products
> over several time periods. But you may also have several
> zero-dimensional arrays, that is, single values - for example, the
> income tax rate. It is desired that the access to the zero-dimensional
> arrays will be consistent with the access to the multi-dimensional
> arrays. Just using the name of the zero-dimensional array to obtain
> its value isn't going to work - the array and the value it contains
> have to be distinguished.
>
>
> Rationale
> =========
>
> Passing an empty tuple to the __getitem__ or __setitem__ call was
> chosen because it is consistent with passing a tuple of n elements
> when a subscript list of n elements is used. Also, it will make NumPy
> and similar packages work as expected for zero-dimensional arrays
> without any changes.
>
> Another hint for consistency: Currently, these equivalences hold:
>
> ::
>     x[i, j, k]  <-->  x[(i, j, k)]
>     x[i, j]     <-->  x[(i, j)]
>     x[i, ]      <-->  x[(i, )]
>     x[i]        <-->  x[(i)]
>
> If this PEP is accepted, another equivalence will hold:
>
> ::
>     x[]         <-->  x[()]
>
>
> Backwards Compatibility
> =======================
>
> This change is fully backwards compatible, since it only assigns a
> meaning to a previously illegal syntax.
>
>
> Reference Implementation
> ========================
>
> Available as SF Patch no. 1503556.
> (and also in http://python.pastebin.com/768317 )
>
> It passes the Python test suite, but currently doesn't provide
> additional tests or documentation.
>
>
> Copyright
> =========
>
> This document has been placed in the public domain.
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org
>


-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From pje at telecommunity.com  Fri Jun  9 18:48:23 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Fri, 09 Jun 2006 12:48:23 -0400
Subject: [Python-Dev] FYI: wsgiref is now checked in
Message-ID: <5.1.1.6.0.20060609124432.02e7c348@mail.telecommunity.com>

The checked-in code substantially matches the public 0.1 release of 
wsgiref.  There are some minor changes to the docs and the test module, but 
these have also been made in the SVN trunk of wsgiref's home repository, so 
that future releases don't diverge too much.  The plan is to continue to 
maintain the standalone version and update the stdlib from it as 
appropriate, although I don't know of anything that would be changing any 
time soon.

The checkin includes a wsgiref.egg-info file, so if you have a program that 
uses setuptools to depend on wsgiref being installed, setuptools under 
Python 2.5 should detect that the stdlib already includes wsgiref.

Thanks for all the feedback and assistance and code contributions.


From brett at python.org  Fri Jun  9 18:54:29 2006
From: brett at python.org (Brett Cannon)
Date: Fri, 9 Jun 2006 09:54:29 -0700
Subject: [Python-Dev] beta1 coming real soon
In-Reply-To: <e6b69s$pqf$1@sea.gmane.org>
References: <ee2a432c0606082323t1ee013b6nd91771fb461acede@mail.gmail.com>
	<e6b69s$pqf$1@sea.gmane.org>
Message-ID: <bbaeab100606090954g175c85d1r3f641a71505ebc9e@mail.gmail.com>

On 6/9/06, Thomas Heller <theller at python.net> wrote:
>
> Neal Norwitz wrote:
> > It's June 9 in most parts of the world.  The schedule calls for beta 1
> > on June 14.  That means there's less than 4 days until the expected
> > code freeze.  Please don't rush to get everything in at the last
> > minute.  The buildbots should remain green to keep Anthony happy and
> > me sane (or is it the other way around).
>
> oops - I still had marked June 24 in my calendar!


Do enough people use Google Calendar or a calendar app that supports iCal
feeds that it would be useful for someone to maintain a Gcal calendar that
has the various Python dev related dates in it?

-Brett

> If you plan to make a checkin adding a feature (even a simple one),
> > you oughta let people know by responding to this message.  Please get
> > the bug fixes in ASAP.  Remember to add tests!
>
> I intend to merge in the current ctypes SF CVS, will try that today.
>
> Thomas
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/brett%40python.org
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.python.org/pipermail/python-dev/attachments/20060609/f48d5774/attachment.htm 

From barry at python.org  Fri Jun  9 18:57:33 2006
From: barry at python.org (Barry Warsaw)
Date: Fri, 9 Jun 2006 12:57:33 -0400
Subject: [Python-Dev] beta1 coming real soon
In-Reply-To: <bbaeab100606090954g175c85d1r3f641a71505ebc9e@mail.gmail.com>
References: <ee2a432c0606082323t1ee013b6nd91771fb461acede@mail.gmail.com>
	<e6b69s$pqf$1@sea.gmane.org>
	<bbaeab100606090954g175c85d1r3f641a71505ebc9e@mail.gmail.com>
Message-ID: <20060609125733.28747052@resist.wooz.org>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On Fri, 9 Jun 2006 09:54:29 -0700
"Brett Cannon" <brett at python.org> wrote:

> Do enough people use Google Calendar or a calendar app that supports
> iCal feeds that it would be useful for someone to maintain a Gcal
> calendar that has the various Python dev related dates in it?

Great idea!
- -Barry
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.2.2 (GNU/Linux)

iQCVAwUBRImofnEjvBPtnXfVAQL61wP/XLobIymHE5WsVsS+6Rrcy3mHPgnPgjlR
CYLLD0/71Qn5RXKTEvJ1ZWxLgzRSKeT2gzrp1T+bvcblksZcXBGYLXC2y5d0xo2W
WLRnFLeVUmA0X+t573EmvOoA+4flwSy7sm26ui6nM1PTMo+/j+AfGOAkIoxTheFu
0SrPIJVpue4=
=n7WJ
-----END PGP SIGNATURE-----

From guido at python.org  Fri Jun  9 19:05:39 2006
From: guido at python.org (Guido van Rossum)
Date: Fri, 9 Jun 2006 10:05:39 -0700
Subject: [Python-Dev] Pre-PEP: Allow Empty Subscript List Without
	Parentheses
In-Reply-To: <960F603A-04F4-4AE3-ABC4-A6ED58FBD893@nicko.org>
References: <b348a0850606090853n62a1d01fhfdef21463a06e37f@mail.gmail.com>
	<ca471dc20606090944p75aba6a9x8761d6872522d5b6@mail.gmail.com>
	<960F603A-04F4-4AE3-ABC4-A6ED58FBD893@nicko.org>
Message-ID: <ca471dc20606091005s2eaa754u49ad29d5560447b0@mail.gmail.com>

On 6/9/06, Nicko van Someren <nicko at nicko.org> wrote:
> On 9 Jun 2006, at 17:44, Guido van Rossum wrote:
>
> > This is an elaborate joke, right?
> >
> > On 6/9/06, Noam Raphael <noamraph at gmail.com> wrote:
> ...
> >> It's simply this: Currently, the expression "x[]" is a syntax
> >> error. I
> >> suggest that it will be a valid syntax, and equivalent to "x[()]",
> >> just as "x[a, b]" is equivalent to "x[(a, b)]" right now.
> ...
> >> Motivation
> >> ==========
> >>
> >> This suggestion allows you to refer to zero-dimensional arrays
> >> elegantly.
>
> I don't think that this suggestion is any less reasonable than the
> very existence of zero-dimensional arrays in the first place,
> although in my personal opinion that's a fairly low bar.

The language doesn't have zero-dimensional arrays, although it doesn't
prevent users from defining them. but why would one want to index a
zero-dimensional array, since it has no dimensions? It should be
written as x, not x[].

The need for () is pretty clear and can be explained to beginners
(zero-length arrays are not that unusual).

The need for x[] is not clear to beginners, and accepting this as
legal syntax just moves certain typos from compile-time to run-time
detection.

The timing is such that there's no way this can be added to 2.5 --
beta 1 is about to be released. It's true that new features can be
added until that release -- but that's for features that have been
agreed upon for a long time and just haven't gotten implemented yet --
not for brand new proposals.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From nicko at nicko.org  Fri Jun  9 19:01:03 2006
From: nicko at nicko.org (Nicko van Someren)
Date: Fri, 9 Jun 2006 18:01:03 +0100
Subject: [Python-Dev] Pre-PEP: Allow Empty Subscript List Without
	Parentheses
In-Reply-To: <ca471dc20606090944p75aba6a9x8761d6872522d5b6@mail.gmail.com>
References: <b348a0850606090853n62a1d01fhfdef21463a06e37f@mail.gmail.com>
	<ca471dc20606090944p75aba6a9x8761d6872522d5b6@mail.gmail.com>
Message-ID: <960F603A-04F4-4AE3-ABC4-A6ED58FBD893@nicko.org>

On 9 Jun 2006, at 17:44, Guido van Rossum wrote:

> This is an elaborate joke, right?
>
> On 6/9/06, Noam Raphael <noamraph at gmail.com> wrote:
...
>> It's simply this: Currently, the expression "x[]" is a syntax  
>> error. I
>> suggest that it will be a valid syntax, and equivalent to "x[()]",
>> just as "x[a, b]" is equivalent to "x[(a, b)]" right now.
...
>> Motivation
>> ==========
>>
>> This suggestion allows you to refer to zero-dimensional arrays  
>> elegantly.

I don't think that this suggestion is any less reasonable than the  
very existence of zero-dimensional arrays in the first place,  
although in my personal opinion that's a fairly low bar.

	Nicko



From aleaxit at gmail.com  Fri Jun  9 20:09:11 2006
From: aleaxit at gmail.com (Alex Martelli)
Date: Fri, 9 Jun 2006 11:09:11 -0700
Subject: [Python-Dev] Pre-PEP: Allow Empty Subscript List Without
	Parentheses
In-Reply-To: <ca471dc20606091005s2eaa754u49ad29d5560447b0@mail.gmail.com>
References: <b348a0850606090853n62a1d01fhfdef21463a06e37f@mail.gmail.com>
	<ca471dc20606090944p75aba6a9x8761d6872522d5b6@mail.gmail.com>
	<960F603A-04F4-4AE3-ABC4-A6ED58FBD893@nicko.org>
	<ca471dc20606091005s2eaa754u49ad29d5560447b0@mail.gmail.com>
Message-ID: <e8a0972d0606091109w2abd5e0bh89723ab2b36bb2ef@mail.gmail.com>

On 6/9/06, Guido van Rossum <guido at python.org> wrote:
   ...
> The language doesn't have zero-dimensional arrays, although it doesn't
> prevent users from defining them. But why would one want to index a
> zero-dimensional array, since it has no dimensions? It should be
> written as x, not x[].

Well, x=23 on one side, and  x[]=23 aka x[()]=23 on the other, have
drastically different semantics. Indexing refers to the contents of
the zero-dimensional container, rather than to a name to which the
container happens to be bound (but isn't any more, once one assigns to
that name rather than to an indexing thereof).
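
For concreteness, here's a tiny interactive sketch of that distinction
(numpy spelling; treat the output as illustrative, since reprs vary a
bit between versions):

 >>> import numpy
 >>> x = numpy.array(23)   # a zero-dimensional container
 >>> y = x                 # a second name bound to the same container
 >>> x[()] = 42            # mutates the contents; y sees the change
 >>> y[()]
42
 >>> x = 99                # merely rebinds the name x; the container is untouched
 >>> y[()]
42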

That being said, having to write x[()]=23 explicitly (rather than
x[]=23) wouldn't perturb me overmuch, personally -- so, I don't see a
need to rush this at the last minute into 2.5 beta (rather than
letting the idea ripen with a target of 2.6).


Alex

From tim.hochberg at ieee.org  Fri Jun  9 20:09:22 2006
From: tim.hochberg at ieee.org (Tim Hochberg)
Date: Fri, 09 Jun 2006 11:09:22 -0700
Subject: [Python-Dev] Pre-PEP: Allow Empty Subscript List Without
	Parentheses
In-Reply-To: <ca471dc20606091005s2eaa754u49ad29d5560447b0@mail.gmail.com>
References: <b348a0850606090853n62a1d01fhfdef21463a06e37f@mail.gmail.com>	<ca471dc20606090944p75aba6a9x8761d6872522d5b6@mail.gmail.com>	<960F603A-04F4-4AE3-ABC4-A6ED58FBD893@nicko.org>
	<ca471dc20606091005s2eaa754u49ad29d5560447b0@mail.gmail.com>
Message-ID: <e6cdkg$kci$1@sea.gmane.org>

Guido van Rossum wrote:
> On 6/9/06, Nicko van Someren <nicko at nicko.org> wrote:
[...]
> 
> The language doesn't have zero-dimensional arrays, although it doesn't
> prevent users from defining them. But why would one want to index a
> zero-dimensional array, since it has no dimensions? It should be
> written as x, not x[].

In Numpy, a 0-D array [for example, array(5)] is almost, but not quite, 
equivalent to a scalar [for example, 5]. The difference is that the 
former is mutable. Thus "a[()] = 3" will set the value of a 0-D array to 
3 and "a[()]" will extract the current, scalar value of a, for instance 
if you need a hashable object.
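
A minimal interactive sketch of that behaviour, illustrative only:

 >>> import numpy
 >>> a = numpy.array(5)    # a 0-D array
 >>> a[()] = 3             # mutable in place
 >>> a[()]                 # pulls out an ordinary, hashable scalar
3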

Whether that makes x[] desirable I won't venture an opinion. I don't see 
a lot of use of 0-D arrays in practice.

[...]


-tim


From martin at v.loewis.de  Fri Jun  9 20:13:13 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Fri, 09 Jun 2006 20:13:13 +0200
Subject: [Python-Dev] beta1 coming real soon
In-Reply-To: <44899506.2070008@python.net>
References: <ee2a432c0606082323t1ee013b6nd91771fb461acede@mail.gmail.com>
	<44899506.2070008@python.net>
Message-ID: <4489BA39.40400@v.loewis.de>

Thomas Heller wrote:
>> If you plan to make a checkin adding a feature (even a simple one),
>> you oughta let people know by responding to this message.  Please get
>> the bug fixes in ASAP.  Remember to add tests!
> 
> The other question is about feature freeze on external libraries.
> Is it strictly forbidden to add new features in ctypes during the
> (Python) beta period?

I think it strictly requires explicit permission of the release manager.
If many people want more time, we should move the release schedule.

OTOH, there will always be Python 2.6.

Regards,
Martin

From martin at v.loewis.de  Fri Jun  9 20:22:27 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Fri, 09 Jun 2006 20:22:27 +0200
Subject: [Python-Dev] beta1 coming real soon
In-Reply-To: <83C75008-C9DC-4D0A-9D39-21347C36D0E2@mac.com>
References: <ee2a432c0606082323t1ee013b6nd91771fb461acede@mail.gmail.com>	<44899506.2070008@python.net>
	<83C75008-C9DC-4D0A-9D39-21347C36D0E2@mac.com>
Message-ID: <4489BC63.4010904@v.loewis.de>

Ronald Oussoren wrote:
> How hard is the feature freeze? Would it be possible to update the  
> Carbon bindings after beta1? Given Apple's focus on backward  
> compatibility the update should only add new functionality, not  
> remove existing functions/types.

I'd say it's absolute wrt. to requiring permission from the release
manager.

The point of not allowing new features after a beta release is
that one wants to avoid getting untested new features into a release.
For that goal, it is not that relevant whether the new features
are guaranteed not to break any existing features - they still
don't get the testing that the beta releases try to achieve.

Regards,
Martin

From greg.ewing at canterbury.ac.nz  Sat Jun 10 01:48:22 2006
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Sat, 10 Jun 2006 11:48:22 +1200
Subject: [Python-Dev] Pre-PEP: Allow Empty Subscript List Without
 Parentheses
In-Reply-To: <b348a0850606090853n62a1d01fhfdef21463a06e37f@mail.gmail.com>
References: <b348a0850606090853n62a1d01fhfdef21463a06e37f@mail.gmail.com>
Message-ID: <448A08C6.7020106@canterbury.ac.nz>

Noam Raphael wrote:

> This PEP suggests to allow the use of an empty subscript list, for
> example ``x[]``, which is currently a syntax error. It is suggested
> that in such a case, an empty tuple will be passed as an argument to
> the __getitem__ and __setitem__ methods. This is consistent with the
> current behaviour of passing a tuple with n elements to those methods
> when a subscript list of length n is used, if it includes a comma.

It's not, really -- unless other places where a tuple
can appear were changed likewise, e.g.

   x =

would have to assign an empty tuple to x, etc.

> But you may also have several
> zero-dimensional arrays, that is, single values - for example, the
> income tax rate.

But *why* do these need to be 0-dimensional arrays rather
than just scalars? I'm having trouble seeing any use
for such a distinction.

I'm particularly unconvinced by the spreadsheet argument.
In that context, I'd say that everything is a 2-D array,
and a single cell is a 1x1 2-D array, not a 0-D array.

--
Greg

From greg.ewing at canterbury.ac.nz  Sat Jun 10 01:55:06 2006
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Sat, 10 Jun 2006 11:55:06 +1200
Subject: [Python-Dev] Pre-PEP: Allow Empty Subscript List Without
 Parentheses
In-Reply-To: <e8a0972d0606091109w2abd5e0bh89723ab2b36bb2ef@mail.gmail.com>
References: <b348a0850606090853n62a1d01fhfdef21463a06e37f@mail.gmail.com>
	<ca471dc20606090944p75aba6a9x8761d6872522d5b6@mail.gmail.com>
	<960F603A-04F4-4AE3-ABC4-A6ED58FBD893@nicko.org>
	<ca471dc20606091005s2eaa754u49ad29d5560447b0@mail.gmail.com>
	<e8a0972d0606091109w2abd5e0bh89723ab2b36bb2ef@mail.gmail.com>
Message-ID: <448A0A5A.9080801@canterbury.ac.nz>

Alex Martelli wrote:

> Well, x=23 on one side, and  x[]=23 aka x[()]=23 on the other, have
> drastically different semantics. Indexing refers to the contents of
> the zero-dimensional container, rather than to a name to which the
> container happens to be bound (but isn't any more, once one assigns to
> that name rather than to an indexing thereof).

It's not clear to me that a 0-D array should be regarded
as a container holding a single item, rather than just an
item on its own.

Think about how you get from an N dimensional array to
an N-1 dimensional array: you index it, e.g.

   A2 = [[1, 2], [3, 4]] # a 2D array

   A1 = A2[1] # a 1D array

   A0 = A1[1] # a 0D array???

   print A0

What do you think this will print?

--
Greg

From greg.ewing at canterbury.ac.nz  Sat Jun 10 02:02:30 2006
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Sat, 10 Jun 2006 12:02:30 +1200
Subject: [Python-Dev] Pre-PEP: Allow Empty Subscript List Without
 Parentheses
In-Reply-To: <e6cdkg$kci$1@sea.gmane.org>
References: <b348a0850606090853n62a1d01fhfdef21463a06e37f@mail.gmail.com>
	<ca471dc20606090944p75aba6a9x8761d6872522d5b6@mail.gmail.com>
	<960F603A-04F4-4AE3-ABC4-A6ED58FBD893@nicko.org>
	<ca471dc20606091005s2eaa754u49ad29d5560447b0@mail.gmail.com>
	<e6cdkg$kci$1@sea.gmane.org>
Message-ID: <448A0C16.9080301@canterbury.ac.nz>

Tim Hochberg wrote:

> In Numpy, a 0-D array [for example, array(5)] is almost, but not quite, 
> equivalent to a scalar [for example, 5]. The difference is that the 
> former is mutable.

Hmmm, I hadn't considered that. I suppose this is
something that arises from NumPy's "view" semantics
of indexing and slicing.

> Whether that makes x[] desirable I won't venture an opinion. I don't see 
> a lot of use of 0-D arrays in practice.

Actually, I *have* just thought of a use for it:

   def outer():
     x = array(0)
     def inner():
       x[] = 42

Bingo - write access to outer scopes!

Okay, I'm +0 on this now. But for that use, we'd need
a more convenient way of creating one than importing
NumPy and using array().
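
For comparison, the same trick already works today with any small
mutable container -- e.g. a one-element list -- at the cost of a
noisier spelling (just a sketch of the usual workaround):

   def outer():
       x = [0]              # one-element list as a mutable cell
       def inner():
           x[0] = 42        # mutates the cell; no rebinding of x needed
       inner()
       return x[0]

   print outer()            # prints 42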

--
Greg

From brett at python.org  Sat Jun 10 03:49:00 2006
From: brett at python.org (Brett Cannon)
Date: Fri, 9 Jun 2006 18:49:00 -0700
Subject: [Python-Dev] -Wi working for anyone else?
Message-ID: <bbaeab100606091849g3473a94fkbab1fe8df1014e98@mail.gmail.com>

I discovered last night that if you run ``./python.exe -Wi`` the interpreter
exits rather badly::

  Fatal Python error: PyThreadState_Get: no current thread

Anyone else seeing this error on any other platforms or have an inkling of
what checkin would cause this?

-Brett
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.python.org/pipermail/python-dev/attachments/20060609/2798626d/attachment.htm 

From tim.peters at gmail.com  Sat Jun 10 04:21:49 2006
From: tim.peters at gmail.com (Tim Peters)
Date: Fri, 9 Jun 2006 22:21:49 -0400
Subject: [Python-Dev] -Wi working for anyone else?
In-Reply-To: <bbaeab100606091849g3473a94fkbab1fe8df1014e98@mail.gmail.com>
References: <bbaeab100606091849g3473a94fkbab1fe8df1014e98@mail.gmail.com>
Message-ID: <1f7befae0606091921o525ed9b3p7d3a7a3f50ff7b9b@mail.gmail.com>

[Brett Cannon]
> I discovered last night that if you run ``./python.exe -Wi`` the interpreter
> exits rather badly::
>
>   Fatal Python error: PyThreadState_Get: no current thread
>
> Anyone else seeing this error on any other platforms or have an inkling of
> what checkin would cause this?

See comments on the bug report you opened:

    http://www.python.org/sf/1503294

From blais at furius.ca  Sat Jun 10 04:42:48 2006
From: blais at furius.ca (Martin Blais)
Date: Fri, 9 Jun 2006 22:42:48 -0400
Subject: [Python-Dev] Inject some tracing ...
Message-ID: <8393fff0606091942v6e23e8f0jbf820690aaa53cd1@mail.gmail.com>

Hello

I use the code below to inject a tracing function into my builtins that
prints out the file and line number where it occurs, and automatically
formats the objects it receives nicely.  This allows me to get traces
like this in my web server log files, or in my cmdline program
output:

[Fri Jun 09 22:44:15 2006]  (TRACE training.cgi:194)  ...

This way I immediately know where to go to remove the debugging
traces, and I don't have to import anything to use them (they are in
the builtins, no need to import sys; print >> sys.stderr, ...).

Do people think it would be a good idea to add this to the
builtins in the future?

(Implementation follows.)

=============================


#!/usr/bin/env python

"""
Inject some tracing builtins for debugging purposes.
"""

__version__ = "$Revision: 1781 $"
__author__ = "Martin Blais <blais at furius.ca>"

import sys, inspect, pprint
from os.path import basename


def trace(*args, **kwds):
    """
    Log the object to the 'outfile' file (keyword argument).  We also insert the
    file and line where this tracing statement was inserted.
    """
    # Get the output stream.
    outfile = kwds.pop('outfile', sys.stderr)

    # Get the parent file and line number.
    try:
        frame, filename, lineno, func, lines, idx = inspect.stack()[1]
        pfx = '(TRACE %s:%d) ' % (basename(filename), lineno)
    finally:
        del frame

    # Nicely format the stuff to be traced.
    pp = pprint.PrettyPrinter(indent=4, width=70, depth=1)
    msg = pfx + pp.pformat(args) + '\n'

    # Output.
    outfile.write(msg)
    outfile.flush()


# Inject into builtins for debugging.
import __builtin__
__builtin__.__dict__['trace'] = trace
__builtin__.__dict__['pprint'] = pprint.pprint
__builtin__.__dict__['pformat'] = pprint.pformat
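
Example use (a hypothetical script; it assumes the module above has
already been imported once at startup, so 'trace' is a builtin and
needs no import here):

def compute_total(prices):
    trace(prices)      # written to stderr, prefixed with "(TRACE <file>:<line>)"
    return sum(prices)

compute_total([1.50, 2.25])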

From aleaxit at gmail.com  Sat Jun 10 05:25:31 2006
From: aleaxit at gmail.com (Alex Martelli)
Date: Fri, 9 Jun 2006 20:25:31 -0700
Subject: [Python-Dev] Pre-PEP: Allow Empty Subscript List Without
	Parentheses
In-Reply-To: <448A0A5A.9080801@canterbury.ac.nz>
References: <b348a0850606090853n62a1d01fhfdef21463a06e37f@mail.gmail.com>
	<ca471dc20606090944p75aba6a9x8761d6872522d5b6@mail.gmail.com>
	<960F603A-04F4-4AE3-ABC4-A6ED58FBD893@nicko.org>
	<ca471dc20606091005s2eaa754u49ad29d5560447b0@mail.gmail.com>
	<e8a0972d0606091109w2abd5e0bh89723ab2b36bb2ef@mail.gmail.com>
	<448A0A5A.9080801@canterbury.ac.nz>
Message-ID: <31246864-73B4-4B29-BF07-F32D20B2BBBB@gmail.com>


On Jun 9, 2006, at 4:55 PM, Greg Ewing wrote:
    ...
> Think about how you get from an N dimensional array to
> an N-1 dimensional array: you index it, e.g.
>
>   A2 = [[1, 2], [3, 4]] # a 2D array
>
>   A1 = A2[1] # a 1D array
>
>   A0 = A1[1] # a 0D array???
>
>   print A0
>
> What do you think this will print?

Don't confuse arrays with lists...:

 >>> A2 = Numeric.array([[1, 2], [3, 4]], Numeric.Float32)
 >>> A1 = A2[1]
 >>> A0 = A1[1]
 >>> type(A0)
<type 'array'>
 >>>

It doesn't work the same if you specify Numeric.Float64 instead -- an  
ancient wart of Numeric, of course.  Still, Numeric and its  
descendants are "the" way in Python to get multi-dimensional arrays,  
since the stdlib's array module only supports one-dimensional ones,  
and lists are not arrays.


Alex


From ncoghlan at gmail.com  Sat Jun 10 05:41:41 2006
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sat, 10 Jun 2006 13:41:41 +1000
Subject: [Python-Dev] Pre-PEP: Allow Empty Subscript List Without
	Parentheses
In-Reply-To: <448A0C16.9080301@canterbury.ac.nz>
References: <b348a0850606090853n62a1d01fhfdef21463a06e37f@mail.gmail.com>	<ca471dc20606090944p75aba6a9x8761d6872522d5b6@mail.gmail.com>	<960F603A-04F4-4AE3-ABC4-A6ED58FBD893@nicko.org>	<ca471dc20606091005s2eaa754u49ad29d5560447b0@mail.gmail.com>	<e6cdkg$kci$1@sea.gmane.org>
	<448A0C16.9080301@canterbury.ac.nz>
Message-ID: <448A3F75.7090703@gmail.com>

Greg Ewing wrote:
> Tim Hochberg wrote:
> 
>> In Numpy, a 0-D array [for example, array(5)] is almost, but not quite, 
>> equivalent to  scalar [for example, 5]. The difference is that the 
>> former is mutable.
> 
> Hmmm, I hadn't considered that. I suppose this is
> something that arises from NumPy's "view" semantics
> of indexing and slicing.

I think it more comes from the n-dimensional array approach - 'n=0' is then a 
natural issue to consider. The Python core doesn't really get into that space, 
because it only really considers 1-dimensional sequences.

>> Whether that makes x[] desirable I won't venture an opinion. I don't see 
>> a lot of use of 0-D arrays in practice.
> 
> Actually, I *have* just thought of a use for it:
> 
>    def outer():
>      x = array(0)
>      def inner():
>        x[] = 42
> 
> Bingo - write access to outer scopes!
> 
> Okay, I'm +0 on this now. But for that use, we'd need
> a more convenient way of creating one than importing
> NumPy and using array().

Another 'initially -1, but +0 after reading the PEP & thread' here.

Also, with Travis's proposed dimarray type hopefully arriving in time for 
Python 2.6, I'd expect this to be as simple as doing 'from array import 
dimarray' at the top of the module, and then:

     def outer():
       x = dimarray(0)
       def inner():
         x[] = 42

My personal hopes for the dimarray type are that it will have an internal data 
representation that's compatible with numpy, but Python-level semantics that 
are closer to those of the rest of the Python core (such as providing 
copy-on-slice behaviour instead of numpy's view-on-slice).

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------
             http://www.boredomandlaziness.org

From tim.hochberg at ieee.org  Sat Jun 10 05:41:00 2006
From: tim.hochberg at ieee.org (Tim Hochberg)
Date: Fri, 09 Jun 2006 20:41:00 -0700
Subject: [Python-Dev] Pre-PEP: Allow Empty Subscript List Without
	Parentheses
In-Reply-To: <31246864-73B4-4B29-BF07-F32D20B2BBBB@gmail.com>
References: <b348a0850606090853n62a1d01fhfdef21463a06e37f@mail.gmail.com>	<ca471dc20606090944p75aba6a9x8761d6872522d5b6@mail.gmail.com>	<960F603A-04F4-4AE3-ABC4-A6ED58FBD893@nicko.org>	<ca471dc20606091005s2eaa754u49ad29d5560447b0@mail.gmail.com>	<e8a0972d0606091109w2abd5e0bh89723ab2b36bb2ef@mail.gmail.com>	<448A0A5A.9080801@canterbury.ac.nz>
	<31246864-73B4-4B29-BF07-F32D20B2BBBB@gmail.com>
Message-ID: <e6df4b$oup$1@sea.gmane.org>

Alex Martelli wrote:
> On Jun 9, 2006, at 4:55 PM, Greg Ewing wrote:
>     ...
> 
>>Think about how you get from an N dimensional array to
>>an N-1 dimensional array: you index it, e.g.
>>
>>  A2 = [[1, 2], [3, 4]] # a 2D array
>>
>>  A1 = A2[1] # a 1D array
>>
>>  A0 = A1[1] # a 0D array???
>>
>>  print A0
>>
>>What do you think this will print?
> 
> 
> Don't confuse arrays with lists...:
> 
>  >>> A2 = Numeric.array([[1, 2], [3, 4]], Numeric.Float32)
>  >>> A1 = A2[1]
>  >>> A0 = A1[1]
>  >>> type(A0)
> <type 'array'>
>  >>>
> 
> It doesn't work the same if you specify Numeric.Float64 instead -- an  
> ancient wart of Numeric, of course.  Still, Numeric and its  
> descendants are "the" way in Python to get multi-dimensional arrays,  
> since the stdlib's array module only supports one-dimensional ones,  
> and lists are not arrays.


Note that this wart has been pretty much killed in numpy by supplying a 
full complement of scalar types:

 >>> import numpy
 >>> A2 = numpy.array([[1,2], [3,4]], numpy.float32)
 >>> A1 = A2[1]
 >>> A0 = A1[1]
 >>> A0
4.0
 >>> type(A0)
<type 'float32scalar'>

The same exercise with float64 will give you a float64 scalar. The 
behaviour in this area is overall much more consistent now. You can 
still get a 0-D array by doing array(4.0) and possibly a few other ways, 
but they're much less common. These scalar objects are immutable, but have 
all (or at least most) of the array methods and attributes. For example:

 >>> A0.dtype
dtype('<f4')

dtype is more or less equivalent to Numeric's typecode().


-tim


From ocean at m2.ccsnet.ne.jp  Sat Jun 10 05:46:53 2006
From: ocean at m2.ccsnet.ne.jp (H.Yamamoto)
Date: Sat, 10 Jun 2006 12:46:53 +0900
Subject: [Python-Dev] beta1 coming real soon
References: <005101c68bad$cfd9ac30$0400a8c0@whiterabc2znlh>
	<448953EA.9080006@livinglogic.de>
Message-ID: <003f01c68c40$8360c2b0$0400a8c0@whiterabc2znlh>

----- Original Message ----- 
From: "Walter D?rwald" <walter at livinglogic.de>
To: "H.Yamamoto" <ocean at m2.ccsnet.ne.jp>
Cc: "python-dev" <python-dev at python.org>
Sent: Friday, June 09, 2006 7:56 PM
Subject: Re: [Python-Dev] beta1 coming real soon


> The best way to thoroughly test the patch is of course to check it in. ;)

Is it too risky? ;)

> I've tested the patch on Windows and there were no obvious bugs. Of
> course to *really* test the patch a Windows installation with a
> multibyte locale is required.
>
> > # Maybe, no one is using this codec?
>
> The audience is indeed limited.

Yes, I agree. And the audience who has "64bit" Windows with multibyte locale
should be much more limited...



From greg.ewing at canterbury.ac.nz  Sat Jun 10 08:15:19 2006
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Sat, 10 Jun 2006 18:15:19 +1200
Subject: [Python-Dev] Pre-PEP: Allow Empty Subscript List Without
 Parentheses
In-Reply-To: <448A3F75.7090703@gmail.com>
References: <b348a0850606090853n62a1d01fhfdef21463a06e37f@mail.gmail.com>
	<ca471dc20606090944p75aba6a9x8761d6872522d5b6@mail.gmail.com>
	<960F603A-04F4-4AE3-ABC4-A6ED58FBD893@nicko.org>
	<ca471dc20606091005s2eaa754u49ad29d5560447b0@mail.gmail.com>
	<e6cdkg$kci$1@sea.gmane.org> <448A0C16.9080301@canterbury.ac.nz>
	<448A3F75.7090703@gmail.com>
Message-ID: <448A6377.8040902@canterbury.ac.nz>

Nick Coghlan wrote:

> I think it more comes from the n-dimensional array approach - 'n=0' is 
> then a natural issue to consider.

But only if it makes sense. I still think there are some
severe conceptual difficulties with 0D arrays. One is
the question of how many items it contains. With 1 or
more dimensions, you can talk about its size along any
chosen dimension. But with 0 dimensions there's no size
to measure. Who's to say a 0D array has a size of 1, then?
Part of my brain keeps saying it should be 0 -- i.e.
it contains nothing at all!

Also, what kind of thing does a[] yield? Do we finally,
at last, get an actual scalar, or do we get a -1 dimensional
array? :-)

I'm having trouble seeing a real use for a 0D array as
something distinct from a scalar, as opposed to them
just being an oddity that happens to arise as a side
effect of the way Numeric/Numpy are implemented.

--
Greg

From robert.kern at gmail.com  Sat Jun 10 08:22:15 2006
From: robert.kern at gmail.com (Robert Kern)
Date: Sat, 10 Jun 2006 01:22:15 -0500
Subject: [Python-Dev] Pre-PEP: Allow Empty Subscript List Without
	Parentheses
In-Reply-To: <448A6377.8040902@canterbury.ac.nz>
References: <b348a0850606090853n62a1d01fhfdef21463a06e37f@mail.gmail.com>	<ca471dc20606090944p75aba6a9x8761d6872522d5b6@mail.gmail.com>	<960F603A-04F4-4AE3-ABC4-A6ED58FBD893@nicko.org>	<ca471dc20606091005s2eaa754u49ad29d5560447b0@mail.gmail.com>	<e6cdkg$kci$1@sea.gmane.org>
	<448A0C16.9080301@canterbury.ac.nz>	<448A3F75.7090703@gmail.com>
	<448A6377.8040902@canterbury.ac.nz>
Message-ID: <e6doek$a1k$1@sea.gmane.org>

Greg Ewing wrote:

> I'm having trouble seeing a real use for a 0D array as
> something distinct from a scalar, as opposed to them
> just being an oddity that happens to arise as a side
> effect of the way Numeric/Numpy are implemented.

This has been rehashed over and over again on numpy-discussion. The upshot is
that numpy no longer creates rank-zero arrays unless the user really asks for
one. The remaining use cases are documented here:

  http://projects.scipy.org/scipy/numpy/wiki/ZeroRankArray

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth."
  -- Umberto Eco


From mike at skew.org  Sat Jun 10 09:00:39 2006
From: mike at skew.org (Mike Brown)
Date: Sat, 10 Jun 2006 01:00:39 -0600 (MDT)
Subject: [Python-Dev] UUID module
In-Reply-To: <e6baf3$8fl$1@sea.gmane.org>
Message-ID: <200606100700.k5A70dTb053742@chilled.skew.org>

Fredrik Lundh wrote:
> Ka-Ping Yee wrote:
> 
> > Quite a few people have expressed interest in having UUID
> > functionality in the standard library, and previously on this
> > list some suggested possibly using the uuid.py module i wrote:
> > 
> >     http://zesty.ca/python/uuid.py
> 
> +1!

+1 as well.

I have a couple of suggestions for improving that implementation:

1. You're currently using os.urandom, which can raise a NotImplementedError. 
You should be prepared to fall back on a different PRNG... which leads to the
2nd suggestion:

2. random.randrange is a method on a default random.Random instance that,
although seeded by urandom (if available), may not be the user's preferred
PRNG.  I recommend making it possible for the user to supply their own
random.Random instance for use by the module.
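
Roughly something like this, say -- the helper name is made up, and this
is only a sketch of the fallback-plus-pluggable-PRNG idea:

    import os, random

    def _random_bytes(n, rng=random):
        # Prefer os.urandom; fall back on the supplied random.Random-style PRNG.
        try:
            return os.urandom(n)
        except (AttributeError, NotImplementedError):
            return ''.join(chr(rng.randrange(256)) for i in range(n))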

That's all. :)

From python-dev at zesty.ca  Sat Jun 10 09:12:54 2006
From: python-dev at zesty.ca (Ka-Ping Yee)
Date: Sat, 10 Jun 2006 02:12:54 -0500 (CDT)
Subject: [Python-Dev] Pre-PEP: Allow Empty Subscript List Without
 Parentheses
In-Reply-To: <448A6377.8040902@canterbury.ac.nz>
References: <b348a0850606090853n62a1d01fhfdef21463a06e37f@mail.gmail.com>
	<ca471dc20606090944p75aba6a9x8761d6872522d5b6@mail.gmail.com>
	<960F603A-04F4-4AE3-ABC4-A6ED58FBD893@nicko.org>
	<ca471dc20606091005s2eaa754u49ad29d5560447b0@mail.gmail.com>
	<e6cdkg$kci$1@sea.gmane.org> <448A0C16.9080301@canterbury.ac.nz>
	<448A3F75.7090703@gmail.com> <448A6377.8040902@canterbury.ac.nz>
Message-ID: <Pine.LNX.4.58.0606100206550.5223@server1.LFW.org>

On Sat, 10 Jun 2006, Greg Ewing wrote:
> I'm having trouble seeing a real use for a 0D array as
> something distinct from a scalar, as opposed to them
> just being an oddity that happens to arise as a side
> effect of the way Numeric/Numpy are implemented.

I think the whole discussion about the concept and meaning of
zero-dimensional arrays is mostly irrelevant to the original
issue.  The original issue is a *syntax* question: should
x[()] be written as x[]?

And from a syntax perspective, it's a bad idea.  x[] is much
more often a typo than an intentional attempt to index a
zero-dimensional array.  The compiler should continue to treat
it as a syntax error.


-- ?!ng

From fredrik at pythonware.com  Sat Jun 10 09:35:21 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Sat, 10 Jun 2006 09:35:21 +0200
Subject: [Python-Dev] Pre-PEP: Allow Empty Subscript List Without
	Parentheses
In-Reply-To: <Pine.LNX.4.58.0606100206550.5223@server1.LFW.org>
References: <b348a0850606090853n62a1d01fhfdef21463a06e37f@mail.gmail.com>	<ca471dc20606090944p75aba6a9x8761d6872522d5b6@mail.gmail.com>	<960F603A-04F4-4AE3-ABC4-A6ED58FBD893@nicko.org>	<ca471dc20606091005s2eaa754u49ad29d5560447b0@mail.gmail.com>	<e6cdkg$kci$1@sea.gmane.org>
	<448A0C16.9080301@canterbury.ac.nz>	<448A3F75.7090703@gmail.com>
	<448A6377.8040902@canterbury.ac.nz>
	<Pine.LNX.4.58.0606100206550.5223@server1.LFW.org>
Message-ID: <e6dsno$i7m$1@sea.gmane.org>

Ka-Ping Yee wrote:

> And from a syntax perspective, it's a bad idea.  x[] is much
> more often a typo than an intentional attempt to index a
> zero-dimensional array.

but how often is it a typo?

for example, judging from c.l.python traffic, forgetting to add a return 
statement is quite common, but I haven't seen anyone arguing that we 
deprecate the implied "return None" behaviour.

</F>


From greg.ewing at canterbury.ac.nz  Sat Jun 10 12:57:42 2006
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Sat, 10 Jun 2006 22:57:42 +1200
Subject: [Python-Dev] Pre-PEP: Allow Empty Subscript List Without
 Parentheses
In-Reply-To: <Pine.LNX.4.58.0606100206550.5223@server1.LFW.org>
References: <b348a0850606090853n62a1d01fhfdef21463a06e37f@mail.gmail.com>
	<ca471dc20606090944p75aba6a9x8761d6872522d5b6@mail.gmail.com>
	<960F603A-04F4-4AE3-ABC4-A6ED58FBD893@nicko.org>
	<ca471dc20606091005s2eaa754u49ad29d5560447b0@mail.gmail.com>
	<e6cdkg$kci$1@sea.gmane.org> <448A0C16.9080301@canterbury.ac.nz>
	<448A3F75.7090703@gmail.com> <448A6377.8040902@canterbury.ac.nz>
	<Pine.LNX.4.58.0606100206550.5223@server1.LFW.org>
Message-ID: <448AA5A6.6090803@canterbury.ac.nz>

Ka-Ping Yee wrote:

> I think the whole discussion about the concept and meaning of
> zero-dimensional arrays is mostly irrelevant to the original
> issue.  The original issue is a *syntax* question: should
> x[()] be written as x[]?

But, at least as presented in the PEP, it's a
syntax that was motivated by a perceived need
for dealing with 0D arrays. So it seems relevant
to ask whether 0D arrays are really needed or not.

Does anyone have any other use case for this
syntax?

--
Greg

From andymac at bullseye.apana.org.au  Sat Jun 10 12:15:42 2006
From: andymac at bullseye.apana.org.au (Andrew MacIntyre)
Date: Sat, 10 Jun 2006 21:15:42 +1100
Subject: [Python-Dev] beta1 coming real soon
In-Reply-To: <ee2a432c0606082323t1ee013b6nd91771fb461acede@mail.gmail.com>
References: <ee2a432c0606082323t1ee013b6nd91771fb461acede@mail.gmail.com>
Message-ID: <448A9BCD.4080302@bullseye.apana.org.au>

Neal Norwitz wrote:
> It's June 9 in most parts of the world.  The schedule calls for beta 1
> on June 14.  That means there's less than 4 days until the expected
> code freeze.  Please don't rush to get everything in at the last
> minute.  The buildbots should remain green to keep Anthony happy and
> me sane (or is it the other way around).
> 
> If you plan to make a checkin adding a feature (even a simple one),
> you oughta let people know by responding to this message.  Please get
> the bug fixes in ASAP.  Remember to add tests!

I still harbour hopes of sorting SF1454481 out in time; if I have it 
sorted out by 1200UTC (2200AEST) on June 13, I'll merge it from the
branch I created for it.  "sorted out" includes addressing the issues
Tim, Skip & yourself raised.

Regards,
Andrew.

-- 
-------------------------------------------------------------------------
Andrew I MacIntyre                     "These thoughts are mine alone..."
E-mail: andymac at bullseye.apana.org.au  (pref) | Snail: PO Box 370
        andymac at pcug.org.au             (alt) |        Belconnen ACT 2616
Web:    http://www.andymac.org/               |        Australia

From python-dev at zesty.ca  Sat Jun 10 15:22:35 2006
From: python-dev at zesty.ca (Ka-Ping Yee)
Date: Sat, 10 Jun 2006 08:22:35 -0500 (CDT)
Subject: [Python-Dev] UUID module
In-Reply-To: <9e804ac0606090626o20008650o93a211da5e6f6f64@mail.gmail.com>
References: <5.1.1.6.0.20060307103122.042c4ae8@mail.telecommunity.com>
	<Pine.LNX.4.58.0606090221330.5223@server1.LFW.org>
	<e6baf3$8fl$1@sea.gmane.org>
	<9e804ac0606090626o20008650o93a211da5e6f6f64@mail.gmail.com>
Message-ID: <Pine.LNX.4.58.0606100804170.5223@server1.LFW.org>

Okay.  I have done as Fredrik suggested:
>      6.  Combine 2 and 3: require the user to pass in a MAC argument
>          to uuid1, but provide a SlowGetMacAddress helper that wraps
>          the existing code.

I agree with Thomas Wouters:
> That sounds like the right thing to do, although I wouldn't call it
> "slow"; just let it be documented as 'might not always work and
> might be inefficient',

The method for getting the node ID is called getnode() and is
documented as possibly slow.  If a hardware address cannot be
obtained, we use a random 48-bit number with the highest bit
set to 1, as recommended by RFC 4122.

I have done as Skip proposed here:
> Or make the MAC address an optional arg to uuid1.  If given, use it.
> If not, use the slow lookup (then cache the result).

I've made the address an optional argument, not a required one,
because i think "Explicit is better than implicit" applies to changes
in output.  It makes sense to require an explicit argument if it's
actually going to produce a different result, but i don't think it
is necessary to require an explicit argument just because the
routine might be a bit slow.
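
So, assuming the keyword is spelled 'node' as in the posted module, both
of these calling styles work (the address below is just an example value):

    import uuid

    u1 = uuid.uuid1()                     # node found via getnode(), possibly slowly
    u2 = uuid.uuid1(node=0x0011d8123456)  # caller supplies the 48-bit address; no lookup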

Letting the address be an optional argument means that in future,
we can change the implementation of getnode() to make it faster or
more reliable, and users of the module will benefit without having
to change any of their code.

Finally, Phillip brought up PEAK:
> PEAK's uuid module does this such that if win32all is present,
> you get a Windows GUID, or if you have a FreeBSD 5+ or
> NetBSD 2+ kernel you use the local platform uuidgen API.  See e.g.:

...so i looked at PEAK's getnodeid48() routine and borrowed the
Win32 calls from there, with a comment giving attribution to PEAK.

Oh, and there is now a test suite that should cover all the code paths.

This is all posted at http://zesty.ca/python/uuid.py now,
documentation page at http://zesty.ca/python/uuid.html,
tests at http://zesty.ca/python/test_uuid.py .

I'll sleep now (6 am), commit tomorrow unless there are objections.

Thanks for your input, everyone!


-- ?!ng

From python-dev at zesty.ca  Sat Jun 10 15:27:08 2006
From: python-dev at zesty.ca (Ka-Ping Yee)
Date: Sat, 10 Jun 2006 08:27:08 -0500 (CDT)
Subject: [Python-Dev] UUID module
In-Reply-To: <200606100700.k5A70dTb053742@chilled.skew.org>
References: <200606100700.k5A70dTb053742@chilled.skew.org>
Message-ID: <Pine.LNX.4.58.0606100822420.5223@server1.LFW.org>

On Sat, 10 Jun 2006, Mike Brown wrote:
> I have a couple of suggestions for improving that implementation:
>
> 1. You're currently using os.urandom, which can raise a NotImplementedError.
> You should be prepared to fall back on a different PRNG...

The latest version (http://zesty.ca/python/uuid.py) does this.

> 2. random.randrange is a method on a default random.Random instance that,
> although seeded by urandom (if available), may not be the user's preferred
> PRNG.  I recommend making it possible for the user to supply their own
> random.Random instance for use by the module.

I decided not to add more code to do this, because the UUID
constructor is now designed in such a way that it's very simple
to convert your own randomness into a UUID.  If you want to use
another source of randomness, you'd just get 16 random bytes and
then call UUID(bytes=random_stuff, version=4).

That seems simpler to me than adding an extra argument and
requiring a randrange() method on it.
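
I.e., roughly this, with os.urandom standing in for whatever randomness
source you prefer:

    import os, uuid

    random_stuff = os.urandom(16)
    u = uuid.UUID(bytes=random_stuff, version=4)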


-- ?!ng

From python-dev at zesty.ca  Sat Jun 10 15:29:33 2006
From: python-dev at zesty.ca (Ka-Ping Yee)
Date: Sat, 10 Jun 2006 08:29:33 -0500 (CDT)
Subject: [Python-Dev] beta1 coming real soon
In-Reply-To: <ee2a432c0606082323t1ee013b6nd91771fb461acede@mail.gmail.com>
References: <ee2a432c0606082323t1ee013b6nd91771fb461acede@mail.gmail.com>
Message-ID: <Pine.LNX.4.58.0606100828400.5223@server1.LFW.org>

On Thu, 8 Jun 2006, Neal Norwitz wrote:
> If you plan to make a checkin adding a feature (even a simple one),
> you oughta let people know by responding to this message.  Please get
> the bug fixes in ASAP.  Remember to add tests!

Just to make this appear on this thread: i'm planning to check in
the uuid.py module at http://zesty.ca/python/uuid.py (with tests).


-- ?!ng

From ncoghlan at gmail.com  Sat Jun 10 16:06:03 2006
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 11 Jun 2006 00:06:03 +1000
Subject: [Python-Dev] Pre-PEP: Allow Empty Subscript List Without
	Parentheses
In-Reply-To: <448AA5A6.6090803@canterbury.ac.nz>
References: <b348a0850606090853n62a1d01fhfdef21463a06e37f@mail.gmail.com>	<ca471dc20606090944p75aba6a9x8761d6872522d5b6@mail.gmail.com>	<960F603A-04F4-4AE3-ABC4-A6ED58FBD893@nicko.org>	<ca471dc20606091005s2eaa754u49ad29d5560447b0@mail.gmail.com>	<e6cdkg$kci$1@sea.gmane.org>
	<448A0C16.9080301@canterbury.ac.nz>	<448A3F75.7090703@gmail.com>
	<448A6377.8040902@canterbury.ac.nz>	<Pine.LNX.4.58.0606100206550.5223@server1.LFW.org>
	<448AA5A6.6090803@canterbury.ac.nz>
Message-ID: <448AD1CB.1080409@gmail.com>

Greg Ewing wrote:
> Ka-Ping Yee wrote:
> 
>> I think the whole discussion about the concept and meaning of
>> zero-dimensional arrays is mostly irrelevant to the original
>> issue.  The original issue is a *syntax* question: should
>> x[()] be written as x[]?
> 
> But, at least as presented in the PEP, it's a
> syntax that was motivated by a perceived need
> for dealing with 0D arrays. So it seems relevant
> to ask whether 0D arrays are really needed or not.
> 
> Does anyone have any other use case for this
> syntax?

I believe the NumPy link Robert posted does a pretty good job of thrashing out 
the use cases (or lack thereof).

I also thought a bit more about the places where the comma separator is used 
as part of the syntax, and realised that there is no consistent behaviour used 
when the expression is omitted entirely.

In return and yield, omitting the expression is equivalent to using 'None'

In print, omitting the expression is equivalent to using the empty string.

In raise, omitting the expression has no real equivalent short of:
   exctype, excval, exctb = sys.exc_info()
   raise exctype, excval, exctb

In an except clause, omitting the expression means the clause handles all 
exceptions.

In a function definition, omitting the expression means the function accepts 
no arguments.

In a function call, omitting the expression is equivalent to writing *().

Most other places (assignment statements, for loops, etc) omitting the 
expression that may contain a comma separator is simply not permitted.

The closest parallel would be with return/yield, as those actually create real 
tuples the same way subscripts do, and allow the expression to be omitted 
entirely.

By that parallel, however, an implicit subscript (if adopted) should be None 
rather than ().

Adapting the table from the pre-PEP to describe return statements (and yield 
expressions):

     return i, j, k  <-->  return (i, j, k)
     return i, j     <-->  return (i, j)
     return i,       <-->  return (i, )
     return i        <-->  return (i)
                           return ()    # (No implicit equivalent)
     return          <-->  return None

With the status quo, however, subscripts are simply equivalent to the RHS of 
an assignment statement in *requiring* that the expression be non-empty:

     x = i, j, k  <-->  x = (i, j, k)
     x = i, j     <-->  x = (i, j)
     x = i,       <-->  x = (i, )
     x = i        <-->  x = (i)
                        x = ()     # (No implicit equivalent)
                        x = None   # (No implicit equivalent)

The PEP doesn't make a sufficiently compelling case for introducing 
yet-another-variant on the implicit behaviour invoked when a particular 
subexpression is missing from a construct.

I guess I could have gone with my initial instinct of -1 and saved myself some 
mental exercise ;)

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------
             http://www.boredomandlaziness.org

From p.f.moore at gmail.com  Sat Jun 10 17:01:07 2006
From: p.f.moore at gmail.com (Paul Moore)
Date: Sat, 10 Jun 2006 16:01:07 +0100
Subject: [Python-Dev] UUID module
In-Reply-To: <Pine.LNX.4.58.0606100804170.5223@server1.LFW.org>
References: <5.1.1.6.0.20060307103122.042c4ae8@mail.telecommunity.com>
	<Pine.LNX.4.58.0606090221330.5223@server1.LFW.org>
	<e6baf3$8fl$1@sea.gmane.org>
	<9e804ac0606090626o20008650o93a211da5e6f6f64@mail.gmail.com>
	<Pine.LNX.4.58.0606100804170.5223@server1.LFW.org>
Message-ID: <79990c6b0606100801q55adda87o760bd7987883419@mail.gmail.com>

On 6/10/06, Ka-Ping Yee <python-dev at zesty.ca> wrote:
> ...so i looked at PEAK's getnodeid48() routine and borrowed the
> Win32 calls from there, with a comment giving attribution to PEAK.

Instead of using pywin32, could you use ctypes, as that's part of core
Python? It looks like the only Win32 API you use is CoCreateGUID, so
wrapping it should be doable...

Paul.

From jdahlin at async.com.br  Sat Jun 10 17:22:48 2006
From: jdahlin at async.com.br (Johan Dahlin)
Date: Sat, 10 Jun 2006 12:22:48 -0300
Subject: [Python-Dev] beta1 coming real soon
In-Reply-To: <Pine.LNX.4.58.0606100828400.5223@server1.LFW.org>
References: <ee2a432c0606082323t1ee013b6nd91771fb461acede@mail.gmail.com>
	<Pine.LNX.4.58.0606100828400.5223@server1.LFW.org>
Message-ID: <448AE3C8.5080105@async.com.br>

Ka-Ping Yee wrote:
> On Thu, 8 Jun 2006, Neal Norwitz wrote:
>> If you plan to make a checkin adding a feature (even a simple one),
>> you oughta let people know by responding to this message.  Please get
>> the bug fixes in ASAP.  Remember to add tests!
> 
> Just to make this appear on this thread: i'm planning to check in
> the uuid.py module at http://zesty.ca/python/uuid.py (with tests).

Just a quick comment, ipconfig is localized on my system so it'll output
something like this:

   endereço inet6: fe80::20e:a6ff:feac:c3bd/64 Escopo:Link

So it needs to be run with LANG set to C to avoid this.
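
E.g. something along these lines when shelling out -- just a sketch, and
the exact command uuid.py runs may differ:

    import os, subprocess

    env = dict(os.environ, LANG='C', LC_ALL='C')
    p = subprocess.Popen(['ifconfig', '-a'], stdout=subprocess.PIPE, env=env)
    output = p.communicate()[0]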

Perhaps it would be better to use the same API ipconfig uses, but it's
highly platform specific of course.

Johan


From theller at python.net  Sat Jun 10 17:11:15 2006
From: theller at python.net (Thomas Heller)
Date: Sat, 10 Jun 2006 17:11:15 +0200
Subject: [Python-Dev] UUID module
In-Reply-To: <Pine.LNX.4.58.0606100804170.5223@server1.LFW.org>
References: <5.1.1.6.0.20060307103122.042c4ae8@mail.telecommunity.com>	<Pine.LNX.4.58.0606090221330.5223@server1.LFW.org>	<e6baf3$8fl$1@sea.gmane.org>	<9e804ac0606090626o20008650o93a211da5e6f6f64@mail.gmail.com>
	<Pine.LNX.4.58.0606100804170.5223@server1.LFW.org>
Message-ID: <e6enei$kpo$1@sea.gmane.org>

Ka-Ping Yee wrote:
> Finally, Phillip brought up PEAK:
>> PEAK's uuid module does this such that if win32all is present,
>> you get a Windows GUID, or if you have a FreeBSD 5+ or
>> NetBSD 2+ kernel you use the local platform uuidgen API.  See e.g.:
> 
> ...so i looked at PEAK's getnodeid48() routine and borrowed the
> Win32 calls from there, with a comment giving attribution to PEAK.
> 
> Oh, and there is now a test suite that should cover all the code paths.
> 
> This is all posted at http://zesty.ca/python/uuid.py now,
> documentation page at http://zesty.ca/python/uuid.html,
> tests at http://zesty.ca/python/test_uuid.py .

(From http://zesty.ca/python/uuid.py:)

def win32_getnode():
    """Get the hardware address on Windows using Win32 extensions."""
    try:
        import pywintypes
        return int(pywintypes.CreateGuid()[-13:-1], 16)
    except:
        pass
    try:
        import pythoncom
        return int(pythoncom.CreateGuid()[-13:-1], 16)
    except:
        pass



This does not work, for several reasons.

1. (pythoncom|pywintypes).CreateGuid() return a PyIID instance, which you cannot slice:

>>> import pywintypes
>>> pywintypes.CreateGuid()
IID('{4589E547-4CB5-4BCC-A7C3-6E88FAFB4301}')
>>> type(pywintypes.CreateGuid())
<type 'PyIID'>
>>> pywintypes.CreateGuid()[-13:-1]
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: unsubscriptable object
>>>

So, you would have to change your code to 'str(pythoncom.CreateGuid())[-13:-1]'.

(BTW: Why does it first try pywintypes, then pythoncom?)

2. These functions call the win32 CoCreateGuid function, which creates a new uuid.
As documented on MSDN, this calls the UuidCreate function, which:

"The UuidCreate function generates a UUID that cannot be traced to the ethernet/token ring
address of the computer on which it was generated."

In other words, the last 12 characters do *not* identify the mac address; in fact your function,
if repaired, returns a different result each time it is called.  See this link for
further info:

http://msdn.microsoft.com/library/default.asp?url=/library/en-us/rpc/rpc/uuidcreate.asp

A possible solution, if you really need the mac address, is to call the UuidCreateSequential
function where available (W2k, XP, ...), and UuidCreate on older systems (W98, ME).
Link to MSDN:
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/rpc/rpc/uuidcreatesequential.asp

3. Now that ctypes is part of Python, why not use it?  This snippet of code
displays the mac address of one of the ethernet adapters I have in my machine:

from ctypes import *
import binascii

class UUID(Structure):
    _fields_ = [("Data1", c_ulong),
                ("Data1", c_ushort),
                ("Data1", c_ushort),
                ("Data1", c_ubyte * 8)]

def CreateGuid():
    uuid = UUID()
    if 0 == windll.rpcrt4.UuidCreateSequential(byref(uuid)):
        return str(buffer(uuid))

print binascii.hexlify(CreateGuid()[-6:])

It should be extended to also work on w98 or me, probably.

Thomas


From p.f.moore at gmail.com  Sat Jun 10 17:19:04 2006
From: p.f.moore at gmail.com (Paul Moore)
Date: Sat, 10 Jun 2006 16:19:04 +0100
Subject: [Python-Dev] UUID module
In-Reply-To: <79990c6b0606100801q55adda87o760bd7987883419@mail.gmail.com>
References: <5.1.1.6.0.20060307103122.042c4ae8@mail.telecommunity.com>
	<Pine.LNX.4.58.0606090221330.5223@server1.LFW.org>
	<e6baf3$8fl$1@sea.gmane.org>
	<9e804ac0606090626o20008650o93a211da5e6f6f64@mail.gmail.com>
	<Pine.LNX.4.58.0606100804170.5223@server1.LFW.org>
	<79990c6b0606100801q55adda87o760bd7987883419@mail.gmail.com>
Message-ID: <79990c6b0606100819l7227d24atcc723676ce295d49@mail.gmail.com>

On 6/10/06, Paul Moore <p.f.moore at gmail.com> wrote:
> On 6/10/06, Ka-Ping Yee <python-dev at zesty.ca> wrote:
> > ...so i looked at PEAK's getnodeid48() routine and borrowed the
> > Win32 calls from there, with a comment giving attribution to PEAK.
>
> Instead of using pywin32, could you use ctypes, as that's part of core
> Python? It looks like the only Win32 API you use is CoCreateGUID, so
> wrapping it should be doable...

Here's some sample code (taken from Thomas Heller's comtypes module)

>>> from ctypes import oledll, Structure, byref
>>> from ctypes.wintypes import BYTE, WORD, DWORD
>>> class GUID(Structure):
...     _fields_ = [("Data1", DWORD),
...                 ("Data2", WORD),
...                 ("Data3", WORD),
...                 ("Data4", BYTE * 8)]
...
>>> guid = GUID()
>>> oledll.ole32.CoCreateGuid(byref(guid))
0
>>> guid
<__main__.GUID object at 0x00978EE0>
>>> guid.Data1
3391869098L
>>> guid.Data2
51115
>>> guid.Data3
20060
>>> guid.Data4
<__main__.c_byte_Array_8 object at 0x00978E40>
>>>

I'm not sure what the int(...[-13:-1], 16) does (as it stands, it
fails for me - I think you need to put str() round the CreateGuid
call) but I *think* it's the equivalent of guid.Data1 above.

So, you'd have:

def win32_getnode():
    """Get the hardware address on Windows using Win32 extensions."""
    from ctypes import oledll, Structure, byref
    from ctypes.wintypes import BYTE, WORD, DWORD
    class GUID(Structure):
        _fields_ = [("Data1", DWORD),
                    ("Data2", WORD),
                    ("Data3", WORD),
                    ("Data4", BYTE * 8)]

    guid = GUID()
    oledll.ole32.CoCreateGuid(byref(guid))
    return guid.Data1

Hope this is of use.
Paul.

From python-dev at zesty.ca  Sat Jun 10 17:22:53 2006
From: python-dev at zesty.ca (Ka-Ping Yee)
Date: Sat, 10 Jun 2006 10:22:53 -0500 (CDT)
Subject: [Python-Dev] UUID module
In-Reply-To: <e6enei$kpo$1@sea.gmane.org>
References: <5.1.1.6.0.20060307103122.042c4ae8@mail.telecommunity.com>
	<Pine.LNX.4.58.0606090221330.5223@server1.LFW.org>
	<e6baf3$8fl$1@sea.gmane.org>
	<9e804ac0606090626o20008650o93a211da5e6f6f64@mail.gmail.com>
	<Pine.LNX.4.58.0606100804170.5223@server1.LFW.org>
	<e6enei$kpo$1@sea.gmane.org>
Message-ID: <Pine.LNX.4.58.0606101016530.5223@server1.LFW.org>

Hi Thomas,

> This does not work, for several reasons.
>
> 1. (pythoncom|pywintypes).CreateGuid() return a PyIID instance, which you cannot slice:

You're right.  The PEAK code must have been based on an earlier
version of the CreateGuid() routine.

I've fixed this, and added detection of the UUID version so that
the MAC address will only be picked up if the UUID version is 1.

> (BTW: Why does it first try pywintypes, then pythoncom?)

Because PEAK does this, and i see the CreateGuid routine imported
from both modules in Google searches for code, i assumed that it
used to live in one module and moved to the other.

I've also figured out how to get the MAC address using NetBIOS
calls, and added that to the repertoire of methods.  I've tested
this and it works fast.  I think this may be better than
UuidCreateSequential, because it should work on both Win98 and XP.

The updated version is posted at http://zesty.ca/python/uuid.py .


-- ?!ng

From theller at python.net  Sat Jun 10 17:35:17 2006
From: theller at python.net (Thomas Heller)
Date: Sat, 10 Jun 2006 17:35:17 +0200
Subject: [Python-Dev] UUID module
In-Reply-To: <Pine.LNX.4.58.0606101016530.5223@server1.LFW.org>
References: <5.1.1.6.0.20060307103122.042c4ae8@mail.telecommunity.com>	<Pine.LNX.4.58.0606090221330.5223@server1.LFW.org>	<e6baf3$8fl$1@sea.gmane.org>	<9e804ac0606090626o20008650o93a211da5e6f6f64@mail.gmail.com>	<Pine.LNX.4.58.0606100804170.5223@server1.LFW.org>	<e6enei$kpo$1@sea.gmane.org>
	<Pine.LNX.4.58.0606101016530.5223@server1.LFW.org>
Message-ID: <e6eorl$o0d$1@sea.gmane.org>

Ka-Ping Yee wrote:
> Hi Thomas,
> 
>> This does not work, for several reasons.
>>
>> 1. (pythoncom|pywintypes).CreateGuid() return a PyIID instance, which you cannot slice:
> 
> You're right.  The PEAK code must have been based on an earlier
> version of the CreateGuid() routine.
> 
> I've fixed this, and added detection of the UUID version so that
> the MAC address will only be picked up if the UUID version is 1.
> 
>> (BTW: Why does it first try pywintypes, then pythoncom?)
> 
> Because PEAK does this, and i see the CreateGuid routine imported
> from both modules in Google searches for code, i assumed that it
> used to live in one module and moved to the other.
> 
> I've also figured out how to get the MAC address using NetBIOS
> calls, and added that to the repertoire of methods.  I've tested
> this and it works fast.  I think this may be better than
> UuidCreateSequential, because it should work on both Win98 and XP.

I have not tested the speed, but extending my snippet to also work
on 98 should be nearly trivial:

try:
    _func = windll.rpcrt4.UuidCreateSequential
except AttributeError:
    _func = windll.rpcrt4.UuidCreate

def CreateGuid():
    uuid = UUID()
    if 0 == _func(byref(uuid)):
        return str(buffer(uuid))

Thomas


From p.f.moore at gmail.com  Sat Jun 10 17:36:16 2006
From: p.f.moore at gmail.com (Paul Moore)
Date: Sat, 10 Jun 2006 16:36:16 +0100
Subject: [Python-Dev] beta1 coming real soon
In-Reply-To: <448AE3C8.5080105@async.com.br>
References: <ee2a432c0606082323t1ee013b6nd91771fb461acede@mail.gmail.com>
	<Pine.LNX.4.58.0606100828400.5223@server1.LFW.org>
	<448AE3C8.5080105@async.com.br>
Message-ID: <79990c6b0606100836r177bc851nf5ecab688a7d9ef8@mail.gmail.com>

On 6/10/06, Johan Dahlin <jdahlin at async.com.br> wrote:
> Ka-Ping Yee wrote:
> > On Thu, 8 Jun 2006, Neal Norwitz wrote:
> >> If you plan to make a checkin adding a feature (even a simple one),
> >> you oughta let people know by responding to this message.  Please get
> >> the bug fixes in ASAP.  Remember to add tests!
> >
> > Just to make this appear on this thread: i'm planning to check in
> > the uuid.py module at http://zesty.ca/python/uuid.py (with tests).
>
> Just a quick comment, ipconfig is localized on my system so it'll output
> something like this:
>
>    endere?o inet6: fe80::20e:a6ff:feac:c3bd/64 Escopo:Link
>
> So it needs to be run with LANG set to C to avoid this.

Actually, the code uses "ifconfig", which doesn't exist on Windows.
You want the command "ipconfig /all". And it assumes Windows is
installed on the C: drive (normal, but not guaranteed).

> Perhaps it would be better to use the same API ipconfig uses, but it's
> highly platform specific of course.

Here's a VB script that gets all the MAC addresses on the system using WMI:

Dim objNetworkAdapters, objAdapter, objWMI

Set objWMI = Nothing
Set objWMI = GetObject("winmgmts:")

' Get a list of IP-enabled adapters.
Set objNetworkAdapters = objWMI.ExecQuery( _
    "select * from Win32_NetworkAdapterConfiguration where IPEnabled = 1")

For Each objAdapter In objNetworkAdapters
      wscript.echo "Network adapter: " & objAdapter.Caption & _
          " has MAC address " & objAdapter.MacAddress
Next

It might be worth noting that on my PC, this gets one "real" network
adapter, and two VMWare virtual adapters. Whether this affects things,
I don't know. This script and ipconfig /all give the adapters in
different orders.
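
For what it's worth, the same WMI query can be issued from Python with
the pywin32 extensions -- a sketch, untested here:

    import win32com.client

    wmi = win32com.client.GetObject("winmgmts:")
    query = ("select * from Win32_NetworkAdapterConfiguration "
             "where IPEnabled = 1")
    for adapter in wmi.ExecQuery(query):
        print adapter.Caption, adapter.MACAddress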

Paul.

From python-dev at zesty.ca  Sat Jun 10 17:38:40 2006
From: python-dev at zesty.ca (Ka-Ping Yee)
Date: Sat, 10 Jun 2006 10:38:40 -0500 (CDT)
Subject: [Python-Dev] UUID module
In-Reply-To: <e6eorl$o0d$1@sea.gmane.org>
References: <5.1.1.6.0.20060307103122.042c4ae8@mail.telecommunity.com>
	<Pine.LNX.4.58.0606090221330.5223@server1.LFW.org>
	<e6baf3$8fl$1@sea.gmane.org>
	<9e804ac0606090626o20008650o93a211da5e6f6f64@mail.gmail.com>
	<Pine.LNX.4.58.0606100804170.5223@server1.LFW.org>
	<e6enei$kpo$1@sea.gmane.org>
	<Pine.LNX.4.58.0606101016530.5223@server1.LFW.org>
	<e6eorl$o0d$1@sea.gmane.org>
Message-ID: <Pine.LNX.4.58.0606101037590.5223@server1.LFW.org>

On Sat, 10 Jun 2006, Thomas Heller wrote:
> I have not tested the speed, but extending my snippet to also work
> on 98 should be nearly trivial:
>
> try:
>     _func = windll.rpcrt4.UuidCreateSequential
> except AttributeError:
>     _func = windll.rpcrt4.UuidCreate
>
> def CreateGuid():
>     uuid = UUID()
>     if 0 == _func(byref(uuid)):
>         return str(buffer(uuid))

Okay.  Thanks for this.  I'll give it a shot.


-- ?!ng

From python-dev at zesty.ca  Sat Jun 10 17:39:06 2006
From: python-dev at zesty.ca (Ka-Ping Yee)
Date: Sat, 10 Jun 2006 10:39:06 -0500 (CDT)
Subject: [Python-Dev] beta1 coming real soon
In-Reply-To: <79990c6b0606100836r177bc851nf5ecab688a7d9ef8@mail.gmail.com>
References: <ee2a432c0606082323t1ee013b6nd91771fb461acede@mail.gmail.com>
	<Pine.LNX.4.58.0606100828400.5223@server1.LFW.org>
	<448AE3C8.5080105@async.com.br>
	<79990c6b0606100836r177bc851nf5ecab688a7d9ef8@mail.gmail.com>
Message-ID: <Pine.LNX.4.58.0606101038430.5223@server1.LFW.org>

On Sat, 10 Jun 2006, Paul Moore wrote:
> Actually, the code uses "ifconfig", which doesn't exist on Windows.
> You want the command "ipconfig /all".

I fixed that before you posted this message. :)


-- ?!ng

From p.f.moore at gmail.com  Sat Jun 10 17:40:46 2006
From: p.f.moore at gmail.com (Paul Moore)
Date: Sat, 10 Jun 2006 16:40:46 +0100
Subject: [Python-Dev] beta1 coming real soon
In-Reply-To: <Pine.LNX.4.58.0606101038430.5223@server1.LFW.org>
References: <ee2a432c0606082323t1ee013b6nd91771fb461acede@mail.gmail.com>
	<Pine.LNX.4.58.0606100828400.5223@server1.LFW.org>
	<448AE3C8.5080105@async.com.br>
	<79990c6b0606100836r177bc851nf5ecab688a7d9ef8@mail.gmail.com>
	<Pine.LNX.4.58.0606101038430.5223@server1.LFW.org>
Message-ID: <79990c6b0606100840n7cbb2cbbh1490c0bc792e9ddd@mail.gmail.com>

On 6/10/06, Ka-Ping Yee <python-dev at zesty.ca> wrote:
> On Sat, 10 Jun 2006, Paul Moore wrote:
> > Actually, the code uses "ifconfig", which doesn't exist on Windows.
> > You want the command "ipconfig /all".
>
> I fixed that before you posted this message. :)

Give Guido the time machine keys back! :-)
Paul.

From pje at telecommunity.com  Sat Jun 10 17:50:09 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Sat, 10 Jun 2006 11:50:09 -0400
Subject: [Python-Dev] UUID module
In-Reply-To: <Pine.LNX.4.58.0606100804170.5223@server1.LFW.org>
References: <9e804ac0606090626o20008650o93a211da5e6f6f64@mail.gmail.com>
	<5.1.1.6.0.20060307103122.042c4ae8@mail.telecommunity.com>
	<Pine.LNX.4.58.0606090221330.5223@server1.LFW.org>
	<e6baf3$8fl$1@sea.gmane.org>
	<9e804ac0606090626o20008650o93a211da5e6f6f64@mail.gmail.com>
Message-ID: <5.1.1.6.0.20060610114311.01e84e48@mail.telecommunity.com>

At 08:22 AM 6/10/2006 -0500, Ka-Ping Yee wrote:
>Finally, Phillip brought up PEAK:
> > PEAK's uuid module does this such that if win32all is present,
> > you get a Windows GUID, or if you have a FreeBSD 5+ or
> > NetBSD 2+ kernel you use the local platform uuidgen API.  See e.g.:
>
>...so i looked at PEAK's getnodeid48() routine and borrowed the
>Win32 calls from there, with a comment giving attribution to PEAK.

There appears to be a transcription error there; the second win32 import 
isn't covered by a try/except and the ImportError seems to have disappeared 
as well.

Also, for Python 2.5, these imports could probably be replaced with a 
ctypes call, though I'm not experienced enough w/ctypes to figure out what 
the call should be.

Similarly, for the _uuidgen module, you've not included the C source for 
that module or the setup.py incantations to build it.  But again, it could 
probably be replaced by ctypes calls to uuidgen(2) on BSD-ish platforms.

I'll take a whack at addressing these once the code is in, though, unless 
there's a ctypes guru or two available...?


From python-dev at zesty.ca  Sat Jun 10 18:16:19 2006
From: python-dev at zesty.ca (Ka-Ping Yee)
Date: Sat, 10 Jun 2006 11:16:19 -0500 (CDT)
Subject: [Python-Dev] UUID module
In-Reply-To: <e6eorl$o0d$1@sea.gmane.org>
References: <5.1.1.6.0.20060307103122.042c4ae8@mail.telecommunity.com>
	<Pine.LNX.4.58.0606090221330.5223@server1.LFW.org>
	<e6baf3$8fl$1@sea.gmane.org>
	<9e804ac0606090626o20008650o93a211da5e6f6f64@mail.gmail.com>
	<Pine.LNX.4.58.0606100804170.5223@server1.LFW.org>
	<e6enei$kpo$1@sea.gmane.org>
	<Pine.LNX.4.58.0606101016530.5223@server1.LFW.org>
	<e6eorl$o0d$1@sea.gmane.org>
Message-ID: <Pine.LNX.4.58.0606101108450.5223@server1.LFW.org>

On Sat, 10 Jun 2006, Thomas Heller wrote:
> [some nice ctypes code]

Done.  Works like a charm.  Thanks for providing the code!

On Sat, 10 Jun 2006, Phillip J. Eby wrote:
> Also, for Python 2.5, these imports could probably be replaced with a
> ctypes call, though I'm not experienced enough w/ctypes to figure out what
> the call should be.

Happily, we have *the* ctypes guru here, and he's solved the problem
for Windows at least.

> Similarly, for the _uuidgen module, you've not included the C source for
> that module or the setup.py incantations to build it.

Yes, the idea was that even though _uuidgen isn't included with core
Python, users would magically benefit if they installed it (or if they
happen to be using Python in a distribution that includes it); it's
the same idea with the stuff that refers to Win32 extensions.  Is the
presence of _uuidgen sufficiently rare that i should leave out
uuidgen_getnode() for now, then?


-- ?!ng

From ronaldoussoren at mac.com  Sat Jun 10 18:20:28 2006
From: ronaldoussoren at mac.com (Ronald Oussoren)
Date: Sat, 10 Jun 2006 18:20:28 +0200
Subject: [Python-Dev] beta1 coming real soon
In-Reply-To: <4489BC63.4010904@v.loewis.de>
References: <ee2a432c0606082323t1ee013b6nd91771fb461acede@mail.gmail.com>
	<44899506.2070008@python.net>
	<83C75008-C9DC-4D0A-9D39-21347C36D0E2@mac.com>
	<4489BC63.4010904@v.loewis.de>
Message-ID: <FBDC5F4B-5822-4106-8A3C-380338DF5D63@mac.com>


On 9-jun-2006, at 20:22, Martin v. Löwis wrote:

> Ronald Oussoren wrote:
>> How hard is the feature freeze? Would it be possible to update the
>> Carbon bindings after beta1? Given Apple's focus on backward
>> compatibility the update should only add new functionality, not
>> remove existing functions/types.
>
> I'd say it's absolute wrt. to requiring permission from the release
> manager.

I suppose that's as good as it gets, and that's fine by me. The
reason I asked is that if the answer were "no way" I definitely
wouldn't bother to spend time on this.

>
> The point of not allowing new features after a beta release is
> that one wants to avoid getting untested new features into a release.
> For that goal, it is not that relevant whether the new features
> are guaranteed not to break any existing features - they still
> don't get the testing that the beta releases try to achieve.

If the past is any prediction of the future, beta releases won't
result in enough testing of the Carbon wrappers :-/. AFAIK there are
known issues with at least some of the wrappers, but sadly those
issues are not on the tracker. I know of problems with OSA support
because an external package that uses those extensions has basically
forked them to be able to apply bugfixes. I've just sent a message to
the pythonmac-sig to ask people to file bug reports about problems
they'd like to see fixed.

Hopefully Python 2.5 will see more testing on the mac because there  
will be prompt binary releases of the betas this time around.

Ronald


From fredrik at pythonware.com  Sat Jun 10 18:34:21 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Sat, 10 Jun 2006 18:34:21 +0200
Subject: [Python-Dev] 2.5 issues need resolving in a few days
In-Reply-To: <ee2a432c0606082333o44b7465bk45b686ca83ccc802@mail.gmail.com>
References: <ee2a432c0606082333o44b7465bk45b686ca83ccc802@mail.gmail.com>
Message-ID: <e6esab$388$1@sea.gmane.org>

Neal Norwitz wrote:

> There is still the missing documentation for ctypes and element tree.
> I know there's been progress on ctypes.  What are we going to do about
> ElementTree?  Are we going to have another undocumented module in the
> core 

if all undocumented modules had as much documentation and articles as 
ET, the world would be a lot better documented ;-)

I've posted a text version of the xml.etree.ElementTree PythonDoc here:

     http://www.python.org/sf/1504046

hopefully, one of the anything-to-latex volunteers will pick this up 
shortly; otherwise, I'll deal with that early next week.

</F>


From theller at python.net  Sat Jun 10 18:39:04 2006
From: theller at python.net (Thomas Heller)
Date: Sat, 10 Jun 2006 18:39:04 +0200
Subject: [Python-Dev] UUID module
In-Reply-To: <5.1.1.6.0.20060610114311.01e84e48@mail.telecommunity.com>
References: <9e804ac0606090626o20008650o93a211da5e6f6f64@mail.gmail.com>	<5.1.1.6.0.20060307103122.042c4ae8@mail.telecommunity.com>	<Pine.LNX.4.58.0606090221330.5223@server1.LFW.org>	<e6baf3$8fl$1@sea.gmane.org>	<9e804ac0606090626o20008650o93a211da5e6f6f64@mail.gmail.com>
	<Pine.LNX.4.58.0606100804170.5223@server1.LFW.org>
	<5.1.1.6.0.20060610114311.01e84e48@mail.telecommunity.com>
Message-ID: <e6esj7$34b$1@sea.gmane.org>

Phillip J. Eby wrote:
> At 08:22 AM 6/10/2006 -0500, Ka-Ping Yee wrote:
>>Finally, Phillip brought up PEAK:
>> > PEAK's uuid module does this such that if win32all is present,
>> > you get a Windows GUID, or if you have a FreeBSD 5+ or
>> > NetBSD 2+ kernel you use the local platform uuidgen API.  See e.g.:
>>
>>...so i looked at PEAK's getnodeid48() routine and borrowed the
>>Win32 calls from there, with a comment giving attribution to PEAK.
> 
> There appears to be a transcription error, there; the second win32 import 
> isn't covered by a try/except and the ImportError seems to have disappeared 
> as well.
> 
> Also, for Python 2.5, these imports could probably be replaced with a 
> ctypes call, though I'm not experienced enough w/ctypes to figure out what 
> the call should be.
> 
> Similarly, for the _uuidgen module, you've not included the C source for 
> that module or the setup.py incantations to build it.  But again, it could 
> probably be replaced by ctypes calls to uuidgen(2) on BSD-ish platforms.
> 
> I'll take a whack at addressing these once the code is in, though, unless 
> there's a ctypes guru or two available...?

I don't know if this is the uuidgen you're talking about, but on linux there
is libuuid:

>>> from ctypes import *
>>> lib = CDLL("libuuid.so.1")
>>> uuid = create_string_buffer(16)
>>> lib.uuid_generate(byref(uuid))
2131088494
>>> from binascii import hexlify
>>> hexlify(buffer(uuid))
'0c77b6d7e5f940b18e29a749057f6ed4'
>>>


From pje at telecommunity.com  Sat Jun 10 18:39:29 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Sat, 10 Jun 2006 12:39:29 -0400
Subject: [Python-Dev] UUID module
In-Reply-To: <Pine.LNX.4.58.0606101108450.5223@server1.LFW.org>
References: <e6eorl$o0d$1@sea.gmane.org>
	<5.1.1.6.0.20060307103122.042c4ae8@mail.telecommunity.com>
	<Pine.LNX.4.58.0606090221330.5223@server1.LFW.org>
	<e6baf3$8fl$1@sea.gmane.org>
	<9e804ac0606090626o20008650o93a211da5e6f6f64@mail.gmail.com>
	<Pine.LNX.4.58.0606100804170.5223@server1.LFW.org>
	<e6enei$kpo$1@sea.gmane.org>
	<Pine.LNX.4.58.0606101016530.5223@server1.LFW.org>
	<e6eorl$o0d$1@sea.gmane.org>
Message-ID: <5.1.1.6.0.20060610123421.01f62e60@mail.telecommunity.com>

At 11:16 AM 6/10/2006 -0500, Ka-Ping Yee wrote:
>On Sat, 10 Jun 2006, Thomas Heller wrote:
> > [some nice ctypes code]
>
>Done.  Works like a charm.  Thanks for providing the code!
>
>On Sat, 10 Jun 2006, Phillip J. Eby wrote:
> > Also, for Python 2.5, these imports could probably be replaced with a
> > ctypes call, though I'm not experienced enough w/ctypes to figure out what
> > the call should be.
>
>Happily, we have *the* ctypes guru here, and he's solved the problem
>for Windows at least.
>
> > Similarly, for the _uuidgen module, you've not included the C source for
> > that module or the setup.py incantations to build it.
>
>Yes, the idea was that even though _uuidgen isn't included with core
>Python, users would magically benefit if they installed it (or if they
>happen to be using Python in a distribution that includes it);

_uuidgen is actually peak.util._uuidgen; as far as I know, that's the only 
place you can get it.


>  it's
>the same idea with the stuff that refers to Win32 extensions.  Is the
>presence of _uuidgen sufficiently rare that i should leave out
>uuidgen_getnode() for now, then?

Either that, or we could add the code in to build it.  PEAK's setup.py does 
some relatively simple platform checks to determine whether you're on a BSD 
that has it.
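
For what it's worth, that check can be as small as the following setup.py
fragment (a sketch only; the module and source file names are placeholders,
not PEAK's actual code):

    import sys
    from distutils.core import setup, Extension

    ext_modules = []
    # Build the wrapper only on BSDs known to have the uuidgen(2) syscall.
    if sys.platform.startswith('freebsd') or sys.platform.startswith('netbsd'):
        ext_modules.append(Extension('_uuidgen', sources=['_uuidgen.c']))

    setup(name='uuidgen-example', version='0.1', ext_modules=ext_modules)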

The other alternative is to ask the guru nicely if he'll provide another 
ctypes snippet to call the uuidgen(2) system call if present.  :)

By the way, I'd love to see a uuid.uuid() constructor that simply calls the 
platform-specific default UUID constructor (CoCreateGuid or uuidgen(2)), if 
available, before falling back to one of the Python implementations.  Most 
of my UUID-using application code doesn't really care what type of UUID it 
gets, and if the platform has an efficient mechanism, it'd be convenient to 
use it.
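
A rough sketch of what such a constructor could look like, assuming the
module lands as uuid with a pure-Python uuid4() to fall back on; the name
any_uuid() and the probing order are purely illustrative:

    import uuid

    def any_uuid():
        # Prefer a native generator if ctypes can find one; otherwise
        # fall back to a pure-Python implementation.
        try:
            from ctypes import CDLL, byref, create_string_buffer
            from ctypes.util import find_library
            libname = find_library('uuid') or find_library('c')
            if libname:
                lib = CDLL(libname)
                if hasattr(lib, 'uuid_generate'):   # Linux libuuid / OS X libc
                    buf = create_string_buffer(16)
                    lib.uuid_generate(byref(buf))
                    return uuid.UUID(bytes=buf.raw)
        except Exception:
            pass
        return uuid.uuid4()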



From pje at telecommunity.com  Sat Jun 10 19:32:37 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Sat, 10 Jun 2006 13:32:37 -0400
Subject: [Python-Dev] UUID module
In-Reply-To: <e6esj7$34b$1@sea.gmane.org>
References: <5.1.1.6.0.20060610114311.01e84e48@mail.telecommunity.com>
	<9e804ac0606090626o20008650o93a211da5e6f6f64@mail.gmail.com>
	<5.1.1.6.0.20060307103122.042c4ae8@mail.telecommunity.com>
	<Pine.LNX.4.58.0606090221330.5223@server1.LFW.org>
	<e6baf3$8fl$1@sea.gmane.org>
	<9e804ac0606090626o20008650o93a211da5e6f6f64@mail.gmail.com>
	<Pine.LNX.4.58.0606100804170.5223@server1.LFW.org>
	<5.1.1.6.0.20060610114311.01e84e48@mail.telecommunity.com>
Message-ID: <5.1.1.6.0.20060610132400.01e990f8@mail.telecommunity.com>

At 06:39 PM 6/10/2006 +0200, Thomas Heller wrote:
>I don't know if this is the uuidgen you're talking about, but on linux there
>is libuuid:
>
> >>> from ctypes import *
> >>> lib = CDLL("libuuid.so.1")
> >>> uuid = create_string_buffer(16)
> >>> lib.uuid_generate(byref(uuid))
>2131088494
> >>> from binascii import hexlify
> >>> hexlify(buffer(uuid))
>'0c77b6d7e5f940b18e29a749057f6ed4'
> >>>

Nice.  :)  But no, this one's a uuidgen() system call on FreeBSD>=5.0 and 
NetBSD>=2.0; it may be in other BSD variants as well... perhaps OS X?


NAME
   uuidgen -- generate universally unique identifiers

LIBRARY
   Standard C Library (libc, -lc)

SYNOPSIS
   #include <sys/types.h>
   #include <sys/uuid.h>

   int
   uuidgen(struct uuid *store, int count);

DESCRIPTION
   The uuidgen() system call generates count universally unique identifiers
   (UUIDs) and writes them to the buffer pointed to by store.  The identi-
   fiers are generated according to the syntax and semantics of the DCE ver-
   sion 1 variant of universally unique identifiers.  See below for a more
   in-depth description of the identifiers.  When no IEEE 802 address is
   available for the node field, a random multi-cast address is generated
   for each invocation of the system call.  According to the algorithm of
   generating time-based UUIDs, this will also force a new random clock
   sequence, thereby increasing the likelihood for the identifier to be
   unique.

   When multiple identifiers are to be generated, the uuidgen() system call
   will generate a set of identifiers that is dense in such a way that there
   is no identifier that is larger than the smallest identifier in the set
   and smaller than the largest identifier in the set and that is not
   already in the set.

   Universally unique identifiers, also known as globally unique identifiers
   (GUIDs), have a binary representation of 128-bits.  The grouping and
   meaning of these bits is described by the following structure and its
   description of the fields that follow it:

   struct uuid {
           uint32_t        time_low;
           uint16_t        time_mid;
           uint16_t        time_hi_and_version;
           uint8_t         clock_seq_hi_and_reserved;
           uint8_t         clock_seq_low;
           uint8_t         node[_UUID_NODE_LEN];
   };

   time_low                   The least significant 32 bits of a 60-bit
                              timestamp.  This field is stored in the native
                              byte-order.

   time_mid                   The least significant 16 bits of the most sig-
                              nificant 28 bits of the 60-bit timestamp.
                              This field is stored in the native byte-order.

   time_hi_and_version        The most significant 12 bits of the 60-bit
                              timestamp multiplexed with a 4-bit version
                              number.  The version number is stored in the
                              most significant 4 bits of the 16-bit field.
                              This field is stored in the native byte-order.

   clock_seq_hi_and_reserved  The most significant 6 bits of a 14-bit
                              sequence number multiplexed with a 2-bit vari-
                              ant value.  Note that the width of the variant
                              value is determined by the variant itself.
                              Identifiers generated by the uuidgen() system
                              call have variant value 10b.  the variant
                              value is stored in the most significant bits
                              of the field.

   clock_seq_low              The least significant 8 bits of a 14-bit
                              sequence number.

   node                       The 6-byte IEEE 802 (MAC) address of one of
                              the interfaces of the node.  If no such inter-
                              face exists, a random multi-cast address is
                              used instead.

   The binary representation is sensitive to byte ordering.  Any multi-byte
   field is to be stored in the local or native byte-order and identifiers
   must be converted when transmitted to hosts that do not agree on the
   byte-order.  The specification does not however document what this means
   in concrete terms and is otherwise beyond the scope of this system call.

RETURN VALUES
   Upon successful completion, the value 0 is returned; otherwise the
   value -1 is returned and the global variable errno is set to indicate the
   error.

ERRORS
   The uuidgen() system call can fail with:

   [EFAULT]           The buffer pointed to by store could not be written to
                      for any or all identifiers.

   [EINVAL]           The count argument is less than 1 or larger than the
                      hard upper limit of 2048.

SEE ALSO
   uuidgen(1), uuid(3)

STANDARDS
   The identifiers are represented and generated in conformance with the DCE
   1.1 RPC specification.  The uuidgen() system call is itself not part of
   the specification.

HISTORY
   The uuidgen() system call first appeared in FreeBSD 5.0 and was subse-
   quently added to NetBSD 2.0.


From noamraph at gmail.com  Sat Jun 10 21:00:16 2006
From: noamraph at gmail.com (Noam Raphael)
Date: Sat, 10 Jun 2006 22:00:16 +0300
Subject: [Python-Dev] Pre-PEP: Allow Empty Subscript List Without
	Parentheses
In-Reply-To: <b348a0850606090853n62a1d01fhfdef21463a06e37f@mail.gmail.com>
References: <b348a0850606090853n62a1d01fhfdef21463a06e37f@mail.gmail.com>
Message-ID: <b348a0850606101200s87733bdla5fb67f0f6da04ed@mail.gmail.com>

Hello,

I'll try to answer the questions in one message. Sorry for not being
able to do it until now.

About the joke - it isn't, I really need it.

About the timing - Of course, I can live with this getting into 2.6,
and I think that I may even be able to stay alive if this were
rejected. I still personally think that if people agree that it's a
good idea it might get in, since there's almost nothing to be decided
except for that - but of course, I can understand not wanting to rush
things too much.

About whether NumPy should return real scalars or 0-dimensional arrays
- I don't know.

About the use case - I think that's the real thing I didn't explain
well and needs explanation, so I will try to do it better this time.

I'm talking about something similar to a spreadsheet in that it saves
data, calculation results, and the way to produce the results.
However, it is not similar to a spreadsheet in that the data isn't
saved in an infinite two-dimensional array with numerical indices.
Instead, the data is saved in a few "tables", each storing a different
kind of data. The tables may be with any desired number of dimensions,
and are indexed by meaningful indices, instead of by natural numbers.

For example, you may have a table called sales_data. It will store the
sales data in years from set([2003, 2004, 2005]), for car models from
set(['Subaru', 'Toyota', 'Ford']), for cities from set(['Jerusalem',
'Tel Aviv', 'Haifa']). To refer to the sales of Ford in Haifa in 2004,
you will simply write: sales_data[2004, 'Ford', 'Haifa']. If the table
is a source of data (that is, not calculated), you will be able to set
values by writing: sales_data[2004, 'Ford', 'Haifa'] = 1500.

Tables may be computed tables. For example, you may have a table which
holds for each year the total sales in that year, with the income tax
subtracted. It may be defined by a function like this:

lambda year: sum(sales_data[year, model, city] for model in models
                 for city in cities) / (1 + income_tax_rate)

Now, like in a spreadsheet, the function is kept, so that if you
change the data, the result will be automatically recalculated. So, if
you discovered a mistake in your data, you will be able to write:

sales_data[2004, 'Ford', 'Haifa'] = 2000

and total_sales[2004] will be automatically recalculated.

Now, note that the total_sales table depends also on the
income_tax_rate. This is a variable, just like sales_data. Unlike
sales_data, it's a single value. We should be able to change it, with
the result of all the cells of the total_sales table recalculated. But
how will we do it? We can write

income_tax_rate = 0.18

but it will have a completely different meaning. The way to make the
income_tax_rate changeable is to think of it as a 0-dimensional table.
It makes sense: sales_data depends on 3 parameters (year, model,
city), total_sales depends on 1 parameter (year), and income_tax_rate
depends on 0 parameters. That's the only difference. So, thinking of
it like this, we will simply write:

income_tax_rate[] = 0.18

Now the system can know that the income tax rate has changed, and
recalculate what's needed. We will also have to change the previous
function a tiny bit, to:

lambda year: sum(sales_data[year, model, city] for model in models
                 for city in cities) / (1 + income_tax_rate[])

But it's fine - it just makes it clearer that income_tax_rate[] is a
part of the model that may change its value.
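
To make the mechanics concrete, here is a toy sketch of the kind of object
involved (dependency tracking and recalculation left out; the Table class
and its attributes are invented for illustration, not part of the proposal):

    class Table(object):
        # Tuple-keyed storage standing in for the tables described above.
        def __init__(self):
            self._cells = {}

        def __getitem__(self, key):
            # sales_data[2004, 'Ford', 'Haifa'] arrives here as the tuple
            # (2004, 'Ford', 'Haifa'); a 0-dimensional access would arrive
            # as () if the proposed  income_tax_rate[]  syntax existed.
            return self._cells[key]

        def __setitem__(self, key, value):
            self._cells[key] = value

    sales_data = Table()
    sales_data[2004, 'Ford', 'Haifa'] = 1500   # key is the 3-tuple

    income_tax_rate = Table()
    income_tax_rate[()] = 0.18   # today the () has to be written explicitly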

I hope that I managed to explain the use case better this time -
please ask if my description isn't clear enough.

Thanks for your comments, please send more,
Noam

From aleaxit at gmail.com  Sat Jun 10 21:04:23 2006
From: aleaxit at gmail.com (Alex Martelli)
Date: Sat, 10 Jun 2006 12:04:23 -0700
Subject: [Python-Dev] a note in random.shuffle.__doc__ ...
Message-ID: <1CC07EFE-45B2-4AA4-BEFB-0CFFE4CDB1F9@gmail.com>

...claims:

Note that for even rather small len(x), the total number of
permutations of x is larger than the period of most random number
generators; this implies that "most" permutations of a long
sequence can never be generated.

Now -- why would the behavior of "most" random number generators be  
relevant here?  The module's docs claim, for its specific Mersenne  
Twister generator, a period of 2**19997-1, which is (e.g.) a  
comfortable  
130128673800676351960752618754658780303412233749552410245124492452914636 
028095467780746435724876612802011164778042889281426609505759158196749438 
742986040468247017174321241233929215223326801091468184945617565998894057 
859403269022650639413550466514556014961826309062543 times larger than  
the number of permutations of 2000 items, which doesn't really feel  
to me like a "rather small len(x)" in this context (what I'm most  
often shuffling is just a pack of cards -- len(x)==52 -- for example).

I suspect that the note is just a fossil from a time when the default  
random number generator was Wichmann-Hill, with a much shorter
period.  Should this note just be removed, or instead somehow  
reworded to point out that this is not in fact a problem for the  
module's current default random number generator?  Opinions welcome!


Alex


From fdrake at acm.org  Sat Jun 10 21:15:46 2006
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Sat, 10 Jun 2006 15:15:46 -0400
Subject: [Python-Dev] 2.5 issues need resolving in a few days
In-Reply-To: <e6esab$388$1@sea.gmane.org>
References: <ee2a432c0606082333o44b7465bk45b686ca83ccc802@mail.gmail.com>
	<e6esab$388$1@sea.gmane.org>
Message-ID: <200606101515.47750.fdrake@acm.org>

On Saturday 10 June 2006 12:34, Fredrik Lundh wrote:
 > if all undocumented modules had as much documentation and articles as
 > ET, the world would be a lot better documented ;-)
 >
 > I've posted a text version of the xml.etree.ElementTree PythonDoc here:

Here's a question that we should answer before the beta:

With the introduction of the xmlcore package in Python 2.5, should we document 
xml.etree or xmlcore.etree?  If someone installs PyXML with Python 2.5, I 
don't think they're going to get xml.etree, which will be really confusing.  
We can be sure that xmlcore.etree will be there.

I'd rather not propagate the pain caused by the "xml" package insanity any further.


  -Fred

-- 
Fred L. Drake, Jr.   <fdrake at acm.org>

From noamraph at gmail.com  Sat Jun 10 21:18:26 2006
From: noamraph at gmail.com (Noam Raphael)
Date: Sat, 10 Jun 2006 22:18:26 +0300
Subject: [Python-Dev] Pre-PEP: Allow Empty Subscript List Without
	Parentheses
In-Reply-To: <448AD1CB.1080409@gmail.com>
References: <b348a0850606090853n62a1d01fhfdef21463a06e37f@mail.gmail.com>
	<960F603A-04F4-4AE3-ABC4-A6ED58FBD893@nicko.org>
	<ca471dc20606091005s2eaa754u49ad29d5560447b0@mail.gmail.com>
	<e6cdkg$kci$1@sea.gmane.org> <448A0C16.9080301@canterbury.ac.nz>
	<448A3F75.7090703@gmail.com> <448A6377.8040902@canterbury.ac.nz>
	<Pine.LNX.4.58.0606100206550.5223@server1.LFW.org>
	<448AA5A6.6090803@canterbury.ac.nz> <448AD1CB.1080409@gmail.com>
Message-ID: <b348a0850606101218w653537b9ke163ff1f5c1f737b@mail.gmail.com>

Hello,

2006/6/10, Nick Coghlan <ncoghlan at gmail.com>:
> The closest parallel would be with return/yield, as those actually create real
> tuples the same way subscripts do, and allow the expression to be omitted
> entirely.
>
> By that parallel, however, an implicit subscript (if adopted) should be None
> rather than ().
>
> Adapting the table from the pre-PEP to describe return statements (and yield
> expressions):
>
>      return i, j, k  <-->  return (i, j, k)
>      return i, j     <-->  return (i, j)
>      return i,       <-->  return (i, )
>      return i        <-->  return (i)
>                            return ()    # (No implicit equivalent)
>      return          <-->  return None
>
> With the status quo, however, subscripts are simply equivalent to the RHS of
> an assignment statement in *requiring* that the expression be non-empty:
>
>      x = i, j, k  <-->  x = (i, j, k)
>      x = i, j     <-->  x = (i, j)
>      x = i,       <-->  x = (i, )
>      x = i        <-->  x = (i)
>                         x = ()     # (No implicit equivalent)
>                         x = None   # (No implicit equivalent)
>
> The PEP doesn't make a sufficiently compelling case for introducing
> yet-another-variant on the implicit behaviour invoked when a particular
> subexpression is missing from a construct.
>
I hope that my (hopefully) better explanation made the use case more
compelling, but I want to add two points in favour of an empty tuple:

1. If you want, you can ignore the x[(i, j, k)] equivalence
completely, since it doesn't always hold - for example, you can
write "x[1:2, 3:4]", but you can't write "x[(1:2, 3:4)]". You can
think of x[i, j, k] as a syntax for specifying a cell in a
3-dimensional array, resulting in a call to x.__getitem__ with a
3-tuple describing the subscript for each dimension. In that view,
"x[]", which is a syntax for specifying the cell of a 0-dimensional
array, should result in a __getitem__ call with an empty tuple, as
there are no subscripts to be described.

2. My equivalencies are better than yours :-), since they are dealing
with equivalencies for this specific syntax, while yours are dealing
with similar properties of a syntax for doing something completely
different.

> I guess I could have gone with my initial instinct of -1 and saved myself some
> mental exercise ;)

Why? Mental exercise is a good way to keep you mental ;)

Noam

From tim.peters at gmail.com  Sat Jun 10 21:22:36 2006
From: tim.peters at gmail.com (Tim Peters)
Date: Sat, 10 Jun 2006 15:22:36 -0400
Subject: [Python-Dev] FYI: wsgiref is now checked in
In-Reply-To: <5.1.1.6.0.20060609124432.02e7c348@mail.telecommunity.com>
References: <5.1.1.6.0.20060609124432.02e7c348@mail.telecommunity.com>
Message-ID: <1f7befae0606101222j7be21daas987e7f24bf5e1445@mail.gmail.com>

Just noticed that, at least on Windows, test_wsgiref fails when Python
is run with -O (but passes without -O):

$ python -O -E -tt ../Lib/test/regrtest.py -v test_wsgiref
test_wsgiref
testAbstractMethods (test.test_wsgiref.HandlerTests) ... ok
testBasicErrorOutput (test.test_wsgiref.HandlerTests) ... ok
testCGIEnviron (test.test_wsgiref.HandlerTests) ... ok
testContentLength (test.test_wsgiref.HandlerTests) ... ok
testEnviron (test.test_wsgiref.HandlerTests) ... ok
testErrorAfterOutput (test.test_wsgiref.HandlerTests) ... ok
testHeaderFormats (test.test_wsgiref.HandlerTests) ... ok
testScheme (test.test_wsgiref.HandlerTests) ... ok
testExtras (test.test_wsgiref.HeaderTests) ... ok
testMappingInterface (test.test_wsgiref.HeaderTests) ... ok
testRequireList (test.test_wsgiref.HeaderTests) ... ok
test_plain_hello (test.test_wsgiref.IntegrationTests) ... ok
test_simple_validation_error (test.test_wsgiref.IntegrationTests) ... FAIL
test_validated_hello (test.test_wsgiref.IntegrationTests) ... ok
testAppURIs (test.test_wsgiref.UtilityTests) ... ok
testCrossDefaults (test.test_wsgiref.UtilityTests) ... ok
testDefaults (test.test_wsgiref.UtilityTests) ... ok
testFileWrapper (test.test_wsgiref.UtilityTests) ... FAIL
testGuessScheme (test.test_wsgiref.UtilityTests) ... ok
testHopByHop (test.test_wsgiref.UtilityTests) ... ok
testNormalizedShifts (test.test_wsgiref.UtilityTests) ... ok
testReqURIs (test.test_wsgiref.UtilityTests) ... ok
testSimpleShifts (test.test_wsgiref.UtilityTests) ... ok

======================================================================
FAIL: test_simple_validation_error (test.test_wsgiref.IntegrationTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\Code\python\lib\test\test_wsgiref.py", line 156, in
test_simple_validation_error
    "AssertionError: Headers (('Content-Type', 'text/plain')) must"
AssertionError: 'ValueError: too many values to unpack' !=
"AssertionError: Headers (('Content-Type', 'text/plain')) mus
t be of type list: <type 'tuple'>"

======================================================================
FAIL: testFileWrapper (test.test_wsgiref.UtilityTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\Code\python\lib\test\test_wsgiref.py", line 312, in testFileWrapper
    self.checkFW("xyz"*50, 120, ["xyz"*40,"xyz"*10])
  File "C:\Code\python\lib\test\test_wsgiref.py", line 211, in checkFW
    compare_generic_iter(make_it,match)
  File "C:\Code\python\lib\test\test_wsgiref.py", line 100, in
compare_generic_iter
    raise AssertionError("Too many items from __getitem__",it)
AssertionError: ('Too many items from __getitem__',
<wsgiref.util.FileWrapper instance at 0x00B432D8>)

----------------------------------------------------------------------
Ran 23 tests in 0.046s

FAILED (failures=2)
test test_wsgiref failed -- errors occurred; run in verbose mode for details
1 test failed:
    test_wsgiref

This may be because compare_generic_iter() uses `assert` statements,
and those vanish under -O.  If so, a test shouldn't normally use
`assert`.  On rare occasions it's appropriate, like test_struct's:

            if x < 0:
                expected += 1L << self.bitsize
                assert expected > 0

That isn't testing any of struct's functionality, it's documenting and
verifying a fundamental _belief_ of the test author's:  the test
itself is buggy if that assert ever triggers.  Or, IOW, it's being
used for what an assert statement should be used for :-)
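
A minimal illustration of the difference, with a hypothetical make_headers()
standing in for the code under test (not taken from test_wsgiref):

    import unittest

    def make_headers():
        # Hypothetical stand-in for the code being tested.
        return [('Content-Type', 'text/plain')]

    class ExampleTests(unittest.TestCase):
        def test_with_plain_assert(self):
            # Under "python -O" this check silently becomes a no-op.
            assert make_headers() == [('Content-Type', 'text/plain')]

        def test_with_unittest_method(self):
            # Still enforced under -O, and reports both values on failure.
            self.assertEqual(make_headers(), [('Content-Type', 'text/plain')])

    if __name__ == '__main__':
        unittest.main()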

From theller at python.net  Sat Jun 10 21:30:52 2006
From: theller at python.net (Thomas Heller)
Date: Sat, 10 Jun 2006 21:30:52 +0200
Subject: [Python-Dev] UUID module
In-Reply-To: <5.1.1.6.0.20060610132400.01e990f8@mail.telecommunity.com>
References: <5.1.1.6.0.20060610114311.01e84e48@mail.telecommunity.com>	<9e804ac0606090626o20008650o93a211da5e6f6f64@mail.gmail.com>	<5.1.1.6.0.20060307103122.042c4ae8@mail.telecommunity.com>	<Pine.LNX.4.58.0606090221330.5223@server1.LFW.org>	<e6baf3$8fl$1@sea.gmane.org>	<9e804ac0606090626o20008650o93a211da5e6f6f64@mail.gmail.com>	<Pine.LNX.4.58.0606100804170.5223@server1.LFW.org>	<5.1.1.6.0.20060610114311.01e84e48@mail.telecommunity.com>
	<e6esj7$34b$1@sea.gmane.org>
	<5.1.1.6.0.20060610132400.01e990f8@mail.telecommunity.com>
Message-ID: <e6f6lb$v7o$1@sea.gmane.org>

Phillip J. Eby wrote:
> At 06:39 PM 6/10/2006 +0200, Thomas Heller wrote:
>>I don't know if this is the uuidgen you're talking about, but on linux there
>>is libuuid:
>>
>> >>> from ctypes import *
>> >>> lib = CDLL("libuuid.so.1")
>> >>> uuid = create_string_buffer(16)
>> >>> lib.uuid_generate(byref(uuid))
>>2131088494
>> >>> from binascii import hexlify
>> >>> hexlify(buffer(uuid))
>>'0c77b6d7e5f940b18e29a749057f6ed4'
>> >>>
> 
> Nice.  :)  But no, this one's a uuidgen() system call on FreeBSD>=5.0 and 
> NetBSD>=2.0; it may be in other BSD variants as well... perhaps OS X?

For completeness :-), although it's probably getting off-topic:

$ uname -a
FreeBSD freebsd 6.0-RELEASE FreeBSD 6.0-RELEASE #0: Thu Nov  3 09:36:13 UTC 2005     root at x64.samsco.home:/usr/obj/usr/src/sys/GENERIC  i386
$ python
Python 2.4.1 (#2, Oct 12 2005, 01:36:32)
[GCC 3.4.4 [FreeBSD] 20050518] on freebsd6
Type "help", "copyright", "credits" or "license" for more information.
>>> from ctypes.util import find_library
>>> find_library("c")
'libc.so.6'
>>> from ctypes import *
>>> libc = CDLL("libc.so.6")
>>> uuid = create_string_buffer(16)
>>> libc.uuidgen(uuid, 1)
0
>>> uuid[:]
'\xd2\xff\x8e\xe3\xa3\xf8\xda\x11\xb0\x04\x00\x0c)\xd1\x18\x06'
>>>
$

On OS X, this call is not available, but /usr/lib/libc.dylib
has a uuid_generate function which is apparently compatible with
the Linux example I posted above.

Thomas


From aahz at pythoncraft.com  Sat Jun 10 21:32:46 2006
From: aahz at pythoncraft.com (Aahz)
Date: Sat, 10 Jun 2006 12:32:46 -0700
Subject: [Python-Dev] beta1 coming real soon
In-Reply-To: <FBDC5F4B-5822-4106-8A3C-380338DF5D63@mac.com>
References: <ee2a432c0606082323t1ee013b6nd91771fb461acede@mail.gmail.com>
	<44899506.2070008@python.net>
	<83C75008-C9DC-4D0A-9D39-21347C36D0E2@mac.com>
	<4489BC63.4010904@v.loewis.de>
	<FBDC5F4B-5822-4106-8A3C-380338DF5D63@mac.com>
Message-ID: <20060610193246.GB132@panix.com>

On Sat, Jun 10, 2006, Ronald Oussoren wrote:
>
> Hopefully Python 2.5 will see more testing on the mac because there  
> will be prompt binary releases of the betas this time around.

If there is in fact a binary beta release for OS X, there will
definitely be testing because we need to double-check all our 2.5 claims
for Python for Dummies (and my co-author only uses a Mac).
-- 
Aahz (aahz at pythoncraft.com)           <*>         http://www.pythoncraft.com/

"I saw `cout' being shifted "Hello world" times to the left and stopped
right there."  --Steve Gonedes

From jcarlson at uci.edu  Sat Jun 10 21:52:05 2006
From: jcarlson at uci.edu (Josiah Carlson)
Date: Sat, 10 Jun 2006 12:52:05 -0700
Subject: [Python-Dev] a note in random.shuffle.__doc__ ...
In-Reply-To: <1CC07EFE-45B2-4AA4-BEFB-0CFFE4CDB1F9@gmail.com>
References: <1CC07EFE-45B2-4AA4-BEFB-0CFFE4CDB1F9@gmail.com>
Message-ID: <20060610124332.F2B2.JCARLSON@uci.edu>


Alex Martelli <aleaxit at gmail.com> wrote:
> 
> ...claims:
> 
> Note that for even rather small len(x), the total number of
> permutations of x is larger than the period of most random number
> generators; this implies that "most" permutations of a long
> sequence can never be generated.
[snip]
> I suspect that the note is just a fossil from a time when the default  
> random number generator was Wichmann-Hill, with a much shorter
> period.  Should this note just be removed, or instead somehow  
> reworded to point out that this is not in fact a problem for the  
> module's current default random number generator?  Opinions welcome!

I'm recovering from a migraine, but here are my thoughts on the topic...

The number of permutations of n items is n!, which is > (n/2)^(n/2).
Solve:  2**19997 < (n/2)^(n/2)
        log_2(2**19997) < log_2((n/2)^(n/2))
        19997 < (n/2)*log(n/2)

Certainly with n >= 4096, the above holds (2048 * 11 = 22528)
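
A quick way to check the bound numerically (a sketch; 19937 is the exponent
of the Mersenne Twister's actual period, 2**19937-1, rather than the 19997
quoted above):

    import math

    def smallest_n_exceeding(period_exponent):
        # Smallest n such that n! > 2**period_exponent, found by summing
        # log2(k) until the running total passes the exponent.
        total, n = 0.0, 1
        while total <= period_exponent:
            n += 1
            total += math.log(n, 2)
        return n

    print(smallest_n_exceeding(19937))   # a little over 2000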

 - Josiah


From ronaldoussoren at mac.com  Sat Jun 10 21:52:24 2006
From: ronaldoussoren at mac.com (Ronald Oussoren)
Date: Sat, 10 Jun 2006 21:52:24 +0200
Subject: [Python-Dev] beta1 coming real soon
In-Reply-To: <20060610193246.GB132@panix.com>
References: <ee2a432c0606082323t1ee013b6nd91771fb461acede@mail.gmail.com>
	<44899506.2070008@python.net>
	<83C75008-C9DC-4D0A-9D39-21347C36D0E2@mac.com>
	<4489BC63.4010904@v.loewis.de>
	<FBDC5F4B-5822-4106-8A3C-380338DF5D63@mac.com>
	<20060610193246.GB132@panix.com>
Message-ID: <A9CCD7EA-39D0-41A7-93CB-E8278E5EE344@mac.com>


On 10-jun-2006, at 21:32, Aahz wrote:

> On Sat, Jun 10, 2006, Ronald Oussoren wrote:
>>
>> Hopefully Python 2.5 will see more testing on the mac because there
>> will be prompt binary releases of the betas this time around.
>
> If there is in fact a binary beta release for OS X, there will
> definitely be testing because we need to double-check all our 2.5  
> claims
> for Python for Dummies (and my co-author only uses a Mac).

There will be a binary release and I'll be making it :-).  Does  
Python for Dummies cover using the Carbon package (the part that  
seems to get limited testing)? Given the title of the book I'd be  
(pleasantly) surprised.

Ronald
  

From martin at v.loewis.de  Sat Jun 10 22:01:21 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Sat, 10 Jun 2006 22:01:21 +0200
Subject: [Python-Dev] UUID module
In-Reply-To: <79990c6b0606100801q55adda87o760bd7987883419@mail.gmail.com>
References: <5.1.1.6.0.20060307103122.042c4ae8@mail.telecommunity.com>	<Pine.LNX.4.58.0606090221330.5223@server1.LFW.org>	<e6baf3$8fl$1@sea.gmane.org>	<9e804ac0606090626o20008650o93a211da5e6f6f64@mail.gmail.com>	<Pine.LNX.4.58.0606100804170.5223@server1.LFW.org>
	<79990c6b0606100801q55adda87o760bd7987883419@mail.gmail.com>
Message-ID: <448B2511.7050601@v.loewis.de>

Paul Moore wrote:
> On 6/10/06, Ka-Ping Yee <python-dev at zesty.ca> wrote:
>> ...so i looked at PEAK's getnodeid48() routine and borrowed the
>> Win32 calls from there, with a comment giving attribution to PEAK.
> 
> Instead of using pywin32, could you use ctypes, as that's part of core
> Python? It looks like the only Win32 API you use is CoCreateGUID, so
> wrapping it should be doable...

http://docs.python.org/dev/lib/module-msilib.html#l2h-5633
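
If that is a pointer to msilib.gen_uuid() (Windows-only, new in 2.5), the
usage would be just:

    import msilib
    guid = msilib.gen_uuid()   # e.g. '{3F2504E0-4F89-11D3-9A0C-0305E82C3301}'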

Regards,
Martin

From skip at pobox.com  Sat Jun 10 22:02:57 2006
From: skip at pobox.com (skip at pobox.com)
Date: Sat, 10 Jun 2006 15:02:57 -0500
Subject: [Python-Dev] UUID module
In-Reply-To: <e6f6lb$v7o$1@sea.gmane.org>
References: <5.1.1.6.0.20060610114311.01e84e48@mail.telecommunity.com>
	<9e804ac0606090626o20008650o93a211da5e6f6f64@mail.gmail.com>
	<5.1.1.6.0.20060307103122.042c4ae8@mail.telecommunity.com>
	<Pine.LNX.4.58.0606090221330.5223@server1.LFW.org>
	<e6baf3$8fl$1@sea.gmane.org>
	<Pine.LNX.4.58.0606100804170.5223@server1.LFW.org>
	<e6esj7$34b$1@sea.gmane.org>
	<5.1.1.6.0.20060610132400.01e990f8@mail.telecommunity.com>
	<e6f6lb$v7o$1@sea.gmane.org>
Message-ID: <17547.9585.192977.312328@montanaro.dyndns.org>


    Thomas> On OS X, this call is not available, but /usr/lib/libc.dylib has
    Thomas> a uuid_generate function which is apparently compatible to the
    Thomas> linux example I posted above.

Confirmed:

    % python
    Python 2.5a2 (trunk:46644M, Jun  4 2006, 10:58:16) 
    [GCC 4.0.0 (Apple Computer, Inc. build 5026)] on darwin
    Type "help", "copyright", "credits" or "license" for more information.
    >>> from ctypes.util import find_library
    >>> print find_library("c")
    /usr/lib/libc.dylib
    >>> from ctypes import *
    >>> libc = CDLL("libc.dylib")
    >>> uuid = create_string_buffer(16)
    >>> libc.uuid_generate(uuid, 1)
    -1073747536
    >>> print repr(uuid[:])
    '\x03K\xab\x0e\x96\xba at 3\xa5\x9f\x04\x1d\x9b\x91\x08\x1b'

Skip

From jcarlson at uci.edu  Sat Jun 10 22:08:08 2006
From: jcarlson at uci.edu (Josiah Carlson)
Date: Sat, 10 Jun 2006 13:08:08 -0700
Subject: [Python-Dev] a note in random.shuffle.__doc__ ...
In-Reply-To: <20060610124332.F2B2.JCARLSON@uci.edu>
References: <1CC07EFE-45B2-4AA4-BEFB-0CFFE4CDB1F9@gmail.com>
	<20060610124332.F2B2.JCARLSON@uci.edu>
Message-ID: <20060610125305.F2B5.JCARLSON@uci.edu>


Josiah Carlson <jcarlson at uci.edu> wrote:
> 
> Alex Martelli <aleaxit at gmail.com> wrote:
> > 
> > ...claims:
> > 
> > Note that for even rather small len(x), the total number of
> > permutations of x is larger than the period of most random number
> > generators; this implies that "most" permutations of a long
> > sequence can never be generated.
> [snip]
> > I suspect that the note is just a fossil from a time when the default  
> > random number generator was Wichmann-Hill, with a much shorter
> > period.  Should this note just be removed, or instead somehow  
> > reworded to point out that this is not in fact a problem for the  
> > module's current default random number generator?  Opinions welcome!
> 
> I'm recovering from a migraine, but here are my thoughts on the topic...
> 
> The number of permutations of n items is n!, which is > (n/2)^(n/2).
> Solve:  2**19997 < (n/2)^(n/2)
>         log_2(2**19997) < log_2((n/2)^(n/2))
>         19997 < (n/2)*log(n/2)
> 
> Certainly with n >= 4096, the above holds (2048 * 11 = 22528)
> 
>  - Josiah

I would also point out that even if MT had a larger period, there would
still be no guarantee that all permutations of a given sequence could
be generated from the PRNG given some arbitrary internal state.

 - Josiah


From joe.gregorio at gmail.com  Fri Jun  9 18:40:33 2006
From: joe.gregorio at gmail.com (Joe Gregorio)
Date: Fri, 9 Jun 2006 12:40:33 -0400
Subject: [Python-Dev] [Web-SIG] wsgiref doc draft; reviews/patches wanted
In-Reply-To: <5.1.1.6.0.20060609115435.00a04748@mail.telecommunity.com>
References: <5.1.1.6.0.20060606184324.0200b360@mail.telecommunity.com>
	<5.1.1.6.0.20060609115435.00a04748@mail.telecommunity.com>
Message-ID: <3f1451f50606090940y29196043i99d10daddce2c297@mail.gmail.com>

On 6/9/06, Phillip J. Eby <pje at telecommunity.com> wrote:
> >4. I believe the order of submodules presented is important and think that
> >    they should be listed with 'handlers' and 'simple_server' first:
>
> I agree that the order is important, but I intentionally chose the current
> order to be a gentle slope of complexity, from the near-trivial functions
> on up to the server/handler framework last.  I'm not sure what ordering
> principle you're suggesting to use instead.

When I first hit the documentation I was confused by the order.
This is wsgiref, a reference implementation of WSGI, and I expected
WSGI servers and middleware, actual implementations of WSGI, to be
the most prominent part of the library and thus presented first.
The utility functions would come afterward, after you got the basic
WSGI pieces in place.

  -joe

-- 
Joe Gregorio        http://bitworking.org

From tom at vector-seven.com  Sat Jun 10 16:27:36 2006
From: tom at vector-seven.com (Thomas Lee)
Date: Sun, 11 Jun 2006 00:27:36 +1000
Subject: [Python-Dev] Switch statement
Message-ID: <20060610142736.GA19094@21degrees.com.au>

Hi all,

As the subject of this e-mail says, the attached patch adds a "switch"
statement to the Python language.

However, I've been reading through PEP 275 and it seems that the PEP
calls for a new opcode - SWITCH - to be added to support the new
construct.

I got a bit lost as to why the SWITCH opcode is necessary for the
implementation of the PEP. The reasoning seems to be
improving performance, but I'm not sure how a new opcode could improve
performance.

Anybody care to take the time to explain this to me, perhaps within the
context of my patch?

Cheers,
Tom

-- 
Tom Lee
http://www.vector-seven.com

-------------- next part --------------
Index: Python/graminit.c
===================================================================
--- Python/graminit.c	(revision 46818)
+++ Python/graminit.c	(working copy)
@@ -696,7 +696,7 @@
 	{1, arcs_34_3},
 	{1, arcs_34_4},
 };
-static arc arcs_35_0[7] = {
+static arc arcs_35_0[8] = {
 	{85, 1},
 	{86, 1},
 	{87, 1},
@@ -704,16 +704,17 @@
 	{89, 1},
 	{17, 1},
 	{90, 1},
+	{91, 1},
 };
 static arc arcs_35_1[1] = {
 	{0, 1},
 };
 static state states_35[2] = {
-	{7, arcs_35_0},
+	{8, arcs_35_0},
 	{1, arcs_35_1},
 };
 static arc arcs_36_0[1] = {
-	{91, 1},
+	{92, 1},
 };
 static arc arcs_36_1[1] = {
 	{26, 2},
@@ -725,8 +726,8 @@
 	{22, 4},
 };
 static arc arcs_36_4[3] = {
-	{92, 1},
-	{93, 5},
+	{93, 1},
+	{94, 5},
 	{0, 4},
 };
 static arc arcs_36_5[1] = {
@@ -749,411 +750,461 @@
 	{1, arcs_36_7},
 };
 static arc arcs_37_0[1] = {
-	{94, 1},
+	{95, 1},
 };
 static arc arcs_37_1[1] = {
-	{26, 2},
+	{82, 2},
 };
 static arc arcs_37_2[1] = {
 	{21, 3},
 };
 static arc arcs_37_3[1] = {
-	{22, 4},
+	{2, 4},
 };
-static arc arcs_37_4[2] = {
-	{93, 5},
-	{0, 4},
+static arc arcs_37_4[1] = {
+	{96, 5},
 };
-static arc arcs_37_5[1] = {
-	{21, 6},
+static arc arcs_37_5[2] = {
+	{97, 6},
+	{94, 7},
 };
 static arc arcs_37_6[1] = {
-	{22, 7},
+	{82, 8},
 };
 static arc arcs_37_7[1] = {
-	{0, 7},
+	{21, 9},
 };
-static state states_37[8] = {
+static arc arcs_37_8[1] = {
+	{21, 10},
+};
+static arc arcs_37_9[1] = {
+	{22, 11},
+};
+static arc arcs_37_10[1] = {
+	{22, 12},
+};
+static arc arcs_37_11[1] = {
+	{98, 13},
+};
+static arc arcs_37_12[3] = {
+	{97, 6},
+	{94, 7},
+	{98, 13},
+};
+static arc arcs_37_13[1] = {
+	{0, 13},
+};
+static state states_37[14] = {
 	{1, arcs_37_0},
 	{1, arcs_37_1},
 	{1, arcs_37_2},
 	{1, arcs_37_3},
-	{2, arcs_37_4},
-	{1, arcs_37_5},
+	{1, arcs_37_4},
+	{2, arcs_37_5},
 	{1, arcs_37_6},
 	{1, arcs_37_7},
+	{1, arcs_37_8},
+	{1, arcs_37_9},
+	{1, arcs_37_10},
+	{1, arcs_37_11},
+	{3, arcs_37_12},
+	{1, arcs_37_13},
 };
 static arc arcs_38_0[1] = {
-	{95, 1},
+	{99, 1},
 };
 static arc arcs_38_1[1] = {
-	{59, 2},
+	{26, 2},
 };
 static arc arcs_38_2[1] = {
-	{83, 3},
+	{21, 3},
 };
 static arc arcs_38_3[1] = {
-	{9, 4},
+	{22, 4},
 };
-static arc arcs_38_4[1] = {
-	{21, 5},
+static arc arcs_38_4[2] = {
+	{94, 5},
+	{0, 4},
 };
 static arc arcs_38_5[1] = {
-	{22, 6},
+	{21, 6},
 };
-static arc arcs_38_6[2] = {
-	{93, 7},
-	{0, 6},
+static arc arcs_38_6[1] = {
+	{22, 7},
 };
 static arc arcs_38_7[1] = {
-	{21, 8},
+	{0, 7},
 };
-static arc arcs_38_8[1] = {
-	{22, 9},
-};
-static arc arcs_38_9[1] = {
-	{0, 9},
-};
-static state states_38[10] = {
+static state states_38[8] = {
 	{1, arcs_38_0},
 	{1, arcs_38_1},
 	{1, arcs_38_2},
 	{1, arcs_38_3},
-	{1, arcs_38_4},
+	{2, arcs_38_4},
 	{1, arcs_38_5},
-	{2, arcs_38_6},
+	{1, arcs_38_6},
 	{1, arcs_38_7},
-	{1, arcs_38_8},
-	{1, arcs_38_9},
 };
 static arc arcs_39_0[1] = {
-	{96, 1},
+	{100, 1},
 };
 static arc arcs_39_1[1] = {
-	{21, 2},
+	{59, 2},
 };
 static arc arcs_39_2[1] = {
-	{22, 3},
+	{83, 3},
 };
-static arc arcs_39_3[2] = {
-	{97, 4},
-	{98, 5},
+static arc arcs_39_3[1] = {
+	{9, 4},
 };
 static arc arcs_39_4[1] = {
-	{21, 6},
+	{21, 5},
 };
 static arc arcs_39_5[1] = {
-	{21, 7},
+	{22, 6},
 };
-static arc arcs_39_6[1] = {
-	{22, 8},
+static arc arcs_39_6[2] = {
+	{94, 7},
+	{0, 6},
 };
 static arc arcs_39_7[1] = {
+	{21, 8},
+};
+static arc arcs_39_8[1] = {
 	{22, 9},
 };
-static arc arcs_39_8[4] = {
-	{97, 4},
-	{93, 10},
-	{98, 5},
-	{0, 8},
-};
 static arc arcs_39_9[1] = {
 	{0, 9},
 };
-static arc arcs_39_10[1] = {
-	{21, 11},
-};
-static arc arcs_39_11[1] = {
-	{22, 12},
-};
-static arc arcs_39_12[2] = {
-	{98, 5},
-	{0, 12},
-};
-static state states_39[13] = {
+static state states_39[10] = {
 	{1, arcs_39_0},
 	{1, arcs_39_1},
 	{1, arcs_39_2},
-	{2, arcs_39_3},
+	{1, arcs_39_3},
 	{1, arcs_39_4},
 	{1, arcs_39_5},
-	{1, arcs_39_6},
+	{2, arcs_39_6},
 	{1, arcs_39_7},
-	{4, arcs_39_8},
+	{1, arcs_39_8},
 	{1, arcs_39_9},
-	{1, arcs_39_10},
-	{1, arcs_39_11},
-	{2, arcs_39_12},
 };
 static arc arcs_40_0[1] = {
-	{99, 1},
+	{101, 1},
 };
 static arc arcs_40_1[1] = {
-	{26, 2},
+	{21, 2},
 };
-static arc arcs_40_2[2] = {
-	{100, 3},
-	{21, 4},
+static arc arcs_40_2[1] = {
+	{22, 3},
 };
-static arc arcs_40_3[1] = {
-	{21, 4},
+static arc arcs_40_3[2] = {
+	{102, 4},
+	{103, 5},
 };
 static arc arcs_40_4[1] = {
-	{22, 5},
+	{21, 6},
 };
 static arc arcs_40_5[1] = {
-	{0, 5},
+	{21, 7},
 };
-static state states_40[6] = {
+static arc arcs_40_6[1] = {
+	{22, 8},
+};
+static arc arcs_40_7[1] = {
+	{22, 9},
+};
+static arc arcs_40_8[4] = {
+	{102, 4},
+	{94, 10},
+	{103, 5},
+	{0, 8},
+};
+static arc arcs_40_9[1] = {
+	{0, 9},
+};
+static arc arcs_40_10[1] = {
+	{21, 11},
+};
+static arc arcs_40_11[1] = {
+	{22, 12},
+};
+static arc arcs_40_12[2] = {
+	{103, 5},
+	{0, 12},
+};
+static state states_40[13] = {
 	{1, arcs_40_0},
 	{1, arcs_40_1},
-	{2, arcs_40_2},
-	{1, arcs_40_3},
+	{1, arcs_40_2},
+	{2, arcs_40_3},
 	{1, arcs_40_4},
 	{1, arcs_40_5},
+	{1, arcs_40_6},
+	{1, arcs_40_7},
+	{4, arcs_40_8},
+	{1, arcs_40_9},
+	{1, arcs_40_10},
+	{1, arcs_40_11},
+	{2, arcs_40_12},
 };
-static arc arcs_41_0[2] = {
+static arc arcs_41_0[1] = {
+	{104, 1},
+};
+static arc arcs_41_1[1] = {
+	{26, 2},
+};
+static arc arcs_41_2[2] = {
+	{105, 3},
+	{21, 4},
+};
+static arc arcs_41_3[1] = {
+	{21, 4},
+};
+static arc arcs_41_4[1] = {
+	{22, 5},
+};
+static arc arcs_41_5[1] = {
+	{0, 5},
+};
+static state states_41[6] = {
+	{1, arcs_41_0},
+	{1, arcs_41_1},
+	{2, arcs_41_2},
+	{1, arcs_41_3},
+	{1, arcs_41_4},
+	{1, arcs_41_5},
+};
+static arc arcs_42_0[2] = {
 	{78, 1},
 	{19, 1},
 };
-static arc arcs_41_1[1] = {
+static arc arcs_42_1[1] = {
 	{82, 2},
 };
-static arc arcs_41_2[1] = {
+static arc arcs_42_2[1] = {
 	{0, 2},
 };
-static state states_41[3] = {
-	{2, arcs_41_0},
-	{1, arcs_41_1},
-	{1, arcs_41_2},
+static state states_42[3] = {
+	{2, arcs_42_0},
+	{1, arcs_42_1},
+	{1, arcs_42_2},
 };
-static arc arcs_42_0[1] = {
-	{101, 1},
+static arc arcs_43_0[1] = {
+	{106, 1},
 };
-static arc arcs_42_1[2] = {
+static arc arcs_43_1[2] = {
 	{26, 2},
 	{0, 1},
 };
-static arc arcs_42_2[2] = {
+static arc arcs_43_2[2] = {
 	{27, 3},
 	{0, 2},
 };
-static arc arcs_42_3[1] = {
+static arc arcs_43_3[1] = {
 	{26, 4},
 };
-static arc arcs_42_4[1] = {
+static arc arcs_43_4[1] = {
 	{0, 4},
 };
-static state states_42[5] = {
-	{1, arcs_42_0},
-	{2, arcs_42_1},
-	{2, arcs_42_2},
-	{1, arcs_42_3},
-	{1, arcs_42_4},
+static state states_43[5] = {
+	{1, arcs_43_0},
+	{2, arcs_43_1},
+	{2, arcs_43_2},
+	{1, arcs_43_3},
+	{1, arcs_43_4},
 };
-static arc arcs_43_0[2] = {
+static arc arcs_44_0[2] = {
 	{3, 1},
 	{2, 2},
 };
-static arc arcs_43_1[1] = {
+static arc arcs_44_1[1] = {
 	{0, 1},
 };
-static arc arcs_43_2[1] = {
-	{102, 3},
+static arc arcs_44_2[1] = {
+	{96, 3},
 };
-static arc arcs_43_3[1] = {
+static arc arcs_44_3[1] = {
 	{6, 4},
 };
-static arc arcs_43_4[2] = {
+static arc arcs_44_4[2] = {
 	{6, 4},
-	{103, 1},
+	{98, 1},
 };
-static state states_43[5] = {
-	{2, arcs_43_0},
-	{1, arcs_43_1},
-	{1, arcs_43_2},
-	{1, arcs_43_3},
-	{2, arcs_43_4},
+static state states_44[5] = {
+	{2, arcs_44_0},
+	{1, arcs_44_1},
+	{1, arcs_44_2},
+	{1, arcs_44_3},
+	{2, arcs_44_4},
 };
-static arc arcs_44_0[1] = {
-	{105, 1},
+static arc arcs_45_0[1] = {
+	{108, 1},
 };
-static arc arcs_44_1[2] = {
+static arc arcs_45_1[2] = {
 	{27, 2},
 	{0, 1},
 };
-static arc arcs_44_2[1] = {
-	{105, 3},
+static arc arcs_45_2[1] = {
+	{108, 3},
 };
-static arc arcs_44_3[2] = {
+static arc arcs_45_3[2] = {
 	{27, 4},
 	{0, 3},
 };
-static arc arcs_44_4[2] = {
-	{105, 3},
+static arc arcs_45_4[2] = {
+	{108, 3},
 	{0, 4},
 };
-static state states_44[5] = {
-	{1, arcs_44_0},
-	{2, arcs_44_1},
-	{1, arcs_44_2},
-	{2, arcs_44_3},
-	{2, arcs_44_4},
+static state states_45[5] = {
+	{1, arcs_45_0},
+	{2, arcs_45_1},
+	{1, arcs_45_2},
+	{2, arcs_45_3},
+	{2, arcs_45_4},
 };
-static arc arcs_45_0[2] = {
-	{106, 1},
-	{107, 1},
+static arc arcs_46_0[2] = {
+	{109, 1},
+	{110, 1},
 };
-static arc arcs_45_1[1] = {
+static arc arcs_46_1[1] = {
 	{0, 1},
 };
-static state states_45[2] = {
-	{2, arcs_45_0},
-	{1, arcs_45_1},
+static state states_46[2] = {
+	{2, arcs_46_0},
+	{1, arcs_46_1},
 };
-static arc arcs_46_0[1] = {
-	{108, 1},
+static arc arcs_47_0[1] = {
+	{111, 1},
 };
-static arc arcs_46_1[2] = {
+static arc arcs_47_1[2] = {
 	{23, 2},
 	{21, 3},
 };
-static arc arcs_46_2[1] = {
+static arc arcs_47_2[1] = {
 	{21, 3},
 };
-static arc arcs_46_3[1] = {
-	{105, 4},
+static arc arcs_47_3[1] = {
+	{108, 4},
 };
-static arc arcs_46_4[1] = {
+static arc arcs_47_4[1] = {
 	{0, 4},
 };
-static state states_46[5] = {
-	{1, arcs_46_0},
-	{2, arcs_46_1},
-	{1, arcs_46_2},
-	{1, arcs_46_3},
-	{1, arcs_46_4},
+static state states_47[5] = {
+	{1, arcs_47_0},
+	{2, arcs_47_1},
+	{1, arcs_47_2},
+	{1, arcs_47_3},
+	{1, arcs_47_4},
 };
-static arc arcs_47_0[2] = {
-	{106, 1},
-	{109, 2},
+static arc arcs_48_0[2] = {
+	{109, 1},
+	{112, 2},
 };
-static arc arcs_47_1[2] = {
-	{91, 3},
+static arc arcs_48_1[2] = {
+	{92, 3},
 	{0, 1},
 };
-static arc arcs_47_2[1] = {
+static arc arcs_48_2[1] = {
 	{0, 2},
 };
-static arc arcs_47_3[1] = {
-	{106, 4},
+static arc arcs_48_3[1] = {
+	{109, 4},
 };
-static arc arcs_47_4[1] = {
-	{93, 5},
+static arc arcs_48_4[1] = {
+	{94, 5},
 };
-static arc arcs_47_5[1] = {
+static arc arcs_48_5[1] = {
 	{26, 2},
 };
-static state states_47[6] = {
-	{2, arcs_47_0},
-	{2, arcs_47_1},
-	{1, arcs_47_2},
-	{1, arcs_47_3},
-	{1, arcs_47_4},
-	{1, arcs_47_5},
-};
-static arc arcs_48_0[1] = {
-	{110, 1},
-};
-static arc arcs_48_1[2] = {
-	{111, 0},
-	{0, 1},
-};
-static state states_48[2] = {
-	{1, arcs_48_0},
+static state states_48[6] = {
+	{2, arcs_48_0},
 	{2, arcs_48_1},
+	{1, arcs_48_2},
+	{1, arcs_48_3},
+	{1, arcs_48_4},
+	{1, arcs_48_5},
 };
 static arc arcs_49_0[1] = {
-	{112, 1},
+	{113, 1},
 };
 static arc arcs_49_1[2] = {
-	{113, 0},
+	{114, 0},
 	{0, 1},
 };
 static state states_49[2] = {
 	{1, arcs_49_0},
 	{2, arcs_49_1},
 };
-static arc arcs_50_0[2] = {
-	{114, 1},
+static arc arcs_50_0[1] = {
+	{115, 1},
+};
+static arc arcs_50_1[2] = {
+	{116, 0},
+	{0, 1},
+};
+static state states_50[2] = {
+	{1, arcs_50_0},
+	{2, arcs_50_1},
+};
+static arc arcs_51_0[2] = {
+	{117, 1},
+	{118, 2},
+};
+static arc arcs_51_1[1] = {
 	{115, 2},
 };
-static arc arcs_50_1[1] = {
-	{112, 2},
-};
-static arc arcs_50_2[1] = {
+static arc arcs_51_2[1] = {
 	{0, 2},
 };
-static state states_50[3] = {
-	{2, arcs_50_0},
-	{1, arcs_50_1},
-	{1, arcs_50_2},
+static state states_51[3] = {
+	{2, arcs_51_0},
+	{1, arcs_51_1},
+	{1, arcs_51_2},
 };
-static arc arcs_51_0[1] = {
+static arc arcs_52_0[1] = {
 	{82, 1},
 };
-static arc arcs_51_1[2] = {
-	{116, 0},
+static arc arcs_52_1[2] = {
+	{119, 0},
 	{0, 1},
 };
-static state states_51[2] = {
-	{1, arcs_51_0},
-	{2, arcs_51_1},
+static state states_52[2] = {
+	{1, arcs_52_0},
+	{2, arcs_52_1},
 };
-static arc arcs_52_0[10] = {
-	{117, 1},
-	{118, 1},
-	{119, 1},
+static arc arcs_53_0[10] = {
 	{120, 1},
 	{121, 1},
 	{122, 1},
 	{123, 1},
+	{124, 1},
+	{125, 1},
+	{126, 1},
 	{83, 1},
-	{114, 2},
-	{124, 3},
+	{117, 2},
+	{127, 3},
 };
-static arc arcs_52_1[1] = {
+static arc arcs_53_1[1] = {
 	{0, 1},
 };
-static arc arcs_52_2[1] = {
+static arc arcs_53_2[1] = {
 	{83, 1},
 };
-static arc arcs_52_3[2] = {
-	{114, 1},
+static arc arcs_53_3[2] = {
+	{117, 1},
 	{0, 3},
 };
-static state states_52[4] = {
-	{10, arcs_52_0},
-	{1, arcs_52_1},
-	{1, arcs_52_2},
-	{2, arcs_52_3},
+static state states_53[4] = {
+	{10, arcs_53_0},
+	{1, arcs_53_1},
+	{1, arcs_53_2},
+	{2, arcs_53_3},
 };
-static arc arcs_53_0[1] = {
-	{125, 1},
-};
-static arc arcs_53_1[2] = {
-	{126, 0},
-	{0, 1},
-};
-static state states_53[2] = {
-	{1, arcs_53_0},
-	{2, arcs_53_1},
-};
 static arc arcs_54_0[1] = {
-	{127, 1},
+	{128, 1},
 };
 static arc arcs_54_1[2] = {
-	{128, 0},
+	{129, 0},
 	{0, 1},
 };
 static state states_54[2] = {
@@ -1161,10 +1212,10 @@
 	{2, arcs_54_1},
 };
 static arc arcs_55_0[1] = {
-	{129, 1},
+	{130, 1},
 };
 static arc arcs_55_1[2] = {
-	{130, 0},
+	{131, 0},
 	{0, 1},
 };
 static state states_55[2] = {
@@ -1172,23 +1223,22 @@
 	{2, arcs_55_1},
 };
 static arc arcs_56_0[1] = {
-	{131, 1},
+	{132, 1},
 };
-static arc arcs_56_1[3] = {
-	{132, 0},
-	{57, 0},
+static arc arcs_56_1[2] = {
+	{133, 0},
 	{0, 1},
 };
 static state states_56[2] = {
 	{1, arcs_56_0},
-	{3, arcs_56_1},
+	{2, arcs_56_1},
 };
 static arc arcs_57_0[1] = {
-	{133, 1},
+	{134, 1},
 };
 static arc arcs_57_1[3] = {
-	{134, 0},
 	{135, 0},
+	{57, 0},
 	{0, 1},
 };
 static state states_57[2] = {
@@ -1198,142 +1248,128 @@
 static arc arcs_58_0[1] = {
 	{136, 1},
 };
-static arc arcs_58_1[5] = {
-	{28, 0},
+static arc arcs_58_1[3] = {
 	{137, 0},
 	{138, 0},
-	{139, 0},
 	{0, 1},
 };
 static state states_58[2] = {
 	{1, arcs_58_0},
-	{5, arcs_58_1},
+	{3, arcs_58_1},
 };
-static arc arcs_59_0[4] = {
-	{134, 1},
-	{135, 1},
-	{140, 1},
-	{141, 2},
+static arc arcs_59_0[1] = {
+	{139, 1},
 };
-static arc arcs_59_1[1] = {
-	{136, 2},
+static arc arcs_59_1[5] = {
+	{28, 0},
+	{140, 0},
+	{141, 0},
+	{142, 0},
+	{0, 1},
 };
-static arc arcs_59_2[1] = {
+static state states_59[2] = {
+	{1, arcs_59_0},
+	{5, arcs_59_1},
+};
+static arc arcs_60_0[4] = {
+	{137, 1},
+	{138, 1},
+	{143, 1},
+	{144, 2},
+};
+static arc arcs_60_1[1] = {
+	{139, 2},
+};
+static arc arcs_60_2[1] = {
 	{0, 2},
 };
-static state states_59[3] = {
-	{4, arcs_59_0},
-	{1, arcs_59_1},
-	{1, arcs_59_2},
+static state states_60[3] = {
+	{4, arcs_60_0},
+	{1, arcs_60_1},
+	{1, arcs_60_2},
 };
-static arc arcs_60_0[1] = {
-	{142, 1},
+static arc arcs_61_0[1] = {
+	{145, 1},
 };
-static arc arcs_60_1[3] = {
-	{143, 1},
+static arc arcs_61_1[3] = {
+	{146, 1},
 	{29, 2},
 	{0, 1},
 };
-static arc arcs_60_2[1] = {
-	{136, 3},
+static arc arcs_61_2[1] = {
+	{139, 3},
 };
-static arc arcs_60_3[1] = {
+static arc arcs_61_3[1] = {
 	{0, 3},
 };
-static state states_60[4] = {
-	{1, arcs_60_0},
-	{3, arcs_60_1},
-	{1, arcs_60_2},
-	{1, arcs_60_3},
+static state states_61[4] = {
+	{1, arcs_61_0},
+	{3, arcs_61_1},
+	{1, arcs_61_2},
+	{1, arcs_61_3},
 };
-static arc arcs_61_0[7] = {
+static arc arcs_62_0[7] = {
 	{13, 1},
-	{145, 2},
-	{148, 3},
-	{151, 4},
+	{148, 2},
+	{151, 3},
+	{154, 4},
 	{19, 5},
-	{153, 5},
-	{154, 6},
+	{156, 5},
+	{157, 6},
 };
-static arc arcs_61_1[3] = {
+static arc arcs_62_1[3] = {
 	{43, 7},
-	{144, 7},
+	{147, 7},
 	{15, 5},
 };
-static arc arcs_61_2[2] = {
-	{146, 8},
-	{147, 5},
-};
-static arc arcs_61_3[2] = {
-	{149, 9},
+static arc arcs_62_2[2] = {
+	{149, 8},
 	{150, 5},
 };
-static arc arcs_61_4[1] = {
-	{152, 10},
+static arc arcs_62_3[2] = {
+	{152, 9},
+	{153, 5},
 };
-static arc arcs_61_5[1] = {
+static arc arcs_62_4[1] = {
+	{155, 10},
+};
+static arc arcs_62_5[1] = {
 	{0, 5},
 };
-static arc arcs_61_6[2] = {
-	{154, 6},
+static arc arcs_62_6[2] = {
+	{157, 6},
 	{0, 6},
 };
-static arc arcs_61_7[1] = {
+static arc arcs_62_7[1] = {
 	{15, 5},
 };
-static arc arcs_61_8[1] = {
-	{147, 5},
-};
-static arc arcs_61_9[1] = {
+static arc arcs_62_8[1] = {
 	{150, 5},
 };
-static arc arcs_61_10[1] = {
-	{151, 5},
+static arc arcs_62_9[1] = {
+	{153, 5},
 };
-static state states_61[11] = {
-	{7, arcs_61_0},
-	{3, arcs_61_1},
-	{2, arcs_61_2},
-	{2, arcs_61_3},
-	{1, arcs_61_4},
-	{1, arcs_61_5},
-	{2, arcs_61_6},
-	{1, arcs_61_7},
-	{1, arcs_61_8},
-	{1, arcs_61_9},
-	{1, arcs_61_10},
+static arc arcs_62_10[1] = {
+	{154, 5},
 };
-static arc arcs_62_0[1] = {
-	{26, 1},
-};
-static arc arcs_62_1[3] = {
-	{155, 2},
-	{27, 3},
-	{0, 1},
-};
-static arc arcs_62_2[1] = {
-	{0, 2},
-};
-static arc arcs_62_3[2] = {
-	{26, 4},
-	{0, 3},
-};
-static arc arcs_62_4[2] = {
-	{27, 3},
-	{0, 4},
-};
-static state states_62[5] = {
-	{1, arcs_62_0},
+static state states_62[11] = {
+	{7, arcs_62_0},
 	{3, arcs_62_1},
-	{1, arcs_62_2},
+	{2, arcs_62_2},
 	{2, arcs_62_3},
-	{2, arcs_62_4},
+	{1, arcs_62_4},
+	{1, arcs_62_5},
+	{2, arcs_62_6},
+	{1, arcs_62_7},
+	{1, arcs_62_8},
+	{1, arcs_62_9},
+	{1, arcs_62_10},
 };
 static arc arcs_63_0[1] = {
 	{26, 1},
 };
 static arc arcs_63_1[3] = {
-	{156, 2},
+	{158, 2},
 	{27, 3},
 	{0, 1},
 };
@@ -1356,153 +1392,163 @@
 	{2, arcs_63_4},
 };
 static arc arcs_64_0[1] = {
-	{108, 1},
+	{26, 1},
 };
-static arc arcs_64_1[2] = {
-	{23, 2},
-	{21, 3},
+static arc arcs_64_1[3] = {
+	{159, 2},
+	{27, 3},
+	{0, 1},
 };
 static arc arcs_64_2[1] = {
-	{21, 3},
+	{0, 2},
 };
-static arc arcs_64_3[1] = {
+static arc arcs_64_3[2] = {
 	{26, 4},
+	{0, 3},
 };
-static arc arcs_64_4[1] = {
+static arc arcs_64_4[2] = {
+	{27, 3},
 	{0, 4},
 };
 static state states_64[5] = {
 	{1, arcs_64_0},
-	{2, arcs_64_1},
+	{3, arcs_64_1},
 	{1, arcs_64_2},
-	{1, arcs_64_3},
-	{1, arcs_64_4},
+	{2, arcs_64_3},
+	{2, arcs_64_4},
 };
-static arc arcs_65_0[3] = {
+static arc arcs_65_0[1] = {
+	{111, 1},
+};
+static arc arcs_65_1[2] = {
+	{23, 2},
+	{21, 3},
+};
+static arc arcs_65_2[1] = {
+	{21, 3},
+};
+static arc arcs_65_3[1] = {
+	{26, 4},
+};
+static arc arcs_65_4[1] = {
+	{0, 4},
+};
+static state states_65[5] = {
+	{1, arcs_65_0},
+	{2, arcs_65_1},
+	{1, arcs_65_2},
+	{1, arcs_65_3},
+	{1, arcs_65_4},
+};
+static arc arcs_66_0[3] = {
 	{13, 1},
-	{145, 2},
+	{148, 2},
 	{75, 3},
 };
-static arc arcs_65_1[2] = {
+static arc arcs_66_1[2] = {
 	{14, 4},
 	{15, 5},
 };
-static arc arcs_65_2[1] = {
-	{157, 6},
+static arc arcs_66_2[1] = {
+	{160, 6},
 };
-static arc arcs_65_3[1] = {
+static arc arcs_66_3[1] = {
 	{19, 5},
 };
-static arc arcs_65_4[1] = {
+static arc arcs_66_4[1] = {
 	{15, 5},
 };
-static arc arcs_65_5[1] = {
+static arc arcs_66_5[1] = {
 	{0, 5},
 };
-static arc arcs_65_6[1] = {
-	{147, 5},
+static arc arcs_66_6[1] = {
+	{150, 5},
 };
-static state states_65[7] = {
-	{3, arcs_65_0},
-	{2, arcs_65_1},
-	{1, arcs_65_2},
-	{1, arcs_65_3},
-	{1, arcs_65_4},
-	{1, arcs_65_5},
-	{1, arcs_65_6},
+static state states_66[7] = {
+	{3, arcs_66_0},
+	{2, arcs_66_1},
+	{1, arcs_66_2},
+	{1, arcs_66_3},
+	{1, arcs_66_4},
+	{1, arcs_66_5},
+	{1, arcs_66_6},
 };
-static arc arcs_66_0[1] = {
-	{158, 1},
+static arc arcs_67_0[1] = {
+	{161, 1},
 };
-static arc arcs_66_1[2] = {
+static arc arcs_67_1[2] = {
 	{27, 2},
 	{0, 1},
 };
-static arc arcs_66_2[2] = {
-	{158, 1},
+static arc arcs_67_2[2] = {
+	{161, 1},
 	{0, 2},
 };
-static state states_66[3] = {
-	{1, arcs_66_0},
-	{2, arcs_66_1},
-	{2, arcs_66_2},
+static state states_67[3] = {
+	{1, arcs_67_0},
+	{2, arcs_67_1},
+	{2, arcs_67_2},
 };
-static arc arcs_67_0[3] = {
+static arc arcs_68_0[3] = {
 	{75, 1},
 	{26, 2},
 	{21, 3},
 };
-static arc arcs_67_1[1] = {
+static arc arcs_68_1[1] = {
 	{75, 4},
 };
-static arc arcs_67_2[2] = {
+static arc arcs_68_2[2] = {
 	{21, 3},
 	{0, 2},
 };
-static arc arcs_67_3[3] = {
+static arc arcs_68_3[3] = {
 	{26, 5},
-	{159, 6},
+	{162, 6},
 	{0, 3},
 };
-static arc arcs_67_4[1] = {
+static arc arcs_68_4[1] = {
 	{75, 6},
 };
-static arc arcs_67_5[2] = {
-	{159, 6},
+static arc arcs_68_5[2] = {
+	{162, 6},
 	{0, 5},
 };
-static arc arcs_67_6[1] = {
+static arc arcs_68_6[1] = {
 	{0, 6},
 };
-static state states_67[7] = {
-	{3, arcs_67_0},
-	{1, arcs_67_1},
-	{2, arcs_67_2},
-	{3, arcs_67_3},
-	{1, arcs_67_4},
-	{2, arcs_67_5},
-	{1, arcs_67_6},
+static state states_68[7] = {
+	{3, arcs_68_0},
+	{1, arcs_68_1},
+	{2, arcs_68_2},
+	{3, arcs_68_3},
+	{1, arcs_68_4},
+	{2, arcs_68_5},
+	{1, arcs_68_6},
 };
-static arc arcs_68_0[1] = {
+static arc arcs_69_0[1] = {
 	{21, 1},
 };
-static arc arcs_68_1[2] = {
+static arc arcs_69_1[2] = {
 	{26, 2},
 	{0, 1},
 };
-static arc arcs_68_2[1] = {
+static arc arcs_69_2[1] = {
 	{0, 2},
 };
-static state states_68[3] = {
-	{1, arcs_68_0},
-	{2, arcs_68_1},
-	{1, arcs_68_2},
-};
-static arc arcs_69_0[1] = {
-	{82, 1},
-};
-static arc arcs_69_1[2] = {
-	{27, 2},
-	{0, 1},
-};
-static arc arcs_69_2[2] = {
-	{82, 1},
-	{0, 2},
-};
 static state states_69[3] = {
 	{1, arcs_69_0},
 	{2, arcs_69_1},
-	{2, arcs_69_2},
+	{1, arcs_69_2},
 };
 static arc arcs_70_0[1] = {
-	{26, 1},
+	{82, 1},
 };
 static arc arcs_70_1[2] = {
 	{27, 2},
 	{0, 1},
 };
 static arc arcs_70_2[2] = {
-	{26, 1},
+	{82, 1},
 	{0, 2},
 };
 static state states_70[3] = {
@@ -1513,445 +1559,463 @@
 static arc arcs_71_0[1] = {
 	{26, 1},
 };
-static arc arcs_71_1[1] = {
+static arc arcs_71_1[2] = {
+	{27, 2},
+	{0, 1},
+};
+static arc arcs_71_2[2] = {
+	{26, 1},
+	{0, 2},
+};
+static state states_71[3] = {
+	{1, arcs_71_0},
+	{2, arcs_71_1},
+	{2, arcs_71_2},
+};
+static arc arcs_72_0[1] = {
+	{26, 1},
+};
+static arc arcs_72_1[1] = {
 	{21, 2},
 };
-static arc arcs_71_2[1] = {
+static arc arcs_72_2[1] = {
 	{26, 3},
 };
-static arc arcs_71_3[2] = {
+static arc arcs_72_3[2] = {
 	{27, 4},
 	{0, 3},
 };
-static arc arcs_71_4[2] = {
+static arc arcs_72_4[2] = {
 	{26, 1},
 	{0, 4},
 };
-static state states_71[5] = {
-	{1, arcs_71_0},
-	{1, arcs_71_1},
-	{1, arcs_71_2},
-	{2, arcs_71_3},
-	{2, arcs_71_4},
+static state states_72[5] = {
+	{1, arcs_72_0},
+	{1, arcs_72_1},
+	{1, arcs_72_2},
+	{2, arcs_72_3},
+	{2, arcs_72_4},
 };
-static arc arcs_72_0[1] = {
-	{160, 1},
+static arc arcs_73_0[1] = {
+	{163, 1},
 };
-static arc arcs_72_1[1] = {
+static arc arcs_73_1[1] = {
 	{19, 2},
 };
-static arc arcs_72_2[2] = {
+static arc arcs_73_2[2] = {
 	{13, 3},
 	{21, 4},
 };
-static arc arcs_72_3[2] = {
+static arc arcs_73_3[2] = {
 	{9, 5},
 	{15, 6},
 };
-static arc arcs_72_4[1] = {
+static arc arcs_73_4[1] = {
 	{22, 7},
 };
-static arc arcs_72_5[1] = {
+static arc arcs_73_5[1] = {
 	{15, 6},
 };
-static arc arcs_72_6[1] = {
+static arc arcs_73_6[1] = {
 	{21, 4},
 };
-static arc arcs_72_7[1] = {
+static arc arcs_73_7[1] = {
 	{0, 7},
 };
-static state states_72[8] = {
-	{1, arcs_72_0},
-	{1, arcs_72_1},
-	{2, arcs_72_2},
-	{2, arcs_72_3},
-	{1, arcs_72_4},
-	{1, arcs_72_5},
-	{1, arcs_72_6},
-	{1, arcs_72_7},
+static state states_73[8] = {
+	{1, arcs_73_0},
+	{1, arcs_73_1},
+	{2, arcs_73_2},
+	{2, arcs_73_3},
+	{1, arcs_73_4},
+	{1, arcs_73_5},
+	{1, arcs_73_6},
+	{1, arcs_73_7},
 };
-static arc arcs_73_0[3] = {
-	{161, 1},
+static arc arcs_74_0[3] = {
+	{164, 1},
 	{28, 2},
 	{29, 3},
 };
-static arc arcs_73_1[2] = {
+static arc arcs_74_1[2] = {
 	{27, 4},
 	{0, 1},
 };
-static arc arcs_73_2[1] = {
+static arc arcs_74_2[1] = {
 	{26, 5},
 };
-static arc arcs_73_3[1] = {
+static arc arcs_74_3[1] = {
 	{26, 6},
 };
-static arc arcs_73_4[4] = {
-	{161, 1},
+static arc arcs_74_4[4] = {
+	{164, 1},
 	{28, 2},
 	{29, 3},
 	{0, 4},
 };
-static arc arcs_73_5[2] = {
+static arc arcs_74_5[2] = {
 	{27, 7},
 	{0, 5},
 };
-static arc arcs_73_6[1] = {
+static arc arcs_74_6[1] = {
 	{0, 6},
 };
-static arc arcs_73_7[1] = {
+static arc arcs_74_7[1] = {
 	{29, 3},
 };
-static state states_73[8] = {
-	{3, arcs_73_0},
-	{2, arcs_73_1},
-	{1, arcs_73_2},
-	{1, arcs_73_3},
-	{4, arcs_73_4},
-	{2, arcs_73_5},
-	{1, arcs_73_6},
-	{1, arcs_73_7},
+static state states_74[8] = {
+	{3, arcs_74_0},
+	{2, arcs_74_1},
+	{1, arcs_74_2},
+	{1, arcs_74_3},
+	{4, arcs_74_4},
+	{2, arcs_74_5},
+	{1, arcs_74_6},
+	{1, arcs_74_7},
 };
-static arc arcs_74_0[1] = {
+static arc arcs_75_0[1] = {
 	{26, 1},
 };
-static arc arcs_74_1[3] = {
-	{156, 2},
+static arc arcs_75_1[3] = {
+	{159, 2},
 	{25, 3},
 	{0, 1},
 };
-static arc arcs_74_2[1] = {
+static arc arcs_75_2[1] = {
 	{0, 2},
 };
-static arc arcs_74_3[1] = {
+static arc arcs_75_3[1] = {
 	{26, 2},
 };
-static state states_74[4] = {
-	{1, arcs_74_0},
-	{3, arcs_74_1},
-	{1, arcs_74_2},
-	{1, arcs_74_3},
+static state states_75[4] = {
+	{1, arcs_75_0},
+	{3, arcs_75_1},
+	{1, arcs_75_2},
+	{1, arcs_75_3},
 };
-static arc arcs_75_0[2] = {
-	{155, 1},
-	{163, 1},
+static arc arcs_76_0[2] = {
+	{158, 1},
+	{166, 1},
 };
-static arc arcs_75_1[1] = {
+static arc arcs_76_1[1] = {
 	{0, 1},
 };
-static state states_75[2] = {
-	{2, arcs_75_0},
-	{1, arcs_75_1},
+static state states_76[2] = {
+	{2, arcs_76_0},
+	{1, arcs_76_1},
 };
-static arc arcs_76_0[1] = {
-	{95, 1},
+static arc arcs_77_0[1] = {
+	{100, 1},
 };
-static arc arcs_76_1[1] = {
+static arc arcs_77_1[1] = {
 	{59, 2},
 };
-static arc arcs_76_2[1] = {
+static arc arcs_77_2[1] = {
 	{83, 3},
 };
-static arc arcs_76_3[1] = {
-	{104, 4},
+static arc arcs_77_3[1] = {
+	{107, 4},
 };
-static arc arcs_76_4[2] = {
-	{162, 5},
+static arc arcs_77_4[2] = {
+	{165, 5},
 	{0, 4},
 };
-static arc arcs_76_5[1] = {
+static arc arcs_77_5[1] = {
 	{0, 5},
 };
-static state states_76[6] = {
-	{1, arcs_76_0},
-	{1, arcs_76_1},
-	{1, arcs_76_2},
-	{1, arcs_76_3},
-	{2, arcs_76_4},
-	{1, arcs_76_5},
+static state states_77[6] = {
+	{1, arcs_77_0},
+	{1, arcs_77_1},
+	{1, arcs_77_2},
+	{1, arcs_77_3},
+	{2, arcs_77_4},
+	{1, arcs_77_5},
 };
-static arc arcs_77_0[1] = {
-	{91, 1},
+static arc arcs_78_0[1] = {
+	{92, 1},
 };
-static arc arcs_77_1[1] = {
-	{105, 2},
+static arc arcs_78_1[1] = {
+	{108, 2},
 };
-static arc arcs_77_2[2] = {
-	{162, 3},
+static arc arcs_78_2[2] = {
+	{165, 3},
 	{0, 2},
 };
-static arc arcs_77_3[1] = {
+static arc arcs_78_3[1] = {
 	{0, 3},
 };
-static state states_77[4] = {
-	{1, arcs_77_0},
-	{1, arcs_77_1},
-	{2, arcs_77_2},
-	{1, arcs_77_3},
+static state states_78[4] = {
+	{1, arcs_78_0},
+	{1, arcs_78_1},
+	{2, arcs_78_2},
+	{1, arcs_78_3},
 };
-static arc arcs_78_0[2] = {
-	{156, 1},
-	{165, 1},
+static arc arcs_79_0[2] = {
+	{159, 1},
+	{168, 1},
 };
-static arc arcs_78_1[1] = {
+static arc arcs_79_1[1] = {
 	{0, 1},
 };
-static state states_78[2] = {
-	{2, arcs_78_0},
-	{1, arcs_78_1},
+static state states_79[2] = {
+	{2, arcs_79_0},
+	{1, arcs_79_1},
 };
-static arc arcs_79_0[1] = {
-	{95, 1},
+static arc arcs_80_0[1] = {
+	{100, 1},
 };
-static arc arcs_79_1[1] = {
+static arc arcs_80_1[1] = {
 	{59, 2},
 };
-static arc arcs_79_2[1] = {
+static arc arcs_80_2[1] = {
 	{83, 3},
 };
-static arc arcs_79_3[1] = {
-	{106, 4},
+static arc arcs_80_3[1] = {
+	{109, 4},
 };
-static arc arcs_79_4[2] = {
-	{164, 5},
+static arc arcs_80_4[2] = {
+	{167, 5},
 	{0, 4},
 };
-static arc arcs_79_5[1] = {
+static arc arcs_80_5[1] = {
 	{0, 5},
 };
-static state states_79[6] = {
-	{1, arcs_79_0},
-	{1, arcs_79_1},
-	{1, arcs_79_2},
-	{1, arcs_79_3},
-	{2, arcs_79_4},
-	{1, arcs_79_5},
+static state states_80[6] = {
+	{1, arcs_80_0},
+	{1, arcs_80_1},
+	{1, arcs_80_2},
+	{1, arcs_80_3},
+	{2, arcs_80_4},
+	{1, arcs_80_5},
 };
-static arc arcs_80_0[1] = {
-	{91, 1},
+static arc arcs_81_0[1] = {
+	{92, 1},
 };
-static arc arcs_80_1[1] = {
-	{105, 2},
+static arc arcs_81_1[1] = {
+	{108, 2},
 };
-static arc arcs_80_2[2] = {
-	{164, 3},
+static arc arcs_81_2[2] = {
+	{167, 3},
 	{0, 2},
 };
-static arc arcs_80_3[1] = {
+static arc arcs_81_3[1] = {
 	{0, 3},
 };
-static state states_80[4] = {
-	{1, arcs_80_0},
-	{1, arcs_80_1},
-	{2, arcs_80_2},
-	{1, arcs_80_3},
+static state states_81[4] = {
+	{1, arcs_81_0},
+	{1, arcs_81_1},
+	{2, arcs_81_2},
+	{1, arcs_81_3},
 };
-static arc arcs_81_0[1] = {
+static arc arcs_82_0[1] = {
 	{26, 1},
 };
-static arc arcs_81_1[2] = {
+static arc arcs_82_1[2] = {
 	{27, 0},
 	{0, 1},
 };
-static state states_81[2] = {
-	{1, arcs_81_0},
-	{2, arcs_81_1},
+static state states_82[2] = {
+	{1, arcs_82_0},
+	{2, arcs_82_1},
 };
-static arc arcs_82_0[1] = {
+static arc arcs_83_0[1] = {
 	{19, 1},
 };
-static arc arcs_82_1[1] = {
+static arc arcs_83_1[1] = {
 	{0, 1},
 };
-static state states_82[2] = {
-	{1, arcs_82_0},
-	{1, arcs_82_1},
+static state states_83[2] = {
+	{1, arcs_83_0},
+	{1, arcs_83_1},
 };
-static arc arcs_83_0[1] = {
-	{167, 1},
+static arc arcs_84_0[1] = {
+	{170, 1},
 };
-static arc arcs_83_1[2] = {
+static arc arcs_84_1[2] = {
 	{9, 2},
 	{0, 1},
 };
-static arc arcs_83_2[1] = {
+static arc arcs_84_2[1] = {
 	{0, 2},
 };
-static state states_83[3] = {
-	{1, arcs_83_0},
-	{2, arcs_83_1},
-	{1, arcs_83_2},
+static state states_84[3] = {
+	{1, arcs_84_0},
+	{2, arcs_84_1},
+	{1, arcs_84_2},
 };
-static dfa dfas[84] = {
+static dfa dfas[85] = {
 	{256, "single_input", 0, 3, states_0,
-	 "\004\050\014\000\000\000\000\025\074\005\023\310\011\020\004\000\300\020\222\006\201"},
+	 "\004\050\014\000\000\000\000\025\074\005\023\220\070\201\040\000\000\206\220\064\010\004"},
 	{257, "file_input", 0, 2, states_1,
-	 "\204\050\014\000\000\000\000\025\074\005\023\310\011\020\004\000\300\020\222\006\201"},
+	 "\204\050\014\000\000\000\000\025\074\005\023\220\070\201\040\000\000\206\220\064\010\004"},
 	{258, "eval_input", 0, 3, states_2,
-	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\020\004\000\300\020\222\006\000"},
+	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\200\040\000\000\206\220\064\000\000"},
 	{259, "decorator", 0, 7, states_3,
-	 "\000\010\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"},
+	 "\000\010\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"},
 	{260, "decorators", 0, 2, states_4,
-	 "\000\010\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"},
+	 "\000\010\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"},
 	{261, "funcdef", 0, 7, states_5,
-	 "\000\010\004\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"},
+	 "\000\010\004\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"},
 	{262, "parameters", 0, 4, states_6,
-	 "\000\040\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"},
+	 "\000\040\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"},
 	{263, "varargslist", 0, 10, states_7,
-	 "\000\040\010\060\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"},
+	 "\000\040\010\060\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"},
 	{264, "fpdef", 0, 4, states_8,
-	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"},
+	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"},
 	{265, "fplist", 0, 3, states_9,
-	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"},
+	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"},
 	{266, "stmt", 0, 2, states_10,
-	 "\000\050\014\000\000\000\000\025\074\005\023\310\011\020\004\000\300\020\222\006\201"},
+	 "\000\050\014\000\000\000\000\025\074\005\023\220\070\201\040\000\000\206\220\064\010\004"},
 	{267, "simple_stmt", 0, 4, states_11,
-	 "\000\040\010\000\000\000\000\025\074\005\023\000\000\020\004\000\300\020\222\006\200"},
+	 "\000\040\010\000\000\000\000\025\074\005\023\000\000\200\040\000\000\206\220\064\000\004"},
 	{268, "small_stmt", 0, 2, states_12,
-	 "\000\040\010\000\000\000\000\025\074\005\023\000\000\020\004\000\300\020\222\006\200"},
+	 "\000\040\010\000\000\000\000\025\074\005\023\000\000\200\040\000\000\206\220\064\000\004"},
 	{269, "expr_stmt", 0, 6, states_13,
-	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\020\004\000\300\020\222\006\000"},
+	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\200\040\000\000\206\220\064\000\000"},
 	{270, "augassign", 0, 2, states_14,
-	 "\000\000\000\000\000\360\377\000\000\000\000\000\000\000\000\000\000\000\000\000\000"},
+	 "\000\000\000\000\000\360\377\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"},
 	{271, "print_stmt", 0, 9, states_15,
-	 "\000\000\000\000\000\000\000\001\000\000\000\000\000\000\000\000\000\000\000\000\000"},
+	 "\000\000\000\000\000\000\000\001\000\000\000\000\000\000\000\000\000\000\000\000\000\000"},
 	{272, "del_stmt", 0, 3, states_16,
-	 "\000\000\000\000\000\000\000\004\000\000\000\000\000\000\000\000\000\000\000\000\000"},
+	 "\000\000\000\000\000\000\000\004\000\000\000\000\000\000\000\000\000\000\000\000\000\000"},
 	{273, "pass_stmt", 0, 2, states_17,
-	 "\000\000\000\000\000\000\000\020\000\000\000\000\000\000\000\000\000\000\000\000\000"},
+	 "\000\000\000\000\000\000\000\020\000\000\000\000\000\000\000\000\000\000\000\000\000\000"},
 	{274, "flow_stmt", 0, 2, states_18,
-	 "\000\000\000\000\000\000\000\000\074\000\000\000\000\000\000\000\000\000\000\000\200"},
+	 "\000\000\000\000\000\000\000\000\074\000\000\000\000\000\000\000\000\000\000\000\000\004"},
 	{275, "break_stmt", 0, 2, states_19,
-	 "\000\000\000\000\000\000\000\000\004\000\000\000\000\000\000\000\000\000\000\000\000"},
+	 "\000\000\000\000\000\000\000\000\004\000\000\000\000\000\000\000\000\000\000\000\000\000"},
 	{276, "continue_stmt", 0, 2, states_20,
-	 "\000\000\000\000\000\000\000\000\010\000\000\000\000\000\000\000\000\000\000\000\000"},
+	 "\000\000\000\000\000\000\000\000\010\000\000\000\000\000\000\000\000\000\000\000\000\000"},
 	{277, "return_stmt", 0, 3, states_21,
-	 "\000\000\000\000\000\000\000\000\020\000\000\000\000\000\000\000\000\000\000\000\000"},
+	 "\000\000\000\000\000\000\000\000\020\000\000\000\000\000\000\000\000\000\000\000\000\000"},
 	{278, "yield_stmt", 0, 2, states_22,
-	 "\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\200"},
+	 "\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\004"},
 	{279, "raise_stmt", 0, 7, states_23,
-	 "\000\000\000\000\000\000\000\000\040\000\000\000\000\000\000\000\000\000\000\000\000"},
+	 "\000\000\000\000\000\000\000\000\040\000\000\000\000\000\000\000\000\000\000\000\000\000"},
 	{280, "import_stmt", 0, 2, states_24,
-	 "\000\000\000\000\000\000\000\000\000\005\000\000\000\000\000\000\000\000\000\000\000"},
+	 "\000\000\000\000\000\000\000\000\000\005\000\000\000\000\000\000\000\000\000\000\000\000"},
 	{281, "import_name", 0, 3, states_25,
-	 "\000\000\000\000\000\000\000\000\000\001\000\000\000\000\000\000\000\000\000\000\000"},
+	 "\000\000\000\000\000\000\000\000\000\001\000\000\000\000\000\000\000\000\000\000\000\000"},
 	{282, "import_from", 0, 8, states_26,
-	 "\000\000\000\000\000\000\000\000\000\004\000\000\000\000\000\000\000\000\000\000\000"},
+	 "\000\000\000\000\000\000\000\000\000\004\000\000\000\000\000\000\000\000\000\000\000\000"},
 	{283, "import_as_name", 0, 4, states_27,
-	 "\000\000\010\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"},
+	 "\000\000\010\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"},
 	{284, "dotted_as_name", 0, 4, states_28,
-	 "\000\000\010\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"},
+	 "\000\000\010\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"},
 	{285, "import_as_names", 0, 3, states_29,
-	 "\000\000\010\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"},
+	 "\000\000\010\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"},
 	{286, "dotted_as_names", 0, 2, states_30,
-	 "\000\000\010\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"},
+	 "\000\000\010\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"},
 	{287, "dotted_name", 0, 2, states_31,
-	 "\000\000\010\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"},
+	 "\000\000\010\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"},
 	{288, "global_stmt", 0, 3, states_32,
-	 "\000\000\000\000\000\000\000\000\000\000\001\000\000\000\000\000\000\000\000\000\000"},
+	 "\000\000\000\000\000\000\000\000\000\000\001\000\000\000\000\000\000\000\000\000\000\000"},
 	{289, "exec_stmt", 0, 7, states_33,
-	 "\000\000\000\000\000\000\000\000\000\000\002\000\000\000\000\000\000\000\000\000\000"},
+	 "\000\000\000\000\000\000\000\000\000\000\002\000\000\000\000\000\000\000\000\000\000\000"},
 	{290, "assert_stmt", 0, 5, states_34,
-	 "\000\000\000\000\000\000\000\000\000\000\020\000\000\000\000\000\000\000\000\000\000"},
+	 "\000\000\000\000\000\000\000\000\000\000\020\000\000\000\000\000\000\000\000\000\000\000"},
 	{291, "compound_stmt", 0, 2, states_35,
-	 "\000\010\004\000\000\000\000\000\000\000\000\310\011\000\000\000\000\000\000\000\001"},
+	 "\000\010\004\000\000\000\000\000\000\000\000\220\070\001\000\000\000\000\000\000\010\000"},
 	{292, "if_stmt", 0, 8, states_36,
-	 "\000\000\000\000\000\000\000\000\000\000\000\010\000\000\000\000\000\000\000\000\000"},
-	{293, "while_stmt", 0, 8, states_37,
-	 "\000\000\000\000\000\000\000\000\000\000\000\100\000\000\000\000\000\000\000\000\000"},
-	{294, "for_stmt", 0, 10, states_38,
-	 "\000\000\000\000\000\000\000\000\000\000\000\200\000\000\000\000\000\000\000\000\000"},
-	{295, "try_stmt", 0, 13, states_39,
-	 "\000\000\000\000\000\000\000\000\000\000\000\000\001\000\000\000\000\000\000\000\000"},
-	{296, "with_stmt", 0, 6, states_40,
-	 "\000\000\000\000\000\000\000\000\000\000\000\000\010\000\000\000\000\000\000\000\000"},
-	{297, "with_var", 0, 3, states_41,
-	 "\000\000\010\000\000\000\000\000\000\100\000\000\000\000\000\000\000\000\000\000\000"},
-	{298, "except_clause", 0, 5, states_42,
-	 "\000\000\000\000\000\000\000\000\000\000\000\000\040\000\000\000\000\000\000\000\000"},
-	{299, "suite", 0, 5, states_43,
-	 "\004\040\010\000\000\000\000\025\074\005\023\000\000\020\004\000\300\020\222\006\200"},
-	{300, "testlist_safe", 0, 5, states_44,
-	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\020\004\000\300\020\222\006\000"},
-	{301, "old_test", 0, 2, states_45,
-	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\020\004\000\300\020\222\006\000"},
-	{302, "old_lambdef", 0, 5, states_46,
-	 "\000\000\000\000\000\000\000\000\000\000\000\000\000\020\000\000\000\000\000\000\000"},
-	{303, "test", 0, 6, states_47,
-	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\020\004\000\300\020\222\006\000"},
-	{304, "or_test", 0, 2, states_48,
-	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\000\004\000\300\020\222\006\000"},
-	{305, "and_test", 0, 2, states_49,
-	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\000\004\000\300\020\222\006\000"},
-	{306, "not_test", 0, 3, states_50,
-	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\000\004\000\300\020\222\006\000"},
-	{307, "comparison", 0, 2, states_51,
-	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\000\000\000\300\020\222\006\000"},
-	{308, "comp_op", 0, 4, states_52,
-	 "\000\000\000\000\000\000\000\000\000\000\010\000\000\000\344\037\000\000\000\000\000"},
-	{309, "expr", 0, 2, states_53,
-	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\000\000\000\300\020\222\006\000"},
-	{310, "xor_expr", 0, 2, states_54,
-	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\000\000\000\300\020\222\006\000"},
-	{311, "and_expr", 0, 2, states_55,
-	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\000\000\000\300\020\222\006\000"},
-	{312, "shift_expr", 0, 2, states_56,
-	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\000\000\000\300\020\222\006\000"},
-	{313, "arith_expr", 0, 2, states_57,
-	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\000\000\000\300\020\222\006\000"},
-	{314, "term", 0, 2, states_58,
-	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\000\000\000\300\020\222\006\000"},
-	{315, "factor", 0, 3, states_59,
-	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\000\000\000\300\020\222\006\000"},
-	{316, "power", 0, 4, states_60,
-	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\222\006\000"},
-	{317, "atom", 0, 11, states_61,
-	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\222\006\000"},
-	{318, "listmaker", 0, 5, states_62,
-	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\020\004\000\300\020\222\006\000"},
-	{319, "testlist_gexp", 0, 5, states_63,
-	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\020\004\000\300\020\222\006\000"},
-	{320, "lambdef", 0, 5, states_64,
-	 "\000\000\000\000\000\000\000\000\000\000\000\000\000\020\000\000\000\000\000\000\000"},
-	{321, "trailer", 0, 7, states_65,
-	 "\000\040\000\000\000\000\000\000\000\010\000\000\000\000\000\000\000\000\002\000\000"},
-	{322, "subscriptlist", 0, 3, states_66,
-	 "\000\040\050\000\000\000\000\000\000\010\000\000\000\020\004\000\300\020\222\006\000"},
-	{323, "subscript", 0, 7, states_67,
-	 "\000\040\050\000\000\000\000\000\000\010\000\000\000\020\004\000\300\020\222\006\000"},
-	{324, "sliceop", 0, 3, states_68,
-	 "\000\000\040\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"},
-	{325, "exprlist", 0, 3, states_69,
-	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\000\000\000\300\020\222\006\000"},
-	{326, "testlist", 0, 3, states_70,
-	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\020\004\000\300\020\222\006\000"},
-	{327, "dictmaker", 0, 5, states_71,
-	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\020\004\000\300\020\222\006\000"},
-	{328, "classdef", 0, 8, states_72,
-	 "\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\001"},
-	{329, "arglist", 0, 8, states_73,
-	 "\000\040\010\060\000\000\000\000\000\000\000\000\000\020\004\000\300\020\222\006\000"},
-	{330, "argument", 0, 4, states_74,
-	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\020\004\000\300\020\222\006\000"},
-	{331, "list_iter", 0, 2, states_75,
-	 "\000\000\000\000\000\000\000\000\000\000\000\210\000\000\000\000\000\000\000\000\000"},
-	{332, "list_for", 0, 6, states_76,
-	 "\000\000\000\000\000\000\000\000\000\000\000\200\000\000\000\000\000\000\000\000\000"},
-	{333, "list_if", 0, 4, states_77,
-	 "\000\000\000\000\000\000\000\000\000\000\000\010\000\000\000\000\000\000\000\000\000"},
-	{334, "gen_iter", 0, 2, states_78,
-	 "\000\000\000\000\000\000\000\000\000\000\000\210\000\000\000\000\000\000\000\000\000"},
-	{335, "gen_for", 0, 6, states_79,
-	 "\000\000\000\000\000\000\000\000\000\000\000\200\000\000\000\000\000\000\000\000\000"},
-	{336, "gen_if", 0, 4, states_80,
-	 "\000\000\000\000\000\000\000\000\000\000\000\010\000\000\000\000\000\000\000\000\000"},
-	{337, "testlist1", 0, 2, states_81,
-	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\020\004\000\300\020\222\006\000"},
-	{338, "encoding_decl", 0, 2, states_82,
-	 "\000\000\010\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"},
-	{339, "yield_expr", 0, 3, states_83,
-	 "\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\200"},
+	 "\000\000\000\000\000\000\000\000\000\000\000\020\000\000\000\000\000\000\000\000\000\000"},
+	{293, "switch_stmt", 0, 14, states_37,
+	 "\000\000\000\000\000\000\000\000\000\000\000\200\000\000\000\000\000\000\000\000\000\000"},
+	{294, "while_stmt", 0, 8, states_38,
+	 "\000\000\000\000\000\000\000\000\000\000\000\000\010\000\000\000\000\000\000\000\000\000"},
+	{295, "for_stmt", 0, 10, states_39,
+	 "\000\000\000\000\000\000\000\000\000\000\000\000\020\000\000\000\000\000\000\000\000\000"},
+	{296, "try_stmt", 0, 13, states_40,
+	 "\000\000\000\000\000\000\000\000\000\000\000\000\040\000\000\000\000\000\000\000\000\000"},
+	{297, "with_stmt", 0, 6, states_41,
+	 "\000\000\000\000\000\000\000\000\000\000\000\000\000\001\000\000\000\000\000\000\000\000"},
+	{298, "with_var", 0, 3, states_42,
+	 "\000\000\010\000\000\000\000\000\000\100\000\000\000\000\000\000\000\000\000\000\000\000"},
+	{299, "except_clause", 0, 5, states_43,
+	 "\000\000\000\000\000\000\000\000\000\000\000\000\000\004\000\000\000\000\000\000\000\000"},
+	{300, "suite", 0, 5, states_44,
+	 "\004\040\010\000\000\000\000\025\074\005\023\000\000\200\040\000\000\206\220\064\000\004"},
+	{301, "testlist_safe", 0, 5, states_45,
+	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\200\040\000\000\206\220\064\000\000"},
+	{302, "old_test", 0, 2, states_46,
+	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\200\040\000\000\206\220\064\000\000"},
+	{303, "old_lambdef", 0, 5, states_47,
+	 "\000\000\000\000\000\000\000\000\000\000\000\000\000\200\000\000\000\000\000\000\000\000"},
+	{304, "test", 0, 6, states_48,
+	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\200\040\000\000\206\220\064\000\000"},
+	{305, "or_test", 0, 2, states_49,
+	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\000\040\000\000\206\220\064\000\000"},
+	{306, "and_test", 0, 2, states_50,
+	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\000\040\000\000\206\220\064\000\000"},
+	{307, "not_test", 0, 3, states_51,
+	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\000\040\000\000\206\220\064\000\000"},
+	{308, "comparison", 0, 2, states_52,
+	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\000\000\000\000\206\220\064\000\000"},
+	{309, "comp_op", 0, 4, states_53,
+	 "\000\000\000\000\000\000\000\000\000\000\010\000\000\000\040\377\000\000\000\000\000\000"},
+	{310, "expr", 0, 2, states_54,
+	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\000\000\000\000\206\220\064\000\000"},
+	{311, "xor_expr", 0, 2, states_55,
+	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\000\000\000\000\206\220\064\000\000"},
+	{312, "and_expr", 0, 2, states_56,
+	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\000\000\000\000\206\220\064\000\000"},
+	{313, "shift_expr", 0, 2, states_57,
+	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\000\000\000\000\206\220\064\000\000"},
+	{314, "arith_expr", 0, 2, states_58,
+	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\000\000\000\000\206\220\064\000\000"},
+	{315, "term", 0, 2, states_59,
+	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\000\000\000\000\206\220\064\000\000"},
+	{316, "factor", 0, 3, states_60,
+	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\000\000\000\000\206\220\064\000\000"},
+	{317, "power", 0, 4, states_61,
+	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\220\064\000\000"},
+	{318, "atom", 0, 11, states_62,
+	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\220\064\000\000"},
+	{319, "listmaker", 0, 5, states_63,
+	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\200\040\000\000\206\220\064\000\000"},
+	{320, "testlist_gexp", 0, 5, states_64,
+	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\200\040\000\000\206\220\064\000\000"},
+	{321, "lambdef", 0, 5, states_65,
+	 "\000\000\000\000\000\000\000\000\000\000\000\000\000\200\000\000\000\000\000\000\000\000"},
+	{322, "trailer", 0, 7, states_66,
+	 "\000\040\000\000\000\000\000\000\000\010\000\000\000\000\000\000\000\000\020\000\000\000"},
+	{323, "subscriptlist", 0, 3, states_67,
+	 "\000\040\050\000\000\000\000\000\000\010\000\000\000\200\040\000\000\206\220\064\000\000"},
+	{324, "subscript", 0, 7, states_68,
+	 "\000\040\050\000\000\000\000\000\000\010\000\000\000\200\040\000\000\206\220\064\000\000"},
+	{325, "sliceop", 0, 3, states_69,
+	 "\000\000\040\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"},
+	{326, "exprlist", 0, 3, states_70,
+	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\000\000\000\000\206\220\064\000\000"},
+	{327, "testlist", 0, 3, states_71,
+	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\200\040\000\000\206\220\064\000\000"},
+	{328, "dictmaker", 0, 5, states_72,
+	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\200\040\000\000\206\220\064\000\000"},
+	{329, "classdef", 0, 8, states_73,
+	 "\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\010\000"},
+	{330, "arglist", 0, 8, states_74,
+	 "\000\040\010\060\000\000\000\000\000\000\000\000\000\200\040\000\000\206\220\064\000\000"},
+	{331, "argument", 0, 4, states_75,
+	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\200\040\000\000\206\220\064\000\000"},
+	{332, "list_iter", 0, 2, states_76,
+	 "\000\000\000\000\000\000\000\000\000\000\000\020\020\000\000\000\000\000\000\000\000\000"},
+	{333, "list_for", 0, 6, states_77,
+	 "\000\000\000\000\000\000\000\000\000\000\000\000\020\000\000\000\000\000\000\000\000\000"},
+	{334, "list_if", 0, 4, states_78,
+	 "\000\000\000\000\000\000\000\000\000\000\000\020\000\000\000\000\000\000\000\000\000\000"},
+	{335, "gen_iter", 0, 2, states_79,
+	 "\000\000\000\000\000\000\000\000\000\000\000\020\020\000\000\000\000\000\000\000\000\000"},
+	{336, "gen_for", 0, 6, states_80,
+	 "\000\000\000\000\000\000\000\000\000\000\000\000\020\000\000\000\000\000\000\000\000\000"},
+	{337, "gen_if", 0, 4, states_81,
+	 "\000\000\000\000\000\000\000\000\000\000\000\020\000\000\000\000\000\000\000\000\000\000"},
+	{338, "testlist1", 0, 2, states_82,
+	 "\000\040\010\000\000\000\000\000\000\000\000\000\000\200\040\000\000\206\220\064\000\000"},
+	{339, "encoding_decl", 0, 2, states_83,
+	 "\000\000\010\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"},
+	{340, "yield_expr", 0, 3, states_84,
+	 "\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\004"},
 };
-static label labels[168] = {
+static label labels[171] = {
 	{0, "EMPTY"},
 	{256, 0},
 	{4, 0},
@@ -1961,12 +2025,12 @@
 	{266, 0},
 	{0, 0},
 	{258, 0},
-	{326, 0},
+	{327, 0},
 	{259, 0},
 	{50, 0},
 	{287, 0},
 	{7, 0},
-	{329, 0},
+	{330, 0},
 	{8, 0},
 	{260, 0},
 	{261, 0},
@@ -1974,11 +2038,11 @@
 	{1, 0},
 	{262, 0},
 	{11, 0},
-	{299, 0},
+	{300, 0},
 	{263, 0},
 	{264, 0},
 	{22, 0},
-	{303, 0},
+	{304, 0},
 	{12, 0},
 	{16, 0},
 	{36, 0},
@@ -1995,7 +2059,7 @@
 	{289, 0},
 	{290, 0},
 	{270, 0},
-	{339, 0},
+	{340, 0},
 	{37, 0},
 	{38, 0},
 	{39, 0},
@@ -2011,7 +2075,7 @@
 	{1, "print"},
 	{35, 0},
 	{1, "del"},
-	{325, 0},
+	{326, 0},
 	{1, "pass"},
 	{275, 0},
 	{276, 0},
@@ -2034,41 +2098,44 @@
 	{284, 0},
 	{1, "global"},
 	{1, "exec"},
-	{309, 0},
+	{310, 0},
 	{1, "in"},
 	{1, "assert"},
 	{292, 0},
-	{293, 0},
 	{294, 0},
 	{295, 0},
 	{296, 0},
-	{328, 0},
+	{297, 0},
+	{329, 0},
+	{293, 0},
 	{1, "if"},
 	{1, "elif"},
 	{1, "else"},
+	{1, "switch"},
+	{5, 0},
+	{1, "case"},
+	{6, 0},
 	{1, "while"},
 	{1, "for"},
 	{1, "try"},
-	{298, 0},
+	{299, 0},
 	{1, "finally"},
 	{1, "with"},
-	{297, 0},
+	{298, 0},
 	{1, "except"},
-	{5, 0},
-	{6, 0},
-	{300, 0},
 	{301, 0},
-	{304, 0},
 	{302, 0},
+	{305, 0},
+	{303, 0},
 	{1, "lambda"},
-	{320, 0},
-	{305, 0},
+	{321, 0},
+	{306, 0},
 	{1, "or"},
-	{306, 0},
+	{307, 0},
 	{1, "and"},
 	{1, "not"},
-	{307, 0},
 	{308, 0},
+	{309, 0},
 	{20, 0},
 	{21, 0},
 	{28, 0},
@@ -2077,53 +2144,53 @@
 	{29, 0},
 	{29, 0},
 	{1, "is"},
-	{310, 0},
+	{311, 0},
 	{18, 0},
-	{311, 0},
+	{312, 0},
 	{33, 0},
-	{312, 0},
+	{313, 0},
 	{19, 0},
-	{313, 0},
+	{314, 0},
 	{34, 0},
-	{314, 0},
+	{315, 0},
 	{14, 0},
 	{15, 0},
-	{315, 0},
+	{316, 0},
 	{17, 0},
 	{24, 0},
 	{48, 0},
 	{32, 0},
-	{316, 0},
 	{317, 0},
-	{321, 0},
+	{318, 0},
+	{322, 0},
+	{320, 0},
+	{9, 0},
 	{319, 0},
-	{9, 0},
-	{318, 0},
 	{10, 0},
 	{26, 0},
-	{327, 0},
+	{328, 0},
 	{27, 0},
 	{25, 0},
-	{337, 0},
+	{338, 0},
 	{2, 0},
 	{3, 0},
-	{332, 0},
-	{335, 0},
-	{322, 0},
+	{333, 0},
+	{336, 0},
 	{323, 0},
 	{324, 0},
+	{325, 0},
 	{1, "class"},
-	{330, 0},
 	{331, 0},
-	{333, 0},
+	{332, 0},
 	{334, 0},
-	{336, 0},
-	{338, 0},
+	{335, 0},
+	{337, 0},
+	{339, 0},
 	{1, "yield"},
 };
 grammar _PyParser_Grammar = {
-	84,
+	85,
 	dfas,
-	{168, labels},
+	{171, labels},
 	256
 };
Index: Python/ast.c
===================================================================
--- Python/ast.c	(revision 46818)
+++ Python/ast.c	(working copy)
@@ -2575,6 +2575,69 @@
 }
 
 static stmt_ty
+ast_for_switch_stmt(struct compiling *c, const node *n)
+{
+	const char *s;
+	expr_ty expression;
+	asdl_seq *orelse = NULL;
+	asdl_seq *cases = NULL;
+
+	/* switch_stmt: ('switch' expr ':' NEWLINE INDENT (('case' expr ':' suite)+ ['else' ':' suite] | 'else' ':' suite) DEDENT) */
+	REQ(n, switch_stmt);
+
+	/* the main test expression */
+	expression = ast_for_expr(c, CHILD(n, 1));
+	if (expression == NULL)
+		return NULL;
+	
+	/* see if we have any case statements */
+	s = STR(CHILD(n, 5));
+	if (s[0] == 'c') {
+		int i;
+		int n_cases = NCH(n) - 5;
+		
+		/* how many cases do we have? */
+		s = STR(CHILD(n, NCH(n) - 4));
+		if (s[0] == 'e')
+			n_cases -= 3;
+		n_cases /= 4;
+		
+		/* we have to have at least one case at this point */
+		cases = asdl_seq_new(n_cases, c->c_arena);
+		if (cases == NULL)
+			return NULL;
+		
+		/* traverse the case statements */
+		for (i = 0; i < n_cases; i++) {
+			casestatement_ty case_stmt;
+			expr_ty case_test;
+			asdl_seq *case_body;
+			
+			case_test = ast_for_expr(c, CHILD(n, 5 + (i * 4) + 1));
+			if (case_test == NULL)
+				return NULL;
+			case_body = ast_for_suite(c, CHILD(n, 5 + (i * 4) + 3));
+			if (case_body == NULL)
+				return NULL;
+			case_stmt = casestatement(case_test, case_body, LINENO(n), n->n_col_offset, c->c_arena);
+			if (case_stmt == NULL)
+				return NULL;
+			asdl_seq_SET(cases, i, case_stmt);
+		}
+	}
+
+	/* handle the default case */
+	s = STR(CHILD(n, NCH(n) - 4));
+	if (s[0] == 'e') {
+		orelse = ast_for_suite(c, CHILD(n, NCH(n) - 2));
+		if (orelse == NULL)
+			return NULL;
+	}
+
+	return Switch(expression, cases, orelse, LINENO(n), n->n_col_offset, c->c_arena);
+}
+
+static stmt_ty
 ast_for_while_stmt(struct compiling *c, const node *n)
 {
     /* while_stmt: 'while' test ':' suite ['else' ':' suite] */
@@ -2909,7 +2972,7 @@
     }
     else {
         /* compound_stmt: if_stmt | while_stmt | for_stmt | try_stmt
-	                | funcdef | classdef
+	                | funcdef | classdef | switch_stmt
 	*/
 	node *ch = CHILD(n, 0);
 	REQ(n, compound_stmt);
@@ -2924,6 +2987,8 @@
                 return ast_for_try_stmt(c, ch);
             case with_stmt:
                 return ast_for_with_stmt(c, ch);
+            case switch_stmt:
+                return ast_for_switch_stmt(c, ch);
             case funcdef:
                 return ast_for_funcdef(c, ch);
             case classdef:
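
For reference, here is how the parse-tree children of a switch_stmt node
line up for a two-case switch with an else clause, worked out from the
grammar rule quoted in the comment above (an illustration only, not part
of the patch; a(), b(), c() are placeholder calls):

    switch x:
        case 1:
            a()
        case 2:
            b()
        else:
            c()

    # children of the switch_stmt node:
    #   0 'switch'   1 expr(x)   2 ':'    3 NEWLINE   4 INDENT
    #   5 'case'     6 expr(1)   7 ':'    8 suite        <- first case
    #   9 'case'    10 expr(2)  11 ':'   12 suite        <- second case
    #  13 'else'    14 ':'      15 suite                 <- default suite
    #  16 DEDENT
    #
    # NCH(n) = 17, so n_cases starts at 17 - 5 = 12, the 'else' found at
    # CHILD(n, NCH(n) - 4) subtracts 3, and 9 / 4 truncates to 2 cases.
    # Each case's test is CHILD(n, 5 + 4*i + 1), its suite is
    # CHILD(n, 5 + 4*i + 3), and the default suite is CHILD(n, NCH(n) - 2).
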
Index: Python/symtable.c
===================================================================
--- Python/symtable.c	(revision 46818)
+++ Python/symtable.c	(working copy)
@@ -995,6 +995,26 @@
 		if (s->v.If.orelse)
 			VISIT_SEQ(st, stmt, s->v.If.orelse);
 		break;
+		case Switch_kind:
+		VISIT(st, expr, s->v.Switch.value);
+		
+		if (s->v.Switch.cases) {
+			int i, n_cases;
+			
+			n_cases = asdl_seq_LEN(s->v.Switch.cases);
+			for (i = 0; i < n_cases; i++) {
+				casestatement_ty c;
+
+				c = asdl_seq_GET(s->v.Switch.cases, i);
+				VISIT(st, expr, c->test);
+				VISIT_SEQ(st, stmt, c->body);
+			}
+		}
+
+		if (s->v.Switch.orelse)
+			VISIT_SEQ(st, stmt, s->v.Switch.orelse);
+		
+		break;
         case Raise_kind:
 		if (s->v.Raise.type) {
 			VISIT(st, expr, s->v.Raise.type);
Index: Python/compile.c
===================================================================
--- Python/compile.c	(revision 46818)
+++ Python/compile.c	(working copy)
@@ -2182,6 +2182,65 @@
 }
 
 static int
+compiler_switch(struct compiler *c, stmt_ty s)
+{
+	basicblock *end;
+	
+	assert(s->kind == Switch_kind);
+
+	end = compiler_new_block(c);
+	if (end == NULL)
+		return 0;
+
+	/* evaluate the switch expression just once */
+	VISIT(c, expr, s->v.Switch.value);
+	
+	if (s->v.Switch.cases) {	
+		int i;
+		int n_cases;
+	
+		n_cases = asdl_seq_LEN(s->v.Switch.cases);
+		for (i = 0; i < n_cases; i++) {
+			casestatement_ty case_stmt;
+			basicblock *next;
+			
+			next = compiler_new_block(c);
+			if (next == NULL)
+				return 0;
+
+			case_stmt = asdl_seq_GET(s->v.Switch.cases, i);
+
+			/* compare the case test to the original, jump if no match */
+			ADDOP(c, DUP_TOP);
+			VISIT(c, expr, case_stmt->test);
+			ADDOP_I(c, COMPARE_OP, PyCmp_EQ);
+			ADDOP_JREL(c, JUMP_IF_FALSE, next);
+			ADDOP(c, POP_TOP);
+			
+			/* test passes, this code gets executed */
+			VISIT_SEQ(c, stmt, case_stmt->body);
+
+			/* once the case has been executed, always jump to the end */
+			ADDOP_JREL(c, JUMP_FORWARD, end);
+			compiler_use_next_block(c, next);
+
+			/* pop off the comparison result */
+			ADDOP(c, POP_TOP);
+		}
+	}
+	
+	if (s->v.Switch.orelse)
+		VISIT_SEQ(c, stmt, s->v.Switch.orelse);
+	
+	compiler_use_next_block(c, end);
+
+	/* pop the switch value off the stack */
+	ADDOP(c, POP_TOP);
+
+	return 1;
+}
+
+static int
 compiler_for(struct compiler *c, stmt_ty s)
 {
 	basicblock *start, *cleanup, *end;
@@ -2743,6 +2802,8 @@
 		return compiler_continue(c);
 	case With_kind:
 		return compiler_with(c, s);
+	case Switch_kind:
+		return compiler_switch(c, s);
 	}
 	return 1;
 }
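
To make the control flow in compiler_switch above easier to follow: the
switch value is evaluated exactly once and kept on the stack, each case
DUPs it and compares with ==, and a failed comparison falls through to
the next case, so the generated code behaves roughly like the
hand-written if/elif chain below.  This is a sketch of the semantics
only -- the compiler keeps the value on the stack rather than in a
temporary, and _switch_value is just an illustrative name:

    _switch_value = x       # evaluated once (VISIT of Switch.value)
    if _switch_value == 1:  # DUP_TOP; COMPARE_OP ==; JUMP_IF_FALSE
        a()
    elif _switch_value == 2:
        b()
    else:                   # the optional Switch.orelse suite
        c()
    del _switch_value       # the final POP_TOP at the 'end' block
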
Index: Python/Python-ast.c
===================================================================
--- Python/Python-ast.c	(revision 46818)
+++ Python/Python-ast.c	(working copy)
@@ -91,6 +91,12 @@
         "optional_vars",
         "body",
 };
+static PyTypeObject *Switch_type;
+static char *Switch_fields[]={
+        "value",
+        "cases",
+        "orelse",
+};
 static PyTypeObject *Raise_type;
 static char *Raise_fields[]={
         "type",
@@ -334,6 +340,14 @@
         "lineno",
         "col_offset",
 };
+static PyTypeObject *casestatement_type;
+static PyObject* ast2obj_casestatement(void*);
+static char *casestatement_fields[]={
+        "test",
+        "body",
+        "lineno",
+        "col_offset",
+};
 static PyTypeObject *arguments_type;
 static PyObject* ast2obj_arguments(void*);
 static char *arguments_fields[]={
@@ -480,6 +494,8 @@
         if (!If_type) return 0;
         With_type = make_type("With", stmt_type, With_fields, 3);
         if (!With_type) return 0;
+        Switch_type = make_type("Switch", stmt_type, Switch_fields, 3);
+        if (!Switch_type) return 0;
         Raise_type = make_type("Raise", stmt_type, Raise_fields, 3);
         if (!Raise_type) return 0;
         TryExcept_type = make_type("TryExcept", stmt_type, TryExcept_fields, 3);
@@ -716,6 +732,9 @@
         excepthandler_type = make_type("excepthandler", AST_type,
                                        excepthandler_fields, 5);
         if (!excepthandler_type) return 0;
+        casestatement_type = make_type("casestatement", AST_type,
+                                       casestatement_fields, 4);
+        if (!casestatement_type) return 0;
         arguments_type = make_type("arguments", AST_type, arguments_fields, 4);
         if (!arguments_type) return 0;
         keyword_type = make_type("keyword", AST_type, keyword_fields, 2);
@@ -1052,6 +1071,30 @@
 }
 
 stmt_ty
+Switch(expr_ty value, asdl_seq * cases, asdl_seq * orelse, int lineno, int
+       col_offset, PyArena *arena)
+{
+        stmt_ty p;
+        if (!value) {
+                PyErr_SetString(PyExc_ValueError,
+                                "field value is required for Switch");
+                return NULL;
+        }
+        p = (stmt_ty)PyArena_Malloc(arena, sizeof(*p));
+        if (!p) {
+                PyErr_NoMemory();
+                return NULL;
+        }
+        p->kind = Switch_kind;
+        p->v.Switch.value = value;
+        p->v.Switch.cases = cases;
+        p->v.Switch.orelse = orelse;
+        p->lineno = lineno;
+        p->col_offset = col_offset;
+        return p;
+}
+
+stmt_ty
 Raise(expr_ty type, expr_ty inst, expr_ty tback, int lineno, int col_offset,
       PyArena *arena)
 {
@@ -1862,6 +1905,28 @@
         return p;
 }
 
+casestatement_ty
+casestatement(expr_ty test, asdl_seq * body, int lineno, int col_offset,
+              PyArena *arena)
+{
+        casestatement_ty p;
+        if (!test) {
+                PyErr_SetString(PyExc_ValueError,
+                                "field test is required for casestatement");
+                return NULL;
+        }
+        p = (casestatement_ty)PyArena_Malloc(arena, sizeof(*p));
+        if (!p) {
+                PyErr_NoMemory();
+                return NULL;
+        }
+        p->test = test;
+        p->body = body;
+        p->lineno = lineno;
+        p->col_offset = col_offset;
+        return p;
+}
+
 arguments_ty
 arguments(asdl_seq * args, identifier vararg, identifier kwarg, asdl_seq *
           defaults, PyArena *arena)
@@ -2184,6 +2249,25 @@
                         goto failed;
                 Py_DECREF(value);
                 break;
+        case Switch_kind:
+                result = PyType_GenericNew(Switch_type, NULL, NULL);
+                if (!result) goto failed;
+                value = ast2obj_expr(o->v.Switch.value);
+                if (!value) goto failed;
+                if (PyObject_SetAttrString(result, "value", value) == -1)
+                        goto failed;
+                Py_DECREF(value);
+                value = ast2obj_list(o->v.Switch.cases, ast2obj_casestatement);
+                if (!value) goto failed;
+                if (PyObject_SetAttrString(result, "cases", value) == -1)
+                        goto failed;
+                Py_DECREF(value);
+                value = ast2obj_list(o->v.Switch.orelse, ast2obj_stmt);
+                if (!value) goto failed;
+                if (PyObject_SetAttrString(result, "orelse", value) == -1)
+                        goto failed;
+                Py_DECREF(value);
+                break;
         case Raise_kind:
                 result = PyType_GenericNew(Raise_type, NULL, NULL);
                 if (!result) goto failed;
@@ -2940,6 +3024,45 @@
 }
 
 PyObject*
+ast2obj_casestatement(void* _o)
+{
+        casestatement_ty o = (casestatement_ty)_o;
+        PyObject *result = NULL, *value = NULL;
+        if (!o) {
+                Py_INCREF(Py_None);
+                return Py_None;
+        }
+
+        result = PyType_GenericNew(casestatement_type, NULL, NULL);
+        if (!result) return NULL;
+        value = ast2obj_expr(o->test);
+        if (!value) goto failed;
+        if (PyObject_SetAttrString(result, "test", value) == -1)
+                goto failed;
+        Py_DECREF(value);
+        value = ast2obj_list(o->body, ast2obj_stmt);
+        if (!value) goto failed;
+        if (PyObject_SetAttrString(result, "body", value) == -1)
+                goto failed;
+        Py_DECREF(value);
+        value = ast2obj_int(o->lineno);
+        if (!value) goto failed;
+        if (PyObject_SetAttrString(result, "lineno", value) == -1)
+                goto failed;
+        Py_DECREF(value);
+        value = ast2obj_int(o->col_offset);
+        if (!value) goto failed;
+        if (PyObject_SetAttrString(result, "col_offset", value) == -1)
+                goto failed;
+        Py_DECREF(value);
+        return result;
+failed:
+        Py_XDECREF(value);
+        Py_XDECREF(result);
+        return NULL;
+}
+
+PyObject*
 ast2obj_arguments(void* _o)
 {
         arguments_ty o = (arguments_ty)_o;
@@ -3076,6 +3199,8 @@
         if (PyDict_SetItemString(d, "While", (PyObject*)While_type) < 0) return;
         if (PyDict_SetItemString(d, "If", (PyObject*)If_type) < 0) return;
         if (PyDict_SetItemString(d, "With", (PyObject*)With_type) < 0) return;
+        if (PyDict_SetItemString(d, "Switch", (PyObject*)Switch_type) < 0)
+            return;
         if (PyDict_SetItemString(d, "Raise", (PyObject*)Raise_type) < 0) return;
         if (PyDict_SetItemString(d, "TryExcept", (PyObject*)TryExcept_type) <
             0) return;
@@ -3185,6 +3310,8 @@
             (PyObject*)comprehension_type) < 0) return;
         if (PyDict_SetItemString(d, "excepthandler",
             (PyObject*)excepthandler_type) < 0) return;
+        if (PyDict_SetItemString(d, "casestatement",
+            (PyObject*)casestatement_type) < 0) return;
         if (PyDict_SetItemString(d, "arguments", (PyObject*)arguments_type) <
             0) return;
         if (PyDict_SetItemString(d, "keyword", (PyObject*)keyword_type) < 0)
Index: Include/graminit.h
===================================================================
--- Include/graminit.h	(revision 46818)
+++ Include/graminit.h	(working copy)
@@ -35,50 +35,51 @@
 #define assert_stmt 290
 #define compound_stmt 291
 #define if_stmt 292
-#define while_stmt 293
-#define for_stmt 294
-#define try_stmt 295
-#define with_stmt 296
-#define with_var 297
-#define except_clause 298
-#define suite 299
-#define testlist_safe 300
-#define old_test 301
-#define old_lambdef 302
-#define test 303
-#define or_test 304
-#define and_test 305
-#define not_test 306
-#define comparison 307
-#define comp_op 308
-#define expr 309
-#define xor_expr 310
-#define and_expr 311
-#define shift_expr 312
-#define arith_expr 313
-#define term 314
-#define factor 315
-#define power 316
-#define atom 317
-#define listmaker 318
-#define testlist_gexp 319
-#define lambdef 320
-#define trailer 321
-#define subscriptlist 322
-#define subscript 323
-#define sliceop 324
-#define exprlist 325
-#define testlist 326
-#define dictmaker 327
-#define classdef 328
-#define arglist 329
-#define argument 330
-#define list_iter 331
-#define list_for 332
-#define list_if 333
-#define gen_iter 334
-#define gen_for 335
-#define gen_if 336
-#define testlist1 337
-#define encoding_decl 338
-#define yield_expr 339
+#define switch_stmt 293
+#define while_stmt 294
+#define for_stmt 295
+#define try_stmt 296
+#define with_stmt 297
+#define with_var 298
+#define except_clause 299
+#define suite 300
+#define testlist_safe 301
+#define old_test 302
+#define old_lambdef 303
+#define test 304
+#define or_test 305
+#define and_test 306
+#define not_test 307
+#define comparison 308
+#define comp_op 309
+#define expr 310
+#define xor_expr 311
+#define and_expr 312
+#define shift_expr 313
+#define arith_expr 314
+#define term 315
+#define factor 316
+#define power 317
+#define atom 318
+#define listmaker 319
+#define testlist_gexp 320
+#define lambdef 321
+#define trailer 322
+#define subscriptlist 323
+#define subscript 324
+#define sliceop 325
+#define exprlist 326
+#define testlist 327
+#define dictmaker 328
+#define classdef 329
+#define arglist 330
+#define argument 331
+#define list_iter 332
+#define list_for 333
+#define list_if 334
+#define gen_iter 335
+#define gen_for 336
+#define gen_if 337
+#define testlist1 338
+#define encoding_decl 339
+#define yield_expr 340
Index: Include/Python-ast.h
===================================================================
--- Include/Python-ast.h	(revision 46818)
+++ Include/Python-ast.h	(working copy)
@@ -28,6 +28,8 @@
 
 typedef struct _excepthandler *excepthandler_ty;
 
+typedef struct _casestatement *casestatement_ty;
+
 typedef struct _arguments *arguments_ty;
 
 typedef struct _keyword *keyword_ty;
@@ -62,10 +64,10 @@
 enum _stmt_kind {FunctionDef_kind=1, ClassDef_kind=2, Return_kind=3,
                   Delete_kind=4, Assign_kind=5, AugAssign_kind=6, Print_kind=7,
                   For_kind=8, While_kind=9, If_kind=10, With_kind=11,
-                  Raise_kind=12, TryExcept_kind=13, TryFinally_kind=14,
-                  Assert_kind=15, Import_kind=16, ImportFrom_kind=17,
-                  Exec_kind=18, Global_kind=19, Expr_kind=20, Pass_kind=21,
-                  Break_kind=22, Continue_kind=23};
+                  Switch_kind=12, Raise_kind=13, TryExcept_kind=14,
+                  TryFinally_kind=15, Assert_kind=16, Import_kind=17,
+                  ImportFrom_kind=18, Exec_kind=19, Global_kind=20,
+                  Expr_kind=21, Pass_kind=22, Break_kind=23, Continue_kind=24};
 struct _stmt {
         enum _stmt_kind kind;
         union {
@@ -133,6 +135,12 @@
                 } With;
                 
                 struct {
+                        expr_ty value;
+                        asdl_seq *cases;
+                        asdl_seq *orelse;
+                } Switch;
+                
+                struct {
                         expr_ty type;
                         expr_ty inst;
                         expr_ty tback;
@@ -331,6 +339,13 @@
         int col_offset;
 };
 
+struct _casestatement {
+        expr_ty test;
+        asdl_seq *body;
+        int lineno;
+        int col_offset;
+};
+
 struct _arguments {
         asdl_seq *args;
         identifier vararg;
@@ -374,6 +389,8 @@
            col_offset, PyArena *arena);
 stmt_ty With(expr_ty context_expr, expr_ty optional_vars, asdl_seq * body, int
              lineno, int col_offset, PyArena *arena);
+stmt_ty Switch(expr_ty value, asdl_seq * cases, asdl_seq * orelse, int lineno,
+               int col_offset, PyArena *arena);
 stmt_ty Raise(expr_ty type, expr_ty inst, expr_ty tback, int lineno, int
               col_offset, PyArena *arena);
 stmt_ty TryExcept(asdl_seq * body, asdl_seq * handlers, asdl_seq * orelse, int
@@ -435,6 +452,8 @@
                                PyArena *arena);
 excepthandler_ty excepthandler(expr_ty type, expr_ty name, asdl_seq * body, int
                                lineno, int col_offset, PyArena *arena);
+casestatement_ty casestatement(expr_ty test, asdl_seq * body, int lineno, int
+                               col_offset, PyArena *arena);
 arguments_ty arguments(asdl_seq * args, identifier vararg, identifier kwarg,
                        asdl_seq * defaults, PyArena *arena);
 keyword_ty keyword(identifier arg, expr_ty value, PyArena *arena);
Index: Grammar/Grammar
===================================================================
--- Grammar/Grammar	(revision 46818)
+++ Grammar/Grammar	(working copy)
@@ -73,8 +73,9 @@
 exec_stmt: 'exec' expr ['in' test [',' test]]
 assert_stmt: 'assert' test [',' test]
 
-compound_stmt: if_stmt | while_stmt | for_stmt | try_stmt | with_stmt | funcdef | classdef
+compound_stmt: if_stmt | while_stmt | for_stmt | try_stmt | with_stmt | funcdef | classdef | switch_stmt
 if_stmt: 'if' test ':' suite ('elif' test ':' suite)* ['else' ':' suite]
+switch_stmt: ('switch' expr ':' NEWLINE INDENT (('case' expr ':' suite)+ ['else' ':' suite] | 'else' ':' suite) DEDENT)
 while_stmt: 'while' test ':' suite ['else' ':' suite]
 for_stmt: 'for' exprlist 'in' testlist ':' suite ['else' ':' suite]
 try_stmt: ('try' ':' suite
Index: Parser/Python.asdl
===================================================================
--- Parser/Python.asdl	(revision 46818)
+++ Parser/Python.asdl	(working copy)
@@ -26,6 +26,7 @@
 	      | While(expr test, stmt* body, stmt* orelse)
 	      | If(expr test, stmt* body, stmt* orelse)
 	      | With(expr context_expr, expr? optional_vars, stmt* body)
+		  | Switch(expr value, casestatement* cases, stmt* orelse)
 
 	      -- 'type' is a bad name
 	      | Raise(expr? type, expr? inst, expr? tback)
@@ -103,7 +104,9 @@
         --                ast is exposed to Python.
 	excepthandler = (expr? type, expr? name, stmt* body, int lineno,
 	                 int col_offset)
-
+	
+	casestatement = (expr test, stmt* body, int lineno, int col_offset)
+	
 	arguments = (expr* args, identifier? vararg, 
 		     identifier? kwarg, expr* defaults)
 
Index: Lib/distutils/extension.py
===================================================================
--- Lib/distutils/extension.py	(revision 46818)
+++ Lib/distutils/extension.py	(working copy)
@@ -185,31 +185,31 @@
                 continue
 
             suffix = os.path.splitext(word)[1]
-            switch = word[0:2] ; value = word[2:]
+            switch_word = word[0:2] ; value = word[2:]
 
             if suffix in (".c", ".cc", ".cpp", ".cxx", ".c++", ".m", ".mm"):
                 # hmm, should we do something about C vs. C++ sources?
                 # or leave it up to the CCompiler implementation to
                 # worry about?
                 ext.sources.append(word)
-            elif switch == "-I":
+            elif switch_word == "-I":
                 ext.include_dirs.append(value)
-            elif switch == "-D":
+            elif switch_word == "-D":
                 equals = string.find(value, "=")
                 if equals == -1:        # bare "-DFOO" -- no value
                     ext.define_macros.append((value, None))
                 else:                   # "-DFOO=blah"
                     ext.define_macros.append((value[0:equals],
                                               value[equals+2:]))
-            elif switch == "-U":
+            elif switch_word == "-U":
                 ext.undef_macros.append(value)
-            elif switch == "-C":        # only here 'cause makesetup has it!
+            elif switch_word == "-C":        # only here 'cause makesetup has it!
                 ext.extra_compile_args.append(word)
-            elif switch == "-l":
+            elif switch_word == "-l":
                 ext.libraries.append(value)
-            elif switch == "-L":
+            elif switch_word == "-L":
                 ext.library_dirs.append(value)
-            elif switch == "-R":
+            elif switch_word == "-R":
                 ext.runtime_library_dirs.append(value)
             elif word == "-rpath":
                 append_next_word = ext.runtime_library_dirs
@@ -217,7 +217,7 @@
                 append_next_word = ext.extra_link_args
             elif word == "-Xcompiler":
                 append_next_word = ext.extra_compile_args
-            elif switch == "-u":
+            elif switch_word == "-u":
                 ext.extra_link_args.append(word)
                 if not value:
                     append_next_word = ext.extra_link_args

From bioinformed at gmail.com  Sat Jun 10 19:51:22 2006
From: bioinformed at gmail.com (Kevin Jacobs <jacobs@bioinformed.com>)
Date: Sat, 10 Jun 2006 13:51:22 -0400
Subject: [Python-Dev] Segmentation fault in collections.defaultdict
Message-ID: <2e1434c10606101051w4f00ee08j5da64ff0577a3935@mail.gmail.com>

An aside before I report this bug:

_I_HATE_SOURCEFORGE_.  If it doesn't bloody accept anonymous bug reports
then it bloody well shouldn't let you type in a nice, detailed, well
thought-out report and then toss it in the toilet when you hit Submit, and
also not allow one to dive in after it by using the browser back button to
recover the text.   AAARRGGHH!!

Anyhow, back to our regularly scheduled bug report, which as we know should
have gone to Sourceforge, but isn't because I don't have time for more of
that particular form of masochism.  (If that doesn't sit well with you, then
feel free to ignore any scribblings below.)

Try this at home:
import collections
d=collections.defaultdict(int)
d.iterkeys().next()  # Seg fault
d.iteritems().next() # Seg fault
d.itervalues().next() # Fine and dandy

Python version:
Python 2.5a2 (trunk:46822M, Jun 10 2006, 13:14:15)
[GCC 4.0.2 20050901 (prerelease) (SUSE Linux)] on linux2

Discussion:
The segmentation fault only occurs where we'd expect StopIteration to be
raised; i.e., if the defaultdict has 3 elements, then only the fourth call
will result in a segmentation fault. Based on the following traceback, the
failure occurs at dictobject.c:dictiter_iternextkey:2204, which attempts to
INCREF the next non-existent key in the sequence.  Thus the current code
does not properly detect when it has run out of elements.

Since I don't have an intimate knowledge of the internals of dictobject.c or
the new defaultdict implementation, the underlying problem is not immediately
apparent to me.  I wish I had more time to follow up on this, but my "random
poking around time" is already overdrawn and I must get back to less
enjoyable pursuits.


Traceback:
> gdb ./python
GNU gdb 6.3
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain
conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "x86_64-suse-linux"...Using host libthread_db
library "/lib64/tls/libthread_db.so.1".

(gdb) r t.py
Starting program: src/python-trunk/python t.py
[Thread debugging using libthread_db enabled]
[New Thread 46912504205344 (LWP 12545)]

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 46912504205344 (LWP 12545)]
dictiter_iternextkey (di=0x2aaaaab980a0) at dictobject.c:2204
2204            Py_INCREF(key);
(gdb) back
#0  dictiter_iternextkey (di=0x2aaaaab980a0) at dictobject.c:2204
#1  0x0000000000460366 in wrap_next (self=<value optimized out>, args=<value
optimized out>, wrapped=<value optimized out>)
    at typeobject.c:3846
#2  0x0000000000415adc in PyObject_Call (func=0x2aaaaab90a50,
arg=0x2aaaaaac2050, kw=0x0) at abstract.c:1802
#3  0x0000000000481217 in PyEval_EvalFrameEx (f=0x6df8f0, throwflag=<value
optimized out>) at ceval.c:3776
#4  0x0000000000483a81 in PyEval_EvalCodeEx (co=0x2aaaaab7daf8,
globals=<value optimized out>, locals=<value optimized out>, args=0x0,
    argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at
ceval.c:2832
#5  0x0000000000483ce2 in PyEval_EvalCode (co=<value optimized out>,
globals=<value optimized out>, locals=<value optimized out>)
    at ceval.c:494
#6  0x00000000004a52f7 in PyRun_FileExFlags (fp=0x654010,
filename=0x7fffffc96546 "t.py", start=<value optimized out>,
globals=0x677070,
    locals=0x677070, closeit=1, flags=0x7fffffc95300) at pythonrun.c:1232
#7  0x00000000004a5612 in PyRun_SimpleFileExFlags (fp=<value optimized out>,
filename=0x7fffffc96546 "t.py", closeit=1,
    flags=0x7fffffc95300) at pythonrun.c:856
#8  0x0000000000411cbd in Py_Main (argc=<value optimized out>,
argv=0x7fffffc95418) at main.c:494
#9  0x00002aaaab0515aa in __libc_start_main () from /lib64/tls/libc.so.6
#10 0x00000000004112ba in _start () at start.S:113

From robert.kern at gmail.com  Sat Jun 10 22:56:08 2006
From: robert.kern at gmail.com (Robert Kern)
Date: Sat, 10 Jun 2006 15:56:08 -0500
Subject: [Python-Dev] a note in random.shuffle.__doc__ ...
In-Reply-To: <1CC07EFE-45B2-4AA4-BEFB-0CFFE4CDB1F9@gmail.com>
References: <1CC07EFE-45B2-4AA4-BEFB-0CFFE4CDB1F9@gmail.com>
Message-ID: <e6fbl4$c2j$1@sea.gmane.org>

Alex Martelli wrote:
> ...claims:
> 
> Note that for even rather small len(x), the total number of
> permutations of x is larger than the period of most random number
> generators; this implies that "most" permutations of a long
> sequence can never be generated.
> 
> Now -- why would the behavior of "most" random number generators be  
> relevant here?  The module's docs claim, for its specific Mersenne  
> Twister generator, a period of 2**19997-1, which is (e.g.) a  
> comfortable  
> 130128673800676351960752618754658780303412233749552410245124492452914636 
> 028095467780746435724876612802011164778042889281426609505759158196749438 
> 742986040468247017174321241233929215223326801091468184945617565998894057 
> 859403269022650639413550466514556014961826309062543 times larger than  
> the number of permutations of 2000 items, which doesn't really feel  
> to me like a "rather small len(x)" in this context (what I'm most  
> often shuffling is just a pack of cards -- len(x)==52 -- for example).

I wouldn't be too comfortable with that margin. The fun thing about factorials
is that they add up really quickly. The crossover point is between 2080 and 2081.


In [28]: from scipy import *

In [29]: def f(x):
   ....:     return special.gammaln(x-1) - 19937*log(2)
   ....:

In [30]: optimize.brentq(f, 2000, 3000)
Out[30]: 2082.4031300820125

In [31]: import gmpy

In [32]: mtperiod = 2**19937 - 1

In [33]: for i in range(2075, 2085):
   ....:     if gmpy.fac(i) > mtperiod:
   ....:         print i
   ....:         break
   ....:
   ....:
2081
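
A pure-Python cross-check of the same boundary, shown here as a sketch (not
part of Robert's session); it relies only on the built-in arbitrary-precision
integers:

    period = 2**19937 - 1
    fact, n = 1, 1
    while fact <= period:
        n += 1
        fact *= n
    print(n)   # first n with n! > 2**19937 - 1; agrees with the 2081 above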


I think that documenting this boundary might be worthwhile rather than making
vague references to "small len(x)." A note along the lines of Josiah wrote about
there being no guarantees despite period size would also be useful.

OTOH, isn't the exact PRNG algorithm considered an implementation detail? It
certainly was when the module migrated from Wichmann-Hill to the Mersenne
Twister. OTGH, I don't foresee the random module ever using an algorithm with
worse characteristics than the Mersenne Twister.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth."
  -- Umberto Eco


From tim.peters at gmail.com  Sat Jun 10 23:31:39 2006
From: tim.peters at gmail.com (Tim Peters)
Date: Sat, 10 Jun 2006 17:31:39 -0400
Subject: [Python-Dev] a note in random.shuffle.__doc__ ...
In-Reply-To: <1CC07EFE-45B2-4AA4-BEFB-0CFFE4CDB1F9@gmail.com>
References: <1CC07EFE-45B2-4AA4-BEFB-0CFFE4CDB1F9@gmail.com>
Message-ID: <1f7befae0606101431w2746d711kda5ce3fefe145f4d@mail.gmail.com>

[Alex Martelli]
> ...claims:
>
> Note that for even rather small len(x), the total number of
> permutations of x is larger than the period of most random number
> generators; this implies that "most" permutations of a long
> sequence can never be generated.
>
> Now -- why would the behavior of "most" random number generators be
> relevant here?  The module's docs claim, for its specific Mersenne
> Twister generator, a period of 2**19997-1, which is (e.g.)

Oops!  That's wrong.  The period is 2**19937-1.

> a comfortable
> 130128673800676351960752618754658780303412233749552410245124492452914636
> 028095467780746435724876612802011164778042889281426609505759158196749438
> 742986040468247017174321241233929215223326801091468184945617565998894057
> 859403269022650639413550466514556014961826309062543 times larger than
> the number of permutations of 2000 items, which doesn't really feel
> to me like a "rather small len(x)" in this context (what I'm most
> often shuffling is just a pack of cards -- len(x)==52 -- for example).
>
> I suspect that the note is just a fossil from a time when the default
> random number generator was Wichmann-Hill, with a much shorter
> period.  Should this note just be removed, or instead somehow
> reworded to point out that this is not in fact a problem for the
> module's current default random number generator?  Opinions welcome!

It should be removed now.  I'll do that :-)

WH's period was indeed so short that it couldn't generate even a tiny
fraction of the permutations of a deck of cards, and that's why the
note was added.

While a long period is necessary to get a shot at all permutations,
it's not sufficient, and I don't know what the true story is wrt the
Twister.  For example, a miserable PRNG that returns

   0.1,
   0.1, 0.2,
   0.1, 0.2, 0.2,
   0.1, 0.2, 0.2, 0.2,
   0.1, 0.2, 0.2, 0.2, 0.2,
   ...

has infinite period, but has few (O(N)) distinct subsequences of
length N.  That's a failure of so-called equidistribution in N
dimensions (for sufficiently large N, some N-vectors appear more often
than others across the whole period).  "A long" period is necessary
but not sufficient for high-dimensional equidistribution.

Off the top of my head, then, since the Twister is provably
equidistributed in 623 dimensions to 32-bit accuracy, I expect it
should be able to "fairly" generate all permutations of a sequence of
<= 623 elements (equidistribution in N dimensions implies
equidistribution in all dimensions <= N).  So I'm happy to leave a
warning out until the casinos switch to 12-deck blackjack ;-)

From aleaxit at gmail.com  Sat Jun 10 23:37:16 2006
From: aleaxit at gmail.com (Alex Martelli)
Date: Sat, 10 Jun 2006 14:37:16 -0700
Subject: [Python-Dev] a note in random.shuffle.__doc__ ...
In-Reply-To: <20060610125305.F2B5.JCARLSON@uci.edu>
References: <1CC07EFE-45B2-4AA4-BEFB-0CFFE4CDB1F9@gmail.com>
	<20060610124332.F2B2.JCARLSON@uci.edu>
	<20060610125305.F2B5.JCARLSON@uci.edu>
Message-ID: <EEEB13F7-072A-4B17-A99E-3463B53AB434@gmail.com>


On Jun 10, 2006, at 1:08 PM, Josiah Carlson wrote:

> Josiah Carlson <jcarlson at uci.edu> wrote:
>>
>> Alex Martelli <aleaxit at gmail.com> wrote:
>>>
>>> ...claims:
>>>
>>> Note that for even rather small len(x), the total number of
>>> permutations of x is larger than the period of most random number
>>> generators; this implies that "most" permutations of a long
>>> sequence can never be generated.
>> [snip]
>>> I suspect that the note is just a fossil from a time when the  
>>> default
>>> random number generator was Wichmann-Hill, with a much shorter
>>> period.  Should this note just be removed, or instead somehow
>>> reworded to point out that this is not in fact a problem for the
>>> module's current default random number generator?  Opinions welcome!
>>
>> I'm recovering from a migraine, but here are my thoughts on the  
>> topic...
>>
>> The number of permutations of n items is n!, which is > (n/2)^(n/2).
>> Solve:  2**19997 < (n/2)^(n/2)
>>         log_2(2**19997) < log_2((n/2)^(n/2))
>>         19997 < (n/2)*log(n/2)
>>
>> Certainly with n >= 4096, the above holds (2048 * 11 = 22528)
>>
>>  - Josiah
>
> I would also point out that even if MT had a larger period, there  
> would
> still be no guarantee that all permutations of a given sequence  
> would be
> able to be generated from the PRNG given some arbitrary internal
> state.

Sure.  And an n of 2081 happens to suffice:

 >>> period = 2**19937
 >>> while gmpy.fac(i) < period: i = i +1
...
 >>> i
2081

Still, the note, as worded, is misleading -- it strongly suggests  
that for "even small len(x)" (no mention of whether that means dozens  
or thousands) the RNG can't generate all permutations, with no proof  
either way and just a misleading hint.  "The values of N such that  
the RNG can generate all permutations of a sequence of len(N) are not  
precisely known at this time" might be closer to the truth (if this  
is, indeed, the state of our collective knowledge).

Alex




From terry at jon.es  Sun Jun 11 00:01:40 2006
From: terry at jon.es (Terry Jones)
Date: Sun, 11 Jun 2006 00:01:40 +0200
Subject: [Python-Dev] a note in random.shuffle.__doc__ ...
In-Reply-To: Your message at 14:37:16 on Saturday, 10 June 2006
References: <1CC07EFE-45B2-4AA4-BEFB-0CFFE4CDB1F9@gmail.com>
	<20060610124332.F2B2.JCARLSON@uci.edu>
	<20060610125305.F2B5.JCARLSON@uci.edu>
	<EEEB13F7-072A-4B17-A99E-3463B53AB434@gmail.com>
Message-ID: <17547.16708.105058.906604@terry.jones.tc>

That doc note should surely be removed.  Perhaps it's an artifact from some
earlier shuffle algorithm.

The current algorithm (which is simple, well known, and which produces all
permutations with equal probability) only calls the RNG len(x) - 1 times.

Terry

From theller at python.net  Sun Jun 11 00:25:51 2006
From: theller at python.net (Thomas Heller)
Date: Sun, 11 Jun 2006 00:25:51 +0200
Subject: [Python-Dev] beta1 coming real soon
In-Reply-To: <20060609125733.28747052@resist.wooz.org>
References: <ee2a432c0606082323t1ee013b6nd91771fb461acede@mail.gmail.com>	<e6b69s$pqf$1@sea.gmane.org>	<bbaeab100606090954g175c85d1r3f641a71505ebc9e@mail.gmail.com>
	<20060609125733.28747052@resist.wooz.org>
Message-ID: <e6fgte$p2n$1@sea.gmane.org>

Barry Warsaw wrote:
> On Fri, 9 Jun 2006 09:54:29 -0700
> "Brett Cannon" <brett at python.org> wrote:
> 
>> Do enough people use Google Calendar or a calendar app that support
>> iCal feeds that it would be useful for someone to maintain a Gcal
>> calendar that has the various Python dev related dates in it?
> 
> Great idea!

Won't help myself too much - I use a sheet of paper on the wall as a calendar.

Thomas


From johann at rocholl.net  Sun Jun 11 00:28:11 2006
From: johann at rocholl.net (Johann C. Rocholl)
Date: Sun, 11 Jun 2006 00:28:11 +0200
Subject: [Python-Dev] Add pure python PNG writer module to stdlib?
Message-ID: <8233478f0606101528pb43e9a2h572c91f112351e62@mail.gmail.com>

I'm working on a simple module to write PNG image files in pure python.
Adding it to the standard library would be useful for people who want
to create images on web server installations without gd and imlib, or
on platforms where the netpbm tools are not easily available.

Does anybody find this idea interesting?
Does anybody think it could go into stdlib before the feature freeze for 2.5?

The module consists of only one file. It imports only sys, zlib,
struct (maybe re for testing).
Some benchmarks for comparison with the pnmtopng program (from
netpbm), encoding a plain RGB file with 24 bits per pixel, input file
size 11520017 bytes (11M), measured with the 'time' command, including
Python interpreter start-up:
                        pnmtopng        png.py
straight encoding       1.31 seconds    0.72 seconds
resulting file size     342953 bytes    292885 bytes
interlaced encoding     3.78 seconds    4.88 seconds
resulting file size     422441 bytes    346872 bytes

The source code of the module lives here:
http://svn.browsershots.org/trunk/shotfactory/lib/image/png.py
http://trac.browsershots.org/browser/trunk/shotfactory/lib/image/png.py

I am willing to maintain the module for 5+ years, as it is a small but
important part of my main project. I am also willing to write latex
documentation and tests for the module, and I think I could do that
within the next three days. The module is licensed under the Apache
License 2.0, and I am ready to sign a contributor agreement for the
PSF.

I will probably add support for more different PNG formats, especially
alpha channel transparency, and then maybe color palettes. I don't
plan to add PNG decoding because it would make the module much larger
and rather complex.

Sorry if this contribution seems brash. Perhaps it is easy enough to
download and deploy my module separately. But I thought that if there
is a chance to get it in before beta1, I should not hesitate and just
ask.

Cheers,
Johann

From fuzzyman at voidspace.org.uk  Sun Jun 11 00:47:55 2006
From: fuzzyman at voidspace.org.uk (Michael Foord)
Date: Sat, 10 Jun 2006 23:47:55 +0100
Subject: [Python-Dev] Add pure python PNG writer module to stdlib?
In-Reply-To: <8233478f0606101528pb43e9a2h572c91f112351e62@mail.gmail.com>
References: <8233478f0606101528pb43e9a2h572c91f112351e62@mail.gmail.com>
Message-ID: <448B4C1B.2040601@voidspace.org.uk>

Johann C. Rocholl wrote:
> I'm working on simple module to write PNG image files in pure python.
> Adding it to the standard library would be useful for people who want
> to create images on web server installations without gd and imlib, or
> on platforms where the netpbm tools are not easily available.
>
> Does anybody find this idea interesting?
> Does anybody think it could go into stdlib before the feature freeze for 2.5?
>
>   
+1

Michael Foord

> The module consists of only one file. It imports only sys, zlib,
> struct (maybe re for testing).
> Some benchmarks for comparison with the pnmtopng program (from
> netpbm), encoding a plain RGB file with 24 bits per pixel, input file
> size 11520017 bytes (11M), measured with the 'time' command, including
> Python interpreter start-up:
>                         pnmtopng        png.py
> straight encoding       1.31 seconds    0.72 seconds
> resulting file size     342953 bytes    292885 bytes
> interlaced encoding     3.78 seconds    4.88 seconds
> resulting file size     422441 bytes    346872 bytes
>
> The source code of the module lives here:
> http://svn.browsershots.org/trunk/shotfactory/lib/image/png.py
> http://trac.browsershots.org/browser/trunk/shotfactory/lib/image/png.py
>
> I am willing to maintain the module for 5+ years, as it is a small but
> important part of my main project. I am also willing to write latex
> documentation and tests for the module, and I think I could do that
> within the next three days. The module is licensed under the Apache
> License 2.0, and I am ready to sign a contributor agreement for the
> PSF.
>
> I will probably add support for more different PNG formats, especially
> alpha channel transparency, and then maybe color palettes. I don't
> plan to add PNG decoding because it would make the module much larger
> and rather complex.
>
> Sorry if this contribution seems brash. Perhaps it is easy enough to
> download and deploy my module separately. But I thought that if there
> is a chance to get it in before beta1, I should not hesitate and just
> ask.
>
> Cheers,
> Johann
>
>   


From tim.peters at gmail.com  Sun Jun 11 00:44:01 2006
From: tim.peters at gmail.com (Tim Peters)
Date: Sat, 10 Jun 2006 18:44:01 -0400
Subject: [Python-Dev] a note in random.shuffle.__doc__ ...
In-Reply-To: <17547.16708.105058.906604@terry.jones.tc>
References: <1CC07EFE-45B2-4AA4-BEFB-0CFFE4CDB1F9@gmail.com>
	<20060610124332.F2B2.JCARLSON@uci.edu>
	<20060610125305.F2B5.JCARLSON@uci.edu>
	<EEEB13F7-072A-4B17-A99E-3463B53AB434@gmail.com>
	<17547.16708.105058.906604@terry.jones.tc>
Message-ID: <1f7befae0606101544q1827489ch41717eaabf6ca76a@mail.gmail.com>

[Terry Jones]
> That doc note should surely be removed.  Perhaps it's an artifact from some
> earlier shuffle algorithm.

No, it's an artifact from an earlier PRNG.  The shuffle algorithm
hasn't changed.

> The current algorithm (which is simple, well known,

Both true.

> and which produces all permutations with equal probability)

That needs proof.  Assuming a true random number generator, such a
proof is easy.  Using a deterministic PRNG, if the period is "too
short" it's dead easy (see below) to prove that it can't produce all
permutations (let alone with equal probability).

> only calls the RNG len(x) - 1 times.

And that's irrelevant.  When a PRNG has period P, then _no_
deterministic algorithm (for shuffling or anything else) using that
PRNG can possibly produce more than P distinct outcomes:  the position
in the period when you start the algorithm entirely determines the
outcome, and there are only P _possible_ starting positions.  For the
older WH PRNG, P was much smaller than 52!, so it was just that easy
to _know_ that not all deck-of-card shufflings could be produced.  The
newer Mersenne Twister PRNG has a vastly larger period, and more to
the point has provably excellent equidistribution properties in 52
dimensions.
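
To make the "at most P distinct outcomes" point concrete, here is a toy
sketch (not from Tim's message) that drives a Fisher-Yates shuffle, the same
algorithm random.shuffle uses, from a deliberately tiny generator:

    class TinyPRNG(object):
        """Toy linear congruential generator with at most 11 internal states."""
        def __init__(self, seed):
            self.state = seed % 11
        def random(self):
            self.state = (7 * self.state + 3) % 11
            return self.state / 11.0   # a float in [0, 1)

    def shuffled(seq, rng):
        """Fisher-Yates shuffle driven by rng.random(), as in random.shuffle."""
        x = list(seq)
        for i in reversed(range(1, len(x))):
            j = int(rng.random() * (i + 1))
            x[i], x[j] = x[j], x[i]
        return tuple(x)

    # A 5-element list has 5! = 120 permutations, but starting the shuffle
    # from every possible internal state yields at most 11 distinct results.
    outcomes = set(shuffled('abcde', TinyPRNG(seed)) for seed in range(11))
    print(len(outcomes))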

From skip at pobox.com  Sun Jun 11 00:53:14 2006
From: skip at pobox.com (skip at pobox.com)
Date: Sat, 10 Jun 2006 17:53:14 -0500
Subject: [Python-Dev] Switch statement
In-Reply-To: <20060610142736.GA19094@21degrees.com.au>
References: <20060610142736.GA19094@21degrees.com.au>
Message-ID: <17547.19802.361151.705599@montanaro.dyndns.org>


    Thomas> As the subject of this e-mail says, the attached patch adds a
    Thomas> "switch" statement to the Python language.

Thanks for the contribution.  I patched my sandbox and it built just fine.
I'm going out of town for a couple weeks, so I'll point out what everyone
else is thinking then duck out of the way:

    * Aside from the modified Grammar file there is no documentation.
    * There are no test cases.
    * Can you submit a patch on SourceForge?

Other than that, my trivial first attempt worked fine:

    #!/usr/bin/env python

    switch raw_input("enter a, b or c: "):
        case 'a':
            print 'yay! an a!'
        case 'b':
            print 'yay! a b!'
        case 'c':
            print 'yay! a c!'
        else:
            print 'hey dummy! I said a, b or c!'

(Need to teach python-mode about the switch and case keywords.)

You mentioned:

    Thomas> I got a bit lost as to why the SWITCH opcode is necessary for
    Thomas> the implementation of the PEP. The reasoning seems to be
    Thomas> improving performance, but I'm not sure how a new opcode could
    Thomas> improve performance.

Your implementation is straightforward, but uses a series of DUP_TOP and
COMPARE_OP instructions to compare each alternative expression to the
initial expression.  In many other languages the expression associated with
the case would be restricted to be a constant expression so that at compile
time a jump table or dictionary lookup could be used to jump straight to the
desired case.

Skip
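
For comparison, the constant-time dispatch Skip alludes to is roughly what a
dictionary of callables already gives; a sketch follows (the handler names
are invented for illustration and are not part of the patch):

    def handle_a():
        return 'yay! an a!'

    def handle_default():
        return 'hey dummy! I said a, b or c!'

    # One hashed lookup replaces the chain of DUP_TOP/COMPARE_OP tests.
    dispatch = {
        'a': handle_a,
        'b': lambda: 'yay! a b!',
        'c': lambda: 'yay! a c!',
    }

    choice = raw_input("enter a, b or c: ")
    print(dispatch.get(choice, handle_default)())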

From terry at jon.es  Sun Jun 11 01:28:34 2006
From: terry at jon.es (Terry Jones)
Date: Sun, 11 Jun 2006 01:28:34 +0200
Subject: [Python-Dev] a note in random.shuffle.__doc__ ...
In-Reply-To: Your message at 18:44:01 on Saturday, 10 June 2006
References: <1CC07EFE-45B2-4AA4-BEFB-0CFFE4CDB1F9@gmail.com>
	<20060610124332.F2B2.JCARLSON@uci.edu>
	<20060610125305.F2B5.JCARLSON@uci.edu>
	<EEEB13F7-072A-4B17-A99E-3463B53AB434@gmail.com>
	<17547.16708.105058.906604@terry.jones.tc>
	<1f7befae0606101544q1827489ch41717eaabf6ca76a@mail.gmail.com>
Message-ID: <17547.21922.467373.346612@terry.jones.tc>

>>>>> "Tim" == Tim Peters <tim.peters at gmail.com> writes:
Tim> [Terry Jones]
>> and which produces all permutations with equal probability)

Tim> That needs proof.  Assuming a true random number generator, such a
Tim> proof is easy.  Using a deterministic PRNG, if the period is "too
Tim> short" it's dead easy (see below) to prove that it can't produce all
Tim> permutations (let alone with equal probability).

OK, thanks. Sorry for the noise.

Terry

From brett at python.org  Sun Jun 11 01:35:10 2006
From: brett at python.org (Brett Cannon)
Date: Sat, 10 Jun 2006 16:35:10 -0700
Subject: [Python-Dev] Add pure python PNG writer module to stdlib?
In-Reply-To: <8233478f0606101528pb43e9a2h572c91f112351e62@mail.gmail.com>
References: <8233478f0606101528pb43e9a2h572c91f112351e62@mail.gmail.com>
Message-ID: <bbaeab100606101635w2fd30d50ib469912e3ff68d0d@mail.gmail.com>

On 6/10/06, Johann C. Rocholl <johann at rocholl.net> wrote:
>
> I'm working on simple module to write PNG image files in pure python.
> Adding it to the standard library would be useful for people who want
> to create images on web server installations without gd and imlib, or
> on platforms where the netpbm tools are not easily available.
>
> Does anybody find this idea interesting?



Yes, although I wouldn't want an interface taking in strings but something
more like an iterator that returns each row which itself contains int
triples.  In other words more array-based than string based.

> Does anybody think it could go into stdlib before the feature freeze for 2.5?


Nope.  To get added to the stdlib there needs to be support from the
community that the module is useful and best-of-breed.  Try posting on
c.l.py and see if people pick it up and like it.  No way that is going to
happen before b1.  But there is always 2.6.

-Brett

> The module consists of only one file. It imports only sys, zlib,
> struct (maybe re for testing).
> Some benchmarks for comparison with the pnmtopng program (from
> netpbm), encoding a plain RGB file with 24 bits per pixel, input file
> size 11520017 bytes (11M), measured with the 'time' command, including
> Python interpreter start-up:
>                         pnmtopng        png.py
> straight encoding       1.31 seconds    0.72 seconds
> resulting file size     342953 bytes    292885 bytes
> interlaced encoding     3.78 seconds    4.88 seconds
> resulting file size     422441 bytes    346872 bytes
>
> The source code of the module lives here:
> http://svn.browsershots.org/trunk/shotfactory/lib/image/png.py
> http://trac.browsershots.org/browser/trunk/shotfactory/lib/image/png.py
>
> I am willing to maintain the module for 5+ years, as it is a small but
> important part of my main project. I am also willing to write latex
> documentation and tests for the module, and I think I could do that
> within the next three days. The module is licensed under the Apache
> License 2.0, and I am ready to sign a contributor agreement for the
> PSF.
>
> I will probably add support for more different PNG formats, especially
> alpha channel transparency, and then maybe color palettes. I don't
> plan to add PNG decoding because it would make the module much larger
> and rather complex.
>
> Sorry if this contribution seems brash. Perhaps it is easy enough to
> download and deploy my module separately. But I thought that if there
> is a chance to get it in before beta1, I should not hesitate and just
> ask.
>
> Cheers,
> Johann
>

From greg.ewing at canterbury.ac.nz  Sun Jun 11 02:29:35 2006
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Sun, 11 Jun 2006 12:29:35 +1200
Subject: [Python-Dev] a note in random.shuffle.__doc__ ...
In-Reply-To: <e6fbl4$c2j$1@sea.gmane.org>
References: <1CC07EFE-45B2-4AA4-BEFB-0CFFE4CDB1F9@gmail.com>
	<e6fbl4$c2j$1@sea.gmane.org>
Message-ID: <448B63EF.60304@canterbury.ac.nz>

Robert Kern wrote:

> OTOH, isn't the exact PRNG algorithm considered an implementation detail?

It's questionable whether the PRNG being used *should* be
an implementation detail. To anyone who cares even a little
bit about its quality, knowing the algorithm (or at least
some info about it, such as the period) is vital.

PRNGs are not like sorting algorithms, where different
ones all give the same result in the end. Different PRNGs
have *wildly* different characteristics.

--
Greg

From greg.ewing at canterbury.ac.nz  Sun Jun 11 02:35:04 2006
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Sun, 11 Jun 2006 12:35:04 +1200
Subject: [Python-Dev] a note in random.shuffle.__doc__ ...
In-Reply-To: <1f7befae0606101431w2746d711kda5ce3fefe145f4d@mail.gmail.com>
References: <1CC07EFE-45B2-4AA4-BEFB-0CFFE4CDB1F9@gmail.com>
	<1f7befae0606101431w2746d711kda5ce3fefe145f4d@mail.gmail.com>
Message-ID: <448B6538.1070509@canterbury.ac.nz>

Tim Peters wrote:

> Off the top of my head, then, since the Twister is provably
> equidistributed in 623 dimensions to 32-bit accuracy, I expect it
> should be able to "fairly" generate all permutations of a sequence of
> <= 623 elements (equidistribution in N dimensions implies
> equidistribution in all dimensions <= N).  So I'm happy to leave a
> warning out until the casinos switch to 12-deck blackjack ;-)

But isn't the problem with the Twister that for *some
initial states* the period could be much *shorter* than
the theoretical maximum?

Or is the probability of getting such an initial state
too small to worry about?

--
Greg

From greg.ewing at canterbury.ac.nz  Sun Jun 11 02:39:42 2006
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Sun, 11 Jun 2006 12:39:42 +1200
Subject: [Python-Dev] a note in random.shuffle.__doc__ ...
In-Reply-To: <17547.16708.105058.906604@terry.jones.tc>
References: <1CC07EFE-45B2-4AA4-BEFB-0CFFE4CDB1F9@gmail.com>
	<20060610124332.F2B2.JCARLSON@uci.edu>
	<20060610125305.F2B5.JCARLSON@uci.edu>
	<EEEB13F7-072A-4B17-A99E-3463B53AB434@gmail.com>
	<17547.16708.105058.906604@terry.jones.tc>
Message-ID: <448B664E.3040003@canterbury.ac.nz>

Terry Jones wrote:
> That doc note should surely be removed.  Perhaps it's an artifact from some
> earlier shuffle algorithm.
> 
> The current algorithm (which is simple, well known, and which produces all
> permutations with equal probability) only calls the RNG len(x) - 1 times.

It's not a matter of how many times it's called, but
of how much internal state it has.

A generator with only N possible internal states can't
possibly result in more than N different outcomes from
any algorithm that uses its results.

--
Greg

From greg.ewing at canterbury.ac.nz  Sun Jun 11 02:47:06 2006
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Sun, 11 Jun 2006 12:47:06 +1200
Subject: [Python-Dev] Switch statement
In-Reply-To: <17547.19802.361151.705599@montanaro.dyndns.org>
References: <20060610142736.GA19094@21degrees.com.au>
	<17547.19802.361151.705599@montanaro.dyndns.org>
Message-ID: <448B680A.9020000@canterbury.ac.nz>

skip at pobox.com wrote:

>     switch raw_input("enter a, b or c: "):
>         case 'a':
>             print 'yay! an a!'
>         case 'b':
>             print 'yay! a b!'
>         case 'c':
>             print 'yay! a c!'
>         else:
>             print 'hey dummy! I said a, b or c!'

Before accepting this, we could do with some debate about the
syntax. It's not a priori clear that C-style switch/case is
the best thing to adopt.

--
Greg

From bjourne at gmail.com  Sun Jun 11 03:04:38 2006
From: bjourne at gmail.com (=?ISO-8859-1?Q?BJ=F6rn_Lindqvist?=)
Date: Sun, 11 Jun 2006 03:04:38 +0200
Subject: [Python-Dev] Pre-PEP: Allow Empty Subscript List Without
	Parentheses
In-Reply-To: <e6dsno$i7m$1@sea.gmane.org>
References: <b348a0850606090853n62a1d01fhfdef21463a06e37f@mail.gmail.com>
	<ca471dc20606090944p75aba6a9x8761d6872522d5b6@mail.gmail.com>
	<960F603A-04F4-4AE3-ABC4-A6ED58FBD893@nicko.org>
	<ca471dc20606091005s2eaa754u49ad29d5560447b0@mail.gmail.com>
	<e6cdkg$kci$1@sea.gmane.org> <448A0C16.9080301@canterbury.ac.nz>
	<448A3F75.7090703@gmail.com> <448A6377.8040902@canterbury.ac.nz>
	<Pine.LNX.4.58.0606100206550.5223@server1.LFW.org>
	<e6dsno$i7m$1@sea.gmane.org>
Message-ID: <740c3aec0606101804sbb6ca98ldb4bc8255953b895@mail.gmail.com>

> > And from a syntax perspective, it's a bad idea.  x[] is much
> > more often a typo than an intentional attempt to index a
> > zero-dimensional array.
>
> but how often is it a typo?
>
> for example, judging from c.l.python traffic, forgetting to add a return
> statement is a quite common, but I haven't seen anyone arguing that we
> deprecate the implied "return None" behaviour.

Sounds like a terrific idea. The implicit None behaviour has hit me
many times and:

something = somefunc()

is almost always an error if somefunc() doesn't have an explicit
return statement. I don't know how difficult it is to get rid of the
implicit "return None" or even if it is doable, but if it is, it
should, IMHO, be done.

-- 
mvh Bj?rn

From terry at jon.es  Sun Jun 11 03:04:38 2006
From: terry at jon.es (Terry Jones)
Date: Sun, 11 Jun 2006 03:04:38 +0200
Subject: [Python-Dev] a note in random.shuffle.__doc__ ...
In-Reply-To: Your message at 12:39:42 on Sunday, 11 June 2006
References: <1CC07EFE-45B2-4AA4-BEFB-0CFFE4CDB1F9@gmail.com>
	<20060610124332.F2B2.JCARLSON@uci.edu>
	<20060610125305.F2B5.JCARLSON@uci.edu>
	<EEEB13F7-072A-4B17-A99E-3463B53AB434@gmail.com>
	<17547.16708.105058.906604@terry.jones.tc>
	<448B664E.3040003@canterbury.ac.nz>
Message-ID: <17547.27686.67002.988677@terry.jones.tc>

>>>>> "Greg" == Greg Ewing <greg.ewing at canterbury.ac.nz> writes:
Greg> A generator with only N possible internal states can't
Greg> possibly result in more than N different outcomes from
Greg> any algorithm that uses its results.

I don't mean to pick nits, but I do find this a bit too general.

Suppose you have a RNG with a cycle length of 5. There's nothing to stop an
algorithm from taking multiple already returned values and combining them
in some (deterministic) way to generate > 5 outcomes. (Yes, those outcomes
might be more, or less, predictable but that's not the point). If you
expanded what you meant by "internal states" to include the state of the
algorithm (as well as the state of the RNG), then I'd be more inclined to
agree.

Worse, if you have multiple threads / processes using the same RNG, the
individual threads could exhibit _much_ more random behavior if individual
thread outcomes depend on multiple RNG return values (as is the case with
random.shuffle) and the scheduler is mixing things up. Here you'd have to
include the state of the operating system to claim you can't get more
outcomes than the number of internal states. But that's getting pretty far
away from what we'd ordinarily think of as the internal state of the RNG.

Terry

From skip at pobox.com  Sun Jun 11 03:28:41 2006
From: skip at pobox.com (skip at pobox.com)
Date: Sat, 10 Jun 2006 20:28:41 -0500
Subject: [Python-Dev] Switch statement
In-Reply-To: <448B680A.9020000@canterbury.ac.nz>
References: <20060610142736.GA19094@21degrees.com.au>
	<17547.19802.361151.705599@montanaro.dyndns.org>
	<448B680A.9020000@canterbury.ac.nz>
Message-ID: <17547.29129.809800.765436@montanaro.dyndns.org>


    Greg> Before accepting this, we could do with some debate about the
    Greg> syntax. It's not a priori clear that C-style switch/case is the
    Greg> best thing to adopt.

Oh sure.  That debate should probably leverage PEP 275.

Skip

From bob at redivi.com  Sun Jun 11 04:09:08 2006
From: bob at redivi.com (Bob Ippolito)
Date: Sat, 10 Jun 2006 19:09:08 -0700
Subject: [Python-Dev] Add pure python PNG writer module to stdlib?
In-Reply-To: <bbaeab100606101635w2fd30d50ib469912e3ff68d0d@mail.gmail.com>
References: <8233478f0606101528pb43e9a2h572c91f112351e62@mail.gmail.com>
	<bbaeab100606101635w2fd30d50ib469912e3ff68d0d@mail.gmail.com>
Message-ID: <8B9DE534-F3D8-4160-90FF-BB9C5AD6D022@redivi.com>


On Jun 10, 2006, at 4:35 PM, Brett Cannon wrote:

>
>
> On 6/10/06, Johann C. Rocholl <johann at rocholl.net> wrote:
> I'm working on simple module to write PNG image files in pure python.
> Adding it to the standard library would be useful for people who want
> to create images on web server installations without gd and imlib, or
> on platforms where the netpbm tools are not easily available.
>
> Does anybody find this idea interesting?
>
>
> Yes, although I wouldn't want an interface taking in strings but  
> something more like an iterator that returns each row which itself  
> contains int triples.  In other words more array-based than string  
> based.

Well you could easily make such strings (or a buffer object that  
could probably be used in place of a string) with the array module...

-bob


From tom at vector-seven.com  Sun Jun 11 04:17:14 2006
From: tom at vector-seven.com (Thomas Lee)
Date: Sun, 11 Jun 2006 12:17:14 +1000
Subject: [Python-Dev] Switch statement
In-Reply-To: <17547.19802.361151.705599@montanaro.dyndns.org>
References: <20060610142736.GA19094@21degrees.com.au>
	<17547.19802.361151.705599@montanaro.dyndns.org>
Message-ID: <20060611021714.GA17351@21degrees.com.au>

On Sat, Jun 10, 2006 at 05:53:14PM -0500, skip at pobox.com wrote:
>     * Aside from the modified Grammar file there is no documentation.
>     * There are no test cases.
>     * Can you submit a patch on SourceForge?

All have been addressed, although I'm not sure if I've covered
everywhere I need to update for the documentation:

http://sourceforge.net/tracker/index.php?func=detail&aid=1504199&group_id=5470&atid=305470

Thanks again for your feedback!

Cheers,
Tom

-- 
Tom Lee
http://www.vector-seven.com


From tim.peters at gmail.com  Sun Jun 11 05:10:42 2006
From: tim.peters at gmail.com (Tim Peters)
Date: Sat, 10 Jun 2006 23:10:42 -0400
Subject: [Python-Dev] a note in random.shuffle.__doc__ ...
In-Reply-To: <448B6538.1070509@canterbury.ac.nz>
References: <1CC07EFE-45B2-4AA4-BEFB-0CFFE4CDB1F9@gmail.com>
	<1f7befae0606101431w2746d711kda5ce3fefe145f4d@mail.gmail.com>
	<448B6538.1070509@canterbury.ac.nz>
Message-ID: <1f7befae0606102010y7c082819hd8c1b6a2c6a9da89@mail.gmail.com>

[Greg Ewing]
> But isn't the problem with the Twister that for *some
> initial states* the period could be much *shorter* than
> the theoretical maximum?
>
> Or is the probability of getting such an initial state
> too small to worry about?

The Twister's state is held in a vector of 624 32-bit words.  31 of
the bits aren't actually used, and the number of meaningful state bits
is actually 624 * 32 - 31 = 19937.

There are exactly 2 orbits in the state space under the state
transformation operation (STO):

1. A trivial orbit of length 1, consisting of the state in which all
meaningful bits are 0.  That's a fixed point for the STO.  There are
no other fixed points.

2. All not-entirely-0 states are in the other orbit, of length
2**19937 - 1.  All not-0 states are reachable from all other not-0
states, and you get back to the non-zero state you start from for the
first time after exactly 2**19937 - 1 iterations of the STO.

So as long as you don't start with the all-0 state, you're in the
second orbit, and see the advertised period (2**19937 - 1).

From Python code, it's impossible to get the all-0 state.  All
Python-visible initialization methods guarantee there's at least one
bit set in the meaningful bits of the state vector,
so the probability of not seeing a period of 2**19937 - 1 from Python
is exactly 0.

Hmm.  Well, there _is_ a way to screw yourself here, but you have to
work at it:  you can force the all-0 state by hand-crafting the right
input to random.setstate().

From ncoghlan at gmail.com  Sun Jun 11 07:49:17 2006
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 11 Jun 2006 15:49:17 +1000
Subject: [Python-Dev] Pre-PEP: Allow Empty Subscript List Without
	Parentheses
In-Reply-To: <b348a0850606101218w653537b9ke163ff1f5c1f737b@mail.gmail.com>
References: <b348a0850606090853n62a1d01fhfdef21463a06e37f@mail.gmail.com>	
	<960F603A-04F4-4AE3-ABC4-A6ED58FBD893@nicko.org>	
	<ca471dc20606091005s2eaa754u49ad29d5560447b0@mail.gmail.com>	
	<e6cdkg$kci$1@sea.gmane.org> <448A0C16.9080301@canterbury.ac.nz>	
	<448A3F75.7090703@gmail.com> <448A6377.8040902@canterbury.ac.nz>	
	<Pine.LNX.4.58.0606100206550.5223@server1.LFW.org>	
	<448AA5A6.6090803@canterbury.ac.nz> <448AD1CB.1080409@gmail.com>
	<b348a0850606101218w653537b9ke163ff1f5c1f737b@mail.gmail.com>
Message-ID: <448BAEDD.7050300@gmail.com>

Noam Raphael wrote:
> I hope that my (hopefully) better explanation made the use case more
> compelling, but I want to add two points in favour of an empty tuple:

I guess I'm really only -0 on the idea of x[] invoking x.__getitem__(), and 
allowing the class to decide whether or not to define a default value for the 
subscript. I wouldn't implement it myself, but I wouldn't object strenuously 
if Guido decided it was OK :)

For your specific use cases, though, I'd be inclined to tweak the API a bit, 
and switch to using attributes for the single-valued data:

tax_rates.income_tax = 0.18

Although the income tax rate should actually depend on the current financial 
year, since it can change over time as the government increases taxes ;)

> Why? Mental exercise is a good way to keep you mental ;)

Hehe :)

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------
             http://www.boredomandlaziness.org

From johann at rocholl.net  Sun Jun 11 07:52:26 2006
From: johann at rocholl.net (Johann C. Rocholl)
Date: Sun, 11 Jun 2006 07:52:26 +0200
Subject: [Python-Dev] Add pure python PNG writer module to stdlib?
In-Reply-To: <bbaeab100606101635w2fd30d50ib469912e3ff68d0d@mail.gmail.com>
References: <8233478f0606101528pb43e9a2h572c91f112351e62@mail.gmail.com>
	<bbaeab100606101635w2fd30d50ib469912e3ff68d0d@mail.gmail.com>
Message-ID: <8233478f0606102252h16927555t4c28414c0f5520dc@mail.gmail.com>

> > Does anybody find this idea interesting?
>
> Yes, although I wouldn't want an interface taking in strings but something
> more like an iterator that returns each row which itself contains int
> triples.  In other words more array-based than string based.

I agree that arrays would be semantically closer to the concept of
scanlines of pixel values. OTOH, I have my reasons for choosing the
string interface:

1. String is what you get from any file-like object with file.read(),
be it a PPM file on disk, or a pipe like this: os.popen('djpeg
test.jpg')
2. String is unbeatable in terms of efficiency.
3. Everybody knows how to use strings.
4. String can easily be created from other representations, for example:

>>> from array import array
>>> B = array('B')
>>> B.extend((255, 255, 255))
>>> B.tostring()
'\xff\xff\xff'

> > Does anybody think it could go into stdlib before the feature freeze for
> 2.5?
>
> Nope.  To get added to the stdlib there needs to be support from the
> community that the module is useful and best-of-breed.  Try posting on
> c.l.py and see if people pick it up and like it.  No way that is going to
> happen before b1.  But there is always 2.6 .

That's what I thought. My remote hope was that there would be
immediate consensus on python-dev about both the 'useful' and
'best-of-breed' parts. Anybody else with a +1? ;-)

Seriously, it's totally fine with me if the module doesn't make it
into 2.5, or even if it never makes it into stdlib. I'm just offering
it with some enthusiasm.

Cheers,
Johann

From ncoghlan at gmail.com  Sun Jun 11 08:16:17 2006
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 11 Jun 2006 16:16:17 +1000
Subject: [Python-Dev] 2.5 issues need resolving in a few days
In-Reply-To: <200606101515.47750.fdrake@acm.org>
References: <ee2a432c0606082333o44b7465bk45b686ca83ccc802@mail.gmail.com>	<e6esab$388$1@sea.gmane.org>
	<200606101515.47750.fdrake@acm.org>
Message-ID: <448BB531.7020104@gmail.com>

Fred L. Drake, Jr. wrote:
> On Saturday 10 June 2006 12:34, Fredrik Lundh wrote:
>  > if all undocumented modules had as much documentation and articles as
>  > ET, the world would be a lot better documented ;-)
>  >
>  > I've posted a text version of the xml.etree.ElementTree PythonDoc here:
> 
> Here's a question that we should answer before the beta:
> 
> With the introduction of the xmlcore package in Python 2.5, should we document 
> xml.etree or xmlcore.etree?  If someone installs PyXML with Python 2.5, I 
> don't think they're going to get xml.etree, which will be really confusing.  
> We can be sure that xmlcore.etree will be there.
> 
> I'd rather not propagate the pain caused by the "xml" package insanity any further.

+1 for 'xmlcore.etree'.

I don't use XML very much, and it was thoroughly confusing to find that 
published XML related code didn't work on my machine, even though the stdlib 
claimed to provide an 'xml' module (naturally, the published code needed the 
full version of PyXML, but I didn't know that at the time).

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------
             http://www.boredomandlaziness.org

From ncoghlan at gmail.com  Sun Jun 11 08:25:21 2006
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 11 Jun 2006 16:25:21 +1000
Subject: [Python-Dev] Segmentation fault in collections.defaultdict
In-Reply-To: <2e1434c10606101051w4f00ee08j5da64ff0577a3935@mail.gmail.com>
References: <2e1434c10606101051w4f00ee08j5da64ff0577a3935@mail.gmail.com>
Message-ID: <448BB751.1050300@gmail.com>

Kevin Jacobs <jacobs at bioinformed.com> wrote:
> Try this at home:
> import collections
> d=collections.defaultdict(int)
> d.iterkeys().next()  # Seg fault
> d.iteritems().next() # Seg fault
> d.itervalues().next() # Fine and dandy

This all worked fine for me in rev 46739 and 46849 (Kubuntu 6.06, gcc 4.0.3).

> Python version:
> Python 2.5a2 (trunk:46822M, Jun 10 2006, 13:14:15)
> [GCC 4.0.2 20050901 (prerelease) (SUSE Linux)] on linux2

Either something got broken and then fixed again between the two revs I tried, 
there's a problem specific to GCC 4.0.2, or there's a problem with whatever 
local modifications you have in your working copy :)

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------
             http://www.boredomandlaziness.org

From g.brandl at gmx.net  Sun Jun 11 09:38:26 2006
From: g.brandl at gmx.net (Georg Brandl)
Date: Sun, 11 Jun 2006 09:38:26 +0200
Subject: [Python-Dev] Segmentation fault in collections.defaultdict
In-Reply-To: <448BB751.1050300@gmail.com>
References: <2e1434c10606101051w4f00ee08j5da64ff0577a3935@mail.gmail.com>
	<448BB751.1050300@gmail.com>
Message-ID: <e6gdgh$ot5$1@sea.gmane.org>

Nick Coghlan wrote:
> Kevin Jacobs <jacobs at bioinformed.com> wrote:
>> Try this at home:
>> import collections
>> d=collections.defaultdict(int)
>> d.iterkeys().next()  # Seg fault
>> d.iteritems().next() # Seg fault
>> d.itervalues().next() # Fine and dandy
> 
> This all worked fine for me in rev 46739 and 46849 (Kubuntu 6.06, gcc 4.0.3).
> 
>> Python version:
>> Python 2.5a2 (trunk:46822M, Jun 10 2006, 13:14:15)
>> [GCC 4.0.2 20050901 (prerelease) (SUSE Linux)] on linux2
> 
> Either something got broken and then fixed again between the two revs I tried, 
> there's a problem specific to GCC 4.0.2, or there's a problem with whatever 
> local modifications you have in your working copy :)

Same here. I tried with the same revision as Kevin, and got no segfault
at all (using GCC 4.1.1 on Linux).

Note that "GCC 4.0.2 20050901 (prerelease)" sounds like something that's not
really been thoroughly tested ;)

Georg


From nnorwitz at gmail.com  Sun Jun 11 09:55:41 2006
From: nnorwitz at gmail.com (Neal Norwitz)
Date: Sun, 11 Jun 2006 00:55:41 -0700
Subject: [Python-Dev] crash in dict on gc collect
Message-ID: <ee2a432c0606110055i1367ac50q681bdd990d252602@mail.gmail.com>

I wonder if this is similar to Kevin's problem?  I couldn't reproduce
his problem though.  This happens with both debug and release builds.
Not sure how to reduce the test case.  pychecker was just iterating
through the byte codes.  It wasn't doing anything particularly
interesting.

./python pychecker/pychecker/checker.py Lib/encodings/cp1140.py

0x00000000004cfa18 in visit_decref (op=0x661180, data=0x0) at gcmodule.c:270
270             if (PyObject_IS_GC(op)) {
(gdb) bt
#0  0x00000000004cfa18 in visit_decref (op=0x661180, data=0x0) at gcmodule.c:270
#1  0x00000000004474ab in dict_traverse (op=0x7cdd90,  visit=0x4cf9e0
<visit_decref>, arg=0x0) at dictobject.c:1819
#2  0x00000000004cfaf0 in subtract_refs (containers=0x670240) at gcmodule.c:295
#3  0x00000000004d07fd in collect (generation=0) at gcmodule.c:790
#4  0x00000000004d0ad1 in collect_generations () at gcmodule.c:897
#5  0x00000000004d1505 in _PyObject_GC_Malloc (basicsize=56) at gcmodule.c:1332
#6  0x00000000004d1542 in _PyObject_GC_New (tp=0x64f4a0) at gcmodule.c:1342
#7  0x000000000041d992 in PyInstance_NewRaw (klass=0x2a95dffcc0,
dict=0x800e80) at classobject.c:505
#8  0x000000000041dab8 in PyInstance_New (klass=0x2a95dffcc0,
arg=0x2a95f5f9e0, kw=0x0) at classobject.c:525
#9  0x000000000041aa4e in PyObject_Call (func=0x2a95dffcc0,
arg=0x2a95f5f9e0,  kw=0x0) at abstract.c:1802
#10 0x000000000049ecd2 in do_call (func=0x2a95dffcc0,
pp_stack=0x7fbfffb5b0,  na=3, nk=0) at ceval.c:3785
#11 0x000000000049e46f in call_function (pp_stack=0x7fbfffb5b0,
oparg=3) at ceval.c:3597

From bob at redivi.com  Sun Jun 11 11:11:34 2006
From: bob at redivi.com (Bob Ippolito)
Date: Sun, 11 Jun 2006 02:11:34 -0700
Subject: [Python-Dev] Add pure python PNG writer module to stdlib?
In-Reply-To: <8233478f0606102252h16927555t4c28414c0f5520dc@mail.gmail.com>
References: <8233478f0606101528pb43e9a2h572c91f112351e62@mail.gmail.com>
	<bbaeab100606101635w2fd30d50ib469912e3ff68d0d@mail.gmail.com>
	<8233478f0606102252h16927555t4c28414c0f5520dc@mail.gmail.com>
Message-ID: <E5467DF6-3164-45AE-B33B-E817E9FE219C@redivi.com>

On Jun 10, 2006, at 10:52 PM, Johann C. Rocholl wrote:

>>> Does anybody think it could go into stdlib before the feature  
>>> freeze for
>> 2.5?
>>
>> Nope.  To get added to the stdlib there needs to be support from the
>> community that the module is useful and best-of-breed.  Try  
>> posting on
>> c.l.py and see if people pick it up and like it.  No way that is  
>> going to
>> happen before b1.  But there is always 2.6 .
>
> That's what I thought. My remote hope was that there would be
> immediate concensus on python-dev about both the 'useful' and
> 'best-of-breed' parts. Anybody else with a +1? ;-)
>
> Seriously, it's totally fine with me if the module doesn't make it
> into 2.5, or even if it never makes it into stdlib. I'm just offering
> it with some enthusiasm.

The best way to do this would be to make it available as its own  
package. Give it a setup.py, stick it on CheeseShop, etc.

For performance and memory usage reasons it would probably make sense  
to take an iterator that returns a scanline at a time. The current  
implementation does a lot more allocations than it needs to (full  
image, then one str per scanline). It also asserts type is str, when  
a buffer or mmap object would work perfectly well otherwise. If  
reading from a file or something you could skip the full allocation  
and a lot of memcpy by reading a scanline at a time.
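
A scanline-at-a-time feed could be as simple as the following sketch (the
writer call in the trailing comment is hypothetical, only to show the
intended use):

    def iter_scanlines(fileobj, width, bytes_per_pixel=3):
        """Yield one raw RGB scanline at a time from a file-like object."""
        scanline_size = width * bytes_per_pixel
        while True:
            row = fileobj.read(scanline_size)
            if len(row) < scanline_size:
                break
            yield row

    # Hypothetical usage, avoiding a full in-memory copy of the image:
    #     write_png(out, width, height, iter_scanlines(rgb_data, width))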

I'd also like to see RGBA support as well. Often the reason for  
choosing png over other lossless formats is its support for alpha.  
For your use case it's irrelevant, but there are many use cases that  
need the alpha channel.

But to reiterate, further discussion of this really belongs on c.l.py  
for now...

-bob


From fredrik at pythonware.com  Sun Jun 11 12:09:39 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Sun, 11 Jun 2006 12:09:39 +0200
Subject: [Python-Dev] 2.5 issues need resolving in a few days
In-Reply-To: <200606101515.47750.fdrake@acm.org>
References: <ee2a432c0606082333o44b7465bk45b686ca83ccc802@mail.gmail.com>	<e6esab$388$1@sea.gmane.org>
	<200606101515.47750.fdrake@acm.org>
Message-ID: <e6gq53$mqd$1@sea.gmane.org>

Fred L. Drake, Jr. wrote:

> With the introduction of the xmlcore package in Python 2.5, should we document 
> xml.etree or xmlcore.etree?  If someone installs PyXML with Python 2.5, I 
> don't think they're going to get xml.etree, which will be really confusing.  
> We can be sure that xmlcore.etree will be there.

I think it would be unfortunate if an external, mostly unmaintained 
package could claim absolute ownership of the xml package root.

how about tweaking the xml loader to map "xml.foo" to "_xmlplus.foo" 
only if that subpackage really exists ?

</F>
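
Fredrik's suggestion might look something like this sketch (a hypothetical
helper using only the stdlib imp module, not actual 2.5 code):

    import imp

    def _xml_package_for(subname):
        """Return '_xmlplus' if it provides the given submodule, else 'xmlcore'."""
        try:
            import _xmlplus
            imp.find_module(subname, _xmlplus.__path__)
            return '_xmlplus'
        except ImportError:
            return 'xmlcore'

    # e.g. pick whichever package actually has an etree subpackage:
    #     etree = __import__(_xml_package_for('etree') + '.etree', {}, {}, ['etree'])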


From s.percivall at chello.se  Sun Jun 11 14:21:53 2006
From: s.percivall at chello.se (Simon Percivall)
Date: Sun, 11 Jun 2006 14:21:53 +0200
Subject: [Python-Dev] 2.5 issues need resolving in a few days
In-Reply-To: <e6gq53$mqd$1@sea.gmane.org>
References: <ee2a432c0606082333o44b7465bk45b686ca83ccc802@mail.gmail.com>	<e6esab$388$1@sea.gmane.org>
	<200606101515.47750.fdrake@acm.org> <e6gq53$mqd$1@sea.gmane.org>
Message-ID: <94C9DB0F-E8A8-40AB-B922-6BF436344EC0@chello.se>

On 11 jun 2006, at 12.09, Fredrik Lundh wrote:
> Fred L. Drake, Jr. wrote:
>
>> With the introduction of the xmlcore package in Python 2.5, should  
>> we document
>> xml.etree or xmlcore.etree?  If someone installs PyXML with Python  
>> 2.5, I
>> don't think they're going to get xml.etree, which will be really  
>> confusing.
>> We can be sure that xmlcore.etree will be there.
>
> I think it would be unfortunate if an external, mostly unmaintained
> package could claim absolute ownership of the xml package root.
>
> how about tweaking the xml loader to map "xml.foo" to "_xmlplus.foo"
> only if that subpackage really exists ?

I'm a bit confused by what the problem is. I thought this was all
handled like it should be now.

     >>> import xml.etree
     >>> xml.etree
     <module 'xml.etree' from '.../lib/python2.5/xmlcore/etree/ 
__init__.pyc'>
     >>> import xml.sax
     >>> xml.sax
     <module 'xml.sax' from '.../lib/python2.5/site-packages/_xmlplus/ 
sax/__init__.pyc'>

It picks up modules from both places.

//Simon

From rasky at develer.com  Sun Jun 11 15:35:46 2006
From: rasky at develer.com (Giovanni Bajo)
Date: Sun, 11 Jun 2006 15:35:46 +0200
Subject: [Python-Dev] UUID module
References: <5.1.1.6.0.20060307103122.042c4ae8@mail.telecommunity.com>
	<Pine.LNX.4.58.0606090221330.5223@server1.LFW.org>
Message-ID: <020a01c68d5b$f2130170$3db72997@bagio>

Ka-Ping Yee <python-dev at zesty.ca> wrote:

> Quite a few people have expressed interest in having UUID
> functionality in the standard library, and previously on this
> list some suggested possibly using the uuid.py module i wrote:
>
>     http://zesty.ca/python/uuid.py


Some comments on the code:

> for dir in ['', r'c:\windows\system32', r'c:\winnt\system32']:

Can we get rid of these absolute paths? Something like this should suffice:

>>> from ctypes import *
>>> buf = create_string_buffer(4096)
>>> windll.kernel32.GetSystemDirectoryA(buf, 4096)
17
>>> buf.value.decode("mbcs")
u'C:\\WINNT\\system32'


>  for function in functions:
>        try:
>            _node = function()
>        except:
>            continue

This also hides typos and whatnot. I guess it's better if each function
catches its own exceptions, and either returns None or raises a common
exception (like a class _GetNodeError(RuntimeError)) which is then caught.
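
Something along these lines would keep typos from being swallowed -- a
rough sketch, with helper names that are only illustrative, not the
module's actual ones:

    class _GetNodeError(RuntimeError):
        """Raised by an individual *_getnode() helper when it can't deliver."""

    def _ctypes_getnode():
        try:
            import ctypes                 # may be missing on this platform
        except ImportError:
            raise _GetNodeError()
        # ... real hardware-address lookup would go here ...
        raise _GetNodeError()             # placeholder in this sketch

    def _random_getnode():
        import random
        # 48-bit random node with the multicast bit set, per RFC 4122
        return random.randrange(1 << 48) | 0x010000000000

    def getnode():
        for function in (_ctypes_getnode, _random_getnode):
            try:
                return function()
            except _GetNodeError:
                continue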

Giovanni Bajo


From fredrik at pythonware.com  Sun Jun 11 16:02:45 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Sun, 11 Jun 2006 16:02:45 +0200
Subject: [Python-Dev] 2.5 issues need resolving in a few days
In-Reply-To: <94C9DB0F-E8A8-40AB-B922-6BF436344EC0@chello.se>
References: <ee2a432c0606082333o44b7465bk45b686ca83ccc802@mail.gmail.com>	<e6esab$388$1@sea.gmane.org>	<200606101515.47750.fdrake@acm.org>
	<e6gq53$mqd$1@sea.gmane.org>
	<94C9DB0F-E8A8-40AB-B922-6BF436344EC0@chello.se>
Message-ID: <e6h7q6$n01$1@sea.gmane.org>

Simon Percivall wrote:

>> how about tweaking the xml loader to map "xml.foo" to "_xmlplus.foo"
>> only if that subpackage really exists ?
> 
> I'm a bit confused by what the problem is. I thought this was all
> handled like it should be now.

that's how I thought things were done, but then I read Fred's post, and 
looked at the source code, and didn't see this line:

             _xmlplus.__path__.extend(xmlcore.__path__)

or-maybe-someone's-been-using-the-time-machine-ly yrs /F


From rubys at intertwingly.net  Sun Jun 11 22:26:29 2006
From: rubys at intertwingly.net (Sam Ruby)
Date: Sun, 11 Jun 2006 16:26:29 -0400
Subject: [Python-Dev] sgmllib Comments
Message-ID: <448C7C75.70703@intertwingly.net>

Planet is a feed aggregator written in Python.  It depends heavily on 
SGMLLib.  A recent bug report turned out to be a deficiency in sgmllib, 
and I've submitted a test case and a patch[1] (use or discard the patch, 
it is the test that I care about).

While looking around, a few things surfaced.  For starters, it would 
seem that the version of sgmllib in SVN HEAD will selectively unescape 
certain character references that might appear in an attribute.  I say 
selectively, as:

  * it will unescape  &amp;
  * it won't unescape &copy;
  * it will unescape  &#38;
  * it won't unescape &#x26;
  * it will unescape  &#146;
  * it won't unescape &#8217;

There are a number of issues here.  While not unescaping anything is 
suboptimal, at least the recipient is aware of exactly which characters 
have been unescaped (i.e., none of them).  The proposed solution makes 
it impossible for the recipient to know which characters are unescaped, 
and which are original.  (Note: feeds often contain such abominations as 
&amp;copy; which the new code will treat indistinguishably from &copy;)

Additionally, there is a unicode issue here - one that is shared by 
handle_charref, but at least that method is overrideable.  If unescaping 
remains, do it for hex character references and for values greater than
8 bits, i.e., use unichr instead of chr if the value is greater than 127.
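
A sketch of the suggested conversion (not the actual sgmllib code):

    def _convert_charref(ref):
        # ref is the text between '&#' and ';', e.g. '38', '8217' or 'x26'
        if ref[:1] in ('x', 'X'):
            n = int(ref[1:], 16)
        else:
            n = int(ref)
        if n > 127:
            return unichr(n)
        return chr(n)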

- Sam Ruby

[1] http://tinyurl.com/j4a6n

From talin at acm.org  Sun Jun 11 23:15:13 2006
From: talin at acm.org (Talin)
Date: Sun, 11 Jun 2006 14:15:13 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <448B680A.9020000@canterbury.ac.nz>
References: <20060610142736.GA19094@21degrees.com.au>	<17547.19802.361151.705599@montanaro.dyndns.org>
	<448B680A.9020000@canterbury.ac.nz>
Message-ID: <448C87E1.6070801@acm.org>

Greg Ewing wrote:
> skip at pobox.com wrote:
> 
> 
>>    switch raw_input("enter a, b or c: "):
>>        case 'a':
>>            print 'yay! an a!'
>>        case 'b':
>>            print 'yay! a b!'
>>        case 'c':
>>            print 'yay! a c!'
>>        else:
>>            print 'hey dummy! I said a, b or c!'
> 
> 
> Before accepting this, we could do with some debate about the
> syntax. It's not a priori clear that C-style switch/case is
> the best thing to adopt.

Since you don't have the 'fall-through' behavior of C, I would also 
assume that you could associate more than one value with a case, i.e.:

    case 'a', 'b', 'c':
       ...

It seems to me that the value of a 'switch' statement is that it is a 
computed jump - that is, instead of having to iteratively test a bunch 
of alternatives, you can directly jump to the code for a specific value.

I can see this being very useful for parser generators and state machine 
code. At the moment, similar things can be done with hash tables of 
functions, but those have a number of limitations, such as the fact that 
they can't access local variables.
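
The dict-of-functions workaround looks roughly like this (a sketch; the
point being that the handlers can't see the caller's locals):

    def handle_a():
        print 'yay! an a!'

    def handle_b():
        print 'yay! a b!'

    def handle_default():
        print 'hey dummy!'

    dispatch = {'a': handle_a, 'b': handle_b}

    def process(ch):
        count = 42                        # invisible to the handlers
        dispatch.get(ch, handle_default)()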

I don't have any specific syntax proposals, but I notice that the suite 
that follows the switch statement is not a normal suite, but a 
restricted one, and I am wondering if we could come up with a syntax 
that avoids having a special suite.

Here's an (ugly) example, not meant as a serious proposal:

    select (x) when 'a':
       ...
    when 'b', 'c':
       ...
    else:
       ...

The only real difference between this and an if-else chain is that the 
compiler knows that all of the test expressions are constants and can be 
hashed at compile time.

-- Talin

From aahz at pythoncraft.com  Mon Jun 12 00:11:32 2006
From: aahz at pythoncraft.com (Aahz)
Date: Sun, 11 Jun 2006 15:11:32 -0700
Subject: [Python-Dev] sgmllib Comments
In-Reply-To: <448C7C75.70703@intertwingly.net>
References: <448C7C75.70703@intertwingly.net>
Message-ID: <20060611221132.GA16080@panix.com>

On Sun, Jun 11, 2006, Sam Ruby wrote:
>
> Planet is a feed aggregator written in Python.  It depends heavily on 
> SGMLLib.  A recent bug report turned out to be a deficiency in sgmllib, 
> and I've submitted a test case and a patch[1] (use or discard the patch, 
> it is the test that I care about).
> 
> [1] http://tinyurl.com/j4a6n

When providing links to SF, please use the python.org tinyurl equivalent
to ensure that people can easily see the bug/patch number:

http://www.python.org/sf?id=1504333
-- 
Aahz (aahz at pythoncraft.com)           <*>         http://www.pythoncraft.com/

"I saw `cout' being shifted "Hello world" times to the left and stopped
right there."  --Steve Gonedes

From fabiofz at gmail.com  Mon Jun 12 00:31:33 2006
From: fabiofz at gmail.com (Fabio Zadrozny)
Date: Sun, 11 Jun 2006 19:31:33 -0300
Subject: [Python-Dev] Import semantics
Message-ID: <cfb578b20606111531t6806d5c9kd35fd8ba29638174@mail.gmail.com>

Python and Jython import semantics differ on how sub-packages should be
accessed after importing some module:

Jython 2.1 on java1.5.0 (JIT: null)
Type "copyright", "credits" or "license" for more information.
>>> import xml
>>> xml.dom
<module xml.dom at 10340434>

Python 2.4.2 (#67, Sep 28 2005, 12:41:11) [MSC v.1310 32 bit (Intel)] on
win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import xml
>>> xml.dom
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
AttributeError: 'module' object has no attribute 'dom'
>>> from xml.dom import pulldom
>>> xml.dom
<module 'xml.dom' from 'C:\bin\Python24\lib\xml\dom\__init__.pyc'>

Note that in Jython importing a module makes all subpackages beneath it
available, whereas in python, only the tokens available in __init__.py are
accessible, but if you do load the module later even if not getting it
directly into the namespace, it gets accessible too -- this seems more like
something unexpected to me -- I would expect it to be available only if I
did some "import xml.dom" at some point.

My problem is that in Pydev, in static analysis, I would only get the tokens
available for actually imported modules, but that's not true for Jython, and
I'm not sure if the current behaviour in Python was expected.

So... which would be the right semantics for this?

Thanks,

Fabio

From skip at pobox.com  Mon Jun 12 00:36:11 2006
From: skip at pobox.com (skip at pobox.com)
Date: Sun, 11 Jun 2006 17:36:11 -0500
Subject: [Python-Dev] Switch statement
In-Reply-To: <448C87E1.6070801@acm.org>
References: <20060610142736.GA19094@21degrees.com.au>
	<17547.19802.361151.705599@montanaro.dyndns.org>
	<448B680A.9020000@canterbury.ac.nz> <448C87E1.6070801@acm.org>
Message-ID: <17548.39643.646760.994634@montanaro.dyndns.org>


    talin> Since you don't have the 'fall-through' behavior of C, I would
    talin> also assume that you could associate more than one value with a
    talin> case, i.e.:

    talin>     case 'a', 'b', 'c':
    talin>        ...

As Andrew Koenig pointed out, that's not discussed in the PEP.  Given the
various examples though, I would have to assume the above is equivalent to

    case ('a', 'b', 'c'):
        ...

since in all cases the PEP implies a single expression.

    talin> It seems to me that the value of a 'switch' statement is that it
    talin> is a computed jump - that is, instead of having to iteratively
    talin> test a bunch of alternatives, you can directly jump to the code
    talin> for a specific value.

I agree, but that of course limits the expressions to constants which can be
evaluated at compile-time as I indicated in my previous mail.  Also, as
someone else pointed out, that probably prevents something like

    START_TOKEN = '<'
    END_TOKEN = '>'

    ...

    switch expr:
        case START_TOKEN:
            ...
        case END_TOKEN:
            ...

The PEP states that the case clauses must accept constants, but the sample
implementation allows arbitrary expressions.  If we assume that the case
expressions need not be constants, does that force the compiler to evaluate
the case expressions in the order given in the file?  To make my dumb
example from yesterday even dumber:

    def f():
        switch raw_input("enter b, d or f:"):
            case incr('a'):
                print 'yay! a b!'
            case incr('b'):
                print 'yay! a d!'
            case incr('c'):
                print 'yay! an f!'
            else:
                print 'hey dummy! I said b, d or f!'

    _n = 0
    def incr(c):
        global _n
        try:
            return chr(ord(c)+1+_n)
        finally:
            _n += 1
            print _n

The cases must be evaluated in the order they are written for the example to
work properly.

The tension between efficient run-time and Python's highly dynamic nature
would seem to prevent the creation of a switch statement that will satisfy
all demands.

Skip

From fredrik at pythonware.com  Mon Jun 12 00:44:50 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Mon, 12 Jun 2006 00:44:50 +0200
Subject: [Python-Dev] Switch statement
In-Reply-To: <448C87E1.6070801@acm.org>
References: <20060610142736.GA19094@21degrees.com.au>	<17547.19802.361151.705599@montanaro.dyndns.org>	<448B680A.9020000@canterbury.ac.nz>
	<448C87E1.6070801@acm.org>
Message-ID: <e6i6d1$g40$1@sea.gmane.org>

Talin wrote:

> I don't have any specific syntax proposals, but I notice that the suite 
> that follows the switch statement is not a normal suite, but a 
> restricted one, and I am wondering if we could come up with a syntax 
> that avoids having a special suite.

don't have K&R handy, but I'm pretty sure they put switch and case at 
the same level (just like if/else), thus eliminating the need for silly 
special suites.

> The only real difference between this and an if-else chain is that the 
> compiler knows that all of the test expressions are constants and can be 
> hashed at compile time.

the compiler can of course figure that out also for if/elif/else
statements, by inspecting the AST.  the only advantage for switch/case is
user syntax...

</F>


From talin at acm.org  Mon Jun 12 01:18:59 2006
From: talin at acm.org (Talin)
Date: Sun, 11 Jun 2006 16:18:59 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <17548.39643.646760.994634@montanaro.dyndns.org>
References: <20060610142736.GA19094@21degrees.com.au>
	<17547.19802.361151.705599@montanaro.dyndns.org>
	<448B680A.9020000@canterbury.ac.nz> <448C87E1.6070801@acm.org>
	<17548.39643.646760.994634@montanaro.dyndns.org>
Message-ID: <448CA4E3.3080406@acm.org>

skip at pobox.com wrote:
>     talin> Since you don't have the 'fall-through' behavior of C, I would
>     talin> also assume that you could associate more than one value with a
>     talin> case, i.e.:
> 
>     talin>     case 'a', 'b', 'c':
>     talin>        ...
> 
> As Andrew Koenig pointed out, that's not discussed in the PEP.  Given the
> various examples though, I would have to assume the above is equivalent to
> 
>     case ('a', 'b', 'c'):
>         ...

I had recognized that ambiguity as well, but chose not to mention it :)

> since in all cases the PEP implies a single expression.
> 
>     talin> It seems to me that the value of a 'switch' statement is that it
>     talin> is a computed jump - that is, instead of having to iteratively
>     talin> test a bunch of alternatives, you can directly jump to the code
>     talin> for a specific value.
> 
> I agree, but that of course limits the expressions to constants which can be
> evaluated at compile-time as I indicated in my previous mail.  Also, as
> someone else pointed out, that probably prevents something like
> 
>     START_TOKEN = '<'
>     END_TOKEN = '>'
> 
>     ...
> 
>     switch expr:
>         case START_TOKEN:
>             ...
>         case END_TOKEN:
>             ...

Here's another ugly thought experiment, not meant as a serious proposal; 
its intent is to stimulate ideas by breaking preconceptions. Suppose we
take the notion of a computed jump literally:

    def myfunc( x ):
       goto dispatcher[ x ]

       section s1:
          ...

       section s2:
          ...

    dispatcher = {'a': myfunc.s1, 'b': myfunc.s2}

No, I am *not* proposing that Python add a goto statement. What I am 
really talking about is the idea that you could (somehow) use a 
dictionary as the input to a control construct.

In the above example, rather than allowing arbitrary constant 
expressions as cases, we would require the compiler to generate a set of 
opaque tokens representing various code fragments. These fragments would 
be exactly like inner functions, except that they don't have their own 
scope (and therefore have no parameters either).

Since the jump labels are symbols generated by the compiler, there's no 
ambiguity about when they get evaluated.

The above example also allows these labels to be accessed externally 
from the function by defining attributes on the function object itself 
which correspond to the code fragments.

So in the example, the dictionary which associates specific values with 
executable sections is created once, at runtime, but before the first 
time that myfunc is called.

Of course, this is quite a bit clumsier than a switch statement, which 
is why I say its not a serious proposal.

-- Talin

From blais at furius.ca  Mon Jun 12 01:59:47 2006
From: blais at furius.ca (Martin Blais)
Date: Sun, 11 Jun 2006 19:59:47 -0400
Subject: [Python-Dev] subprocess.Popen(.... stdout=IGNORE, ...)
Message-ID: <8393fff0606111659j16e0ed73wf16d3f7e0892d268@mail.gmail.com>

In the subprocess module, by default the files handles in the child
are inherited from the parent.  To ignore a child's output, I can use
the stdout or stderr options to send the output to a pipe::

   p = Popen(command, stdout=PIPE, stderr=PIPE)

However, this is sensitive to the buffer deadlock problem, where for
example the buffer for stderr might become full and a deadlock occurs
because the child is blocked on writing to stderr and the parent is
blocked on reading from stdout or waiting for the child to finish.

For example, using this command will cause deadlock::

   call('cat /boot/vmlinuz'.split(), stdout=PIPE, stderr=PIPE)

Popen.communicate() implements a solution using either select() or
multiple threads (under Windows) to read from the pipes, and returns
the strings as a result.  It works out like this::

   p = Popen(command, stdout=PIPE, stderr=PIPE)
   output, errors = p.communicate()
   if p.returncode != 0:
        ?

Now, as a user of the subprocess module, sometimes I just want to
call some child process and simply ignore its output, and to do so I
am forced to use communicate() as above and wastefully capture and
ignore the strings.  This is actually quite a common use case.  "Just
run something, and check the return code".  Right now, in order to do
this without polluting the parent's output, you cannot use the call()
convenience (or is there another way?).

A workaround that works under UNIX is to do this::

   FNULL = open('/dev/null', 'w')
   returncode = call(command, stdout=FNULL, stderr=FNULL)


Some feedback requested, I'd like to know what you think:

1. Would it not be nice to add a IGNORE constant to subprocess.py
   that would do this automatically?, i.e. ::

     returncode = call(command, stdout=IGNORE, stderr=IGNORE)

   Rather than capture and accumulate the output, it would find an
   appropriate OS-specific way to ignore the output (the /dev/null file
   above works well under UNIX, how would you do this under Windows?
   I'm sure we can find something.)

2. call() should be modified to not be sensitive to the deadlock
   problem, since its interface provides no way to return the
   contents of the output.  The IGNORE value provides a possible
   solution for this.

3. With the /dev/null file solution, the following code actually
   works without deadlock, because stderr is never blocked on writing
   to /dev/null::

     p = Popen(command, stdout=PIPE, stderr=IGNORE)
     text = p.stdout.read()
     retcode = p.wait()

   Any idea how this idiom could be supported using a more portable
   solution (i.e. how would I make this idiom under Windows, is there
   some equivalent to /dev/null)?
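
In case it helps the discussion, here is a sketch of what an IGNORE
built on os.devnull could look like (os.devnull is new in 2.4 and is
'/dev/null' on UNIX, 'nul' on Windows; IGNORE itself is hypothetical,
not something subprocess provides today)::

   import os
   from subprocess import Popen, PIPE, call

   IGNORE = open(os.devnull, 'w')

   # "just run it and check the return code"
   returncode = call('cat /boot/vmlinuz'.split(), stdout=IGNORE, stderr=IGNORE)

   # read stdout, discard stderr, no deadlock risk
   p = Popen('cat /boot/vmlinuz'.split(), stdout=PIPE, stderr=IGNORE)
   text = p.stdout.read()
   retcode = p.wait()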

From python-dev at zesty.ca  Mon Jun 12 02:24:53 2006
From: python-dev at zesty.ca (Ka-Ping Yee)
Date: Sun, 11 Jun 2006 19:24:53 -0500 (CDT)
Subject: [Python-Dev] UUID module
In-Reply-To: <5.1.1.6.0.20060610123421.01f62e60@mail.telecommunity.com>
References: <e6eorl$o0d$1@sea.gmane.org>
	<5.1.1.6.0.20060307103122.042c4ae8@mail.telecommunity.com>
	<Pine.LNX.4.58.0606090221330.5223@server1.LFW.org>
	<e6baf3$8fl$1@sea.gmane.org>
	<9e804ac0606090626o20008650o93a211da5e6f6f64@mail.gmail.com>
	<Pine.LNX.4.58.0606100804170.5223@server1.LFW.org>
	<e6enei$kpo$1@sea.gmane.org>
	<Pine.LNX.4.58.0606101016530.5223@server1.LFW.org>
	<e6eorl$o0d$1@sea.gmane.org>
	<5.1.1.6.0.20060610123421.01f62e60@mail.telecommunity.com>
Message-ID: <Pine.LNX.4.58.0606111918210.5223@server1.LFW.org>

Thomas Heller wrote:
> I don't know if this is the uuidgen you're talking about, but
> on linux there is libuuid:

Thanks!

Okay, that's in there now.  Have a look at http://zesty.ca/python/uuid.py .

Phillip J. Eby wrote:
> By the way, I'd love to see a uuid.uuid() constructor that simply calls the
> platform-specific default UUID constructor (CoCreateGuid or uuidgen(2)),

I've added code to make uuid1() use uuid_generate_time() if available
and uuid4() use uuid_generate_random() if available.  These functions
are provided on Mac OS X (in libc) and on Linux (in libuuid).  Does
that work for you?

I'm using the Windows UUID generation calls (UuidCreate and
UuidCreateSequential in rpcrt4) only to get the hardware address,
not to make UUIDs, because they yield results that aren't compliant
with RFC 4122.  Even worse, they actually have the variant bits set
to say that they are RFC 4122, but they can have an illegal version
number.  If there are better alternatives on Windows, i'm happy to
use them.


-- ?!ng

From python-dev at zesty.ca  Mon Jun 12 02:26:15 2006
From: python-dev at zesty.ca (Ka-Ping Yee)
Date: Sun, 11 Jun 2006 19:26:15 -0500 (CDT)
Subject: [Python-Dev] Should hex() yield 'L' suffix for long numbers?
Message-ID: <Pine.LNX.4.58.0606111925410.5223@server1.LFW.org>

I did this earlier:

    >>> hex(9999999999999)
    '0x9184e729fffL'

and found it a little jarring, because i feel there's been a general
trend toward getting rid of the 'L' suffix in Python.

Literal long integers don't need an L anymore; they're automatically
made into longs if the number is too big.  And while the repr() of
a long retains the L on the end, the str() of a long does not, and
i rather like that.

So i kind of expected that hex() would not include the L either.
I see its main job as just giving me the hex digits (in fact, for
Python 3000 i'd prefer even to drop the '0x' as well), and the L
seems superfluous and distracting.

What do you think?  Is Python 2.5 a reasonable time to drop this L?


-- ?!ng

From greg.ewing at canterbury.ac.nz  Mon Jun 12 02:27:49 2006
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Mon, 12 Jun 2006 12:27:49 +1200
Subject: [Python-Dev] a note in random.shuffle.__doc__ ...
In-Reply-To: <17547.27686.67002.988677@terry.jones.tc>
References: <1CC07EFE-45B2-4AA4-BEFB-0CFFE4CDB1F9@gmail.com>
	<20060610124332.F2B2.JCARLSON@uci.edu>
	<20060610125305.F2B5.JCARLSON@uci.edu>
	<EEEB13F7-072A-4B17-A99E-3463B53AB434@gmail.com>
	<17547.16708.105058.906604@terry.jones.tc>
	<448B664E.3040003@canterbury.ac.nz>
	<17547.27686.67002.988677@terry.jones.tc>
Message-ID: <448CB505.2040304@canterbury.ac.nz>

Terry Jones wrote:

> Suppose you have a RNG with a cycle length of 5. There's nothing to stop an
> algorithm from taking multiple already returned values and combining them
> in some (deterministic) way to generate > 5 outcomes.

No, it's not. As long as the RNG output is the only input to
the algorithm, and the algorithm is deterministic, it is
not possible to get more than N different outcomes. It doesn't
matter what the algorithm does with the input.

> If you
> expanded what you meant by "internal states" to include the state of the
> algorithm (as well as the state of the RNG), then I'd be more inclined to
> agree.

If the algorithm can start out with more than one initial
state, then the RNG is not the only input.

> Worse, if you have multiple threads / processes using the same RNG, the
> individual threads could exhibit _much_ more random behavior

Then you haven't got a deterministic algorithm.

--
Greg

From greg.ewing at canterbury.ac.nz  Mon Jun 12 02:30:26 2006
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Mon, 12 Jun 2006 12:30:26 +1200
Subject: [Python-Dev] Pre-PEP: Allow Empty Subscript List Without
 Parentheses
In-Reply-To: <740c3aec0606101804sbb6ca98ldb4bc8255953b895@mail.gmail.com>
References: <b348a0850606090853n62a1d01fhfdef21463a06e37f@mail.gmail.com>
	<ca471dc20606090944p75aba6a9x8761d6872522d5b6@mail.gmail.com>
	<960F603A-04F4-4AE3-ABC4-A6ED58FBD893@nicko.org>
	<ca471dc20606091005s2eaa754u49ad29d5560447b0@mail.gmail.com>
	<e6cdkg$kci$1@sea.gmane.org> <448A0C16.9080301@canterbury.ac.nz>
	<448A3F75.7090703@gmail.com> <448A6377.8040902@canterbury.ac.nz>
	<Pine.LNX.4.58.0606100206550.5223@server1.LFW.org>
	<e6dsno$i7m$1@sea.gmane.org>
	<740c3aec0606101804sbb6ca98ldb4bc8255953b895@mail.gmail.com>
Message-ID: <448CB5A2.3030502@canterbury.ac.nz>

Björn Lindqvist wrote:

> I don't know how difficult it is to get rid of the
> implicit "return None" or even if it is doable, but if it is, it
> should, IMHO, be done.

It's been proposed before, and the conclusion was that
it would cause more problems than it would solve.

(Essentially it would require returning some object
that raised an exception when anything at all was
done to it, but such an object would cause debuggers
and other introspective code to choke.)

--
Greg

From fdrake at acm.org  Mon Jun 12 02:39:37 2006
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Sun, 11 Jun 2006 20:39:37 -0400
Subject: [Python-Dev] sgmllib Comments
In-Reply-To: <448C7C75.70703@intertwingly.net>
References: <448C7C75.70703@intertwingly.net>
Message-ID: <200606112039.37834.fdrake@acm.org>

On Sunday 11 June 2006 16:26, Sam Ruby wrote:
 > Planet is a feed aggregator written in Python.  It depends heavily on
 > SGMLLib.  A recent bug report turned out to be a deficiency in sgmllib,
 > and I've submitted a test case and a patch[1] (use or discard the patch,
 > it is the test that I care about).

And it's a nice aggregator to use, indeed!

 > While looking around, a few things surfaced.  For starters, it would
 > seem that the version of sgmllib in SVN HEAD will selectively unescape
 > certain character references that might appear in an attribute.  I say
 > selectively, as:
 >
 >   * it will unescape  &amp;
 >   * it won't unescape &copy;
 >   * it will unescape  &#38;
 >   * it won't unescape &#x26;
 >   * it will unescape  &#146;
 >   * it won't unescape &#8217;

And just why would you use sgmllib to handle RSS or ATOM feeds?  Neither is 
defined in terms of SGML.  The sgmllib documentation also notes that it isn't 
really a fully general SGML parser (it isn't), but that it exists primarily 
as a foundation for htmllib.

 > There are a number of issues here.  While not unescaping anything is
 > suboptimal, at least the recipient is aware of exactly which characters
 > have been unescaped (i.e., none of them).  The proposed solution makes
 > it impossible for the recipient to know which characters are unescaped,
 > and which are original.  (Note: feeds often contain such abominations as
 > &amp;copy; which the new code will treat indistinguishably from &copy;)

My suspicion is that the "right" thing to do at the sgmllib level is to 
categorize the markup and call a method depending on what the entity 
reference is, and let that handle whatever it is.  For SGML, that means we 
have things like &name; (entity references), &#123; (character references), 
and that's it.  &#x123; isn't legal SGML under any circumstance; 
the "&#x<number>;" syntax was introduced with XML.

 > Additionally, there is a unicode issue here - one that is shared by
 > handle_charref, but at least that method is overrideable.  If unescaping
 > remains, do it for hex character references and for values greather than
 > 8-bits, i.e., use unichr instead of chr if the value is greater than 127.

For SGML, it's worse than that, since the document character set is defined in 
the SGML declaration, which is a far hairier beast than an XML 
declaration.  :-)

It really sounds like sgmllib is the wrong foundation for this.  While the 
module has some questionable behaviors, none of them are significant in its
intended context (support for htmllib).  Now, I understand that
RSS has historical issues, with HTML-as-practiced getting embedded as payload 
data with various flavors of escaping applied, and I'm not an expert in the 
details of that.  Have you looked at HTMLParser as an alternate to sgmllib?  
It has better support for XHTML constructs.


  -Fred

-- 
Fred L. Drake, Jr.   <fdrake at acm.org>

From greg.ewing at canterbury.ac.nz  Mon Jun 12 03:03:34 2006
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Mon, 12 Jun 2006 13:03:34 +1200
Subject: [Python-Dev] Switch statement
In-Reply-To: <448C87E1.6070801@acm.org>
References: <20060610142736.GA19094@21degrees.com.au>
	<17547.19802.361151.705599@montanaro.dyndns.org>
	<448B680A.9020000@canterbury.ac.nz> <448C87E1.6070801@acm.org>
Message-ID: <448CBD66.80002@canterbury.ac.nz>

Talin wrote:

> Since you don't have the 'fall-through' behavior of C, I would also 
> assume that you could associate more than one value with a case, i.e.:
> 
>     case 'a', 'b', 'c':
>        ...

Multiple values could be written

   case 'a':
   case 'b':
   case 'c':
     ...

without conflicting with the no-fallthrough semantics, since
a do-nothing case can be written as

   case 'd':
     pass

> I don't have any specific syntax proposals, but I notice that the suite 
> that follows the switch statement is not a normal suite, but a 
> restricted one,

I don't see that as a problem. And all the proposed syntaxes
I've ever seen for putting the cases at the same level as
the switch look ugly to me.

--
Greg

From greg.ewing at canterbury.ac.nz  Mon Jun 12 03:06:41 2006
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Mon, 12 Jun 2006 13:06:41 +1200
Subject: [Python-Dev] Switch statement
In-Reply-To: <17548.39643.646760.994634@montanaro.dyndns.org>
References: <20060610142736.GA19094@21degrees.com.au>
	<17547.19802.361151.705599@montanaro.dyndns.org>
	<448B680A.9020000@canterbury.ac.nz> <448C87E1.6070801@acm.org>
	<17548.39643.646760.994634@montanaro.dyndns.org>
Message-ID: <448CBE21.6070009@canterbury.ac.nz>

skip at pobox.com wrote:

> I agree, but that of course limits the expressions to constants which can be
> evaluated at compile-time as I indicated in my previous mail.

A way out of this would be to define the semantics so that
the expression values are allowed to be cached, and the
order of evaluation and testing is undefined. So the first
time through, the values could all be put in a dict, to
be looked up thereafter.
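
Hand-expanded, that would behave roughly like this (a sketch with
made-up names):

    START_TOKEN = '<'
    END_TOKEN = '>'

    _cases = None        # filled the first time the switch is reached

    def kind(expr):
        global _cases
        if _cases is None:
            # case expressions evaluated once, in no guaranteed order
            _cases = {START_TOKEN: 'start', END_TOKEN: 'end'}
        return _cases.get(expr, 'other')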

--
Greg

From terry at jon.es  Mon Jun 12 03:19:57 2006
From: terry at jon.es (Terry Jones)
Date: Mon, 12 Jun 2006 03:19:57 +0200
Subject: [Python-Dev] a note in random.shuffle.__doc__ ...
In-Reply-To: Your message at 12:27:49 on Monday, 12 June 2006
References: <1CC07EFE-45B2-4AA4-BEFB-0CFFE4CDB1F9@gmail.com>
	<20060610124332.F2B2.JCARLSON@uci.edu>
	<20060610125305.F2B5.JCARLSON@uci.edu>
	<EEEB13F7-072A-4B17-A99E-3463B53AB434@gmail.com>
	<17547.16708.105058.906604@terry.jones.tc>
	<448B664E.3040003@canterbury.ac.nz>
	<17547.27686.67002.988677@terry.jones.tc>
	<448CB505.2040304@canterbury.ac.nz>
Message-ID: <17548.49469.394804.146445@terry.jones.tc>

>>>>> "Greg" == Greg Ewing <greg.ewing at canterbury.ac.nz> writes:

Greg> Terry Jones wrote:
>> Suppose you have a RNG with a cycle length of 5. There's nothing to stop an
>> algorithm from taking multiple already returned values and combining them
>> in some (deterministic) way to generate > 5 outcomes.

Greg> No, it's not. As long as the RNG output is the only input to
Greg> the algorithm, and the algorithm is deterministic, it is
Greg> not possible get more than N different outcomes. It doesn't
Greg> matter what the algorithm does with the input.

Greg> If the algorithm can start out with more than one initial
Greg> state, then the RNG is not the only input.

The code below uses a RNG with period 5, is deterministic, and has one
initial state. It produces 20 different outcomes.

It's just doing a simplistic version of what a lagged RNG does,
but the lagged part is in the "algorithm" not in the rng. That's why I said
if you included the state of the algorithm in what you meant by "state" I'd
be more inclined to agree.

Terry

----

n = map(float, range(1, 17, 3))
i = 0

def rng():
    global i
    i += 1
    if i == 5: i = 0
    return n[i]

if __name__ == '__main__':
    seen = {}
    history = [rng()]
    o = 0
    for lag in range(1, 5):
        for x in range(5):
            o += 1
            new = rng()
            outcome = new / history[-lag]
            if outcome in seen: print "DUP!"
            seen[outcome] = True
            print "outcome %d = %f" % (o, outcome)
            history.append(new)


# Outputs
outcome 1 = 1.750000
outcome 2 = 1.428571
outcome 3 = 1.300000
outcome 4 = 0.076923
outcome 5 = 4.000000
outcome 6 = 7.000000
outcome 7 = 2.500000
outcome 8 = 1.857143
outcome 9 = 0.100000
outcome 10 = 0.307692
outcome 11 = 0.538462
outcome 12 = 10.000000
outcome 13 = 3.250000
outcome 14 = 0.142857
outcome 15 = 0.400000
outcome 16 = 0.700000
outcome 17 = 0.769231
outcome 18 = 13.000000
outcome 19 = 0.250000
outcome 20 = 0.571429

From tjreedy at udel.edu  Mon Jun 12 04:06:16 2006
From: tjreedy at udel.edu (Terry Reedy)
Date: Sun, 11 Jun 2006 22:06:16 -0400
Subject: [Python-Dev] sgmllib Comments
References: <448C7C75.70703@intertwingly.net>
	<200606112039.37834.fdrake@acm.org>
Message-ID: <e6ii6p$e0b$1@sea.gmane.org>


"Fred L. Drake, Jr." <fdrake at acm.org> wrote in message 
news:200606112039.37834.fdrake at acm.org...
> On Sunday 11 June 2006 16:26, Sam Ruby wrote:
> > Planet is a feed aggregator written in Python.  It depends heavily on
> > SGMLLib.  A recent bug report turned out to be a deficiency in sgmllib,
> > and I've submitted a test case and a patch[1] (use or discard the 
> > patch,
> > it is the test that I care about).
...
> > and which are original.  (Note: feeds often contain such abominations 
> > as
> > &amp;copy; which the new code will treat indistinguishably from &copy;)

> It really sounds like sgmllib is the wrong foundation for this.
...
> Have you looked at HTMLParser as an alternate to sgmllib?
> It has better support for XHTML constructs.

Have you (the OP) checked how related Python projects, such as Mark
Pilgrim's feed parser,
http://www.feedparser.org/
handle the same sort of input (I have only looked at docs and tests, not 
code)?

tjr




From tjreedy at udel.edu  Mon Jun 12 04:19:59 2006
From: tjreedy at udel.edu (Terry Reedy)
Date: Sun, 11 Jun 2006 22:19:59 -0400
Subject: [Python-Dev] Import semantics
References: <cfb578b20606111531t6806d5c9kd35fd8ba29638174@mail.gmail.com>
Message-ID: <e6ij0g$gb2$1@sea.gmane.org>


"Fabio Zadrozny" <fabiofz at gmail.com> wrote in message
>Jython 2.1 on java1.5.0 (JIT: null)
>Python 2.4.2 (#67, Sep 28 2005, 12:41:11) [MSC v.1310 32 bit (Intel)] on 
>win32

Jython 2.1 intends to match Python 2.1, I believe.
Python 2.2, which I still have loaded, matches Python 2.4 in the behavior 
reported.




From tjreedy at udel.edu  Mon Jun 12 04:40:53 2006
From: tjreedy at udel.edu (Terry Reedy)
Date: Sun, 11 Jun 2006 22:40:53 -0400
Subject: [Python-Dev] subprocess.Popen(.... stdout=IGNORE, ...)
References: <8393fff0606111659j16e0ed73wf16d3f7e0892d268@mail.gmail.com>
Message-ID: <e6ik7l$jc8$1@sea.gmane.org>


"Martin Blais" <blais at furius.ca> wrote in message 
news:8393fff0606111659j16e0ed73wf16d3f7e0892d268 at mail.gmail.com...
>   Any idea how this idiom could be supported using a more portable
>  solution (i.e. how would I make this idiom under Windows, is there
>   some equivalent to /dev/null)?

On a DOS/Windows command line,  '>NUL:' or '>nul:'





From tim.peters at gmail.com  Mon Jun 12 05:11:46 2006
From: tim.peters at gmail.com (Tim Peters)
Date: Sun, 11 Jun 2006 23:11:46 -0400
Subject: [Python-Dev] Should hex() yield 'L' suffix for long numbers?
In-Reply-To: <Pine.LNX.4.58.0606111925410.5223@server1.LFW.org>
References: <Pine.LNX.4.58.0606111925410.5223@server1.LFW.org>
Message-ID: <1f7befae0606112011y119f8f7ds91f753df01884ee4@mail.gmail.com>

[Ka-Ping Yee]
> I did this earlier:
>
>     >>> hex(9999999999999)
>     '0x9184e729fffL'
>
> and found it a little jarring, because i feel there's been a general
> trend toward getting rid of the 'L' suffix in Python.
>
> Literal long integers don't need an L anymore; they're automatically
> made into longs if the number is too big.  And while the repr() of
> a long retains the L on the end, the str() of a long does not, and
> i rather like that.
>
> So i kind of expected that hex() would not include the L either.
> I see its main job as just giving me the hex digits (in fact, for
> Python 3000 i'd prefer even to drop the '0x' as well), and the L
> seems superfluous and distracting.
>
> What do you think?  Is Python 2.5 a reasonable time to drop this L?

As I read pep 237, that should have happened in Python 2.3 or 2.4.
This specific case is kinda muddy there.  Regardless, the only part
that was left for Python 3 was "phase C", and this is phase C in its
entirety:

 C. The trailing 'L' is dropped from repr(), and made illegal on
       input.  (If possible, the 'long' type completely disappears.)

It's possible, though, that hex() and oct() were implicitly considered
to be variants of repr() for purposes of phase C.  How much are we
willing to pay Guido to Pronounce?

From tim.peters at gmail.com  Mon Jun 12 05:25:53 2006
From: tim.peters at gmail.com (Tim Peters)
Date: Sun, 11 Jun 2006 23:25:53 -0400
Subject: [Python-Dev] a note in random.shuffle.__doc__ ...
In-Reply-To: <17548.49469.394804.146445@terry.jones.tc>
References: <1CC07EFE-45B2-4AA4-BEFB-0CFFE4CDB1F9@gmail.com>
	<20060610124332.F2B2.JCARLSON@uci.edu>
	<20060610125305.F2B5.JCARLSON@uci.edu>
	<EEEB13F7-072A-4B17-A99E-3463B53AB434@gmail.com>
	<17547.16708.105058.906604@terry.jones.tc>
	<448B664E.3040003@canterbury.ac.nz>
	<17547.27686.67002.988677@terry.jones.tc>
	<448CB505.2040304@canterbury.ac.nz>
	<17548.49469.394804.146445@terry.jones.tc>
Message-ID: <1f7befae0606112025n121d3f56ye0b20e3b9120a076@mail.gmail.com>

[Terry Jones]
> The code below uses a RNG with period 5, is deterministic, and has one
> initial state. It produces 20 different outcomes.

Well, I'd call the sequence of 20 numbers it produces one outcome.
From that view, there are at most 5 outcomes it can produce (at most 5
distinct 20-number sequences).  In much the same way, there are at
most P distinct infinite sequences this can produce, if the PRNG used
by random.random() has period P:

def belch():
    import random, math
    start = random.random()
    i = 0
    while True:
        i += 1
        yield math.fmod(i * start, 1.0)

The trick is to define "outcome" in such a way that the original claim
is true :-)

From rubys at intertwingly.net  Mon Jun 12 06:01:23 2006
From: rubys at intertwingly.net (Sam Ruby)
Date: Mon, 12 Jun 2006 00:01:23 -0400
Subject: [Python-Dev] sgmllib Comments
In-Reply-To: <200606112039.37834.fdrake@acm.org>
References: <448C7C75.70703@intertwingly.net>
	<200606112039.37834.fdrake@acm.org>
Message-ID: <448CE713.3050000@intertwingly.net>

Fred L. Drake, Jr. wrote:
> On Sunday 11 June 2006 16:26, Sam Ruby wrote:
>  > Planet is a feed aggregator written in Python.  It depends heavily on
>  > SGMLLib.  A recent bug report turned out to be a deficiency in sgmllib,
>  > and I've submitted a test case and a patch[1] (use or discard the patch,
>  > it is the test that I care about).
> 
> And it's a nice aggregator to use, indeed!
> 
>  > While looking around, a few things surfaced.  For starters, it would
>  > seem that the version of sgmllib in SVN HEAD will selectively unescape
>  > certain character references that might appear in an attribute.  I say
>  > selectively, as:
>  >
>  >   * it will unescape  &amp;
>  >   * it won't unescape &copy;
>  >   * it will unescape  &#38;
>  >   * it won't unescape &#x26;
>  >   * it will unescape  &#146;
>  >   * it won't unescape &#8217;
> 
> And just why would you use sgmllib to handle RSS or ATOM feeds?  Neither is 
> defined in terms of SGML.  The sgmllib documentation also notes that it isn't 
> really a fully general SGML parser (it isn't), but that it exists primarily 
> as a foundation for htmllib.

The feed itself is read first with SAX (then with a fallback using 
sgmllib if the feed is not well formed, but that's beside the point). 
The embedded HTML portions are then processed with subclasses of
sgmllib.

>  > There are a number of issues here.  While not unescaping anything is
>  > suboptimal, at least the recipient is aware of exactly which characters
>  > have been unescaped (i.e., none of them).  The proposed solution makes
>  > it impossible for the recipient to know which characters are unescaped,
>  > and which are original.  (Note: feeds often contain such abominations as
>  > &amp;copy; which the new code will treat indistinguishably from &copy;)
> 
> My suspicion is that the "right" thing to do at the sgmllib level is to 
> categorize the markup and call a method depending on what the entity 
> reference is, and let that handle whatever it is.  For SGML, that means we 
> have things like &name; (entity references), &#123; (character references), 
> and that's it.  &#x123; isn't legal SGML under any circumstance; 
> the "&#x<number>;" syntax was introduced with XML.

... but it effectively is valid HTML.  And as you point out below 
sgmllib's raison d'être is to support htmllib.

>  > Additionally, there is a unicode issue here - one that is shared by
>  > handle_charref, but at least that method is overrideable.  If unescaping
>  > remains, do it for hex character references and for values greather than
>  > 8-bits, i.e., use unichr instead of chr if the value is greater than 127.
> 
> For SGML, it's worse than that, since the document character set is defined in 
> the SGML declaration, which is a far hairier beast than an XML 
> declaration.  :-)

understood

> It really sounds like sgmllib is the wrong foundation for this.  While the 
> module has some questionable behaviors, none of them are signifcant in the 
> context it's intended context (support for htmllib).  Now, I understand that 
> RSS has historical issues, with HTML-as-practiced getting embedded as payload 
> data with various flavors of escaping applied, and I'm not an expert in the 
> details of that.  Have you looked at HTMLParser as an alternate to sgmllib?  
> It has better support for XHTML constructs.

HTMLParser is less forgiving, and generally less suitable for consuming 
HTML as practiced.

- Sam Ruby


From rubys at intertwingly.net  Mon Jun 12 06:05:06 2006
From: rubys at intertwingly.net (Sam Ruby)
Date: Mon, 12 Jun 2006 00:05:06 -0400
Subject: [Python-Dev] sgmllib Comments
In-Reply-To: <e6ii6p$e0b$1@sea.gmane.org>
References: <448C7C75.70703@intertwingly.net>	<200606112039.37834.fdrake@acm.org>
	<e6ii6p$e0b$1@sea.gmane.org>
Message-ID: <448CE7F2.4060106@intertwingly.net>

Terry Reedy wrote:
> "Fred L. Drake, Jr." <fdrake at acm.org> wrote in message 
> news:200606112039.37834.fdrake at acm.org...
>> On Sunday 11 June 2006 16:26, Sam Ruby wrote:
>>> Planet is a feed aggregator written in Python.  It depends heavily on
>>> SGMLLib.  A recent bug report turned out to be a deficiency in sgmllib,
>>> and I've submitted a test case and a patch[1] (use or discard the 
>>> patch,
>>> it is the test that I care about).
> ...
>>> and which are original.  (Note: feeds often contain such abominations 
>>> as
>>> &amp;copy; which the new code will treat indistinguishably from &copy;)
> 
>> It really sounds like sgmllib is the wrong foundation for this.
> ...
>> Have you looked at HTMLParser as an alternate to sgmllib?
>> It has better support for XHTML constructs.
> 
> Have you (the OP), checked how related Python projects, such as Mark 
> Pilgrim's feed parser,
> http://www.feedparser.org/
> handle the same sort of input (I have only looked at docs and tests, not 
> code).

Just to be clear: Planet uses Mark's feed parser, which uses SGMLlib.

I'm a committer on that project:

http://sourceforge.net/project/memberlist.php?group_id=112328

I was investigating a bug in sgmllib which affected the feed parser (and 
therefore Planet), and noticed that there were changes in the SVN head 
of Python which broke three feed parser unit tests.

It is my belief that these changes will break other existing users of 
sgmllib.

- Sam Ruby

From martin at v.loewis.de  Mon Jun 12 06:41:25 2006
From: martin at v.loewis.de (Martin v. Löwis)
Date: Mon, 12 Jun 2006 06:41:25 +0200
Subject: [Python-Dev] sgmllib Comments
In-Reply-To: <20060611221132.GA16080@panix.com>
References: <448C7C75.70703@intertwingly.net>
	<20060611221132.GA16080@panix.com>
Message-ID: <448CF075.2000405@v.loewis.de>

Aahz wrote:
> When providing links to SF, please use the python.org tinyurl equivalent
> to ensure that people can easily see the bug/patch number:
> 
> http://www.python.org/sf?id=1504333

Although I usually use the path-style form:

http://www.python.org/sf/1504333

Regards,
Martin

From fdrake at acm.org  Mon Jun 12 06:53:17 2006
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Mon, 12 Jun 2006 00:53:17 -0400
Subject: [Python-Dev] sgmllib Comments
In-Reply-To: <448CE7F2.4060106@intertwingly.net>
References: <448C7C75.70703@intertwingly.net> <e6ii6p$e0b$1@sea.gmane.org>
	<448CE7F2.4060106@intertwingly.net>
Message-ID: <200606120053.17829.fdrake@acm.org>

On Monday 12 June 2006 00:05, Sam Ruby wrote:
 > Just to be clear: Planet uses Mark's feed parser, which uses SGMLlib.

Cool.

 > I was investigating a bug in sgmllib which affected the feed parser (and
 > therefore Planet), and noticed that there were changes in the SVN head
 > of Python which broke three feed parser unit tests.
 >
 > It is my belief that these changes will break other existing users of
 > sgmllib.

This is good to know; thanks for pointing it out.

If you can summarize the specific changes to sgmllib that cause problems for 
the feed parser, and identify the tests there that rely on the old behavior, 
I'll be glad to look at the problems.  I expect to have some time in the next 
few evenings, so I should be able to look at these soon.

Is the SourceForge CVS the definitive development source for the feed parser?


  -Fred

-- 
Fred L. Drake, Jr.   <fdrake at acm.org>

From martin at v.loewis.de  Mon Jun 12 07:06:45 2006
From: martin at v.loewis.de (Martin v. Löwis)
Date: Mon, 12 Jun 2006 07:06:45 +0200
Subject: [Python-Dev] sgmllib Comments
In-Reply-To: <448C7C75.70703@intertwingly.net>
References: <448C7C75.70703@intertwingly.net>
Message-ID: <448CF665.4010208@v.loewis.de>

Sam Ruby wrote:
> Planet is a feed aggregator written in Python.  It depends heavily on 
> SGMLLib.  A recent bug report turned out to be a deficiency in sgmllib, 
> and I've submitted a test case and a patch[1] (use or discard the patch, 
> it is the test that I care about).

I think (but am not sure) you are referring to patch #1462498 here,
which fixes bugs 1452246 and 1087808.

>   * it will unescape  &amp;
>   * it won't unescape &copy;

That must be because you have amp in your entitydefs, but not copy.

>   * it will unescape  &#38;
>   * it won't unescape &#x26;

That's because it doesn't recognize hex character references.
That's systematic, though: it doesn't just ignore them in attribute
values, but also in content.

>   * it will unescape  &#146;
>   * it won't unescape &#8217;

That's because the value is 256 or larger, so chr() fails.

> There are a number of issues here.  While not unescaping anything is 
> suboptimal, at least the recipient is aware of exactly which characters 
> have been unescaped (i.e., none of them).  The proposed solution makes 
> it impossible for the recipient to know which characters are unescaped, 
> and which are original.  (Note: feeds often contain such abominations as 
> &amp;copy; which the new code will treat indistinguishably from &copy;)

The recipient should then add &copy; to entitydefs; sgmllib will
unescape copy, so the recipient can know not to unescape that.

Alternatively, the recipient could provide an empty entitydefs.

> Additionally, there is a unicode issue here - one that is shared by 
> handle_charref, but at least that method is overrideable.  If unescaping 
> remains, do it for hex character references and for values greather than 
> 8-bits, i.e., use unichr instead of chr if the value is greater than 127.

Alternatively, a callback function could be provided for character
references. Unfortunately, the existing callback is unsuitable,
as it is supposed to do the full processing; this callback should
return the replacement text. Generally assuming Unicode would be
wrong, though.

Would you like to contribute a patch?

Regards,
Martin

From martin at v.loewis.de  Mon Jun 12 07:10:13 2006
From: martin at v.loewis.de (Martin v. Löwis)
Date: Mon, 12 Jun 2006 07:10:13 +0200
Subject: [Python-Dev] 2.5 issues need resolving in a few days
In-Reply-To: <ee2a432c0606082333o44b7465bk45b686ca83ccc802@mail.gmail.com>
References: <ee2a432c0606082333o44b7465bk45b686ca83ccc802@mail.gmail.com>
Message-ID: <448CF735.7000404@v.loewis.de>

Neal Norwitz wrote:
> The most important outstanding issue is the xmlplus/xmlcore issue.
> It's not going to get fixed unless someone works on it.  There's only
> a few days left before beta 1.  Can someone please address this?

From my point of view, I shall consider them resolved/irrelevant:
I'm going to step down as a PyXML maintainer, so I don't have to
worry anymore about how to maintain PyXML. If PyXML then gets
unmaintained, the problem goes away, otherwise, the new maintainer
will have to find a solution.

Regards,
Martin

From rubys at intertwingly.net  Mon Jun 12 07:11:15 2006
From: rubys at intertwingly.net (Sam Ruby)
Date: Mon, 12 Jun 2006 01:11:15 -0400
Subject: [Python-Dev] sgmllib Comments
In-Reply-To: <200606120053.17829.fdrake@acm.org>
References: <448C7C75.70703@intertwingly.net> <e6ii6p$e0b$1@sea.gmane.org>
	<448CE7F2.4060106@intertwingly.net>
	<200606120053.17829.fdrake@acm.org>
Message-ID: <448CF773.7080209@intertwingly.net>

Fred L. Drake, Jr. wrote:
> On Monday 12 June 2006 00:05, Sam Ruby wrote:
>  > Just to be clear: Planet uses Mark's feed parser, which uses SGMLlib.
> 
> Cool.
> 
>  > I was investigating a bug in sgmllib which affected the feed parser (and
>  > therefore Planet), and noticed that there were changes in the SVN head
>  > of Python which broke three feed parser unit tests.
>  >
>  > It is my belief that these changes will break other existing users of
>  > sgmllib.
> 
> This is good to know; thanks for pointing it out.
> 
> If you can summarize the specific changes to sgmllib that cause problems for 
> the feed parser, and identify the tests there that rely on the old behavior, 
> I'll be glad to look at the problems.  I expect to have some time in the next 
> few evenings, so I should be able to look at these soon.
> 
> Is the SourceForge CVS the definitive development source for the feed parser?

Yes: but if you check out the CVS HEAD, you won't see any failures as 
I've committed changes that mitigate the problems I've found.

However, if you get the latest release instead, you will see that feeds 
that contain &lt; &amp; or &gt; in attribute values will get these 
converted to <, &, and > characters instead.  In some cases, this can 
cause problems.  Particularly if the output is reparsed by sgmllib.

Additionally, character references in the range of &#129; to &#255; will
cause the released Feed Parser to die with a UnicodeDecodeError.

My workarounds are to re-escape < and > characters, and to escape bare 
ampersands - beyond that I can't really tell for sure which ampersands 
need to be re-escaped, and which ones I should leave as is.

And I first try decoding attributes in the original declared encoding 
and then fall back to iso-8859-1.  If a single attribute value contains 
both non-ASCII utf-8 characters and a numeric character reference above 
&#128; then this will produce incorrect results.

I also have committed a workaround to the incorrect parsing of 
attributes with quoted markup that I originally reported.

- Sam Ruby

From rubys at intertwingly.net  Mon Jun 12 07:24:45 2006
From: rubys at intertwingly.net (Sam Ruby)
Date: Mon, 12 Jun 2006 01:24:45 -0400
Subject: [Python-Dev] sgmllib Comments
In-Reply-To: <448CF665.4010208@v.loewis.de>
References: <448C7C75.70703@intertwingly.net> <448CF665.4010208@v.loewis.de>
Message-ID: <448CFA9D.9070404@intertwingly.net>

Martin v. Löwis wrote:
> 
> Alternatively, a callback function could be provided for character
> references. Unfortunately, the existing callback is unsuitable,
> as it is supposed to do the full processing; this callback should
> return the replacement text. Generally assuming Unicode would be
> wrong, though.
> 
> Would you like to contribute a patch?

If we can agree on the behavior, I would be glad to write up a patch.

It seems to me that the simplest way to proceed would be for the code 
that attempts to resolve character references (both named and numeric) 
in attributes to be isolated in a single method.  Subclasses that desire 
different behavior (including the existing Python 2.4 and prior 
behaviour) could simply override this method.

- Sam Ruby

From martin at v.loewis.de  Mon Jun 12 08:18:50 2006
From: martin at v.loewis.de (Martin v. Löwis)
Date: Mon, 12 Jun 2006 08:18:50 +0200
Subject: [Python-Dev] sgmllib Comments
In-Reply-To: <448CFA9D.9070404@intertwingly.net>
References: <448C7C75.70703@intertwingly.net> <448CF665.4010208@v.loewis.de>
	<448CFA9D.9070404@intertwingly.net>
Message-ID: <448D074A.5030508@v.loewis.de>

Sam Ruby wrote:
> If we can agree on the behavior, I would be glad to write up a patch.
> 
> It seems to me that the simplest way to proceed would be for the code
> that attempts to resolve character references (both named and numeric)
> in attributes to be isolated in a single method.  Subclasses that desire
> different behavior (including the existing Python 2.4 and prior
> behaviour) could simply override this method.

In SGML, this is problematic: The named things are not character
references, they are entity references, and it isn't necessarily
the case that they expand to a character. For example, &author;
might expand to "Martin v. Löwis", and &logo; might refer to a
bitmap image which is unparsed.

That said, providing a overridable replacement function sounds
like the right approach. To keep with tradition, I would still
distinguish between character references and entity references,
i.e. providing two overridable functions instead. Returning
None could mean that no replacement is available.

As for default implementations, I think they should do what
currently happens: entity references are replaced according to
entitydefs, character references are replaced to bytes if
they are smaller than 256.
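
Roughly, as a sketch (the function names and defaults here are only
illustrative, not a finished interface):

    entitydefs = {'lt': '<', 'gt': '>', 'amp': '&', 'quot': '"', 'apos': "'"}

    def convert_entityref(name):
        # return replacement text, or None if no replacement is known
        return entitydefs.get(name)       # e.g. 'copy' -> None

    def convert_charref(name):
        if name[:1] in ('x', 'X'):
            n = int(name[1:], 16)         # HCRO form, e.g. x26
        else:
            n = int(name)
        if 0 <= n < 256:
            return chr(n)                 # current byte-oriented default
        return None                       # no replacement; leave as-is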

Contrary to what others said, it appears that SGML *does*
support hexadecimal character references, provided that
the SGML declaration contains the HCRO definition (which,
for HTML and XML, is defined as HCRO "&#38;#x"). So it seems
safe to process hex character references by default (although
it isn't safe to assume Unicode, IMO).

Regards,
Martin

From pedronis at strakt.com  Mon Jun 12 10:00:13 2006
From: pedronis at strakt.com (Samuele Pedroni)
Date: Mon, 12 Jun 2006 10:00:13 +0200
Subject: [Python-Dev] Import semantics
In-Reply-To: <cfb578b20606111531t6806d5c9kd35fd8ba29638174@mail.gmail.com>
References: <cfb578b20606111531t6806d5c9kd35fd8ba29638174@mail.gmail.com>
Message-ID: <448D1F0D.7000405@strakt.com>

Fabio Zadrozny wrote:
> Python and Jython import semantics differ on how sub-packages should be 
> accessed after importing some module:
> 
> Jython 2.1 on java1.5.0 (JIT: null)
> Type "copyright", "credits" or "license" for more information.
>  >>> import xml
>  >>> xml.dom
> <module xml.dom at 10340434>
> 
> Python 2.4.2 (#67, Sep 28 2005, 12:41:11) [MSC v.1310 32 bit (Intel)] on 
> win32
> Type "help", "copyright", "credits" or "license" for more information.
>  >>> import xml
>  >>> xml.dom
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
> AttributeError: 'module' object has no attribute 'dom'
>  >>> from xml.dom import pulldom
>  >>> xml.dom
> <module 'xml.dom' from 'C:\bin\Python24\lib\xml\dom\__init__.pyc'>
> 
> Note that in Jython importing a module makes all subpackages beneath it 
> available, whereas in python, only the tokens available in __init__.py 
> are accessible, but if you do load the module later even if not getting 
> it directly into the namespace, it gets accessible too -- this seems 
> more like something unexpected to me -- I would expect it to be 
> available only if I did some "import xml.dom" at some point.
> 
> My problem is that in Pydev, in static analysis, I would only get the 
> tokens available for actually imported modules, but that's not true for 
> Jython, and I'm not sure if the current behaviour in Python was expected.
> 
> So... which would be the right semantics for this?

The difference in Jython is deliberate. I think the reason was to more
closely mimic the Java style here: in Java, fully qualified names always
work. In Jython, importing the top-level package is enough to get a
similar effect.

This is unlikely to change for backward compatibility reasons, at least 
from my POV.
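
A quick way to see the CPython side of this at the prompt (a small sketch):

    import xml
    print hasattr(xml, 'dom')   # False: xml/__init__.py does not import xml.dom
    import xml.dom
    print hasattr(xml, 'dom')   # True: importing the submodule bound it on xml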


> 
> Thanks,
> 
> Fabio
> 
> 
> ------------------------------------------------------------------------
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/pedronis%40strakt.com


From python-dev at zesty.ca  Mon Jun 12 10:49:45 2006
From: python-dev at zesty.ca (Ka-Ping Yee)
Date: Mon, 12 Jun 2006 03:49:45 -0500 (CDT)
Subject: [Python-Dev] UUID module
In-Reply-To: <020a01c68d5b$f2130170$3db72997@bagio>
References: <5.1.1.6.0.20060307103122.042c4ae8@mail.telecommunity.com>
	<Pine.LNX.4.58.0606090221330.5223@server1.LFW.org>
	<020a01c68d5b$f2130170$3db72997@bagio>
Message-ID: <Pine.LNX.4.58.0606120347110.5223@server1.LFW.org>

On Sun, 11 Jun 2006, Giovanni Bajo wrote:
> Some comments on the code:
>
> > for dir in ['', r'c:\windows\system32', r'c:\winnt\system32']:
>
> Can we get rid of these absolute paths? Something like this should suffice:
>
> >>> from ctypes import *
> >>> buf = create_string_buffer(4096)
> >>> windll.kernel32.GetSystemDirectoryA(buf, 4096)
> 17
> >>> buf.value.decode("mbcs")
> u'C:\\WINNT\\system32'

I'd like to, but i don't want to use a method for finding the system
directory that depends on ctypes.  Is there a more general way?

> >  for function in functions:
> >        try:
> >            _node = function()
> >        except:
> >            continue
>
> This also hides typos and whatnot.

The intended semantics of getnode() are that it cannot fail.
The individual *_getnode() functions do throw exceptions if
something goes wrong, and so they can be tested individually
on platforms where they are expected to work.
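
For illustration, the fallback loop could stay "can't fail" without a bare
except by only swallowing the failure modes the helpers are expected to
raise (a sketch; the argument names are stand-ins, not the module's API):

    def getnode(candidates, last_resort):
        for function in candidates:
            try:
                return function()
            except (ImportError, OSError, IOError, ValueError):
                continue    # expected platform failures; a bare except would
                            # also hide typos such as a NameError
        return last_resort()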


-- ?!ng

From python-dev at zesty.ca  Mon Jun 12 10:59:04 2006
From: python-dev at zesty.ca (Ka-Ping Yee)
Date: Mon, 12 Jun 2006 03:59:04 -0500 (CDT)
Subject: [Python-Dev] UUID module
In-Reply-To: <5.1.1.6.0.20060611215139.01e7ced8@sparrow.telecommunity.com>
References: <5.1.1.6.0.20060610123421.01f62e60@mail.telecommunity.com>
	<e6eorl$o0d$1@sea.gmane.org>
	<5.1.1.6.0.20060307103122.042c4ae8@mail.telecommunity.com>
	<Pine.LNX.4.58.0606090221330.5223@server1.LFW.org>
	<e6baf3$8fl$1@sea.gmane.org>
	<9e804ac0606090626o20008650o93a211da5e6f6f64@mail.gmail.com>
	<Pine.LNX.4.58.0606100804170.5223@server1.LFW.org>
	<e6enei$kpo$1@sea.gmane.org>
	<Pine.LNX.4.58.0606101016530.5223@server1.LFW.org>
	<e6eorl$o0d$1@sea.gmane.org>
	<5.1.1.6.0.20060610123421.01f62e60@mail.telecommunity.com>
	<5.1.1.6.0.20060611215139.01e7ced8@sparrow.telecommunity.com>
Message-ID: <Pine.LNX.4.58.0606120354270.5223@server1.LFW.org>

On Sun, 11 Jun 2006, Phillip J. Eby wrote:
> At 07:24 PM 6/11/2006 -0500, Ka-Ping Yee wrote:
> >I've added code to make uuid1() use uuid_generate_time() if available
> >and uuid4() use uuid_generate_random() if available.  These functions
> >are provided on Mac OS X (in libc) and on Linux (in libuuid).  Does
> >that work for you?
>
> Sure - but actually my main point was to have a uuid() call you could use
> to just get whatever the platform's preferred form of GUID is, without
> having to pick what *type* you want.

I'm reluctant to do that, because there's a privacy question here.
I think the person using the module should have control over whether
the UUID is going to leak the host ID or not (rather than leaving it
up to whatever the platform prefers or which call the implementor of
uuid.py happened to choose for a given platform).

> Perhaps that isn't feasible, or is a bad idea for some other reason, but my
> main point was to have a call that means "get me a good unique ID".  :)

Couldn't we just recommend uuid.uuid4() for that?


-- ?!ng

From rasky at develer.com  Mon Jun 12 11:08:09 2006
From: rasky at develer.com (Giovanni Bajo)
Date: Mon, 12 Jun 2006 11:08:09 +0200
Subject: [Python-Dev] UUID module
References: <5.1.1.6.0.20060307103122.042c4ae8@mail.telecommunity.com>
	<Pine.LNX.4.58.0606090221330.5223@server1.LFW.org>
	<020a01c68d5b$f2130170$3db72997@bagio>
	<Pine.LNX.4.58.0606120347110.5223@server1.LFW.org>
Message-ID: <047e01c68dff$bae89760$3db72997@bagio>

Ka-Ping Yee <python-dev at zesty.ca> wrote:

>>> for dir in ['', r'c:\windows\system32', r'c:\winnt\system32']:
>>
>> Can we get rid of these absolute paths? Something like this should
>> suffice:
>>
>>>>> from ctypes import *
>>>>> buf = create_string_buffer(4096)
>>>>> windll.kernel32.GetSystemDirectoryA(buf, 4096)
>> 17
>>>>> buf.value.decode("mbcs")
>> u'C:\\WINNT\\system32'
>
> I'd like to, but i don't want to use a method for finding the system
> directory that depends on ctypes.

Why?

> Is there a more general way?

GetSystemDirectory() is the official way to find the system directory. You can
access it either through ctypes or through pywin32, but I guess we're moving to
ctypes for this kind of stuff, since it's bundled in 2.5. I don't know of any
function in sys or os that returns that directory. Another thing you might do
is to drop those absolute system directories altogether. After all, ipconfig
should always be on the path.

As a last note, you are parsing ipconfig output assuming an English Windows
installation. My Italian Windows 2000 has localized output.

Giovanni Bajo


From engelbert.gruber at ssg.co.at  Mon Jun 12 11:23:01 2006
From: engelbert.gruber at ssg.co.at (engelbert.gruber at ssg.co.at)
Date: Mon, 12 Jun 2006 11:23:01 +0200 (CEST)
Subject: [Python-Dev] 2.5 issues need resolving in a few days
In-Reply-To: <e6esab$388$1@sea.gmane.org>
References: <ee2a432c0606082333o44b7465bk45b686ca83ccc802@mail.gmail.com>
	<e6esab$388$1@sea.gmane.org>
Message-ID: <Pine.LNX.4.64.0606121118240.9824@lx3.local>

On Sat, 10 Jun 2006, Fredrik Lundh wrote:

> if all undocumented modules had as much documentation and articles as
> ET, the world would be a lot better documented ;-)
>
> I've posted a text version of the xml.etree.ElementTree PythonDoc here:
>
>     http://www.python.org/sf/1504046
>
> hopefully, one of the anything-to-latex volunteers will pick this up
> shortly; otherwise, I'll deal with that early next week.

I am new to anything-to-latex, but I gave it a try.

elementtree.txt is the modified text version:

   1. add a classifier to the function and class documentation
   2. remove the Arguments:/Returns: definition lists, first because the
      tool does not handle them in a useful way, and second because I
      couldn't find this style in lib/*.tex and therefore don't know how
      it should be handled.

elementtree.tex is the LaTeX version generated from this version (rst2docpy)

cheers
-------------- next part --------------
:Module: elementtree
:Summary: The xml.etree.ElementTree Module
:Module Type: standard
:Author: Fredrik Lundh <fredrik at pythonware.com>
:Version Added: 2.5
:Synopsis: This module provides implementations of the Element and ElementTree
           types, plus support classes.

           A C version of this API is available as xml.etree.cElementTree.

Overview
--------

The Element type is a flexible container object, designed to store
hierarchical data structures in memory. The type can be described as a
cross between a list and a dictionary.

Each element has a number of properties associated with it:

* a tag. This is a string identifying what kind of data
  this element represents (the element type, in other words).
* a number of attributes, stored in a Python dictionary.
* a text string.
* an optional tail string.
* a number of child elements, stored in a Python sequence.

To create an element instance, use the Element or SubElement factory
functions.

The ElementTree class can be used to wrap an element
structure, and convert it from and to XML.

Functions
---------

Comment(text=None) : funcdesc
  Comment element factory.  This factory function creates a special
  element that will be serialized as an XML comment.

  The comment string can be either an 8-bit ASCII string or a Unicode
  string.

  text: A string containing the comment string.

  Returns: An element instance, representing a comment.

dump(elem) : funcdesc
  Writes an element tree or element structure to sys.stdout.  This
  function should be used for debugging only.

  The exact output format is implementation dependent.  In this
  version, it's written as an ordinary XML file.

  elem: An element tree or an individual element.

Element(tag, attrib={}, **extra) : funcdesc
  Element factory.  This function returns an object implementing the
  standard Element interface.  The exact class or type of that object
  is implementation dependent, but it will always be compatible with
  the _ElementInterface class in this module.

  The element name, attribute names, and attribute values can be
  either 8-bit ASCII strings or Unicode strings.

  tag: The element name.

  attrib: An optional dictionary, containing element attributes.

  extra: Additional attributes, given as keyword arguments.

  Returns: An element instance.

fromstring(text) : funcdesc
  Parses an XML document from a string constant.  Same as XML.

  text: A string containing XML data.

  Returns: An Element instance.

iselement(element) : funcdesc
  Checks if an object appears to be a valid element object.

  element: An element instance.

  Returns: A true value if this is an element object.

iterparse(source, events=None) : funcdesc
  Parses an XML document into an element tree incrementally, and reports
  what's going on to the user.

  source: A filename or file object containing XML data.

  events: A list of events to report back.  If omitted, only "end"
  events are reported.

  Returns: An (event, elem) iterator.

parse(source, parser=None) : funcdesc
  Parses an XML document into an element tree.

  source: A filename or file object containing XML data.

  parser: An optional parser instance.  If not given, the
  standard XMLTreeBuilder parser is used.

  Returns: An ElementTree instance.

ProcessingInstruction(target, text=None) : funcdesc
  PI element factory.  This factory function creates a special element
  that will be serialized as an XML processing instruction.

  target: A string containing the PI target.

  text: A string containing the PI contents, if any.

  Returns: An element instance, representing a PI.

SubElement(parent, tag, attrib={}, **extra) : funcdesc
  Subelement factory.  This function creates an element instance, and
  appends it to an existing element.

  The element name, attribute names, and attribute values can be
  either 8-bit ASCII strings or Unicode strings.

  parent: The parent element.

  tag: The subelement name.

  attrib: An optional dictionary, containing element attributes.

  extra: Additional attributes, given as keyword arguments.

  Returns: An element instance.

tostring(element, encoding=None) : funcdesc
  Generates a string representation of an XML element, including all
  subelements.

  element: An Element instance.

  Returns: An encoded string containing the XML data.

XML(text) : funcdesc
  Parses an XML document from a string constant.  This function can
  be used to embed "XML literals" in Python code.

  text: A string containing XML data.

  Returns: An Element instance.

XMLID(text) : funcdesc
  Parses an XML document from a string constant, and also returns
  a dictionary which maps element ids to elements.

  text: A string containing XML data.

  Returns: A tuple containing an Element instance and a dictionary.

ElementTree Objects
-------------------

class ElementTree(element=None, file=None) : classdesc
  ElementTree wrapper class.  This class represents an entire element
  hierarchy, and adds some extra support for serialization to and from
  standard XML.

  element: Optional root element.

  file (keyword): Optional file handle or name.  If given, the
  tree is initialized with the contents of this XML file.

_setroot(element) : methoddesc
  Replaces the root element for this tree.  This discards the
  current contents of the tree, and replaces it with the given
  element.  Use with care.

  element: An element instance.

find(path) : methoddesc
  Finds the first toplevel element with the given tag.
  Same as getroot().find(path).

  path: What element to look for.

  Returns: The first matching element, or None if no element was found.

findall(path) : methoddesc
  Finds all toplevel elements with the given tag.
  Same as getroot().findall(path).

  path: What element to look for.

  Returns: A list or iterator containing all matching elements,
  in document order.

findtext(path, default=None) : methoddesc
  Finds the element text for the first toplevel element with the given
  tag.  Same as getroot().findtext(path).

  path: What toplevel element to look for.

  default: What to return if the element was not found.

  Returns: The text content of the first matching element, or the
  default value if no element was found.  Note that if the element
  is found but has no text content, this method returns an
  empty string.

getiterator(tag=None) : methoddesc
  Creates a tree iterator for the root element.  The iterator loops
  over all elements in this tree, in document order.

  tag: What tags to look for (default is to return all elements).

  Returns: An iterator.

getroot() : methoddesc
  Gets the root element for this tree.

  Returns:
    An element instance.

parse(source, parser=None) : methoddesc
  Loads an external XML document into this element tree.

  source: A file name or file object.

  parser: An optional parser instance.  If not given, the
  standard XMLTreeBuilder parser is used.

  Returns: The document root element.

write(file, encoding="us-ascii") : methoddesc
  Writes the element tree to a file, as XML.

  file: A file name, or a file object opened for writing.

  encoding: Optional output encoding (default is US-ASCII).

QName Objects
-------------

class QName(text_or_uri, tag=None) : classdesc
  QName wrapper.  This can be used to wrap a QName attribute value, in
  order to get proper namespace handling on output.

  text_or_uri: A string containing the QName value, in the form {uri}local,
  or, if the tag argument is given, the URI part of a QName.

  tag: Optional tag.  If given, the first argument is interpreted as
  a URI, and this argument is interpreted as a local name.

  Returns: An opaque object, representing the QName.

TreeBuilder Objects
-------------------

class TreeBuilder(element_factory=None) : classdesc
  Generic element structure builder.  This builder converts a sequence
  of start, data, and end method calls to a well-formed element structure.

  You can use this class to build an element structure using a custom XML
  parser, or a parser for some other XML-like format.

  element_factory: Optional element factory.  This factory
  is called to create new Element instances, as necessary.

close() : methoddesc
  Flushes the parser buffers, and returns the toplevel document
  element.

  Returns:
    An Element instance.

data(data) : methoddesc
  Adds text to the current element.

  data: A string.  This should be either an 8-bit string
  containing ASCII text, or a Unicode string.

end(tag) : methoddesc
  Closes the current element.

  tag: The element name.

  Returns: The closed element.

start(tag, attrs) : methoddesc
  Opens a new element.

  tag: The element name.

  attrs: A dictionary containing element attributes.

  Returns: The opened element.

XMLTreeBuilder Objects
----------------------

class XMLTreeBuilder(html=0, target=None) : classdesc
  Element structure builder for XML source data, based on the
  expat parser.

  target (keyword): Target object.  If omitted, the builder uses an
  instance of the standard TreeBuilder class.

  html (keyword): Predefine HTML entities.  This flag is not supported
  by the current implementation.

close() : methoddesc
  Finishes feeding data to the parser.

  Returns:
    An element structure.

doctype(name, pubid, system) : methoddesc
  Handles a doctype declaration.

  name: Doctype name.

  pubid: Public identifier.

  system: System identifier.

feed(data) : methoddesc
  Feeds data to the parser.

  data: Encoded data.

-------------- next part --------------
A non-text attachment was scrubbed...
Name: elementtree.tex
Type: text/x-tex
Size: 10885 bytes
Desc: 
Url : http://mail.python.org/pipermail/python-dev/attachments/20060612/9c4f7e28/attachment-0001.tex 

From walter at livinglogic.de  Mon Jun 12 11:56:17 2006
From: walter at livinglogic.de (=?ISO-8859-1?Q?Walter_D=F6rwald?=)
Date: Mon, 12 Jun 2006 11:56:17 +0200
Subject: [Python-Dev] beta1 coming real soon
In-Reply-To: <003f01c68c40$8360c2b0$0400a8c0@whiterabc2znlh>
References: <005101c68bad$cfd9ac30$0400a8c0@whiterabc2znlh>
	<448953EA.9080006@livinglogic.de>
	<003f01c68c40$8360c2b0$0400a8c0@whiterabc2znlh>
Message-ID: <448D3A41.3090402@livinglogic.de>

H.Yamamoto wrote:

> ----- Original Message ----- 
> From: "Walter D?rwald" <walter at livinglogic.de>
> To: "H.Yamamoto" <ocean at m2.ccsnet.ne.jp>
> Cc: "python-dev" <python-dev at python.org>
> Sent: Friday, June 09, 2006 7:56 PM
> Subject: Re: [Python-Dev] beta1 coming real soon
> 
>> The best way to throughly test the patch is of course to check it in. ;)
> 
> Is it too risky? ;)

At least I'd like to get a second review of the patch.

>> I've tested the patch on Windows and there were no obvious bugs. Of
>> course to *really* test the patch a Windows installation with a
>> multibyte locale is required.
>>
>>> # Maybe, no one is using this codec?
>> The audience is indeed limited.
> 
> Yes, I agree. And the audience who has "64bit" Windows with multibyte locale
> should be much more limited...

Unfortunately, yes.

Servus,
   Walter


From p.f.moore at gmail.com  Mon Jun 12 12:02:08 2006
From: p.f.moore at gmail.com (Paul Moore)
Date: Mon, 12 Jun 2006 11:02:08 +0100
Subject: [Python-Dev] UUID module
In-Reply-To: <Pine.LNX.4.58.0606120347110.5223@server1.LFW.org>
References: <5.1.1.6.0.20060307103122.042c4ae8@mail.telecommunity.com>
	<Pine.LNX.4.58.0606090221330.5223@server1.LFW.org>
	<020a01c68d5b$f2130170$3db72997@bagio>
	<Pine.LNX.4.58.0606120347110.5223@server1.LFW.org>
Message-ID: <79990c6b0606120302k9d66af1w32cc99c4f4d68b5e@mail.gmail.com>

On 6/12/06, Ka-Ping Yee <python-dev at zesty.ca> wrote:
> On Sun, 11 Jun 2006, Giovanni Bajo wrote:
> > Some comments on the code:
> >
> > > for dir in ['', r'c:\windows\system32', r'c:\winnt\system32']:
> >
> > Can we get rid of these absolute paths? Something like this should suffice:
> >
> > >>> from ctypes import *
> > >>> buf = create_string_buffer(4096)
> > >>> windll.kernel32.GetSystemDirectoryA(buf, 4096)
> > 17
> > >>> buf.value.decode("mbcs")
> > u'C:\\WINNT\\system32'
>
> I'd like to, but i don't want to use a method for finding the system
> directory that depends on ctypes.  Is there a more general way?

Why not use ctypes? This is precisely the situation it was designed
for. There's nothing more general (this is totally Windows-specific
after all). The alternative is to depend on pywin32, which is not part
of the core, or to write a custom C wrapper, which seems to me to be
precisely what we're trying to move away from...

Paul.

From skip at pobox.com  Mon Jun 12 12:34:03 2006
From: skip at pobox.com (skip at pobox.com)
Date: Mon, 12 Jun 2006 05:34:03 -0500
Subject: [Python-Dev] Switch statement
In-Reply-To: <448CBD66.80002@canterbury.ac.nz>
References: <20060610142736.GA19094@21degrees.com.au>
	<17547.19802.361151.705599@montanaro.dyndns.org>
	<448B680A.9020000@canterbury.ac.nz> <448C87E1.6070801@acm.org>
	<448CBD66.80002@canterbury.ac.nz>
Message-ID: <17549.17179.944527.897085@montanaro.dyndns.org>


    Greg> Multiple values could be written

    Greg>    case 'a':
    Greg>    case 'b':
    Greg>    case 'c':
    Greg>      ...

That would be an exception to the rule that a line ending in a colon
introduces an indented block.

Skip

From skip at pobox.com  Mon Jun 12 12:35:20 2006
From: skip at pobox.com (skip at pobox.com)
Date: Mon, 12 Jun 2006 05:35:20 -0500
Subject: [Python-Dev] Switch statement
In-Reply-To: <448CBE21.6070009@canterbury.ac.nz>
References: <20060610142736.GA19094@21degrees.com.au>
	<17547.19802.361151.705599@montanaro.dyndns.org>
	<448B680A.9020000@canterbury.ac.nz> <448C87E1.6070801@acm.org>
	<17548.39643.646760.994634@montanaro.dyndns.org>
	<448CBE21.6070009@canterbury.ac.nz>
Message-ID: <17549.17256.241958.724752@montanaro.dyndns.org>


    Greg> A way out of this would be to define the semantics so that the
    Greg> expression values are allowed to be cached, and the order of
    Greg> evaluation and testing is undefined. So the first time through,
    Greg> the values could all be put in a dict, to be looked up thereafter.

And what if those expressions' values would change if they were evaluated
again after further execution?

Skip
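
Roughly the caching semantics Greg describes, spelled out by hand (a sketch;
compute_a/compute_b and the handlers are hypothetical case expressions):

    _cases = None

    def switch(value):
        global _cases
        if _cases is None:
            # case expressions are evaluated once, on first use, then frozen
            _cases = {compute_a(): handle_a, compute_b(): handle_b}
        return _cases.get(value, handle_default)()

If compute_a() would yield a different value on a later call, the frozen
dict silently keeps dispatching on the stale one, which is Skip's worry.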

From rubys at intertwingly.net  Mon Jun 12 12:49:50 2006
From: rubys at intertwingly.net (Sam Ruby)
Date: Mon, 12 Jun 2006 06:49:50 -0400
Subject: [Python-Dev] sgmllib Comments
In-Reply-To: <448D074A.5030508@v.loewis.de>
References: <448C7C75.70703@intertwingly.net> <448CF665.4010208@v.loewis.de>
	<448CFA9D.9070404@intertwingly.net> <448D074A.5030508@v.loewis.de>
Message-ID: <448D46CE.40405@intertwingly.net>

Martin v. Löwis wrote:
> Sam Ruby wrote:
>> If we can agree on the behavior, I would be glad to write up a patch.
>>
>> It seems to me that the simplest way to proceed would be for the code
>> that attempts to resolve character references (both named and numeric)
>> in attributes to be isolated in a single method.  Subclasses that desire
>> different behavior (including the existing Python 2.4 and prior
>> behaviour) could simply override this method.
> 
> In SGML, this is problematic: the named things are not character
> references, they are entity references, and it isn't necessarily
> the case that they expand to a character. For example, &author;
> might expand to "Martin v. Löwis", and &logo; might refer to a
> bitmap image which is unparsed.
> 
> That said, providing an overridable replacement function sounds
> like the right approach. To keep with tradition, I would still
> distinguish between character references and entity references,
> i.e. provide two overridable functions instead. Returning
> None could mean that no replacement is available.
> 
> As for default implementations, I think they should do what
> currently happens: entity references are replaced according to
> entitydefs, and character references are replaced with bytes if
> they are smaller than 256.
> 
> Contrary to what others said, it appears that SGML *does*
> support hexadecimal character references, provided that
> the SGML declaration contains the HCRO definition (which,
> for HTML and XML, is defined as HCRO "&#38;#x"). So it seems
> safe to process hex character references by default (although
> it isn't safe to assume Unicode, IMO).

I don't see why expanding to multiple characters is a problem.

Just so that we have a tracking number and real code to anchor this 
discussion, I've opened the following and attached a patch:

http://python.org/sf/1504676

This implementation does handle multiple character expansions.  It does 
default to exactly what the current code does.  It does *not* currently 
handle hexadecimal character references.

It also does pass all the current sgmllib tests, though I did not 
include any additional tests in this initial patch.

- Sam Ruby

From bborcic at gmail.com  Mon Jun 12 14:20:48 2006
From: bborcic at gmail.com (Boris Borcic)
Date: Mon, 12 Jun 2006 14:20:48 +0200
Subject: [Python-Dev] Scoping vs augmented assignment vs sets (Re: 'fast
 locals' in Python 2.5)
In-Reply-To: <20060607083940.GA12003@code0.codespeak.net>
References: <9e804ac0606061707w64a5b90pddd62d31bce1e7d6@mail.gmail.com>
	<20060607083940.GA12003@code0.codespeak.net>
Message-ID: <e6jm7e$hnn$1@sea.gmane.org>

Hello,

Armin Rigo wrote:
> Hi,
> 
> On Wed, Jun 07, 2006 at 02:07:48AM +0200, Thomas Wouters wrote:
>> I just submitted http://python.org/sf/1501934 and assigned it to Neal so it
>> doesn't get forgotten before 2.5 goes out ;) It seems Python 2.5 compiles
>> the following code incorrectly:
> 
> No, no, it's an underground move by Jeremy to allow assignment to
> variables of enclosing scopes:
...
> Credits to Samuele's evil side for the ideas.  His non-evil side doesn't
> agree, and neither does mine, of course :-)
...
> More seriously, a function with a variable that is only written to as
> the target of augmented assignments cannot possibly be something else
> than a newcomer's mistake: the augmented assignments will always raise
> UnboundLocalError.

I am not really a newcomer to Python, but lately I find myself regularly
bitten by this compiler behavior, which I tend to view as a (design) bug.
This started happening after I saw that sets are just as good as lists
performance-wise, and I began changing code like this

def solve(problem) :
     freebits = [True for _ in range(N)]
     def search(data) :
         ...
         for b in swaps :
             freebits[b] ^= 1
         ....

to more concise and clearer code like that

def solve(problem) :
     freebits = set(range(N))
     def search(data) :
         ...
         freebits ^= swaps
         ...


At such points, I find it a huge violation of the principle of least surprise
that the compiler refuses to let 'x ^= y' mean 'x.__ixor__(y)' and instead
interprets it as something different, given that this preferred meaning is
certain to cause a misleading runtime error!!!
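
Stripped down, the surprise looks like this (a small sketch):

    def solve():
        freebits = set(range(8))
        def search(swaps):
            freebits ^= swaps    # the assignment makes freebits local to search()
        search(set([1, 2]))      # raises UnboundLocalError at the ^=

    solve()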

<thought_processes>

What ? It *wants* me to write x.__ixor__(y) *spontaneously* ?

I can't believe it ! What's in dir(set) ?

Oh, what's that, set.symmetric_difference_update() !?

Why not set.lets_get_miles_out_of_our_way_to_accomodate_some_evil_theories() ?

Mhh, set.X_update() admits iterables while augmented assignments require other 
sets [a vital difference for sure ;)] so maybe it's just by accident that the 
set API offers a good enough equivalent to x ^= y to escape having to use 
x.__ixor__(y) in similar contexts.

</thought_processes>

Now, reading this thread, where somebody files a patch to "rectify" 2.5a2
behaving sanely on this issue, and somebody else follows up to (jokingly)
argue in favor of sanity while confessing to the latter's evilness, tells me
that some really weird behind-the-looking-glass official theory might indeed
dominate the issue.

Is that theory written down somewhere? Or is it just the manifestation of a
bug in the BDFL's famed time machine? (I am saying this because Guido
recently argued that sets should integrate as if they had been designed into
Python from the beginning, which the above flagrantly contradicts, imho.)

Cheers,

Boris Borcic
--
"On na?t tous les m?tres du m?me monde"


From kristjan at ccpgames.com  Mon Jun 12 15:49:50 2006
From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_V=2E_J=F3nsson?=)
Date: Mon, 12 Jun 2006 13:49:50 -0000
Subject: [Python-Dev] file()
Message-ID: <129CEF95A523704B9D46959C922A280002A4C6B3@nemesis.central.ccp.cc>

I notice that file() throws an IOError when it detects an invalid mode string.  Wouldn't a ValueError be more appropriate?
Kristján
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.python.org/pipermail/python-dev/attachments/20060612/37fd048b/attachment.html 

From rhettinger at ewtllc.com  Mon Jun 12 15:52:18 2006
From: rhettinger at ewtllc.com (Raymond Hettinger)
Date: Mon, 12 Jun 2006 06:52:18 -0700
Subject: [Python-Dev] a note in random.shuffle.__doc__ ...
In-Reply-To: <1CC07EFE-45B2-4AA4-BEFB-0CFFE4CDB1F9@gmail.com>
References: <1CC07EFE-45B2-4AA4-BEFB-0CFFE4CDB1F9@gmail.com>
Message-ID: <448D7192.8070607@ewtllc.com>

Alex Martelli wrote:

>...claims:
>
>Note that for even rather small len(x), the total number of
>permutations of x is larger than the period of most random number
>generators; this implies that "most" permutations of a long
>sequence can never be generated.
>
>Now -- why would the behavior of "most" random number generators be  
>relevant here?  The module's docs claim, for its specific Mersenne  
>Twister generator, a period of 2**19937-1, which is (e.g.) a
>comfortable  
>130128673800676351960752618754658780303412233749552410245124492452914636 
>028095467780746435724876612802011164778042889281426609505759158196749438 
>742986040468247017174321241233929215223326801091468184945617565998894057 
>859403269022650639413550466514556014961826309062543 times larger than  
>the number of permutations of 2000 items, which doesn't really feel  
>to me like a "rather small len(x)" in this context (what I'm most  
>often shuffling is just a pack of cards -- len(x)==52 -- for example).
>
>I suspect that the note is just a fossil from a time when the default  
>random number generator was Wichmann-Hill, with a much shorter
>period.  Should this note just be removed, or instead somehow  
>reworded to point out that this is not in fact a problem for the  
>module's current default random number generator?  Opinions welcome!
>  
>
I think the note is still useful, but the "rather small" wording
should be replaced by something more precise (such as the
value of n=len(x) where n! > 2**19937).
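
That threshold is cheap to compute (a quick sketch, 2.x print syntax):

    period = 2 ** 19937 - 1
    fact, n = 1, 0
    while fact <= period:
        n += 1
        fact *= n
    print n    # smallest len(x) for which n! exceeds the Mersenne Twister period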



Raymond


From guido at python.org  Mon Jun 12 16:11:53 2006
From: guido at python.org (Guido van Rossum)
Date: Mon, 12 Jun 2006 07:11:53 -0700
Subject: [Python-Dev] file()
In-Reply-To: <129CEF95A523704B9D46959C922A280002A4C6B3@nemesis.central.ccp.cc>
References: <129CEF95A523704B9D46959C922A280002A4C6B3@nemesis.central.ccp.cc>
Message-ID: <ca471dc20606120711r2ca09fbbk36133c1c6fc8770a@mail.gmail.com>

Yup, although it's a change in behavior that would need to be studied
carefully for backwards incompatibilities. Usually the mode is given as a
constant, so there won't be any problems; but there might be code that
receives a mode string and attempts to test its validity by trying it and
catching IOError. Such code would have to be changed.
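
The kind of code that would notice the change (a hypothetical sketch):

    def open_or_default(name, mode):
        try:
            return open(name, mode)
        except IOError:               # today a bad mode string lands here too;
            return open(name, 'rb')   # with ValueError it would propagate instead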

--Guido

On 6/12/06, Kristján V. Jónsson <kristjan at ccpgames.com> wrote:
> I notice that file() throws an IOError when it detects an invalid mode
> string.  Wouldn't a ValueError be more appropriate?

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From martin at v.loewis.de  Mon Jun 12 17:29:06 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Mon, 12 Jun 2006 17:29:06 +0200
Subject: [Python-Dev] sgmllib Comments
In-Reply-To: <448D46CE.40405@intertwingly.net>
References: <448C7C75.70703@intertwingly.net> <448CF665.4010208@v.loewis.de>
	<448CFA9D.9070404@intertwingly.net> <448D074A.5030508@v.loewis.de>
	<448D46CE.40405@intertwingly.net>
Message-ID: <448D8842.5040004@v.loewis.de>

Sam Ruby wrote:
> I don't see why expanding to multiple characters is a problem.

That isn't a problem. Expanding to unparsed entities is. So the
current call to handle_entityref must remain.

Regards,
Martin

From pje at telecommunity.com  Mon Jun 12 01:59:21 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Sun, 11 Jun 2006 19:59:21 -0400
Subject: [Python-Dev] Switch statement
In-Reply-To: <e6i6d1$g40$1@sea.gmane.org>
References: <448C87E1.6070801@acm.org>
	<20060610142736.GA19094@21degrees.com.au>
	<17547.19802.361151.705599@montanaro.dyndns.org>
	<448B680A.9020000@canterbury.ac.nz> <448C87E1.6070801@acm.org>
Message-ID: <5.1.1.6.0.20060611195557.037e1d88@sparrow.telecommunity.com>

At 12:44 AM 6/12/2006 +0200, Fredrik Lundh wrote:
>the compiler can of course figure that out also for if/elif/else state-
>ments, by inspecting the AST.  the only advantage for switch/case is
>user syntax...

Not quite true - you'd have to restrict the switch expression in some way, 
so you don't have:

    if x.y == 1:
       ...
    elif x.y == 2:
       ...

where the compiler doesn't know if getattr(x,'y') is really supposed to 
happen more than once.  But I suppose you could class that as syntax.


From pje at telecommunity.com  Mon Jun 12 05:55:57 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Sun, 11 Jun 2006 23:55:57 -0400
Subject: [Python-Dev] Please stop changing wsgiref on the trunk
Message-ID: <5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com>

As requested in PEP 360, please inform me of any issues you find so they 
can be corrected in the standalone package and merged back to the trunk.

I just wasted time cutting an 0.1.1 release of the standalone wsgiref 
package only to find that it doesn't correspond to any particular point in 
the trunk, because people made changes without contacting me or the 
Web-SIG.  I then spent a bunch more time figuring out how to get the 
changes out and merge them back in to the standalone version such that the 
Python trunk has a specific version number of wsgiref.  Please don't do 
this again.

I appreciate the help finding bugs, but I'll probably still be maintaining 
the standalone version of wsgiref for a few years yet.


From pje at telecommunity.com  Mon Jun 12 03:55:04 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Sun, 11 Jun 2006 21:55:04 -0400
Subject: [Python-Dev] UUID module
In-Reply-To: <Pine.LNX.4.58.0606111918210.5223@server1.LFW.org>
References: <5.1.1.6.0.20060610123421.01f62e60@mail.telecommunity.com>
	<e6eorl$o0d$1@sea.gmane.org>
	<5.1.1.6.0.20060307103122.042c4ae8@mail.telecommunity.com>
	<Pine.LNX.4.58.0606090221330.5223@server1.LFW.org>
	<e6baf3$8fl$1@sea.gmane.org>
	<9e804ac0606090626o20008650o93a211da5e6f6f64@mail.gmail.com>
	<Pine.LNX.4.58.0606100804170.5223@server1.LFW.org>
	<e6enei$kpo$1@sea.gmane.org>
	<Pine.LNX.4.58.0606101016530.5223@server1.LFW.org>
	<e6eorl$o0d$1@sea.gmane.org>
	<5.1.1.6.0.20060610123421.01f62e60@mail.telecommunity.com>
Message-ID: <5.1.1.6.0.20060611215139.01e7ced8@sparrow.telecommunity.com>

At 07:24 PM 6/11/2006 -0500, Ka-Ping Yee wrote:
>Thomas Heller wrote:
> > I don't know if this is the uuidgen you're talking about, but
> > on linux there is libuuid:
>
>Thanks!
>
>Okay, that's in there now.  Have a look at http://zesty.ca/python/uuid.py .
>
>Phillip J. Eby wrote:
> > By the way, I'd love to see a uuid.uuid() constructor that simply calls the
> > platform-specific default UUID constructor (CoCreateGuid or uuidgen(2)),
>
>I've added code to make uuid1() use uuid_generate_time() if available
>and uuid4() use uuid_generate_random() if available.  These functions
>are provided on Mac OS X (in libc) and on Linux (in libuuid).  Does
>that work for you?

Sure - but actually my main point was to have a uuid() call you could use 
to just get whatever the platform's preferred form of GUID is, without 
having to pick what *type* you want.

The idea being that there should be some call you can make that will always 
give you something reasonably unique, without being overspecified as to the 
type of uuid.  That way, people can be told to use uuid.uuid() to get 
unique IDs for use in their programs, without having to get into what types 
of UUIDs do what.

Perhaps that isn't feasible, or is a bad idea for some other reason, but my 
main point was to have a call that means "get me a good unique ID".  :)


From pje at telecommunity.com  Mon Jun 12 06:00:01 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Mon, 12 Jun 2006 00:00:01 -0400
Subject: [Python-Dev] FYI: wsgiref is now checked in
In-Reply-To: <1f7befae0606101222j7be21daas987e7f24bf5e1445@mail.gmail.com>
References: <5.1.1.6.0.20060609124432.02e7c348@mail.telecommunity.com>
	<5.1.1.6.0.20060609124432.02e7c348@mail.telecommunity.com>
Message-ID: <5.1.1.6.0.20060611235632.03753c30@sparrow.telecommunity.com>

At 03:22 PM 6/10/2006 -0400, Tim Peters wrote:
>This may be because compare_generic_iter() uses `assert` statements,
>and those vanish under -O.  If so, a test shouldn't normally use
>`assert`.  On rare occasions it's appropriate, like test_struct's:
>
>             if x < 0:
>                 expected += 1L << self.bitsize
>                 assert expected > 0
>
>That isn't testing any of struct's functionality, it's documenting and
>verifying a fundamental _belief_ of the test author's:  the test
>itself is buggy if that assert ever triggers.  Or, IOW, it's being
>used for what an assert statement should be used for :-)

Thanks for the bug report; I've fixed these problems in the standalone 
version (0.1.2 on the cheeseshop) and in the Python 2.5 trunk.

Web-SIG folks take note: wsgiref.validate is based on paste.lint, so 
paste.lint has the same problem.  That is, errors won't be raised if the 
code is run with -O.
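
A tiny illustration of the failure mode (sketch):

    # run with "python -O" and this check silently disappears
    def check_iterator(it):
        assert hasattr(it, 'next'), "object does not look like an iterator"

    # inside a unittest.TestCase, self.failUnless(...) survives -O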

As a side effect of fixing the problems,  I found that some of the 
wsgiref.validate (aka paste.lint) asserts have improperly computed 
messages.  Instead of getting an explanation of the problem, you'll instead 
get a different error at the assert.  I fixed these in wsgiref.validate, 
but the underlying problems presumably still exist in paste.lint.


From guido at python.org  Mon Jun 12 18:04:53 2006
From: guido at python.org (Guido van Rossum)
Date: Mon, 12 Jun 2006 09:04:53 -0700
Subject: [Python-Dev] Please stop changing wsgiref on the trunk
In-Reply-To: <5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com>
References: <5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com>
Message-ID: <ca471dc20606120904l60a4df90k96cdb4095518ac57@mail.gmail.com>

On 6/11/06, Phillip J. Eby <pje at telecommunity.com> wrote:
> As requested in PEP 360, please inform me of any issues you find so they
> can be corrected in the standalone package and merged back to the trunk.
>
> I just wasted time cutting an 0.1.1 release of the standalone wsgiref
> package only to find that it doesn't correspond to any particular point in
> the trunk, because people made changes without contacting me or the
> Web-SIG.  I then spent a bunch more time figuring out how to get the
> changes out and merge them back in to the standalone version such that the
> Python trunk has a specific version number of wsgiref.  Please don't do
> this again.
>
> I appreciate the help finding bugs, but I'll probably still be maintaining
> the standalone version of wsgiref for a few years yet.

Phillip, Please consider the burden for Python developers who have to
remember who owns what code. I know we have PEP 360, but the more I
think about it, the more I believe it's unreasonable to have growing
amounts of code in the Python source tree that cannot be modified by
Python developers who are taking the usual caution (understanding the
code, running unit tests, being aware of backwards compatibility
requirements, etc.).

Once code is checked into Python's source tree, change control should
ideally rest with the Python developers collectively, not with one
particular "owner".

IOW I think PEP 360 is an unfortunate historic accident, and we would
be better off without it. I propose that we don't add to it going
forward, and that we try to get rid of it as we can.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From pje at telecommunity.com  Mon Jun 12 18:29:48 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Mon, 12 Jun 2006 12:29:48 -0400
Subject: [Python-Dev] Please stop changing wsgiref on the trunk
In-Reply-To: <ca471dc20606120904l60a4df90k96cdb4095518ac57@mail.gmail.com>
References: <5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com>
Message-ID: <5.1.1.6.0.20060612121658.03255358@sparrow.telecommunity.com>

At 09:04 AM 6/12/2006 -0700, Guido van Rossum wrote:
>IOW I think PEP 360 is an unfortunate historic accident, and we would
>be better off without it. I propose that we don't add to it going
>forward, and that we try to get rid of it as we can.

4 of the 6 modules in PEP 360 were added to Python in 2.5, so if you want 
to get rid of it, *now* would be the time.

There is an approach that would address this issue and others relating to 
external packages, but it would require changes to how Python is 
built.  That is, I would propose a directory to contain 
externally-maintained packages, each with their own setup.py.  These 
packages could be built and installed with Python, but would then also be 
separately-distributable.

Alternately, such packages could be done using svn:externals tied to 
specific release versions of the external packages.

This idea would address the needs of external maintainers (having a single 
release history) while still allowing Python developers to modify the code 
(if the external package is in Python's SVN repository).


From martin at v.loewis.de  Mon Jun 12 18:32:03 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Mon, 12 Jun 2006 18:32:03 +0200
Subject: [Python-Dev] UUID module
In-Reply-To: <Pine.LNX.4.58.0606120347110.5223@server1.LFW.org>
References: <5.1.1.6.0.20060307103122.042c4ae8@mail.telecommunity.com>	<Pine.LNX.4.58.0606090221330.5223@server1.LFW.org>	<020a01c68d5b$f2130170$3db72997@bagio>
	<Pine.LNX.4.58.0606120347110.5223@server1.LFW.org>
Message-ID: <448D9703.4050900@v.loewis.de>

Ka-Ping Yee wrote:
> I'd like to, but i don't want to use a method for finding the system
> directory that depends on ctypes.  Is there a more general way?

I think

os.path.join(os.environ["SystemRoot"], "system32")

should be fairly reliable. If people are worried that the directory
might not be system32, then

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Windows\SystemDirectory

gives you the value (although I could not find out how to expand
REG_EXPAND_SZ keys with _winreg).
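
An untested sketch of the registry route; os.path.expandvars happens to
expand %SystemRoot% on Windows, which may be enough for the REG_EXPAND_SZ
value, and the environment variable serves as the fallback:

    import os, _winreg

    def system_directory():
        try:
            key = _winreg.OpenKey(_winreg.HKEY_LOCAL_MACHINE,
                                  r"SYSTEM\CurrentControlSet\Control\Windows")
            value, kind = _winreg.QueryValueEx(key, "SystemDirectory")
            return os.path.expandvars(value)
        except EnvironmentError:
            return os.path.join(os.environ["SystemRoot"], "system32")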

Regards,
Martin

From martin at v.loewis.de  Mon Jun 12 18:42:57 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Mon, 12 Jun 2006 18:42:57 +0200
Subject: [Python-Dev] beta1 coming real soon
In-Reply-To: <448D3A41.3090402@livinglogic.de>
References: <005101c68bad$cfd9ac30$0400a8c0@whiterabc2znlh>	<448953EA.9080006@livinglogic.de>	<003f01c68c40$8360c2b0$0400a8c0@whiterabc2znlh>
	<448D3A41.3090402@livinglogic.de>
Message-ID: <448D9991.1050601@v.loewis.de>

Walter Dörwald wrote:
>>> The best way to throughly test the patch is of course to check it in. ;)
>> Is it too risky? ;)
> 
> At least I'd like to get a second review of the patch.

I've reviewed it, and am likely to check it in. I notice that the
patch still has problems. In particular, it is limited to "DBCS"
(and SBCS) character sets in the strict sense; general "MBCS"
character sets are not supported. There are a few of these, most
notably the ISO-2022 ones, UTF-8, and GB18030 (can't be bothered
to look up the code page numbers for them right now).

What I don't know is whether any Windows locale uses a "true"
MBCS character set as its "ANSI" code page.

The approach taken in the patch could be extended to GB18030 and
UTF-8 in principle, but can't possibly work for ISO-2022.

Regards,
Martin

From guido at python.org  Mon Jun 12 18:43:47 2006
From: guido at python.org (Guido van Rossum)
Date: Mon, 12 Jun 2006 09:43:47 -0700
Subject: [Python-Dev] Please stop changing wsgiref on the trunk
In-Reply-To: <5.1.1.6.0.20060612121658.03255358@sparrow.telecommunity.com>
References: <5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612121658.03255358@sparrow.telecommunity.com>
Message-ID: <ca471dc20606120943o3f9d97ccle62db7539edec1c1@mail.gmail.com>

On 6/12/06, Phillip J. Eby <pje at telecommunity.com> wrote:
> At 09:04 AM 6/12/2006 -0700, Guido van Rossum wrote:
> >IOW I think PEP 360 is an unfortunate historic accident, and we would
> >be better off without it. I propose that we don't add to it going
> >forward, and that we try to get rid of it as we can.
>
> 4 of the 6 modules in PEP 360 were added to Python in 2.5, so if you want
> to get rid of it, *now* would be the time.

I'm all for it.

While I am an enthusiastic supporter of several of those additions, I
am *not* in favor of the special status granted to software
contributed by certain developers, since it is a burden for all other
developers.

> There is an approach that would address this issue and others relating to
> external packages, but it would require changes to how Python is
> built.  That is, I would propose a directory to contain
> externally-maintained packages, each with their own setup.py.  These
> packages could be built and installed with Python, but would then also be
> separately-distributable.
>
> Alternately, such packages could be done using svn:externals tied to
> specific release versions of the external packages.
>
> This idea would address the needs of external maintainers (having a single
> release history) while still allowing Python developers to modify the code
> (if the external package is in Python's SVN repository).

Even that is a burden on regular Python developers. For example, when
I do "svn up" in the PEPS directory, which has such an arrangement for
the docutils, it usually spends (much) more time deciding that there's
nothing new in the docutils than it spends on the rest of the update.
I also suspect that the external linking will continue to cause a
burden for Python developers -- upgrading to a newer version of the
external package would require making sure that no changes made by
Python developers in the previous release bundle are lost in the new
release bundle.

I personally think that, going forward, external maintainers should
not be granted privileges such as are being granted by PEP 360, and an
inclusion of a package in the Python tree should be considered a
"fork" for all practical purposes. If an external developer is not
okay with such an arrangement, they shouldn't contribute.

Note: I'm saying "going forward". I'm not saying that this "tough
luck" policy should be applied to the packages that have already been
accepted; I don't want the PSF to break its word. Although I'd
encourage their authors to loosen up.

Perhaps issues like these should motivate us to consider a different
source control tool. There's a new crop of tools out that could solve
this by having multiple repositories that can be sync'ed with each
other. This sounds like an important move towards world peace!

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Mon Jun 12 18:47:30 2006
From: guido at python.org (Guido van Rossum)
Date: Mon, 12 Jun 2006 09:47:30 -0700
Subject: [Python-Dev] 2.5 issues need resolving in a few days
In-Reply-To: <448CF735.7000404@v.loewis.de>
References: <ee2a432c0606082333o44b7465bk45b686ca83ccc802@mail.gmail.com>
	<448CF735.7000404@v.loewis.de>
Message-ID: <ca471dc20606120947r2b0940berf7a0c806b607dd4e@mail.gmail.com>

PyXML appears pretty stable (in terms of release frequency -- I have
no opinion on the code quality :-). Perhaps it could just be
incorporated into the Python svn tree, if the various owners are
willing to sign a contributor statement?

--Guido

On 6/11/06, "Martin v. L?wis" <martin at v.loewis.de> wrote:
> Neal Norwitz wrote:
> > The most important outstanding issue is the xmlplus/xmlcore issue.
> > It's not going to get fixed unless someone works on it.  There's only
> > a few days left before beta 1.  Can someone please address this?
>
> From my point of view, I shall consider them resolved/irrelevant:
> I'm going to step down as a PyXML maintainer, so I don't have to
> worry anymore about how to maintain PyXML. If PyXML then gets
> unmaintained, the problem goes away, otherwise, the new maintainer
> will have to find a solution.
>
> Regards,
> Martin
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org
>


-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Mon Jun 12 18:48:45 2006
From: guido at python.org (Guido van Rossum)
Date: Mon, 12 Jun 2006 09:48:45 -0700
Subject: [Python-Dev] Import semantics
In-Reply-To: <448D1F0D.7000405@strakt.com>
References: <cfb578b20606111531t6806d5c9kd35fd8ba29638174@mail.gmail.com>
	<448D1F0D.7000405@strakt.com>
Message-ID: <ca471dc20606120948h119f1c1fw2725e7d434e287df@mail.gmail.com>

On 6/12/06, Samuele Pedroni <pedronis at strakt.com> wrote:
> Fabio Zadrozny wrote:
> > Python and Jython import semantics differ on how sub-packages should be
> > accessed after importing some module:
> >
> > Jython 2.1 on java1.5.0 (JIT: null)
> > Type "copyright", "credits" or "license" for more information.
> >  >>> import xml
> >  >>> xml.dom
> > <module xml.dom at 10340434>
> >
> > Python 2.4.2 (#67, Sep 28 2005, 12:41:11) [MSC v.1310 32 bit (Intel)] on
> > win32
> > Type "help", "copyright", "credits" or "license" for more information.
> >  >>> import xml
> >  >>> xml.dom
> > Traceback (most recent call last):
> >   File "<stdin>", line 1, in ?
> > AttributeError: 'module' object has no attribute 'dom'
> >  >>> from xml.dom import pulldom
> >  >>> xml.dom
> > <module 'xml.dom' from 'C:\bin\Python24\lib\xml\dom\__init__.pyc'>
> >
> > Note that in Jython importing a module makes all subpackages beneath it
> > available, whereas in python, only the tokens available in __init__.py
> > are accessible, but if you do load the module later even if not getting
> > it directly into the namespace, it gets accessible too -- this seems
> > more like something unexpected to me -- I would expect it to be
> > available only if I did some "import xml.dom" at some point.
> >
> > My problem is that in Pydev, in static analysis, I would only get the
> > tokens available for actually imported modules, but that's not true for
> > Jython, and I'm not sure if the current behaviour in Python was expected.
> >
> > So... which would be the right semantics for this?
>
> the difference in Jython is deliberate. I think the reason was to mimic
> more the Java style for this, in java fully qualified names always work.
> In jython importing the top level packages is enough to get a similar
> effect.
>
> This is unlikely to change for backward compatibility reasons, at least
> from my POV.

IMO it should do this only if the imported module is really a Java
package. If it's a Python package it should stick to python semantics
if possible.
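
The CPython behaviour Fabio describes is observable via sys.modules, which
is also what a tool like Pydev could check instead of relying on attribute
access (sketch, assuming a fresh interpreter where nothing has imported
xml.dom yet):

    import sys, xml
    print hasattr(xml, "dom"), "xml.dom" in sys.modules   # False False
    from xml.dom import pulldom
    print hasattr(xml, "dom"), "xml.dom" in sys.modules   # True True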

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Mon Jun 12 18:57:03 2006
From: guido at python.org (Guido van Rossum)
Date: Mon, 12 Jun 2006 09:57:03 -0700
Subject: [Python-Dev] Should hex() yield 'L' suffix for long numbers?
In-Reply-To: <1f7befae0606112011y119f8f7ds91f753df01884ee4@mail.gmail.com>
References: <Pine.LNX.4.58.0606111925410.5223@server1.LFW.org>
	<1f7befae0606112011y119f8f7ds91f753df01884ee4@mail.gmail.com>
Message-ID: <ca471dc20606120957t223a1d44o78b15e401177b60d@mail.gmail.com>

Here's how I interpret PEP 237. Some changes to hex() and oct() are
warned about in B1 and to be implemented in B2. But I'm pretty sure
that was about the treatment of negative numbers, not about the
trailing 'L'. I believe the PEP authors overlooked the trailing 'L'
for hex() and oct(). I think they should be considered just as sticky
as the trailing 'L' for repr().
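
For what it's worth, %-formatting already gives the bare digits in 2.x,
with neither the '0x' prefix nor the trailing 'L':

    >>> hex(9999999999999)
    '0x9184e729fffL'
    >>> '%x' % 9999999999999
    '9184e729fff'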

--Guido

On 6/11/06, Tim Peters <tim.peters at gmail.com> wrote:
> [Ka-Ping Yee]
> > I did this earlier:
> >
> >     >>> hex(9999999999999)
> >     '0x9184e729fffL'
> >
> > and found it a little jarring, because i feel there's been a general
> > trend toward getting rid of the 'L' suffix in Python.
> >
> > Literal long integers don't need an L anymore; they're automatically
> > made into longs if the number is too big.  And while the repr() of
> > a long retains the L on the end, the str() of a long does not, and
> > i rather like that.
> >
> > So i kind of expected that hex() would not include the L either.
> > I see its main job as just giving me the hex digits (in fact, for
> > Python 3000 i'd prefer even to drop the '0x' as well), and the L
> > seems superfluous and distracting.
> >
> > What do you think?  Is Python 2.5 a reasonable time to drop this L?
>
> As I read pep 237, that should have happened in Python 2.3 or 2.4.
> This specific case is kinda muddy there.  Regardless, the only part
> that was left for Python 3 was "phase C", and this is phase C in its
> entirety:
>
>  C. The trailing 'L' is dropped from repr(), and made illegal on
>        input.  (If possible, the 'long' type completely disappears.)
>
> It's possible, though, that hex() and oct() were implicitly considered
> to be variants of repr() for purposes of phase C.  How much are we
> willing to pay Guido to Pronounce?
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org
>


-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From martin at v.loewis.de  Mon Jun 12 19:04:33 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Mon, 12 Jun 2006 19:04:33 +0200
Subject: [Python-Dev] Dropping externally maintained packages (Was:
 Please stop changing wsgiref on the trunk)
In-Reply-To: <ca471dc20606120943o3f9d97ccle62db7539edec1c1@mail.gmail.com>
References: <5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com>	<5.1.1.6.0.20060612121658.03255358@sparrow.telecommunity.com>
	<ca471dc20606120943o3f9d97ccle62db7539edec1c1@mail.gmail.com>
Message-ID: <448D9EA1.9000209@v.loewis.de>

Guido van Rossum wrote:
>> 4 of the 6 modules in PEP 360 were added to Python in 2.5, so if you want
>> to get rid of it, *now* would be the time.
> 
> I'm all for it.
> 
> While I am an enthusiastic supporter of several of those additions, I
> am *not* in favor of the special status granted to software
> contributed by certain developers, since it is a burden for all other
> developers.

Then I guess we should deal with this before 2.5b1, and delay 2.5b1 until the
status of each of these has been clarified.

Each maintainer should indicate whether he is happy with a "this is
part of Python" approach. If so, the entry should be removed from PEP
360 (*); if not, the code should be removed from Python before beta 1.

Speaking with some authority for Expat, I'd be happy to have it removed
from PEP 360.

Regards,
Martin

(*) Alternatively, this PEP could be given purely informational status,
i.e. with a section "no requirements on Python maintainers". Then, Expat
should be moved to that section: people can happily make changes to
expat, and whoever synchronizes it with the external source must make
sure these changes get carried over.

From pje at telecommunity.com  Mon Jun 12 19:08:52 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Mon, 12 Jun 2006 13:08:52 -0400
Subject: [Python-Dev] External Package Maintenance (was Re: Please stop
 changing wsgiref on the trunk)
In-Reply-To: <ca471dc20606120943o3f9d97ccle62db7539edec1c1@mail.gmail.co
 m>
References: <5.1.1.6.0.20060612121658.03255358@sparrow.telecommunity.com>
	<5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612121658.03255358@sparrow.telecommunity.com>
Message-ID: <5.1.1.6.0.20060612125445.01fd9258@sparrow.telecommunity.com>

At 09:43 AM 6/12/2006 -0700, Guido van Rossum wrote:
>On 6/12/06, Phillip J. Eby <pje at telecommunity.com> wrote:
>>At 09:04 AM 6/12/2006 -0700, Guido van Rossum wrote:
>> >IOW I think PEP 360 is an unfortunate historic accident, and we would
>> >be better off without it. I propose that we don't add to it going
>> >forward, and that we try to get rid of it as we can.
>>
>>4 of the 6 modules in PEP 360 were added to Python in 2.5, so if you want
>>to get rid of it, *now* would be the time.
>
>I'm all for it.
>
>While I am an enthusiastic supporter of several of those additions, I
>am *not* in favor of the special status granted to software
>contributed by certain developers, since it is a burden for all other
>developers.

While I won't claim to speak for the other authors, I would guess that they 
have the same reason for wanting that status as I do: to be able to 
maintain an external release for their existing users with older versions 
of Python, until Python-in-the-field catches up with Python-in-development.

Right now, the effective industry-deployed version of Python is 2.3 - maybe 
2.2 if you have a lot of infrastructure in Python, and 2.1 if you support Jython.


>I also suspect that the external linking will continue to cause a
>burden for Python developers -- upgrading to a newer version of the
>external package would require making sure that no changes made by
>Python developers in the previous release bundle are lost in the new
>release bundle.

I'd be willing to live with e.g. moving wsgiref to an Externals/wsgiref 
subdirectory of the main Python tree, *without* svn:externals, and simply 
bumping its version number in that directory to issue snapshots.

This would be no different from the current situation (in terms of svn 
usage for core developers), except that I could go to one directory to get 
an "svn log" and review what other people did to the code and docs.  Right 
now, I've got to track at least three different directories to know what 
somebody did to wsgiref in the core.


>I personally think that, going forward, external maintainers should
>not be granted privileges such as are being granted by PEP 360, and an
>inclusion of a package in the Python tree should be considered a
>"fork" for all practical purposes. If an external developer is not
>okay with such an arrangement, they shouldn't contribute.

This is going to make it tougher to get good contributions, where "good" 
means "has existing users and a maintainer committed to supporting them".


>Perhaps issues like these should motivate us to consider a different
>source control tool. There's a new crop of tools out that could solve
>this by having multiple repositories that can be sync'ed with each
>other. This sounds like an important move towards world peace!

First we'd need to make Python's build process support building external 
libraries in the first place.  If we did that, we could solve the problem 
in SVN right now, as long as maintainers were willing to move their 
project's main repository to Python's repository.

If I understand correctly, the main thing it would require is that Python's 
setup.py invoke all the Externals/*/setup.py files.
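
A rough sketch of what that top-level hook could look like (the
Externals/ layout is the hypothetical one proposed above):

    import glob, os
    from distutils.core import run_setup

    def build_externals(top="Externals"):
        # Externals/<package>/setup.py, one per bundled external package
        for script in glob.glob(os.path.join(top, "*", "setup.py")):
            pkg_dir = os.path.dirname(script)
            old_cwd = os.getcwd()
            os.chdir(pkg_dir)      # most setup scripts assume they run in place
            try:
                run_setup("setup.py", ["build"])
            finally:
                os.chdir(old_cwd)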


From mal at egenix.com  Mon Jun 12 19:30:55 2006
From: mal at egenix.com (M.-A. Lemburg)
Date: Mon, 12 Jun 2006 19:30:55 +0200
Subject: [Python-Dev] Switch statement
In-Reply-To: <20060610142736.GA19094@21degrees.com.au>
References: <20060610142736.GA19094@21degrees.com.au>
Message-ID: <448DA4CF.80907@egenix.com>

Thomas Lee wrote:
> Hi all,
> 
> As the subject of this e-mail says, the attached patch adds a "switch"
> statement to the Python language.
> 
> However, I've been reading through PEP 275 and it seems that the PEP
> calls for a new opcode - SWITCH - to be added to support the new
> construct.
> 
> I got a bit lost as to why the SWITCH opcode is necessary for the
> implementation of the PEP. The reasoning seems to be
> improving performance, but I'm not sure how a new opcode could improve
> performance.
> 
> Anybody care to take the time to explain this to me, perhaps within the
> context of my patch?

Could you upload your patch to SourceForge ? Then I could add
it to the PEP.

Thomas wrote a patch which implemented the switch statement
using an opcode. The reason was probably that switch works
a lot like e.g. the for-loop which also opens a new block.

Could you explain how your patch works ?

BTW, I think this part doesn't belong in the patch:

> Index: Lib/distutils/extension.py
> ===================================================================
> --- Lib/distutils/extension.py	(revision 46818)
> +++ Lib/distutils/extension.py	(working copy)
> @@ -185,31 +185,31 @@
>                  continue
>  
>              suffix = os.path.splitext(word)[1]
> -            switch = word[0:2] ; value = word[2:]
> +            switch_word = word[0:2] ; value = word[2:]

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 12 2006)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________
2006-07-03: EuroPython 2006, CERN, Switzerland              20 days left

::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! ::::

From fredrik at pythonware.com  Mon Jun 12 19:40:18 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Mon, 12 Jun 2006 19:40:18 +0200
Subject: [Python-Dev] Dropping externally maintained packages (Was:
	Please stop changing wsgiref on the trunk)
References: <5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com>	<5.1.1.6.0.20060612121658.03255358@sparrow.telecommunity.com><ca471dc20606120943o3f9d97ccle62db7539edec1c1@mail.gmail.com>
	<448D9EA1.9000209@v.loewis.de>
Message-ID: <e6k8ug$p9q$1@sea.gmane.org>

Martin v. Löwis wrote:

> Then I guess we should deal with before 2.5b1, and delay 2.5b1 until the
> status of each of these has been clarified.
>
> Each maintainer should indicate whether he is happy with a "this is
> part of Python" approach. If so, the entry should be removed from PEP
> 360 (*); if not, the code should be removed from Python before beta 1.

is this "we don't give a fuck about 3rd party developers and users" attitude
really representative of the python-dev community and the PSF ?

</F> 




From guido at python.org  Mon Jun 12 19:42:44 2006
From: guido at python.org (Guido van Rossum)
Date: Mon, 12 Jun 2006 10:42:44 -0700
Subject: [Python-Dev] External Package Maintenance (was Re: Please stop
	changing wsgiref on the trunk)
In-Reply-To: <5.1.1.6.0.20060612125445.01fd9258@sparrow.telecommunity.com>
References: <5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612121658.03255358@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612125445.01fd9258@sparrow.telecommunity.com>
Message-ID: <ca471dc20606121042s77c0708fr54f9cea4d30772e9@mail.gmail.com>

On 6/12/06, Phillip J. Eby <pje at telecommunity.com> wrote:
> At 09:43 AM 6/12/2006 -0700, Guido van Rossum wrote:
> >On 6/12/06, Phillip J. Eby <pje at telecommunity.com> wrote:
> >>At 09:04 AM 6/12/2006 -0700, Guido van Rossum wrote:
> >> >IOW I think PEP 360 is an unfortunate historic accident, and we would
> >> >be better off without it. I propose that we don't add to it going
> >> >forward, and that we try to get rid of it as we can.
> >>
> >>4 of the 6 modules in PEP 360 were added to Python in 2.5, so if you want
> >>to get rid of it, *now* would be the time.
> >
> >I'm all for it.
> >
> >While I am an enthusiastic supporter of several of those additions, I
> >am *not* in favor of the special status granted to software
> >contributed by certain developers, since it is a burden for all other
> >developers.
>
> While I won't claim to speak for the other authors, I would guess that they
> have the same reason for wanting that status as I do: to be able to
> maintain an external release for their existing users with older versions
> of Python, until Python-in-the-field catches up with Python-in-development.
>
> Right now, the effective industry-deployed version of Python is 2.3 - maybe
> 2.2 if you have a lot infrastructure in Python, and 2.1 if you support Jython.

Sure, but this doesn't require the draconian "I-and-I-only own the
code" approach that you have. There's more code in the source tree
that is also distributed externally -- see the table in PEP 291.

> >I also suspect that the external linking will continue to cause a
> >burden for Python developers -- upgrading to a newer version of the
> >external package would require making sure that no changes made by
> >Python developers in the previous release bundle are lost in the new
> >release bundle.
>
> I'd be willing to live with e.g. moving wsgiref to an Externals/wsgiref
> subdirectory of the main Python tree, *without* svn:externals, and simply
> bumping its version number in that directory to issue snapshots.
>
> This would be no different from the current situation (in terms of svn
> usage for core developers), except that I could go to one directory to get
> an "svn log" and review what other people did to the code and docs.  Right
> now, I've got to track at least three different directories to know what
> somebody did to wsgiref in the core.

And is that such a big deal? Now that wsgiref is being distributed
with Python 2.5, it shouldn't evolve at a much faster pace than Python
2.5, otherwise it would defeat the purpose of having it in 2.5. (And
isn't it just a reference implementation? Why would it evolve at all?)

> >I personally think that, going forward, external maintainers should
> >not be granted privileges such as are being granted by PEP 360, and an
> >inclusion of a package in the Python tree should be considered a
> >"fork" for all practical purposes. If an external developer is not
> >okay with such an arrangement, they shouldn't contribute.
>
> This is going to make it tougher to get good contributions, where "good"
> means "has existing users and a maintainer committed to supporting them".

To which I say, "fine". From the Python core maintainers' POV, more
standard library code is just more of a maintenance burden. Maybe we
should get serious about slimming down the core distribution and
having a separate group of people maintain sumo bundles containing
Python and lots of other stuff. Or maybe we don't even need the
latter, if the download technology improves enough (eggs should make
it easier for people to download and install extra stuff in a pinch).

> >Perhaps issues like these should motivate us to consider a different
> >source control tool. There's a new crop of tools out that could solve
> >this by having multiple repositories that can be sync'ed with each
> >other. This sounds like an important move towards world peace!
>
> First we'd need to make Python's build process support building external
> libraries in the first place.  If we did that, we could solve the problem
> in SVN right now, as long as maintainers were willing to move their
> project's main repository to Python's repository.
>
> If I understand correctly, the main thing it would require is that Python's
> setup.py invoke all the Externals/*/setup.py files.

I guess that's one way of doing it. But perhaps Python's setup.py
should not bother at all, and this is up to the users.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From barry at python.org  Mon Jun 12 19:42:37 2006
From: barry at python.org (Barry Warsaw)
Date: Mon, 12 Jun 2006 13:42:37 -0400
Subject: [Python-Dev] External Package Maintenance (was Re: Please stop
 changing wsgiref on the trunk)
In-Reply-To: <5.1.1.6.0.20060612125445.01fd9258@sparrow.telecommunity.com>
References: <5.1.1.6.0.20060612121658.03255358@sparrow.telecommunity.com>
	<5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612121658.03255358@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612125445.01fd9258@sparrow.telecommunity.com>
Message-ID: <20060612134237.0e8500cd@resist.wooz.org>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On Mon, 12 Jun 2006 13:08:52 -0400
"Phillip J. Eby" <pje at telecommunity.com> wrote:

> While I won't claim to speak for the other authors, I would guess
> that they have the same reason for wanting that status as I do: to be
> able to maintain an external release for their existing users with
> older versions of Python, until Python-in-the-field catches up with
> Python-in-development.

I handle this in a different way for the email package, which used to be
externally maintained and effectively still supports Pythons back to
2.1.  I'm happy (no, ecstatic) if others want to help maintain it
(w/discussion on email-sig of course), but I'll probably still be the
one doing standalone releases.

I do this by maintaining a directory in the sandbox that pulls in the
correct url from the Python repo via svn:externals, and also provides all the
necessary distutils and documentation chrome used to support the
standalone releases. That way, the code lives in Python's primary repo,
but it's still easy enough for me to spin releases.

The catch of course is that I have three checkouts, one for each of the
three major versions of email.  Two of them pull from branches in the
Python repo while one (currently) pulls from the trunk.  Yes, it's a
PITA, but I could reduce the amount of work by deciding on a different
mix of version compatibility.

I don't know if that will work for the other PEP 360 packages, but it
works for me.

- -Barry
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.2.2 (GNU/Linux)

iQCVAwUBRI2nkHEjvBPtnXfVAQLuZgP+NTCIYEM2gCiphIPlDCwncYBZ7FHrirPd
XHIPfmjDfYYNA+wWvCBYkgLPakRXCldSO7p2EYrl/RVetgZR51/V0kQsKbC+UkUa
TiCIgMr0P7rIMRXXN7dZAcvD8Xaxs7TE2meTEC+HaWem9wEUaJy/2PJSUmxTMF3t
BxlIGN7gjeg=
=C6oP
-----END PGP SIGNATURE-----

From guido at python.org  Mon Jun 12 19:45:42 2006
From: guido at python.org (Guido van Rossum)
Date: Mon, 12 Jun 2006 10:45:42 -0700
Subject: [Python-Dev] Dropping externally maintained packages (Was:
	Please stop changing wsgiref on the trunk)
In-Reply-To: <e6k8ug$p9q$1@sea.gmane.org>
References: <5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612121658.03255358@sparrow.telecommunity.com>
	<ca471dc20606120943o3f9d97ccle62db7539edec1c1@mail.gmail.com>
	<448D9EA1.9000209@v.loewis.de> <e6k8ug$p9q$1@sea.gmane.org>
Message-ID: <ca471dc20606121045k62941bfdjf642ae61ee1e7a1f@mail.gmail.com>

On 6/12/06, Fredrik Lundh <fredrik at pythonware.com> wrote:
> is this "we don't give a fuck about 3rd party developers and users" attitude
> really representative for the python-dev community and the PSF ?

If you want the PSF to listen to you, you should watch your language. I'm
out of here, back to focusing on Python 3000.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From fredrik at pythonware.com  Mon Jun 12 19:59:55 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Mon, 12 Jun 2006 19:59:55 +0200
Subject: [Python-Dev] External Package Maintenance (was Re: Please
	stopchanging wsgiref on the trunk)
References: <5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com><5.1.1.6.0.20060612121658.03255358@sparrow.telecommunity.com><5.1.1.6.0.20060612125445.01fd9258@sparrow.telecommunity.com>
	<ca471dc20606121042s77c0708fr54f9cea4d30772e9@mail.gmail.com>
Message-ID: <e6ka39$ucc$1@sea.gmane.org>

Guido van Rossum wrote:

> Maybe we should get serious about slimming down the core distribution
> and having a separate group of people maintain sumo bundles containing
> Python and lots of other stuff.

there are already lots of people doing that (most Linux distributions add stuff, directly
or indirectly; ActiveState and Enthought are doing that for windows, Nokia is doing
that for the S60 platform, etc); the PEP 360 approach is an attempt to emulate that
for the python.org distribution.

> I guess that's one way of doing it. But perhaps Python's setup.py
> should not bother at all, and this is up to the users.

or, for the python.org distribution, the release manager.

I'm not sure how to deal with things like documentation, regression tests, buildbots,
issue trackers, etc, though.

</F> 




From fdrake at acm.org  Mon Jun 12 20:04:46 2006
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Mon, 12 Jun 2006 14:04:46 -0400
Subject: [Python-Dev] External Package Maintenance (was Re: Please stop
	changing wsgiref on the trunk)
In-Reply-To: <ca471dc20606121042s77c0708fr54f9cea4d30772e9@mail.gmail.com>
References: <5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612125445.01fd9258@sparrow.telecommunity.com>
	<ca471dc20606121042s77c0708fr54f9cea4d30772e9@mail.gmail.com>
Message-ID: <200606121404.47048.fdrake@acm.org>

On Monday 12 June 2006 13:42, Guido van Rossum wrote:
 > Maybe we
 > should get serious about slimming down the core distribution and
 > having a separate group of people maintain sumo bundles containing
 > Python and lots of other stuff.

+1


  -Fred

-- 
Fred L. Drake, Jr.   <fdrake at acm.org>

From brett at python.org  Mon Jun 12 20:05:02 2006
From: brett at python.org (Brett Cannon)
Date: Mon, 12 Jun 2006 11:05:02 -0700
Subject: [Python-Dev] Dropping externally maintained packages (Was:
	Please stop changing wsgiref on the trunk)
In-Reply-To: <e6k8ug$p9q$1@sea.gmane.org>
References: <5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612121658.03255358@sparrow.telecommunity.com>
	<ca471dc20606120943o3f9d97ccle62db7539edec1c1@mail.gmail.com>
	<448D9EA1.9000209@v.loewis.de> <e6k8ug$p9q$1@sea.gmane.org>
Message-ID: <bbaeab100606121105t3a0e633fne59f67241cc28a@mail.gmail.com>

On 6/12/06, Fredrik Lundh <fredrik at pythonware.com> wrote:
>
> Martin v. Löwis wrote:
>
> > Then I guess we should deal with before 2.5b1, and delay 2.5b1 until the
> > status of each of these has been clarified.
> >
> > Each maintainer should indicate whether he is happy with a "this is
> > part of Python" approach. If so, the entry should be removed from PEP
> > 360 (*); if not, the code should be removed from Python before beta 1.
>
> is this "we don't give a fuck about 3rd party developers and users"
> attitude
> really representative for the python-dev community and the PSF ?



Well, obviously I care since I wrote the PEP in the first place.

But I don't think this is trying to say they don't care.  People just want
to lower the overhead of maintaining the distro.  Having to report bugs and
patches to an external location instead of just getting to fix it directly
is a hassle that people don't ask for.  Plus it is bound to get forgotten by
accident, so planning for it doesn't hurt.

-Brett
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.python.org/pipermail/python-dev/attachments/20060612/93861c49/attachment.htm 

From nicko at nicko.org  Mon Jun 12 19:52:48 2006
From: nicko at nicko.org (Nicko van Someren)
Date: Mon, 12 Jun 2006 18:52:48 +0100
Subject: [Python-Dev] a note in random.shuffle.__doc__ ...
In-Reply-To: <17548.49469.394804.146445@terry.jones.tc>
References: <1CC07EFE-45B2-4AA4-BEFB-0CFFE4CDB1F9@gmail.com>
	<20060610124332.F2B2.JCARLSON@uci.edu>
	<20060610125305.F2B5.JCARLSON@uci.edu>
	<EEEB13F7-072A-4B17-A99E-3463B53AB434@gmail.com>
	<17547.16708.105058.906604@terry.jones.tc>
	<448B664E.3040003@canterbury.ac.nz>
	<17547.27686.67002.988677@terry.jones.tc>
	<448CB505.2040304@canterbury.ac.nz>
	<17548.49469.394804.146445@terry.jones.tc>
Message-ID: <4A7907AD-8F78-4DDD-9F0A-3801D1795D40@nicko.org>

On 12 Jun 2006, at 02:19, Terry Jones wrote:

>>>>>> "Greg" == Greg Ewing <greg.ewing at canterbury.ac.nz> writes:
>
> Greg> Terry Jones wrote:
>>> Suppose you have a RNG with a cycle length of 5. There's nothing  
>>> to stop an
>>> algorithm from taking multiple already returned values and  
>>> combining them
>>> in some (deterministic) way to generate > 5 outcomes.
>
> Greg> No, it's not. As long as the RNG output is the only input to
> Greg> the algorithm, and the algorithm is deterministic, it is
> Greg> not possible get more than N different outcomes. It doesn't
> Greg> matter what the algorithm does with the input.
...
> The code below uses a RNG with period 5, is deterministic, and has one
> initial state. It produces 20 different outcomes.

I think that in any meaningful sense your code is producing just one  
outcome, since it has just one initial state.  It is completely  
deterministic and has no seed, so this is expected.

> It's just doing a simplistic version of what a lagged RNG generator  
> does,
> but the lagged part is in the "algorithm" not in the rng. That's  
> why I said
> if you included the state of the algorithm in what you meant by  
> "state" I'd
> be more inclined to agree.

This is a different issue.  You instantiate more than one PRNG.  If  
you have n PRNGs which each have a period p then you can put the
combination into p^n different starting states, which can be useful but
only if you can find n*log2(p) bits of starting entropy to get the  
thing into a usefully random state.
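
A toy illustration of that p^n point, using two counters of period 5
seeded independently: the pair of seeds is the combined state, and every
one of the 5**2 = 25 seed pairs yields a distinct outcome sequence:

    P = 5

    def stream(seed1, seed2, length=P * P):
        out = []
        s1, s2 = seed1, seed2
        for i in range(length):
            out.append((s1, s2))
            s1 = (s1 + 1) % P      # period P
            s2 = (s2 + 3) % P      # also period P (3 and 5 are coprime)
        return tuple(out)

    streams = set()
    for a in range(P):
        for b in range(P):
            streams.add(stream(a, b))
    print len(streams)             # 25 == P ** 2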

	Nicko


> def rng():
...
>     history = [rng()]
...
>     for lag in range(1, 5):
...
>             new = rng()




From fredrik at pythonware.com  Mon Jun 12 20:17:02 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Mon, 12 Jun 2006 20:17:02 +0200
Subject: [Python-Dev] Dropping externally maintained packages
	(Was:Please stop changing wsgiref on the trunk)
References: <5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com><5.1.1.6.0.20060612121658.03255358@sparrow.telecommunity.com><ca471dc20606120943o3f9d97ccle62db7539edec1c1@mail.gmail.com><448D9EA1.9000209@v.loewis.de>
	<e6k8ug$p9q$1@sea.gmane.org>
	<bbaeab100606121105t3a0e633fne59f67241cc28a@mail.gmail.com>
Message-ID: <e6kb3d$2gg$1@sea.gmane.org>

Brett Cannon wrote:

> But I don't think this is trying to say they don't care.  People just want
> to lower the overhead of maintaining the distro.

well, wouldn't the best way to do that be to leave all non-trivial maintenance of a
given component to an existing external community?

(after all, the number of non-python-dev python contributors is *much* larger
than the number of python-dev contributors).

I mean, we're not really talking about ordinary leak-elimination or portability-fixing
or security-hole-plugging maintenance; it's the let's-extend-the-api-in-incompatible-
ways and fork-because-we-can stuff that I'm worried about.

</F> 




From rasky at develer.com  Mon Jun 12 20:20:10 2006
From: rasky at develer.com (Giovanni Bajo)
Date: Mon, 12 Jun 2006 20:20:10 +0200
Subject: [Python-Dev] External Package Maintenance (was Re: Please
	stopchanging wsgiref on the trunk)
References: <5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com><5.1.1.6.0.20060612121658.03255358@sparrow.telecommunity.com><5.1.1.6.0.20060612125445.01fd9258@sparrow.telecommunity.com>
	<ca471dc20606121042s77c0708fr54f9cea4d30772e9@mail.gmail.com>
Message-ID: <058501c68e4c$d6e701c0$bf03030a@trilan>

Guido van Rossum wrote:

>>> I personally think that, going forward, external maintainers should
>>> not be granted privileges such as are being granted by PEP 360, and
>>> an inclusion of a package in the Python tree should be considered a
>>> "fork" for all practical purposes. If an external developer is not
>>> okay with such an arrangement, they shouldn't contribute.
>>
>> This is going to make it tougher to get good contributions, where
>> "good" means "has existing users and a maintainer committed to
>> supporting them".
>
> To which I say, "fine". From the Python core maintainers' POV, more
> standard library code is just more of a maintenance burden. Maybe we
> should get serious about slimming down the core distribution and
> having a separate group of people maintain sumo bundles containing
> Python and lots of other stuff.

-1000.

One of the biggest Python strengths, and one that I personally rely on a lot,
is the large *standard* library. It means that you can write scripts and
programs that will run on any Python installation out there, no matter how
many eggs were downloaded before, no matter whether the Internet connection
is available or not, no matter if the user has privileges to install
extensions, even if the SourceForge mirror is down, even if SourceForge
changed their HTML and now the magic code can't grok it anymore, etc etc
etc.

If Python were to lose this standard library in favor of several different
distributions, users could not sensibly write a program anymore without
incurring the risk of using packages not available to some users. Perl has
this problem with CPAN, and system administrators go through hoops to
write admin scripts which do not rely on any external package just because
you can't be sure if a package is installed or not; this leads to code
duplication (duplication of the code included in an external package, but
which can't be "reliably" used), and to bugs (since the local copy of the
functionality is almost surely buggier than the widespread implementation of
the external package).

Let's not get into this mess, please. I think we just need a smoother way to
maintain the standard library, not an agreement to remove it, just because
we cannot find a way to maintain it properly. The fact that there are hundreds
of unreviewed patches to the standard library made by wannabe contributors
is a blatant sign that something *can* be improved.
-- 
Giovanni Bajo


From guido at python.org  Mon Jun 12 20:23:49 2006
From: guido at python.org (Guido van Rossum)
Date: Mon, 12 Jun 2006 11:23:49 -0700
Subject: [Python-Dev] External Package Maintenance (was Re: Please
	stopchanging wsgiref on the trunk)
In-Reply-To: <058501c68e4c$d6e701c0$bf03030a@trilan>
References: <5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612121658.03255358@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612125445.01fd9258@sparrow.telecommunity.com>
	<ca471dc20606121042s77c0708fr54f9cea4d30772e9@mail.gmail.com>
	<058501c68e4c$d6e701c0$bf03030a@trilan>
Message-ID: <ca471dc20606121123v726d89d4pe507ba2fc7f9ed5c@mail.gmail.com>

On 6/12/06, Giovanni Bajo <rasky at develer.com> wrote:
> Guido van Rossum wrote:
>
> >>> I personally think that, going forward, external maintainers should
> >>> not be granted privileges such as are being granted by PEP 360, and
> >>> an inclusion of a package in the Python tree should be considered a
> >>> "fork" for all practical purposes. If an external developer is not
> >>> okay with such an arrangement, they shouldn't contribute.
> >>
> >> This is going to make it tougher to get good contributions, where
> >> "good" means "has existing users and a maintainer committed to
> >> supporting them".
> >
> > To which I say, "fine". From the Python core maintainers' POV, more
> > standard library code is just more of a maintenance burden. Maybe we
> > should get serious about slimming down the core distribution and
> > having a separate group of people maintain sumo bundles containing
> > Python and lots of other stuff.
>
> -1000.
>
> One of the biggest Python strength, and one that I personally rely on a lot,
> is the large *standard* library. It means that you can write scripts and
> programs that will run on any Python installation out there, no matter how
> many eggs were downloaded before, no matter whether the Internet connection
> is available or not, no matter if the user has privileges to install
> extensions, even if the SourceForge mirror is down, even if SourceForge
> changed their HTML and now the magic code can't grok it anymore, etc etc
> etc.
>
> If Python were to lose this standard library in favor of several different
> distributions, users could not sensibly write a program anymore without
> incurring the risk of using packages not available to some users. Perl has
> this problem with CPAN, and system administrators going through hoops to
> write admin scripts which do not rely on any external package just because
> you can't be sure if a package is installed or not; this leads to code
> duplication (duplication of the code included in an external package, but
> which can't be "reliably" used), and to bugs (since the local copy of the
> functionality can surely be more buggy than the widespread implementation of
> the external package).
>
> Let's not get into this mess, please. I think we just need a smoother way to
> maintain the standard library, not an agreement to remove it, just because
> we cannot find a way to maintain it properly. The fact that there hundreds
> of unreviewed patches to the standard library made by wannabe contributors
> is a blatant sign that something *can* be improved.

I'm with you, actually; developers contributing code without wanting
to give up control are the problem. You should go talk to those
contributors.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Mon Jun 12 20:25:21 2006
From: guido at python.org (Guido van Rossum)
Date: Mon, 12 Jun 2006 11:25:21 -0700
Subject: [Python-Dev] Dropping externally maintained packages
	(Was:Please stop changing wsgiref on the trunk)
In-Reply-To: <e6kb3d$2gg$1@sea.gmane.org>
References: <5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612121658.03255358@sparrow.telecommunity.com>
	<ca471dc20606120943o3f9d97ccle62db7539edec1c1@mail.gmail.com>
	<448D9EA1.9000209@v.loewis.de> <e6k8ug$p9q$1@sea.gmane.org>
	<bbaeab100606121105t3a0e633fne59f67241cc28a@mail.gmail.com>
	<e6kb3d$2gg$1@sea.gmane.org>
Message-ID: <ca471dc20606121125n321a2dc2oad2b69458e6a3bf1@mail.gmail.com>

On 6/12/06, Fredrik Lundh <fredrik at pythonware.com> wrote:
> I mean, we're not really talking about ordinary leak-elimination or portability-fixing
> or security-hole-plugging maintenance; it's the let's-extend-the-api-in-incompatible-
> ways and fork-because-we-can stuff that I'm worried about.

Have any instances of that actually happened? That would be a problem
with *any* code in the Python library, not just external
contributions, so I'm not sure why external contributions should be
treated any differently here.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From brett at python.org  Mon Jun 12 20:26:32 2006
From: brett at python.org (Brett Cannon)
Date: Mon, 12 Jun 2006 11:26:32 -0700
Subject: [Python-Dev] Dropping externally maintained packages
	(Was:Please stop changing wsgiref on the trunk)
In-Reply-To: <e6kb3d$2gg$1@sea.gmane.org>
References: <5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612121658.03255358@sparrow.telecommunity.com>
	<ca471dc20606120943o3f9d97ccle62db7539edec1c1@mail.gmail.com>
	<448D9EA1.9000209@v.loewis.de> <e6k8ug$p9q$1@sea.gmane.org>
	<bbaeab100606121105t3a0e633fne59f67241cc28a@mail.gmail.com>
	<e6kb3d$2gg$1@sea.gmane.org>
Message-ID: <bbaeab100606121126u64ee5f16kafcd720428a9e549@mail.gmail.com>

On 6/12/06, Fredrik Lundh <fredrik at pythonware.com> wrote:
>
> Brett Cannon wrote:
>
> > But I don't think this is trying to say they don't care.  People just
> want
> > to lower the overhead of maintaining the distro.
>
> well, wouldn't the best way to do that be to leave all non-trivial
> maintenance of a
> given component to an existing external community?
>
> (after all, the number of non-python-dev python contributors are *much*
> larger
> than the number of python-dev contributors).
>
> I mean, we're not really talking about ordinary leak-elimination or
> portability-fixing
> or security-hole-plugging maintenance; it's the
> let's-extend-the-api-in-incompatible-
> ways and fork-because-we-can stuff that I'm worried about.




Well, I don't know if that is necessarily the case.  PEP 360 doesn't have a
single project saying that minor fixes can just go right in.  If we want to
just change the wording such that all code in the tree can be touched for
bug fixes and compatibility issues without clearance, that's great.

But Phillip's email that sparked all of this was about basic changes to
wsgiref, not some API change (at least to the best of my knowledge).

-Brett
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.python.org/pipermail/python-dev/attachments/20060612/5127f7b8/attachment.html 

From pje at telecommunity.com  Mon Jun 12 20:32:05 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Mon, 12 Jun 2006 14:32:05 -0400
Subject: [Python-Dev] External Package Maintenance (was Re: Please stop
 changing wsgiref on the trunk)
In-Reply-To: <ca471dc20606121042s77c0708fr54f9cea4d30772e9@mail.gmail.co
 m>
References: <5.1.1.6.0.20060612125445.01fd9258@sparrow.telecommunity.com>
	<5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612121658.03255358@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612125445.01fd9258@sparrow.telecommunity.com>
Message-ID: <5.1.1.6.0.20060612142631.01e83a30@sparrow.telecommunity.com>

At 10:42 AM 6/12/2006 -0700, Guido van Rossum wrote:
>Sure, but this doesn't require the draconian "I-and-I-only own the
>code" approach that you have.

If there were only one version and directory tree to maintain for both
the Python trunk and the external version, I wouldn't mind other people 
making changes.   It's the synchronization that's a PITA, especially 
because of the directory layout.

If we had Externals/ I would just issue snapshots from there.


>And is that such a big deal? Now that wsgiref is being distributed
>with Python 2.5, it shouldn't evlove at a much faster pace than Python
>2.5, otherwise it would defeat the purpose of having it in 2.5. (And
>isn't it just a reference implementation? Why would it evolve at all?)

This is backwards: I'm not the one who evolved it, other Python devs 
did!  :)  I want Python 2.5 to distribute some version of wsgiref that is 
precisely the same as *some* public wsgiref release, so that PEP 360 will 
have accurate info and so that people who want a particular wsgiref release
can specify a sane version number, to avoid the kind of skew we used to 
have with micro-releases (e.g. 2.2.2).


>>If I understand correctly, the main thing it would require is that Python's
>>setup.py invoke all the Externals/*/setup.py files.
>
>I guess that's one way of doing it. But perhaps Python's setup.py
>should not bother at all, and this is up to the users.

However, if Python's setup.py did this, then external developers would get 
the benefit (and discipline) of the buildbots and testing.  That seems like 
a good thing to me.


From tim.peters at gmail.com  Mon Jun 12 21:05:29 2006
From: tim.peters at gmail.com (Tim Peters)
Date: Mon, 12 Jun 2006 15:05:29 -0400
Subject: [Python-Dev] Should hex() yield 'L' suffix for long numbers?
In-Reply-To: <ca471dc20606120957t223a1d44o78b15e401177b60d@mail.gmail.com>
References: <Pine.LNX.4.58.0606111925410.5223@server1.LFW.org>
	<1f7befae0606112011y119f8f7ds91f753df01884ee4@mail.gmail.com>
	<ca471dc20606120957t223a1d44o78b15e401177b60d@mail.gmail.com>
Message-ID: <1f7befae0606121205k645a2397q10047b7614623576@mail.gmail.com>

[Guido]
> Here's how I interpret PEP 237. Some changes to hex() and oct() are
> warned about in B1and to be implemented in B2. But I'm pretty sure
> that was about the treatment of negative numbers, not about the
> trailing 'L'. I believe the PEP authors overlooked the trailing 'L'
> for hex() and oct().

That was mentioned explicitly under "Incompatibilities" (last sentence):

    - Currently, the '%u', '%x', '%X' and '%o' string formatting
      operators and the hex() and oct() built-in functions behave
      differently for negative numbers: negative short ints are
      formatted as unsigned C long, while negative long ints are
      formatted with a minus sign.  This will be changed to use the
      long int semantics in all cases (but without the trailing 'L'
      that currently distinguishes the output of hex() and oct() for
      long ints). ...

Since it wasn't mentioned explicitly again under "Transition", but the
trailing 'L' on repr() was explicitly mentioned twice under
"Transition", the least strained logic-chopping reading is that losing
the 'L' for hex() and oct() was intended to be done along with the
other changes in the paragraph quoted above.

> I think they should be considered just as sticky as the trailing 'L' for repr().

Given that the "least strained" reading above missed its target
release, and the purpose of target releases was to minimize annoying
changes, I agree it should be left for P3K now regardless.  I'll
change the PEP accordingly to make this explicit.

From amk at amk.ca  Mon Jun 12 21:16:37 2006
From: amk at amk.ca (A.M. Kuchling)
Date: Mon, 12 Jun 2006 15:16:37 -0400
Subject: [Python-Dev] External Package Maintenance
In-Reply-To: <ca471dc20606121042s77c0708fr54f9cea4d30772e9@mail.gmail.com>
References: <5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612121658.03255358@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612125445.01fd9258@sparrow.telecommunity.com>
	<ca471dc20606121042s77c0708fr54f9cea4d30772e9@mail.gmail.com>
Message-ID: <20060612191637.GA9787@rogue.amk.ca>

On Mon, Jun 12, 2006 at 10:42:44AM -0700, Guido van Rossum wrote:
> standard library code is just more of a maintenance burden. Maybe we
> should get serious about slimming down the core distribution and
> having a separate group of people maintain sumo bundles containing
> Python and lots of other stuff. Or maybe we don't even need the

That separate group of people = Linux distributors, ActiveState,
whoever maintains the fink archive (and to some degree Apple), so we
already have sumo distributions available.

That doesn't help people on platforms without package databases or
repositories of free software, but I don't know if any such platforms
matter.  I'd bet over 90% of people writing Python code are on one of
Windows, MacOS, or Linux; Solaris would add a small sliver to that
90%, and then all the other platforms (other Unix variants, Palms,
Nokias, etc.) are in the noise.

--amk

From janssen at parc.com  Mon Jun 12 21:09:27 2006
From: janssen at parc.com (Bill Janssen)
Date: Mon, 12 Jun 2006 12:09:27 PDT
Subject: [Python-Dev] Import semantics
In-Reply-To: Your message of "Mon, 12 Jun 2006 01:00:13 PDT."
	<448D1F0D.7000405@strakt.com> 
Message-ID: <06Jun12.120928pdt."58641"@synergy1.parc.xerox.com>

> the difference in Jython is deliberate. I think the reason was to mimic 
> more the Java style for this, in java fully qualified names always work. 
> In jython importing the top level packages is enough to get a similar 
> effect.
> 
> This is unlikely to change for backward compatibility reasons, at least 
> from my POV.

While I appreciate the usage concerns, at some point someone has to
decide whether Jython is an implementation of Python, or a version of
BeanShell with odd syntax.  If it's Python, it has to comply with the
Python specification, regardless of what Java does.  If it doesn't do
that, it should be cast into the outer darkness of the Python world.

The escape hatch here would be to redefine the Python specification to
allow either behavior.

Bill

From g.brandl at gmx.net  Mon Jun 12 21:11:05 2006
From: g.brandl at gmx.net (Georg Brandl)
Date: Mon, 12 Jun 2006 21:11:05 +0200
Subject: [Python-Dev] file()
In-Reply-To: <ca471dc20606120711r2ca09fbbk36133c1c6fc8770a@mail.gmail.com>
References: <129CEF95A523704B9D46959C922A280002A4C6B3@nemesis.central.ccp.cc>
	<ca471dc20606120711r2ca09fbbk36133c1c6fc8770a@mail.gmail.com>
Message-ID: <e6ke89$d7n$1@sea.gmane.org>

Guido van Rossum wrote:
> Yup, although it's a change in behavior that would need to be studied
> carefully for backwards incompatibilities. Usually it's given as a
> constant, so there won't be any problems; but there might be code that
> receives a mode string and attempts to test its validity by trying it
> and catching IOError, such code would have to be changed.
> 
> --Guido
> 
> On 6/12/06, Kristján V. Jónsson <kristjan at ccpgames.com> wrote:
>> I notice that file() throws an IOError when it detects an invalid mode
>> string.  Wouldn't a ValueError be more appropriate?
> 

The situation is even more complex with the current trunk. open() raises
ValueError if it detects an invalid mode string, such as universal
newline mode and a writable mode combined (the definition of
what is invalid has been made stricter, the mode string now must begin
with r, w, a or U), but it raises IOError if the OS call to fopen() fails
because of an invalid mode string. This might need unification.
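
Until that's unified, code that probes a mode string for validity has to
be prepared for either exception; a minimal sketch:

    def try_open(path, mode):
        try:
            return open(path, mode)
        except (ValueError, IOError), exc:
            # the trunk may raise either one for a bad mode string
            print "mode %r rejected: %s" % (mode, exc)
            return None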

Georg


From pje at telecommunity.com  Mon Jun 12 21:12:20 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Mon, 12 Jun 2006 15:12:20 -0400
Subject: [Python-Dev] External Package Maintenance (was Re: Please
 stopchanging wsgiref on the trunk)
In-Reply-To: <ca471dc20606121123v726d89d4pe507ba2fc7f9ed5c@mail.gmail.co
 m>
References: <058501c68e4c$d6e701c0$bf03030a@trilan>
	<5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612121658.03255358@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612125445.01fd9258@sparrow.telecommunity.com>
	<ca471dc20606121042s77c0708fr54f9cea4d30772e9@mail.gmail.com>
	<058501c68e4c$d6e701c0$bf03030a@trilan>
Message-ID: <5.1.1.6.0.20060612143737.03130e80@sparrow.telecommunity.com>

At 11:23 AM 6/12/2006 -0700, Guido van Rossum wrote:
>developers contributing code without wanting
>to give up control are the problem.

Control isn't the issue; it's ensuring that fixes don't get lost or 
reverted from either the external version or the stdlib version.  Control 
is merely a means to that end.  If we can accomplish that via some other 
means (e.g. an Externals/ subtree), I'm all for it.  (Actually, perhaps 
Packages/ would be a better name, since the point is that they're packages 
that are maintained for separate distribution for older Python 
versions.  They're really *not* "external" any more, they just get 
snapshotted for release.)

I guess I should say, control isn't the issue for *me*.  I can't speak for 
anyone else.  Fredrik has raised the issue of API forks, but I haven't 
encountered this myself.  I *have* seen some developers make spurious 
"cleanups" to working code that breaks compatibility with older Python 
versions, though, just not in wsgiref.
But I don't mind policing such things myself as long as I have only one 
subtree to svn log (versus having to track three separate logs and diffs 
for Lib/wsgiref/, Lib/test/test_wsgiref.py, and Doc/lib/libwsgiref.tex, and 
then having to reformat the diffs to apply to a different directory layout).

Now, Barry's approach to the email package makes good sense to me, and I'd 
use it, except that SVN externals can't sync individual files.  I'd have to 
create Lib/wsgiref/tests (and make a dummy Lib/test/test_wsgiref that 
invokes them) and Lib/wsgiref/doc (and make Doc/lib/lib.tex include 
libwsgiref.tex from there).  If those changes are acceptable, I'd be happy 
to take that as a compromise approach.  I'll still have to manually update 
the Python PKG-INFO (due to no setup.py), but it'd be better than nothing.


From pedronis at strakt.com  Mon Jun 12 21:20:53 2006
From: pedronis at strakt.com (Samuele Pedroni)
Date: Mon, 12 Jun 2006 21:20:53 +0200
Subject: [Python-Dev] Import semantics
In-Reply-To: <06Jun12.120928pdt."58641"@synergy1.parc.xerox.com>
References: <06Jun12.120928pdt."58641"@synergy1.parc.xerox.com>
Message-ID: <448DBE95.2030703@strakt.com>

Bill Janssen wrote:
>>the difference in Jython is deliberate. I think the reason was to mimic 
>>more the Java style for this, in java fully qualified names always work. 
>>In jython importing the top level packages is enough to get a similar 
>>effect.
>>
>>This is unlikely to change for backward compatibility reasons, at least 
>>from my POV.
> 
> 
> While I appreciate the usage concerns, at some point someone has to
> decide whether Jython is an implementation of Python, or a version of
> BeanShell with odd syntax. 

this is mildly insulting to the people that spent time trying to find
the best compromises between various issues and keep jython alive.
For example, I spent quite some energy at times to justify not 
implementing some unpythonic but tempting features from a java pov.

> If it's Python, it has to comply with the
> Python specification, regardless of what Java does.  

CPython cannot import Java packages. Also, Python rules don't translate
completely to them, and sometimes Python import semantics are as clear
as mud and reading import.c is the only way to know. Of course, allowing
this for Python packages was an overgeneralisation.

> If it doesn't do
> that, it should be cast into the outer darkness of the Python world.
> 

This is a design decision that goes back to Jim. It could maybe be
limited to Java packages, but it is sadly quite addictive.
That's why I mentioned a backward compatibility problem.


> The escape hatch here would be to redefine the Python specification to
> allow either behavior.
> 
> Bill


From tim.peters at gmail.com  Mon Jun 12 21:29:03 2006
From: tim.peters at gmail.com (Tim Peters)
Date: Mon, 12 Jun 2006 15:29:03 -0400
Subject: [Python-Dev] Dropping externally maintained packages
	(Was:Please stop changing wsgiref on the trunk)
In-Reply-To: <e6kb3d$2gg$1@sea.gmane.org>
References: <5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612121658.03255358@sparrow.telecommunity.com>
	<ca471dc20606120943o3f9d97ccle62db7539edec1c1@mail.gmail.com>
	<448D9EA1.9000209@v.loewis.de> <e6k8ug$p9q$1@sea.gmane.org>
	<bbaeab100606121105t3a0e633fne59f67241cc28a@mail.gmail.com>
	<e6kb3d$2gg$1@sea.gmane.org>
Message-ID: <1f7befae0606121229t41771359w4b18e7ff230c2f23@mail.gmail.com>

[Brett]
>> But I don't think this is trying to say they don't care.  People just want
>> to lower the overhead of maintaining the distro.

[Fredrik]
> well, wouldn't the best way to do that be to leave all non-trivial maintenance of a
> given component to an existing external community?
>
> (after all, the number of non-python-dev Python contributors is *much* larger
> than the number of python-dev contributors).
>
> I mean, we're not really talking about ordinary leak-elimination or portability-fixing
> or security-hole-plugging maintenance; it's the let's-extend-the-api-in-incompatible-
> ways and fork-because-we-can stuff that I'm worried about.

Well, look at the changes Philip originally complained about:

C:\Code\python\Lib\wsgiref>svn log -rHEAD:0 -v --stop-on-copy
------------------------------------------------------------------------
r46887 | phillip.eby | 2006-06-12 00:04:32 -0400 (Mon, 12 Jun 2006) | 2 lines
Changed paths:
...
Sync w/external release 0.1.2.  Please see PEP 360 before making
changes to external packages.

------------------------------------------------------------------------
r46855 | neal.norwitz | 2006-06-11 03:26:27 -0400 (Sun, 11 Jun 2006) | 1 line
Changed paths:
   M /python/trunk/Lib/pkgutil.py
   M /python/trunk/Lib/wsgiref/validate.py

Fix errors found by pychecker
------------------------------------------------------------------------
r46800 | andrew.kuchling | 2006-06-09 15:43:25 -0400 (Fri, 09 Jun 2006) | 1 line
Changed paths:
   M /python/trunk/Lib/wsgiref/simple_server.py

Remove unused variable
------------------------------------------------------------------------
r46794 | brett.cannon | 2006-06-09 14:40:46 -0400 (Fri, 09 Jun 2006) | 2 lines
Changed paths:
   M /python/trunk/Lib/msilib
   M /python/trunk/Lib/test/crashers
   M /python/trunk/Lib/wsgiref

svn:ignore .pyc and .pyo files.

------------------------------------------------------------------------
r46787 | tim.peters | 2006-06-09 13:47:00 -0400 (Fri, 09 Jun 2006) | 2 lines
Changed paths:
   M /python/trunk/Lib/wsgiref/handlers.py
   M /python/trunk/Lib/wsgiref/headers.py
   M /python/trunk/Lib/wsgiref/simple_server.py
   M /python/trunk/Lib/wsgiref/util.py
   M /python/trunk/Lib/wsgiref/validate.py

Whitespace normalization.


That's all ordinary everyday maintenance, and, e.g., there is no
mechanism to exempt anything in a checkout tree from reindent.py or
PyChecker complaints.

In addition, not shown above is that I changed test_wsgiref.py to stop
a test failure under -O.  Given that we're close to the next Python
release, and test_wsgiref was the only -O test failure, I wasn't going
to let that stand.  I did wait ~30 hours between emailing about the
problem and fixing it, but I like to whittle down my endless todo list
too <0.4 wink>.

From fredrik at pythonware.com  Mon Jun 12 21:30:53 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Mon, 12 Jun 2006 21:30:53 +0200
Subject: [Python-Dev] Import semantics
In-Reply-To: <06Jun12.120928pdt."58641"@synergy1.parc.xerox.com>
References: <448D1F0D.7000405@strakt.com>
	<06Jun12.120928pdt."58641"@synergy1.parc.xerox.com>
Message-ID: <e6kfdd$i2p$1@sea.gmane.org>

Bill Janssen wrote:

> If it's Python, it has to comply with the Python specification,
 > regardless of what Java does.

what specification ?

</F>


From guido at python.org  Mon Jun 12 21:34:44 2006
From: guido at python.org (Guido van Rossum)
Date: Mon, 12 Jun 2006 12:34:44 -0700
Subject: [Python-Dev] file()
In-Reply-To: <e6ke89$d7n$1@sea.gmane.org>
References: <129CEF95A523704B9D46959C922A280002A4C6B3@nemesis.central.ccp.cc>
	<ca471dc20606120711r2ca09fbbk36133c1c6fc8770a@mail.gmail.com>
	<e6ke89$d7n$1@sea.gmane.org>
Message-ID: <ca471dc20606121234j190c058dg60307bf9176ec78d@mail.gmail.com>

On 6/12/06, Georg Brandl <g.brandl at gmx.net> wrote:
> Guido van Rossum wrote:
> > Yup, although it's a change in behavior that would need to be studied
> > carefully for backwards incompatibilities. Usually it's given as a
> > constant, so there won't be any problems; but there might be code that
> > receives a mode string and attempts to test its validity by trying it
> > and catching IOError, such code would have to be changed.
> >
> > --Guido
> >
> > On 6/12/06, Kristj?n V. J?nsson <kristjan at ccpgames.com> wrote:
> >> I notice that file() throws an IOError when it detects an invalid mode
> >> string.  Wouldn't a ValueError be more appropriate?
> >
>
> The situation is even more complex with the current trunk. open() raises
> ValueError if it detects an invalid mode string, such as universal
> newline mode and a writable mode combined (the definition of
> what is invalid has been made stricter, the mode string now must begin
> with r, w, a or U), but it raises IOError if the OS call to fopen() fails
> because of an invalid mode string. This might need unification.

That would be hard to fix unless we get rid of the stdio-based
implementation (which I intend to do in Py3k). I say we leave it alone
for now -- fopen() can fail for any number of platform-dependent
reasons and we can't really expect to do look-before-you-leap on this.
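
To spell out the kind of code that would be affected (a contrived sketch,
using a deliberately bogus mode string):

    mode = "z"   # hypothetical invalid mode: doesn't start with r, w, a or U
    try:
        f = open("data.txt", mode)
    except ValueError:
        pass   # trunk: rejected by Python's own mode check
    except IOError:
        pass   # rejected by the underlying fopen(), or the file is unopenable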

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From g.brandl at gmx.net  Mon Jun 12 21:35:59 2006
From: g.brandl at gmx.net (Georg Brandl)
Date: Mon, 12 Jun 2006 21:35:59 +0200
Subject: [Python-Dev] file()
In-Reply-To: <ca471dc20606121234j190c058dg60307bf9176ec78d@mail.gmail.com>
References: <129CEF95A523704B9D46959C922A280002A4C6B3@nemesis.central.ccp.cc>	<ca471dc20606120711r2ca09fbbk36133c1c6fc8770a@mail.gmail.com>	<e6ke89$d7n$1@sea.gmane.org>
	<ca471dc20606121234j190c058dg60307bf9176ec78d@mail.gmail.com>
Message-ID: <e6kfmv$ivf$1@sea.gmane.org>

Guido van Rossum wrote:
> On 6/12/06, Georg Brandl <g.brandl at gmx.net> wrote:
>> Guido van Rossum wrote:
>> > Yup, although it's a change in behavior that would need to be studied
>> > carefully for backwards incompatibilities. Usually it's given as a
>> > constant, so there won't be any problems; but there might be code that
>> > receives a mode string and attempts to test its validity by trying it
>> > and catching IOError, such code would have to be changed.
>> >
>> > --Guido
>> >
>> > On 6/12/06, Kristj?n V. J?nsson <kristjan at ccpgames.com> wrote:
>> >> I notice that file() throws an IOError when it detects an invalid mode
>> >> string.  Wouldn't a ValueError be more appropriate?
>> >
>>
>> The situation is even more complex with the current trunk. open() raises
>> ValueError if it detects an invalid mode string, such as universal
>> newline mode and a writable mode combined (the definition of
>> what is invalid has been made stricter, the mode string now must begin
>> with r, w, a or U), but it raises IOError if the OS call to fopen() fails
>> because of an invalid mode string. This might need unification.
> 
> That would be hard to fix unless we get rid of the stdio-based
> implementation (which I intend to do in Py3k). I say we leave it alone
> for now -- fopen() can fail for any number of platform-dependent
> reasons and we can't really expect to do look-before-you-leap on this.

One option would be to raise IOError in the former case too. That's what
I meant with "unification".

Cheers,
Georg


From guido at python.org  Mon Jun 12 21:37:01 2006
From: guido at python.org (Guido van Rossum)
Date: Mon, 12 Jun 2006 12:37:01 -0700
Subject: [Python-Dev] Should hex() yield 'L' suffix for long numbers?
In-Reply-To: <1f7befae0606121205k645a2397q10047b7614623576@mail.gmail.com>
References: <Pine.LNX.4.58.0606111925410.5223@server1.LFW.org>
	<1f7befae0606112011y119f8f7ds91f753df01884ee4@mail.gmail.com>
	<ca471dc20606120957t223a1d44o78b15e401177b60d@mail.gmail.com>
	<1f7befae0606121205k645a2397q10047b7614623576@mail.gmail.com>
Message-ID: <ca471dc20606121237l2137d251o3bbda3fe5b95fdc2@mail.gmail.com>

On 6/12/06, Tim Peters <tim.peters at gmail.com> wrote:
> [Guido]
> > Here's how I interpret PEP 237. Some changes to hex() and oct() are
> > warned about in B1 and to be implemented in B2. But I'm pretty sure
> > that was about the treatment of negative numbers, not about the
> > trailing 'L'. I believe the PEP authors overlooked the trailing 'L'
> > for hex() and oct().
>
> That was mentioned explicitly under "Incompatibilities" (last sentence):
>
>     - Currently, the '%u', '%x', '%X' and '%o' string formatting
>       operators and the hex() and oct() built-in functions behave
>       differently for negative numbers: negative short ints are
>       formatted as unsigned C long, while negative long ints are
>       formatted with a minus sign.  This will be changed to use the
>       long int semantics in all cases (but without the trailing 'L'
>       that currently distinguishes the output of hex() and oct() for
>       long ints). ...

Oops, I missed that.

> Since it wasn't mentioned explicitly again under "Transition", but the
> trailing 'L' on repr() was explicitly mentioned twice under
> "Transition", the least strained logic-chopping reading is that losing
> the 'L' for hex() and oct() was intended to be done along with the
> other changes in the paragraph quoted above.

I now agree with that.
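
(For concreteness, the 2.x behaviour under discussion is roughly:

    >>> hex(10), oct(10)      # short ints: no suffix
    ('0xa', '012')
    >>> hex(10L), oct(10L)    # long ints: trailing 'L'
    ('0xaL', '012L')

so it's only the second pair that would change.)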

> > I think they should be considered just as sticky as the trailing 'L' for repr().
>
> Given that the "least strained" reading above missed its target
> release, and the purpose of target releases was to minimize annoying
> changes, I agree it should be left for P3K now regardless.  I'll
> change the PEP accordingly to make this explicit.

Agreed again. Thanks for updating the PEP.

PS Tim: did you get my private email about SequenceMatcher?

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Mon Jun 12 21:37:55 2006
From: guido at python.org (Guido van Rossum)
Date: Mon, 12 Jun 2006 12:37:55 -0700
Subject: [Python-Dev] file()
In-Reply-To: <e6kfmv$ivf$1@sea.gmane.org>
References: <129CEF95A523704B9D46959C922A280002A4C6B3@nemesis.central.ccp.cc>
	<ca471dc20606120711r2ca09fbbk36133c1c6fc8770a@mail.gmail.com>
	<e6ke89$d7n$1@sea.gmane.org>
	<ca471dc20606121234j190c058dg60307bf9176ec78d@mail.gmail.com>
	<e6kfmv$ivf$1@sea.gmane.org>
Message-ID: <ca471dc20606121237y32911294h7438d3e9d6cd4e38@mail.gmail.com>

No, because ValueError is the better exception for an invalid mode string.

On 6/12/06, Georg Brandl <g.brandl at gmx.net> wrote:
> Guido van Rossum wrote:
> > On 6/12/06, Georg Brandl <g.brandl at gmx.net> wrote:
> >> Guido van Rossum wrote:
> >> > Yup, although it's a change in behavior that would need to be studied
> >> > carefully for backwards incompatibilities. Usually it's given as a
> >> > constant, so there won't be any problems; but there might be code that
> >> > receives a mode string and attempts to test its validity by trying it
> >> > and catching IOError, such code would have to be changed.
> >> >
> >> > --Guido
> >> >
> >> > On 6/12/06, Kristj?n V. J?nsson <kristjan at ccpgames.com> wrote:
> >> >> I notice that file() throws an IOError when it detects an invalid mode
> >> >> string.  Wouldn't a ValueError be more appropriate?
> >> >
> >>
> >> The situation is even more complex with the current trunk. open() raises
> >> ValueError if it detects an invalid mode string, such as universal
> >> newline mode and a writable mode combined (the definition of
> >> what is invalid has been made stricter, the mode string now must begin
> >> with r, w, a or U), but it raises IOError if the OS call to fopen() fails
> >> because of an invalid mode string. This might need unification.
> >
> > That would be hard to fix unless we get rid of the stdio-based
> > implementation (which I intend to do in Py3k). I say we leave it alone
> > for now -- fopen() can fail for any number of platform-dependent
> > reasons and we can't really expect to do look-before-you-leap on this.
>
> One option would be to raise IOError in the former case too. That's what
> I meant with "unification".
>
> Cheers,
> Georg
>


-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From edcjones at comcast.net  Mon Jun 12 21:42:23 2006
From: edcjones at comcast.net (Edward C. Jones)
Date: Mon, 12 Jun 2006 15:42:23 -0400
Subject: [Python-Dev] External Package Maintenance
Message-ID: <448DC39F.6020101@comcast.net>

Guido van Rossum wrote:
 > developers contributing code without wanting
 > to give up control are the problem.

That hits the nail on the head. If something is added to the standard 
library, it becomes part of Python and must be controlled by whoever 
controls Python. Otherwise there will be chaos. If a developer puts his 
code into the standard library, it may or may not increase their status, 
ego, or income. Each developer must decide for himself.

I suggest that the BDFL make a Pronouncement on this subject.

From janssen at parc.com  Mon Jun 12 21:43:28 2006
From: janssen at parc.com (Bill Janssen)
Date: Mon, 12 Jun 2006 12:43:28 PDT
Subject: [Python-Dev] Import semantics
In-Reply-To: Your message of "Mon, 12 Jun 2006 12:30:53 PDT."
	<e6kfdd$i2p$1@sea.gmane.org> 
Message-ID: <06Jun12.124328pdt."58641"@synergy1.parc.xerox.com>

</F> writes:
> what [Python] specification?

Good meta-point.

Bill

From amk at amk.ca  Mon Jun 12 21:55:48 2006
From: amk at amk.ca (A.M. Kuchling)
Date: Mon, 12 Jun 2006 15:55:48 -0400
Subject: [Python-Dev] Dropping externally maintained packages
	(Was:Please stop changing wsgiref on the trunk)
In-Reply-To: <ca471dc20606121125n321a2dc2oad2b69458e6a3bf1@mail.gmail.com>
References: <5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612121658.03255358@sparrow.telecommunity.com>
	<ca471dc20606120943o3f9d97ccle62db7539edec1c1@mail.gmail.com>
	<448D9EA1.9000209@v.loewis.de> <e6k8ug$p9q$1@sea.gmane.org>
	<bbaeab100606121105t3a0e633fne59f67241cc28a@mail.gmail.com>
	<e6kb3d$2gg$1@sea.gmane.org>
	<ca471dc20606121125n321a2dc2oad2b69458e6a3bf1@mail.gmail.com>
Message-ID: <20060612195548.GA9840@rogue.amk.ca>

On Mon, Jun 12, 2006 at 11:25:21AM -0700, Guido van Rossum wrote:
> Have any instances of that actually happened? That would be a problem
> with *any* code in the Python library, not just external
> contributions, so I'm not sure why external contribions should be
> treated any differently here.

There have certainly been changes to xmlrpclib, such as support for
datetime objects and an allow_none flag to support handling None
values as a <nil/> element.  (I don't know if xmlrpclib is externally
released any longer, so maybe it doesn't count.)
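
For instance, something along these lines works now, if I'm remembering the
interface correctly (without allow_none it raises a TypeError):

    import xmlrpclib
    # marshals the None as <nil/> in the request body
    payload = xmlrpclib.dumps((None,), 'echo', allow_none=True)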

--amk

From pje at telecommunity.com  Mon Jun 12 21:53:58 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Mon, 12 Jun 2006 15:53:58 -0400
Subject: [Python-Dev] Dropping externally maintained packages
 (Was:Please stop changing wsgiref on the trunk)
In-Reply-To: <1f7befae0606121229t41771359w4b18e7ff230c2f23@mail.gmail.com>
References: <e6kb3d$2gg$1@sea.gmane.org>
	<5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612121658.03255358@sparrow.telecommunity.com>
	<ca471dc20606120943o3f9d97ccle62db7539edec1c1@mail.gmail.com>
	<448D9EA1.9000209@v.loewis.de> <e6k8ug$p9q$1@sea.gmane.org>
	<bbaeab100606121105t3a0e633fne59f67241cc28a@mail.gmail.com>
	<e6kb3d$2gg$1@sea.gmane.org>
Message-ID: <5.1.1.6.0.20060612154245.03964f20@sparrow.telecommunity.com>

At 03:29 PM 6/12/2006 -0400, Tim Peters wrote:
>That's all ordinary everyday maintenance, and, e.g., there is no
>mechanism to exempt anything in a checkout tree from reindent.py or
>PyChecker complaints.
>
>In addition, not shown above is that I changed test_wsgiref.py to stop
>a test failure under -O.  Given that we're close to the next Python
>release, and test_wsgiref was the only -O test failure, I wasn't going
>to let that stand.  I did wait ~30 hours between emailing about the
>problem and fixing it, but I like to whittle down my endless todo list
>too <0.4 wink>.

Your fix masked one of the *actual* problems, which was that 
wsgiref.validate (contributed by Ian Bicking) was also using asserts to 
check for validation failures.  This required a more extensive fix.  (See 
my reply to your problem report.)

Your post about the error was on Friday afternoon; I had a corrected 
version on Sunday evening, but I couldn't check it in because nobody told 
me about any of the "ordinary everyday maintenance" they were doing, and I 
had to figure out how to merge the now-divergent trees.

The whitespace changes I expected, since you previously told me about 
reindent.py.  The other changes I did not expect: my original message 
about the checkin requested that people at least keep me informed of 
changes (as does PEP 360), so I thought that people would abide by that, or 
at least notify me if they found it necessary to make a change, e.g. to fix 
a test.  Your email about the test problem didn't say you were making any 
changes.

Regardless of "everyday maintenance", my point was that I understood one 
procedure to be in effect, per PEP 360.  If nobody's expected to actually 
pay any attention to that procedure, there's no point in having the 
PEP.  Or if "everyday maintenance" is expected to be exempt, the PEP should 
reflect that.  Assuming that everybody knows which rules do and don't count 
is a non-starter on a project the size of Python.


From amk at amk.ca  Mon Jun 12 22:05:52 2006
From: amk at amk.ca (A.M. Kuchling)
Date: Mon, 12 Jun 2006 16:05:52 -0400
Subject: [Python-Dev] External Package Maintenance (was Re: Please
	stop changing wsgiref on the trunk)
In-Reply-To: <5.1.1.6.0.20060612143737.03130e80@sparrow.telecommunity.com>
References: <058501c68e4c$d6e701c0$bf03030a@trilan>
	<5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612121658.03255358@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612125445.01fd9258@sparrow.telecommunity.com>
	<ca471dc20606121042s77c0708fr54f9cea4d30772e9@mail.gmail.com>
	<058501c68e4c$d6e701c0$bf03030a@trilan>
	<5.1.1.6.0.20060612143737.03130e80@sparrow.telecommunity.com>
Message-ID: <20060612200552.GB9840@rogue.amk.ca>

On Mon, Jun 12, 2006 at 03:12:20PM -0400, Phillip J. Eby wrote:
> encountered this myself.  I *have* seen some developers make spurious 
> "cleanups" to working code that breaks compatibility with older Python 
> versions, though, just not in wsgiref.

Note that the standard library policy has always been that the library
for Python 2.X does not need to work with an earlier Python
interpreter.  Modules that must continue to work with earlier
versions are listed in PEP 291.

(Yes, another PEP to look at in addition to 360.  IMHO we should
require all modules with version constraints or external master source
to have comments indicating this *in the code*, at the top of every
source file, so that someone writing a patch or bugfix knows what the
requirements are.)
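
Something as simple as a header like this would do -- just a sketch of the
idea, not a proposed official format:

    # This module is kept in sync with an externally maintained release;
    # see PEP 360 before making changes, and keep the external maintainer
    # informed.  Must stay compatible with Python 2.3 and later (PEP 291).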

--amk

From p.f.moore at gmail.com  Mon Jun 12 22:00:15 2006
From: p.f.moore at gmail.com (Paul Moore)
Date: Mon, 12 Jun 2006 21:00:15 +0100
Subject: [Python-Dev] External Package Maintenance (was Re: Please
	stop changing wsgiref on the trunk)
In-Reply-To: <e6ka39$ucc$1@sea.gmane.org>
References: <5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612121658.03255358@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612125445.01fd9258@sparrow.telecommunity.com>
	<ca471dc20606121042s77c0708fr54f9cea4d30772e9@mail.gmail.com>
	<e6ka39$ucc$1@sea.gmane.org>
Message-ID: <79990c6b0606121300r68722b08r8e80b862e939e14b@mail.gmail.com>

On 6/12/06, Fredrik Lundh <fredrik at pythonware.com> wrote:
> Guido van Rossum wrote:
>
> > Maybe we should get serious about slimming down the core distribution
> > and having a separate group of people maintain sumo bundles containing
> > Python and lots of other stuff.
>
> there are already lots of people doing that (most Linux distributions add stuff, directly
> or indirectly; ActiveState and Enthought are doing that for windows, Nokia is doing
> that for the S60 platform, etc); the PEP 360 approach is an attempt to emulate that
> for the python.org distribution.

I'm not sure if it's a consequence of ActiveState's process, or of the
current state of Python library packaging, or something else, but one
big disadvantage I see to the ActiveState distribution is that it does
not allow upgrading of parts of their distribution (specifically, I
had problems because I couldn't upgrade pywin32 with a bugfix version,
as it clashed with their bundled version - this was in the days when
ActiveState releases were infrequent, and is probably much less of an
issue now).

Until that packaging issue is resolved (and again, maybe eggs provide
the answer here, I'm not sure) externally packaged sumo bundles have
some issues before they can fully replace the stdlib. I'd hate to
distribute something that had to depend on "Enthought Python" because
"ActiveState Python" was bundled with too old a release of pywin32 -
or whatever.

This is purely speculation (and fairly negative speculation at that!)
but I'd advocate caution for a while yet. Maybe ActiveState or
Enthought could provide some input on how/if sumo bundles can address
the need to upgrade "parts" of the distribution?

Paul.

From guido at python.org  Mon Jun 12 22:01:06 2006
From: guido at python.org (Guido van Rossum)
Date: Mon, 12 Jun 2006 13:01:06 -0700
Subject: [Python-Dev] External Package Maintenance
In-Reply-To: <448DC39F.6020101@comcast.net>
References: <448DC39F.6020101@comcast.net>
Message-ID: <ca471dc20606121301k50ab0b0an2fe7d221ab87d385@mail.gmail.com>

On 6/12/06, Edward C. Jones <edcjones at comcast.net> wrote:
> Guido van Rossum wrote:
>  > developers contributing code without wanting
>  > to give up control are the problem.
>
> That hits the nail on the head. If something is added to the standard
> library, it becomes part of Python and must be controlled by whoever
> controls Python. Otherwise there will be chaos. If a developer puts his
> code into the standard library, it may or may not increase their status,
> ego, or income. Each developer must decide for himself.
>
> I suggest that the BDFL make a Pronouncement on this subject.

I think I pretty much did already -- going forward, I'd like to see
that contributing something to the stdlib means that from then on
maintenance is done using the same policies and guidelines as the rest
of the stdlib (which are pretty conservative as far as new features
go), and not subject to the original contributor's veto or prior
permission. Rolling back changes without discussion is out of the
question. (Sometimes it's okay to roll back a change unilaterally, but
that's pretty much only if it breaks the unit tests and the original
author does not respond to requests to fix his mistake within a
reasonable time.)

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Mon Jun 12 22:02:15 2006
From: guido at python.org (Guido van Rossum)
Date: Mon, 12 Jun 2006 13:02:15 -0700
Subject: [Python-Dev] External Package Maintenance (was Re: Please
	stop changing wsgiref on the trunk)
In-Reply-To: <20060612200552.GB9840@rogue.amk.ca>
References: <5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612121658.03255358@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612125445.01fd9258@sparrow.telecommunity.com>
	<ca471dc20606121042s77c0708fr54f9cea4d30772e9@mail.gmail.com>
	<058501c68e4c$d6e701c0$bf03030a@trilan>
	<5.1.1.6.0.20060612143737.03130e80@sparrow.telecommunity.com>
	<20060612200552.GB9840@rogue.amk.ca>
Message-ID: <ca471dc20606121302q7655cd9cxba4155b4a05493e@mail.gmail.com>

On 6/12/06, A.M. Kuchling <amk at amk.ca> wrote:
> IMHO we should
> require all modules with version constraints or external master source
> to have comments indicating this *in the code*, at the top of every
> source file, so that someone writing a patch or bugfix knows what the
> requirements are.

Agreed.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From theller at python.net  Mon Jun 12 22:08:32 2006
From: theller at python.net (Thomas Heller)
Date: Mon, 12 Jun 2006 22:08:32 +0200
Subject: [Python-Dev] Dropping externally maintained packages (Was:
 Please stop changing wsgiref on the trunk)
In-Reply-To: <448D9EA1.9000209@v.loewis.de>
References: <5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com>	<5.1.1.6.0.20060612121658.03255358@sparrow.telecommunity.com>	<ca471dc20606120943o3f9d97ccle62db7539edec1c1@mail.gmail.com>
	<448D9EA1.9000209@v.loewis.de>
Message-ID: <e6khjs$qqv$1@sea.gmane.org>

Martin v. L?wis wrote:
> Guido van Rossum wrote:
>>> 4 of the 6 modules in PEP 360 were added to Python in 2.5, so if you want
>>> to get rid of it, *now* would be the time.
>> 
>> I'm all for it.
>> 
>> While I am an enthusiastic supporter of several of those additions, I
>> am *not* in favor of the special status granted to software
>> contributed by certain developers, since it is a burden for all other
>> developers.
> 
> Then I guess we should deal with this before 2.5b1, and delay 2.5b1 until the
> status of each of these has been clarified.
> 
> Each maintainer should indicate whether he is happy with a "this is
> part of Python" approach. If so, the entry should be removed from PEP
> 360 (*); if not, the code should be removed from Python before beta 1.

I will be happy to say "ctypes is part of Python" (although I *fear* it
is not one of the packages enthusiastically supported by Guido ;-).
Well, if the compatibility requirements can be retained, at least (ctypes
should currently be compatible with Python 2.3, but that can probably be raised
to 2.4 when Python 2.5 is officially released, or shortly thereafter).

I am *very* thankful for the fixes, the code review, the suggestions,
and the encouragement I got from various python-devers.  I'm even happy
to revert changes by others done by accident (which destroy compatibility
with Python 2.3, for example. Well, there was only one).

Thomas


From fredrik at pythonware.com  Mon Jun 12 22:14:38 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Mon, 12 Jun 2006 22:14:38 +0200
Subject: [Python-Dev] "can't unpack IEEE 754 special value on non-IEEE
	platform"
Message-ID: <e6khve$sbs$1@sea.gmane.org>

I just ran the PIL test suite using the current Python trunk, and the 
tests for a user-contributed plugin raised an interesting exception:

ValueError: can't unpack IEEE 754 special value on non-IEEE platform

fixing this is easy, but the error is somewhat confusing: since when is 
a modern Intel CPU not an IEEE platform?

</F>


From barry at python.org  Mon Jun 12 22:17:31 2006
From: barry at python.org (Barry Warsaw)
Date: Mon, 12 Jun 2006 16:17:31 -0400
Subject: [Python-Dev] External Package Maintenance (was Re: Please
 stop changing wsgiref on the trunk)
In-Reply-To: <5.1.1.6.0.20060612143737.03130e80@sparrow.telecommunity.com>
References: <058501c68e4c$d6e701c0$bf03030a@trilan>
	<5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612121658.03255358@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612125445.01fd9258@sparrow.telecommunity.com>
	<ca471dc20606121042s77c0708fr54f9cea4d30772e9@mail.gmail.com>
	<058501c68e4c$d6e701c0$bf03030a@trilan>
	<5.1.1.6.0.20060612143737.03130e80@sparrow.telecommunity.com>
Message-ID: <20060612161731.379e344f@resist.wooz.org>

On Mon, 12 Jun 2006 15:12:20 -0400
"Phillip J. Eby" <pje at telecommunity.com> wrote:

> Now, Barry's approach to the email package makes good sense to me,
> and I'd use it, except that SVN externals can't sync individual
> files.  I'd have to create Lib/wsgiref/tests (and make a dummy
> Lib/test/test_wsgiref that invokes them)

Which is what email had to do.

> and Lib/wsgiref/doc (and make Doc/lib/lib.tex include libwsgiref.tex
> from there).

See mimelib.tex in Python's docs -- I had to do the same thing.  The
PITA is that I have to build the docs in Python's svn and then copy
them into the sandbox for the standalone distro.  This isn't too bad
when it's just one package, but if we adopt this approach more
generally, we may want to establish some guidelines, which of course
I'd be happy to stick to.

-Barry

From robert.kern at gmail.com  Mon Jun 12 22:24:11 2006
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 12 Jun 2006 15:24:11 -0500
Subject: [Python-Dev] External Package Maintenance (was Re: Please
 stop changing wsgiref on the trunk)
In-Reply-To: <79990c6b0606121300r68722b08r8e80b862e939e14b@mail.gmail.com>
References: <5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com>	<5.1.1.6.0.20060612121658.03255358@sparrow.telecommunity.com>	<5.1.1.6.0.20060612125445.01fd9258@sparrow.telecommunity.com>	<ca471dc20606121042s77c0708fr54f9cea4d30772e9@mail.gmail.com>	<e6ka39$ucc$1@sea.gmane.org>
	<79990c6b0606121300r68722b08r8e80b862e939e14b@mail.gmail.com>
Message-ID: <e6kih6$v4v$1@sea.gmane.org>

Paul Moore wrote:

> This is purely speculation (and fairly negative speculation at that!)
> but I'd advocate caution for a while yet. Maybe ActiveState or
> Enthought could provide some input on how/if sumo bundles can address
> the need to upgrade "parts" of the distribution?

We at Enthought are moving towards distributing a smallish bootstrap
distribution and providing most of the packages as eggs. Upgrades should then be
an easy_install away.
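
Once the bootstrap is installed, upgrading a single package should be
something like the following, with SomePackage standing in for whatever
needs updating:

    easy_install -U SomePackage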

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth."
  -- Umberto Eco


From guido at python.org  Mon Jun 12 22:35:36 2006
From: guido at python.org (Guido van Rossum)
Date: Mon, 12 Jun 2006 13:35:36 -0700
Subject: [Python-Dev] Dropping externally maintained packages (Was:
	Please stop changing wsgiref on the trunk)
In-Reply-To: <e6khjs$qqv$1@sea.gmane.org>
References: <5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612121658.03255358@sparrow.telecommunity.com>
	<ca471dc20606120943o3f9d97ccle62db7539edec1c1@mail.gmail.com>
	<448D9EA1.9000209@v.loewis.de> <e6khjs$qqv$1@sea.gmane.org>
Message-ID: <ca471dc20606121335we776d62r5333b3d001692489@mail.gmail.com>

On 6/12/06, Thomas Heller <theller at python.net> wrote:
> I will be happy to say "ctypes is part of Python" (although I *fear* it
> is not one of the packages enthusiastically supported by Guido ;-).

I don't plan to use it myself, but I'm very happy that it's in the
stdlib since so many people like it.

Somebody please update PEP 360 (and PEP 291).

> Well, if the compatibility requirements can be retained, at least (ctypes
> should currently be compatible with Python 2.3, but that can probably be raised
> to 2.4 when Python 2.5 is officially released, or shortly thereafter).

That ought to be indicated in PEP 291 and in the ctypes source code somewhere.

> I am *very* thankful for the fixes, the code review, the suggestions,
> and the encouragement I got from various python-devers.  I'm even happy

You're welcome. And we're thankful for your contribution!

> I'm even happy
> to revert changes by others done by accident (which destroy compatibility
> with Python 2.3, for example. Well, there was only one).

Please keep in mind that reverting someone else's changes without
prior discussion is a very rude thing to do. The proper procedure is
to point out their mistake and give them the opportunity to revert it
themselves (or defend their change -- so a public discussion can
ensue).

Disagreements should not be settled by battling checkins; I'd like to
see that developers who ignore this rule repeatedly risk having their
commit privileges taken away temporarily.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From theller at python.net  Mon Jun 12 22:44:30 2006
From: theller at python.net (Thomas Heller)
Date: Mon, 12 Jun 2006 22:44:30 +0200
Subject: [Python-Dev] Dropping externally maintained packages (Was:
 Please stop changing wsgiref on the trunk)
In-Reply-To: <ca471dc20606121335we776d62r5333b3d001692489@mail.gmail.com>
References: <5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com>	<5.1.1.6.0.20060612121658.03255358@sparrow.telecommunity.com>	<ca471dc20606120943o3f9d97ccle62db7539edec1c1@mail.gmail.com>	<448D9EA1.9000209@v.loewis.de>
	<e6khjs$qqv$1@sea.gmane.org>
	<ca471dc20606121335we776d62r5333b3d001692489@mail.gmail.com>
Message-ID: <e6kjna$2r6$1@sea.gmane.org>

Guido van Rossum wrote:
> On 6/12/06, Thomas Heller <theller at python.net> wrote:
>> I will be happy to say "ctypes is part of Python" (although I *fear* it
>> is not one of the packages enthusiastically supported by Guido ;-).
> 
> I don't plan to use it myself, but I'm very happy that it's in the
> stdlib since so many people like it.

;-)

> Somebody please update PEP 360 (and PEP 291).

I'll do that.

>> Well, if the compatibility requirements can be retained, at least (ctypes
>> should currently be compatible with Python 2.3, but that can probably be raised
>> to 2.4 when Python 2.5 is officially released, or shortly thereafter).
> 
> That ought to be indicated in PEP 291 and in the ctypes source code somewhere.

I'll add markers in the code.

>> I am *very* thankful for the fixes, the code review, the suggestions,
>> and the encouragement I got from various python-devers.
> 
> You're welcome. And we're thankful for your contribution!
> 
>> I'm even happy
>> to revert changes by others done by accident (which destroy compatibility
>> with Python 2.3, for example. Well, there was only one).
> 
> Please keep in mind that reverting someone else's changes without
> prior discussion is a very rude thing to do. The proper procedure is
> to point out their mistake and give them the opportunity to revert it
> themselves (or defend their change -- so a public discussion can
> ensue).

> Disagreements should not be settled by battling checkins; I'd like to
> see that developers who ignore this rule repeatedly risk having their
> commit privileges taken away temporarily.

Ok, I'll follow this procedure from now on.

Thanks,
Thomas


From pje at telecommunity.com  Mon Jun 12 22:44:21 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Mon, 12 Jun 2006 16:44:21 -0400
Subject: [Python-Dev] External Package Maintenance (was Re: Please stop
 changing wsgiref on the trunk)
In-Reply-To: <ca471dc20606121325r30737f21xf442463867ec90df@mail.gmail.com>
References: <5.1.1.6.0.20060612160657.03967430@sparrow.telecommunity.com>
	<5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612121658.03255358@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612125445.01fd9258@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612142631.01e83a30@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612145237.030c9860@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612153726.01ea6990@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612160657.03967430@sparrow.telecommunity.com>
Message-ID: <5.1.1.6.0.20060612163415.03af15d0@sparrow.telecommunity.com>

[posting back to python-dev in case others also perceived my original 
message as impolite]

At 01:25 PM 6/12/2006 -0700, Guido van Rossum wrote:
>Oh, and the tone of your email was *not* polite. Messages starting
>with "I wasted an hour of my time" are not polite pretty much by
>definition.

Actually, I started out with "please" -- twice, after having previously 
asked please in advance.  I've also seen lots of messages on Python-Dev 
where Tim Peters wrote about having wasted time due to other folks not 
following established procedures, and I tried to emulate his tone.  I guess 
I didn't do a very good job, but not everybody is as funny as Tim is.  :)

Usually he manages to make it seem as though he would really be happy to 
give up his nights and weekends but that sadly, he just doesn't have any 
more time right at this particular moment.  A sort of "it's not you, it's 
me" thing.  I guess I just left out that particular bit of 
sleight-of-mouth.  :)

Anyway, will anyone who was offended by the original message please pretend 
that it was delightfully witty and written by Tim instead?  Thanks.  ;)


From pje at telecommunity.com  Mon Jun 12 22:45:36 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Mon, 12 Jun 2006 16:45:36 -0400
Subject: [Python-Dev] External Package Maintenance
In-Reply-To: <ca471dc20606121301k50ab0b0an2fe7d221ab87d385@mail.gmail.com>
References: <448DC39F.6020101@comcast.net>
 <448DC39F.6020101@comcast.net>
Message-ID: <5.1.1.6.0.20060612161032.03aef480@sparrow.telecommunity.com>

At 01:01 PM 6/12/2006 -0700, Guido van Rossum wrote:
>I think I pretty much did already -- going forward, I'd like to see
>that contributing something to the stdlib means that from then on
>maintenance is done using the same policies and guidelines as the rest
>of the stdlib (which are pretty conservative as far as new features
>go), and not subject to the original contributor's veto or prior
>permission. Rolling back changes without discussion is out of the
>question.

I think there's some kind of misunderstanding here.  I didn't ask for veto 
or prior permission.  I just want to keep the external release in sync.

I also didn't roll anything back, at least not intentionally.  I was trying 
to merge the Python changes into the external release, and vice 
versa.  Two-way syncing is difficult and error-prone, especially when 
you're thinking you only need to do a one-way sync!  So if I managed to 
roll something back *un*intentionally in the process last night, I would 
hope someone would let me know.

That was my sole complaint: I requested a particular change process to 
ensure that syncing would be one-way, from wsgiref to Python.  If it has to 
be the other way, from Python to wsgiref, so be it.  However, my impression 
from PEP 360 was that the way I was asking for was the "One Obvious Way" of 
doing it.

This is not now, nor was it ever a control issue; I'd appreciate it if 
you'd stop implying that control has anything to do with it.  At most, it's 
a widespread ignorance and/or misunderstanding as to the optimum way of 
handling stdlib packages with external distribution.

It sounds like Barry has a potentially workable way of managing it that 
might reasonably be blessed as the One Obvious Way, and I'm certainly 
willing to try it.  I'd still rather have a Packages/ directory, but 
beggars can't be choosers.  However, if this is to be the One Obvious Way, 
it should be documented in a PEP as part of the "how packages get in the 
stdlib and how they're maintained".


From tim.peters at gmail.com  Mon Jun 12 22:50:41 2006
From: tim.peters at gmail.com (Tim Peters)
Date: Mon, 12 Jun 2006 16:50:41 -0400
Subject: [Python-Dev] Dropping externally maintained packages
	(Was:Please stop changing wsgiref on the trunk)
In-Reply-To: <5.1.1.6.0.20060612154245.03964f20@sparrow.telecommunity.com>
References: <5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612121658.03255358@sparrow.telecommunity.com>
	<ca471dc20606120943o3f9d97ccle62db7539edec1c1@mail.gmail.com>
	<448D9EA1.9000209@v.loewis.de> <e6k8ug$p9q$1@sea.gmane.org>
	<bbaeab100606121105t3a0e633fne59f67241cc28a@mail.gmail.com>
	<e6kb3d$2gg$1@sea.gmane.org>
	<5.1.1.6.0.20060612154245.03964f20@sparrow.telecommunity.com>
Message-ID: <1f7befae0606121350sb1a26d9u8643335b577ee495@mail.gmail.com>

[Tim]
>> In addition, not shown above is that I changed test_wsgiref.py to stop
>> a test failure under -O.  Given that we're close to the next Python
>> release, and test_wsgiref was the only -O test failure, I wasn't going
>> to let that stand.  I did wait ~30 hours between emailing about the
>> problem and fixing it, but I like to whittle down my endless todo list
>> too <0.4 wink>.

[Phillip]
> Your fix masked one of the *actual* problems, which was that
> wsgiref.validate (contributed by Ian Bicking) was also using asserts to
> check for validation failures.  This required a more extensive fix.  (See
> my reply to your problem report.)

No, I didn't mask that.  Two individual tests in test_wsgiref were
failing under -O, and I only fixed one of them (testFileWrapper) at
the time.  In the checkin message for that (rev 46871), I noted that
the other failure (test_simple_validation_error) was due to the coding
of wsgiref.validate, and also noted that extensive changes would be
needed to repair that one.  The failure of the test I did repair was
solely due to code in test_wsgiref.py.
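
For anyone following along: the underlying issue is that assert statements
are compiled away under -O, so validation written that way silently vanishes.
An illustrative sketch, not the actual wsgiref code:

    def check_status(status):
        # under python -O, this check disappears entirely
        assert isinstance(status, str), "Status must be a string"

    def check_status_safe(status):
        # an -O-proof check has to raise explicitly
        if not isinstance(status, str):
            raise AssertionError("Status must be a string")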

> Your post about the error was on Friday afternoon; I had a corrected
> version on Sunday evening, but I couldn't check it in because nobody told
> me about any of the "ordinary everyday maintenance" they were doing, and I
> had to figure out how to merge the now-divergent trees.

I'm sorry, but I don't think you can expect to get special email about
these never-ending kinds of small changes.  It's not realistic.  They
all show up on the python-checkins list, which you could filter (you
are subscribed to that, right?).

You could also use the svn command I showed last time to get a list of
all checkins that have modified one of your files since the last time
you blessed it.  I know this is possible, since at one point I had to
keep track of random changes to ZODB from copies in umpteen different
branches of Zope2 and Zope3.  I also know it's a PITA ;-)  But AFAIK
you have only _one_ "external copy" to keep track of now, and IME the
pain grew more like the square of the number of external copies (each
pair of external copies effectively had to get compared).
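
For the record, something along these lines does the trick, using the
revision of the last "Sync w/external release" checkin shown earlier as the
starting point:

    svn log -v -r 46887:HEAD Lib/wsgiref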

> The whitespace changes I expected, since you previously told me about
> reindent.py.  The other changes I did not expect, since my original message
> about the checkin requested that people at least keep me informed of
> changes (as does PEP 360), so I thought that people would abide by that or
> at least notify me if they found it necessary to make a change to e.g. fix
> a test.

Well, I don't pay any attention to which files reindent.py changes, so
you'll never get special email from me about that.  It looked like
Brett's checkin was the result of some similarly mechanical "look for
code directories that forgot to svn:ignore compiled Python files"
process.  Neal's PyChecker checkin also touched more than one
subproject, and he probably plain forgot that anything with "wsgiref"
in its path was supposed to get special treatment.  These things
simply will happen, and especially in more-or-less mindless cleanup
checkins.

> Your email about the test problem didn't say you were making any changes.

There's also a rule that all tests must pass, and as a release grows
close that gets higher priority.  As I said, I waited about 30 hours,
but got no response and saw no action, so moved it along simply
because it needed to be repaired and I was able to make time for it.
If you hadn't repaired test_simple_validation_error, I would have had
no qualms about hacking in anything to stop that failure too before
the release.  If we had been "far" away from a release, I wouldn't
have done anything here (beyond sending the email).

> Regardless of "everyday maintenance", my point was that I understood one
> procedure to be in effect, per PEP 360.  If nobody's expected to actually
> pay any attention to that procedure, there's no point in having the
> PEP.  Or if "everyday maintenance" is expected to be exempt, the PEP should
> reflect that.

Since that one is the only realistic outcome I can see, I think the
PEP should say so.

> Assuming that everybody knows which rules do and don't count is a non-starter
> on a project the size of Python.

I expect most rules will never be written down, either.  This works
fine so long as people of good will cooperate with liberal doses of
common sense and tolerance.  It doesn't work at all when the process
gets adversarial.  This project's traditional response to that hasn't
been to craft ever-more legalistic rules (in part because nobody will
volunteer for that), but to give the BDFL the last word on everything.
 Alas, that shows signs of not scaling well to dozens of active
developers either.

Short of freezing the code base and dropping support for buggy
platforms like Linux ;-), I don't pretend to have a solution.

From pje at telecommunity.com  Mon Jun 12 22:06:25 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Mon, 12 Jun 2006 16:06:25 -0400
Subject: [Python-Dev] External Package Maintenance
In-Reply-To: <448DC39F.6020101@comcast.net>
Message-ID: <5.1.1.6.0.20060612155543.01e809c0@sparrow.telecommunity.com>

At 03:42 PM 6/12/2006 -0400, Edward C. Jones wrote:
>Guido van Rossum wrote:
>  > developers contributing code without wanting
>  > to give up control are the problem.
>
>That hits the nail on the head.

Actually it's both irrelevant and insulting.

I just want changes made by the Python core developers to be reflected in 
the external releases.  I'd be more than happy to move the external release 
to the Python SVN server if that would make it happen.

If there was only one release point for the package, I would've had no 
problem with any of the changes made by Tim or AMK or anybody else.  The 
"control" argument is a total red herring.  If I had an issue with the 
actual changes themselves, I'd address it on checkins or dev, as is normal!

The "nail" here is simply that maintaining two versions of a package is 
awkward if changes are being made in both places.  I'd love to have only 
one place in which wsgiref is maintained, but Python's current directory 
layout doesn't allow me to put all of wsgiref in "one place".

And if we hit *that* "nail" on the head (instead of hitting the external 
authors on theirs), it is a win for all the external contributors.


From fredrik at pythonware.com  Mon Jun 12 22:54:35 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Mon, 12 Jun 2006 22:54:35 +0200
Subject: [Python-Dev] External Package Maintenance
In-Reply-To: <5.1.1.6.0.20060612161032.03aef480@sparrow.telecommunity.com>
References: <448DC39F.6020101@comcast.net> <448DC39F.6020101@comcast.net>
	<ca471dc20606121301k50ab0b0an2fe7d221ab87d385@mail.gmail.co m>
	<5.1.1.6.0.20060612161032.03aef480@sparrow.telecommunity.com>
Message-ID: <e6kkaa$57c$1@sea.gmane.org>

Phillip J. Eby wrote:

> I'd still rather have a Packages/ directory, but beggars can't be
 > choosers.

there's plenty of time to work on that for 2.6...

</F>


From tim.peters at gmail.com  Mon Jun 12 22:58:59 2006
From: tim.peters at gmail.com (Tim Peters)
Date: Mon, 12 Jun 2006 16:58:59 -0400
Subject: [Python-Dev] "can't unpack IEEE 754 special value on non-IEEE
	platform"
In-Reply-To: <e6khve$sbs$1@sea.gmane.org>
References: <e6khve$sbs$1@sea.gmane.org>
Message-ID: <1f7befae0606121358k30c539aem12b87db0f4def49a@mail.gmail.com>

[Fredrik Lundh]
> I just ran the PIL test suite using the current Python trunk, and the
> tests for a user-contributed plugin raised an interesting exception:
>
> ValueError: can't unpack IEEE 754 special value on non-IEEE platform
>
> fixing this is easy, but the error is somewhat confusing: since when is
> a modern Intel CPU not an IEEE platform?

Which OS and compiler were in use?  A possible cause is that the
platform didn't supply #defines for SIZEOF_DOUBLE and/or SIZEOF_FLOAT
when Python was compiled.  This was, e.g., true on Windows before rev
46065.

On an Intel box, you should see this:

>>> float.__getformat__('double')
'IEEE, little-endian'

If you get 'unknown' instead, see above.
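
For reference, I believe the quickest way to provoke that message on such a
build is to unpack bytes holding an IEEE special value (the four bytes below
are single-precision +infinity); on a build that knows its float format this
just returns an infinity:

    import struct
    # standard ('>') formats go through Python's portable IEEE unpack code;
    # if the float format is 'unknown', this raises the ValueError above
    struct.unpack('>f', '\x7f\x80\x00\x00')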

From brett at python.org  Mon Jun 12 23:06:15 2006
From: brett at python.org (Brett Cannon)
Date: Mon, 12 Jun 2006 14:06:15 -0700
Subject: [Python-Dev] External Package Maintenance
In-Reply-To: <e6kkaa$57c$1@sea.gmane.org>
References: <448DC39F.6020101@comcast.net>
	<5.1.1.6.0.20060612161032.03aef480@sparrow.telecommunity.com>
	<e6kkaa$57c$1@sea.gmane.org>
Message-ID: <bbaeab100606121406g355fbe52g6c19bece77142c4d@mail.gmail.com>

On 6/12/06, Fredrik Lundh <fredrik at pythonware.com> wrote:
>
> Phillip J. Eby wrote:
>
> > I'd still rather have a Packages/ directory, but beggars can't be
> > choosers.
>
> there's plenty of time to work on that for 2.6...



I have started a thread on python-3000 to try to get a PEP pulled together
to solidify what we want for accepted PEPs and thus lock down what is
expected.  I purposely started out heavy-handed to get some discussion and
back-and-forth going, so we can work something out.

Started the thread there instead of here since I hope the resulting PEP can
be used to help decide what modules should be ripped out in Py3K.  We can
then use the guidelines for future acceptance for the 2.x branch.

-Brett

From fredrik at pythonware.com  Mon Jun 12 23:06:18 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Mon, 12 Jun 2006 23:06:18 +0200
Subject: [Python-Dev] External Package Maintenance
In-Reply-To: <5.1.1.6.0.20060612155543.01e809c0@sparrow.telecommunity.com>
References: <448DC39F.6020101@comcast.net>
	<5.1.1.6.0.20060612155543.01e809c0@sparrow.telecommunity.com>
Message-ID: <e6kl0a$96q$2@sea.gmane.org>

Phillip J. Eby wrote:

> I just want changes made by the Python core developers to be reflected in 
> the external releases.

and presumably, the reason for that isn't that you care about your ego, 
but that you care about your users.

</F>


From fredrik at pythonware.com  Mon Jun 12 23:06:01 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Mon, 12 Jun 2006 23:06:01 +0200
Subject: [Python-Dev] "can't unpack IEEE 754 special value on non-IEEE
	platform"
In-Reply-To: <1f7befae0606121358k30c539aem12b87db0f4def49a@mail.gmail.com>
References: <e6khve$sbs$1@sea.gmane.org>
	<1f7befae0606121358k30c539aem12b87db0f4def49a@mail.gmail.com>
Message-ID: <e6kkvp$96q$1@sea.gmane.org>

Tim Peters wrote:

> Which OS and compiler were in use?  A possible cause is that the
> platform didn't supply #defines for SIZEOF_DOUBLE and/or SIZEOF_FLOAT
> when Python was compiled.  This was, e.g., true on Windows before rev
> 46065.

and looking again, I was indeed running 2.5 alpha 2 (revision 45740), 
not 2.5 trunk.  oh well.

</F>


From thomas at python.org  Mon Jun 12 23:31:14 2006
From: thomas at python.org (Thomas Wouters)
Date: Mon, 12 Jun 2006 23:31:14 +0200
Subject: [Python-Dev] Source control tools
Message-ID: <9e804ac0606121431o5a801df5w829a9c65f3d3d855@mail.gmail.com>

On 6/12/06, Guido van Rossum <guido at python.org> wrote:

> Perhaps issues like these should motivate us to consider a different
> source control tool. There's a new crop of tools out that could solve
> this by having multiple repositories that can be sync'ed with each
> other. This sounds like an important move towards world peace!


It would be an important move towards world peace, if it didn't inspire
whole new SCM holy wars :-)  I have a fair bit of experience with different
SCMs (VCs, source control tools, whatever you want to call them), so I'll
take this opportunity to toss up some observations. Not that switching to
another SCM will really solve the issues you are referring to, but I happen
to think switching to another SCM is a great idea :) Although I don't see an
obvious candidate at the moment... I'll explain why.

First of all, changing SCM means changing how everyone works. It's nothing
like the CVS->Subversion switch, which changed very little in workflow. All
the cool SCMs use 'real branches', and to get full benefit you have to
switch your development to a 'branch oriented model', if you'll pardon the
buzzwordyness. At XS4ALL we've used BitKeeper for a few years now, and while
it really took a while for some of the developers to catch on, the branch
thing makes parallel development *much* easier. If you haven't experienced
it yourself, it's hard to see why it matters so much (except maybe in cases
with extreme parallel development, like the Linux kernel), but it really
does make life a lot easier in the long run, for programmers and release
managers.

Secondly, the way most of the 'less-linear' SCMs work is that everyone has
their own repository. That is, in fact, what makes it so useful -- no need
for central repository access, no need for a network connection for full
operability, no need for write access to get your changes checked in
(locally), and easy interchanging between repositories. With a large
(history-rich) project like Python, though, those repositories can get a tad
large. Most of the SCMs have ways to work around it (not downloading the
full history, side-storing the full history in a separate branch, etc) but
it's still an extra inconvenience. Now, me, I don't mind downloading a
couple hundred megabytes to get a 25Mb sourcetree with full history, but I
have a 1Gbit link to the python.org servers :) On the other hand, with most
of the SCMs, you only download that full history once, and then locally
branch off of that.

The real reason I think we should consider other SCMs is because I fear what
the history will look like when 3.0 is done. In Subversion, merges of
branches don't preserve history -- you have to do it yourself. Given the way
Subversion works, I don't think that'll really change; it's just not what
Subversion is meant to do (and that's fine.) It does mean, however, that
when we switch the trunk to 3.0, we have to decide between the history of
the trunk or the history of the p3yk branch. We either merge the p3yk branch
into the trunk, making a single huge checkin message explaining all the
changes (or not), or we swap the trunk and the p3yk branch. The former means
'svn blamelog', for instance, will show a single revision and a single
author for *all* p3yk changes. The latter means 'svn blamelog' will group
the trunk changes into the merge commits you can already see on the
python-3000-checkins list: a block of merges at a time, based on whenever I
(or someone else) has the time to merge the trunk in. So, in that case, 'svn
blamelog' will show *me* as author of all 2.5-to-2.7 changes, at a time the
original change didn't go in, with log messages that are largely irrelevant
;-) And the mess gets bigger if part of p3yk or trunk's development is done
in other branches -- svnmerge log messages hidden in svnmerge log messages.
ugh.

Before XS4ALL switched to BitKeeper, I spent quite a while examining
different SCMs, but at the time, they just weren't at the stage of
development you'd trust your company development to (not if you can afford
BitKeeper anyway ;)  After (re-)experiencing the pain that is
Subversion/CVS-style branching with the p3yk branch and the manual trunk
merges, I went ahead and checked out the current state of the alternatives.
There are quite a few, now (Monotone, Darcs, Git/Cogito, Mercurial,
Arch/tla/Bazaar, Bazaar-NG, Arx, CodeVille, SVK) and I haven't had time to
give them all the in-depth examination they are worthy of, but so far it
looks like only a few of them currently scale well enough to handle a large
(history-rich) project like Python. Not that it's fair to expect them to
scale well, mind you, given that most of them are only a few years old and
most don't claim to be "1.0".

Using 'tailor' ( http://www.darcs.net/DarcsWiki/Tailor ) I imported the
Python sourcetree with full history into Darcs and Git. I also did a partial
import into Monotone (history going back a few months) -- the Monotone
import was a lot slower than the others, and I didn't feel like waiting a
week. I then made a branch of each and imported the p3yk branch into them
(using some hand-crafting of tailor's internal data, as it doesn't seem to
support branch-imports at the moment.) Darcs was being troublesome at that
point, but I haven't spent the time to figure out if it was something I or
tailor did wrong in the import. As I said, Monotone was rather slow, which
is not surprising considering it does a lot of signing of digital
certificates. I personally like Monotone a lot, because its central
branch-database is the 'next step up' from what most SCMs do and because I
really like the cryptographic aspect, but it's probably too complex for
Python. Git, the 'low level SCM' developed for the Linux kernel, is
incredibly fast in normal operation and does branches quite well. I suspect
much of its speed is lost on non-Linux platforms, though, and I don't know
how well it works on Windows ;)

I did partial imports into Mercurial and Bazaar-NG, but I got interrupted
and couldn't draw any conclusions -- although from looking at the
implementation, I don't think they'd scale very well at the moment (but that
could probably be fixed.) I should also look at SVK (a BitKeeper-style SCM
on top of Subversion, really), but tailor doesn't support it (yet) and my
last look was just too depressing to cope with manually importing Python.

In short[*], I don't see an immediate candidate for an alternate SCM for
Python (although Git is sexy), but there's lots of long-term possibilities.
I intend to keep my Git and Darcs repositories up to date (it's little
effort to make tailor update them), tailorize Mercurial, Bazaar-NG, (full)
Monotone and probably others, tailorize some branches as well, and publish
them somewhere, hopefully with instructions, observations and honest
opinions (I just need to find the right place to host it, as e.g. Monotone
really wants a separate daemon process to run.)  I also intend to do my own
p3yk development in one of those SCMs; I can just export patches and apply
them to SVN when they're ready ;P I would like to hear if others have any
interest at all in this, though, if only to keep me motivated during the
tediously long tailorizing runs :)

Oh, and in case it matters to people: tailor, Mercurial and Bazaar-NG are
written in Python.

Blurt-blurt'ly y'rs,

[*] Short? This whole mail was short! I can talk for hours about the benefit
of proper branches and what kind of stuff is easier, better and more
efficient with them. I can draw huge ASCII diagrams explaining the
fundamental difference between CVS/SVN,
BitKeeper/Arch/Darcs/Bazaar-NG/Mercurial and Monotone (yes, that's three
groups), and I have powerpoint(!) sheets I used to give a presentation about
how and why BitKeeper works at work. It's probably a bit off-topic here,
though, so don't tempt me ;P
-- 
Thomas Wouters <thomas at python.org>

Hi! I'm a .signature virus! copy me into your .signature file to help me
spread!

From michael.walter at gmail.com  Mon Jun 12 23:33:49 2006
From: michael.walter at gmail.com (Michael Walter)
Date: Mon, 12 Jun 2006 23:33:49 +0200
Subject: [Python-Dev] Switch statement
In-Reply-To: <448DA4CF.80907@egenix.com>
References: <20060610142736.GA19094@21degrees.com.au>
	<448DA4CF.80907@egenix.com>
Message-ID: <877e9a170606121433y6bb6aca0l39642bc361aedab8@mail.gmail.com>

Maybe "switch" became a keyword with the patch..

Regards,
Michael

On 6/12/06, M.-A. Lemburg <mal at egenix.com> wrote:
> Thomas Lee wrote:
> > Hi all,
> >
> > As the subject of this e-mail says, the attached patch adds a "switch"
> > statement to the Python language.
> >
> > However, I've been reading through PEP 275 and it seems that the PEP
> > calls for a new opcode - SWITCH - to be added to support the new
> > construct.
> >
> > I got a bit lost as to why the SWITCH opcode is necessary for the
> > implementation of the PEP. The reasoning seems to be
> > improving performance, but I'm not sure how a new opcode could improve
> > performance.
> >
> > Anybody care to take the time to explain this to me, perhaps within the
> > context of my patch?
>
> Could you upload your patch to SourceForge ? Then I could add
> it to the PEP.
>
> Thomas wrote a patch which implemented the switch statement
> using an opcode. The reason was probably that switch works
> a lot like e.g. the for-loop which also opens a new block.
>
> Could you explain how your patch works ?
>
> BTW, I think this part doesn't belong in the patch:
>
> > Index: Lib/distutils/extension.py
> > ===================================================================
> > --- Lib/distutils/extension.py        (revision 46818)
> > +++ Lib/distutils/extension.py        (working copy)
> > @@ -185,31 +185,31 @@
> >                  continue
> >
> >              suffix = os.path.splitext(word)[1]
> > -            switch = word[0:2] ; value = word[2:]
> > +            switch_word = word[0:2] ; value = word[2:]
>
> --
> Marc-Andre Lemburg
> eGenix.com
>
> Professional Python Services directly from the Source  (#1, Jun 12 2006)
> >>> Python/Zope Consulting and Support ...        http://www.egenix.com/
> >>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
> >>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
> ________________________________________________________________________
> 2006-07-03: EuroPython 2006, CERN, Switzerland              20 days left
>
> ::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! ::::
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/michael.walter%40gmail.com
>

From brett at python.org  Mon Jun 12 23:39:45 2006
From: brett at python.org (Brett Cannon)
Date: Mon, 12 Jun 2006 14:39:45 -0700
Subject: [Python-Dev] Source control tools
In-Reply-To: <9e804ac0606121431o5a801df5w829a9c65f3d3d855@mail.gmail.com>
References: <9e804ac0606121431o5a801df5w829a9c65f3d3d855@mail.gmail.com>
Message-ID: <bbaeab100606121439k6f5378bx8b775cf3f9fc8a26@mail.gmail.com>

On 6/12/06, Thomas Wouters <thomas at python.org> wrote:
>
>
> On 6/12/06, Guido van Rossum <guido at python.org> wrote:
>
> > Perhaps issues like these should motivate us to consider a different
> > source control tool. There's a new crop of tools out that could solve
> > this by having multiple repositories that can be sync'ed with each
> > other. This sounds like an important move towards world peace!
>
>
> It would be an important move towards world peace, if it didn't inspire
> whole new SCM-holy-wars :-)  I have a fair bit of experience with different
> SCM (VC, source control tool, however you want to call them) so I'll take
> this opportunity to toss up some observations. Not that switching to another
> SCM will really solve the issues you are referring to, but I happen to think
> switching to another SCM is  a great idea :) Although I don't see an obvious
> candidate at the moment... I'll explain why.
>
> First of all, changing SCM means changing how everyone works. It's nothing
> like the CVS->Subversion switch, which changed very little in workflow. All
> the cool SCMs use 'real branches', and to get full benefit you have to
> switch your development to a 'branch oriented model', if you'll pardon the
> buzzwordyness. At XS4ALL we've used BitKeeper for a few years now, and while
> it really took a while for some of the developers to catch on, the branch
> thing makes parallel development *much* easier. If you haven't experienced
> it yourself, it's hard to see why it matters so much (except maybe in cases
> with extreme parallel development, like the Linux kernel), but it really
> does make life a lot easier in the long run, for programmers and release
> managers.
>
> Secondly, the way most of the 'less-linear' SCMs work is that everyone has
> their own repository. That is, in fact, what makes it so useful -- no need
> for central repository access, no need for a network connection for full
> operability, no need for write access to get your changes checked in
> (locally), and easy interchanging between repositories. With a large
> (history-rich) project like Python, though, those repositories can get a tad
> large. Most of the SCMs have ways to work around it (not downloading the
> full history, side-storing the full history in a separate branch, etc) but
> it's still an extra inconvenience. Now, me, I don't mind downloading a
> couple hundred megabytes to get a 25Mb sourcetree with full history, but I
> have a 1Gbit link to the python.org servers :) On the other hand, with
> most of the SCMs, you only download that full history once, and then locally
> branch off of that.
>
> The real reason I think we should consider other SCMs is because I fear
> what the history will look like when 3.0 is done. In Subversion, merges of
> branches don't preserve history -- you have to do it yourself. Given the way
> Subversion works, I don't think that'll really change; it's just not what
> Subversion is meant to do (and that's fine.) It does mean, however, that
> when we switch the trunk to 3.0, we have to decide between the history of
> the trunk or the history of the p3yk branch. We either merge the p3yk branch
> into the trunk, making a single huge checkin message explaining all the
> changes (or not), or we swap the trunk and the p3yk branch. The former means
> 'svn blamelog', for instance, will show a single revision and a single
> author for *all* p3yk changes. The latter means 'svn blamelog' will group
> the trunk changes into the merge commits you can already see on the
> python-3000-checkins list: a block of merges at a time, based on whenever I
> (or someone else) has the time to merge the trunk in. So, in that case, 'svn
> blamelog' will show *me* as author of all 2.5-to-2.7 changes, at a time
> the original change didn't go in, with log messages that are largely
> irrelevant ;-) And the mess gets bigger if part of p3yk or trunk's
> development is done in other branches -- svnmerge log messages hidden in
> svnmerge log messages. ugh.
>
> Before XS4ALL switched to BitKeeper, I spent quite a while examining
> different SCMs, but at the time, they just weren't at the stage of
> development you'd trust your company development to (not if you can afford
> BitKeeper anyway ;)  After (re-)experiencing the pain that is
> Subversion/CVS-style branching with the p3yk branch and the manual trunk
> merges, I went ahead and checked out the current state of the alternatives.
> There are quite a few, now (Monotone, Darcs, Git/Cogito, Mercurial,
> Arch/tla/Bazaar, Bazaar-NG, Arx, CodeVille, SVK) and I haven't had time to
> give them all the in-depth examination they are worthy of, but so far it
> looks like only a few of them currently scale well enough to handle a large
> (history-rich) project like Python. Not that it's fair to expect them to
> scale well, mind you, given that most of them are only a few years old and
> most don't claim to be "1.0".
>
> Using 'tailor' ( http://www.darcs.net/DarcsWiki/Tailor ) I imported the
> Python sourcetree with full history into Darcs and Git. I also did a partial
> import into Monotone (history going back a few months) -- the Monotone
> import was a lot slower than the others, and I didn't feel like waiting a
> week. I then made a branch of each and imported the p3yk branch into them
> (using some hand-crafting of tailor's internal data, as it doesn't seem to
> support branch-imports at the moment.) Darcs was being troublesome at that
> point, but I haven't spent the time to figure out if it was something I or
> tailor did wrong in the import. As I said, Monotone was rather slow, which
> is not surprising considering it does a lot of signing of digital
> certificates. I personally like Monotone a lot, because its central
> branch-database is the 'next step up' from what most SCMs do and because I
> really like the cryptographic aspect, but it's probably too complex for
> Python. Git, the 'low level SCM' developed for the Linux kernel, is
> incredibly fast in normal operation and does branches quite well. I suspect
> much of its speed is lost on non-Linux platforms, though, and I don't know
> how well it works on Windows ;)
>
> I did partial imports into Mercurial and Bazaar-NG, but I got interrupted
> and couldn't draw any conclusions -- although from looking at the
> implementation, I don't think they'd scale very well at the moment (but that
> could probably be fixed.) I should also look at SVK (a BitKeeper-style SCM
> on top of Subversion, really), but tailor doesn't support it (yet) and my
> last look was just too depressing to cope with manually importing Python.
>
> In short[*], I don't see an immediate candidate for an alternate SCM for
> Python (although Git is sexy), but there's lots of long-term possibilities.
> I intend to keep my Git and Darcs repositories up to date (it's little
> effort to make tailor update them), tailorize Mercurial, Bazaar-NG, (full)
> Monotone and probably others, tailorize some branches as well, and publish
> them somewhere, hopefully with instructions, observations and honest
> opinions (I just need to find the right place to host it, as e.g. Monotone
> really wants a separate daemon process to run.)  I also intend to do my own
> p3yk development in one of those SCMs; I can just export patches and apply
> them to SVN when they're ready ;P I would like to hear if others have any
> interest at all in this, though, if only to keep me motivated during the
> tediously long tailorizing runs :)
>

I am interested, especially in Bazaar-NG.  Martin Pool's (the author of
Bazaar-NG) talk at PyCon was good.  Plus Canonical might be willing to help
us out if we ever went with Bazaar (I believe they stated this once back
when the cvs->svn move was happening).

-Brett

> Oh, and in case it matters to people: tailor, Mercurial and Bazaar-NG are
> written in Python.
>
> Blurt-blurt'ly y'rs,
>
> [*] Short? This whole mail was short! I can talk for hours about the
> benefit of proper branches and what kind of stuff is easier, better and more
> efficient with them. I can draw huge ASCII diagrams explaining the
> fundamental difference between CVS/SVN,
> BitKeeper/Arch/Darcs/Bazaar-NG/Mercurial and Monotone (yes, that's three
> groups), and I have powerpoint(!) sheets I used to give a presentation about
> how and why BitKeeper works at work. It's probably a bit off-topic here,
> though, so don't tempt me ;P
> --
> Thomas Wouters <thomas at python.org>
>
> Hi! I'm a .signature virus! copy me into your .signature file to help me
> spread!
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/brett%40python.org
>
>
>

From guido at python.org  Mon Jun 12 23:46:25 2006
From: guido at python.org (Guido van Rossum)
Date: Mon, 12 Jun 2006 14:46:25 -0700
Subject: [Python-Dev] Source control tools
In-Reply-To: <9e804ac0606121431o5a801df5w829a9c65f3d3d855@mail.gmail.com>
References: <9e804ac0606121431o5a801df5w829a9c65f3d3d855@mail.gmail.com>
Message-ID: <ca471dc20606121446q6808735dw52e96c2408d5f505@mail.gmail.com>

On 6/12/06, Thomas Wouters <thomas at python.org> wrote:
> [*] Short? This whole mail was short! I can talk for hours about the benefit
> of proper branches and what kind of stuff is easier, better and more
> efficient with them. I can draw huge ASCII diagrams explaining the
> fundamental difference between CVS/SVN,
> BitKeeper/Arch/Darcs/Bazaar-NG/Mercurial and Monotone (yes,
> that's three groups), and I have powerpoint(!) sheets I used to give a
> presentation about how and why BitKeeper works at work. It's probably a bit
> off-topic here, though, so don't tempt me ;P

Will you be at EuroPython? This might be a good OpenSpace topic if you
haven't already secured your speaking slot. It so happens that I saw a
talk about Mercurial at last week's baypiggies meeting and the author
claimed that it's as fast as Git.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From mwh at python.net  Mon Jun 12 23:51:49 2006
From: mwh at python.net (Michael Hudson)
Date: Mon, 12 Jun 2006 22:51:49 +0100
Subject: [Python-Dev] "can't unpack IEEE 754 special value on non-IEEE
 platform"
In-Reply-To: <e6khve$sbs$1@sea.gmane.org> (Fredrik Lundh's message of "Mon,
	12 Jun 2006 22:14:38 +0200")
References: <e6khve$sbs$1@sea.gmane.org>
Message-ID: <2mhd2qoxe2.fsf@starship.python.net>

Fredrik Lundh <fredrik at pythonware.com> writes:

> I just ran the PIL test suite using the current Python trunk, and the 
> tests for a user-contributed plugin raised an interesting exception:
>
> ValueError: can't unpack IEEE 754 special value on non-IEEE platform
>
> fixing this is easy, but the error is somewhat confusing: since when is 
> a modern Intel CPU not an IEEE platform?

When it doesn't define SIZEOF_DOUBLE or maybe SIZEOF_FLOAT, IIRC.  But
I thought Tim fixed this, so I'm reduced to guessing again.  Some
questions that will help me help you:

* What OS/compiler/etc?
* What is the user plugin doing (i.e. is it C or Python)?
* Any chance of a minimal example (if it's Python code, it'll be
  struct usage, most likely, if C, one of _PyFloat_Unpack{4,8} or
  something that calls one of those)?
* What does [float.__getformat__(f) for f in ('float', 'double')] say?
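
(For comparison, on an ordinary x86 box where the IEEE detection works, that
last check reports something like the following; an 'unknown' in there is
what sends you down the non-IEEE code path:)

    >>> [float.__getformat__(f) for f in ('float', 'double')]
    ['IEEE, little-endian', 'IEEE, little-endian']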

Cheers,
mwh

-- 
  Just put the user directories on a 486 with deadrat7.1 and turn the
  Octane into the afforementioned beer fridge and keep it in your
  office. The lusers won't notice the difference, except that you're
  more cheery during office hours.              -- Pim van Riezen, asr

From phd at mail2.phd.pp.ru  Tue Jun 13 00:19:46 2006
From: phd at mail2.phd.pp.ru (Oleg Broytmann)
Date: Tue, 13 Jun 2006 02:19:46 +0400
Subject: [Python-Dev] Source control tools
In-Reply-To: <9e804ac0606121431o5a801df5w829a9c65f3d3d855@mail.gmail.com>
References: <9e804ac0606121431o5a801df5w829a9c65f3d3d855@mail.gmail.com>
Message-ID: <20060612221946.GB2136@phd.pp.ru>

On Mon, Jun 12, 2006 at 11:31:14PM +0200, Thomas Wouters wrote:
> First of all, changing SCM means changing how everyone works.

   Distributed branches are not the only requirement. There are also:

-- subtree authorization (different access rights in different parts of the
   tree); in distributed SCMs this is solved by policies, not by tools, as
   far as I understand;
-- web-based access (ViewCVS or the like);
-- tracker integration (like Subversion with Trac);
-- mail notification.

   Slightly off-topic: I am working for a company where developers work on
different OSes (Linux, w32, FreeBSD) and speak different languages (Russian,
Latvian and English). Two features I really love in Subversion:
svn:mime-type and svn:eol-style. The former allows setting the character
encoding for a file (useful for web-based access); the latter allows SVN to
automatically convert line endings between different OSes, but it also
allows setting a fixed line-ending style for specific files. I don't know
of another SCM that supports such useful features.

Oleg.
-- 
     Oleg Broytmann            http://phd.pp.ru/            phd at phd.pp.ru
           Programmers don't die, they just GOSUB without RETURN.

From martin at v.loewis.de  Tue Jun 13 00:28:57 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Tue, 13 Jun 2006 00:28:57 +0200
Subject: [Python-Dev] Dropping externally maintained
 packages	(Was:Please stop changing wsgiref on the trunk)
In-Reply-To: <e6kb3d$2gg$1@sea.gmane.org>
References: <5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com><5.1.1.6.0.20060612121658.03255358@sparrow.telecommunity.com><ca471dc20606120943o3f9d97ccle62db7539edec1c1@mail.gmail.com><448D9EA1.9000209@v.loewis.de>	<e6k8ug$p9q$1@sea.gmane.org>	<bbaeab100606121105t3a0e633fne59f67241cc28a@mail.gmail.com>
	<e6kb3d$2gg$1@sea.gmane.org>
Message-ID: <448DEAA9.6040108@v.loewis.de>

Fredrik Lundh wrote:
>> But I don't think this is trying to say they don't care.  People just want
>> to lower the overhead of maintaining the distro.
> 
> well, wouldn't the best way to do that be to leave all non-trivial maintenance of a
> given component to an existing external community?

If you remember that this is the procedure: sure. However, if the
maintainer of a package thinks (and says) "somebody edited my code,
this should not happen again", then I really think the code is better
not part of the Python distribution.

> I mean, we're not really talking about ordinary leak-elimination or
> portability-fixing or security-hole-plugging maintenance; it's the
> let's-extend-the-api-in-incompatible-ways and fork-because-we-can stuff
> that I'm worried about.

I can understand that (and supported it in the first place, as you may
well recall). You should decide whether you worry about that so much
that you don't trust python-dev contributors to treat this in a sensible
way. If you don't trust them, you should withdraw your code.

Regards,
Martin

From thomas at python.org  Tue Jun 13 00:30:29 2006
From: thomas at python.org (Thomas Wouters)
Date: Tue, 13 Jun 2006 00:30:29 +0200
Subject: [Python-Dev] Source control tools
In-Reply-To: <20060612221946.GB2136@phd.pp.ru>
References: <9e804ac0606121431o5a801df5w829a9c65f3d3d855@mail.gmail.com>
	<20060612221946.GB2136@phd.pp.ru>
Message-ID: <9e804ac0606121530t717d53d1ybd463763fc69696@mail.gmail.com>

On 6/13/06, Oleg Broytmann <phd at oper.phd.pp.ru> wrote:
>
> On Mon, Jun 12, 2006 at 11:31:14PM +0200, Thomas Wouters wrote:
> > First of all, changing SCM means changing how everyone works.
>
>    Distributed branches are not the only requirement.


Oh, I know, no worries about that.

> There are also:
>
> -- subtree authorization (different access rights in different parts of the
>    tree); in distributed SCMs this is solved by policies, not by tools, as
>    far as I understand;


Pretty much a no-brainer in most SCMs, yes: you need privileges to push
certain changes to a repository, not to commit them locally. The receiving
repository can make as complicated an authentication and authorization step
as it wants. Monotone's approach to this is particularly enjoyable: it works
with digital signatures, and you can (have to) tell Monotone whose checkins
*you* trust ;)

> -- web-based access (ViewCVS or the like);
> -- tracker integration (like Subversion with Trac);
> -- mail notification.


All of these are of course requirements before Python can switch to another
SCM, but not for looking at them in the first place -- in most cases, these
are not hard to add if the tool doesn't have them already. And besides, while
svn has trac integration, we don't actually use it (yet).

>    Slightly off-topic: I am working for a company where developers work on
> different OSes (Linux, w32, FreeBSD) and speak different languages (Russian,
> Latvian and English). Two features I really love in Subversion:
> svn:mime-type and svn:eol-style. The former allows setting the character
> encoding for a file (useful for web-based access); the latter allows SVN to
> automatically convert line endings between different OSes, but it also
> allows setting a fixed line-ending style for specific files. I don't know
> of another SCM that supports such useful features.


Those two I actually consider more important features than the three you
mentioned above -- as they aren't as easily bolted-on.  Thanks, I'll keep
them in mind :)

-- 
Thomas Wouters <thomas at python.org>

Hi! I'm a .signature virus! copy me into your .signature file to help me
spread!

From martin at v.loewis.de  Tue Jun 13 00:31:17 2006
From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=)
Date: Tue, 13 Jun 2006 00:31:17 +0200
Subject: [Python-Dev] Dropping externally maintained
 packages	(Was:Please stop changing wsgiref on the trunk)
In-Reply-To: <bbaeab100606121126u64ee5f16kafcd720428a9e549@mail.gmail.com>
References: <5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com>	<5.1.1.6.0.20060612121658.03255358@sparrow.telecommunity.com>	<ca471dc20606120943o3f9d97ccle62db7539edec1c1@mail.gmail.com>	<448D9EA1.9000209@v.loewis.de>
	<e6k8ug$p9q$1@sea.gmane.org>	<bbaeab100606121105t3a0e633fne59f67241cc28a@mail.gmail.com>	<e6kb3d$2gg$1@sea.gmane.org>
	<bbaeab100606121126u64ee5f16kafcd720428a9e549@mail.gmail.com>
Message-ID: <448DEB35.1030407@v.loewis.de>

Brett Cannon wrote:
> Well, I don't know if that is necessarily the case.  PEP 360 doesn't
> have a single project saying that minor fixes can just go right in.  If
> we want to just change the wording such that all code in the tree can be
> touched for bug fixes and compatibility issues without clearance, that's
> great.

I think this is what Phillip Eby essentially said about wsgiref (re-read
the subject line for reference).

If the external maintainer really has a strict "don't change anything"
policy, the only reasonable way to enforce this policy is to withdraw
the code.

> But Phillip's email that sparked all of this was about basic changes to
> wsgiref, not some API change (at least to the best of my knowledge).

That's my understanding as well.

Regards,
Martin

From martin at v.loewis.de  Tue Jun 13 00:55:04 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Tue, 13 Jun 2006 00:55:04 +0200
Subject: [Python-Dev] External Package Maintenance
In-Reply-To: <5.1.1.6.0.20060612155543.01e809c0@sparrow.telecommunity.com>
References: <5.1.1.6.0.20060612155543.01e809c0@sparrow.telecommunity.com>
Message-ID: <448DF0C8.2080002@v.loewis.de>

Phillip J. Eby wrote:
> Actually it's both irrelevant and insulting.
> 
> I just want changes made by the Python core developers to be reflected in 
> the external releases.  I'd be more than happy to move the external release 
> to the Python SVN server if that would make it happen.
> 

I think the only way to guarantee that is that you track the Python
source code yourself. Here is how I did it with PyXML:

- whenever I want a two-way sync, I first look at the changes that
  happened in Python since the last two-way sync.
- of those changes, I eliminate all that were already applied to PyXML.
  I had the habit of using identical checkin messages, so it was easy
  to identify which changes already existed in the other tree.
- I applied the remaining changes, one by one, to PyXML (but with a
  single commit), using the same commit messages, and indicating that
  PyXML was synchronized up to revision XY with Python.
- Then I copied all remaining changes from PyXML to Python, again
  indicating in the commit messages what the original changes were,
  and how they got synchronized. Ideally, the PyXML sources and the
  Python sources should now be identical, byte for byte.

Of course, this approach recently broke when byte-for-byte identity
was deliberately broken; until then, it worked fine for several years.
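
Just to make the bookkeeping concrete, the "which changes already exist in
the other tree" test boils down to set membership on commit messages -- a
toy sketch with made-up data, not code from the actual PyXML/Python sync:

    def missing_changes(python_log, pyxml_log):
        # Each log is a list of (revision, commit message) pairs, e.g.
        # scraped from 'svn log'. A Python change still needs porting if
        # its message does not appear anywhere in the PyXML log.
        seen = set(msg.strip() for rev, msg in pyxml_log)
        return [(rev, msg) for rev, msg in python_log
                if msg.strip() not in seen]

    python_log = [(46810, "Fix typo in xml.dom docs"),
                  (46818, "Whitespace normalization")]
    pyxml_log = [(1201, "Fix typo in xml.dom docs")]
    print(missing_changes(python_log, pyxml_log))
    # -> [(46818, 'Whitespace normalization')]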

> The "nail" here is simply that maintaining two versions of a package is 
> awkward if changes are being made in both places.  I'd love to have only 
> one place in which wsgiref is maintained, but Python's current directory 
> layout doesn't allow me to put all of wsgiref in "one place".

I guess you just have to accept that. It will happen again.

Regards,
Martin

From martin at v.loewis.de  Tue Jun 13 00:56:48 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Tue, 13 Jun 2006 00:56:48 +0200
Subject: [Python-Dev] External Package Maintenance
In-Reply-To: <e6kl0a$96q$2@sea.gmane.org>
References: <448DC39F.6020101@comcast.net>	<5.1.1.6.0.20060612155543.01e809c0@sparrow.telecommunity.com>
	<e6kl0a$96q$2@sea.gmane.org>
Message-ID: <448DF130.7000206@v.loewis.de>

Fredrik Lundh wrote:
>> I just want changes made by the Python core developers to be reflected in 
>> the external releases.
> 
> and presumably, the reason for that isn't that you care about your ego, 
> but that you care about your users.

For that, yes. However, the reason to desire that no changes are made
to Python's wsgiref is just that he wants to reduce the amount of work
he has to do to keep the sources synchronized - which reduces his amount
of work, but unfortunately increases the amount of work to be done for
the other python-dev committers.

Regards,
Martin

From jcarlson at uci.edu  Tue Jun 13 01:07:18 2006
From: jcarlson at uci.edu (Josiah Carlson)
Date: Mon, 12 Jun 2006 16:07:18 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <17547.29129.809800.765436@montanaro.dyndns.org>
References: <448B680A.9020000@canterbury.ac.nz>
	<17547.29129.809800.765436@montanaro.dyndns.org>
Message-ID: <20060612154924.F2BC.JCARLSON@uci.edu>


skip at pobox.com wrote:
> 
> 
>     Greg> Before accepting this, we could do with some debate about the
>     Greg> syntax. It's not a priori clear that C-style switch/case is the
>     Greg> best thing to adopt.
> 
> Oh sure.  That debate should probably leverage PEP 275.

Along the lines of PEP 275, one of my concerns with the use of a
switch/case automatic conversion or even explicit syntax is that of
low-hanging fruit...

In my experience, switch/case statements are generally used to either
structure code explicitly with state handling code, improve performance
of such state handling, or both.  I think that if we are going to be
modifying Python to include such a syntax or automatic conversion, those
standard use-cases should be considered primary.

With that said, the low-hanging fruit, in my opinion, are precisely
those cases specified in PEP 275 in the Solution 1 option.  For the
majority of use-cases that I've seen, automatic conversion of if/elif/else
blocks into a hash-table lookup would cover this low-hanging fruit, and
would be a nice transparent optimization.

One other nice thing about such an optimization is that it could work
fine for known-immutable types (strings, ints, ...), and with work,
could preserve previous behavior for not-known-immutable types.*

 - Josiah

* This would basically work by compiling the if/elif/else block normally,
then adding a type check for the switched-on item: on success, jump to just
after the comparison operator in the matching if/elif/else body; on failure,
proceed to the first comparison as before.

To handle the case of if x.i ==...: ... elif x.i == ...: ..., we could
state that the optimization will only occur if the 'switched on' value
is a bare name (x is OK, x.i and x[i] are not).
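
For concreteness, here's a purely illustrative, hand-written version of the
kind of equivalence I mean (this is not what the compiler would emit, and
the names are made up):

    def describe_ifelse(code):
        # The pattern the optimization targets: a bare name compared
        # against hashable constants.
        if code == 200:
            return "ok"
        elif code == 404:
            return "not found"
        elif code == 500:
            return "server error"
        else:
            return "unknown"

    # Conceptually the same dispatch done with a hash table; the TypeError
    # branch covers unhashable values, which fall back to the 'else'
    # behavior just like a failed comparison would.
    _DISPATCH = {200: "ok", 404: "not found", 500: "server error"}

    def describe_dict(code):
        try:
            return _DISPATCH[code]
        except (KeyError, TypeError):
            return "unknown"

    assert describe_ifelse(404) == describe_dict(404) == "not found"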


From martin at v.loewis.de  Tue Jun 13 01:05:42 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Tue, 13 Jun 2006 01:05:42 +0200
Subject: [Python-Dev] 2.5 issues need resolving in a few days
In-Reply-To: <ca471dc20606120947r2b0940berf7a0c806b607dd4e@mail.gmail.com>
References: <ee2a432c0606082333o44b7465bk45b686ca83ccc802@mail.gmail.com>	<448CF735.7000404@v.loewis.de>
	<ca471dc20606120947r2b0940berf7a0c806b607dd4e@mail.gmail.com>
Message-ID: <448DF346.5050409@v.loewis.de>

Guido van Rossum wrote:
> PyXML appears pretty stable (in terms of release frequency -- I have
> no opinion on the code quality :-). Perhaps it could just be
> incorporated into the Python svn tree, if the various owners are
> willing to sign a contributor statement?

That is, in itself, a medium-sized project. There are many components
to PyXML, and finding all the authors might be a challenge already.

Some code is outdated, and part of PyXML only to support old
applications; that should not be moved into Python.

Some code is still incomplete. I used to say it's "work in progress",
but for some of it, that isn't really true. Still, there are users
of these pieces as well.

The only parts that I personally would like to see in Python is
some XPath implementation, and some XSLT implementation. Others
might have other preferences, of course.

Regards,
Martin

From steve at holdenweb.com  Tue Jun 13 01:09:46 2006
From: steve at holdenweb.com (Steve Holden)
Date: Tue, 13 Jun 2006 00:09:46 +0100
Subject: [Python-Dev] External Package Maintenance (was Re: Please stop
 changing wsgiref on the trunk)
In-Reply-To: <5.1.1.6.0.20060612163415.03af15d0@sparrow.telecommunity.com>
References: <5.1.1.6.0.20060612160657.03967430@sparrow.telecommunity.com>	<5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com>	<5.1.1.6.0.20060612121658.03255358@sparrow.telecommunity.com>	<5.1.1.6.0.20060612125445.01fd9258@sparrow.telecommunity.com>	<5.1.1.6.0.20060612142631.01e83a30@sparrow.telecommunity.com>	<5.1.1.6.0.20060612145237.030c9860@sparrow.telecommunity.com>	<5.1.1.6.0.20060612153726.01ea6990@sparrow.telecommunity.com>	<5.1.1.6.0.20060612160657.03967430@sparrow.telecommunity.com>
	<ca471dc20606121325r30737f21xf442463867ec90df@mail.gmail.co m>
	<5.1.1.6.0.20060612163415.03af15d0@sparrow.telecommunity.com>
Message-ID: <448DF43A.80604@holdenweb.com>

Phillip J. Eby wrote:
> [posting back to python-dev in case others also perceived my original 
> message as impolite]
> 
> At 01:25 PM 6/12/2006 -0700, Guido van Rossum wrote:
> 
>>Oh, and the tone of your email was *not* polite. Messages starting
>>with "I wasted an hour of my time" are not polite pretty much by
>>definition.
> 
> 
> Actually, I started out with "please" -- twice, after having previously 
> asked please in advance.  I've also seen lots of messages on Python-Dev 
> where Tim Peters wrote about having wasted time due to other folks not 
> following established procedures, and I tried to emulate his tone.  I guess 
> I didn't do a very good job, but not everybody is as funny as Tim is.  :)
> 
> Usually he manages to make it seem as though he would really be happy to 
> give up his nights and weekends but that sadly, he just doesn't have any 
> more time right at this particular moment.  A sort of "it's not you, it's 
> me" thing.  I guess I just left out that particular bit of 
> sleight-of-mouth.  :)
> 
> Anyway, will anyone who was offended by the original message please pretend 
> that it was delightfully witty and written by Tim instead?  Thanks.  ;)
> 
I wonder what the hell's up with Tim. He's been really crabby lately ...

regards
  Steve
-- 
Steve Holden       +44 150 684 7255  +1 800 494 3119
Holden Web LLC/Ltd          http://www.holdenweb.com
Love me, love my blog  http://holdenweb.blogspot.com
Recent Ramblings     http://del.icio.us/steve.holden


From pje at telecommunity.com  Tue Jun 13 01:09:03 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Mon, 12 Jun 2006 19:09:03 -0400
Subject: [Python-Dev] Dropping externally maintained
 packages	(Was:Please stop changing wsgiref on the trunk)
In-Reply-To: <448DEAA9.6040108@v.loewis.de>
References: <e6kb3d$2gg$1@sea.gmane.org>
	<5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612121658.03255358@sparrow.telecommunity.com>
	<ca471dc20606120943o3f9d97ccle62db7539edec1c1@mail.gmail.com>
	<448D9EA1.9000209@v.loewis.de> <e6k8ug$p9q$1@sea.gmane.org>
	<bbaeab100606121105t3a0e633fne59f67241cc28a@mail.gmail.com>
	<e6kb3d$2gg$1@sea.gmane.org>
Message-ID: <5.1.1.6.0.20060612190701.01e98790@sparrow.telecommunity.com>

At 12:28 AM 6/13/2006 +0200, Martin v. Löwis wrote:
>If you remember that this is the procedure: sure. However, if the
>maintainer of a package thinks (and says) "somebody edited my code,
>this should not happen again", then I really think the code is better
>not part of the Python distribution.

The "this should not happen again" in this case was the *merge problem*, 
not the *editing*.  There is a significant difference between the two.


From pje at telecommunity.com  Tue Jun 13 01:23:36 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Mon, 12 Jun 2006 19:23:36 -0400
Subject: [Python-Dev] External Package Maintenance
In-Reply-To: <448DF130.7000206@v.loewis.de>
References: <e6kl0a$96q$2@sea.gmane.org> <448DC39F.6020101@comcast.net>
	<5.1.1.6.0.20060612155543.01e809c0@sparrow.telecommunity.com>
	<e6kl0a$96q$2@sea.gmane.org>
Message-ID: <5.1.1.6.0.20060612191424.03911658@sparrow.telecommunity.com>

At 12:56 AM 6/13/2006 +0200, Martin v. L?wis wrote:
>Fredrik Lundh wrote:
> >> I just want changes made by the Python core developers to be reflected in
> >> the external releases.
> >
> > and presumably, the reason for that isn't that you care about your ego,
> > but that you care about your users.
>
>For that, yes. However, the reason to desire that no changes are made
>to Python's wsgiref is just that he wants to reduce the amount of work
>he has to do to keep the sources synchronized - which reduces his amount
>of work, but unfortunately increases the amount of work to be done for
>the other python-dev committers.

I see *now* why that would appear to be the case.  However, my previous 
assumption was that if somebody found a bug, they'd tell me about it and 
I'd do the work of fixing it, updating the tests, etc.  In other words, I 
was willing to do *all* the work, for changes that made sense to wsgiref.

What I didn't really "get" until now is that people might be making 
Python-wide changes that don't have anything to do with wsgiref per se, and 
that is the place where the increased work comes in.

This should definitely be explained to authors who are donating libraries 
to the stdlib, because from my perspective it seemed to me that I was 
graciously volunteering to be responsible for *all* the work related to 
wsgiref.

(And yes, I understand now why it doesn't actually work that way.)


From pje at telecommunity.com  Tue Jun 13 01:32:46 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Mon, 12 Jun 2006 19:32:46 -0400
Subject: [Python-Dev] External Package Maintenance (was Re: Please stop
 changing wsgiref on the trunk)
In-Reply-To: <448DF43A.80604@holdenweb.com>
References: <5.1.1.6.0.20060612163415.03af15d0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612160657.03967430@sparrow.telecommunity.com>
	<5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612121658.03255358@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612125445.01fd9258@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612142631.01e83a30@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612145237.030c9860@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612153726.01ea6990@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612160657.03967430@sparrow.telecommunity.com>
	<ca471dc20606121325r30737f21xf442463867ec90df@mail.gmail.co m>
	<5.1.1.6.0.20060612163415.03af15d0@sparrow.telecommunity.com>
Message-ID: <5.1.1.6.0.20060612192542.03924800@sparrow.telecommunity.com>

At 12:09 AM 6/13/2006 +0100, Steve Holden wrote:
>Phillip J. Eby wrote:
> > Anyway, will anyone who was offended by the original message please 
> pretend
> > that it was delightfully witty and written by Tim instead?  Thanks.  ;)
> >
>I wonder what the hell's up with Tim. He's been really crabby lately ...

It's probably all that time he's been spending tracking down the wsgiref 
test failures.  ;-)


From martin at v.loewis.de  Tue Jun 13 01:36:36 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Tue, 13 Jun 2006 01:36:36 +0200
Subject: [Python-Dev] Dropping externally maintained
 packages	(Was:Please stop changing wsgiref on the trunk)
In-Reply-To: <5.1.1.6.0.20060612190701.01e98790@sparrow.telecommunity.com>
References: <e6kb3d$2gg$1@sea.gmane.org>
	<5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612121658.03255358@sparrow.telecommunity.com>
	<ca471dc20606120943o3f9d97ccle62db7539edec1c1@mail.gmail.com>
	<448D9EA1.9000209@v.loewis.de> <e6k8ug$p9q$1@sea.gmane.org>
	<bbaeab100606121105t3a0e633fne59f67241cc28a@mail.gmail.com>
	<e6kb3d$2gg$1@sea.gmane.org>
	<5.1.1.6.0.20060612190701.01e98790@sparrow.telecommunity.com>
Message-ID: <448DFA84.1040507@v.loewis.de>

Phillip J. Eby wrote:
> At 12:28 AM 6/13/2006 +0200, Martin v. Löwis wrote:
>> If you remember that this is the procedure: sure. However, if the
>> maintainer of a package thinks (and says) "somebody edited my code,
>> this should not happen again", then I really think the code is better
>> not part of the Python distribution.
> 
> The "this should not happen again" in this case was the *merge problem*,
> not the *editing*.  There is a significant difference between the two.

Well, you wrote, in

http://mail.python.org/pipermail/python-dev/2006-June/065908.html

"only to find that it doesn't correspond to any particular point in
the trunk, because people made changes without contacting me or the
Web-SIG. ...  Please don't do this again."

(where the three dots indicate something that you have done, not
somebody else)

From that, I can only conclude that you requested that people should
not make changes again without contacting you or the Web-SIG.

It's not clear whether you want to be contacted before or after the
changes have been made. If "after" is ok, you should just subscribe
to python-checkins.

Still, Guido dislikes the notion of having to contact anybody when
making changes, as a matter of principle. I can sympathize with that
view.

Regards,
Martin



From jcarlson at uci.edu  Tue Jun 13 01:30:05 2006
From: jcarlson at uci.edu (Josiah Carlson)
Date: Mon, 12 Jun 2006 16:30:05 -0700
Subject: [Python-Dev] Scoping vs augmented assignment vs sets (Re: 'fast
	locals' in Python 2.5)
In-Reply-To: <e6jm7e$hnn$1@sea.gmane.org>
References: <20060607083940.GA12003@code0.codespeak.net>
	<e6jm7e$hnn$1@sea.gmane.org>
Message-ID: <20060612162022.F2BF.JCARLSON@uci.edu>


Boris Borcic <bborcic at gmail.com> wrote:
> Hello,
> 
> Armin Rigo wrote:
> > Hi,
> > 
> > On Wed, Jun 07, 2006 at 02:07:48AM +0200, Thomas Wouters wrote:
> >> I just submitted http://python.org/sf/1501934 and assigned it to Neal so it
> >> doesn't get forgotten before 2.5 goes out ;) It seems Python 2.5 compiles
> >> the following code incorrectly:
> > 
> > No, no, it's an underground move by Jeremy to allow assignment to
> > variables of enclosing scopes:
> ...
> > Credits to Samuele's evil side for the ideas.  His non-evil side doesn't
> > agree, and neither does mine, of course :-)
> ...
> > More seriously, a function with a variable that is only written to as
> > the target of augmented assignments cannot possibly be something else
> > than a newcomer's mistake: the augmented assignments will always raise
> > UnboundLocalError.
> 
> I am not really a newcomer to python. But lately I find myself regularly bitten
> by this compiler behavior that I tend to view as a (design) bug. This started
> happening after I saw that sets are just as good as lists performance-wise and I
> began changing code like this

I see your attempted use of a closure as a design bug in your code. 
Remember that while closures can be quite convenient, there are other
methods to do precisely the same thing without needing to use nested
scopes.  I find that it would be far easier (for the developers of
Python) and significantly more readable if it were implemented as a
class.

class solve:
    def __init__(self, problem):
        self.freebits = ...
    ...
    def search(self, data):
        ...
        self.freebits ^= swaps
        ...
    ...

Not everything needs to (or should) be a closure.
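
To make the contrast concrete, here's a tiny self-contained sketch (made-up
names, not Boris's actual code): the closure variant blows up with
UnboundLocalError on the augmented assignment, while the class variant is
fine:

    def solve_with_closure(problem):
        freebits = set(problem)        # enclosing-scope variable

        def search(swaps):
            # The augmented assignment makes 'freebits' local to search(),
            # so the load on the right-hand side raises UnboundLocalError.
            freebits ^= swaps
            return freebits

        return search

    class SolveWithClass(object):
        def __init__(self, problem):
            self.freebits = set(problem)

        def search(self, swaps):
            self.freebits ^= swaps     # plain attribute mutation: fine
            return self.freebits

    search = solve_with_closure([1, 2, 3])
    try:
        search(set([2]))
    except UnboundLocalError:
        print("closure version fails with UnboundLocalError")

    print(SolveWithClass([1, 2, 3]).search(set([2])))  # 2 gets toggled out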

 - Josiah


From t-bruch at microsoft.com  Tue Jun 13 01:47:02 2006
From: t-bruch at microsoft.com (Bruce Christensen)
Date: Mon, 12 Jun 2006 16:47:02 -0700
Subject: [Python-Dev] socket._socketobject.close() doesn't really close
	sockets
Message-ID: <3581AA168D87A2479D88EA319BDF7D327F61BB@RED-MSG-80.redmond.corp.microsoft.com>

In implementing the IronPython _socket module, I've discovered some
puzzling behavior in the standard Python socket wrapper module:
socket._socketobject.close() doesn't actually close sockets; rather, it
just sets _sock to an instance of _closedsocket and lets the GC clean up
the real socket. (See socket.py around line 160.)

This works fine with a reference counting GC, but can potentially leave
sockets hanging around for a long time on platforms (e.g. the CLR) with
other GC algorithms. It causes most of the socket unit tests to fail on
IronPython.

Is there a reason for this implementation?

This patch to _socketobject.close() makes socket.py work with
IronPython:

     def close(self):
+        if not isinstance(self._sock, _closedsocket):
+            self._sock.close()
         self._sock = _closedsocket()
         self.send = self.recv = self.sendto = self.recvfrom = self._sock._dummy
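
For what it's worth, here's a minimal way to observe the behavior under
CPython itself (the extra reference just makes the effect visible; with a
non-refcounting GC the old _sock lingers in the same way without any help,
and _sock is of course an implementation detail of the 2.x wrapper):

    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    real = s._sock           # the real platform socket behind the wrapper
    s.close()                # swaps in _closedsocket; real.close() never runs

    print(real.fileno())     # the underlying descriptor is still open here
    del real                 # only now does refcounting actually close it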

--Bruce

From martin at v.loewis.de  Tue Jun 13 01:49:32 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Tue, 13 Jun 2006 01:49:32 +0200
Subject: [Python-Dev] External Package Maintenance
In-Reply-To: <5.1.1.6.0.20060612191424.03911658@sparrow.telecommunity.com>
References: <e6kl0a$96q$2@sea.gmane.org> <448DC39F.6020101@comcast.net>
	<5.1.1.6.0.20060612155543.01e809c0@sparrow.telecommunity.com>
	<e6kl0a$96q$2@sea.gmane.org>
	<5.1.1.6.0.20060612191424.03911658@sparrow.telecommunity.com>
Message-ID: <448DFD8C.9080200@v.loewis.de>

Phillip J. Eby wrote:
> This should definitely be explained to authors who are donating
> libraries to the stdlib, because from my perspective it seemed to me
> that I was graciously volunteering to be responsible for *all* the work
> related to wsgiref.

It's not only about python-wide changes. It is also for regular error
corrections: whenever I commit a bug fix that somebody contributed, I
now have to understand the code, and the bug, and the fix. Under PEP
360, I have to do all of these, *plus* check PEP 360 to determine
whether I will step on somebody's toes. I also have to consult PEP 291,
of course, to find out whether the code has additional compatibility
requirements.

I currently mostly manage to do this all because I remember (in my head)
whether something is externally maintained, and how to proceed in this
case. However, I can see how this doesn't scale.

So ideally, I would like to see the external maintainers state "we can
deal with occasional breakage arising from somebody forgetting the
procedures". This would scale, as it would put the responsibility
for the code on the shoulders of the maintainer. It appears that Thomas
Heller says this would work for him, and it worked for bsddb and
PyXML.

Regards,
Martin

From tim.peters at gmail.com  Tue Jun 13 01:49:53 2006
From: tim.peters at gmail.com (Tim Peters)
Date: Mon, 12 Jun 2006 19:49:53 -0400
Subject: [Python-Dev] External Package Maintenance (was Re: Please stop
	changing wsgiref on the trunk)
In-Reply-To: <448DF43A.80604@holdenweb.com>
References: <5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612121658.03255358@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612125445.01fd9258@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612142631.01e83a30@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612145237.030c9860@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612153726.01ea6990@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612160657.03967430@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612163415.03af15d0@sparrow.telecommunity.com>
	<448DF43A.80604@holdenweb.com>
Message-ID: <1f7befae0606121649tda59cecsb9839aa3e05b4199@mail.gmail.com>

[Phillip J. Eby]
>> Actually, I started out with "please" -- twice, after having previously
>> asked please in advance.  I've also seen lots of messages on Python-Dev
>> where Tim Peters wrote about having wasted time due to other folks not
>> following established procedures, and I tried to emulate his tone.  I guess
>> I didn't do a very good job, but not everybody is as funny as Tim is.  :)

...

[Steve Holden]
> I wonder what the hell's up with Tim. He's been really crabby lately ...

Moi?  Not at all!  Once or twice a year I do get pissed off when a
period of scant free time coincides with a period of people checking
in test-breaking changes that _would_ have failed on their own box had
they bothered to run the tests at all.  That's plain bad practice, and
deserves all the flaming pixels a mythical authority figure can
conjure up in opposition.

But that hasn't happened lately (test_wsgiref only failed under -O,
and I don't expect people to run tests that way routinely -- I run the
tests 8 ways when a release is coming up).

>> Anyway, will anyone who was offended by the original message please pretend
>> that it was delightfully witty and written by Tim instead?  Thanks.  ;)

No -- but because I wasn't offended to begin with.  Let's compromise:
everyone can pretend they wrote this message instead, so if nobody
replies everyone can pretend they got the last word :-)

From greg.ewing at canterbury.ac.nz  Tue Jun 13 01:50:11 2006
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Tue, 13 Jun 2006 11:50:11 +1200
Subject: [Python-Dev] a note in random.shuffle.__doc__ ...
In-Reply-To: <17548.49469.394804.146445@terry.jones.tc>
References: <1CC07EFE-45B2-4AA4-BEFB-0CFFE4CDB1F9@gmail.com>
	<20060610124332.F2B2.JCARLSON@uci.edu>
	<20060610125305.F2B5.JCARLSON@uci.edu>
	<EEEB13F7-072A-4B17-A99E-3463B53AB434@gmail.com>
	<17547.16708.105058.906604@terry.jones.tc>
	<448B664E.3040003@canterbury.ac.nz>
	<17547.27686.67002.988677@terry.jones.tc>
	<448CB505.2040304@canterbury.ac.nz>
	<17548.49469.394804.146445@terry.jones.tc>
Message-ID: <448DFDB3.2050100@canterbury.ac.nz>

Terry Jones wrote:

> The code below uses a RNG with period 5, is deterministic, and has one
> initial state. It produces 20 different outcomes.

You misunderstand what's meant by "outcome" in this
context. The outcome of your algorithm is the whole
*sequence* of numbers it produces, not each individual
number. And if the rng state is the only initial
condition that can vary, it can't produce more than
5 distinct sequences.

--
Greg

From python-dev at zesty.ca  Tue Jun 13 01:52:16 2006
From: python-dev at zesty.ca (Ka-Ping Yee)
Date: Mon, 12 Jun 2006 18:52:16 -0500 (CDT)
Subject: [Python-Dev] UUID module
In-Reply-To: <047e01c68dff$bae89760$3db72997@bagio>
References: <5.1.1.6.0.20060307103122.042c4ae8@mail.telecommunity.com>
	<Pine.LNX.4.58.0606090221330.5223@server1.LFW.org>
	<020a01c68d5b$f2130170$3db72997@bagio>
	<Pine.LNX.4.58.0606120347110.5223@server1.LFW.org>
	<047e01c68dff$bae89760$3db72997@bagio>
Message-ID: <Pine.LNX.4.58.0606121848170.5223@server1.LFW.org>

On Mon, 12 Jun 2006, Giovanni Bajo wrote:
> GetSystemDirectory() is the official way to find the system directory.

You're right.  I've added a call to GetSystemDirectory, with a fallback
to the usual locations if it doesn't work.

> Another thing that you might do is to drop those absolute system
> directories altogether. After all, ipconfig should always be in the path.

Yup, that's why '' is the first item in the list of directories to try.

> As a last note, you are parsing ipconfig output assuming an English Windows
> installation. My Italian Windows 2000 has localized output.

Thanks for catching this.  I've fixed it in the latest version,
which is now checked in to the trunk.
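
For reference, the directory probing amounts to roughly this -- a simplified
sketch with hard-coded fallback paths and a made-up helper name, not the
exact code that was checked in (and it only covers the lookup, not the
localized-output fix):

    import os

    def _ipconfig_dirs():
        # '' first, so the bare command name is resolved via PATH; the
        # GetSystemDirectory result would be inserted next when available
        # (omitted here), followed by the usual fallback locations.
        return ['', r'c:\windows\system32', r'c:\winnt\system32']

    def run_ipconfig():
        for dirname in _ipconfig_dirs():
            pipe = os.popen(os.path.join(dirname, 'ipconfig') + ' /all')
            output = pipe.read()
            pipe.close()
            if output:
                return output
        return None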


-- ?!ng

From rasky at develer.com  Tue Jun 13 01:54:53 2006
From: rasky at develer.com (Giovanni Bajo)
Date: Tue, 13 Jun 2006 01:54:53 +0200
Subject: [Python-Dev] Source control tools
References: <9e804ac0606121431o5a801df5w829a9c65f3d3d855@mail.gmail.com>
Message-ID: <00d201c68e7b$99de1d70$1cbf2997@bagio>

Thomas Wouters <thomas at python.org> wrote:

>> It would be an important move towards world peace, if it didn't
>> inspire whole new SCM-holy-wars :-)  I have a fair bit of experience with
>> different
>> SCM (VC, source control tool, however you want to call them) so I'll
>> take
>> this opportunity to toss up some observations. Not that switching to
>> another
>> SCM will really solve the issues you are referring to, but I happen
>> to think
>> switching to another SCM is  a great idea :)

Would you like to show me a comprehensive list of which procedures need to be
improved, wrt the current workflow with SVN?

My own experience is that SVN is pretty bare-bones, that is, it provides powerful
low-level primitives, but basically no facility to abstract these primitives
into higher-level concepts. Basically, it makes many things possible, but few
convenient. I believe that tools like svnmerge show that it is indeed possible
to build upon SVN to construct higher-level layers, but there's quite some work
to do.

I would like also to note that SVK is sexy :)

>> The real reason I think we should consider other SCMs is because I
>> fear what the history will look like when 3.0 is done. In Subversion, merges
of
>> branches don't preserve history -- you have to do it yourself. Given
>> the way Subversion works, I don't think that'll really change;

Actually, Subversion is growing merge-tracking facilities, but it's a long way
from what you'd like to happen with the Py3k history (and I don't think that's
achievable whatsoever).

>> Git, the 'low level SCM' developed for the Linux kernel, is
>> incredibly fast in normal operation and does branches quite well. I
>> suspect much of its speed is lost on non-Linux platforms,
>> though, and I don't know  how well it works on Windows ;)

The higher-level part of GIT is written in a combination of bash and perl, with
heavy use of textutils, coreutils and whatnot. It's basically "unportable"
by design to native Windows. I guess the only sane approach is to use it under
Cygwin. IMO this is a big no-go for real Windows developers.

Giovanni Bajo


From pje at telecommunity.com  Tue Jun 13 01:58:43 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Mon, 12 Jun 2006 19:58:43 -0400
Subject: [Python-Dev] Dropping externally maintained
 packages	(Was:Please stop changing wsgiref on the trunk)
In-Reply-To: <448DFA84.1040507@v.loewis.de>
References: <5.1.1.6.0.20060612190701.01e98790@sparrow.telecommunity.com>
	<e6kb3d$2gg$1@sea.gmane.org>
	<5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612121658.03255358@sparrow.telecommunity.com>
	<ca471dc20606120943o3f9d97ccle62db7539edec1c1@mail.gmail.com>
	<448D9EA1.9000209@v.loewis.de> <e6k8ug$p9q$1@sea.gmane.org>
	<bbaeab100606121105t3a0e633fne59f67241cc28a@mail.gmail.com>
	<e6kb3d$2gg$1@sea.gmane.org>
	<5.1.1.6.0.20060612190701.01e98790@sparrow.telecommunity.com>
Message-ID: <5.1.1.6.0.20060612194112.01eb1198@sparrow.telecommunity.com>

At 01:36 AM 6/13/2006 +0200, Martin v. Löwis wrote:
> From that, I can only conclude that you requested that people should
>not make changes again without contacting you or the Web-SIG.

Indeed I was -- back when I was under the mistaken impression that PEP 360 
actually meant what it appeared to say about other packages added in 
2.5.  In *that* universe, what I said made perfect sense.  :-)

And if we *were* in an alternate hypothetical universe where, say, instead 
of PEP 360 not really being followed, it was the unit testing policy, then 
we would all be yelling at Tim for being rude about people breaking the 
tests, when he should "just expect" that people will break tests, because 
after all, they have to change the *code*, and it's not reasonable to 
expect them to change the *tests* too.

So, to summarize, it's all actually Tim's fault, but only in a parallel 
universe where nobody believes in unit testing.  ;-)


From rasky at develer.com  Tue Jun 13 02:00:39 2006
From: rasky at develer.com (Giovanni Bajo)
Date: Tue, 13 Jun 2006 02:00:39 +0200
Subject: [Python-Dev] External Package Maintenance (was Re: Please
	stopchanging wsgiref on the trunk)
References: <058501c68e4c$d6e701c0$bf03030a@trilan><5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com><5.1.1.6.0.20060612121658.03255358@sparrow.telecommunity.com><5.1.1.6.0.20060612125445.01fd9258@sparrow.telecommunity.com><ca471dc20606121042s77c0708fr54f9cea4d30772e9@mail.gmail.com><058501c68e4c$d6e701c0$bf03030a@trilan>
	<ca471dc20606121123v726d89d4pe507ba2fc7f9ed5c@mail.gmail.co m>
	<5.1.1.6.0.20060612143737.03130e80@sparrow.telecommunity.com>
Message-ID: <010401c68e7c$678f95f0$1cbf2997@bagio>

Phillip J. Eby <pje at telecommunity.com> wrote:

> Control isn't the issue; it's ensuring that fixes don't get lost or
> reverted from either the external version or the stdlib version.
> Control is merely a means to that end.  If we can accomplish that via
> some other means (e.g. an Externals/ subtree), I'm all for it.
> (Actually, perhaps Packages/ would be a better name, since the point is
> that they're packages that are maintained for separate distribution for
> older Python versions.  They're really *not* "external" any more, they
> just get snapshotted for release.)

IMO, the better way is exactly the one you depicted: move the official
development tree into this Externals/ dir *within* Python's repository. Off
that, you can have your own branch for experimental work, from which to
extract your own releases, and merge changes back and forth much more simply
(since, if they live in the same repository, you can use svnmerge-like
features to find out what changed and whatnot).

Maintaining an external repository seems like a larger effort, and probably
not worth it.

Giovanni Bajo


From guido at python.org  Tue Jun 13 02:01:00 2006
From: guido at python.org (Guido van Rossum)
Date: Mon, 12 Jun 2006 17:01:00 -0700
Subject: [Python-Dev] socket._socketobject.close() doesn't really close
	sockets
In-Reply-To: <3581AA168D87A2479D88EA319BDF7D327F61BB@RED-MSG-80.redmond.corp.microsoft.com>
References: <3581AA168D87A2479D88EA319BDF7D327F61BB@RED-MSG-80.redmond.corp.microsoft.com>
Message-ID: <ca471dc20606121701q381f3636h84a522b570393c77@mail.gmail.com>

Yes, this is because of the (unfortunate) availability of the dup()
operation. Originally, dup() was only available on Unix, where it's
implemented in the kernel at the file descriptor level: when several
file descriptors all share the same underlying socket, the socket
isn't really closed until the last of those file descriptors is
closed, essentially implementing reference counting for file
descriptors. When it was deemed that this (only occasionally useful)
feature should be made available on Windows, which doesn't support this
functionality at the socket descriptor level (perhaps it supports it
at some other level, but I wasn't a Windows wizard -- and still am not
:-), I did it by implementing the dup() operation in Python
objects, depending on Python's reference counting to close the single
underlying _socket object when the last Python-level socket object is
closed.

So, unfortunately, in IronPython, you'll have to implement reference
counting for socket objects somehow.
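
A minimal sketch of the kind of bookkeeping that could work (hypothetical
wrapper, shown only to illustrate the idea, not actual IronPython or stdlib
code): every dup() shares a counter, and the real close happens only when
the counter drops to zero.

    class _RefCountedSocket:
        """Hypothetical wrapper: really close the underlying socket only
        when the last duplicate has been closed."""

        def __init__(self, real_sock, shared_count=None):
            self._real_sock = real_sock
            if shared_count is None:
                shared_count = [1]      # count shared by all dups of this socket
            self._count = shared_count

        def dup(self):
            self._count[0] += 1
            return _RefCountedSocket(self._real_sock, self._count)

        def close(self):
            if self._count[0] > 0:
                self._count[0] -= 1
                if self._count[0] == 0:
                    self._real_sock.close()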

Sorry,

--Guido

On 6/12/06, Bruce Christensen <t-bruch at microsoft.com> wrote:
> In implementing the IronPython _socket module, I've discovered some
> puzzling behavior in the standard Python socket wrapper module:
> socket._socketobject.close() doesn't actually close sockets; rather, it
> just sets _sock to an instance of _closedsocket and lets the GC clean up
> the real socket. (See socket.py around line 160.)
>
> This works fine with a reference counting GC, but can potentially leave
> sockets hanging around for a long time on platforms (e.g. the CLR) with
> other GC algorithms. It causes most of the socket unit tests to fail on
> IronPython.
>
> Is there a reason for this implementation?
>
> This patch to _socketobject.close() makes socket.py work with
> IronPython:
>
>      def close(self):
> +        if not isinstance(self._sock, _closedsocket):
> +            self._sock.close()
>          self._sock = _closedsocket()
>          self.send = self.recv = self.sendto = self.recvfrom = self._sock._dummy
>
> --Bruce
>


-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From pje at telecommunity.com  Tue Jun 13 02:10:14 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Mon, 12 Jun 2006 20:10:14 -0400
Subject: [Python-Dev] External Package Maintenance (was Re: Please
 stopchanging wsgiref on the trunk)
In-Reply-To: <010401c68e7c$678f95f0$1cbf2997@bagio>
References: <058501c68e4c$d6e701c0$bf03030a@trilan>
	<5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612121658.03255358@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612125445.01fd9258@sparrow.telecommunity.com>
	<ca471dc20606121042s77c0708fr54f9cea4d30772e9@mail.gmail.com>
	<058501c68e4c$d6e701c0$bf03030a@trilan>
	<ca471dc20606121123v726d89d4pe507ba2fc7f9ed5c@mail.gmail.co m>
	<5.1.1.6.0.20060612143737.03130e80@sparrow.telecommunity.com>
Message-ID: <5.1.1.6.0.20060612200738.03430ca0@sparrow.telecommunity.com>

At 02:00 AM 6/13/2006 +0200, Giovanni Bajo wrote:
>IMO, the better way is exactly this you depicted: move the official 
>development
>tree into this Externals/ dir *within* Python's repository. Off that, you can
>have your own branch for experimental work, from which extract your own
>releases, and merge changes back and forth much more simply (since if they
>reside on the same repository, you can use svnmerge-like features to find out
>modifications and whatnot).

Yes, that's certainly what seems ideal for me as an external developer.  I
don't know if it addresses the core developers' concerns, though, since it
would mean having Python code that lives outside of the Lib/ subtree, tests
that live in places other than Lib/test, and documentation source that
lives outside of Doc/.  But if those aren't showstoppers then it seems like
a winner to do it for 2.6.


From brett at python.org  Tue Jun 13 02:16:08 2006
From: brett at python.org (Brett Cannon)
Date: Mon, 12 Jun 2006 17:16:08 -0700
Subject: [Python-Dev] rewording PEP 360
Message-ID: <bbaeab100606121716j6f5ecbf8q359c729bc942d77a@mail.gmail.com>

It's sounding like a rewording of PEP 360 would help this whole external
code issue.  If it were changed to say that non-API changes could be checked
in directly, would that help?

That would mean the PEP would list the contact person and what external
version corresponds to what Python release.  Basic fixes would be allowed
without issue, but API changes would be discussed with the contributor.  So
the special notes section would go away and all modules would be treated the
same.

The other option, of course, is to reject the PEP.

-Brett

From tim.peters at gmail.com  Tue Jun 13 02:18:43 2006
From: tim.peters at gmail.com (Tim Peters)
Date: Mon, 12 Jun 2006 20:18:43 -0400
Subject: [Python-Dev] a note in random.shuffle.__doc__ ...
In-Reply-To: <448D7192.8070607@ewtllc.com>
References: <1CC07EFE-45B2-4AA4-BEFB-0CFFE4CDB1F9@gmail.com>
	<448D7192.8070607@ewtllc.com>
Message-ID: <1f7befae0606121718x513eda51j243d3155ae1038f8@mail.gmail.com>

[Raymond Hettinger]
> I think the note is still useful, but the "rather small" wording
> should be replaced by something most precise (such as the
> value of n=len(x) where n! > 2**19997).

Note that I already removed it, and I'm not putting it back.  The
period of W-H was "so short" you could get into trouble, based on
period alone, with a list of only 16 elements.  The Twister is so much
more capable in respect of both period and high-dimensional
equidistribution properties that I think anyone sophisticated enough
to _understand_ an accurate warning correctly would have no need to be
warned.  Everyone else would find it a mix of confusing and scary, to
no real end.

None of this is to say it couldn't be useful to have a digestible
introduction to issues raised by use of deterministic PRNGs.  But I
don't think that one note in one docstring is actually "better than
nothing" in that regard ;-)

From greg.ewing at canterbury.ac.nz  Tue Jun 13 02:20:04 2006
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Tue, 13 Jun 2006 12:20:04 +1200
Subject: [Python-Dev] Switch statement
In-Reply-To: <17549.17179.944527.897085@montanaro.dyndns.org>
References: <20060610142736.GA19094@21degrees.com.au>
	<17547.19802.361151.705599@montanaro.dyndns.org>
	<448B680A.9020000@canterbury.ac.nz> <448C87E1.6070801@acm.org>
	<448CBD66.80002@canterbury.ac.nz>
	<17549.17179.944527.897085@montanaro.dyndns.org>
Message-ID: <448E04B4.9050203@canterbury.ac.nz>

skip at pobox.com wrote:
>     Greg> Multiple values could be written
> 
>     Greg>    case 'a':
>     Greg>    case 'b':
>     Greg>    case 'c':
>     Greg>      ...
> 
> That would be an exception to the rule that a line ending in a colon
> introduces an indented block.

Yes, but I don't see that as a big problem. It
seems fairly clear what it's supposed to mean.

If it bothers you, think of them as alternative
introductions to the same block.

--
Greg

From greg.ewing at canterbury.ac.nz  Tue Jun 13 02:22:13 2006
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Tue, 13 Jun 2006 12:22:13 +1200
Subject: [Python-Dev] Switch statement
In-Reply-To: <17549.17256.241958.724752@montanaro.dyndns.org>
References: <20060610142736.GA19094@21degrees.com.au>
	<17547.19802.361151.705599@montanaro.dyndns.org>
	<448B680A.9020000@canterbury.ac.nz> <448C87E1.6070801@acm.org>
	<17548.39643.646760.994634@montanaro.dyndns.org>
	<448CBE21.6070009@canterbury.ac.nz>
	<17549.17256.241958.724752@montanaro.dyndns.org>
Message-ID: <448E0535.5030206@canterbury.ac.nz>

skip at pobox.com wrote:
>     Greg> A way out of this would be to define the semantics so that the
>     Greg> expression values are allowed to be cached, and the order of
>     Greg> evaluation and testing is undefined. So the first time through,
>     Greg> the values could all be put in a dict, to be looked up thereafter.
> 
> And if those expressions' values would change if evaluated after further
> execution?

Then you deserve what you get for not reading
the docs.

--
Greg

From pje at telecommunity.com  Tue Jun 13 02:22:46 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Mon, 12 Jun 2006 20:22:46 -0400
Subject: [Python-Dev] External Package Maintenance
In-Reply-To: <448DFD8C.9080200@v.loewis.de>
References: <5.1.1.6.0.20060612191424.03911658@sparrow.telecommunity.com>
	<e6kl0a$96q$2@sea.gmane.org> <448DC39F.6020101@comcast.net>
	<5.1.1.6.0.20060612155543.01e809c0@sparrow.telecommunity.com>
	<e6kl0a$96q$2@sea.gmane.org>
	<5.1.1.6.0.20060612191424.03911658@sparrow.telecommunity.com>
Message-ID: <5.1.1.6.0.20060612201330.037eddc0@sparrow.telecommunity.com>

At 01:49 AM 6/13/2006 +0200, Martin v. L?wis wrote:
>Phillip J. Eby wrote:
> > This should definitely be explained to authors who are donating
> > libraries to the stdlib, because from my perspective it seemed to me
> > that I was graciously volunteering to be responsible for *all* the work
> > related to wsgiref.
>
>It's not only about python-wide changes. It is also for regular error
>corrections: whenever I commit a bug fix that somebody contributed, I
>now have to understand the code, and the bug, and the fix.

Again, my point was that I was volunteering to do all of those things for 
wsgiref.


>Under PEP 360, I have to do all of these, *plus* checking PEP 360 to determine
>whether I will step on somebodies' toes. I also have to consult PEP 291,
>of course, to find out whether the code has additional compatibility
>requirements.

In the wsgiref case, you mustn't forget PEP 333 either, actually.  :)


>So ideally, I would like to see the external maintainers state "we can
>deal with occasional breakage arising from somebody forgetting the
>procedures". This would scale, as it would put the responsibility
>for the code on the shoulders of the maintainer. It appears that Thomas
>Heller says this would work for him, and it worked for bsddb and
>PyXML.

I've also already said I can use Barry's approach, making the Python SVN 
repository version the primary home of wsgiref and taking snapshots to make 
releases from.  I didn't realize that cross-directory linkages of that sort 
were allowed, or I'd have done it that way in the first place.  Certainly 
it would've been a more effective use of my time to do so.  :)


From brett at python.org  Tue Jun 13 02:27:47 2006
From: brett at python.org (Brett Cannon)
Date: Mon, 12 Jun 2006 17:27:47 -0700
Subject: [Python-Dev] External Package Maintenance (was Re: Please
	stopchanging wsgiref on the trunk)
In-Reply-To: <5.1.1.6.0.20060612200738.03430ca0@sparrow.telecommunity.com>
References: <5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612121658.03255358@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612125445.01fd9258@sparrow.telecommunity.com>
	<ca471dc20606121042s77c0708fr54f9cea4d30772e9@mail.gmail.com>
	<058501c68e4c$d6e701c0$bf03030a@trilan>
	<5.1.1.6.0.20060612143737.03130e80@sparrow.telecommunity.com>
	<010401c68e7c$678f95f0$1cbf2997@bagio>
	<5.1.1.6.0.20060612200738.03430ca0@sparrow.telecommunity.com>
Message-ID: <bbaeab100606121727m37403e3bm3e9d9f2915c0f29d@mail.gmail.com>

On 6/12/06, Phillip J. Eby <pje at telecommunity.com> wrote:
>
> At 02:00 AM 6/13/2006 +0200, Giovanni Bajo wrote:
> >IMO, the better way is exactly this you depicted: move the official
> >development
> >tree into this Externals/ dir *within* Python's repository. Off that, you
> can
> >have your own branch for experimental work, from which extract your own
> >releases, and merge changes back and forth much more simply (since if
> they
> >reside on the same repository, you can use svnmerge-like features to find
> out
> >modifications and whatnot).
>
> Yes, that's certainly what seems ideal for me as an external developer.  I
> don't know if it addresses the core developers' concerns, though, since it
> would mean having Python code that lives outside of the Lib/ subtree,
> tests
> that live under other places thatn Lib/test, and documentation source that
> lives outside of Doc/.  But if those aren't showstoppers then it seems
> like
> a winner to do it for 2.6.


As long as this is the exception and not the rule, I am fine with this
personally.  ctypes already has its tests in its package directory and no
one has complained yet (and I didn't find it a problem since the traceback
lists the file location of the failing test).

Not every contributed piece of code should go in there, though: the
contributor should have shown a certain level of dedication to their own
userbase, enough to justify the inconvenience to python-dev of having two
different places to check for everything.

And obviously the hope would be to eventually move things out of
Externals/ after a certain amount of time and merge them into the rest of
the tree, so that the dichotomy does not become a huge burden.

On the other hand, the ctypes example of keeping tests in the package
directory instead of Lib/test might be a good thing overall, regardless of
whether the package comes from the outside or not.  That would leave the
package location and the docs as the only two places you would need to
check for changes, and it might help with organization and promote smaller
unit test files for packages that spread their tests across multiple files.

-Brett

From steve at holdenweb.com  Tue Jun 13 02:42:14 2006
From: steve at holdenweb.com (Steve Holden)
Date: Tue, 13 Jun 2006 01:42:14 +0100
Subject: [Python-Dev] Dropping externally maintained packages
 (Was:Please stop changing wsgiref on the trunk)
In-Reply-To: <5.1.1.6.0.20060612194112.01eb1198@sparrow.telecommunity.com>
References: <5.1.1.6.0.20060612190701.01e98790@sparrow.telecommunity.com>	<e6kb3d$2gg$1@sea.gmane.org>	<5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com>	<5.1.1.6.0.20060612121658.03255358@sparrow.telecommunity.com>	<ca471dc20606120943o3f9d97ccle62db7539edec1c1@mail.gmail.com>	<448D9EA1.9000209@v.loewis.de>
	<e6k8ug$p9q$1@sea.gmane.org>	<bbaeab100606121105t3a0e633fne59f67241cc28a@mail.gmail.com>	<e6kb3d$2gg$1@sea.gmane.org>	<5.1.1.6.0.20060612190701.01e98790@sparrow.telecommunity.com>
	<448DFA84.1040507@v.loewis.de>
	<5.1.1.6.0.20060612194112.01eb1198@sparrow.telecommunity.com>
Message-ID: <e6l1kg$ev0$1@sea.gmane.org>

Phillip J. Eby wrote:
[...]
> So, to summarize, it's all actually Tim's fault, but only in a parallel 
> universe where nobody believes in unit testing.  ;-)
> 
I'm sorry to contradict you, but every issue of significance is already 
known to be Barry's fault.

probably-until-the-end-of-time-ly y'rs  - steve
-- 
Steve Holden       +44 150 684 7255  +1 800 494 3119
Holden Web LLC/Ltd          http://www.holdenweb.com
Love me, love my blog  http://holdenweb.blogspot.com
Recent Ramblings     http://del.icio.us/steve.holden


From tom at vector-seven.com  Tue Jun 13 02:49:27 2006
From: tom at vector-seven.com (Thomas Lee)
Date: Tue, 13 Jun 2006 10:49:27 +1000
Subject: [Python-Dev] Switch statement
Message-ID: <20060613004927.GA7988@21degrees.com.au>

On Mon, Jun 12, 2006 at 11:33:49PM +0200, Michael Walter wrote:
> Maybe "switch" became a keyword with the patch..
> 
> Regards,
> Michael
> 

That's correct.

> On 6/12/06, M.-A. Lemburg <mal at egenix.com> wrote:
> >
> > Could you upload your patch to SourceForge ? Then I could add
> > it to the PEP.
> >

It's already up there :) I thought I sent that through in another
e-mail, but maybe not:

http://sourceforge.net/tracker/index.php?func=detail&aid=1504199&group_id=5470&atid=305470

Complete with documentation changes and a unit test.

> > Thomas wrote a patch which implemented the switch statement
> > using an opcode. The reason was probably that switch works
> > a lot like e.g. the for-loop which also opens a new block.
> >

No, Skip explained this in an earlier e-mail: apparently some
programming languages use a compile-time generated lookup table
for switch statements rather than COMPARE_OP for each case. The
restriction is, of course, that you're stuck with constants for each
case statement.

In a programming language like Python, where there are no named
constants, the usefulness of such a construct might be questioned.
Again, see Skip's earlier e-mails.

> > Could you explain how your patch works ?
> >

1. Evaluate the "switch" expression so that it's at the top of the stack
2. For each case clause:
2.1. Generate a DUP_TOP to duplicate the switch value for a comparison
2.2. Evaluate the "case" expression
2.3. COMPARE_OP(PyCmp_EQ)
2.4. Jump to the next case statement if false
2.5. Otherwise, POP_TOP and execute the suite for the case clause
2.6. Then jump to 3
3. POP_TOP to remove the evaluated switch expression from the stack

As you can see from the above, my patch generates a COMPARE_OP for each
case, so you can use expressions - not just constants - for cases.

All of this is in the code found in Python/compile.c.
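
For illustration, a switch of that shape corresponds roughly to the
following plain-Python behaviour (an illustrative desugaring, not code from
the patch; note that the case expressions need not be constants):

    # Proposed form (per the patch):
    #
    #     switch token:
    #         case 'if':
    #             kind = 'conditional'
    #         case 'while':
    #             kind = 'loop'
    #
    # Rough plain-Python equivalent: the switch expression is evaluated
    # once (DUP_TOP keeps a copy on the stack), then compared for
    # equality against each case expression in turn.
    token = 'while'
    _switch_value = token
    if _switch_value == 'if':          # COMPARE_OP(PyCmp_EQ)
        kind = 'conditional'
    elif _switch_value == 'while':     # COMPARE_OP(PyCmp_EQ)
        kind = 'loop'
    print(kind)                        # -> loop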

Cheers,
Tom

-- 
Tom Lee
http://www.vector-seven.com




From janssen at parc.com  Tue Jun 13 05:18:35 2006
From: janssen at parc.com (Bill Janssen)
Date: Mon, 12 Jun 2006 20:18:35 PDT
Subject: [Python-Dev] Import semantics
In-Reply-To: Your message of "Mon, 12 Jun 2006 12:20:53 PDT."
	<448DBE95.2030703@strakt.com> 
Message-ID: <06Jun12.201844pdt."58641"@synergy1.parc.xerox.com>

> this is mildy insulting, to the people that spent time trying to find 
> the best compromises between various issues and keep jython alive.

Sorry, didn't mean to disparage that work.

Bill

From nnorwitz at gmail.com  Tue Jun 13 07:10:23 2006
From: nnorwitz at gmail.com (Neal Norwitz)
Date: Mon, 12 Jun 2006 22:10:23 -0700
Subject: [Python-Dev] External Package Maintenance (was Re: Please
	stopchanging wsgiref on the trunk)
In-Reply-To: <5.1.1.6.0.20060612200738.03430ca0@sparrow.telecommunity.com>
References: <5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612121658.03255358@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612125445.01fd9258@sparrow.telecommunity.com>
	<ca471dc20606121042s77c0708fr54f9cea4d30772e9@mail.gmail.com>
	<058501c68e4c$d6e701c0$bf03030a@trilan>
	<5.1.1.6.0.20060612143737.03130e80@sparrow.telecommunity.com>
	<010401c68e7c$678f95f0$1cbf2997@bagio>
	<5.1.1.6.0.20060612200738.03430ca0@sparrow.telecommunity.com>
Message-ID: <ee2a432c0606122210o1de6b526u5093fc0f9450dc12@mail.gmail.com>

On 6/12/06, Phillip J. Eby <pje at telecommunity.com> wrote:
> At 02:00 AM 6/13/2006 +0200, Giovanni Bajo wrote:
> >IMO, the better way is exactly this you depicted: move the official
> >development
> >tree into this Externals/ dir *within* Python's repository. Off that, you can
> >have your own branch for experimental work, from which extract your own
> >releases, and merge changes back and forth much more simply (since if they
> >reside on the same repository, you can use svnmerge-like features to find out
> >modifications and whatnot).
>
> Yes, that's certainly what seems ideal for me as an external developer.  I
> don't know if it addresses the core developers' concerns, though, since it
> would mean having Python code that lives outside of the Lib/ subtree, tests
> that live under other places thatn Lib/test, and documentation source that
> lives outside of Doc/.  But if those aren't showstoppers then it seems like
> a winner to do it for 2.6.

I'm not sure I understand.  Is something like this the proposed
directory structure (all within the python repo):

  python/trunk/ - current top-level where Lib, Modules, Python, Objects, etc. live
      + Add another subdir under trunk/ called Externals/
          + under Externals/ would be a directory per project (wsgiref, etree, etc.)

And each project could have its own directory structure?

This probably wouldn't be so bad.  It would be particularly good if
the subdirs under Externals/project could be similar (i.e., they each
have Doc, Lib, src, etc. directories).

The more consistency we have, the easier it is to remember and follow the rules.

n

From steven.bethard at gmail.com  Tue Jun 13 07:45:48 2006
From: steven.bethard at gmail.com (Steven Bethard)
Date: Mon, 12 Jun 2006 23:45:48 -0600
Subject: [Python-Dev] DRAFT: python-dev summary for 2006-04-16 to 2006-04-30
Message-ID: <d11dcfba0606122245s15c25255m46d7cfbf7e619dfd@mail.gmail.com>

Ok, here's the summary for the second half of April.  My brain is numb
from the setuptools and PEP 343 discussions, so please have a look
over it and see what I messed up. ;-)

As always, if you have any corrections or comments, please let me know.


=============
Announcements
=============

---------------------------
Python 2.5 alpha 2 released
---------------------------

Python 2.5 alpha 2 was released on April 27th.  If you haven't tried
it out yet, now's the time!  `PEP 356`_ has more details and the full
schedule.

.. _PEP 356: http://www.python.org/dev/peps/pep-0356/

Contributing threads:

- `TRUNK FREEZE from 00:00 UTC, 27th April 2006 for 2.5a2
<http://mail.python.org/pipermail/python-dev/2006-April/064382.html>`__
- `RELEASED Python 2.5 (alpha 2)
<http://mail.python.org/pipermail/python-dev/2006-April/064479.html>`__
- `trunk is UNFROZEN
<http://mail.python.org/pipermail/python-dev/2006-April/064483.html>`__
- `2.5 open issues
<http://mail.python.org/pipermail/python-dev/2006-April/064519.html>`__

----------------------------
QOTF: Quote of the Fortnight
----------------------------

Phillip J. Eby, on the social behavior of package distributors (and
why setuptools goes to such efforts to guess what package distributors
were really trying to do):

    The problem isn't fundamentally a technical one, but a social one.
    You can effect social change through technology, but not by being
    some random guy with a nagging 'bot.

(And yes, Phillip, you can nominate yourself for QOTF.) ;-)

Contributing thread:

- `setuptools: past, present, future
<http://mail.python.org/pipermail/python-dev/2006-April/064168.html>`__


=========
Summaries
=========

-------------------------------
Adding setuptools to the stdlib
-------------------------------

Phillip J. Eby started checking in some of the `setuptools`_ modules
for Python 2.5.  People started to get nervous about some of the
modules because:

* Setuptools changes the distutils "install" command to install
everything as zipfile eggs
* Setuptools changes the distutils "sdist" command to automate
generation of MANIFEST.in and MANIFEST.
* Setuptools adds 'site.py' files so that .pth files are processed in
PYTHONPATH directories.
* Setuptools monkey-patches three classes from distutils:
Distribution, Command, and Extension.

While most people recognized the need for the features that setuptools
provides, a number of them objected to introducing setuptools as a new
package and instead argued for merging setuptools into distutils.
Phillip said that such a merge might be possible, but that there were
a fair number of backwards compatibility concerns (e.g. its
redefinition of the "install" and "sdist" commands) and that it would
make it much harder for him to maintain external versions of
setuptools compatible with Python 2.3 and 2.4.

In the end, Phillip suggested withdrawing setuptools from Python 2.5,
and figuring out how to merge setuptools into distutils for the 2.6
release.  In particular, he asked for help with the mostly trivial
porting of the "alias", "rotate", "saveopts" and "setopt" commands.
Additionally, he made available some information about the `setuptools
internals`_ for anyone who wanted to see better what was going on
behind the scenes.

.. _setuptools: http://peak.telecommunity.com/DevCenter/setuptools
.. _setuptools internals: http://peak.telecommunity.com/DevCenter/EggFormats

Contributing threads:

- `[Python-checkins] r45510 - python/trunk/Lib/pkgutil.py
python/trunk/Lib/pydoc.py
<http://mail.python.org/pipermail/python-dev/2006-April/063838.html>`__
- `Place for setuptools manuals and source for .exe files?
<http://mail.python.org/pipermail/python-dev/2006-April/063846.html>`__
- `setuptools in the stdlib ([Python-checkins] r45510 -
python/trunk/Lib/pkgutil.py python/trunk/Lib/pydoc.py)
<http://mail.python.org/pipermail/python-dev/2006-April/063848.html>`__
- `setuptools in the stdlib ( r45510 - python/trunk/Lib/pkgutil.py
python/trunk/Lib/pydoc.py)
<http://mail.python.org/pipermail/python-dev/2006-April/063872.html>`__
- `setuptools in the stdlib
<http://mail.python.org/pipermail/python-dev/2006-April/063877.html>`__
- `Raising objections (was: setuptools in the stdlib)
<http://mail.python.org/pipermail/python-dev/2006-April/063884.html>`__
- `Raising objections
<http://mail.python.org/pipermail/python-dev/2006-April/063906.html>`__
- `3rd party extensions hot-fixing the stdlib (setuptools in the
stdlib) <http://mail.python.org/pipermail/python-dev/2006-April/063930.html>`__
- `setuptools in 2.5.
<http://mail.python.org/pipermail/python-dev/2006-April/063952.html>`__
- `magic in setuptools (Was: setuptools in the stdlib)
<http://mail.python.org/pipermail/python-dev/2006-April/063966.html>`__
- `Distutils for Python 2.1 (was "Raising objections")
<http://mail.python.org/pipermail/python-dev/2006-April/064000.html>`__
- `setuptools in 2.5. (summary)
<http://mail.python.org/pipermail/python-dev/2006-April/064001.html>`__
- `Distutils thoughts
<http://mail.python.org/pipermail/python-dev/2006-April/064079.html>`__
- `setuptools: past, present, future
<http://mail.python.org/pipermail/python-dev/2006-April/064145.html>`__
- `Easy, uncontroversial setuptools->distutils tasks
<http://mail.python.org/pipermail/python-dev/2006-April/064150.html>`__
- `Internal documentation for egg formats now available
<http://mail.python.org/pipermail/python-dev/2006-April/064348.html>`__

-----------------------------
PEP 343: The "with" Statement
-----------------------------

A.M. Kuchling asked about some terminology in the `PEP 343`_
documentation.  Originally, the objects with ``__enter__`` and
``__exit__`` methods which were used in the with-statement were called
"context managers".  When the additional ``__context__`` method was
introduced, the term "manageable context" was given to objects that
provided it.  At PyCon, the PEP was changed to switch the terms,
though the names of ``decimal.Context`` and
``contextlib.contextmanager`` were not changed correspondingly.  This
all led to a rather confusing discussion where people argued over the
terminology, appropriately summarized in this quote by Phillip J. Eby:

    ...everybody has been perfectly clear, we just haven't really
    gotten on the same page about which words mean what.

People then played around with a variety of alternative terms
("context factory", "context specifier", etc.) but in the end, Guido
decided that it was much clearer to just drop the ``__context__``
method entirely -- most use-cases at the time just had ``__context__``
returning self, and the few use-cases that didn't could be accommodated
with an appropriately named method or function.  And dropping the
``__context__`` method meant that only the term "context manager" was
necessary, removing most of the earlier terminological confusion.
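
A minimal sketch of the simplified protocol (illustrative only; the file
path is just an example)::

    class opened(object):
        """Minimal context manager: only __enter__ and __exit__ needed."""

        def __init__(self, path):
            self.path = path

        def __enter__(self):
            self.f = open(self.path)
            return self.f

        def __exit__(self, exc_type, exc_value, traceback):
            self.f.close()
            return False   # don't suppress exceptions

    # with opened('/etc/passwd') as f:
    #     for line in f:
    #         ...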

.. _PEP 343: http://www.python.org/dev/peps/pep-0343/

Contributing threads:

- `PEP 343: confusing context terminology
<http://mail.python.org/pipermail/python-dev/2006-April/063842.html>`__
- `Why are contexts also managers? (was r45544 -
peps/trunk/pep-0343.txt)
<http://mail.python.org/pipermail/python-dev/2006-April/063863.html>`__
- `With context, please
<http://mail.python.org/pipermail/python-dev/2006-April/064191.html>`__
- `PEP 343 update (with statement context terminology)
<http://mail.python.org/pipermail/python-dev/2006-April/064213.html>`__
- `Must objects with __enter__/__exit__ also supply __context__?
<http://mail.python.org/pipermail/python-dev/2006-April/064296.html>`__
- `Updated context management documentation
<http://mail.python.org/pipermail/python-dev/2006-April/064331.html>`__
- `[Python-checkins] r45721 - python/trunk/Lib/test/test_with.py
<http://mail.python.org/pipermail/python-dev/2006-April/064385.html>`__
- `More on contextlib - adding back a contextmanager decorator
<http://mail.python.org/pipermail/python-dev/2006-April/064619.html>`__

------------------------------------------------
Dropping __init__.py requirement for subpackages
------------------------------------------------

After some pressure from other Googlers, Guido suggested that Python
drop the requirement that subpackages have an __init__.py file (though
top-level packages would still require one).  People were concerned
that a number of non-subpackage directories would suddenly become
packages, e.g. "test" in the Python distribution.  There was also some
concern about tools that depended on the current requirement -- any
tool that currently searches directories for __init__.py files would
have to be updated to work with the new semantics.  In the end,
`Thomas Wouters provided a patch`_ that makes Python issue a warning
when an import would have succeeded except that __init__.py was
missing.

.. _Thomas Wouters provided a patch: http://bugs.python.org/1477281

Contributing threads:

- `Dropping __init__.py requirement for subpackages
<http://mail.python.org/pipermail/python-dev/2006-April/064400.html>`__
- `[Python-checkins] r45770 - in python/trunk:
<http://mail.python.org/pipermail/python-dev/2006-April/064583.html>`__

-------------------------------------
Python and Visual Studio 2005 Express
-------------------------------------

Guido noticed that Microsoft had announced that `Visual Studio 2005
express will be free forever`_, and asked if Python should try to
accommodate this.  People seemed to think that switching to Visual
Studio 2005 for Python 2.5 was too soon but that 2.6 might be a
reasonable candidate, particularly since Microsoft no longer provides
downloads for the Visual Studio 2003 compiler.

.. _Visual Studio 2005 express will be free forever:
http://msdn.microsoft.com/vstudio/express/

Contributing thread:

- `Visual studio 2005 express now free
<http://mail.python.org/pipermail/python-dev/2006-April/064087.html>`__

----------------------------
Adding wsgiref to the stdlib
----------------------------

Guido suggested adding Phillip J. Eby's `reference implementation`_ of
`PEP 333`_ (the Web Server Gateway Interface) to the standard library.
People were generally in favor, though there was some brief
discussion about adding a few extras including Ian Bicking's
paste.lint validator and a simple dispatch mechanism.  It looked like
wsgiref would be added in time for Python 2.5.

.. _reference implementation: http://svn.eby-sarna.com/wsgiref/
.. _PEP 333: http://www.python.org/dev/peps/pep-0333/

Contributing threads:

- `Adding wsgiref
<http://mail.python.org/pipermail/python-dev/2006-April/064199.html>`__
- `Adding wsgiref to stdlib
<http://mail.python.org/pipermail/python-dev/2006-April/064549.html>`__
- `[Web-SIG] Adding wsgiref to stdlib
<http://mail.python.org/pipermail/python-dev/2006-April/064556.html>`__

--------------------
Buildbots on Windows
--------------------

Some of the Windows buildbots were hanging when the buildbot slave
terminated the run early, but failed to terminate python_d.exe, thus
making it impossible to recompile anything (since python_d.exe cannot
be deleted while running on Windows).  No one was certain what was
causing the problems (though it seemed like it might be some sort of
indexing service running in the background) but Martin v. L?wis was
able to unstick the machines by creating a kill_python.exe application
which looks for python_d.exe and kills it.

Contributing thread:

- `windows buildbot failures
<http://mail.python.org/pipermail/python-dev/2006-April/063792.html>`__


----------------------------------
Externally maintained packages PEP
----------------------------------

Following up on work from the last fortnight, Brett Cannon put
together information on all the packages like sqlite and elementtree
that are part of the Python standard library, but have their central
repository somewhere else.  It looked like a PEP would soon be
forthcoming.

Contributing threads:

- `need info for externally maintained modules PEP
<http://mail.python.org/pipermail/python-dev/2006-April/063788.html>`__
- `draft of externally maintained packages PEP
<http://mail.python.org/pipermail/python-dev/2006-April/064362.html>`__
- `(hopefully) final draft of externally maintained code PEP
<http://mail.python.org/pipermail/python-dev/2006-April/064604.html>`__

-----------------------------
PEP 359: The "make" statement
-----------------------------

Discussion on `PEP 359`_'s "make" statement continued, looking in
particular at ways that breaking the parallel with the class statement
might make the "make" statement easier to use.  In the end Guido asked
me to kill the discussion, so I withdrew the PEP.

.. _PEP 359: http://www.python.org/dev/peps/pep-0359/

Contributing threads:

- `PEP 359: The "make" Statement
<http://mail.python.org/pipermail/python-dev/2006-April/063776.html>`__
- `Updated: PEP 359: The make statement
<http://mail.python.org/pipermail/python-dev/2006-April/063847.html>`__

--------------------------------
PEP 3102: Keyword-only arguments
--------------------------------

Talin introduced `PEP 3102`_ which proposes support for "keyword-only"
arguments, that is, arguments that must be specified with the
name=value keyword syntax, not as positional arguments.  The proposal
came in two parts:

* Allowing keyword arguments after varargs, e.g.::

      def add_option(*opts, action='store', type='string', ...)

* Allowing keyword arguments with no varargs, e.g.::

      def compare(a, b, *, key=None):

There was widespread support for the first part of the PEP,
particularly since such signatures exist in functions like the builtin
min and max already (which gained key= arguments in Python 2.5).  People
were less sure about the latter part and suggested a variety of other
syntaxes, though Guido had already decided on the "*" syntax.
Discussion continued on this issue into the next fortnight.
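
Under the proposal, arguments after the bare "*" can only be passed by
name, roughly like this (illustrative sketch in the proposed syntax, which
the Pythons of the time do not accept)::

    def compare(a, b, *, key=None):
        if key is not None:
            a, b = key(a), key(b)
        return (a > b) - (a < b)

    compare(3, -5)                # OK: returns 1
    compare(3, -5, key=abs)       # OK: key passed by name, returns -1
    # compare(3, -5, abs)         # TypeError: key cannot be passed positionally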

.. _PEP 3102: http://www.python.org/dev/peps/pep-3102/

Contributing thread:

- `PEP 3102: Keyword-only arguments
<http://mail.python.org/pipermail/python-dev/2006-April/064598.html>`__

------------------------------------
String formatting for size_t objects
------------------------------------

Brett Cannon ran into some problems with the formatting of size_t
values.  What was really needed was the C99 "z" printf modifier, but
since Python aims for C89 conformance, that can't be used
unconditionally.  In the end, Brett modified Python's configure checks
to test whether the "z" modifier is supported and then used it whenever
possible.

Contributing threads:

- `PY_FORMAT_SIZE_T warnings on OS X
<http://mail.python.org/pipermail/python-dev/2006-April/063280.html>`__
- `PY_FORMAT_SIZE_T warnings on OS X
<http://mail.python.org/pipermail/python-dev/2006-April/064363.html>`__

---------------------------------
The Grammar file and SyntaxErrors
---------------------------------

In working on a parser for a subset of the Python language, Michael
Foord ran into some trouble using the Grammar/Grammar file in the
Python distribution.  In particular, the problem he was having was the
entry::

    list_for: 'for' exprlist 'in' testlist_safe [list_iter]

which indicates that any expression list can be used as the assignment
expression in a list comprehension, so that code like the following is
valid by the Grammar file::

    [1 for 1 in n]

Of course Python doesn't allow this, but as Guido explained, the
expression that Python really wants is not LL(1), so Python fudges it in
the parser, and then issues a SyntaxError later by checking that the
nodes in the parse tree actually match the restricted type of nodes
required.
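
In an interactive session the mismatch only shows up as a late SyntaxError
(illustrative transcript; the exact message wording varies across
versions)::

    >>> [1 for 1 in [1, 2, 3]]
      File "<stdin>", line 1
    SyntaxError: can't assign to literal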

Contributing thread:

- `Python Grammar Ambiguity
<http://mail.python.org/pipermail/python-dev/2006-April/064276.html>`__

-------------------------
Pybench now in Python SVN
-------------------------

Marc-Andre Lemburg offered to contribute pybench to Python in response
to a previous thread indicating some of the problems with pystone.
It's now checked in under the Tools directory.

Contributing thread:

- `2.5a1 Performance
<http://mail.python.org/pipermail/python-dev/2006-April/063849.html>`__

---------------------------
Adding threading.released()
---------------------------

Nick Coghlan asked if the ``released()`` function suggested in `PEP
343`_ should be added for Python 2.5 to support code like::

    def thread_safe():
        with sync_lock:
            # This is thread-safe
            with released(sync_lock):
                # Perform long-running or blocking operation
                # that doesn't need to hold the lock
            # We have the lock back here

While it seemed occasionally useful (e.g. in the implementation of
threading._Condition), this one looked like it would be put off until
folks had had more experience with context managers.
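
A minimal sketch of such a helper, assuming contextlib's
``contextmanager`` decorator (illustrative only, not the actual proposed
implementation)::

    from contextlib import contextmanager

    @contextmanager
    def released(lock):
        # Temporarily release an already-held lock for the duration of
        # the with-block, reacquiring it on the way out.
        lock.release()
        try:
            yield
        finally:
            lock.acquire()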

Contributing thread:

- `Proposed addition to threading module - released
<http://mail.python.org/pipermail/python-dev/2006-April/064243.html>`__

------------------------------
Removing the me_hash for dicts
------------------------------

Kirat Singh asked about removing me_hash from the dict struct to save
some space.  People generally seemed opposed to trading off speed for
space, particularly since it would slow down classes that define their
own ``__hash__``.

Contributing thread:

- `Reducing memory overhead for dictionaries by removing me_hash
<http://mail.python.org/pipermail/python-dev/2006-April/064212.html>`__

------------------------------------
PEP 3101: Advanced String Formatting
------------------------------------

Talin introduced `PEP 3101`_ which proposes a new formatting system
through a new ``string.format()`` method.  This would allow specifying
items through both positional and keyword arguments, as well as
allowing user-defined classes to support their own set of format
specifiers, e.g.::

    "The story of {0}, {1}, and {c}".format(a, b, c=d)
    "Today is: {0:%x}".format(datetime.now())

There was some brief discussion about how to escape the brace
characters, and some opposition to the compound name syntax (which
allowed __getitem__-style access using a dotted notation) before the
thread trailed off.
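
The hook for user-defined classes is a ``__format__`` method that receives
the format specifier; a rough sketch of the idea (illustrative only, not
the PEP's reference implementation)::

    class Money(object):
        def __init__(self, amount, currency):
            self.amount = amount
            self.currency = currency

        def __format__(self, spec):
            # 'spec' is whatever follows the colon inside the braces.
            if spec == 'short':
                return '%.2f' % self.amount
            return '%.2f %s' % (self.amount, self.currency)

    # "Total: {0:short}".format(Money(9.99, 'EUR'))  ->  'Total: 9.99'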

.. _PEP 3101: http://www.python.org/dev/peps/pep-3101/

Contributing thread:

- `PEP 3101: Advanced String Formatting
<http://mail.python.org/pipermail/python-dev/2006-April/064608.html>`__

---------------------
Replacing SourceForge
---------------------

Brett Cannon asked for some things that people liked and disliked
about SourceForge as the Infrastructure committee moved forward on
finding a replacement tracker for SF.  There weren't too many positive
comments, but some of the things SF was missing are:

* A way to flag a patch as "reviewed"; "pending" won't work -- it just
sets a timeout period and then deletes the message.
* A way to tag a tracker item by the relevant documentation section
(e.g. language reference, library reference, etc.)
* The ability to participate in a bug discussion through email instead
of the web interface
* The ability to define a filter on tracker items so that you can be
emailed when a new tracker item meets your criteria
* A way to identify relations between bugs, e.g. this bug closes another bug.
* The ability to close a bug with a checkin.

A Call for Trackers was to be soon forthcoming.

Contributing threads:

- `Reviewed patches [was: SoC proposal: "fix some old, old bugs in
sourceforge"] <http://mail.python.org/pipermail/python-dev/2006-April/064342.html>`__
- `what do you like about other trackers and what do you hate about
SF? <http://mail.python.org/pipermail/python-dev/2006-April/064343.html>`__
- `Reviewed patches
<http://mail.python.org/pipermail/python-dev/2006-April/064387.html>`__

-----------------------------------------------
Making decorated functions easier to introspect
-----------------------------------------------

Nick Coghlan offered to add a function to the new functools module to
make writing introspectable decorators easier.  Many decorators that
return new functions look something like::

    def decorator(func):
        def new_func(*args, **kwargs):
            ...
        return new_func

But the newly created function's metadata (__name__, __doc__, etc.) is
then inconsistent with the original function's.  Originally, Nick had
proposed introducing a decorator function to do this, but he settled
instead on a simple helper function, ``functools.update_wrapper``,
which can be called by decorator writers to update the appropriate
metadata.
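
A decorator author might use it roughly like this (illustrative sketch
only)::

    import functools

    def logged(func):
        def wrapper(*args, **kwargs):
            print('calling %s' % func.__name__)
            return func(*args, **kwargs)
        # Copy __name__, __doc__, __module__ (and update __dict__) from
        # func onto wrapper so introspection sees the original metadata.
        functools.update_wrapper(wrapper, func)
        return wrapper

    @logged
    def square(x):
        """Return x squared."""
        return x * x

    # square.__name__ == 'square' and square.__doc__ is preserved.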

Contributing thread:

- `Adding functools.decorator
<http://mail.python.org/pipermail/python-dev/2006-April/064620.html>`__

---------------------------
Tkinter hangs due to Tk bug
---------------------------

Thomas Wouters noticed that test_tcl locked up when Tcl and Tk were
compiled with --enable-threads and he tried to refleaktest Tkinter.
It seems that, despite the lack of any note in the Tk documentation,
Tk_Init() doesn't like being called twice even when the first call
results in an error.  Martin v. L?wis reported the bug to the Tk folks
and added a workaround for the current behavior.

Contributing thread:

- `Tkinter lockups.
<http://mail.python.org/pipermail/python-dev/2006-April/064235.html>`__


================
Deferred Threads
================
- `binary trees. Review obmalloc.c
<http://mail.python.org/pipermail/python-dev/2006-April/064473.html>`__
- `introducing the experimental pyref wiki
<http://mail.python.org/pipermail/python-dev/2006-April/064591.html>`__
- `methods on the bytes object
<http://mail.python.org/pipermail/python-dev/2006-April/064613.html>`__


==================
Previous Summaries
==================
- `[Python-checkins] r45321 - in python/trunk:
Lib/test/test_traceback.py Lib/traceback.py Misc/NEWS
<http://mail.python.org/pipermail/python-dev/2006-April/063772.html>`__
- `Py_Finalize does not release all memory, not even closely
<http://mail.python.org/pipermail/python-dev/2006-April/063773.html>`__
- `Preserving the blamelist
<http://mail.python.org/pipermail/python-dev/2006-April/063775.html>`__
- `PyObject_REPR()
<http://mail.python.org/pipermail/python-dev/2006-April/063803.html>`__
- `New-style icons, .desktop file
<http://mail.python.org/pipermail/python-dev/2006-April/063967.html>`__
- `Win64 AMD64 (aka x64) binaries available64
<http://mail.python.org/pipermail/python-dev/2006-April/064125.html>`__


===============
Skipped Threads
===============
- `valgrind reports
<http://mail.python.org/pipermail/python-dev/2006-April/063774.html>`__
- `Any reason that any()/all() do not take apredicateargument?
<http://mail.python.org/pipermail/python-dev/2006-April/063777.html>`__
- `refleaks &amp; test_tcl &amp; threads
<http://mail.python.org/pipermail/python-dev/2006-April/063785.html>`__
- `2.5 post-alpha1 broken on mac-intel machines
<http://mail.python.org/pipermail/python-dev/2006-April/063793.html>`__
- `Summer of Code preparation
<http://mail.python.org/pipermail/python-dev/2006-April/063795.html>`__
- `[C++-sig] GCC version compatibility
<http://mail.python.org/pipermail/python-dev/2006-April/063799.html>`__
- `remote debugging with pdb
<http://mail.python.org/pipermail/python-dev/2006-April/063800.html>`__
- `Py_BEGIN_ALLOW_THREADS around readdir()?
<http://mail.python.org/pipermail/python-dev/2006-April/063801.html>`__
- `FYI: more clues re: tee+generator leak
<http://mail.python.org/pipermail/python-dev/2006-April/063802.html>`__
- `[ python-Patches-790710 ] breakpoint command lists in pdb
<http://mail.python.org/pipermail/python-dev/2006-April/063809.html>`__
- `FishEye on Python CVS Repository
<http://mail.python.org/pipermail/python-dev/2006-April/063810.html>`__
- `problem installing current cvs - TabError
<http://mail.python.org/pipermail/python-dev/2006-April/063811.html>`__
- `fat binaries for OSX
<http://mail.python.org/pipermail/python-dev/2006-April/063814.html>`__
- `Returning -1 from function with unsigned long type
<http://mail.python.org/pipermail/python-dev/2006-April/063818.html>`__
- `adding Construct to the standard library?
<http://mail.python.org/pipermail/python-dev/2006-April/063819.html>`__
- `posix_confstr seems wrong
<http://mail.python.org/pipermail/python-dev/2006-April/063820.html>`__
- `pdb segfaults in 2.5 trunk?
<http://mail.python.org/pipermail/python-dev/2006-April/063825.html>`__
- `possible fix for recursive __call__ segfault
<http://mail.python.org/pipermail/python-dev/2006-April/063828.html>`__
- `How to make _sre.c compile w/ C++?
<http://mail.python.org/pipermail/python-dev/2006-April/063831.html>`__
- `Gentoo failures - it's blaming me...
<http://mail.python.org/pipermail/python-dev/2006-April/063832.html>`__
- `a flattening operator?
<http://mail.python.org/pipermail/python-dev/2006-April/063867.html>`__
- `[Python-checkins] r45505 - python/trunk/Modules/posixmodule.c
<http://mail.python.org/pipermail/python-dev/2006-April/063874.html>`__
- `bug with __dict__?
<http://mail.python.org/pipermail/python-dev/2006-April/063931.html>`__
- `Python Software Foundation seeks mentors and students for Google
Summer of Code <http://mail.python.org/pipermail/python-dev/2006-April/063965.html>`__
- `PEP 355 (object-oriented paths)
<http://mail.python.org/pipermail/python-dev/2006-April/063977.html>`__
- `proposal: evaluated string
<http://mail.python.org/pipermail/python-dev/2006-April/064005.html>`__
- `SVN question
<http://mail.python.org/pipermail/python-dev/2006-April/064009.html>`__
- `Google Summer of Code proposal: Pdb improvments
<http://mail.python.org/pipermail/python-dev/2006-April/064012.html>`__
- `Stream codecs and _multibytecodec
<http://mail.python.org/pipermail/python-dev/2006-April/064051.html>`__
- `python 2.5alpha and naming schemes
<http://mail.python.org/pipermail/python-dev/2006-April/064052.html>`__
- `unrecognized command line option "-Wno-long-double"
<http://mail.python.org/pipermail/python-dev/2006-April/064053.html>`__
- `extended bitwise operations
<http://mail.python.org/pipermail/python-dev/2006-April/064054.html>`__
- `Weekly Python Patch/Bug Summary
<http://mail.python.org/pipermail/python-dev/2006-April/064070.html>`__
- `[pypy-dev] Python Software Foundation seeks mentors and students
for Google Summer of Code
<http://mail.python.org/pipermail/python-dev/2006-April/064074.html>`__
- `Google Summer of Code proposal: improvement of long int and adding
new types/modules.
<http://mail.python.org/pipermail/python-dev/2006-April/064076.html>`__
- `bin codec + EOF
<http://mail.python.org/pipermail/python-dev/2006-April/064078.html>`__
- `Module names in Python: was Re: python 2.5alpha and naming schemes
<http://mail.python.org/pipermail/python-dev/2006-April/064094.html>`__
- `Removing Python 2.4 -m switch helpers from import.c
<http://mail.python.org/pipermail/python-dev/2006-April/064099.html>`__
- `patch #1454481 - runtime tunable thread stack size
<http://mail.python.org/pipermail/python-dev/2006-April/064108.html>`__
- `[Python-3000-checkins] r45617 - in
python/branches/p3yk/Lib/plat-mac/lib-scriptpackages:
CodeWarrior/CodeWarrior_suite.py CodeWarrior/__init__.py
Explorer/__init__.py Finder/Containers_and_folders.py Finder/Files.py
Finder/Finder_Basics.py Finder
<http://mail.python.org/pipermail/python-dev/2006-April/064115.html>`__
- `[Python-3000-checkins] r45617 - in
python/branches/p3yk/Lib/plat-mac/lib-scriptpackages:
CodeWarrior/CodeWarrior_suite.py CodeWarrior/__init__.py
Explorer/__init__.py Finder/Containers_and_folders.py Finder/Files.py
Finder/Finder_Bas
<http://mail.python.org/pipermail/python-dev/2006-April/064117.html>`__
- `Reject or update PEP 243?
<http://mail.python.org/pipermail/python-dev/2006-April/064147.html>`__
- `IronPython Performance
<http://mail.python.org/pipermail/python-dev/2006-April/064149.html>`__
- `[pypy-dev] Python Software Foundation seeksmentors and students for
Google Summer of Code
<http://mail.python.org/pipermail/python-dev/2006-April/064155.html>`__
- `New artwork for the osx port
<http://mail.python.org/pipermail/python-dev/2006-April/064182.html>`__
- `[Python-checkins] Python Regression Test Failures refleak (1)
<http://mail.python.org/pipermail/python-dev/2006-April/064206.html>`__
- `PEP 8 pylintrc?
<http://mail.python.org/pipermail/python-dev/2006-April/064231.html>`__
- `Compiling w/ C++ (was: Reducing memory overhead for dictionaries by
removing me_hash)
<http://mail.python.org/pipermail/python-dev/2006-April/064233.html>`__
- `Builtin exit, good in interpreter, bad in code.
<http://mail.python.org/pipermail/python-dev/2006-April/064239.html>`__
- `Buildbot messages and the build svn revision number
<http://mail.python.org/pipermail/python-dev/2006-April/064251.html>`__
- `Google Summer of Code proposal: New class for work with binary
trees AVL and RB as with the standard dictionary.
<http://mail.python.org/pipermail/python-dev/2006-April/064260.html>`__
- `gettext.py bug #1448060
<http://mail.python.org/pipermail/python-dev/2006-April/064263.html>`__
- `SoC proposal: "fix some old, old bugs in sourceforge"
<http://mail.python.org/pipermail/python-dev/2006-April/064271.html>`__
- `[Mono-dev] IronPython Performance
<http://mail.python.org/pipermail/python-dev/2006-April/064275.html>`__
- `[IronPython] [Mono-dev] IronPython Performance
<http://mail.python.org/pipermail/python-dev/2006-April/064277.html>`__
- `interested in Google Summer of Code: what should I do?
<http://mail.python.org/pipermail/python-dev/2006-April/064322.html>`__
- `EuroPython 2006: Call for papers
<http://mail.python.org/pipermail/python-dev/2006-April/064329.html>`__
- `GNU info version of documentation
<http://mail.python.org/pipermail/python-dev/2006-April/064367.html>`__
- `big-memory tests
<http://mail.python.org/pipermail/python-dev/2006-April/064377.html>`__
- `suggestion: except in list comprehension
<http://mail.python.org/pipermail/python-dev/2006-April/064388.html>`__
- `Accessing DLL objects from other DLLs
<http://mail.python.org/pipermail/python-dev/2006-April/064390.html>`__
- `Addressing Outstanding PEPs
<http://mail.python.org/pipermail/python-dev/2006-April/064391.html>`__
- `PEP 304 (Was: Addressing Outstanding PEPs)
<http://mail.python.org/pipermail/python-dev/2006-April/064395.html>`__
- `inheriting basic types more efficiently
<http://mail.python.org/pipermail/python-dev/2006-April/064396.html>`__
- `Google Summer of Code proposal: New class for workwith binary trees
AVL and RB as with the standard dictionary.
<http://mail.python.org/pipermail/python-dev/2006-April/064443.html>`__
- `A better and more basic array type
<http://mail.python.org/pipermail/python-dev/2006-April/064485.html>`__
- `Type-Def-ing Python
<http://mail.python.org/pipermail/python-dev/2006-April/064486.html>`__
- `traceback.py still broken in 2.5a2
<http://mail.python.org/pipermail/python-dev/2006-April/064501.html>`__
- `"mick-windows" buildbot uptime
<http://mail.python.org/pipermail/python-dev/2006-April/064510.html>`__
- `rest2latex - was: Re: Raising objections
<http://mail.python.org/pipermail/python-dev/2006-April/064522.html>`__
- `Is this a bad idea: picky floats?
<http://mail.python.org/pipermail/python-dev/2006-April/064527.html>`__
- `Summer of Code mailing list
<http://mail.python.org/pipermail/python-dev/2006-April/064546.html>`__
- `Bug day? <http://mail.python.org/pipermail/python-dev/2006-April/064552.html>`__
- `Float formatting and #
<http://mail.python.org/pipermail/python-dev/2006-April/064555.html>`__
- `Crazy idea for str.join
<http://mail.python.org/pipermail/python-dev/2006-April/064584.html>`__
- `rich comparisions and old-style classes
<http://mail.python.org/pipermail/python-dev/2006-April/064607.html>`__
- `methods on the bytes object (was: Crazy idea for str.join)
<http://mail.python.org/pipermail/python-dev/2006-April/064612.html>`__
- `Problem with inspect and PEP 302
<http://mail.python.org/pipermail/python-dev/2006-April/064614.html>`__
- `PyThreadState_SetAsyncExc and native extensions
<http://mail.python.org/pipermail/python-dev/2006-April/064621.html>`__
- `__getslice__ usage in sre_parse
<http://mail.python.org/pipermail/python-dev/2006-April/064622.html>`__
- `elimination of scope bleeding of iteration variables
<http://mail.python.org/pipermail/python-dev/2006-April/064623.html>`__
- `[Python-3000] in-out parameters
<http://mail.python.org/pipermail/python-dev/2006-April/064635.html>`__
- `socket module recvmsg/sendmsg
<http://mail.python.org/pipermail/python-dev/2006-April/064649.html>`__

From mal at egenix.com  Tue Jun 13 09:08:10 2006
From: mal at egenix.com (M.-A. Lemburg)
Date: Tue, 13 Jun 2006 09:08:10 +0200
Subject: [Python-Dev] Switch statement
In-Reply-To: <877e9a170606121433y6bb6aca0l39642bc361aedab8@mail.gmail.com>
References: <20060610142736.GA19094@21degrees.com.au>	
	<448DA4CF.80907@egenix.com>
	<877e9a170606121433y6bb6aca0l39642bc361aedab8@mail.gmail.com>
Message-ID: <448E645A.1010307@egenix.com>

Michael Walter wrote:
> Maybe "switch" became a keyword with the patch..

Ah, right. Good catch :-)

> Regards,
> Michael
> 
> On 6/12/06, M.-A. Lemburg <mal at egenix.com> wrote:
>> Thomas Lee wrote:
>> > Hi all,
>> >
>> > As the subject of this e-mail says, the attached patch adds a "switch"
>> > statement to the Python language.
>> >
>> > However, I've been reading through PEP 275 and it seems that the PEP
>> > calls for a new opcode - SWITCH - to be added to support the new
>> > construct.
>> >
>> > I got a bit lost as to why the SWITCH opcode is necessary for the
>> > implementation of the PEP. The reasoning seems to be
>> > improving performance, but I'm not sure how a new opcode could improve
>> > performance.
>> >
>> > Anybody care to take the time to explain this to me, perhaps within the
>> > context of my patch?
>>
>> Could you upload your patch to SourceForge ? Then I could add
>> it to the PEP.
>>
>> Thomas wrote a patch which implemented the switch statement
>> using an opcode. The reason was probably that switch works
>> a lot like e.g. the for-loop which also opens a new block.
>>
>> Could you explain how your patch works ?
>>
>> BTW, I think this part doesn't belong in the patch:
>>
>> > Index: Lib/distutils/extension.py
>> > ===================================================================
>> > --- Lib/distutils/extension.py        (revision 46818)
>> > +++ Lib/distutils/extension.py        (working copy)
>> > @@ -185,31 +185,31 @@
>> >                  continue
>> >
>> >              suffix = os.path.splitext(word)[1]
>> > -            switch = word[0:2] ; value = word[2:]
>> > +            switch_word = word[0:2] ; value = word[2:]
>>
>> -- 
>> Marc-Andre Lemburg
>> eGenix.com
>>
>> Professional Python Services directly from the Source  (#1, Jun 12 2006)
>> >>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>> >>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>> >>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
>> ________________________________________________________________________
>> 2006-07-03: EuroPython 2006, CERN, Switzerland              20 days left
>>
>> ::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! ::::
>> _______________________________________________
>> Python-Dev mailing list
>> Python-Dev at python.org
>> http://mail.python.org/mailman/listinfo/python-dev
>> Unsubscribe:
>> http://mail.python.org/mailman/options/python-dev/michael.walter%40gmail.com
>>
>>

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 13 2006)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________
2006-07-03: EuroPython 2006, CERN, Switzerland              19 days left

::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! ::::

From tim.peters at gmail.com  Tue Jun 13 09:24:41 2006
From: tim.peters at gmail.com (Tim Peters)
Date: Tue, 13 Jun 2006 03:24:41 -0400
Subject: [Python-Dev] [Python-checkins] r46795 - in python/trunk:
	Doc/lib/libstdtypes.tex Lib/test/string_tests.py Misc/NEWS
	Objects/stringobject.c Objects/unicodeobject.c
In-Reply-To: <ee2a432c0606122358o2226c0em42043b82f6e083aa@mail.gmail.com>
References: <20060609184549.B5ABC1E400A@bag.python.org>
	<ee2a432c0606122358o2226c0em42043b82f6e083aa@mail.gmail.com>
Message-ID: <1f7befae0606130024x4eff304fo7efc585cb3ddfdcc@mail.gmail.com>

[georg.brandl]
>> Author: georg.brandl
>> Date: Fri Jun  9 20:45:48 2006
>> New Revision: 46795
>>
>> Log:
>> RFE #1491485: str/unicode.endswith()/startswith() now accept a
tuple as first argument.

[Neal Norwitz]
> What's the reason to not support any sequence and only support tuples?

It can't support any sequence, else e.g. s.endswith(".py") would be ambiguous.

> Are there any other APIs that only support tuples rather than all
> sequences?

Oh, who cares -- it would be a relief to leave _something_ simple <0.7
wink>.  It's a bit of a stretch, but, e.g.,

>>> isinstance(1, [int, str])
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: isinstance() arg 2 must be a class, type, or tuple of
classes and types

is restricted to tuple containers; ditto issubclass.  They're similar
in that most expected uses involve small, constant collections.

Given that {start,end}swith can't support all sequences regardless,
restricting it to tuples is OK by me, and was clearly sufficient for
the uses made of this already in the standard library.
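
For anyone who hasn't played with it yet, a tiny illustration (made-up
file names, of course):

    suffixes = ('.py', '.pyw')               # a tuple, not a list
    print 'script.pyw'.endswith(suffixes)    # True
    print 'README.txt'.endswith(suffixes)    # False
    # A str is itself a sequence of 1-character strings, which is why
    # accepting "any sequence" would be ambiguous: is '.py' one suffix
    # or three single-character suffixes?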

From vys at renet.ru  Tue Jun 13 09:34:12 2006
From: vys at renet.ru (Vladimir 'Yu' Stepanov)
Date: Tue, 13 Jun 2006 11:34:12 +0400
Subject: [Python-Dev] xrange vs. int.__getslice__
Message-ID: <448E6A74.3010409@renet.ru>

Have you ever been bothered by the xrange function? :) I suggest replacing it.

---------------------------------------------
        for i in xrange(100): pass
vs.
        for i in int[:100]: pass
---------------------------------------------

---------------------------------------------
        for i in xrange(1000, 1020): pass
vs.
        for i in int[1000:1020]: pass
---------------------------------------------

---------------------------------------------
        for i in xrange(200, 100, -2): pass
vs.
        for i in int[200:100:-2]: pass
---------------------------------------------
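
(For illustration only -- this is not part of the proposal -- the effect
can already be emulated without any syntax change; all that is needed is
a __getitem__ that accepts slice objects.)

    class _IntSlice(object):
        # stand-in for the proposed behaviour of slicing the int type
        def __getitem__(self, s):            # s is a slice object
            start = s.start or 0
            step = s.step or 1
            return xrange(start, s.stop, step)

    Int = _IntSlice()                        # hypothetical stand-in for 'int'
    for i in Int[:100]: pass                 # like xrange(100)
    for i in Int[1000:1020]: pass            # like xrange(1000, 1020)
    for i in Int[200:100:-2]: pass           # like xrange(200, 100, -2)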

From nnorwitz at gmail.com  Tue Jun 13 09:38:42 2006
From: nnorwitz at gmail.com (Neal Norwitz)
Date: Tue, 13 Jun 2006 00:38:42 -0700
Subject: [Python-Dev] [Python-checkins] r46795 - in python/trunk:
	Doc/lib/libstdtypes.tex Lib/test/string_tests.py Misc/NEWS
	Objects/stringobject.c Objects/unicodeobject.c
In-Reply-To: <1f7befae0606130024x4eff304fo7efc585cb3ddfdcc@mail.gmail.com>
References: <20060609184549.B5ABC1E400A@bag.python.org>
	<ee2a432c0606122358o2226c0em42043b82f6e083aa@mail.gmail.com>
	<1f7befae0606130024x4eff304fo7efc585cb3ddfdcc@mail.gmail.com>
Message-ID: <ee2a432c0606130038n3403fe03l76891b1c63de02ae@mail.gmail.com>

On 6/13/06, Tim Peters <tim.peters at gmail.com> wrote:
> [georg.brandl]
> >> Author: georg.brandl
> >> Date: Fri Jun  9 20:45:48 2006
> >> New Revision: 46795
> >>
> >> Log:
> >> RFE #1491485: str/unicode.endswith()/startswith() now accept a
> tuple as first argument.
>
> [Neal Norwitz]
> > What's the reason to not support any sequence and only support tuples?
>
> It can't support any sequence, else e.g. s.endswith(".py") would be ambiguous.

Good point, I was really just thinking of lists though.

> > Are there any other APIs that only support tuples rather than all
> > sequences?
>
> Oh, who cares -- it would be a relief to leave _something_ simple <0.7
> wink>.

:-)

I was thinking about a use case like this:

   supported_suffixes = ['.py', '.pyc', '.pyo']
   if sys.platform[:3] == 'win':
      supported_suffixes.append('.pyw')
   if pathname.endswith(supported_suffixes):
      # ...

I realize you could just wrap it in tuple(supported_suffixes).  I don't
know that I've ever needed that specific use case, and I don't think
it's too common.  I'm just thinking about avoiding newbie surprises.

> Given that {start,end}swith can't support all sequences regardless,
> restricting it to tuples is OK by me, and was clearly sufficient for
> the uses made of this already in the standard library.

I'm ok with this either way.  It's easy enough to add later, but hard
to take away.

n

From thomas at python.org  Tue Jun 13 09:39:40 2006
From: thomas at python.org (Thomas Wouters)
Date: Tue, 13 Jun 2006 09:39:40 +0200
Subject: [Python-Dev] xrange vs. int.__getslice__
In-Reply-To: <448E6A74.3010409@renet.ru>
References: <448E6A74.3010409@renet.ru>
Message-ID: <9e804ac0606130039o29ce1f39neff8af92e8faeff7@mail.gmail.com>

On 6/13/06, Vladimir 'Yu' Stepanov <vys at renet.ru> wrote:
>
> You were bothered yet with function xrange ? :) I suggest to replace it.


http://www.python.org/dev/peps/pep-0204/

(If you must really discuss this, which would probably be futile and
senseless, please do it on python-3000 only.)
-- 
Thomas Wouters <thomas at python.org>

Hi! I'm a .signature virus! copy me into your .signature file to help me
spread!
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.python.org/pipermail/python-dev/attachments/20060613/0625c63e/attachment.htm 

From nnorwitz at gmail.com  Tue Jun 13 09:43:57 2006
From: nnorwitz at gmail.com (Neal Norwitz)
Date: Tue, 13 Jun 2006 00:43:57 -0700
Subject: [Python-Dev] crash in dict on gc collect
In-Reply-To: <ee2a432c0606110055i1367ac50q681bdd990d252602@mail.gmail.com>
References: <ee2a432c0606110055i1367ac50q681bdd990d252602@mail.gmail.com>
Message-ID: <ee2a432c0606130043j6573e637q31e8892fb9532eb0@mail.gmail.com>

# This crashes, but I need to print type(encoding_table) at the end of cp1140.py
import imp, sys
path = sys.path

enc = imp.find_module('encodings')
imp.load_module('encodings', *enc)

path.append(enc[1])

cp1140 = imp.find_module('cp1140')
imp.load_module('cp1140', *cp1140)

###

0x0000000000465689 in PyType_IsSubtype (a=0x0, b=0x663f60) at typeobject.c:816
816             if (!(a->tp_flags & Py_TPFLAGS_HAVE_CLASS))
(gdb) bt
#0  0x0000000000465689 in PyType_IsSubtype (a=0x0, b=0x663f60)
    at typeobject.c:816
#1  0x000000000042bd7a in PyFile_WriteObject (v=0x6615a0, f=0x2a9557a238,
    flags=1) at fileobject.c:2159
#2  0x0000000000496ddf in PyEval_EvalFrameEx (f=0x7359a0, throwflag=0)
    at ceval.c:1570
#3  0x000000000049c352 in PyEval_EvalCodeEx (co=0x2a95dec6d0,
    globals=0x722420, locals=0x722420, args=0x0, argcount=0, kws=0x0,
    kwcount=0, defs=0x0, defcount=0, closure=0x0) at ceval.c:2841
#4  0x0000000000492b9c in PyEval_EvalCode (co=0x2a95dec6d0, globals=0x722420,
    locals=0x722420) at ceval.c:503
#5  0x00000000004b6821 in PyImport_ExecCodeModuleEx (
    name=0x2a95ded434 "cp1140", co=0x2a95dec6d0,
    pathname=0x2a95dd6d84 "Lib/encodings/cp1140.py") at import.c:642
#6  0x00000000004b6ff8 in load_source_module (name=0x2a95ded434 "cp1140",
    pathname=0x2a95dd6d84 "Lib/encodings/cp1140.py", fp=0x731d10) at
import.c:923


On 6/11/06, Neal Norwitz <nnorwitz at gmail.com> wrote:
> I wonder if this is similar to Kevin's problem?  I couldn't reproduce
> his problem though.  This happens with both debug and release builds.
> Not sure how to reduce the test case.  pychecker was just iterating
> through the byte codes.  It wasn't doing anything particularly
> interesting.
>
> ./python pychecker/pychecker/checker.py Lib/encodings/cp1140.py
>
> 0x00000000004cfa18 in visit_decref (op=0x661180, data=0x0) at gcmodule.c:270
> 270             if (PyObject_IS_GC(op)) {
> (gdb) bt
> #0  0x00000000004cfa18 in visit_decref (op=0x661180, data=0x0) at gcmodule.c:270
> #1  0x00000000004474ab in dict_traverse (op=0x7cdd90,  visit=0x4cf9e0
> <visit_decref>, arg=0x0) at dictobject.c:1819
> #2  0x00000000004cfaf0 in subtract_refs (containers=0x670240) at gcmodule.c:295
> #3  0x00000000004d07fd in collect (generation=0) at gcmodule.c:790
> #4  0x00000000004d0ad1 in collect_generations () at gcmodule.c:897
> #5  0x00000000004d1505 in _PyObject_GC_Malloc (basicsize=56) at gcmodule.c:1332
> #6  0x00000000004d1542 in _PyObject_GC_New (tp=0x64f4a0) at gcmodule.c:1342
> #7  0x000000000041d992 in PyInstance_NewRaw (klass=0x2a95dffcc0,
> dict=0x800e80) at classobject.c:505
> #8  0x000000000041dab8 in PyInstance_New (klass=0x2a95dffcc0,
> arg=0x2a95f5f9e0, kw=0x0) at classobject.c:525
> #9  0x000000000041aa4e in PyObject_Call (func=0x2a95dffcc0,
> arg=0x2a95f5f9e0,  kw=0x0) at abstract.c:1802
> #10 0x000000000049ecd2 in do_call (func=0x2a95dffcc0,
> pp_stack=0x7fbfffb5b0,  na=3, nk=0) at ceval.c:3785
> #11 0x000000000049e46f in call_function (pp_stack=0x7fbfffb5b0,
> oparg=3) at ceval.c:3597
>

From ronaldoussoren at mac.com  Tue Jun 13 10:08:54 2006
From: ronaldoussoren at mac.com (Ronald Oussoren)
Date: Tue, 13 Jun 2006 10:08:54 +0200
Subject: [Python-Dev] request for review: patch 1446489 (zip64 extensions in
	zipfile)
Message-ID: <994ED1A1-E334-44A3-B94C-BAF4ABFBBB45@mac.com>

Hi,

As I mentioned earlier I'd like to get patch 1446489 (support for  
zip64 extensions in the zipfile module) in python 2.5. The patch  
should be perfectly safe, it comes with unittests and a documentation  
update. I'm also using this version of zipfile in (closed-source)  
projects to handle huge zipfiles.

There are two backward incompatible changes, both minor. First of all
ZipInfo will lose the file_offset attribute, because calculating it
when opening a zipfile is very expensive (it basically requires a
full scan of the zipfile). This should be harmless; I couldn't come
up with a use case other than reimplementing the read method outside
of zipfile. The second incompatibility is that zipfile will raise an
error when the zipfile gets too large, instead of reducing the
zipfile to garbage as the current revision of zipfile does.

The major changes in the patch are support for ZIP64 extensions,
which make it possible to handle zipfiles that are larger than 2
GByte in size, and a significant speed-up of zipfile opening when
dealing with zipfiles that contain a large number of files. These are
in one patch because large-zipfile support isn't very useful when
opening the zipfile takes more than 30 seconds.
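
To give an idea of the intended usage, a minimal sketch (file names
invented; allowZip64 is the opt-in constructor flag the patch adds):

    import zipfile

    # Without allowZip64=True, archives that would exceed the 2 GByte
    # limits raise an error instead of silently producing garbage.
    zf = zipfile.ZipFile('huge.zip', 'w', zipfile.ZIP_DEFLATED,
                         allowZip64=True)
    zf.write('big-dataset.bin')
    zf.close()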

Ronald

From vys at renet.ru  Tue Jun 13 10:11:17 2006
From: vys at renet.ru (Vladimir 'Yu' Stepanov)
Date: Tue, 13 Jun 2006 12:11:17 +0400
Subject: [Python-Dev] xrange vs. int.__getslice__
In-Reply-To: <9e804ac0606130039o29ce1f39neff8af92e8faeff7@mail.gmail.com>
References: <448E6A74.3010409@renet.ru>
	<9e804ac0606130039o29ce1f39neff8af92e8faeff7@mail.gmail.com>
Message-ID: <448E7325.4010000@renet.ru>

Thomas Wouters wrote:
> http://www.python.org/dev/peps/pep-0204/
>
> (If you must really discuss this, which would probably be futile and 
> senseless, please do it on python-3000 only.)

It certainly looks very similar. However, PEP 204 demands a change
to the parser and proposes new syntax as a replacement for the range
function. My proposal can be seen as a replacement for the xrange
function; no change to the language's syntax is needed to support it.

Thanks.

From steven.bethard at gmail.com  Tue Jun 13 10:17:53 2006
From: steven.bethard at gmail.com (Steven Bethard)
Date: Tue, 13 Jun 2006 02:17:53 -0600
Subject: [Python-Dev] DRAFT: python-dev summary for 2006-05-01 to 2006-05-15
Message-ID: <d11dcfba0606130117s6ac6c49exff77c11660106591@mail.gmail.com>

Ok, here's the first half of May.  I'd almost feel like I was catching
up if there wasn't going to be another summary waiting for me in two
days. ;-)

As always, please look it over and let me know if you have any
corrections/comments.

Thanks!


=============
Announcements
=============

-------------------
Python 2.5 progress
-------------------

Python 2.5 is moving steadily towards its first beta release.  See
`PEP 356`_ for more details and the full schedule.

.. _PEP 356: http://www.python.org/dev/peps/pep-0356/

Contributing threads:

- `2.5 open issues
<http://mail.python.org/pipermail/python-dev/2006-May/064965.html>`__
- `nag, nag -- 2.5 open issues
<http://mail.python.org/pipermail/python-dev/2006-May/064966.html>`__

----------------------------------------------------------
Experimental wiki for editing the Python library reference
----------------------------------------------------------

Fredrik Lundh introduced his `pyref wiki`_ which allows wiki-style
editing of the Python Library Reference.  In addition to providing
useful links, like unique URLs for all keywords, types and special
methods, the project aims to make cleaning up and rewriting parts of
the Python documentation as easy as editing a wiki.  If you'd like to
help out, let `Fredrik`_ know your infogami user name and he can add
you to the group.

.. _pyref wiki: http://pyref.infogami.com/
.. _Fredrik: fredrik at effbot.org

Contributing threads:

- `introducing the experimental pyref wiki
<http://mail.python.org/pipermail/python-dev/2006-April/064591.html>`__
- `introducing the experimental pyref wiki
<http://mail.python.org/pipermail/python-dev/2006-May/064720.html>`__
- `more pyref: continue in finally statements
<http://mail.python.org/pipermail/python-dev/2006-May/064726.html>`__
- `more pyref: a better term for "string conversion"
<http://mail.python.org/pipermail/python-dev/2006-May/064746.html>`__
- `more pyref: comparison precedence
<http://mail.python.org/pipermail/python-dev/2006-May/064754.html>`__
- `context guards, context entry values, context managers, context
contexts <http://mail.python.org/pipermail/python-dev/2006-May/064853.html>`__

-----------------------------------------------
Assigning a SourceForge group to a tracker item
-----------------------------------------------

When opening a new patch on the SourceForge tracker, you should set
"Group" to the earliest still-maintained Python version to which it
applies.  Currently, that means if it's a candidate for backporting,
you should set the "Group" to 2.4.

Contributing thread:

- `Assigning "Group" on SF tracker?
<http://mail.python.org/pipermail/python-dev/2006-May/064760.html>`__


=========
Summaries
=========

--------------------------------
PEP 3102: Keyword-only arguments
--------------------------------

This fortnight continued discussion from the last on Talin's PEP for
keyword-only arguments.  Mainly the discussion focused on the second
half of his proposal which would allow positional arguments and
keyword-only arguments at the same time with syntax like::

      def compare(a, b, *, key=None):

The arguments for it included:

* It allows function APIs to be more strict initially to allow API
evolution without breaking existing code.
* It provides better documentation for functions that currently would
have to take a \*\*kwargs.

Still, a lot of people felt uncomfortable with the idea that the
writer of a function could force the caller to use keyword arguments
even if the caller found positional arguments to be simpler.
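
For example, with a hypothetical body filled in for illustration (no
released Python accepts the syntax yet)::

    def compare(a, b, *, key=None):
        if key is not None:
            a, b = key(a), key(b)
        return a < b

    compare(1, -2, key=abs)   # OK
    compare(1, -2, abs)       # TypeError: a third positional argument
                              # is rejected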

Contributing thread:

- `PEP 3102: Keyword-only arguments
<http://mail.python.org/pipermail/python-dev/2006-May/064656.html>`__

----------------------------------
Alternative to PEP 355 path object
----------------------------------

Noam Raphael suggested an alternative to the path object suggested by
`PEP 355`_ which makes paths more like tuples than strings.  The
ensuing discussion considered a variety of options, which would have
allowed code something like::

    pth = Path("~/foo/bar/baz.tar.gz"):
    assert pth.basepath == HOMEDIR
    assert pth.dirparts == ('foo', 'bar')
    assert pth.nameparts == ('baz', 'tar', 'gz')
    assert pth.prefix == str(pth.basepath)
    assert pth.dir == os.sep.join(pth.dirparts + ('',))
    assert pth.name == os.extsep.join(pth.nameparts)

Most of the ideas were also posted to the wiki under
`AlternativePathClass`_ or `AlternativePathDiscussion`_, and a number
of people asked for a PEP, but none was available at the time of this
summary.

.. _PEP 355: http://www.python.org/dev/peps/pep-0355/
.. _AlternativePathClass: http://wiki.python.org/moin/AlternativePathClass
.. _AlternativePathDiscussion:
http://wiki.python.org/moin/AlternativePathDiscussion

Contributing thread:

- `Alternative path suggestion
<http://mail.python.org/pipermail/python-dev/2006-May/064802.html>`__

----------------------------
Mechanics for Python sprints
----------------------------

Tim Peters started a discussion about the best way to handle SVN
commits during a sprint.  After discussing a number of heavier-handed
solutions, like trying to grant commit privileges for a single branch,
in the end it seemed easiest to just add all the sprinters as
committers, warn them to be careful about their commits, and have
folks keep an eye on python-checkins.

Contributing thread:

- `Python sprint mechanics
<http://mail.python.org/pipermail/python-dev/2006-May/064861.html>`__

-------------------------
Methods of the bytes type
-------------------------

Josiah Carlson asked about which str/unicode methods would still be
available in Python 3000's bytes type.  Guido asked for the thread to
be moved to the `Python-3000 list`_ but then also suggested that
"startswith", "endswith", "index", "rindex", "find", "rfind", "split",
"rsplit", "join", "count", "replace", and "translate" might all be
candidate methods.  Josiah brought up some concerns about the bytes
type not being hashable, but then Guido stepped in to ask that the
debate be put on hold until the Python 3000 branch is more complete
and some of these usability issues can be tested out there.

.. _Python-3000 list: http://mail.python.org/mailman/listinfo/python-3000

Contributing threads:

- `methods on the bytes object
<http://mail.python.org/pipermail/python-dev/2006-April/064613.html>`__
- `methods on the bytes object
<http://mail.python.org/pipermail/python-dev/2006-May/064663.html>`__
- `methods on the bytes object (was: Crazy idea for str.join)
<http://mail.python.org/pipermail/python-dev/2006-May/064700.html>`__

------------------------------------
PEP 3101: Advanced String Formatting
------------------------------------

Talin presented an updated `PEP 3101`_, and Edward Loper brought up an
issue with the current escaping strategy -- code like ``'Foo\\%s' %
x`` could not be written with the new string formatting since
``'Foo\\{0}'.format(x)`` would read the first brace as being escaped.

.. _PEP 3101: http://www.python.org/dev/peps/pep-3101/

Contributing threads:

- `PEP 3101: Advanced String Formatting
<http://mail.python.org/pipermail/python-dev/2006-May/064655.html>`__
- `PEP 3101 Update
<http://mail.python.org/pipermail/python-dev/2006-May/064921.html>`__

--------------------------------------------
Additional support for Py_ssize_t formatting
--------------------------------------------

Georg Brandl asked about formatting unsigned Py_ssize_t values with
PyString_FromFormat.  To support this, Tim Peters added %u, %lu, and
%zu to PyString_FromFormat, PyErr_Format, and PyString_FromFormatV.

Contributing thread:

- `Py_ssize_t formatting
<http://mail.python.org/pipermail/python-dev/2006-May/064997.html>`__

---------------------------------------------
Supporting long options: --help and --version
---------------------------------------------

Heiko Wundram provided a `patch to support long options`_ for the
Python interpreter in order to support --version and --help on Unix
and -?, /?, /version and /help on Windows.  No one seemed opposed to
the idea, but at the time of this summary, the patch was still open.

.. _patch to support long options: http://bugs.python.org/1481112

Contributing thread:

- `Python long command line options
<http://mail.python.org/pipermail/python-dev/2006-May/064820.html>`__

----------------------
Error codes on Windows
----------------------

Martin v. Löwis and Marc-Andre Lemburg discussed how to include both
DOS and WIN32 error codes on WindowsError objects.  As part of the
solution, they discussed making the Win32 error code for a specific
exception available as a .winerror attribute and making all the
Windows error codes available through a winerror module.

Contributing thread:

- `[Python-checkins] r45925 - in python/trunk: Lib/tempfile.py
Lib/test/test_os.py Misc/NEWS Modules/posixmodule.c
<http://mail.python.org/pipermail/python-dev/2006-May/064962.html>`__

-------------------------------
Signature objects for functions
-------------------------------

Brett Cannon asked for some discussion of signature objects that would
accompany functions and describe what kind of arguments they take.  In
particular, he wanted to know:

* should signature objects be automatically generated, or only created
at the request of a user?
* should there be a function somewhere that can determine if a
particular set of arguments are valid for a function?

Some people wanted signature objects to always be available, but with
the current C API, that isn't possible because functions declared in C
can't be guaranteed to have the information necessary for determining
a signature.  Others suggested that since the signature object was
only useful for introspection, it should only be available through,
say, ``inspect.getsignature()``.  No PEP was available at the time of
this summary.

Contributing thread:

- `signature object issues (to discuss while I am out of contact)
<http://mail.python.org/pipermail/python-dev/2006-May/064718.html>`__

-------------------------
Set complement operations
-------------------------

Terry Jones asked about adding efficient set complement operations to
Python's builtin sets so that, say, the complement of a 999,999
element set in a 1,000,000 element universe would take up the space of
1 element, not 999,999.  Most folks thought it would be better to
implement this as a standalone module first before there were any
considerations about adding it to the stdlib.
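
The basic idea can be sketched in a few lines (an illustration, not
code from the thread)::

    class ComplementSet(object):
        """Everything in `universe` except the elements in `excluded`."""
        def __init__(self, universe, excluded=()):
            self.universe = universe        # any container supporting `in` and len()
            self.excluded = set(excluded)   # storage is O(len(excluded))
        def __contains__(self, item):
            return item in self.universe and item not in self.excluded
        def __len__(self):
            return len(self.universe) - len(self.excluded)

    big = ComplementSet(xrange(1000000), excluded=[42])
    print 42 in big, 43 in big, len(big)    # False True 999999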

Contributing thread:

- `Efficient set complement and operation on large/infinite sets.
<http://mail.python.org/pipermail/python-dev/2006-May/064977.html>`__

------------------------------------------------------------------
Getting the weakref objects out of weakref.Weak*Dictionary objects
------------------------------------------------------------------

Fred L. Drake, Jr. presented a `patch to let users get the weakref
objects out`_ of weakref.Weak*Dictionary objects.  There was a brief
discussion about trying to allow iteration over such dictionaries, but
it looked like the patch was pretty reasonable and would soon be
applied.

.. _patch to let users get the weakref objects out:
http://bugs.python.org/1479988

Contributing thread:

- `New methods for weakref.Weak*Dictionary types
<http://mail.python.org/pipermail/python-dev/2006-May/064744.html>`__

-----------------------------
Python support for Windows CE
-----------------------------

Luke Dunstan offered to maintain the port of Python to Windows CE.  He
got some clarifications about a number of issues, in particular that,
although #ifdefs are occasionally removed to ease Python's
maintenance, if they are accompanied by a record of what system and
version needs them, they will not be dropped while there is an
appropriate maintainer.

Contributing thread:

- `Python for Windows CE
<http://mail.python.org/pipermail/python-dev/2006-May/064812.html>`__

---------------------------------
Universal binaries for Python 2.4
---------------------------------

Ronald Oussoren asked about backporting to Python 2.4 the universal
binary patches he applied to 2.5, mainly in order to avoid Apple
picking up a recent copy of Python and shipping it with a broken
universal build like it did for Python 2.3.  While 2.4.4 isn't planned
until after 2.5.0 (so if Apple picks up the newest version, they won't
get the 2.4 line anyway), people seemed happy with the plan, and so
there should be universal binary support in both Python 2.4.4 and
2.5.0.

Contributing thread:

- `python 2.4 and universal binaries
<http://mail.python.org/pipermail/python-dev/2006-May/064970.html>`__


================
Deferred Threads
================
- `pthreads, fork, import, and execvp
<http://mail.python.org/pipermail/python-dev/2006-May/064983.html>`__


==================
Previous Summaries
==================
- `Adding functools.decorator
<http://mail.python.org/pipermail/python-dev/2006-May/064653.html>`__
- `More on contextlib - adding back a contextmanager decorator
<http://mail.python.org/pipermail/python-dev/2006-May/064654.html>`__
- `Tkinter lockups.
<http://mail.python.org/pipermail/python-dev/2006-May/064667.html>`__
- `Visual studio 2005 express now free
<http://mail.python.org/pipermail/python-dev/2006-May/064941.html>`__


===============
Skipped Threads
===============
- `unittest argv
<http://mail.python.org/pipermail/python-dev/2006-May/064657.html>`__
- `speeding up function calls
<http://mail.python.org/pipermail/python-dev/2006-May/064668.html>`__
- `elimination of scope bleeding of iteration variables
<http://mail.python.org/pipermail/python-dev/2006-May/064673.html>`__
- `global variable modification in functions [Re: elimination of scope
bleeding of iteration variables]
<http://mail.python.org/pipermail/python-dev/2006-May/064677.html>`__
- `python syntax additions to support indentation
insensitivity/generated code
<http://mail.python.org/pipermail/python-dev/2006-May/064678.html>`__
- `socket module recvmsg/sendmsg
<http://mail.python.org/pipermail/python-dev/2006-May/064699.html>`__
- `__getslice__ usage in sre_parse
<http://mail.python.org/pipermail/python-dev/2006-May/064723.html>`__
- `More Path comments (PEP 355)
<http://mail.python.org/pipermail/python-dev/2006-May/064745.html>`__
- `Path.ancestor()
<http://mail.python.org/pipermail/python-dev/2006-May/064749.html>`__
- `[Python-checkins] r45850 - in python/trunk: Doc/lib/libfuncs.tex
Lib/test/test_subprocess.py Misc/NEWS Objects/fileobject.c
Python/bltinmodule.c
<http://mail.python.org/pipermail/python-dev/2006-May/064766.html>`__
- `Reminder: call for proposals "Python Language and Libraries Track"
for Europython 2006
<http://mail.python.org/pipermail/python-dev/2006-May/064772.html>`__
- `Date for DC-area Python sprint?
<http://mail.python.org/pipermail/python-dev/2006-May/064782.html>`__
- `test failures in test_ctypes (HEAD)
<http://mail.python.org/pipermail/python-dev/2006-May/064789.html>`__
- `Positional-only Arguments
<http://mail.python.org/pipermail/python-dev/2006-May/064790.html>`__
- `Any reason that any()/all() do not take a predicate argument?
<http://mail.python.org/pipermail/python-dev/2006-May/064792.html>`__
- `mail to talin is bouncing
<http://mail.python.org/pipermail/python-dev/2006-May/064800.html>`__
- `Seeking students for the Summer of Code
<http://mail.python.org/pipermail/python-dev/2006-May/064808.html>`__
- `binary trees. Review obmalloc.c
<http://mail.python.org/pipermail/python-dev/2006-May/064809.html>`__
- `Shared libs on Linux (Was: test failures in test_ctypes (HEAD))
<http://mail.python.org/pipermail/python-dev/2006-May/064813.html>`__
- `lambda in Python
<http://mail.python.org/pipermail/python-dev/2006-May/064817.html>`__
- `Time since the epoch
<http://mail.python.org/pipermail/python-dev/2006-May/064833.html>`__
- `[Python-checkins] r45898 - in python/trunk: Lib/test/test_os.py
Lib/test/test_shutil.py Misc/NEWS Modules/posixmodule.c
<http://mail.python.org/pipermail/python-dev/2006-May/064848.html>`__
- `Confirmed: DC-area sprint on Sat. June 3rd
<http://mail.python.org/pipermail/python-dev/2006-May/064850.html>`__
- `A critic of Guido's blog on Python's lambda
<http://mail.python.org/pipermail/python-dev/2006-May/064892.html>`__
- `Weekly Python Patch/Bug Summary
<http://mail.python.org/pipermail/python-dev/2006-May/064893.html>`__
- `binary trees.
<http://mail.python.org/pipermail/python-dev/2006-May/064904.html>`__
- `Yet another type system -- request for comments on a SoC proposal
<http://mail.python.org/pipermail/python-dev/2006-May/064912.html>`__
- `possible use of __decorates__ in functools.decorator
<http://mail.python.org/pipermail/python-dev/2006-May/064928.html>`__
- `total ordering.
<http://mail.python.org/pipermail/python-dev/2006-May/064942.html>`__
- `rest2latex - pydoc writer - tables
<http://mail.python.org/pipermail/python-dev/2006-May/064943.html>`__
- `[Python-checkins] Python Regression Test Failures basics (1)
<http://mail.python.org/pipermail/python-dev/2006-May/064949.html>`__
- `PyThreadState_SetAsyncExc, PyErr_Clear and native extensions
<http://mail.python.org/pipermail/python-dev/2006-May/064984.html>`__
- `[Python-3000] Questions on optional type annotations
<http://mail.python.org/pipermail/python-dev/2006-May/064988.html>`__
- `Status: sqlite3 module docs
<http://mail.python.org/pipermail/python-dev/2006-May/064992.html>`__
- `cleaned windows icons
<http://mail.python.org/pipermail/python-dev/2006-May/065000.html>`__
- `correction of a bug
<http://mail.python.org/pipermail/python-dev/2006-May/065007.html>`__
- `Building with VS 2003 .NET
<http://mail.python.org/pipermail/python-dev/2006-May/065013.html>`__
- `[Python-checkins] r46005 - in python/trunk: Lib/tarfile.py
Lib/test/test_tarfile.py Misc/NEWS
<http://mail.python.org/pipermail/python-dev/2006-May/065016.html>`__

From nnorwitz at gmail.com  Tue Jun 13 10:27:34 2006
From: nnorwitz at gmail.com (Neal Norwitz)
Date: Tue, 13 Jun 2006 01:27:34 -0700
Subject: [Python-Dev] pychecker warnings in Lib/encodings
Message-ID: <ee2a432c0606130127g4d1b909ey688825e189a583fe@mail.gmail.com>

All are missing parameters.  I'm not sure of the proper signature, so
I didn't fix these:

Lib/encodings/punycode.py:217: No global (errors) found
Lib/encodings/utf_8_sig.py:33: No global (errors) found
Lib/encodings/uu_codec.py:109: No global (errors) found

IIUC (and I probably don't), mbcs is on Windows only.  But should I be
able to import encodings.mbcs on Linux or is this expected?

>>> import encodings.mbcs
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "Lib/encodings/mbcs.py", line 14, in <module>
    class Codec(codecs.Codec):
  File "Lib/encodings/mbcs.py", line 18, in Codec
    encode = codecs.mbcs_encode
AttributeError: 'module' object has no attribute 'mbcs_encode'

From 2006a at usenet.alexanderweb.de  Tue Jun 13 10:27:17 2006
From: 2006a at usenet.alexanderweb.de (Alexander Schremmer)
Date: Tue, 13 Jun 2006 10:27:17 +0200
Subject: [Python-Dev] Source control tools
References: <9e804ac0606121431o5a801df5w829a9c65f3d3d855@mail.gmail.com>
Message-ID: <1px7bnk1z0ccy.dlg@usenet.alexanderweb.de>

On Mon, 12 Jun 2006 23:31:14 +0200, Thomas Wouters wrote:

> I did partial imports into Mercurial and Bazaar-NG, but I got interrupted
> and couldn't draw any conclusions -- although from looking at the
> implementation, I don't think they'd scale very well at the moment (but that
> could probably be fixed.)

Maybe you benchmarked a Tailor deficiency here, but Mercurial scales very
well. People use it for work on the Linux kernel, etc.
Compared to that, Bazaar-NG already seems to reach its limits when working on
its own code/repository.

Here is a paper comparing different DVCSes for the FreeBSD ports tree (one of
the largest CVS repositories in existence ;-)):

http://www.keltia.net/BSDCan/slides.pdf

Kind regards,
Alexander


From thomas at python.org  Tue Jun 13 10:39:09 2006
From: thomas at python.org (Thomas Wouters)
Date: Tue, 13 Jun 2006 10:39:09 +0200
Subject: [Python-Dev] crash in dict on gc collect
In-Reply-To: <ee2a432c0606130043j6573e637q31e8892fb9532eb0@mail.gmail.com>
References: <ee2a432c0606110055i1367ac50q681bdd990d252602@mail.gmail.com>
	<ee2a432c0606130043j6573e637q31e8892fb9532eb0@mail.gmail.com>
Message-ID: <9e804ac0606130139w727a1b02h997325d69fafa5e@mail.gmail.com>

On 6/13/06, Neal Norwitz <nnorwitz at gmail.com> wrote:
>
> # This crashes, but i need to print type(encoding_table) at end of
> cp1140.py


Here's a shorter version:

import codecs
decmap = u"".join(unichr(i) for i in xrange(256))
print type(codecs.charmap_build(decmap))

The source of the crash is the EncodingMap type (defined in unicodeobject.c);
it has an invalid type:

Breakpoint 2, PyUnicode_BuildEncodingMap (string=0x2b97d44dbf40)
    at Objects/unicodeobject.c:3213
(gdb) print EncodingMapType
$1 = {_ob_next = 0x0, _ob_prev = 0x0, ob_refcnt = 1, ob_type = 0x0,
  ob_size = 0, tp_name = 0x53d15a "EncodingMap", tp_basicsize = 80,
[...]

Did someone forget a PyType_Ready() call when EncodingMap was added? (And
what other types are missing PyType_Ready() calls? :)
-- 
Thomas Wouters <thomas at python.org>

Hi! I'm a .signature virus! copy me into your .signature file to help me
spread!
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.python.org/pipermail/python-dev/attachments/20060613/9991c905/attachment.html 

From nnorwitz at gmail.com  Tue Jun 13 10:43:38 2006
From: nnorwitz at gmail.com (Neal Norwitz)
Date: Tue, 13 Jun 2006 01:43:38 -0700
Subject: [Python-Dev] crash in dict on gc collect
In-Reply-To: <9e804ac0606130139w727a1b02h997325d69fafa5e@mail.gmail.com>
References: <ee2a432c0606110055i1367ac50q681bdd990d252602@mail.gmail.com>
	<ee2a432c0606130043j6573e637q31e8892fb9532eb0@mail.gmail.com>
	<9e804ac0606130139w727a1b02h997325d69fafa5e@mail.gmail.com>
Message-ID: <ee2a432c0606130143o109814ebp64c3f02a885f06ac@mail.gmail.com>

On 6/13/06, Thomas Wouters <thomas at python.org> wrote:
>
>
> The source of the crash is the EncodingMap type (defined in
> unicodeobject.c); it has an invalid type:
>
> Breakpoint 2, PyUnicode_BuildEncodingMap (string=0x2b97d44dbf40)
>     at Objects/unicodeobject.c:3213
> (gdb) print EncodingMapType
> $1 = {_ob_next = 0x0, _ob_prev = 0x0, ob_refcnt = 1, ob_type = 0x0,
>   ob_size = 0, tp_name = 0x53d15a "EncodingMap", tp_basicsize = 80,
> [...]
>
> Did someone forget a PyType_Ready() call when EncodingMap was added? (And
> what other types are missing PyType_Ready() calls? :)

Ya, read your mail, you're behind. I already checked in the fix (and
later the test). :-)
I didn't see any other missing PyType_Ready() calls in
unicodeobject.c.  But I don't know if other types were added at the
NFS sprint.  Hmmm, exceptions and struct?  Heading off to look.

n

From mal at egenix.com  Tue Jun 13 11:04:34 2006
From: mal at egenix.com (M.-A. Lemburg)
Date: Tue, 13 Jun 2006 11:04:34 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <44895112.4040600@egenix.com>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>	<17533.48558.364179.717911@montanaro.dyndns.org>	<447DC377.4000504@egenix.com>	<17533.51096.462552.451772@montanaro.dyndns.org>	<447DD153.8080202@egenix.com>	<e5kkr6$14a$1@sea.gmane.org>	<447DE055.4040105@egenix.com><e5oa7b$kvl$1@sea.gmane.org>	<447FF966.5050807@egenix.com>	<e5p06h$tp1$1@sea.gmane.org>	<448141F6.7030806@v.loewis.de>	<e5rjkj$rph$1@sea.gmane.org>	<44859C12.8080306@egenix.com>	<4485C152.5050705@egenix.com>	<e671tn$rhg$1@sea.gmane.org>	<448740F2.4020900@egenix.com>	<e67g2b$o33$1@sea.gmane.org>	<4487E1DF.3030302@egenix.com>	<e696es$5hs$1@sea.gmane.org>
	<4489401E.9040606@egenix.com>	<e6bifh$5bv$1@sea.gmane.org>
	<44895112.4040600@egenix.com>
Message-ID: <448E7FA2.8080607@egenix.com>

Fredrik,

could you check whether the get_machine_details() function
is causing the hang on your machine ?

Does anyone else observe this as well ?

I'm about to check in version 2.0 of pybench, but would like
to get this resolved first, if possible.
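
A quick way to narrow it down: the function essentially only calls into
the platform module, so its individual calls can be tried by hand with
the same interpreter that hangs (diagnostic sketch only):

    # whichever call fails to return is the culprit
    import platform
    print platform.python_build()
    print platform.architecture()
    print platform.processor()
    print platform.platform()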

Thanks,
-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 13 2006)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________
2006-07-03: EuroPython 2006, CERN, Switzerland              19 days left

::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! ::::

M.-A. Lemburg wrote:
> Fredrik Lundh wrote:
>> M.-A. Lemburg wrote:
>>
>>> You can download a current snapshot from:
>>>
>>> http://www.egenix.com/files/python/pybench-2.0-2006-06-09.zip
>> believe it or not, but this hangs on my machine, under 2.5 trunk.  and 
>> it hangs hard; nether control-c, break, or the task manager manages to 
>> kill it.
> 
> Weird.
> 
>> if it's any clue, it prints
>>
>>> -------------------------------------------------------------------------------
>>> PYBENCH 2.0
>>> -------------------------------------------------------------------------------
>>> * using Python 2.5a2
>>> * disabled garbage collection
>>> * system check interval set to maximum: 2147483647
>>> * using timer: time.clock
>> and that's it; the process is just sitting there, using exactly 0% CPU.
> 
> This is the output to expect:
> 
> -------------------------------------------------------------------------------
> PYBENCH 2.0
> -------------------------------------------------------------------------------
> * using Python 2.4.2
> * disabled garbage collection
> * system check interval set to maximum: 2147483647
> * using timer: time.time
> 
> Calibrating tests. Please wait...
> 
> Running 10 round(s) of the suite at warp factor 10:
> 
> * Round 1 done in 6.627 seconds.
> * Round 2 done in 7.307 seconds.
> * Round 3 done in 7.180 seconds.
> ...
> 
> Note that the calibration step takes a while.
> 
> Looking at the code, the only place where it could
> hang (because it's relying on a few external tools)
> is when fetching the platform details:
> 
> def get_machine_details():
> 
>     import platform
>     buildno, builddate = platform.python_build()
>     python = platform.python_version()
>     if python > '2.0':
>         try:
>             unichr(100000)
>         except ValueError:
>             # UCS2 build (standard)
>             unicode = 'UCS2'
>         else:
>             # UCS4 build (most recent Linux distros)
>             unicode = 'UCS4'
>     else:
>         unicode = None
>     bits, linkage = platform.architecture()
>     return {
>         'platform': platform.platform(),
>         'processor': platform.processor(),
>         'executable': sys.executable,
>         'python': platform.python_version(),
>         'compiler': platform.python_compiler(),
>         'buildno': buildno,
>         'builddate': builddate,
>         'unicode': unicode,
>         'bits': bits,
>         }
> 
> It does run fine on my WinXP machine, both with the win32
> package installed or not.
> 

From walter at livinglogic.de  Tue Jun 13 14:08:43 2006
From: walter at livinglogic.de (=?ISO-8859-1?Q?Walter_D=F6rwald?=)
Date: Tue, 13 Jun 2006 14:08:43 +0200
Subject: [Python-Dev] pychecker warnings in Lib/encodings
In-Reply-To: <ee2a432c0606130127g4d1b909ey688825e189a583fe@mail.gmail.com>
References: <ee2a432c0606130127g4d1b909ey688825e189a583fe@mail.gmail.com>
Message-ID: <448EAACB.8030809@livinglogic.de>

Neal Norwitz wrote:

> All are missing parameters.  I'm not sure of the proper signature, so
> I didn't fix these:
> 
> Lib/encodings/punycode.py:217: No global (errors) found
> Lib/encodings/utf_8_sig.py:33: No global (errors) found
> Lib/encodings/uu_codec.py:109: No global (errors) found

Fixed in r46915 and r46917.

> IIUC (and I probably don't), mbcs is on windows only.  But should I be
> able to import encodings.mbcs on Linux or is this expected?
> 
>>>> import encodings.mbcs
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
>   File "Lib/encodings/mbcs.py", line 14, in <module>
>     class Codec(codecs.Codec):
>   File "Lib/encodings/mbcs.py", line 18, in Codec
>     encode = codecs.mbcs_encode
> AttributeError: 'module' object has no attribute 'mbcs_encode'

mbcs_encode() is compiled conditionally in Modules/_codecsmodule.c with
"#if defined(MS_WINDOWS) && defined(HAVE_USABLE_WCHAR_T)".

Should encodings/mbcs.py be made unimportable on non-Windows?
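
E.g. a guard along these lines at the top of the module would turn the
current AttributeError into a clearer failure (just a sketch):

    import codecs

    if not hasattr(codecs, 'mbcs_encode'):
        raise ImportError("mbcs codec is only available on Windows")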

Servus,
   Walter


From ncoghlan at iinet.net.au  Tue Jun 13 14:39:25 2006
From: ncoghlan at iinet.net.au (Nick Coghlan)
Date: Tue, 13 Jun 2006 22:39:25 +1000
Subject: [Python-Dev] Moving PEP 343 to Final
Message-ID: <448EB1FD.6050205@iinet.net.au>

Steven's summary reminded me that PEP 343 was still sitting at 'Accepted' with 
a couple of open issues still listed (ignore what PEP 0 claims for the moment ;)

Unless there are any objections, I'll move it to Final later this week, 
marking the open issues as resolved in favour of what was in 2.5 alpha 2 
(which is the same as what the PEP currently describes).

Cheers,
Nick.


-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------
             http://www.boredomandlaziness.org

From bborcic at gmail.com  Tue Jun 13 14:52:52 2006
From: bborcic at gmail.com (Boris Borcic)
Date: Tue, 13 Jun 2006 14:52:52 +0200
Subject: [Python-Dev] Scoping vs augmented assignment vs sets (Re: 'fast
 locals' in Python 2.5)
In-Reply-To: <20060612162022.F2BF.JCARLSON@uci.edu>
References: <20060607083940.GA12003@code0.codespeak.net>	<e6jm7e$hnn$1@sea.gmane.org>
	<20060612162022.F2BF.JCARLSON@uci.edu>
Message-ID: <e6mcfg$iao$1@sea.gmane.org>

Josiah Carlson wrote:
> Boris Borcic <bborcic at gmail.com> wrote:
>> Hello,
>>
>> Armin Rigo wrote:
>>> Hi,
>>>
>>> On Wed, Jun 07, 2006 at 02:07:48AM +0200, Thomas Wouters wrote:
>>>> I just submitted http://python.org/sf/1501934 and assigned it to Neal so it
>>>> doesn't get forgotten before 2.5 goes out ;) It seems Python 2.5 compiles
>>>> the following code incorrectly:
>>> No, no, it's an underground move by Jeremy to allow assignment to
>>> variables of enclosing scopes:
>> ...
>>> Credits to Samuele's evil side for the ideas.  His non-evil side doesn't
>>> agree, and neither does mine, of course :-)
>> ...
>>> More seriously, a function with a variable that is only written to as
>>> the target of augmented assignments cannot possibly be something else
>>> than a newcomer's mistake: the augmented assignments will always raise
>>> UnboundLocalError.
>> I am not really a newcomer to python. But lately I find myself regularly bitten
>> by this compiler behavior that I tend to view as a (design) bug. This started
>> happening after I saw that sets are just as good as lists performance-wise and I
>> began changing code like this
> 
> I see your attempted use of a closure as a design bug in your code.

<aside> original_post.replace('(design)','(design?)') </aside>

It was no "attempted" use but working code (incorporating a closure) that was 
being transformed to profit from simplifications I expected sets to allow. 
There, itemwise augmented assignments in loops very naturally transform to 
wholesale augmented assignments without loops. Except for this wart.

> Remember that while closures can be quite convenient, there are other
> methods to do precisely the same thing without needing to use nested
> scopes.

Of course there are other methods. I pointed out one relatively straightforward 
workaround that Python offered me in its usually friendly manner. You seem to 
say that I should have considered wholesale restructuring of my code using 
classes as a more appropriate workaround. Hum.

What's unnerving is to be stopped in the course of applying a straightforward 
simplification to working code, by a compiler (!) objection in favor of an 
interpretation that *has no use-cases*.

NB : That the compiler's interpretation has no use-cases is my crucial point, 
it's the reason why I dared to suggest a design bug - which you seem to take to heart.

Are you telling me closures have no use cases ? Of course not. Are you telling 
me closures have use cases to which mine don't belong ? Please point me to the 
part of documentation where I should have learned that will of the Gods.

 >  I find that it would be far easier (for the developers of
> Python)

Could you be more specific ? This thread started with Thomas Wouters announcing 
he assigned a bug because pre-beta 2.5 unwittingly behaved as I would prefer. 
This implies that said behavior can't be _far_ more difficult to implement, right ?

Or is what you mean that python developers feel especially bad about the typical 
confusion of newcomers when the automatic scoping rules of python fail to 
provide the most helpful error messages ?

So that I shouldn't complain about, well, error dynamics getting special-cased 
for augmented assignments, to feature python with an error message that's 
perfectly to the point... for an audience that the developers of python intend 
to capture by her serendipitous misuse of augmented assignment, right ? An 
audience to which me and my use-cases don't belong, and therefore, should be 
designed out of existence ?

Judging from the rest of your intervention, I guess that's it. Well, I think the 
principle of it at least deserves debate. First, please face/state the explicit 
notion that while you speak of the "designers of python", what's the object of 
design here is actually not python itself at all but "how python should be 
learned by spontaneous (trials and) errors". (Proof: or else you have no 
use-cases worthy of the name).

Once you admit the matter belongs to the business of designing a, well, 
spontaneous human learning process; rather than the business of designing a 
programming language for actual use-cases, then maybe we could have a 
constructive discussion between adults.

(To anticipate, I must confess a deeply felt aversion to whatever starts 
resembling a constraining curriculum of errors to make and then rectify. Or else 
I would be programming in Java).

> and significantly more readable if it were implemented as a
> class.

I'll deny that flatly since first of all the issue isn't limited to closures. It 
would apply just as well if it involved globals and top-level functions.

> 
> class solve:
>     def __init__(self, problem):
>         self.freebits = ...
>     ...
>     def search(self, data):
>         ...
>         self.freebits ^= swaps
>         ...
>     ...
> 
> Not everything needs to (or should) be a closure

Right. Let's thus consider

freebits = ...

def search(data) :
     ...
     freebits ^= swaps
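
For concreteness, the failure and the one-line cure in standalone form
(example values invented):

    freebits = set([1, 2, 3])

    def search(swaps):
        global freebits        # without this line the compiler makes
        freebits ^= swaps      # freebits local, and the augmented
                               # assignment raises UnboundLocalError

    search(set([2, 4]))
    print freebits             # set([1, 3, 4])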

Boris
--
"On na?t tous les m?tres du m?me monde"


From aahz at pythoncraft.com  Tue Jun 13 15:08:48 2006
From: aahz at pythoncraft.com (Aahz)
Date: Tue, 13 Jun 2006 06:08:48 -0700
Subject: [Python-Dev] request for review: patch 1446489 (zip64
	extensions in zipfile)
In-Reply-To: <994ED1A1-E334-44A3-B94C-BAF4ABFBBB45@mac.com>
References: <994ED1A1-E334-44A3-B94C-BAF4ABFBBB45@mac.com>
Message-ID: <20060613130847.GA18026@panix.com>

On Tue, Jun 13, 2006, Ronald Oussoren wrote:
> 
> There are two backward incompatible changes, both minor. First of all  
> ZipInfo will lose the file_offset attribute, because calculating it  
> when opening a zipfile is very expensive (it basically requires a  
> full scan of the zipfile). This should be harmless; I couldn't come  
> up with a use case other than reimplementing the read method outside  
> of zipfile. 

Not knowing anything about this, why not implement file_offset as a
property?
-- 
Aahz (aahz at pythoncraft.com)           <*>         http://www.pythoncraft.com/

"I saw `cout' being shifted "Hello world" times to the left and stopped
right there."  --Steve Gonedes

From arigo at tunes.org  Tue Jun 13 15:15:35 2006
From: arigo at tunes.org (Armin Rigo)
Date: Tue, 13 Jun 2006 15:15:35 +0200
Subject: [Python-Dev] Please stop changing wsgiref on the trunk
In-Reply-To: <5.1.1.6.0.20060612121658.03255358@sparrow.telecommunity.com>
References: <5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612121658.03255358@sparrow.telecommunity.com>
Message-ID: <20060613131535.GA30908@code0.codespeak.net>

Hi Phillip,

On Mon, Jun 12, 2006 at 12:29:48PM -0400, Phillip J. Eby wrote:
> This idea would address the needs of external maintainers (having a single 
> release history) while still allowing Python developers to modify the code 
> (if the external package is in Python's SVN repository).

It's actually possible to import a part of an SVN repository into
another while preserving history.  That would be a way to move the
regular development of such packages completely to the Python SVN,
without loss.


A bientot,

Armin

From barry at python.org  Tue Jun 13 15:20:57 2006
From: barry at python.org (Barry Warsaw)
Date: Tue, 13 Jun 2006 09:20:57 -0400
Subject: [Python-Dev] Dropping externally maintained packages
	(Was:Please stop changing wsgiref on the trunk)
In-Reply-To: <e6l1kg$ev0$1@sea.gmane.org>
References: <5.1.1.6.0.20060612190701.01e98790@sparrow.telecommunity.com>	<e6kb3d$2gg$1@sea.gmane.org>	<5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com>	<5.1.1.6.0.20060612121658.03255358@sparrow.telecommunity.com>	<ca471dc20606120943o3f9d97ccle62db7539edec1c1@mail.gmail.com>	<448D9EA1.9000209@v.loewis.de>
	<e6k8ug$p9q$1@sea.gmane.org>	<bbaeab100606121105t3a0e633fne59f67241cc28a@mail.gmail.com>	<e6kb3d$2gg$1@sea.gmane.org>	<5.1.1.6.0.20060612190701.01e98790@sparrow.telecommunity.com>
	<448DFA84.1040507@v.loewis.de>
	<5.1.1.6.0.20060612194112.01eb1198@sparrow.telecommunity.com>
	<e6l1kg$ev0$1@sea.gmane.org>
Message-ID: <3C012812-6F72-40BE-8E47-CF8C93DFC7BC@python.org>

On Jun 12, 2006, at 8:42 PM, Steve Holden wrote:

> Phillip J. Eby wrote:
> [...]
>> So, to summarize, it's all actually Tim's fault, but only in a  
>> parallel
>> universe where nobody believes in unit testing.  ;-)
>>
> I'm sorry to contradict you, but every issue of significance is  
> already
> known to be Barry's fault.
>
> probably-until-the-end-of-time-ly y'rs  - steve

Oh sure, blame the guy who was going to buy both you /and/ Tim kung  
pao chicken in the futile attempt to relieve us all of your new found  
crabbiness.  That's okay, I'll only hold it against you until the end  
of time.

oh-heck-let-the-psf-buy-a-round-for-the-house-ly y'rs,
-Barry


From fdrake at acm.org  Tue Jun 13 15:30:06 2006
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Tue, 13 Jun 2006 09:30:06 -0400
Subject: [Python-Dev] Dropping externally maintained packages
	(Was:Please stop changing wsgiref on the trunk)
In-Reply-To: <e6l1kg$ev0$1@sea.gmane.org>
References: <5.1.1.6.0.20060612190701.01e98790@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612194112.01eb1198@sparrow.telecommunity.com>
	<e6l1kg$ev0$1@sea.gmane.org>
Message-ID: <200606130930.06320.fdrake@acm.org>

On Monday 12 June 2006 20:42, Steve Holden wrote:
 > Phillip J. Eby wrote:
 > I'm sorry to contradict you, but every issue of significance is already
 > known to be Barry's fault.

And don't forget, all the issues of no significance as well.  Barry's been 
busy!  :-)


  -Fred

-- 
Fred L. Drake, Jr.   <fdrake at acm.org>

From ronaldoussoren at mac.com  Tue Jun 13 15:59:05 2006
From: ronaldoussoren at mac.com (Ronald Oussoren)
Date: Tue, 13 Jun 2006 15:59:05 +0200
Subject: [Python-Dev] request for review: patch 1446489
	(zip64	extensions in zipfile)
In-Reply-To: <20060613130847.GA18026@panix.com>
References: <994ED1A1-E334-44A3-B94C-BAF4ABFBBB45@mac.com>
	<20060613130847.GA18026@panix.com>
Message-ID: <1CE83B0A-82E7-4BD6-8D06-65781DA5C18F@mac.com>


On 13-jun-2006, at 15:08, Aahz wrote:

> On Tue, Jun 13, 2006, Ronald Oussoren wrote:
>>
>> There are two backward incompatible changes, both minor. First of all
>> ZipInfo will lose the file_offset attribute, because calculating it
>> when opening a zipfile is very expensive (it basically requires a
>> full scan of the zipfile). This should be harmless; I couldn't come
>> up with a use case other than reimplementing the read method outside
>> of zipfile.
>
> Not knowing anything about this, why not implement file_offset as a
> property?

That is possible, but it would introduce a link from ZipInfo instances to  
ZipFile instances and therefore a reference cycle. As ZipFile  
has a __del__ method, that's not something I wanted to do.
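
Roughly what that back-reference would look like, and why it is
unattractive (sketch only, not code from the patch):

    class ZipFile(object):
        def __init__(self):
            self.filelist = []       # ZipFile -> ZipInfo references, as today
        def __del__(self):
            pass                     # __del__ keeps Python 2's cyclic GC from
                                     # collecting cycles through this object

    class ZipInfo(object):
        def __init__(self, owner):
            self.owner = owner           # hypothetical back-reference that a
            owner.filelist.append(self)  # file_offset property would need;
                                         # this completes the cycle

    # Such ZipFile <-> ZipInfo cycles end up in gc.garbage instead of
    # being freed.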

Ronald

-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 2157 bytes
Desc: not available
Url : http://mail.python.org/pipermail/python-dev/attachments/20060613/b3586fbe/attachment-0001.bin 

From jdc at uwo.ca  Tue Jun 13 17:22:58 2006
From: jdc at uwo.ca (Dan Christensen)
Date: Tue, 13 Jun 2006 11:22:58 -0400
Subject: [Python-Dev] a note in random.shuffle.__doc__ ...
References: <1CC07EFE-45B2-4AA4-BEFB-0CFFE4CDB1F9@gmail.com>
	<20060610124332.F2B2.JCARLSON@uci.edu>
	<20060610125305.F2B5.JCARLSON@uci.edu>
	<EEEB13F7-072A-4B17-A99E-3463B53AB434@gmail.com>
	<17547.16708.105058.906604@terry.jones.tc>
	<448B664E.3040003@canterbury.ac.nz>
	<17547.27686.67002.988677@terry.jones.tc>
	<448CB505.2040304@canterbury.ac.nz>
	<17548.49469.394804.146445@terry.jones.tc>
	<448DFDB3.2050100@canterbury.ac.nz>
Message-ID: <877j3l13n1.fsf@uwo.ca>

Greg Ewing <greg.ewing at canterbury.ac.nz> writes:

> Terry Jones wrote:
>
>> The code below uses a RNG with period 5, is deterministic, and has one
>> initial state. It produces 20 different outcomes.
>
> You misunderstand what's meant by "outcome" in this
> context. The outcome of your algorithm is the whole
> *sequence* of numbers it produces, not each individual
> number. 

I think Terry's point is valid.  While no one call to
random.shuffle(L) can produce every possible ordering of L (when
len(L) is large), since random.shuffle shuffles the data in place,
repeated calls to random.shuffle(L) could in principle produce every
possible ordering, since the "algorithm" is saving state.  Down below
I show code that calls random.shuffle on a 4 element list using a
"random" number generator of period 13, and it produces all
permutations.

(More generally, there's nothing to stop someone from changing
the random.shuffle code to explicitly store more internal state
to ensure that every permutation eventually gets produced.  I'm
of course not suggesting that this would be a good idea!)

In any case, the (old) text in the docstring:

        Note that for even rather small len(x), the total number of
        permutations of x is larger than the period of most random
        number generators; this implies that "most" permutations of a
        long sequence can never be generated.

is at least a bit misleading, especially the "never" part.

Dan



import random

i = 0
period = 13
def myrand():
    global i
    i = (i+2)%period
    return float(i)/period

def countshuffles(L, num):
    L = list(L)
    S = set([])

    for i in range(num):
        random.shuffle(L, random=myrand)
        S.add(tuple(L))

    return len(S)

print countshuffles([1,2,3,4], 40)


From mal at egenix.com  Tue Jun 13 19:17:00 2006
From: mal at egenix.com (M.-A. Lemburg)
Date: Tue, 13 Jun 2006 19:17:00 +0200
Subject: [Python-Dev] Updating packages from external ?
Message-ID: <448EF30C.6000908@egenix.com>

I just tried to upgrade Tools/pybench/ to my latest version,
so I imported pybench-2.0 into the externals/ tree and then
tried copying over the new version into the Tools/pybench/
trunk.

Unfortunately the final copy didn't actually replace the files in
Tools/pybench/ but instead created a sub-directory with name
"pybench-2.0".

Here's the command I used:

svn copy svn+pythonssh://pythondev at svn.python.org/external/pybench-2.0  \
         svn+pythonssh://pythondev at svn.python.org/python/trunk/Tools/pybench

Am I missing some final slash in the copy command or is there
a different procedure which should be followed for these upgrades,
such as e.g. remove the package directory first, then copy over
the new version ?

Thanks for any help,
-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 13 2006)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________

::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! ::::

From fredrik at pythonware.com  Tue Jun 13 19:19:26 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Tue, 13 Jun 2006 19:19:26 +0200
Subject: [Python-Dev] Updating packages from external ?
In-Reply-To: <448EF30C.6000908@egenix.com>
References: <448EF30C.6000908@egenix.com>
Message-ID: <e6ms2s$igj$2@sea.gmane.org>

M.-A. Lemburg wrote:

> Here's the command I used:
> 
> svn copy svn+pythonssh://pythondev at svn.python.org/external/pybench-2.0  \
>          svn+pythonssh://pythondev at svn.python.org/python/trunk/Tools/pybench
> 
> Am I missing some final slash in the copy command or is there
> a different procedure which should be followed for these upgrades,
> such as e.g. remove the package directory first, then copy over
> the new version ?

that's one way to do it.

another way is to use the svn_load_dirs.pl script described here:

     http://svnbook.red-bean.com/en/1.0/ch07s04.html
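
the remove-then-copy variant you mention would be roughly as follows
(untested sketch; commit messages made up):

     svn rm -m "remove old pybench before upgrade" \
         svn+pythonssh://pythondev at svn.python.org/python/trunk/Tools/pybench
     svn copy -m "copy in pybench 2.0" \
         svn+pythonssh://pythondev at svn.python.org/external/pybench-2.0 \
         svn+pythonssh://pythondev at svn.python.org/python/trunk/Tools/pybench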

</F>


From tjreedy at udel.edu  Tue Jun 13 19:48:24 2006
From: tjreedy at udel.edu (Terry Reedy)
Date: Tue, 13 Jun 2006 13:48:24 -0400
Subject: [Python-Dev] Scoping vs augmented assignment vs sets (Re: 'fast
	locals' in Python 2.5)
References: <20060607083940.GA12003@code0.codespeak.net>	<e6jm7e$hnn$1@sea.gmane.org><20060612162022.F2BF.JCARLSON@uci.edu>
	<e6mcfg$iao$1@sea.gmane.org>
Message-ID: <e6mtp8$q08$1@sea.gmane.org>


"Boris Borcic" <bborcic at gmail.com> wrote in message 
news:e6mcfg$iao$1 at sea.gmane.org...

>being transformed to profit from simplifications I expected sets to allow.
>There, itemwise augmented assignments in loops very naturally transform to
>wholesale augmented assignments without loops. Except for this wart.

Your transformation amounted to switching from collection mutation to 
object rebinding.  In Python, that is a crucial difference.  That the 
mutation and rebinding were both done with augmented assignments is not 
terribly important except as this masks the difference.  When *you* read 
your code, you know that you will only call the inner function with a 
mutable collection object, so you know that the name will be rebound to the 
same object after mutation, so you can think of the augmented assignment as 
being the same as collection mutation.

But the compiler does not know any such thing about the target of the 
augmented assignment and must therefore treat the statement as an 
assignment.  It was a bug for a2 to do otherwise, even though the bug was 
locally optimal for this particular usage.

Terry Jan Reedy




From walter at livinglogic.de  Tue Jun 13 20:03:26 2006
From: walter at livinglogic.de (=?ISO-8859-1?Q?Walter_D=F6rwald?=)
Date: Tue, 13 Jun 2006 20:03:26 +0200
Subject: [Python-Dev] beta1 coming real soon
In-Reply-To: <448D9991.1050601@v.loewis.de>
References: <005101c68bad$cfd9ac30$0400a8c0@whiterabc2znlh>	<448953EA.9080006@livinglogic.de>	<003f01c68c40$8360c2b0$0400a8c0@whiterabc2znlh>
	<448D3A41.3090402@livinglogic.de> <448D9991.1050601@v.loewis.de>
Message-ID: <448EFDEE.3030302@livinglogic.de>

Martin v. Löwis wrote:

> Walter Dörwald wrote:
>>>> The best way to thoroughly test the patch is of course to check it in. ;)
>>> Is it too risky? ;)
>> At least I'd like to get a second review of the patch.
> 
> I've reviewed it, and am likely to check it in.

Great!

> I notice that the
> patch still has problems. In particular, it is limited to "DBCS"
> (and SBCS) character sets in the strict sense; general "MBCS"
> character sets are not supported. There are a few of these, most
> notably the ISO-2022 ones, UTF-8, and GB18030 (can't be bothered
> to look up the code page numbers for them right now).

True, but there's no IsMBCSLeadByte().

And passing MB_ERR_INVALID_CHARS in a call to MultiByteToWideChar()
doesn't help either, because AFAICT there's no information about the
error location. What could work would be to try MultiByteToWideChar()
with various string lengths to try to determine whether the error is due
to an incomplete byte sequence or invalid data. But that sounds ugly and
slow to me.

> What I don't know is whether any Windows locale uses a "true"
> MBCS character set as its "ANSI" code page.
> 
> The approach taken in the patch could be extended to GB18030 and
> UTF-8 in principle,

Would that mean that we'd have to determine the active code page and
implement the incomplete byte sequence detection ourselves?

> but can't possibly work for ISO-2022.

So does that mean that IsDBCSLeadByte() returns garbage in this case?

Servus,
   Walter


From martin at v.loewis.de  Tue Jun 13 20:33:59 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Tue, 13 Jun 2006 20:33:59 +0200
Subject: [Python-Dev] beta1 coming real soon
In-Reply-To: <448EFDEE.3030302@livinglogic.de>
References: <005101c68bad$cfd9ac30$0400a8c0@whiterabc2znlh>	<448953EA.9080006@livinglogic.de>	<003f01c68c40$8360c2b0$0400a8c0@whiterabc2znlh>
	<448D3A41.3090402@livinglogic.de> <448D9991.1050601@v.loewis.de>
	<448EFDEE.3030302@livinglogic.de>
Message-ID: <448F0517.3090705@v.loewis.de>

Walter Dörwald wrote:
> And passing MB_ERR_INVALID_CHARS in a call to MultiByteToWideChar()
> doesn't help either, because AFAICT there's no information about the
> error location. What could work would be to try MultiByteToWideChar()
> with various string lengths to try to determine whether the error is due
> to an incomplete byte sequence or invalid data. But that sounds ugly and
> slow to me.

That's all true, yes.

>> but can't possibly work for ISO-2022.
> 
> So does that mean that IsDBCSLeadByte() returns garbage in this case?

IsDBCSLeadByteEx is documented to only validate lead bytes for selected
code pages; MSDN versions differ in what these code pages are. The
current online version says

"This function validates leading byte values only in the following code
pages: 932, 936, 949, 950, and 1361."

whereas my January 2006 MSDN (DVD version) says

"IsDBCSLeadByteEx does not validate any lead byte in multi-byte
character set (MBCS) code pages, for example, code pages 52696, 54936,
51949 and 5022x."

Whether or not this is relevant for IsDBCSLeadByte also, I cannot tell:
- maybe they forgot to document the limitation there as well
- maybe you can't use one of the unsupported code pages as CP_ACP,
  so the problem cannot occur
- maybe IsDBCSLeadByte does indeed work correctly in these cases, when
  IsDBCSLeadByteEx doesn't

The latter is difficult to believe, though, as IsDBCSLeadByte is likely
implemented as


BOOL IsDBCSLeadByte(BYTE TestChar)
{
  return IsDBCSLeadByteEx(GetACP(), TestChar);
}

Regards,
Martin

From jcarlson at uci.edu  Tue Jun 13 20:49:27 2006
From: jcarlson at uci.edu (Josiah Carlson)
Date: Tue, 13 Jun 2006 11:49:27 -0700
Subject: [Python-Dev] Scoping vs augmented assignment vs sets (Re: 'fast
	locals' in Python 2.5)
In-Reply-To: <e6mcfg$iao$1@sea.gmane.org>
References: <20060612162022.F2BF.JCARLSON@uci.edu> <e6mcfg$iao$1@sea.gmane.org>
Message-ID: <20060613111408.F2C5.JCARLSON@uci.edu>


Boris Borcic <bborcic at gmail.com> wrote:
> NB : That the compiler's interpretation has no use-cases is my crucial point, 
> it's the reason why I dared suggest a design bug - as you seem to take at heart.

I think that Python's compiler with respect to augmented assignment and
nested scopes is proper and sufficient. Believe whatever you want about
my intentions.


> Josiah Carlson wrote:
> > and significantly more readable if it were implemented as a
> > class.
> 
> I'll deny that flatly since first of all the issue isn't limited to closures. It 
> would apply just as well if it involved globals and top-level functions.
> 
> > 
> > class solve:
> >     def __init__(self, problem):
> >         self.freebits = ...
> >     ...
> >     def search(self, data):
> >         ...
> >         self.freebits ^= swaps
> >         ...
> >     ...
> > 
> > Not everything needs to (or should) be a closure
> 
> Right. Let's thus consider
> 
> freebits = ...
> 
> def search(data) :
>      ...
>      freebits ^= swaps

You seem to not realize that these are different use-cases.  Your new
example involves a global variable that is *shared* among everyone that
knows  about this particular module.  It also is repaired by a simple
insertion of 'global freebits' at the beginning of the search function. 
The closure/class example is merely a method of encapsulating state,
which I find easier to define, describe, and document than the closure
version.
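
For completeness, the repaired module-global version looks something
like this (N and the swapped set are made up for illustration):

    N = 8
    freebits = set(range(N))            # module-level, shared state

    def search(data):
        global freebits                 # the one-line repair mentioned above
        freebits ^= set([0, 1])         # now rebinds the module-level name
        return data

    search(None)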

Back in February, there was a discussion about allowing people to
'easily' access and modify variables defined in lexically nested scopes,
but I believed then, as I believe now, that such attempted uses of
closures are foolish when given classes.  Given the *trivial* conversion
of your closure example to a class, and my previous comments on closures
"I find their use rarely, if ever, truely elegant, [...] more like
kicking a puppy for barking: [...] there are usually better ways of
dealing with the problem (don't kick puppies for barking and don't use
closures).", you are not likely to find me agreeing with you about
augmented assignment and/or lexically nested scopes.

 - Josiah


From mal at egenix.com  Tue Jun 13 21:10:57 2006
From: mal at egenix.com (M.-A. Lemburg)
Date: Tue, 13 Jun 2006 21:10:57 +0200
Subject: [Python-Dev] Updating packages from external ?
In-Reply-To: <e6ms2s$igj$2@sea.gmane.org>
References: <448EF30C.6000908@egenix.com> <e6ms2s$igj$2@sea.gmane.org>
Message-ID: <448F0DC1.8040900@egenix.com>

Fredrik Lundh wrote:
> M.-A. Lemburg wrote:
> 
>> Here's the command I used:
>>
>> svn copy svn+pythonssh://pythondev at svn.python.org/external/pybench-2.0  \
>>          svn+pythonssh://pythondev at svn.python.org/python/trunk/Tools/pybench
>>
>> Am I missing some final slash in the copy command or is there
>> a different procedure which should be followed for these upgrades,
>> such as e.g. remove the package directory first, then copy over
>> the new version ?
> 
> that's one way to do it.
> 
> another way is to use the svn_load_dirs.pl script described here:
> 
>      http://svnbook.red-bean.com/en/1.0/ch07s04.html

Thanks.

I tried that and also the approach to merge the differences
by hand, but both result in lost version history on the files
in the trunk.

In the end, I simply copied over the distribution onto the
trunk version and did the necessary svn add/remove by
hand. As a result, there's no history leading back to the
external/pybench-2.0/ tree (though I've added a checkin
comment), but it is possible to see and follow the changes
I made to the trunk version.

BTW, just saw the discussion burst about externally managed
packages:

The way I see things with pybench is that development
can and should happen in the trunk, but I reserve the freedom
to use that version as basis for new external releases
which I then merge back into the trunk.

This does sometimes mean incompatible changes or even
removal of things others have added, but then, if it
didn't, the package wouldn't be /externally/
managed :-)

That said, being the maintainer of such a package
becomes harder, not easier, once it is contributed to the
Python core.

Contributors need to be aware of this, but Python core
developers should as well. It's all about working
together rather than fighting each other.

Cheers,
-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 13 2006)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________

::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! ::::

From mal at egenix.com  Tue Jun 13 21:16:43 2006
From: mal at egenix.com (M.-A. Lemburg)
Date: Tue, 13 Jun 2006 21:16:43 +0200
Subject: [Python-Dev] Python Benchmarks
In-Reply-To: <448E7FA2.8080607@egenix.com>
References: <AF774AE0-6E02-4ED9-A23E-5315B93B89BB@alum.mit.edu>	<17533.48558.364179.717911@montanaro.dyndns.org>	<447DC377.4000504@egenix.com>	<17533.51096.462552.451772@montanaro.dyndns.org>	<447DD153.8080202@egenix.com>	<e5kkr6$14a$1@sea.gmane.org>	<447DE055.4040105@egenix.com><e5oa7b$kvl$1@sea.gmane.org>	<447FF966.5050807@egenix.com>	<e5p06h$tp1$1@sea.gmane.org>	<448141F6.7030806@v.loewis.de>	<e5rjkj$rph$1@sea.gmane.org>	<44859C12.8080306@egenix.com>	<4485C152.5050705@egenix.com>	<e671tn$rhg$1@sea.gmane.org>	<448740F2.4020900@egenix.com>	<e67g2b$o33$1@sea.gmane.org>	<4487E1DF.3030302@egenix.com>	<e696es$5hs$1@sea.gmane.org>	<4489401E.9040606@egenix.com>	<e6bifh$5bv$1@sea.gmane.org>	<44895112.4040600@egenix.com>
	<448E7FA2.8080607@egenix.com>
Message-ID: <448F0F1B.7010909@egenix.com>

FYI: I've just checked in pybench 2.0 under Tools/pybench/.

Please give it a go and let me know whether the new
calibration strategy and default timers result in
better repeatability of the benchmark results.

I've tested the release extensively on Windows and Linux
and found that the test times are repeatable within +/- 5%
for each test and under 1% for the whole suite.

Note that this is an incompatible change to pybench. You
will not be able to compare results from runs using pybench 1.x
against results from pybench 2.0 (and pybench will refuse
to display such diffs).

Enjoy,
-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 13 2006)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________

::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! ::::

From jeremy at alum.mit.edu  Tue Jun 13 23:05:14 2006
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Tue, 13 Jun 2006 17:05:14 -0400
Subject: [Python-Dev] Python sprint at Google Aug. 21-24
Message-ID: <e8bf7a530606131405x6ba94fa1wc23f6a998521e92b@mail.gmail.com>

We'd like to invite you to attend a Python development sprint at
Google the week of Aug. 21.  We will be hosting sprints on two
coasts--at Google HQ in Mountain View CA and at our New York City
office.  You can find more information here:

    http://wiki.python.org/moin/GoogleSprint

The sprint will occur just after the Python 2.5 release, assuming the
schedule doesn't slip.  It will be a great opportunity to get started
on Python 2.6 and Python 3000 work.  We hope to get some time for
face-to-face discussion about Python 3000 issues.  (We're expecting to
have a video conference hookup between the two sprints so that
everyone can participate equally.)

Google will provide the basic infrastructure for sprinting--desk
space, network connectivity, and food.  You need to bring your laptop
and your coding skills.  The wiki page has more details about
locations, hotels, &c.  We'll keep it up to date with all the
logistical information about the sprint.

The sprints are sponsored by the Open Source Program Office at Google.
 We'd like to thank Chris DiBona and Leslie Hawthorn for their
enthusiastic support!

Please send any questions about logistics to Neal and me.  I assume we
can discuss the actual technical work on python-dev as usual.

Jeremy & Neal
(co-organizers)

From terry at jon.es  Tue Jun 13 23:43:57 2006
From: terry at jon.es (Terry Jones)
Date: Tue, 13 Jun 2006 23:43:57 +0200
Subject: [Python-Dev] a note in random.shuffle.__doc__ ...
In-Reply-To: Your message at 11:22:58 on Tuesday, 13 June 2006
References: <1CC07EFE-45B2-4AA4-BEFB-0CFFE4CDB1F9@gmail.com>
	<20060610124332.F2B2.JCARLSON@uci.edu>
	<20060610125305.F2B5.JCARLSON@uci.edu>
	<EEEB13F7-072A-4B17-A99E-3463B53AB434@gmail.com>
	<17547.16708.105058.906604@terry.jones.tc>
	<448B664E.3040003@canterbury.ac.nz>
	<17547.27686.67002.988677@terry.jones.tc>
	<448CB505.2040304@canterbury.ac.nz>
	<17548.49469.394804.146445@terry.jones.tc>
	<448DFDB3.2050100@canterbury.ac.nz> <877j3l13n1.fsf@uwo.ca>
Message-ID: <17551.12701.575992.955330@terry.jones.tc>

>>>>> "Dan" == Dan Christensen <jdc at uwo.ca> writes:
Dan> Greg Ewing <greg.ewing at canterbury.ac.nz> writes:
>> Terry Jones wrote:
>> 
>>> The code below uses a RNG with period 5, is deterministic, and has one
>>> initial state. It produces 20 different outcomes.
>> 
>> You misunderstand what's meant by "outcome" in this
>> context. The outcome of your algorithm is the whole
>> *sequence* of numbers it produces, not each individual
>> number. 

Dan> I think Terry's point is valid.  While no one call to
Dan> random.shuffle(L) can produce every possible ordering of L (when
Dan> len(L) is large), since random.shuffle shuffles the data in place,
Dan> repeated calls to random.shuffle(L) could in principle produce every
Dan> possible ordering, since the "algorithm" is saving state.  Down below
Dan> I show code that calls random.shuffle on a 4 element list using a
Dan> "random" number generator of period 13, and it produces all
Dan> permutations.

Maybe I should reiterate what I meant, as it seems the discussion is really
just semantics at this point.

Greg said:

    >>>>> "Greg" == Greg Ewing <greg.ewing at canterbury.ac.nz> writes:
    Greg> A generator with only N possible internal states can't
    Greg> possibly result in more than N different outcomes from
    Greg> any algorithm that uses its results.

And I replied:

    I don't mean to pick nits, but I do find this a bit too general.

    Suppose you have a RNG with a cycle length of 5. There's nothing to
    stop an algorithm from taking multiple already returned values and
    combining them in some (deterministic) way to generate > 5 outcomes.
    (Yes, those outcomes might be more, or less, predictable but that's not
    the point). If you expanded what you meant by "internal states" to
    include the state of the algorithm (as well as the state of the RNG),
    then I'd be more inclined to agree.

I was not meaning to say that anyone was wrong, just that I found Greg's
characterization a bit too general, or not as well defined as it might have
been.

It's clear, I think, from the example code that I and Dan posted, that one
can move the boundary between the RNG and the algorithm using it. You can
take a technique (like using lagging as I did, or Dan's method which stores
and composes permutations) out of the RNG and put it in the algorithm.
That's the reason I reacted to Greg's summary - I don't think it's right
when you confine yourself to just the internal states of the generator. As
I said, if you also consider the states of the algorithm, then the argument
is easier to defend. From an information theory point of view, it's simpler
not to draw the distinction between what's in the "RNG" and what's in the
"algorithm" that uses it. I guess we must all agree at that level, which to
me means that the recent discussion is necessarily just semantics.

[Tim's response to my first post wasn't just semantics - I was wrong in
what I said, and he made it clear (to me anyway) why. But at that point
there was no discussion of what any algorithm could produce as an outcome,
algorithm state, determinism, etc.]

And yes, you can also define outcome as you like. I deliberately included
the word 'outcome' in my print statement.  I thought that was definitive :-)

Terry

From greg.ewing at canterbury.ac.nz  Wed Jun 14 02:45:29 2006
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Wed, 14 Jun 2006 12:45:29 +1200
Subject: [Python-Dev] a note in random.shuffle.__doc__ ...
In-Reply-To: <877j3l13n1.fsf@uwo.ca>
References: <1CC07EFE-45B2-4AA4-BEFB-0CFFE4CDB1F9@gmail.com>
	<20060610124332.F2B2.JCARLSON@uci.edu>
	<20060610125305.F2B5.JCARLSON@uci.edu>
	<EEEB13F7-072A-4B17-A99E-3463B53AB434@gmail.com>
	<17547.16708.105058.906604@terry.jones.tc>
	<448B664E.3040003@canterbury.ac.nz>
	<17547.27686.67002.988677@terry.jones.tc>
	<448CB505.2040304@canterbury.ac.nz>
	<17548.49469.394804.146445@terry.jones.tc>
	<448DFDB3.2050100@canterbury.ac.nz> <877j3l13n1.fsf@uwo.ca>
Message-ID: <448F5C29.50204@canterbury.ac.nz>

Dan Christensen wrote:

> I think Terry's point is valid.  While no one call to
> random.shuffle(L) can produce every possible ordering of L (when
> len(L) is large), since random.shuffle shuffles the data in place,
> repeated calls to random.shuffle(L) could in principle produce every
> possible ordering,

But how do you decide how many times to shuffle before
dealing the cards? If you pull that number out of your
RNG, then you're back in the same boat. If you get it
from somewhere else, the RNG is no longer the only
thing determining the result.

--
Greg

From greg.ewing at canterbury.ac.nz  Wed Jun 14 03:22:53 2006
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Wed, 14 Jun 2006 13:22:53 +1200
Subject: [Python-Dev] a note in random.shuffle.__doc__ ...
In-Reply-To: <17551.12701.575992.955330@terry.jones.tc>
References: <1CC07EFE-45B2-4AA4-BEFB-0CFFE4CDB1F9@gmail.com>
	<20060610124332.F2B2.JCARLSON@uci.edu>
	<20060610125305.F2B5.JCARLSON@uci.edu>
	<EEEB13F7-072A-4B17-A99E-3463B53AB434@gmail.com>
	<17547.16708.105058.906604@terry.jones.tc>
	<448B664E.3040003@canterbury.ac.nz>
	<17547.27686.67002.988677@terry.jones.tc>
	<448CB505.2040304@canterbury.ac.nz>
	<17548.49469.394804.146445@terry.jones.tc>
	<448DFDB3.2050100@canterbury.ac.nz>
	<877j3l13n1.fsf@uwo.ca> <17551.12701.575992.955330@terry.jones.tc>
Message-ID: <448F64ED.20002@canterbury.ac.nz>

Terry Jones wrote:

> I was not meaning to say that anyone was wrong, just that I found Greg's
> characterization a bit too general, or not as well defined as it might have
> been.

I meant it in the context being discussed, which was a
shuffling algorithm being used the way shuffling algorithms
are normally used.

To be clear: there is an algorithm with a fixed input and
a single output, and a PRNG whose initial state determines
the actions of the algorithm. The only thing which can
change is the initial state of the PRNG.

> It's clear, I think, from the example code that I and Dan posted, that one
> can move the boundary between the RNG and the algorithm using it.

Only if you correspondingly move the boundary of what
constitutes the initial state. It doesn't matter how
much internal state the algorithm has; if it starts
out in the *same* initial state each time, it can't
increase the number of possible results.

While you probably understood this, it's worth
pointing out explicitly, because some people don't,
or neglect to consider it when thinking about this
sort of situation.

--
Greg

From greg at electricrain.com  Wed Jun 14 06:59:53 2006
From: greg at electricrain.com (Gregory P. Smith)
Date: Tue, 13 Jun 2006 21:59:53 -0700
Subject: [Python-Dev] request for review: patch 1446489 (zip64
	extensions in zipfile)
In-Reply-To: <994ED1A1-E334-44A3-B94C-BAF4ABFBBB45@mac.com>
References: <994ED1A1-E334-44A3-B94C-BAF4ABFBBB45@mac.com>
Message-ID: <20060614045953.GA31665@zot.electricrain.com>

> As I mentioned earlier I'd like to get patch 1446489 (support for  
> zip64 extensions in the zipfile module) in python 2.5. The patch  
> should be perfectly safe, it comes with unittests and a documentation  
> update. I'm also using this version of zipfile in (closed-source)  
> projects to handle huge zipfiles.
...
> The major changes in the patch are: support for ZIP64 extensions  
> which make it possible to handle zipfiles that are larger than 2  
> GByte in size and a significant speed-up of zipfile opening when  
> dealing with zipfiles that contain a large amount of files.
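
(For reference, a minimal sketch of how the large-archive support would
presumably be used once merged; the allowZip64 flag name is taken from
the patch and may differ:)

    import zipfile

    # without the zip64 extensions, archives or members beyond the 2 GB
    # limits cannot be written at all
    zf = zipfile.ZipFile("huge.zip", "w", zipfile.ZIP_DEFLATED, allowZip64=True)
    zf.write("some_very_large_file.bin")    # hypothetical input file
    zf.close()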

+1  I've reviewed it and think it sounds useful.  ZIP64 is
    supported by many of the popular windows zip file utils.

I'd like to see this one checked in.  Unless I see objections in the
next 24 hours I'll check it in.

-greg


From g.brandl at gmx.net  Wed Jun 14 08:51:03 2006
From: g.brandl at gmx.net (Georg Brandl)
Date: Wed, 14 Jun 2006 08:51:03 +0200
Subject: [Python-Dev] Improve error msgs?
Message-ID: <e6obkn$6ao$1@sea.gmane.org>

In abstract.c, there are many error messages like

type_error("object does not support item assignment");

It would help debugging if the object's type were prepended.
Should I go through the code and try to enhance them
where possible?
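
For instance (the current wording is quoted from memory, so treat both
messages as illustrative rather than exact):

    >>> object()[0] = 42
    TypeError: object does not support item assignment

versus, with the type prepended:

    TypeError: 'object' object does not support item assignment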

Georg


From g.brandl at gmx.net  Wed Jun 14 10:37:55 2006
From: g.brandl at gmx.net (Georg Brandl)
Date: Wed, 14 Jun 2006 10:37:55 +0200
Subject: [Python-Dev] Long options support
Message-ID: <e6oht3$q2n$1@sea.gmane.org>

I've just closed a bug report wishing for long option support,
pointing to a patch sitting in the patch tracker implementing
this.

Should we accept at least the very common options "--help" and
"--version" in 2.5?

Georg


From fredrik at pythonware.com  Wed Jun 14 11:23:47 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Wed, 14 Jun 2006 11:23:47 +0200
Subject: [Python-Dev] Long options support
References: <e6oht3$q2n$1@sea.gmane.org>
Message-ID: <e6okjb$3s0$1@sea.gmane.org>

Georg Brandl wrote:

> I've just closed a bug report wishing for long option support,
> pointing to a patch sitting in the patch tracker implementing
> this.
>
> Should we accept at least the very common options "--help" and
> "--version" in 2.5?

Guido pronounced on this in May:

    "Guido van Rossum" <guido at python.org> wrote:

    > On 5/4/06, Fredrik Lundh <fredrik at pythonware.com> wrote:

    > > I'm +1 on adding --help and --version, +1 on adding -? and /? for windows only,
    > > -0=slightly sceptical if adding a generic long option machinery is worth it, and -1
    > > on a guessing machinery.
    >
    > I fully support Fredrik's position on this issue.
    >
    > --Guido van Rossum (home page: http://www.python.org/~guido/)

</F> 




From bborcic at gmail.com  Wed Jun 14 12:05:26 2006
From: bborcic at gmail.com (Boris Borcic)
Date: Wed, 14 Jun 2006 12:05:26 +0200
Subject: [Python-Dev] Scoping vs augmented assignment vs sets (Re: 'fast
 locals' in Python 2.5)
In-Reply-To: <e6mtp8$q08$1@sea.gmane.org>
References: <20060607083940.GA12003@code0.codespeak.net>	<e6jm7e$hnn$1@sea.gmane.org><20060612162022.F2BF.JCARLSON@uci.edu>	<e6mcfg$iao$1@sea.gmane.org>
	<e6mtp8$q08$1@sea.gmane.org>
Message-ID: <e6on1i$bvc$1@sea.gmane.org>

Terry Reedy wrote:
> "Boris Borcic" <bborcic at gmail.com> wrote in message 
> news:e6mcfg$iao$1 at sea.gmane.org...
> 
>> being transformed to profit from simplifications I expected sets to allow.
>> There, itemwise augmented assignments in loops very naturally transform to
>> wholesale augmented assignments without loops. Except for this wart.
> 
> Your transformation amounted to switching from collection mutation to 
> object rebinding.  In Python, that is a crucial difference.

Ok, that is a crucial difference. The question becomes: is that difference, in 
the case of augmented assignment, maintained for practical reasons or for 
purity, a.k.a. ideological, reasons?

> That the 
> mutation and rebinding were both done with augmented assignments is not 
> terribly important except as this masks the difference.  When *you* read
> your code, you know that you will only call the inner function with a 
> mutable collection object, so you know that the name will be rebound to the 
> same object after mutation, so you can think of the augmented assignment as 
> being the same as collection mutation.

Not quite. When I read my code (and while I transform it) I know (as an 
invariant) that nowhere in my function do I initially bind a value to my 
variable, and I know that it can't be done with augmented assignments, and I 
know that no working code can /ever/ result from letting a function-local scope 
capture a name /just/ because it is the target of an augmented assignment.

So, when despite this obvious certainty the compiler translates my code to 
something that can't possibly run while it happily translates x+=1 and x=x+1 to 
different bytecodes with no qualms about x possibly being bound to an immutable, 
well, I do feel victim of something like a hidden agenda.

> 
> But the compiler does not know any such thing about the target of the 
> augmented assignment and must therefore treat the statement as an 
> assigment.  It was a bug for a2 to do otherwise, even though the bug was 
> locally optimal for this particular usage.

I must thank you for the effort of elaborating a polite and consistent 
explanation that was almost to the point and believable.

Regards, Boris Borcic
--
"On na?t tous les m?tres du m?me monde"


From bborcic at gmail.com  Wed Jun 14 13:12:09 2006
From: bborcic at gmail.com (Boris Borcic)
Date: Wed, 14 Jun 2006 13:12:09 +0200
Subject: [Python-Dev] Scoping vs augmented assignment vs sets (Re: 'fast
 locals' in Python 2.5)
In-Reply-To: <20060613111408.F2C5.JCARLSON@uci.edu>
References: <20060612162022.F2BF.JCARLSON@uci.edu> <e6mcfg$iao$1@sea.gmane.org>
	<20060613111408.F2C5.JCARLSON@uci.edu>
Message-ID: <e6oquo$olu$1@sea.gmane.org>

Josiah Carlson wrote:

> > You seem to not realize that these are different use-cases.  Your new
> example involves a global variable that is *shared* among everyone that
> knows  about this particular module.  It also is repaired by a simple
> insertion of 'global freebits' at the beginning of the search function. 

My point here : a simple repair, and by a statement that amounts to a compiler 
directive with no other effect than obtaining my intent in bytecodes, so that 
"all other things being equal" comparisons of code versions remain possible 
(recall that I was studying the impact on working code of adopting sets - 
including performance).

> The closure/class example is merely a method of encapsulating state,
> which I find easier to define, describe, and document than the closure
> version.

Your privilege, of course (and I am not saying it is misguided, although I would 
in particular argue that the matter gets debatable in the low LOC-count limit). 
(I am also wondering about performance comparisons).

> Back in February, there was a discussion about allowing people to
> 'easily' access and modify variables defined in lexically nested scopes,
> but I believed then, as I believe now, that such attempted uses of
> closures are foolish when given classes.   Given the *trivial* conversion
> of your closure example to a class, and my previous comments on closures
> "I find their use rarely, if ever, truely elegant, [...] more like
> kicking a puppy for barking: [...] there are usually better ways of
> dealing with the problem (don't kick puppies for barking and don't use
> closures).", you are not likely to find me agreeing with you about
> augmented assignment and/or lexically nested scopes.

I see. Thanks for the background. Background for background, let me just say that 
Python hadn't yet grown a lambda when I first played with it. May I read your 
last statement as acknowledging that I am not so much asking for a door to be 
created as asking for a naturally built-in door not to be locked by special 
efforts?

Regards, Boris Borcic
--
"On na?t tous les m?tres du m?me monde"


From ndbecker2 at gmail.com  Wed Jun 14 13:33:43 2006
From: ndbecker2 at gmail.com (Neal Becker)
Date: Wed, 14 Jun 2006 07:33:43 -0400
Subject: [Python-Dev] High Level Virtual Machine
Message-ID: <e6os71$sei$1@sea.gmane.org>

I thought this announcement was interesting:

http://hlvm.org/


From arigo at tunes.org  Wed Jun 14 16:01:00 2006
From: arigo at tunes.org (Armin Rigo)
Date: Wed, 14 Jun 2006 16:01:00 +0200
Subject: [Python-Dev] Improve error msgs?
In-Reply-To: <e6obkn$6ao$1@sea.gmane.org>
References: <e6obkn$6ao$1@sea.gmane.org>
Message-ID: <20060614140100.GA10470@code0.codespeak.net>

Hi Georg,

On Wed, Jun 14, 2006 at 08:51:03AM +0200, Georg Brandl wrote:
> type_error("object does not support item assignment");
> 
> It helps debugging if the object's type was prepended.
> Should I go through the code and try to enhance them
> where possible?

I think it's an excellent idea.


Armin

From gh at ghaering.de  Wed Jun 14 16:02:26 2006
From: gh at ghaering.de (=?ISO-8859-1?Q?Gerhard_H=E4ring?=)
Date: Wed, 14 Jun 2006 16:02:26 +0200
Subject: [Python-Dev] sqlite3 test errors - was : Re: [Python-checkins]
 r46936 - in python/trunk: Lib/sqlite3/test/regression.py
 Lib/sqlite3/test/types.py	Lib/sqlite3/test/userfunctions.py
 Modules/_sqlite/connection.c
 Modules/_sqlite/cursor.c	Modules/_sqlite/module.c Modules/_sqlite/module.h
In-Reply-To: <448FDF37.9020809@ghaering.de>
References: <20060613222453.4561E1E4004@bag.python.org>	<1f7befae0606131837x2a4c71bdo588544a759968542@mail.gmail.com>
	<448FDF37.9020809@ghaering.de>
Message-ID: <449016F2.7050302@ghaering.de>

Co-posting to python-dev in the hope of getting help of people verifying 
my suspicion ...

Gerhard Häring wrote:
> [...]
> For some reason, they don't seem to have picked up the changed tests of 
> the sqlite3 module. At least the error messages look exactly like the 
> ones I had when I ran the current code against old tests.

That guess was wrong. The failed sqlite3 tests come from an old SQLite 
version being linked against. Until recently, SQLite was buggy and it 
was only fixed in

http://www.sqlite.org/cvstrac/chngview?cn=2981

that callbacks can throw errors that are usefully returned to the 
original caller.

The tests for the sqlite3 module currently assume a recent SQLite 
version (3.3.something). Otherwise some tests will fail.

Still, it can be built against any SQLite 3 release.

Can somebody please also verify if the malloc/free error message goes 
away (it really only happened on Mac, didn't it?) if you upgrade SQLite 
to the latest version on the build host?

-- Gerhard

From gh at ghaering.de  Wed Jun 14 16:32:16 2006
From: gh at ghaering.de (=?ISO-8859-1?Q?Gerhard_H=E4ring?=)
Date: Wed, 14 Jun 2006 16:32:16 +0200
Subject: [Python-Dev] [Python-checkins] sqlite3 test errors - was : Re:
 r46936 - in python/trunk: Lib/sqlite3/test/regression.py
 Lib/sqlite3/test/types.py	Lib/sqlite3/test/userfunctions.py
 Modules/_sqlite/connection.c
 Modules/_sqlite/cursor.c	Modules/_sqlite/module.c Modules/_sqlite/module.h
In-Reply-To: <449016F2.7050302@ghaering.de>
References: <20060613222453.4561E1E4004@bag.python.org>	<1f7befae0606131837x2a4c71bdo588544a759968542@mail.gmail.com>	<448FDF37.9020809@ghaering.de>
	<449016F2.7050302@ghaering.de>
Message-ID: <44901DF0.6060404@ghaering.de>

Gerhard Häring wrote:
> Co-posting to python-dev in the hope of getting help of people verifying 
> my suspicion ...
> 
> Gerhard Häring wrote:
>> [...]
>> For some reason, they don't seem to have picked up the changed tests of 
>> the sqlite3 module. At least the error messages look exactly like the 
>> ones I had when I ran the current code against old tests.
> 
> That guess was wrong. The failed sqlite3 tests come from an old SQLite 
> version being linked against. Until recently, SQLite was buggy and it 
> was only fixed in
> 
> http://www.sqlite.org/cvstrac/chngview?cn=2981
> 
> that callbacks can throw errors that are usefully returned to the 
> original caller.
> 
> The tests for the sqlite3 module currently assume a recent SQLite 
> version (3.3.something). Otherwise some tests will fail.
> 
> Still, it can be built against any SQLite 3 release.
> 
> Can somebody please also verify if the malloc/free error message goes 
> away (it really only happened on Mac, didn't it?) if you upgrade SQLite 
> to the latest version on the build host?

With SQLite 3.2.8, I also get segfaults on Linux x86 (Ubuntu dapper, gcc).

I've provided a preliminary patch (cannot check in from this place) that 
I've attached. Maybe somebody wants to test it, otherwise I'll make a 
few other tests in the late evening and probably also selectively 
disable certain tests in the test suite if the SQLite version is too old 
to pass them.

-- Gerhard
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: result_error.patch
Url: http://mail.python.org/pipermail/python-dev/attachments/20060614/978af359/attachment.asc 

From tjreedy at udel.edu  Wed Jun 14 16:50:05 2006
From: tjreedy at udel.edu (Terry Reedy)
Date: Wed, 14 Jun 2006 10:50:05 -0400
Subject: [Python-Dev] Scoping vs augmented assignment vs sets (Re: 'fast
	locals' in Python 2.5)
References: <20060607083940.GA12003@code0.codespeak.net>	<e6jm7e$hnn$1@sea.gmane.org><20060612162022.F2BF.JCARLSON@uci.edu>	<e6mcfg$iao$1@sea.gmane.org><e6mtp8$q08$1@sea.gmane.org>
	<e6on1i$bvc$1@sea.gmane.org>
Message-ID: <e6p7mt$9j1$1@sea.gmane.org>


"Boris Borcic" <bborcic at gmail.com> wrote in message 
news:e6on1i$bvc$1 at sea.gmane.org...
Terry Reedy wrote:
>> Your transformation amounted to switching from collection mutation to
>> object rebinding.  In Python, that is a crucial difference.

>Ok, that is a crucial difference. The question becomes : is that 
>difference in
>the case of augmented assignment maintained for practical or for purity 
>aka
>ideological reasons ?

Consistency.  a op= b is almost the same as a = a op b except 1) 'a' is 
computed only once and 2) if a is mutable, type(a) may choose to do op in 
place via the call to __iop__ instead of __op__.  But in any case, the 
assignment is made and 'a', if a name, is bound to the result.
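
A quick interactive illustration of both points, using a list (any
recent 2.x behaves this way):

    >>> a = [1, 2]
    >>> b = a
    >>> a += [3]        # __iadd__ mutates in place; 'a' is rebound to the same object
    >>> a is b
    True
    >>> a = a + [4]     # __add__ builds a new list; 'a' is rebound to it
    >>> a is b
    False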

Anyway, this will not change for 2.x and further discussion is really 
c.l.p. material.

Terry Jan Reedy






From bborcic at gmail.com  Wed Jun 14 17:09:40 2006
From: bborcic at gmail.com (Boris Borcic)
Date: Wed, 14 Jun 2006 17:09:40 +0200
Subject: [Python-Dev] Scoping vs augmented assignment vs sets (Re: 'fast
 locals' in Python 2.5)
In-Reply-To: <20060613111408.F2C5.JCARLSON@uci.edu>
References: <20060612162022.F2BF.JCARLSON@uci.edu> <e6mcfg$iao$1@sea.gmane.org>
	<20060613111408.F2C5.JCARLSON@uci.edu>
Message-ID: <e6p8s3$eok$1@sea.gmane.org>

Josiah Carlson wrote:

> The closure/class example is merely a method of encapsulating state,
> which I find easier to define, describe, and document than the closure
> version.

In the case of the code discussed, eg the actual model of

def solve(problem) :
     freebits = set(range(N))
     def search(data) :
         ....
         freebits ^= swap
     ...
     search(initial_data)
     ...

the closure isn't used to encapsulate state if what you mean is passing "search" 
around as an interface to said state - search() is only for internal consumption 
and in fact exists only because of a quite opposite reason. Namely, the search 
requires copying parts of the state and this is most easily expressed with a 
recursive "search" inner function whose parameters are the copied parts.

Whatever you say, it doesn't feel adequate to me nor particularly clear to reify 
such a recursive inner abstraction as an object method. Especially in Python, I 
can't help reading the methods of a class declaration as intended primarily to 
define an external interface, which is misleading in this case.

I'd say a first step in convincing me I am wrong would be to show me examples of 
object methods of the standard library that are recursive, and cut out for 
recursion.

Regards,

Boris Borcic
--
"On na?t tous les m?tres du m?me monde"


From mal at egenix.com  Wed Jun 14 18:36:54 2006
From: mal at egenix.com (M.-A. Lemburg)
Date: Wed, 14 Jun 2006 18:36:54 +0200
Subject: [Python-Dev] Switch statement
In-Reply-To: <20060613004927.GA7988@21degrees.com.au>
References: <20060613004927.GA7988@21degrees.com.au>
Message-ID: <44903B26.1020409@egenix.com>

Thomas Lee wrote:
> On Mon, Jun 12, 2006 at 11:33:49PM +0200, Michael Walter wrote:
>> Maybe "switch" became a keyword with the patch..
>>
>> Regards,
>> Michael
>>
> 
> That's correct.
> 
>> On 6/12/06, M.-A. Lemburg <mal at egenix.com> wrote:
>>> Could you upload your patch to SourceForge ? Then I could add
>>> it to the PEP.
>>>
> 
> It's already up there :) I thought I sent that through in another
> e-mail, but maybe not:
> 
> http://sourceforge.net/tracker/index.php?func=detail&aid=1504199&group_id=5470&atid=305470
> 
> Complete with documentation changes and a unit test.

Thanks. Please CC me on these emails.

It would also help if your mailer wouldn't add
	Mail-Followup-To: python-dev at python.org
to the messages, since then a Reply-All will not include
any other folks on CC in this thread.

>>> Thomas wrote a patch which implemented the switch statement
>>> using an opcode. The reason was probably that switch works
>>> a lot like e.g. the for-loop which also opens a new block.
>>>
> 
> No, Skip explained this in an earlier e-mail: apparently some
> programming languages use a compile-time generated lookup table
> for switch statements rather than COMPARE_OP for each case. The
> restriction is, of course, that you're stuck with constants for each
> case statement.
> 
> In a programming language like Python, where there are no named
> constants, the usefulness of such a construct might be questioned.
> Again, see Skip's earlier e-mails.
> 
>>> Could you explain how your patch works ?
>>>
> 
> 1. Evaluate the "switch" expression so that it's at the top of the stack
> 2. For each case clause:
> 2.1. Generate a DUP_TOP to duplicate the switch value for a comparison
> 2.2. Evaluate the "case" expression
> 2.3. COMPARE_OP(PyCmp_EQ)
> 2.4. Jump to the next case statement if false
> 2.5. Otherwise, POP_TOP and execute the suite for the case clause
> 2.6. Then jump to 3
> 3. POP_TOP to remove the evaluated switch expression from the stack
> 
> As you can see from the above, my patch generates a COMPARE_OP for each
> case, so you can use expressions - not just constants - for cases.
> 
> All of this is in the code found in Python/compile.c.

Thanks for the explanation, but the original motivation
for adding a switch statement was to be able to use
a lookup table for the switching in order to speed
up branching code, e.g. in parsers which typically
use constants to identify tokens (see the "Problem" section
of the PEP for the motivation).

The standard

if ...
elif ...
elif ...
else:
    ...

scheme already provides the above logic. There's really
no need to invent yet another syntax to write such
constructs, IMHO.
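
In other words, the compilation described above is just a chain of ==
tests against a value evaluated once, which today reads (token names
and handlers made up for illustration):

    tok = get_next_token()      # hypothetical token source
    if tok == NAME:
        handle_name()
    elif tok == NUMBER:
        handle_number()
    else:
        handle_other()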

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 14 2006)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________

::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! ::::

From brett at python.org  Wed Jun 14 19:04:45 2006
From: brett at python.org (Brett Cannon)
Date: Wed, 14 Jun 2006 10:04:45 -0700
Subject: [Python-Dev] [Python-checkins] sqlite3 test errors - was : Re:
	r46936 - in python/trunk: Lib/sqlite3/test/regression.py
	Lib/sqlite3/test/types.py Lib/sqlite3/test/userfunctions.py
	Modules/_sqlite/connection.c Modules/_sqlite/cursor.c
	Modules/_sqlite/module.c
Message-ID: <bbaeab100606141004w7bb67e8cm87a09ad32b1f9ff3@mail.gmail.com>

On 6/14/06, Gerhard Häring <gh at ghaering.de> wrote:
>
> Co-posting to python-dev in the hope of getting help of people verifying
> my suspicion ...
>
> Gerhard Häring wrote:
> > [...]
> > For some reason, they don't seem to have picked up the changed tests of
> > the sqlite3 module. At least the error messages look exactly like the
> > ones I had when I ran the current code against old tests.
>
> That guess was wrong. The failed sqlite3 tests come from an old SQLite
> version being linked against. Until recently, SQLite was buggy and it
> was only fixed in
>
> http://www.sqlite.org/cvstrac/chngview?cn=2981
>
> that callbacks can throw errors that are usefully returned to the
> original caller.
>
> The tests for the sqlite3 module currently assume a recent SQLite
> version (3.3.something). Otherwise some tests will fail.
>
> Still, it can be built against any SQLite 3 release.


Perhaps this is the wrong thing to do.  Either sqlite3 should not build, or
it should raise an exception, if it is built against a buggy version.  The
other option is to not run those known-failing tests, or to error out
immediately with a message stating that these tests fail on known-buggy
versions of SQLite.
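
A guard along these lines (the 3.3.0 threshold is an assumption; check
the actual changelog for the exact fix release) would at least make the
failure mode explicit:

    import sqlite3

    if sqlite3.sqlite_version_info < (3, 3, 0):
        raise RuntimeError("sqlite3 error-propagation tests need SQLite >= 3.3, "
                           "found %s" % sqlite3.sqlite_version)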

-Brett

> Can somebody please also verify if the malloc/free error message goes
> away (it really only happened on Mac, didn't it?) if you upgrade SQLite
> to the latest version on the build host?
>
> -- Gerhard

From jcarlson at uci.edu  Wed Jun 14 19:54:48 2006
From: jcarlson at uci.edu (Josiah Carlson)
Date: Wed, 14 Jun 2006 10:54:48 -0700
Subject: [Python-Dev] Scoping vs augmented assignment vs sets (Re: 'fast
	locals' in Python 2.5)
In-Reply-To: <e6oquo$olu$1@sea.gmane.org>
References: <20060613111408.F2C5.JCARLSON@uci.edu> <e6oquo$olu$1@sea.gmane.org>
Message-ID: <20060614093042.F2D7.JCARLSON@uci.edu>


Boris Borcic <bborcic at gmail.com> wrote:
> Josiah Carlson wrote:
> 
> > You seem to not realize that these are different use-cases.  Your new
> > example involves a global variable that is *shared* among everyone that
> > knows  about this particular module.  It also is repaired by a simple
> > insertion of 'global freebits' at the beginning of the search function. 
> 
> My point here : a simple repair, and by a statement that amounts to a compiler 
> directive with no other effect than obtaining my intent in bytecodes, so that 
> "all other things being equal" comparisons of code versions remain possible 
> (recall that I was studying the impact on working code of adopting sets - 
> including performance).

Time to access and/or modify a particular variable is only important if
you are performing the operation often enough to matter.

Have you profiled your code to show that the fetch/assignment portion of
x ^= y is slowing you down? Have you tested to see if x[0] ^= y slows
you down significantly (embed your set in a list of size 1)? Have you
tested to see if x.a ^= y slows you down significantly?

If not, then you haven't really profiled your code.  If you have, I'm
curious as to your results; my own results show that for the two
functions...

    def foo1(x):
        a = 0
        for i in xrange(x):
            a += 1

    def foo2(x):
        a = [0]
        def bar(x):
            for i in xrange(x):
                a[0] += 1
        return bar(x)

... and an x of 1000000, foo1(x) runs in .15 seconds, while foo2(x) runs
in .30 seconds.  So the overhead of 1 million fetch/assignment to the
first item in a list is ~.15 seconds, or 150 ns each.
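
(For reference, numbers of this kind can be reproduced with timeit;
this is one way to do it, not necessarily how the figures above were
measured:)

    import timeit

    # assumes foo1/foo2 from above have been pasted into the same script
    for name in ("foo1", "foo2"):
        t = timeit.Timer("%s(1000000)" % name, "from __main__ import %s" % name)
        print name, min(t.repeat(repeat=3, number=1))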

Further, I can almost guarantee that your set manipulations are going to
dwarf the assignment/fetch times in either of these cases, so it seems
to me that your desired Python change, for the sake of speed as you
claim, is arbitrarily insignificant to your application, and likely so
in many applications (being roughly equivalent to the cost of an
iterator.next(), a name binding, and an int.__iadd__).


> > The closure/class example is merely a method of encapsulating state,
> > which I find easier to define, describe, and document than the closure
> > version.
> 
> Your privilege, of course (and I am not saying it is misguided, although I would 
> in particular argue that the matter gets debatable in the low LOC-count limit). 
> (I am also wondering about performance comparisons).

Unless your code profiling has shown that such assignments are a
limiting factor (if 150 ns per pass is a limiting factor, please let us
know), performance comparisons are moot.  Further, code length is
almost immaterial to this discussion, because even if you didn't want to
bite the bullet and convert to a class-based approach, using a list of
length 1, just for the sake of set modification, would solve your
problem.

def solve(problem) :
     freebits = set(range(N))
     fb = [freebits]
     def search(data) :
         ....
         fb[0] ^= swap
     ...
     search(initial_data)
     ...


> > Back in February, there was a discussion about allowing people to
> > 'easily' access and modify variables defined in lexically nested scopes,
> > but I believed then, as I believe now, that such attempted uses of
> > closures are foolish when given classes.   Given the *trivial* conversion
> > of your closure example to a class, and my previous comments on closures
> > "I find their use rarely, if ever, truely elegant, [...] more like
> > kicking a puppy for barking: [...] there are usually better ways of
> > dealing with the problem (don't kick puppies for barking and don't use
> > closures).", you are not likely to find me agreeing with you about
> > augmented assignment and/or lexically nested scopes.
> 
> I see. Thanks for the background. Background for background, let me just say that 
> python hadn't yet grown a lambda when I first played with it. May I read your 
> last statement as acknowledging that I am not so much asking for a door to be 
> created, than asking for a naturally builtin door, not to be locked by special 
> efforts ?

No, you may not.  There are two options for the Python compiler/runtime
behavior when faced with an augmented assignment for a name not yet
known in this particular lexical scope but known in a parent scope:
assume you meant the name in the parent scope, or assume you made a
mistake. The current state of affairs (for all released Pythons with
augmented assignment) is to assume that you meant the local scope. Why?
Because while allowing the augmented assignment would make your life
easier in this case, the error would "pass silently" when it was an
actual error (a not uncommon case), which is a bit of a no-no, and the
case when you meant the parent scope is easily worked around.
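
A minimal illustration of the released behavior, with names borrowed
loosely from the earlier example:

    def solve():
        freebits = set([1, 2, 3])
        def search():
            freebits ^= set([2])   # augmented assignment makes freebits local to search
        search()

    solve()   # raises UnboundLocalError: freebits referenced before assignment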

 - Josiah


From jcarlson at uci.edu  Wed Jun 14 20:26:46 2006
From: jcarlson at uci.edu (Josiah Carlson)
Date: Wed, 14 Jun 2006 11:26:46 -0700
Subject: [Python-Dev] Scoping vs augmented assignment vs sets (Re: 'fast
	locals' in Python 2.5)
In-Reply-To: <e6p8s3$eok$1@sea.gmane.org>
References: <20060613111408.F2C5.JCARLSON@uci.edu> <e6p8s3$eok$1@sea.gmane.org>
Message-ID: <20060614110222.F2DA.JCARLSON@uci.edu>


Boris Borcic <bborcic at gmail.com> wrote:
> Josiah Carlson wrote:
> 
> > The closure/class example is merely a method of encapsulating state,
> > which I find easier to define, describe, and document than the closure
> > version.
> 
> In the case of the code discussed, eg the actual model of
> 
> def solve(problem) :
>      freebits = set(range(N))
>      def search(data) :
>          ....
>          freebits ^= swap
>      ...
>      search(initial_data)
>      ...
> 
> the closure isn't used to encapsulate state if what you mean is passing "search" 
> around as an interface to said state - search() is only for internal consumption 
> and in fact exists only because of a quite opposite reason. Namely, the search 
> requires copying parts of the state and this is most easily expressed with a 
> recursive "search" inner function whose parameters are the copied parts.

Ok, so here's a bit of a benchmark for you.

    def helper(x,y):
        return y
    
    def fcn1(x):
        _helper = helper
        y = x+1
        for i in xrange(x):
            y = _helper(x,y)
    
    def fcn2(x):
        y = x+1
        def _helper(x):
            return y
        for i in xrange(x):
            y = _helper(x)


Can you guess which one is faster?  I guessed, but I was wrong ;).

>>> x = 1000000
>>> min([fcn1(x) for i in xrange(10)]), min([fcn2(x) for i in xrange(10)])
(0.53200006484985352, 0.59299993515014648)

It turns out that passing two arguments to a helper function is actually
faster than passing one argument and pulling a second out of an
enclosing scope.

From here on out, I'll consider the speed discussion closed.


> Whatever you say, it doesn't feel adequate to me nor particularly clear to reify 
> such a recursive inner abstraction as an object method. Especially in Python, I 
> can't help reading the methods of a class declaration as intended primarily to 
> define an external interface, which is misleading in this case.

I agree.  If one is not seeking to offer an external interface, one
shouldn't, and classes may be overkill.


> I'd say a first step in convincing me I am wrong would be to show me examples of 
> object methods of the standard library that are recursive, and cut out for 
> recursion.

Actually, I don't believe that is necessary.  I've shown that you would
get better performance with a non-recursive function and passing two
arguments, than you would passing one argument with scoping tricks to
get the second.

Also, given a recursive function that provides an external interface,
I'm confident in my statement that a class would likely be more suitable,
though I agree that recursive functions without an external interface
may not be better as classes.

 - Josiah


From guido at python.org  Wed Jun 14 20:24:35 2006
From: guido at python.org (Guido van Rossum)
Date: Wed, 14 Jun 2006 11:24:35 -0700
Subject: [Python-Dev] Scoping vs augmented assignment vs sets (Re: 'fast
	locals' in Python 2.5)
In-Reply-To: <20060614093042.F2D7.JCARLSON@uci.edu>
References: <20060613111408.F2C5.JCARLSON@uci.edu> <e6oquo$olu$1@sea.gmane.org>
	<20060614093042.F2D7.JCARLSON@uci.edu>
Message-ID: <ca471dc20606141124i3b3dabb4v57fbce86617efec7@mail.gmail.com>

Is it perhaps time to move this discussion to c.l.py? The behavior
isn't going to change.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From pje at telecommunity.com  Wed Jun 14 20:52:47 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Wed, 14 Jun 2006 14:52:47 -0400
Subject: [Python-Dev] Comparing closures and arguments (was Re: Scoping vs
 augmented assignment vs sets (Re: 'fast locals' in Python 2.5)
In-Reply-To: <20060614110222.F2DA.JCARLSON@uci.edu>
References: <e6p8s3$eok$1@sea.gmane.org> <20060613111408.F2C5.JCARLSON@uci.edu>
	<e6p8s3$eok$1@sea.gmane.org>
Message-ID: <5.1.1.6.0.20060614144115.01e9bf30@sparrow.telecommunity.com>

At 11:26 AM 6/14/2006 -0700, Josiah Carlson wrote:
>Ok, so here's a bit of a benchmark for you.
>
>     def helper(x,y):
>         return y
>
>     def fcn1(x):
>         _helper = helper
>         y = x+1
>         for i in xrange(x):
>             y = _helper(x,y)
>
>     def fcn2(x):
>         y = x+1
>         def _helper(x):
>             return y
>         for i in xrange(x):
>             y = _helper(x)
>
>
>Can you guess which one is faster?  I guessed, but I was wrong ;).
>
> >>> x = 1000000
> >>> min([fcn1(x) for i in xrange(10)]), min([fcn2(x) for i in xrange(10)])
>(0.53200006484985352, 0.59299993515014648)
>
>It turns out that passing two arguments to a helper function is actually
>faster than passing one argument and pulling a second out of an
>enclosing scope.

That claim isn't necessarily supported by your benchmark, which includes 
the time to *define* the nested function 10 times, but calls it only 45 
times!  Try comparing fcn1(1000) and fcn2(1000) - I suspect the results 
will be somewhat closer, but probably still in favor of fcn1.

However, I suspect that the remaining difference in the results would be 
due to the fact that the interpreter loop has a "fast path" function call 
implementation that doesn't work with closures IIRC.  Perhaps someone who's 
curious might try adjusting the fast path to support closures, and see if 
it can be made to speed them up without slowing down other "fast path" calls.


From jcarlson at uci.edu  Wed Jun 14 22:00:41 2006
From: jcarlson at uci.edu (Josiah Carlson)
Date: Wed, 14 Jun 2006 13:00:41 -0700
Subject: [Python-Dev] Comparing closures and arguments (was Re: Scoping
	vs augmented assignment vs sets (Re: 'fast locals' in Python 2.5)
In-Reply-To: <5.1.1.6.0.20060614144115.01e9bf30@sparrow.telecommunity.com>
References: <20060614110222.F2DA.JCARLSON@uci.edu>
	<5.1.1.6.0.20060614144115.01e9bf30@sparrow.telecommunity.com>
Message-ID: <20060614124603.F2E6.JCARLSON@uci.edu>


"Phillip J. Eby" <pje at telecommunity.com> wrote:
> At 11:26 AM 6/14/2006 -0700, Josiah Carlson wrote:
> >Ok, so here's a bit of a benchmark for you.
> >
> >     def helper(x,y):
> >         return y
> >
> >     def fcn1(x):
> >         _helper = helper
> >         y = x+1
> >         for i in xrange(x):
> >             y = _helper(x,y)
> >
> >     def fcn2(x):
> >         y = x+1
> >         def _helper(x):
> >             return y
> >         for i in xrange(x):
> >             y = _helper(x)
> >
> >
> >Can you guess which one is faster?  I guessed, but I was wrong ;).
> >
> > >>> x = 1000000
> > >>> min([fcn1(x) for i in xrange(10)]), min([fcn2(x) for i in xrange(10)])
> >(0.53200006484985352, 0.59299993515014648)
> >
> >It turns out that passing two arguments to a helper function is actually
> >faster than passing one argument and pulling a second out of an
> >enclosing scope.
> 
> That claim isn't necessarily supported by your benchmark, which includes 
> the time to *define* the nested function 10 times, but calls it only 45 
> times!  Try comparing fcn1(1000) and fcn2(1000) - I suspect the results 
> will be somewhat closer, but probably still in favor of fcn1.

Please re-read the code and test as I have specified.  You seem to have
misunderstood something in there, as in the example I provide, _helper
is called 1,000,000 times (and is defined only once for each call of
fcn2) during each fcn1 or fcn2 call, and _helper is only defined once
inside of fcn2.

Further, reducing the passes to 1000 preserves the relative performance
measures as I had previously stated.

    >>> x = 1000
    >>> min([fcn1(x) for i in xrange(10)]), min([fcn2(x) for i in xrange(10)])
    (0.00051907656835226135, 0.00056413073832572991)
    >>> x = 10000
    >>> min([fcn1(x) for i in xrange(10)]), min([fcn2(x) for i in xrange(10)])
    (0.0037536511925964078, 0.0044071910377851964)
    >>> x = 100000
    >>> min([fcn1(x) for i in xrange(10)]), min([fcn2(x) for i in xrange(10)])
    (0.053770416317590275, 0.057610581942384442)
    >>> x = 1000000
    >>> min([fcn1(x) for i in xrange(10)]), min([fcn2(x) for i in xrange(10)])
    (0.54333500712479577, 0.58664054298870383)


> However, I suspect that the remaining difference in the results would be 
> due to the fact that the interpreter loop has a "fast path" function call 
> implementation that doesn't work with closures IIRC.  Perhaps someone who's 
> curious might try adjusting the fast path to support closures, and see if 
> it can be made to speed them up without slowing down other "fast path" calls.

That would be an interesting direction for improving speed.

 - Josiah


From pje at telecommunity.com  Wed Jun 14 22:01:33 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Wed, 14 Jun 2006 16:01:33 -0400
Subject: [Python-Dev] Comparing closures and arguments (was Re: Scoping
 vs augmented assignment vs sets (Re: 'fast locals' in Python 2.5)
In-Reply-To: <20060614124603.F2E6.JCARLSON@uci.edu>
References: <5.1.1.6.0.20060614144115.01e9bf30@sparrow.telecommunity.com>
	<20060614110222.F2DA.JCARLSON@uci.edu>
	<5.1.1.6.0.20060614144115.01e9bf30@sparrow.telecommunity.com>
Message-ID: <5.1.1.6.0.20060614155923.01e93c08@sparrow.telecommunity.com>

At 01:00 PM 6/14/2006 -0700, Josiah Carlson wrote:
> > That claim isn't necessarily supported by your benchmark, which includes
> > the time to *define* the nested function 10 times, but calls it only 45
> > times!  Try comparing fcn1(1000) and fcn2(1000) - I suspect the results
> > will be somewhat closer, but probably still in favor of fcn1.
>
>Please re-read the code and test as I have specified.  You seem to have
>misunderstood something in there, as in the example I provide, _helper
>is called 1,000,000 times (and is defined only once for each call of
>fcn2) during each fcn1 or fcn2 call, and _helper is only defined once
>inside of fcn2.

Oops.  I misread "[fcn2(x) for i in xrange(10)]" as "[fcn2(x) for x in 
xrange(10)]".  The latter is so much more common of a pattern that I guess 
I didn't even *look* at what the loop variable was.  Weird.  I guess the 
mind is a terrible thing.  :)


From theller at python.net  Wed Jun 14 23:28:38 2006
From: theller at python.net (Thomas Heller)
Date: Wed, 14 Jun 2006 21:28:38 -0000
Subject: [Python-Dev] beta1 coming real soon
In-Reply-To: <ee2a432c0606082323t1ee013b6nd91771fb461acede@mail.gmail.com>
References: <ee2a432c0606082323t1ee013b6nd91771fb461acede@mail.gmail.com>
Message-ID: <e6pv1e$591$1@sea.gmane.org>

Neal Norwitz wrote:
> It's June 9 in most parts of the world.  The schedule calls for beta 1
> on June 14.

It's June 14 no longer in too many parts of the world ;-).
Any *official* news about beta1?  I guess the release will not be started
as long as the tests fail, but is there a new plan?

Thomas


From tim.peters at gmail.com  Thu Jun 15 00:04:15 2006
From: tim.peters at gmail.com (Tim Peters)
Date: Wed, 14 Jun 2006 18:04:15 -0400
Subject: [Python-Dev] [Python-checkins] sqlite3 test errors - was : Re:
	r46936 - in python/trunk: Lib/sqlite3/test/regression.py
	Lib/sqlite3/test/types.py Lib/sqlite3/test/userfunctions.py
	Modules/_sqlite/connection.c Modules/_sqlite/cursor.c Modules/_sql
Message-ID: <1f7befae0606141504p194987b0q668d4203cd039392@mail.gmail.com>

[Gerhard Häring]
>> ...
>> Until recently, SQLite was buggy and it was only fixed in
>>
>> http://www.sqlite.org/cvstrac/chngview?cn=2981
>>
>> that callbacks can throw errors that are usefully returned to the
>> original caller.
>>
>> The tests for the sqlite3 module currently assume a recent version of
>> SQLite (3.3.something). Otherwise some tests will fail.

Sounds like that explains why none of the Windows buildbots failed (on
Windows, Python currently uses the sqlite checked in at

    http://svn.python.org/projects/external/sqlite-source-3.3.4

).  I suppose some OSes think they're doing you a favor by forcing you
to be the system SQLite admin ;-)

From gh at ghaering.de  Thu Jun 15 00:37:21 2006
From: gh at ghaering.de (=?ISO-8859-1?Q?Gerhard_H=E4ring?=)
Date: Thu, 15 Jun 2006 00:37:21 +0200
Subject: [Python-Dev] [Python-checkins] sqlite3 test errors - was : Re:
 r46936 - in python/trunk:
 Lib/sqlite3/test/regression.py	Lib/sqlite3/test/types.py
 Lib/sqlite3/test/userfunctions.py	Modules/_sqlite/connection.c
 Modules/_sqlite/cursor.c Modules/_sql
In-Reply-To: <1f7befae0606141504p194987b0q668d4203cd039392@mail.gmail.com>
References: <1f7befae0606141504p194987b0q668d4203cd039392@mail.gmail.com>
Message-ID: <44908FA1.5080008@ghaering.de>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Tim Peters wrote:
> [Gerhard Häring]
>>> ...
>>> Until recently, SQLite was buggy and it was only fixed in
>>>
>>> http://www.sqlite.org/cvstrac/chngview?cn=2981
>>>
>>> that callbacks can throw errors that are usefully returned to the
>>> original caller.
>>>
>>> The tests for the sqlite3 module currently assume a recent version of
>>> SQLite (3.3.something). Otherwise some tests will fail.
> 
> Sounds like that explains why none of the Windows buildbots failed (on
> Windows, Python currently uses the sqlite checked in at
> 
>     http://svn.python.org/projects/external/sqlite-source-3.3.4
> 
> ).  I suppose some OSes think they're doing you a favor by forcing you
> to be the system SQLite admin ;-)

Yes, this issue made development of the SQLite module a bit more
"interesting" for me. And I deserved no better for committing changes so
late before beta1.

Anyway, I verified my theory about the SQLite bugs (that's what they are),
and added version checks to the C code and to the test suite, so now
Everything Should Work (*crossing fingers*).

If anything should still fail, I'll ruthlessly blame Anthony; he brought
up the idea of supporting older SQLite versions in the first place :-P

- -- Gerhard
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.2.2 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFEkI+hdIO4ozGCH14RAlW/AJ4uQVZrvWC7265+9wshxaBotyLolgCgstKd
5xU5DZm1EC/G9qNctPlMcGc=
=gaaO
-----END PGP SIGNATURE-----

From anthony at interlink.com.au  Thu Jun 15 01:04:00 2006
From: anthony at interlink.com.au (Anthony Baxter)
Date: Thu, 15 Jun 2006 09:04:00 +1000
Subject: [Python-Dev]
 =?iso-8859-1?q?=5BPython-checkins=5D_sqlite3_test_er?=
 =?iso-8859-1?q?rors_-_was_=3A_Re=3A_r46936_-_in_python/trunk=3A_Lib/sqlit?=
 =?iso-8859-1?q?e3/test/regression=2Epy=09Lib/sqlite3/test/types=2Epy_Lib/?=
 =?iso-8859-1?q?sqlite3/test/userfunctions=2Epy=09Modules/=5Fsqlite/connec?=
 =?iso-8859-1?q?tion=2Ec_Modules/=5Fsqlite/cursor=2Ec_Modules/=5Fsql?=
In-Reply-To: <44908FA1.5080008@ghaering.de>
References: <1f7befae0606141504p194987b0q668d4203cd039392@mail.gmail.com>
	<44908FA1.5080008@ghaering.de>
Message-ID: <200606150904.03221.anthony@interlink.com.au>

Well, the just-released Ubuntu 06.06 LTS (Long Term Support) ships 
with sqlite 3.2.8. I'd suggest that whatever version ships with 
Python should _at_ _least_ work with this version. 06.06 is supposed 
to be supported for a couple of years, at least. Since this is the 
latest and greatest version of what's probably the most rapidly 
updating Linux (I don't include gentoo, obviously, because gentoo 
scares me (ObBarryBaiting: funroll-loops.org)), I don't think we can 
expect many people's platforms to have the absolute latest 3.4.4 or 
whatever.

Alternately, we ship the sqlite3 code with Python. I'm deeply 
unthrilled with this idea, as it means more emergency security 
releases to fix sqlite3 security bugs, as well as bloating out the 
size of the release. 

In the meantime, I'd suggest the appropriate fix is to roll back to 
the previous version of the python-sqlite bindings on the trunk. 

Anthony

From python at discworld.dyndns.org  Thu Jun 15 03:23:20 2006
From: python at discworld.dyndns.org (Charles Cazabon)
Date: Wed, 14 Jun 2006 19:23:20 -0600
Subject: [Python-Dev] beta1 coming real soon
In-Reply-To: <e6pv1e$591$1@sea.gmane.org>
References: <ee2a432c0606082323t1ee013b6nd91771fb461acede@mail.gmail.com>
	<e6pv1e$591$1@sea.gmane.org>
Message-ID: <20060615012320.GA23948@discworld.dyndns.org>

Thomas Heller <theller at python.net> wrote:
> Neal Norwitz wrote:
> > It's June 9 in most parts of the world.  The schedule calls for beta 1
> > on June 14.
> 
> It's June 14 no longer in too many parts of the world ;-).

Apparently it's still 1999 in some parts of the world.  From your message:

> From: Thomas Heller <theller at python.net>
> To: python-dev at python.org
> Date: Fri, 31 Dec 1999 23:11:35 +0100
> Subject: Re: [Python-Dev] beta1 coming real soon

Stolen Guido's time machine, have we?

Charles
-- 
-----------------------------------------------------------------------
Charles Cazabon                           <python at discworld.dyndns.org>
GPL'ed software available at:               http://pyropus.ca/software/
-----------------------------------------------------------------------

From anthony at interlink.com.au  Thu Jun 15 03:43:44 2006
From: anthony at interlink.com.au (Anthony Baxter)
Date: Thu, 15 Jun 2006 11:43:44 +1000
Subject: [Python-Dev] beta1 coming real soon
In-Reply-To: <e6pv1e$591$1@sea.gmane.org>
References: <ee2a432c0606082323t1ee013b6nd91771fb461acede@mail.gmail.com>
	<e6pv1e$591$1@sea.gmane.org>
Message-ID: <200606151143.48358.anthony@interlink.com.au>

On Saturday 01 January 2000 09:11, Thomas Heller wrote:
> Neal Norwitz wrote:
> > It's June 9 in most parts of the world.  The schedule calls for
> > beta 1 on June 14.
>
> It's June 14 no longer in too many parts of the world ;-).
> Any *official* news about beta1?  I guess the release will not be
> started as long as the tests fail, but is there a new plan?

I want to be confident we've got this sqlite issue resolved. Since it 
might involve API changes, I'm stalling a little.

Anthony

-- 
Anthony Baxter     <anthony at interlink.com.au>
It's never too late to have a happy childhood.

From nnorwitz at gmail.com  Thu Jun 15 07:51:27 2006
From: nnorwitz at gmail.com (Neal Norwitz)
Date: Wed, 14 Jun 2006 22:51:27 -0700
Subject: [Python-Dev] pychecker warnings in Lib/encodings
In-Reply-To: <448EAACB.8030809@livinglogic.de>
References: <ee2a432c0606130127g4d1b909ey688825e189a583fe@mail.gmail.com>
	<448EAACB.8030809@livinglogic.de>
Message-ID: <ee2a432c0606142251w390a466wb6e3eaf04f25e17c@mail.gmail.com>

On 6/13/06, Walter Dörwald <walter at livinglogic.de> wrote:
>
> > IIUC (and I probably don't), mbcs is on windows only.  But should I be
> > able to import encodings.mbcs on Linux or is this expected?
> >
> >>>> import encodings.mbcs
> > Traceback (most recent call last):
> >   File "<stdin>", line 1, in <module>
> >   File "Lib/encodings/mbcs.py", line 14, in <module>
> >     class Codec(codecs.Codec):
> >   File "Lib/encodings/mbcs.py", line 18, in Codec
> >     encode = codecs.mbcs_encode
> > AttributeError: 'module' object has no attribute 'mbcs_encode'
>
> mbcs_encode() is compiled conditionally in Modules/_codecsmodule.c with
> "#if defined(MS_WINDOWS) && defined(HAVE_USABLE_WCHAR_T)".
>
> Should encodings/mbcs.py be made unimportable on non-Windows?

That's what I was thinking.  It makes sense to this non-unicode,
non-windows luser. :-)
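
Something as simple as a guard near the top of Lib/encodings/mbcs.py
would do it (a sketch only, not an actual patch; the attribute test
just mirrors the traceback above):

    import codecs
    if not hasattr(codecs, 'mbcs_encode'):
        raise ImportError("mbcs codec is only available on Windows")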

n

From gh at ghaering.de  Thu Jun 15 08:02:00 2006
From: gh at ghaering.de (=?ISO-8859-1?Q?Gerhard_H=E4ring?=)
Date: Thu, 15 Jun 2006 08:02:00 +0200
Subject: [Python-Dev] [Python-checkins] sqlite3 test errors - was : Re:
 r46936 - in python/trunk: Lib/sqlite3/test/regression.py
 Lib/sqlite3/test/types.py Lib/sqlite3/test/userfunctions.py
 Modules/_sqlite/connection.c Modules/_sqlite/cursor.c Modules/_sql
In-Reply-To: <200606150904.03221.anthony@interlink.com.au>
References: <1f7befae0606141504p194987b0q668d4203cd039392@mail.gmail.com>
	<44908FA1.5080008@ghaering.de>
	<200606150904.03221.anthony@interlink.com.au>
Message-ID: <4490F7D8.20101@ghaering.de>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Anthony Baxter wrote:
> Well, the just-released Ubuntu 06.06 LTS (Long Term Support) ships 
> with sqlite 3.2.8. I'd suggest that whatever version ships with 
> Python should _at_ _least_ work with this version. 

I have no problems continuing to support any halfway sane version, and that
is 3.0.8 or later (AFAIR the first non-beta SQLite 3.x release anyway). It
just requires a bit more testing.

> [...]
> Alternately, we ship the sqlite3 code with Python. I'm deeply 
> unthrilled with this idea, as it means more emergency security 
> releases to fix sqlite3 security bugs, as well as bloating out the 
> size of the release.

No, that's not an option.

> In the meantime, I'd suggest the appropriate fix is to roll back to 
> the previous version of the python-sqlite bindings on the trunk. 

Why? The previous version is really no better, except it ignores errors in
callbacks for all SQLite versions, instead of doing something useful and
aborting the query with a useful error message for those SQLite versions
that support it.

- -- Gerhard
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.2.2 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD4DBQFEkPfYdIO4ozGCH14RAvQ+AJdxy8Iy0sfkSQVxShmGbq/HGKRzAKCPKMtG
ZoEqmcNrgMX6k/7xzy0HKA==
=OeDy
-----END PGP SIGNATURE-----

From ncoghlan at gmail.com  Thu Jun 15 12:37:35 2006
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 15 Jun 2006 20:37:35 +1000
Subject: [Python-Dev] Switch statement
In-Reply-To: <44903B26.1020409@egenix.com>
References: <20060613004927.GA7988@21degrees.com.au>
	<44903B26.1020409@egenix.com>
Message-ID: <4491386F.9010007@gmail.com>

M.-A. Lemburg wrote:
> The standard
> 
> if ...
> elif ...
> elif ...
> else:
>     ...
> 
> scheme already provides the above logic. There's really
> no need to invent yet another syntax to write such
> constructs, IMHO.

It's a DRY thing. The construct that a switch statement can replace actually 
needs to be written:

v = ...
if v == ...:
    ...
elif v == ...:
    ...
elif v == ...:
    ...
else:
    ...

The 'v =' part is replaced by 'switch' and the 'if v ==' and 'elif v ==' are 
all replaced by 'case'.

A separate statement is also easier to document to say that order of 
evaluation of the cases is not guaranteed, and that the cases may only be 
evaluated once and cached thereafter, allowing us free rein for optimisations 
that aren't possible with a normal if statement. The optimisation of the 
if-elif case would then simply be to say that the compiler can recognise 
if-elif chains like the one above where the RHS of the comparisons are all 
hashable literals and collapse them to switch statements.

It's also worth thinking about what a 'missing else' might mean for a switch 
statement. Should it default to "else: pass" like other else clauses, or does 
it make more sense for the default behaviour to be "else: raise 
ValueError('Unexpected input to switch statement')", with an explicit "else: 
pass" required to suppress the exception?

The lack of a switch statement doesn't really bother me personally, since I 
tend to just write my state machine type code so that it works off a 
dictionary that I define elsewhere, but I can see the appeal of having one, 
*if* there are things that we can do with a separate statement to make it 
*better* for the purpose than a simple if-elif chain or a predefined function 
lookup dictionary.
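
For the record, the dictionary-based style mentioned above is nothing
more than a sketch like this (the names are purely illustrative):

    def handle_start(ch):
        if ch.isdigit():
            return 'number'
        return 'start'

    def handle_number(ch):
        if ch.isdigit():
            return 'number'
        return 'start'

    # the state -> handler table is defined once, elsewhere
    dispatch = {'start': handle_start, 'number': handle_number}

    def run(text):
        state = 'start'
        for ch in text:
            state = dispatch[state](ch)
        return state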

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------
             http://www.boredomandlaziness.org

From walter at livinglogic.de  Thu Jun 15 15:08:35 2006
From: walter at livinglogic.de (=?ISO-8859-1?Q?Walter_D=F6rwald?=)
Date: Thu, 15 Jun 2006 15:08:35 +0200
Subject: [Python-Dev] Last-minute curses patch
Message-ID: <44915BD3.4020109@livinglogic.de>

I have a small patch http://bugs.python.org/1506645 that adds
resizeterm() and resize_term() to the curses module. Can this still go
in for beta1? I'm not sure, if it needs some additional configure magic.

Servus,
   Walter


From mal at egenix.com  Thu Jun 15 17:37:39 2006
From: mal at egenix.com (M.-A. Lemburg)
Date: Thu, 15 Jun 2006 17:37:39 +0200
Subject: [Python-Dev] Switch statement
In-Reply-To: <4491386F.9010007@gmail.com>
References: <20060613004927.GA7988@21degrees.com.au>	<44903B26.1020409@egenix.com>
	<4491386F.9010007@gmail.com>
Message-ID: <44917EC3.5000208@egenix.com>

Nick Coghlan wrote:
> M.-A. Lemburg wrote:
>> The standard
>>
>> if ...
>> elif ...
>> elif ...
>> else:
>>     ...
>>
>> scheme already provides the above logic. There's really
>> no need to invent yet another syntax to write such
>> constructs, IMHO.
> 
> It's a DRY thing.

Exactly, though not in the sense you are suggesting :-)

The motivation for PEP 275 is to enhance performance when
switching on multiple values with these values being
constants.

I don't want to repeat the discussion here. Please see
the PEP for details:

http://www.python.org/dev/peps/pep-0275/

Note that the PEP is entitled "Switching on Multiple Values".
Adding a switch statement is only one of the two proposed
solutions.

My personal favorite is making the compiler
smarter to detect the mentioned if-elif-else scheme
and generate code which uses a lookup table for
implementing fast branching. The main arguments
for this solution are:

* existing code can benefit from the optimization
* no need for new keywords
* code written for Python 2.6 will be executable
  by earlier Python versions as well

> The construct that a switch statement can replace
> actually needs to be written:
> 
> v = ...
> if v == ...:
>    ...
> elif v == ...:
>    ...
> elif v == ...:
>    ...
> else:
>    ...
> 
> The 'v =' part is replaced by 'switch' and the 'if v ==' and 'elif v =='
> are all replaced by 'case'.

Actually, there's one more constraint: the ... part on the right
needs to be constant - at least if you use the same motivation
as in PEP 275.

> A separate statement is also easier to document to say that order of
> evaluation of the cases is not guaranteed, and that the cases may only
> be evaluated once and cached thereafter, allowing us free rein for
> optimisations that aren't possible with a normal if statement.

No need for any caching magic: the values on which you
switch will have to be constants anyway.

> The optimisation of the if-elif case would then simply be to say that the
> compiler can recognise if-elif chains like the one above where the RHS
> of the comparisons are all hashable literals and collapse them to switch
> statements.

Right (constants are usually hashable :-).

> It's also worth thinking about what a 'missing else' might mean for a
> switch statement. Should it default to "else: pass" like other else
> clauses, or does it make more sense for the default behaviour to be
> "else: raise ValueError('Unexpected input to switch statement')", with
> an explicit "else: pass" required to suppress the exception?

Good point.

I'd say it's a SyntaxError not to provide an else part.
That way you leave the decision to raise an exception
or not to the programmer.

> The lack of a switch statement doesn't really bother me personally,
> since I tend to just write my state machine type code so that it works
> off a dictionary that I define elsewhere, but I can see the appeal of
> having one, *if* there are things that we can do with a separate
> statement to make it *better* for the purpose than a simple if-elif
> chain or a predefined function lookup dictionary.

The problem with a dispatch approach is that Python function
or method calls take rather long to set up and execute.

If you write things like parsers, you typically have
code that only does very few things per switch case,
e.g. add the data to some list - the function call
overhead pretty much kills the dispatch approach
compared to the O(n) approach using if-elif-chains.

Dispatching is useful in situations where you do lots
of complicated things for each case. The function
call overhead then becomes negligible.
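
A rough way to see the effect (a sketch only; the absolute numbers will
vary by machine and Python version) is to time a trivial per-case
action both ways with timeit:

    import timeit

    setup = (
        "data = ['a', 'b', 'c', 'd'] * 250\n"
        "def add_a(out): out.append(1)\n"
        "def add_b(out): out.append(2)\n"
        "table = {'a': add_a, 'b': add_b}\n"
    )

    # inline if-elif chain: no function call per token
    if_chain = (
        "out = []\n"
        "for tok in data:\n"
        "    if tok == 'a':\n"
        "        out.append(1)\n"
        "    elif tok == 'b':\n"
        "        out.append(2)\n"
    )

    # dictionary dispatch: one function call per handled token
    dispatch = (
        "out = []\n"
        "for tok in data:\n"
        "    f = table.get(tok)\n"
        "    if f is not None:\n"
        "        f(out)\n"
    )

    print timeit.Timer(if_chain, setup).timeit(1000)
    print timeit.Timer(dispatch, setup).timeit(1000)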

--
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 15 2006)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________
2006-07-03: EuroPython 2006, CERN, Switzerland              17 days left

::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! ::::

From steven.bethard at gmail.com  Thu Jun 15 19:28:28 2006
From: steven.bethard at gmail.com (Steven Bethard)
Date: Thu, 15 Jun 2006 11:28:28 -0600
Subject: [Python-Dev] DRAFT: python-dev summary for 2006-05-16 to 2006-05-31
Message-ID: <d11dcfba0606151028u45fa6e2asbfa3aba36e9a5d96@mail.gmail.com>

Ok, for the first time in a few months, you're getting this summary
before the next one is due.  Woo-hoo!  (Yes, I know I'm not even a day
ahead.  Let me enjoy my temporary victory.) =)

Here's the draft summary for the second half of May.  Let me know what
comments/corrections you have.  Thanks!


=============
Announcements
=============

----------------------------
QOTF: Quote of the Fortnight
----------------------------

Martin v. Löwis on what kind of questions are appropriate for python-dev:

    ... [python-dev] is the list where you say "I want to help", not
so much "I need your help".

Contributing thread:

- `Segmentation fault of Python if build on Solaris 9 or 10 with Sun
Studio 11 <http://mail.python.org/pipermail/python-dev/2006-May/065493.html>`__

-------------------
Python 2.5 schedule
-------------------

Python 2.5 is moving steadily towards its next release.  See `PEP
356`_ for more details and the full schedule.  You may start to see a
few warnings at import time if you've named non-package directories
with the same names as your modules/packages.  Python-dev suggests
renaming these directories -- though the warnings won't give you any
real trouble in Python 2.5, there's a chance that a future version of
Python will drop the need for __init__.py.

.. _PEP 356: http://www.python.org/dev/peps/pep-0356/

Contributing thread:

- `2.5 schedule
<http://mail.python.org/pipermail/python-dev/2006-May/065058.html>`__
- `warnings about missing __init__.py in toplevel directories
<http://mail.python.org/pipermail/python-dev/2006-May/065270.html>`__

------------------------------
Restructured library reference
------------------------------

Thanks to work by A.M. Kuchling and Michael Spencer, the organization
of the `development Library Reference documentation`_ is much improved
over the `old one`_.  Thanks for your hard work, guys!

.. _development Library Reference documentation:
http://docs.python.org/dev/lib/lib.html
.. _old one: http://docs.python.org/lib/lib.html

Contributing thread:

- `[Python-3000] stdlib reorganization
<http://mail.python.org/pipermail/python-dev/2006-May/065441.html>`__

-----------------------------
Need for Speed Sprint results
-----------------------------

The results of the `Need for Speed Sprint`_ are all posted on the
wiki.  In particular, you should check out a number of `successes`_ they
had in speeding up various parts of Python, including function calls,
string and Unicode operations, and string<->integer conversions.

.. _Need for Speed Sprint: http://wiki.python.org/moin/NeedForSpeed/
.. _successes: http://wiki.python.org/moin/NeedForSpeed/Successes

Contributing threads:

- `[Python-checkins] r46043 - peps/trunk/pep-0356.txt
<http://mail.python.org/pipermail/python-dev/2006-May/065061.html>`__
- `Need for Speed Sprint status
<http://mail.python.org/pipermail/python-dev/2006-May/065279.html>`__

-------------------------
Python old-timer memories
-------------------------

Guido's been collecting `memories of old-timers`_ who have been using
Python for 10 years or more.  Be sure to check 'em out and add your
own!

.. _memories of old-timers:
http://www.artima.com/weblogs/viewpost.jsp?thread=161207

Contributing thread:

- `Looking for Memories of Python Old-Timers
<http://mail.python.org/pipermail/python-dev/2006-May/065121.html>`__


=========
Summaries
=========

-----------------------------
Struct module inconsistencies
-----------------------------

Changes to the struct module to do proper range checking resulted in a
few bugs showing up where the stdlib depended on the old, undocumented
behavior.  As a compromise, Bob Ippolito added code to do the proper
range checking and issue DeprecationWarnings, and then made sure that
all the struct results were calculated with appropriate bit masking.
The warnings are expected to become errors in Python 2.6 or 2.7.

Bob also updated the struct module to return ints instead of longs
whenever possible, even for the format codes that had previously
guaranteed longs (I, L, q and Q).
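
In concrete terms (a sketch of the described behavior, not a transcript
of the actual warnings), the change means roughly::

    import struct

    # values that fit in a plain int now come back as int, not long,
    # even for the 'I', 'L', 'q' and 'Q' codes
    struct.unpack('>I', '\x00\x00\x00\x01')   # -> (1,) rather than (1L,)

    # out-of-range values now trigger a DeprecationWarning and are
    # bit-masked; the warning is slated to become an error later
    struct.pack('B', 300)                     # warns; packs 300 & 0xFF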

Contributing threads:

- `Returning int instead of long from struct when possible for
performance <http://mail.python.org/pipermail/python-dev/2006-May/065199.html>`__
- `test_gzip/test_tarfile failure om AMD64
<http://mail.python.org/pipermail/python-dev/2006-May/065311.html>`__
- `Converting crc32 functions to use unsigned
<http://mail.python.org/pipermail/python-dev/2006-May/065430.html>`__
- `test_struct failure on 64 bit platforms
<http://mail.python.org/pipermail/python-dev/2006-May/065463.html>`__

---------------------------------
Using epoll for the select module
---------------------------------

Ross Cohen implemented a `drop-in replacement for select.poll`_ using
Linux's epoll (a more efficient I/O notification system than poll).  The
select interface is already much closer to the epoll API than the
poll API, and people liked the idea of using epoll silently when
available. Ross promised to look into merging his code with the
current select module (though it wasn't clear whether or not he would
do this using ctypes instead of an extension module as some people had
suggested).

.. _drop-in replacement for select.poll: http://sourceforge.net/projects/pyepoll

Contributing thread:

- `epoll implementation
<http://mail.python.org/pipermail/python-dev/2006-May/065277.html>`__

-----------------------
Negatives and sequences
-----------------------

Fredrik Lundh pointed out that using a negative sign and multiplying
by -1 do not always produce the same behavior, e.g.::

    >>> -1 * (1, 2, 3)
    ()
    >>> -(1, 2, 3)
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: bad operand type for unary -

Though no one seemed particularly concerned about the discrepancy, the
thread did spend some time discussing the behavior of sequences
multiplied by negatives.  A number of folks were pushing for this to
become an error until Uncle Timmy showed some use-cases like::

    # right-justify to 80 columns, padding with spaces
    s = " " * (80 - len(s)) + s

The rest of the thread turned into a (mostly humorous) competition for
the best way to incomprehensibly alter sequence multiplication
semantics.

Contributing thread:

- `A Horrible Inconsistency
<http://mail.python.org/pipermail/python-dev/2006-May/065176.html>`__

---------------------
Removing METH_OLDARGS
---------------------

Georg Brandl asked about removing METH_OLDARGS which has been
deprecated since 2.2.  Unfortunately, there are still a bunch of uses
of it in Modules, and it's still the default if no flag is specified.
Georg promised to work on removing the ones in Python core, and there
was some discussion of trying to mark the API as deprecated.  Issuing
a DeprecationWarning seemed too heavy-handed, so Georg looked into
generating C compile time warnings by marking PyArg_Parse as
Py_DEPRECATED.

Contributing thread:

- `Remove METH_OLDARGS?
<http://mail.python.org/pipermail/python-dev/2006-May/065306.html>`__

-------------------------------------
Propagating exceptions in dict lookup
-------------------------------------

Armin Rigo offered up `a patch to stop dict lookup from hiding
exceptions`_ in user-defined __eq__ methods.  The PyDict_GetItem() API
gives no way of propogating such an exception, so previously the
exceptions were just swallowed.  Armin moved the exception-swallowing
part out of lookdict() and into PyDict_GetItem() so that even though
PyDict_GetItem() will still swallow the exceptions, all other ways of
invoking dict lookup (e.g. ``value = d[key]`` in Python code) will now
propagate the exception properly.  Scott Dial brought up an odd corner
case where the old behavior would cause insertion of a value into the
dict because the exception was assumed to indicate a new key, but
people didn't seem too worried about breaking this behavior.
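
A toy illustration of the behavior in question (names invented here)::

    class Key(object):
        def __hash__(self):
            return 0
        def __eq__(self, other):
            raise RuntimeError('boom in __eq__')

    d = {}
    d[Key()] = 1
    d[Key()]    # the lookup calls __eq__; previously the RuntimeError was
                # swallowed (yielding a KeyError), now it propagates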

.. _a patch to stop dict lookup from hiding exceptions:
http://bugs.python.org/1497053

Contributing thread:

- `Let's stop eating exceptions in dict lookup
<http://mail.python.org/pipermail/python-dev/2006-May/065346.html>`__

------------------------------
String/unicode inconsistencies
------------------------------

After the Need for Speed Sprint unified some of the string and unicode
code, some tests started failing where string and unicode objects had
different behavior, e.g. ``'abc'.find('', 100)`` used to return -1 but
started returning 100.  There was some discussion about what was the
right behavior here and Fredrik Lundh promised to implement whatever
was decided.

Contributing thread:

- `replace on empty strings
<http://mail.python.org/pipermail/python-dev/2006-May/065153.html>`__
- `Let's stop eating exceptions in dict lookup
<http://mail.python.org/pipermail/python-dev/2006-May/065346.html>`__

-----------------------------------
Allowing inline "if" with for-loops
-----------------------------------

Heiko Wundram presented a brief PEP suggesting that if-statements in
the first line of a for-loop could be optionally inlined, so for
example instead of::

    for node in tree:
        if node.haschildren():
            <do something with node>

you could write::

    for node in tree if node.haschildren():
        <do something with node>

Most people seemed to feel that saving a colon character and a few
indents was not a huge gain.  Some also worried that this change would
encourage code that was harder to read, particularly if the for-clause
or if-clause got long.  Guido rejected it, and Heiko promised to
submit it as a full PEP so that the rejection would be properly
recorded.

Contributing thread:

- `PEP-xxx: Unification of for statement and list-comp syntax
<http://mail.python.org/pipermail/python-dev/2006-May/065088.html>`__

----------------------------------------------
Splitting strings with embedded quoted strings
----------------------------------------------

Dave Cinege proposed augmenting str.split to allow a non-split
delimiter to be specified so that splits would not happen within
particular substrings, e.g.::

    >>> 'Here is "a phrase that should not get split"'.split(None,-1,'"')
    ['Here', 'is', 'a phrase that should not get split']

Most people were opposed to complicating the API of str.split, but
even as a separate method, people didn't seem to think that the need
was that great, particularly since the most common needs for such
functionality were already covered by ``shlex.split()`` and the csv
module.
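
For instance, the quoted-split behavior being asked for is already
available via shlex::

    >>> import shlex
    >>> shlex.split('Here is "a phrase that should not get split"')
    ['Here', 'is', 'a phrase that should not get split']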

Contributing thread:

- `New string method - splitquoted
<http://mail.python.org/pipermail/python-dev/2006-May/065037.html>`__

----------------------------------------
Deadlocks with fork() and multithreading
----------------------------------------

Rotem Yaari ran into some deadlocks using the subprocess module in a
multithreaded environment.  If a thread other than the thread calling
fork is holding the import lock, then since posix only replicates the
calling thread, the new child process ends up with an import lock that
is locked by a no longer existing thread.  Ronald Oussoren offered up
a repeatable test case, and a number of strategies for solving the
problem were discussed, including releasing the import lock during a
fork and throwing away the old import lock after a fork.

Contributing threads:

- `pthreads, fork, import, and execvp
<http://mail.python.org/pipermail/python-dev/2006-May/064983.html>`__
- `pthreads, fork, import, and execvp
<http://mail.python.org/pipermail/python-dev/2006-May/065023.html>`__

----------------
string.partition
----------------

Fredrik Lundh asked about the status of string.partition, and there
was a brief discussion about whether or not to return real string
objects or lazy objects that would only make a copy if the original
string disappeared.  Guido opted for the simpler approach using real
string objects, and Fredrik implemented it.
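
The resulting behavior is the straightforward eager one, e.g.::

    >>> 'key=value'.partition('=')
    ('key', '=', 'value')
    >>> 'key=value'.partition(':')
    ('key=value', '', '')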

Contributing threads:

- `whatever happened to string.partition ?
<http://mail.python.org/pipermail/python-dev/2006-May/065191.html>`__
- `[Python-checkins] whatever happened to string.partition ?
<http://mail.python.org/pipermail/python-dev/2006-May/065195.html>`__
- `partition() variants
<http://mail.python.org/pipermail/python-dev/2006-May/065216.html>`__

----------------------------
Speeding up parsing of longs
----------------------------

Runar Petursson asked about speeding up parsing of longs from a slice
of a string, e.g. ``long(mystring[x:y])``.  He initially proposed
adding start= and end= keyword arguments to the long constructor, but
that seemed like a slippery slope where every function that took a
string would eventually need the same arguments.  Tim Peters pointed
out that a buffer object would solve the problem if
``PyLong_FromString()`` supported buffer's "offset & length" view of
the world instead of only seeing the start index.  While adding a
``PyLong_FromStringAndSize()`` would solve this particular problem,
all the internal parsing routines have a similar problem -- none of
them support a slice-based API.

As an alternate approach, Martin Blais was working on a "hot" buffer
class, based on the design of the Java NIO ByteBuffer class, which
would work without an intermediate string creation or memory copy.

Contributing thread:

- `Cost-Free Slice into FromString constructors--Long
<http://mail.python.org/pipermail/python-dev/2006-May/065163.html>`__

----------------------
Speeding up try/except
----------------------

After Steve Holden noticed a ~60% slowdown between Python 2.4.3 and
the Python trunk on the pybench try/except test, Sean Reifschneider
and Richard Jones looked into the problem and found that the slowdown
was due to creation of Exception objects.  Exceptions had been
converted to new-style objects by using PyType_New() as the
constructor and then adding magic methods with PyMethodDef().  By
changing BaseException to use a PyType_Type definition and the proper
C struct to associate methods with the class, Sean and Richard Jones
were able to speed up try/except to 30% faster than it was in Python
2.4.3.

Contributing thread:

- `2.5a2 try/except slow-down: Convert to type?
<http://mail.python.org/pipermail/python-dev/2006-May/065147.html>`__

-----------------------------
Supporting zlib's inflateCopy
-----------------------------

Guido noticed that the zlib module was failing with libz 1.1.4.  Even
though Python has its own copy of libz 1.2.3, it tries to use the
system libraries on Unix, so when the zlib module's compress and
decompress objects were updated with a copy() method (using libz's
inflateCopy() function), this broke compatibility for any system that
used a zlib older than 1.2.0.  Chris AtLee provided a `patch
conditionalizing the addition of the copy() method`_ on the version of
libz available.

.. _patch conditionalizing the addition of the copy() method:
http://bugs.python.org/1503046

Contributing thread:

- `zlib module doesn't build - inflateCopy() not found
<http://mail.python.org/pipermail/python-dev/2006-May/065068.html>`__

------------------------
Potential ssize_t values
------------------------

Neal Norwitz looked through the Python codebase for longs that should
potentially be declared as ssize_t instead.  There was a brief
discussion about changing int's ob_ival to ssize_t, but this would
have been an enormous change this late in the release cycle and would
have slowed down operations on short ints.  Hash values were
also discussed, but since there's no natural correlation between a
hash value and the size of a collection, most people thought it was
unnecessary for the moment.  Martin v. Löwis suggested upping the
recursion limit to ssize_t, and formalizing a 16-bit and 31-bit limit
on line and column numbers, respectively.

Contributing threads:

- `ssize_t question: longs in header files
<http://mail.python.org/pipermail/python-dev/2006-May/065332.html>`__
- `ssize_t: ints in header files
<http://mail.python.org/pipermail/python-dev/2006-May/065333.html>`__

-----------------
itertools.iwindow
-----------------

Torsten Marek proposed adding a windowing function to itertools like::

    >>> list(iwindow(range(0,5), 3))
    [[0, 1, 2], [1, 2, 3], [2, 3, 4]]

Raymond Hettinger pointed him to a `previous discussion`_ on
comp.lang.python where he had explained that ``collections.deque()``
was usually a better solution.  Nick Coghlan suggested putting the
deque example in the collections module docs, but the thread trailed
off after that.
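
A sketch of the deque-based alternative (illustrative only, not the
recipe from that discussion)::

    from collections import deque
    from itertools import islice

    def iwindow(iterable, size):
        it = iter(iterable)
        window = deque(islice(it, size))
        if len(window) == size:
            yield list(window)
        for item in it:
            window.popleft()
            window.append(item)
            yield list(window)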

.. _previous discussion:
http://mail.python.org/pipermail/python-list/2005-March/270757.html

Contributing thread:

- `Proposal for a new itertools function: iwindow
<http://mail.python.org/pipermail/python-dev/2006-May/065276.html>`__

---------------------------------------------
Problems with buildbots and files left around
---------------------------------------------

Neal Norwitz discovered some problems with the buildbots after finding
a few tests that didn't properly clean up, leaving a few files around
afterwards.  Martin v. Löwis explained that forcing a build on a
non-existing branch will remove the build tree (which should clean up
a lot of the files) and also that "make distclean" could be added to
the clean step of Makefile.pre.in and master.cfg.

Contributing thread:

- `fixing buildbots
<http://mail.python.org/pipermail/python-dev/2006-May/065416.html>`__

------------------------------------
PEP 3101: Advanced String Formatting
------------------------------------

The discussion of `PEP 3101`_'s string formatting continued again this
fortnight.  Guido generally liked the proposal, though he suggested
following .NET's quoting syntax of doubling the braces, and maybe
allowing all formatting errors to pass silently so that rarely raised
exceptions don't hide themselves if their format string has an error.
The discussion was then moved to the `python-3000 list`_.
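
Under the doubled-brace convention, a literal brace is written by
doubling it; using the eventual str.format() spelling purely as an
illustration::

    >>> "{{0}} is literal, {0} is substituted".format(42)
    '{0} is literal, 42 is substituted'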

.. _PEP 3101: http://www.python.org/dev/peps/pep-3101/
.. _python-3000 list: http://mail.python.org/mailman/listinfo/python-3000

Contributing thread:

- `PEP 3101 Update
<http://mail.python.org/pipermail/python-dev/2006-May/065059.html>`__

-----------------------------
DONT_HAVE_* vs. HAVE_* macros
-----------------------------

Neal Norwitz asked whether some recently checked-in DONT_HAVE_* macros
should be replaced with HAVE_* macros instead.  Martin v. Löwis
indicated that these were probably written this way because Luke
Dunstan (the contributor) didn't want to modify configure.in and run
autoconf.  Luke noted that the configure.in and autoconf effort is
greater for Windows developers, but also agreed to convert things to
autoconf anyway.

Contributing thread:

- `[Python-checkins] r46064 - in python/trunk: Include/Python.h
Include/pyport.h Misc/ACKS Misc/NEWS Modules/_localemodule.c
Modules/main.c Modules/posixmodule.c Modules/sha512module.c
PC/pyconfig.h Python/thread_nt.h
<http://mail.python.org/pipermail/python-dev/2006-May/065124.html>`__

--------------------------------
Changing python int to long long
--------------------------------

Sean Reifschneider looked into converting the Python int type to long
long.  Though for simple math he saw speedups of around 25%, for ints
that fit entirely within 32-bits, the slowdown was around 11%.  Sean
was considering changing the int->long automatic conversion so that
ints would first be up-converted to long longs and then to Python
longs.  Guido said that it would be okay to standardize all ints as
64-bits everywhere, but only for Python 2.6.

Contributing thread:

- `Changing python int to "long long".
<http://mail.python.org/pipermail/python-dev/2006-May/065133.html>`__

----------------------------
C-level exception invariants
----------------------------

Tim Peters was looking at what kind of invariants could be promised
for C-level exceptions.  In particular, he was hoping to promise that
for PyFrameObject's f_exc_type, f_exc_value, and f_exc_traceback,
either all are NULL or none are NULL.  In his investigation, he found
a number of errors, including that _PyExc_Init() tries to raise an
AttributeError before the exception pointers have been initialized.

Contributing thread:

- `Low-level exception invariants?
<http://mail.python.org/pipermail/python-dev/2006-May/065231.html>`__

------------
C-code style
------------

Martin Blais asked about the policy for C code in Python core.  `PEP
7`_ explains that for old code, the most important thing is to be
consistent with the surrounding style.  For new C files (and for
Python 3000 code) indentation should be 4 spaces per indent, all
spaces (no tabs in any file).  There was a short discussion about
reformatting the current C code, but that would unnecessarily break
svn blame and make merging more difficult.

.. _PEP 7: http://www.python.org/dev/peps/pep-0007/

Contributing thread:

- `A can of worms... (does Python C code have a new C style?)
<http://mail.python.org/pipermail/python-dev/2006-May/065469.html>`__


================
Deferred Threads
================
- `feature request: inspect.isgenerator
<http://mail.python.org/pipermail/python-dev/2006-May/065334.html>`__
- `Python Benchmarks
<http://mail.python.org/pipermail/python-dev/2006-May/065480.html>`__


==================
Previous Summaries
==================
- `[Python-checkins] r45925 - in python/trunk: Lib/tempfile.py
Lib/test/test_os.py Misc/NEWS Modules/posixmodule.c
<http://mail.python.org/pipermail/python-dev/2006-May/065022.html>`__
- `[Web-SIG] Adding wsgiref to stdlib
<http://mail.python.org/pipermail/python-dev/2006-May/065116.html>`__


===============
Skipped Threads
===============
- `FYI: building on OS X
<http://mail.python.org/pipermail/python-dev/2006-May/065020.html>`__
- `MSVC 6.0 patch
<http://mail.python.org/pipermail/python-dev/2006-May/065027.html>`__
- `total ordering.
<http://mail.python.org/pipermail/python-dev/2006-May/065028.html>`__
- `[Python-checkins] r46002 - in python/branches/release24-maint:
Misc/ACKS Misc/NEWS Objects/unicodeobject.c
<http://mail.python.org/pipermail/python-dev/2006-May/065030.html>`__
- `Reminder: call for proposals "Python Language and Libraries Track"
for Europython 2006
<http://mail.python.org/pipermail/python-dev/2006-May/065057.html>`__
- `Decimal and Exponentiation
<http://mail.python.org/pipermail/python-dev/2006-May/065067.html>`__
- `Weekly Python Patch/Bug Summary
<http://mail.python.org/pipermail/python-dev/2006-May/065071.html>`__
- `New string method - splitquoted - New EmailAddress
<http://mail.python.org/pipermail/python-dev/2006-May/065078.html>`__
- `Iterating generator from C (PostgreSQL's pl/python RETUN
SETOF/RECORD iterator support broken on RedHat buggy libs)
<http://mail.python.org/pipermail/python-dev/2006-May/065111.html>`__
- `Charles Waldman?
<http://mail.python.org/pipermail/python-dev/2006-May/065120.html>`__
- `SSH key for work computer
<http://mail.python.org/pipermail/python-dev/2006-May/065138.html>`__
- `PySequence_Fast_GET_ITEM in string join
<http://mail.python.org/pipermail/python-dev/2006-May/065145.html>`__
- `Socket regression
<http://mail.python.org/pipermail/python-dev/2006-May/065160.html>`__
- `extensions and multiple source files with the same basename
<http://mail.python.org/pipermail/python-dev/2006-May/065162.html>`__
- `Import hooks falling back on built-in import machinery?
<http://mail.python.org/pipermail/python-dev/2006-May/065173.html>`__
- `This <http://mail.python.org/pipermail/python-dev/2006-May/065192.html>`__
- `SQLite header scan order
<http://mail.python.org/pipermail/python-dev/2006-May/065206.html>`__
- `[Python-checkins] r46247 - in
python/branches/sreifschneider-newnewexcept: Makefile.pre.in
Objects/exceptions.c Python/exceptions.c
<http://mail.python.org/pipermail/python-dev/2006-May/065211.html>`__
- `SQLite status?
<http://mail.python.org/pipermail/python-dev/2006-May/065214.html>`__
- `Request for patch review
<http://mail.python.org/pipermail/python-dev/2006-May/065215.html>`__
- `patch for mbcs codecs
<http://mail.python.org/pipermail/python-dev/2006-May/065268.html>`__
- `[Python-checkins] This
<http://mail.python.org/pipermail/python-dev/2006-May/065291.html>`__
- `[Python-checkins] r46300 - in python/trunk: Lib/socket.py
Lib/test/test_socket.py Lib/test/test_struct.py Modules/_struct.c
Modules/arraymodule.c Modules/socketmodule.c
<http://mail.python.org/pipermail/python-dev/2006-May/065296.html>`__
- `getting rid of confusing "expected a character buffer object"
messages <http://mail.python.org/pipermail/python-dev/2006-May/065298.html>`__
- `[Python-checkins] Python Regression Test Failures refleak (101)
<http://mail.python.org/pipermail/python-dev/2006-May/065310.html>`__
- `PEP 42 - New kind of Temporary file
<http://mail.python.org/pipermail/python-dev/2006-May/065315.html>`__
- `DRAFT: python-dev summary for 2006-03-16 to 2006-03-31
<http://mail.python.org/pipermail/python-dev/2006-May/065321.html>`__
- `dictionary order
<http://mail.python.org/pipermail/python-dev/2006-May/065322.html>`__
- `Syntax errors on continuation lines
<http://mail.python.org/pipermail/python-dev/2006-May/065327.html>`__
- `DRAFT: python-dev summary for 2006-04-01 to 2006-04-15
<http://mail.python.org/pipermail/python-dev/2006-May/065344.html>`__
- `Contributor agreements (was Re: DRAFT: python-dev summary for
2006-04-01 to 2006-04-15)
<http://mail.python.org/pipermail/python-dev/2006-May/065368.html>`__
- `Integer representation (Was: ssize_t question: longs in header
files) <http://mail.python.org/pipermail/python-dev/2006-May/065391.html>`__
- `problem with compilation flags used by distutils
<http://mail.python.org/pipermail/python-dev/2006-May/065418.html>`__
- `bug in PEP 318
<http://mail.python.org/pipermail/python-dev/2006-May/065440.html>`__
- `Socket module corner cases
<http://mail.python.org/pipermail/python-dev/2006-May/065443.html>`__
- `uriparsing library (Patch #1462525)
<http://mail.python.org/pipermail/python-dev/2006-May/065446.html>`__
- `optparse and unicode
<http://mail.python.org/pipermail/python-dev/2006-May/065458.html>`__
- `Reporting unexpected import failures as test failures in
regrtest.py <http://mail.python.org/pipermail/python-dev/2006-May/065468.html>`__
- `Add new PyErr_WarnEx() to 2.5?
<http://mail.python.org/pipermail/python-dev/2006-May/065478.html>`__
- `Arguments and PyInt_AsLong
<http://mail.python.org/pipermail/python-dev/2006-May/065485.html>`__

From lists at janc.be  Thu Jun 15 19:00:09 2006
From: lists at janc.be (Jan Claeys)
Date: Thu, 15 Jun 2006 19:00:09 +0200
Subject: [Python-Dev] Source control tools
In-Reply-To: <1px7bnk1z0ccy.dlg@usenet.alexanderweb.de>
References: <9e804ac0606121431o5a801df5w829a9c65f3d3d855@mail.gmail.com>
	<1px7bnk1z0ccy.dlg@usenet.alexanderweb.de>
Message-ID: <1150390810.28709.41.camel@localhost.localdomain>

On Tue, 13-06-2006 at 10:27 +0200, Alexander Schremmer wrote:
> Bazaar-NG seems to reach limits already when working on
> its own code/repository. 

Canonical uses bzr to develop launchpad.net, which is a "little bit"
larger than bzr itself, I suspect...?


-- 
Jan Claeys


From bborcic at gmail.com  Thu Jun 15 19:41:03 2006
From: bborcic at gmail.com (Boris Borcic)
Date: Thu, 15 Jun 2006 19:41:03 +0200
Subject: [Python-Dev] The baby and the bathwater (Re: Scoping,
 augmented assignment, 'fast   locals' - conclusion)
In-Reply-To: <20060614093042.F2D7.JCARLSON@uci.edu>
References: <20060613111408.F2C5.JCARLSON@uci.edu> <e6oquo$olu$1@sea.gmane.org>
	<20060614093042.F2D7.JCARLSON@uci.edu>
Message-ID: <e6s63v$2id$1@sea.gmane.org>

[this is bytes of an oversized put-all-into-it intervention. A possibly expanded 
version will be submitted on clp with local followup before a couple days]

Josiah Carlson wrote:

[BB]
 >> I'd say a first step in convincing me I am wrong would be to show me 
examples of
 >> object methods of the standard library that are recursive, and cut out for
 >> recursion.

[JC]
 > Actually, I don't believe that is necessary. I've shown that you would
 > get better performance with a non-recursive function and passing two
 > arguments, than you would passing one argument with scoping tricks to
 > get the second.

Assuming my aim was purely performance clearly stretches the hints I gave that I 
wasn't unconcerned by it. Taking ground on this to unwarrantedly suppose that my 
methodology was deficient, rectify it, and then dictate the "necessary" 
conclusions I find... er, preposterous seems balanced, "thought police" not 
totally unwarranted.

What I am doing. After a long while of relaxed ties with the latest Python, and 
upon observing at http://pythonsudoku.sourceforge.net/, an imo hugely bloated 
code-base concerned with sudoku, I found it a convenient means to refamiliarize 
myself with some leading edge Python, to explore the manifold of equivalent 
programs defined a priori by the invariant constraint :

"universal sudoku solver in +-pure python, ~10ms/problem, and ~60 LOCS total".

At present I must have examined about 50 versions, some unfinished, some 
differing only by details you would call "trivial", others massively differing 
by choice of key data types and modules. And algorithmic details, readability, 
ease of debugging, obviousness, speed, concision... in brief, differing in 
beauty as I see it.

A minute example that this approach works (given my aims) is that such a 
leading edge detail as the news that unicode.translate() had been 
accelerated during the latest 2.5 sprint has signification to me.

While I've not excluded code profiling as a tool to use at some point, I don't 
think it is adapted at this stage; timeit does what I need, whenever I want to 
compare speed of versions differing by one factor that usually implies many 
local changes. As made clear (I hope) speed isn't the primary or unique concern 
anyway.

[...]

This said...

This said, first : I deny you have or ever had real ground to refuse legitimacy, 
motivation, efficiency or methodological soundness to my approach.

Second : I tell you with good ground that under these (admittedly peculiar, but 
quite intense) "lighting conditions" the compiler "feature" that's the cause of 
the present debate stands out like a sore thumb.

[JC]
 >>> Given [...]
 >>> you are not likely to find me agreeing with you about
 >>> augmented assignment and/or lexically nested scopes.

[BB]
 >> I see. Thanks for the background. Background for background, let me just say 
that
 >> python hadn't yet grown a lambda when I first played with it. May I read your
 >> last statement as acknowledging that I am not so much asking for a door to be
 >> created, than asking for a naturally builtin door, not to be locked by special
 >> efforts ?

[JC]
 > No, you may not.  There are two options for the Python compiler/runtime
 > behavior when faced with an augmented assignment for a name not yet
 > known in this particular lexical scope but known in a parent scope:

"Special efforts" I maintain. Here they hide in the choice of hypothesis and 
tensing to artificially exclude from the "use case", what is in fact the most 
likely situation whenever the compiler/runtime follows to its end the codepath 
you advocate.

Indeed, that situation is almost bound to be the case *if* the resulting error 
message accurately describes the user's real error *and* the latter is not prey 
to foolish coding habits (further than the in-your-opinion fatally foolish habit 
that's assumed by hypothesis : and I guess this explains your bias here, 
crediting *presumed* fools with more than their share of foolishness - circular 
thinking).

As relates to this "background use case", what I advocate amounts to 
substituting "reference to unknown global variable" for "local variable 
referenced before initialization". An error message for another, both bringing 
attention to the right problem.
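
(Concretely, the disputed case is the familiar one where an augmented
assignment inside a nested function targets a name bound in the
enclosing scope:

    def outer():
        count = 0
        def inc():
            count += 1   # UnboundLocalError: 'count' is compiled as local to inc
        inc()

the compiler makes 'count' local to inc(), so the first read fails.)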

[JC]
 > to expect whenever the "compiler/runtime" system
 > assume you meant the name in the parent scope, or assume you made a
 > mistake. The current state of affairs (for all released Pythons with
 > augmented assignment) is to assume that you meant the local scope. Why?
 > Because while allowing the augmented assignment would make your life
 > easier in this case, the error would "pass silently"

Yeah, this all quite fits what I had anticipated when I spoke of my "deeply felt 
aversion for whatever resembles a constraining curriculum of errors to make (or 
else I would program in Java)". But it's true I hadn't anticipated the concurrent 
case of errors one is required to make simultaneously ;)

[JC]
 > when it was an
 > actual error (a not uncommon case),

I guess your insistence on the rightful methodology entitles me to enquire in 
turn about your choice of tool for establishing the relative frequency of said 
case?

Given that I've shown that it *is* the exceptional case, structurally speaking!

[JC]
 > which is a bit of a no-no,

What I would have believed a /bit/ of a no-no for Python is to turn away from 
the "consenting adults" spirit to go the way of so many other programming languages.

Picture of the latter way:

 - As a matter of course, placing the virtue of catching errors early before 
   the virtue of following programmers' legitimate intentions.
 - Designing for assumed chronic lusers in need of being protected by the 
   compiler from shooting themselves in the foot.

(Ok, this relates to a single marginal "feature", whose closest peer in 
characteristic is maybe the strange quirks of sum() - a quite distant peer).

[JC]
 > and the
 > case when you meant the parent scope is easily worked around.

Did I ever deny this? I actually started by showing how the runtime (friendly, 
as opposed to the compiler) taught me a workaround I did not yet know (contrary 
to the flurry of workarounds you've cited since, who knows why).

To python-devers, my concluding message will go thus, in mixed metaphors: beware 
of throwing away, along with the "bathwater" - the cultural intrusion of hordes 
of Scheme immigrants who take closures for the first word of Creation - the 
"baby": an *original* space in Python, well moderated but roomy enough for 
Python-natives to evolve a python-inspired style in closures with some flesh to 
it. Which means freedom from the nuisance barking of an obnoxious javanese 
watchdog named "Sin".

Regards,

Boris Borcic
--
"On na?t tous les m?tres du m?me monde"



From rhettinger at ewtllc.com  Thu Jun 15 21:26:59 2006
From: rhettinger at ewtllc.com (Raymond Hettinger)
Date: Thu, 15 Jun 2006 12:26:59 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <44917EC3.5000208@egenix.com>
References: <20060613004927.GA7988@21degrees.com.au>	<44903B26.1020409@egenix.com>	<4491386F.9010007@gmail.com>
	<44917EC3.5000208@egenix.com>
Message-ID: <4491B483.1060703@ewtllc.com>

An HTML attachment was scrubbed...
URL: http://mail.python.org/pipermail/python-dev/attachments/20060615/11b5cff5/attachment.html 

From jcarlson at uci.edu  Thu Jun 15 21:49:29 2006
From: jcarlson at uci.edu (Josiah Carlson)
Date: Thu, 15 Jun 2006 12:49:29 -0700
Subject: [Python-Dev] The baby and the bathwater (Re: Scoping,
	augmented assignment, 'fast   locals' - conclusion)
In-Reply-To: <e6s63v$2id$1@sea.gmane.org>
References: <20060614093042.F2D7.JCARLSON@uci.edu> <e6s63v$2id$1@sea.gmane.org>
Message-ID: <20060615124803.F327.JCARLSON@uci.edu>


As Guido has already asked for this thread to be moved to c.l.py, I'm
going to do so.

 - Josiah


From 2006a at usenet.alexanderweb.de  Thu Jun 15 22:33:49 2006
From: 2006a at usenet.alexanderweb.de (Alexander Schremmer)
Date: Thu, 15 Jun 2006 22:33:49 +0200
Subject: [Python-Dev] Source control tools
References: <9e804ac0606121431o5a801df5w829a9c65f3d3d855@mail.gmail.com>
	<1px7bnk1z0ccy.dlg@usenet.alexanderweb.de>
	<1150390810.28709.41.camel@localhost.localdomain>
Message-ID: <oiyy2xppuzka$.dlg@usenet.alexanderweb.de>

On Thu, 15 Jun 2006 19:00:09 +0200, Jan Claeys wrote:

> Op di, 13-06-2006 te 10:27 +0200, schreef Alexander Schremmer:
>> Bazaar-NG seems to reach limits already when working on
>> its own code/repository.
> 
> Canonical uses bzr to develop launchpad.net, which is a "little bit"
> larger than bzr itself, I suspect...?

I don't think so, without having seen the Launchpad code. I assume that
Launchpad has fewer committers (closed source!) and therefore fewer change
sets and fewer parallel branches.

When I pulled the bzr changesets once (1-3 months ago), it needed 3 hours on
a 900 MHz machine with a high-speed (> 50 MBit) internet connection (and it
was CPU bound). Note that bzr has gained a lot of speed since then, though.

Kind regards,
Alexander


From alexander.belopolsky at gmail.com  Thu Jun 15 23:13:08 2006
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Thu, 15 Jun 2006 17:13:08 -0400
Subject: [Python-Dev] Keeping interned strings in a set
Message-ID: <d38f5330606151413w31249461td8ca5b2f043a6fcd@mail.gmail.com>

As an exercise in using the new set C API, I've replaced the
"interned" dictionary in stringobject.c with a set.  Surprisingly,
what I thought would be a simple exercise, took several hours to
implement and debug.  Two problems are worth mentioning:

1. I had to add a function to setobject.h to retrieve a pointer to an
object stored in a set. I could not find a way to do it using the current
API (short of iterating through the set, of course).

2. I had to change the PyString_Fini() and PySet_Fini() calls in
Py_Finalize(), because cleaning the "interned" set cannot be done after the
set module is finalized.

If there is any interest, I will submit a patch, but it does not seem
to affect performance in any meaningful way.
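
To illustrate the first point in pure Python: interning needs to hand back
the *stored* (canonical) object, which a dict gives you directly but a plain
set does not. A minimal sketch (not the actual C code, just the shape of the
problem):

    # Interning with a dict: map each string to its canonical copy, so a
    # lookup hands back the object that is already stored in the table.
    _interned_dict = {}
    def intern_with_dict(s):
        return _interned_dict.setdefault(s, s)

    # With a plain set, "s in interned" only answers yes/no; there is no
    # public API to fetch the equal member that is already *in* the set,
    # which is the accessor the patch had to add at the C level.
    _interned_set = set()
    def intern_with_set(s):
        if s not in _interned_set:
            _interned_set.add(s)
        return s    # may be a duplicate of the member already stored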

From nnorwitz at gmail.com  Thu Jun 15 23:18:24 2006
From: nnorwitz at gmail.com (Neal Norwitz)
Date: Thu, 15 Jun 2006 14:18:24 -0700
Subject: [Python-Dev] Keeping interned strings in a set
In-Reply-To: <d38f5330606151413w31249461td8ca5b2f043a6fcd@mail.gmail.com>
References: <d38f5330606151413w31249461td8ca5b2f043a6fcd@mail.gmail.com>
Message-ID: <ee2a432c0606151418k4cd03d32pc602efcdc4329cc1@mail.gmail.com>

On 6/15/06, Alexander Belopolsky <alexander.belopolsky at gmail.com> wrote:
> As an exercise in using the new set C API, I've replaced the
> "interned" dictionary in stringobject.c with a set.
>
> If there is any interest, I will submit a patch, but it does not seem
> to affect performance in any meaningful way.

Can you measure memory usage difference in a large app?  How many
strings are interned?
(I think sets use less memory, it seems like they could, but I don't
really remember.)

n

From nicko at nicko.org  Fri Jun 16 00:45:40 2006
From: nicko at nicko.org (Nicko van Someren)
Date: Thu, 15 Jun 2006 23:45:40 +0100
Subject: [Python-Dev] Switch statement
In-Reply-To: <4491386F.9010007@gmail.com>
References: <20060613004927.GA7988@21degrees.com.au>
	<44903B26.1020409@egenix.com> <4491386F.9010007@gmail.com>
Message-ID: <B78A695A-7288-46EF-9EF8-E45F8B0C9B9D@nicko.org>

On 15 Jun 2006, at 11:37, Nick Coghlan wrote:
> ...
> The lack of a switch statement doesn't really bother me personally,  
> since I
> tend to just write my state machine type code so that it works off a
> dictionary that I define elsewhere,

Not trying to push more LISP into python or anything, but of course
we could converge your method and the switch statement elegantly if
only we could put whole suites into lambdas rather than just single
expressions :-)

{"case1": lamdba : some-suite-lambda... ,
  "case2": lambda : some-other-suite...
}[switch-condition-expression]()
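
In today's Python the same shape has to be spelled with named functions,
since a lambda body is limited to a single expression; a minimal runnable
sketch, names purely illustrative:

    def case1():
        return "some suite's result"

    def case2():
        return "some other suite's result"

    def default():
        return "no match"

    dispatch = {"case1": case1, "case2": case2}
    result = dispatch.get("case2", default)()   # "some other suite's result"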

	Nicko


From pje at telecommunity.com  Fri Jun 16 01:49:48 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Thu, 15 Jun 2006 19:49:48 -0400
Subject: [Python-Dev] Switch statement
In-Reply-To: <B78A695A-7288-46EF-9EF8-E45F8B0C9B9D@nicko.org>
References: <4491386F.9010007@gmail.com>
	<20060613004927.GA7988@21degrees.com.au>
	<44903B26.1020409@egenix.com> <4491386F.9010007@gmail.com>
Message-ID: <5.1.1.6.0.20060615194231.01e8afb8@sparrow.telecommunity.com>

At 11:45 PM 6/15/2006 +0100, Nicko van Someren wrote:
>On 15 Jun 2006, at 11:37, Nick Coghlan wrote:
> > ...
> > The lack of a switch statement doesn't really bother me personally,
> > since I
> > tend to just write my state machine type code so that it works off a
> > dictionary that I define elsewhere,
>
>Not trying to push more LISP into python or anything, but of course
>we could converge your method and the switch statement elegantly if
>only we could put whole suites into lamdbas rather than just single
>expressions :-)

As has already been pointed out, this

1) adds function call overhead,
2) doesn't allow changes to variables in the containing function, and
3) even if we had a rebinding operator for free variables, we would have 
the overhead of creating closures.

The lambda syntax does nothing to fix any of these problems, and you can 
already use a mapping of closures if you are so inclined.  However, you'll 
probably find that the cost of creating the dictionary of closures exceeds 
the cost of a naive sequential search using if/elif.
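
A rough sketch of how one might check that claim with timeit (purely
illustrative; the function names are made up and absolute numbers will vary):

    import timeit

    def with_if_elif(x):
        if x == 1: return 'one'
        elif x == 2: return 'two'
        elif x == 3: return 'three'
        else: return 'other'

    def with_closure_dict(x):
        # rebuilds the dict, and the closures over 'suffix', on every
        # call - which is exactly the cost being discussed above
        suffix = '!'
        return {1: lambda: 'one' + suffix,
                2: lambda: 'two' + suffix,
                3: lambda: 'three' + suffix}.get(x, lambda: 'other' + suffix)()

    for name in ('with_if_elif', 'with_closure_dict'):
        t = timeit.Timer('f(3)', 'from __main__ import %s as f' % name)
        print name, t.timeit(100000)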


From rhettinger at ewtllc.com  Fri Jun 16 02:47:03 2006
From: rhettinger at ewtllc.com (Raymond Hettinger)
Date: Thu, 15 Jun 2006 17:47:03 -0700
Subject: [Python-Dev] Keeping interned strings in a set
In-Reply-To: <d38f5330606151413w31249461td8ca5b2f043a6fcd@mail.gmail.com>
References: <d38f5330606151413w31249461td8ca5b2f043a6fcd@mail.gmail.com>
Message-ID: <4491FF87.5080006@ewtllc.com>

Alexander Belopolsky wrote:

>As an exercise in using the new set C API, I've replaced the
>"interned" dictionary in stringobject.c with a set.  Surprisingly,
>what I thought would be a simple exercise, took several hours to
>implement and debug.  Two problems are worth mentioning:
>
>1. I had to add a function to setobject.h to retrieve a pointer to an
>object stored in a set. I could not find a way to do it using current
>API (short of iterating through the set of course).
>
>2. I had to change the  PyString_Fini() and PySet_Fini() in
>Py_Finalize() because cleaning "interned" set cannot be done after the
>set module is finalized.
>
>If there is any interest, I will submit a patch, but it does not seem
>to affect performance in any meaningful way.
>  
>

I would be curious to see your patch.


Raymond

From alexander.belopolsky at gmail.com  Fri Jun 16 03:24:37 2006
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Thu, 15 Jun 2006 21:24:37 -0400
Subject: [Python-Dev] Keeping interned strings in a set
In-Reply-To: <4491FF87.5080006@ewtllc.com>
References: <d38f5330606151413w31249461td8ca5b2f043a6fcd@mail.gmail.com>
	<4491FF87.5080006@ewtllc.com>
Message-ID: <4B030711-ADF8-4817-8958-6BA758736F9B@local>

This is very raw, but in the spirit of "release early and often",  
here it is:

http://sourceforge.net/tracker/download.php?group_id=5470&atid=305470&file_id=181807&aid=1507011

On Jun 15, 2006, at 8:47 PM, Raymond Hettinger wrote:
>
> I would be curious to see your patch.
>
>
> Raymond


From rhettinger at ewtllc.com  Fri Jun 16 04:29:28 2006
From: rhettinger at ewtllc.com (Raymond Hettinger)
Date: Thu, 15 Jun 2006 19:29:28 -0700
Subject: [Python-Dev] Keeping interned strings in a set
In-Reply-To: <4B030711-ADF8-4817-8958-6BA758736F9B@local>
References: <d38f5330606151413w31249461td8ca5b2f043a6fcd@mail.gmail.com>
	<4491FF87.5080006@ewtllc.com>
	<4B030711-ADF8-4817-8958-6BA758736F9B@local>
Message-ID: <44921788.10601@ewtllc.com>

Alexander Belopolsky wrote:

> This is very raw, but in the spirit of "release early and often",  
> here it is:
>
> http://sourceforge.net/tracker/download.php? 
> group_id=5470&atid=305470&file_id=181807&aid=1507011
>
> On Jun 15, 2006, at 8:47 PM, Raymond Hettinger wrote:
>
>>
>> I would be curious to see your patch.
>

Nicely done.  It is fine by me if this goes in so we save a little space 
in the intern table.

One nit, in _Py_ReleaseInternedStrings() you can use 
PySet_CheckExact(interned) instead of the ob_type.


Raymond

From alexander.belopolsky at gmail.com  Fri Jun 16 05:47:56 2006
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Thu, 15 Jun 2006 23:47:56 -0400
Subject: [Python-Dev] Keeping interned strings in a set
In-Reply-To: <44921788.10601@ewtllc.com>
References: <d38f5330606151413w31249461td8ca5b2f043a6fcd@mail.gmail.com>
	<4491FF87.5080006@ewtllc.com>
	<4B030711-ADF8-4817-8958-6BA758736F9B@local>
	<44921788.10601@ewtllc.com>
Message-ID: <35EF44A0-7774-4CE3-A655-702A448A71BD@local>


On Jun 15, 2006, at 10:29 PM, Raymond Hettinger wrote:
>
> Nicely done.  It is fine by me if this goes in so we save a little  
> space in the intern table.

Thanks for the good word.   I've reworked the code a little bit and  
fixed the comments.  I don't have svn write access, so someone else  
will have to take over from here.

http://sourceforge.net/tracker/download.php?group_id=5470&atid=305470&file_id=181816&aid=1507011

>
> One nit, in _Py_ReleaseInternedStrings() you can use  
> PySet_CheckExact(interned) instead of the ob_type.

That's what I wanted to use myself, but  PySet_CheckExact does not  
exist.  This looks like an oversight, but fixing that should probably  
come as a separate patch.

From greg.ewing at canterbury.ac.nz  Fri Jun 16 06:13:34 2006
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Fri, 16 Jun 2006 16:13:34 +1200
Subject: [Python-Dev] Switch statement
In-Reply-To: <44917EC3.5000208@egenix.com>
References: <20060613004927.GA7988@21degrees.com.au>
	<44903B26.1020409@egenix.com> <4491386F.9010007@gmail.com>
	<44917EC3.5000208@egenix.com>
Message-ID: <44922FEE.1010001@canterbury.ac.nz>

M.-A. Lemburg wrote:

> My personal favorite is making the compiler
> smarter to detect the mentioned if-elif-else scheme
> and generate code which uses a lookup table for
> implementing fast branching.

But then the values need to be actual compile-time
constants, precluding the use of symbolic names,
values precomputed at run time, etc.

A new statement would allow us to simply document
that the case values are *assumed* constant, and
then the implementation could cache them in a dict
or whatever.
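
A small sketch of that idea at the Python level: the case values are
evaluated once, on first entry, and cached in a dict (the names here are
invented for illustration):

    RED, GREEN = 'red', 'green'   # symbolic names, values known only at run time

    _cases = None                 # filled in on first use

    def describe(colour):
        global _cases
        if _cases is None:
            # The case expressions are *assumed* constant, so it is safe
            # to evaluate them once here and reuse the table afterwards.
            _cases = {RED: 'stop', GREEN: 'go'}
        return _cases.get(colour, 'unknown')

    print describe(RED)        # stop
    print describe('blue')     # unknown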

--
Greg

From nicko at nicko.org  Fri Jun 16 06:26:31 2006
From: nicko at nicko.org (Nicko van Someren)
Date: Fri, 16 Jun 2006 05:26:31 +0100
Subject: [Python-Dev] Switch statement
In-Reply-To: <5.1.1.6.0.20060615194231.01e8afb8@sparrow.telecommunity.com>
References: <4491386F.9010007@gmail.com>
	<20060613004927.GA7988@21degrees.com.au>
	<44903B26.1020409@egenix.com> <4491386F.9010007@gmail.com>
	<5.1.1.6.0.20060615194231.01e8afb8@sparrow.telecommunity.com>
Message-ID: <3E682639-4F32-434A-A911-FEB6A90163A8@nicko.org>

On 16 Jun 2006, at 00:49, Phillip J. Eby wrote:

> At 11:45 PM 6/15/2006 +0100, Nicko van Someren wrote:
>> On 15 Jun 2006, at 11:37, Nick Coghlan wrote:
>> > ...
>> > The lack of a switch statement doesn't really bother me personally,
>> > since I
>> > tend to just write my state machine type code so that it works  
>> off a
>> > dictionary that I define elsewhere,
>>
>> Not trying to push more LISP into python or anything, but of course
>> we could converge your method and the switch statement elegantly if
>> only we could put whole suites into lamdbas rather than just single
>> expressions :-)
>
> As has already been pointed out, this
>
> 1) adds function call overhead,
> 2) doesn't allow changes to variables in the containing function, and
> 3) even if we had a rebinding operator for free variables, we would  
> have the overhead of creating closures.

Noted.  I find (2) the most compelling issue.  I was merely
suggesting a succinct way to express the model that Nick Coghlan was
espousing.

> The lambda syntax does nothing to fix any of these problems, and  
> you can already use a mapping of closures if you are so inclined.   
> However, you'll probably find that the cost of creating the  
> dictionary of closures exceeds the cost of a naive sequential  
> search using if/elif.

The smiley was supposed to indicate that this was not an entirely  
serious suggestion; my apologies if the signal was lost in  
transmission.  In the equivalent if/elif to a switch you're only
comparing a single value against a set of pre-computed values, and
expecting to do only half the tests on average, so it's almost certainly
going to be quicker than sorting out the whole set of closures.  I do
however have a bugbear about lambdas being restricted to single
expressions, but maybe that's just me.

	Nicko


From talin at acm.org  Fri Jun 16 06:35:58 2006
From: talin at acm.org (Talin)
Date: Thu, 15 Jun 2006 21:35:58 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <5.1.1.6.0.20060615194231.01e8afb8@sparrow.telecommunity.com>
References: <4491386F.9010007@gmail.com>	<20060613004927.GA7988@21degrees.com.au>	<44903B26.1020409@egenix.com>
	<4491386F.9010007@gmail.com>
	<5.1.1.6.0.20060615194231.01e8afb8@sparrow.telecommunity.com>
Message-ID: <4492352E.4020605@acm.org>

Phillip J. Eby wrote:
> As has already been pointed out, this
> 
> 1) adds function call overhead,
> 2) doesn't allow changes to variables in the containing function, and
> 3) even if we had a rebinding operator for free variables, we would have 
> the overhead of creating closures.
> 
> The lambda syntax does nothing to fix any of these problems, and you can 
> already use a mapping of closures if you are so inclined.  However, you'll 
> probably find that the cost of creating the dictionary of closures exceeds 
> the cost of a naive sequential search using if/elif.

This brings me back to my earlier point - I wonder if it would make 
sense for Python to have a non-closure form of lambda - essentially an 
old-fashioned subroutine:

    def foo( x ):
       x = 0
       sub bar: # Arguments are not allowed, since they create a scope
          x = y # Writes over the x defined in 'foo'

       bar()

The idea is that 'bar' would share the same scope as 'foo'. To keep the 
subroutine lightweight (i.e. just a single jump and return instruction 
in the virtual machine) arguments would not be allowed.

-- Talin

From martin at v.loewis.de  Fri Jun 16 07:13:55 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Fri, 16 Jun 2006 07:13:55 +0200
Subject: [Python-Dev] Last-minute curses patch
In-Reply-To: <44915BD3.4020109@livinglogic.de>
References: <44915BD3.4020109@livinglogic.de>
Message-ID: <44923E13.2090203@v.loewis.de>

Walter Dörwald wrote:
> I have a small patch http://bugs.python.org/1506645 that adds
> resizeterm() and resize_term() to the curses module. Can this still go
> in for beta1? I'm not sure, if it needs some additional configure magic.

It can go into beta1 until the beta1 code freeze is announced.
It does need a configure test, though.

Regards,
Martin

From walter at livinglogic.de  Fri Jun 16 09:42:30 2006
From: walter at livinglogic.de (=?ISO-8859-1?Q?Walter_D=F6rwald?=)
Date: Fri, 16 Jun 2006 09:42:30 +0200
Subject: [Python-Dev] Last-minute curses patch
In-Reply-To: <44923E13.2090203@v.loewis.de>
References: <44915BD3.4020109@livinglogic.de> <44923E13.2090203@v.loewis.de>
Message-ID: <449260E6.3050008@livinglogic.de>

Martin v. Löwis wrote:
> Walter Dörwald wrote:
>> I have a small patch http://bugs.python.org/1506645 that adds
>> resizeterm() and resize_term() to the curses module. Can this still go
>> in for beta1? I'm not sure, if it needs some additional configure magic.
> 
> It can go into beta1 until the beta1 code freeze is announced.

Great!

> It does need a configure test, though.

Unfortunately I have no idea how this whole configure business works!

Servus,
    Walter


From mal at egenix.com  Fri Jun 16 10:20:20 2006
From: mal at egenix.com (M.-A. Lemburg)
Date: Fri, 16 Jun 2006 10:20:20 +0200
Subject: [Python-Dev] Switch statement
In-Reply-To: <4491B483.1060703@ewtllc.com>
References: <20060613004927.GA7988@21degrees.com.au>	<44903B26.1020409@egenix.com>	<4491386F.9010007@gmail.com>	<44917EC3.5000208@egenix.com>
	<4491B483.1060703@ewtllc.com>
Message-ID: <449269C4.1010909@egenix.com>

Raymond Hettinger wrote:
>>> The optimisation of the if-elif case would then simply be to say that the
>>> compiler can recognise if-elif chains like the one above where the RHS
>>> of the comparisons are all hashable literals and collapse them to switch
>>> statements.
>>>     
>>
>> Right (constants are usually hashable :-).
>>   
> 
> The LHS is more challenging.  Given:
> 
>     if x == 1: p_one()
>     elif x == 2: p_two()
>     elif x == 3: p_three()
>     else: p_catchall()
> 
> There is no guarantee that x is hashable.  For example:
> 
>     class X:
>         def __init__(self, value):
>              self.value = value
>         def __cmp__(self, y):
>              return self.value == y
>     x = X(2)

That's a good point.

The PEP already addresses this by restricting the type of x to a
few builtin immutable and hashable types:

         ...the switching variable is one of the builtin
         immutable types: int, float, string, unicode, etc. (not
         subtypes, since it's not clear whether these are still
         immutable or not).

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 16 2006)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________

::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! ::::

From mal at egenix.com  Fri Jun 16 10:29:51 2006
From: mal at egenix.com (M.-A. Lemburg)
Date: Fri, 16 Jun 2006 10:29:51 +0200
Subject: [Python-Dev] Switch statement
In-Reply-To: <44922FEE.1010001@canterbury.ac.nz>
References: <20060613004927.GA7988@21degrees.com.au>	<44903B26.1020409@egenix.com>
	<4491386F.9010007@gmail.com>	<44917EC3.5000208@egenix.com>
	<44922FEE.1010001@canterbury.ac.nz>
Message-ID: <44926BFF.3000707@egenix.com>

Greg Ewing wrote:
> M.-A. Lemburg wrote:
> 
>> My personal favorite is making the compiler
>> smarter to detect the mentioned if-elif-else scheme
>> and generate code which uses a lookup table for
>> implementing fast branching.
> 
> But then the values need to be actual compile-time
> constants, precluding the use of symbolic names,
> values precomputed at run time, etc.

Good point.

> A new statement would allow us to simply document
> that the case values are *assumed* constant, and
> then the implementation could cache them in a dict
> or whatever.

That would be a very well hidden assignment of a "constantness"
property to a symbol or expression. We'd also run into
the problem of not knowing when to evaluate those
case expressions, e.g. at function compile time,
at run-time when first entering the switch statement,
etc.

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 16 2006)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________

::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! ::::

From gmccaughan at synaptics-uk.com  Fri Jun 16 13:15:41 2006
From: gmccaughan at synaptics-uk.com (Gareth McCaughan)
Date: Fri, 16 Jun 2006 12:15:41 +0100
Subject: [Python-Dev] Pre-PEP: Allow Empty Subscript List Without
	Parentheses
In-Reply-To: <448A6377.8040902@canterbury.ac.nz>
References: <b348a0850606090853n62a1d01fhfdef21463a06e37f@mail.gmail.com>
	<448A3F75.7090703@gmail.com> <448A6377.8040902@canterbury.ac.nz>
Message-ID: <200606161215.41941.gmccaughan@synaptics-uk.com>

> But only if it makes sense. I still think there are some
> severe conceptual difficulties with 0D arrays. One is
> the question of how many items it contains. With 1 or
> more dimensions, you can talk about its size along any
> chosen dimension. But with 0 dimensions there's no size
> to measure. Who's to say a 0D array has a size of 1, then?
> Part of my brain keeps saying it should be 0 -- i.e.
> it contains nothing at all!

For what it's worth (probably little), I'm fairly sure that
if you were to ask the question of a bunch of mathematicians
you'd get absolute unanimity on a 0-D array containing exactly
one element, indexed by the (unique) empty sequence. You'd
probably also get absolute unanimous puzzlement as to why
anyone other than mathematicians should care.
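
For a concrete illustration, NumPy's zero-dimensional arrays already behave
this way; a minimal interactive sketch, assuming NumPy is available:

    >>> import numpy
    >>> a = numpy.array(42)   # a 0-D array
    >>> a.shape               # no axes at all...
    ()
    >>> a.size                # ...yet exactly one element
    1
    >>> a[()]                 # indexed by the (unique) empty sequence
    42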

I'd say there are "conceptual difficulties" in the sense
that the concept is difficult to get one's head around,
not in the sense that there's any real doubt what the
Right Answer is.

For anyone unconvinced, it may or may not be helpful to
meditate on the fact that <anything>**0 is 1, and that an
empty product is conventionally defined to be 1.

None of the above is intended to constitute argument for
or against Noam's proposed change to Python. Python isn't
primarily a language for mathematicians, and so much the
better for Python.

-- 
Gareth McCaughan (unashamed pure mathematician, at least
by training and temperament)


From g.brandl at gmx.net  Fri Jun 16 13:57:17 2006
From: g.brandl at gmx.net (Georg Brandl)
Date: Fri, 16 Jun 2006 13:57:17 +0200
Subject: [Python-Dev] Bug in stringobject?
Message-ID: <e6u6at$sbu$1@sea.gmane.org>

In string_replace, there is

	if (PyString_Check(from)) {
	  /* Can this be made a '!check' after the Unicode check? */
	}
#ifdef Py_USING_UNICODE
	if (PyUnicode_Check(from))
		return PyUnicode_Replace((PyObject *)self,
					 from, to, count);
#endif
	else if (PyObject_AsCharBuffer(from, &tmp_s, &tmp_len))
		return NULL;

[the same check with "to"]

	return (PyObject *)replace((PyStringObject *) self,
				   (PyStringObject *) from,
				   (PyStringObject *) to, count);


Can this be correct if from or to isn't a string object, but a
char buffer compatible object?

Georg


From mal at egenix.com  Fri Jun 16 15:07:07 2006
From: mal at egenix.com (M.-A. Lemburg)
Date: Fri, 16 Jun 2006 15:07:07 +0200
Subject: [Python-Dev] Bug in stringobject?
In-Reply-To: <e6u6at$sbu$1@sea.gmane.org>
References: <e6u6at$sbu$1@sea.gmane.org>
Message-ID: <4492ACFB.7050407@egenix.com>



Georg Brandl wrote:
> In string_replace, there is
> 
> 	if (PyString_Check(from)) {
> 	  /* Can this be made a '!check' after the Unicode check? */
> 	}
> #ifdef Py_USING_UNICODE
> 	if (PyUnicode_Check(from))
> 		return PyUnicode_Replace((PyObject *)self,
> 					 from, to, count);
> #endif
> 	else if (PyObject_AsCharBuffer(from, &tmp_s, &tmp_len))
> 		return NULL;
> 
> [the same check with "to"]
> 
> 	return (PyObject *)replace((PyStringObject *) self,
> 				   (PyStringObject *) from,
> 				   (PyStringObject *) to, count);
> 
> 
> Can this be correct if from or to isn't a string object, but a
> char buffer compatible object?

Certainly not :-)

Also note that tmp_s and tmp_len are no longer used in the
function.

It appears as if there's some code missing from the function
(and that there's no unit test which actually does a string
replace with non-string objects).

Since replace() only works on string objects, it appears
as if a temporary string object would have to be created.
However, this would involve an unnecessary allocation
and copy process... it appears as if the refactoring
during the NFS sprint left out that case.

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 16 2006)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________

::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! ::::

From fredrik at pythonware.com  Fri Jun 16 15:13:35 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Fri, 16 Jun 2006 15:13:35 +0200
Subject: [Python-Dev] Bug in stringobject?
References: <e6u6at$sbu$1@sea.gmane.org> <4492ACFB.7050407@egenix.com>
Message-ID: <e6uaqc$c5r$1@sea.gmane.org>

M.-A. Lemburg wrote:
> Since replace() only works on string objects, it appears
> as if a temporary string object would have to be created.
> However, this would involve an unnecessary allocation
> and copy process... it appears as if the refactoring
> during the NFS sprint left out that case.

what's the beta 1 status ?  fixing this should be trivial, but I don't have any
cycles to spare today.

</F> 




From kristjan at ccpgames.com  Fri Jun 16 15:44:09 2006
From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_V=2E_J=F3nsson?=)
Date: Fri, 16 Jun 2006 13:44:09 -0000
Subject: [Python-Dev] unicode imports
Message-ID: <129CEF95A523704B9D46959C922A280002A4CC98@nemesis.central.ccp.cc>

Greetings!
 
Although Python has had full unicode support for filenames for a long time on selected platforms (e.g. Windows), there is one glaring deficiency:  it cannot import from paths containing unicode.  I've tried creating folders with Chinese characters and adding them to the path, to no avail.
The standard install path in Chinese distributions can be a non-ANSI path, and installing an embedded Python application there will break it.  At the moment this is hindering the installation of EVE in Chinese internet-cafés.
 
A cursory glance at import.c shows that the import mechanism is fairly complicated, and riddled with "char *path" thingies, and manual string arithmetic.  Do you have any suggestions on a clean way to unicodify the import mechanism?
 
A completely parallel implementation on the sys.path[i] level?
 
Are there other platforms beside Windows that would profit from this?
 
Cheers,
 
Kristján

From noamraph at gmail.com  Fri Jun 16 15:52:33 2006
From: noamraph at gmail.com (Noam Raphael)
Date: Fri, 16 Jun 2006 16:52:33 +0300
Subject: [Python-Dev] Pre-PEP: Allow Empty Subscript List Without
	Parentheses
In-Reply-To: <448BAEDD.7050300@gmail.com>
References: <b348a0850606090853n62a1d01fhfdef21463a06e37f@mail.gmail.com>
	<e6cdkg$kci$1@sea.gmane.org> <448A0C16.9080301@canterbury.ac.nz>
	<448A3F75.7090703@gmail.com> <448A6377.8040902@canterbury.ac.nz>
	<Pine.LNX.4.58.0606100206550.5223@server1.LFW.org>
	<448AA5A6.6090803@canterbury.ac.nz> <448AD1CB.1080409@gmail.com>
	<b348a0850606101218w653537b9ke163ff1f5c1f737b@mail.gmail.com>
	<448BAEDD.7050300@gmail.com>
Message-ID: <b348a0850606160652i684a1855hbbe60fcec1ee9ac3@mail.gmail.com>

Hello,

It seems to me that people don't object to my proposal, but don't find
it useful to them either.

The question is, what to do next. I guess one possibility is to raise
this discussion again in a few months, when people will be less
occupied with 2.5 beta. This is ok, although I would prefer a decision
before that, because it might affect the design of the library -
should I find a permanent workaround, or one that I know will be
removed in the future.

If you do want to continue the discussion to reach a decision, please
do. You can say that if nobody else on python-dev is interested, it
shouldn't be implemented. You can examine my use case, say if you
think it's reasonable, and suggest alternative solutions - or say that
you see how allowing an empty subscript list solves it elegantly (yes!)

My point is, I don't want this discussion to naturally die because
nobody is interested, since I am interested. So please say what you
think should happen to it, so we can reach a conclusion.

Now, if the discussion is to continue, Nick has proposed an alternative:

2006/6/11, Nick Coghlan <ncoghlan at gmail.com>:
> For your specific use cases, though, I'd be inclined to tweak the API a bit,
> and switch to using attributes for the single-valued data:
>
> tax_rates.income_tax = 0.18

It's probably ok, although I would prefer not having to artificially
group scalars just to make them attributes of something. I would
prefer remaining with one object, and having something like
income_tax.setvalue(), or even income_tax.value.

> Although the income tax rate should actually depend on the current financial
> year, since it can change over time as the government increases taxes ;)

But that's exactly why I prefer writing simply "income_tax[] = 0.18"
when it's a constant, which is completely analogous to
"income_tax[2005] = 0.17; income_tax[2006] = 0.18" when it depends on
something.

By the way, another thing about consistency: a friend of mine brought up
the point that there isn't another example of forbidden empty brackets
- [], {}, (), x() are all allowed.

And about the other thing Nick said:
> I guess I'm really only -0 on the idea of x[] invoking x.__getitem__(), and
> allowing the class to decide whether or not to define a default value for the
> subscript. I wouldn't implement it myself, but I wouldn't object strenuously
> if Guido decided it was OK :)

I would prefer an empty tuple, since invoking __getitem__ with no
arguments would be a special case: for all other possible subscript
lists, exactly one argument is passed to __getitem__. This leaves us
with one special case: a subscript list with one item and without a
trailing comma results in __getitem__ not getting a tuple, where in
all other cases it does get a tuple. This works exactly like
parentheses: they don't mean a tuple only when there's one item inside
them and no trailing comma.
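
A quick interactive sketch of the current behaviour described above:

    >>> class Subscript(object):
    ...     def __getitem__(self, key):
    ...         return key
    ...
    >>> s = Subscript()
    >>> s[1]         # one item, no trailing comma: not a tuple
    1
    >>> s[1,]        # trailing comma: a tuple
    (1,)
    >>> s[1, 2]      # several items: a tuple
    (1, 2)
    >>> s[()]        # today's spelling of the "empty subscript"
    ()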

Good bye,
Noam

From noamraph at gmail.com  Fri Jun 16 16:10:00 2006
From: noamraph at gmail.com (Noam Raphael)
Date: Fri, 16 Jun 2006 17:10:00 +0300
Subject: [Python-Dev] Pre-PEP: Allow Empty Subscript List Without
	Parentheses
In-Reply-To: <200606161215.41941.gmccaughan@synaptics-uk.com>
References: <b348a0850606090853n62a1d01fhfdef21463a06e37f@mail.gmail.com>
	<448A3F75.7090703@gmail.com> <448A6377.8040902@canterbury.ac.nz>
	<200606161215.41941.gmccaughan@synaptics-uk.com>
Message-ID: <b348a0850606160710x4a8eb80bv683771c82bde24a7@mail.gmail.com>

2006/6/16, Gareth McCaughan <gmccaughan at synaptics-uk.com>:
> None of the above is intended to constitute argument for
> or against Noam's proposed change to Python. Python isn't
> primarily a language for mathematicians, and so much the
> better for Python.
>
Thanks for your explanation of mathematical zero-dimensional arrays! I
just wanted to say that I really got to this just from trying to make
a *computer program* as simple as possible - from what I know now,
with empty subscript lists not allowed, my library will have more
lines of code, will have more details of interface, and will require
longer code to operate it. I'm not saying that not changing it will be
terrible - I'm just saying that if changing something makes other
things simpler AND goes along with mathematical intuition, it might be
the right thing to do...

Noam

From ncoghlan at gmail.com  Fri Jun 16 17:29:54 2006
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sat, 17 Jun 2006 01:29:54 +1000
Subject: [Python-Dev] unicode imports
In-Reply-To: <129CEF95A523704B9D46959C922A280002A4CC98@nemesis.central.ccp.cc>
References: <129CEF95A523704B9D46959C922A280002A4CC98@nemesis.central.ccp.cc>
Message-ID: <4492CE72.2070704@gmail.com>

Kristján V. Jónsson wrote:
> A cursory glance at import.c shows that the import mechanism is fairly 
> complicated, and riddled with "char *path" thingies, and manual string 
> arithmetic.  Do you have any suggestions on a clean way to unicodify the 
> import mechanism?

Can you install a PEP 302 path hook and importer/loader that can handle path 
entries that are Unicode strings? (I think this would end up being the 
parallel implementation you were talking about, though)
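
A rough skeleton of what such a hook might look like - only a sketch of the
PEP 302 shape, with packages, .pyc files and error handling all ignored and
the class name invented:

    import imp, os, sys

    class UnicodePathImporter(object):
        """PEP 302 importer for sys.path entries that are unicode strings."""

        def __init__(self, path_entry):
            if not isinstance(path_entry, unicode):
                raise ImportError   # let other hooks handle plain str entries
            self.path_entry = path_entry

        def find_module(self, fullname, path=None):
            filename = os.path.join(self.path_entry,
                                    fullname.split('.')[-1] + '.py')
            if os.path.isfile(filename):
                self.filename = filename
                return self
            return None

        def load_module(self, fullname):
            mod = sys.modules.setdefault(fullname, imp.new_module(fullname))
            mod.__file__ = self.filename
            mod.__loader__ = self
            exec open(self.filename, 'rU').read() in mod.__dict__
            return mod

    sys.path_hooks.append(UnicodePathImporter)
    sys.path_importer_cache.clear()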

If the code that traverses sys.path and sys.path_hooks is itself 
unicode-unaware (I don't remember if it is or isn't), then you might be able 
to trick it by poking a Unicode-savvy importer directly into the 
path_importer_cache for affected Unicode paths.

One issue is that the package and file names still have to be valid Python 
identifiers, which means ASCII. Unicode would be, at best, permitted only in 
the path entries.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------
             http://www.boredomandlaziness.org

From pje at telecommunity.com  Fri Jun 16 18:02:51 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Fri, 16 Jun 2006 12:02:51 -0400
Subject: [Python-Dev] unicode imports
In-Reply-To: <4492CE72.2070704@gmail.com>
References: <129CEF95A523704B9D46959C922A280002A4CC98@nemesis.central.ccp.cc>
	<129CEF95A523704B9D46959C922A280002A4CC98@nemesis.central.ccp.cc>
Message-ID: <5.1.1.6.0.20060616115737.0330b378@sparrow.telecommunity.com>

At 01:29 AM 6/17/2006 +1000, Nick Coghlan wrote:
>Kristján V. Jónsson wrote:
> > A cursory glance at import.c shows that the import mechanism is fairly
> > complicated, and riddled with "char *path" thingies, and manual string
> > arithmetic.  Do you have any suggestions on a clean way to unicodify the
> > import mechanism?
>
>Can you install a PEP 302 path hook and importer/loader that can handle path
>entries that are Unicode strings? (I think this would end up being the
>parallel implementation you were talking about, though)
>
>If the code that traverses sys.path and sys.path_hooks is itself
>unicode-unaware (I don't remember if it is or isn't), then you might be able
>to trick it by poking a Unicode-savvy importer directly into the
>path_importer_cache for affected Unicode paths.

Actually, you would want to put it in sys.path_hooks, and then instances 
would be placed in path_importer_cache automatically.  If you are adding it 
to the path_hooks after the fact, you should simply clear the 
path_importer_cache.  Simply poking stuff into the path_importer_cache is 
not a recommended approach.


>One issue is that the package and file names still have to be valid Python
>identifiers, which means ASCII. Unicode would be, at best, permitted only in
>the path entries.

If I understand the problem correctly, the issue is that if you install 
Python itself to a Unicode directory, you'll be unable to import anything 
from the standard library.  This isn't about module names, it's about the 
places on the path where that stuff goes.

However, if the issue is that the program works, but it puts unicode 
entries on sys.path, I would suggest simply encoding them to strings using 
the platform-appropriate codec.
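
A minimal sketch of that last suggestion (assuming the filesystem encoding
can actually represent the directory names involved):

    import sys

    encoding = sys.getfilesystemencoding() or 'ascii'
    fixed = []
    for entry in sys.path:
        if isinstance(entry, unicode):
            entry = entry.encode(encoding)   # may raise UnicodeEncodeError
        fixed.append(entry)
    sys.path[:] = fixed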


From jcarlson at uci.edu  Fri Jun 16 18:24:01 2006
From: jcarlson at uci.edu (Josiah Carlson)
Date: Fri, 16 Jun 2006 09:24:01 -0700
Subject: [Python-Dev] Pre-PEP: Allow Empty Subscript List Without
	Parentheses
In-Reply-To: <b348a0850606160710x4a8eb80bv683771c82bde24a7@mail.gmail.com>
References: <200606161215.41941.gmccaughan@synaptics-uk.com>
	<b348a0850606160710x4a8eb80bv683771c82bde24a7@mail.gmail.com>
Message-ID: <20060616091435.F332.JCARLSON@uci.edu>


"Noam Raphael" <noamraph at gmail.com> wrote:
> 
> 2006/6/16, Gareth McCaughan <gmccaughan at synaptics-uk.com>:
> > None of the above is intended to constitute argument for
> > or against Noam's proposed change to Python. Python isn't
> > primarily a language for mathematicians, and so much the
> > better for Python.
> >
> Thanks for your explanation of mathematical zero dimensional array! I
> just wanted to say that I really got to this just from trying to make
> a *computer program* as simple as possible - from what I know now,
> with empty subscript lists not allowed, my library will have more
> lines of code, will have more details of interface, and will require
> longer code to operate it. I'm not saying that not changing it will be
> terrible - I'm just saying that if changing something makes other
> things simpler AND goes along with mathematical intuition, it might be
> the right thing to do...

I'm not a mathematician, and I don't really work with arrays of any
dimensionality, so the need for 0-D subscripting via arr[], while being
cute, isn't compelling for my uses of Python.

Now, I appreciate the desire to reduce code length and complexity, but
from what I understand, the ultimate result of such a change to your
code would be to go from:
    arr[()]
to:
    arr[]

I don't see how this can reduce lines of code in implementation or use.
At most it is a saving of two characters per use, and a change in documentation
(specifying how you subscript 0-D arrays).  If you can show an example
where actual code line count is reduced with this change, I can't
guarantee that I would get behind this proposal in a few months (if the
conversation starts up again), but it may make me feel less strongly that your
proposal is essentially about aesthetics.

 - Josiah


From bob at redivi.com  Fri Jun 16 19:04:41 2006
From: bob at redivi.com (Bob Ippolito)
Date: Fri, 16 Jun 2006 10:04:41 -0700
Subject: [Python-Dev] unicode imports
In-Reply-To: <5.1.1.6.0.20060616115737.0330b378@sparrow.telecommunity.com>
References: <129CEF95A523704B9D46959C922A280002A4CC98@nemesis.central.ccp.cc>
	<129CEF95A523704B9D46959C922A280002A4CC98@nemesis.central.ccp.cc>
	<5.1.1.6.0.20060616115737.0330b378@sparrow.telecommunity.com>
Message-ID: <F75C6D7E-2605-466A-8391-EB7D392324E0@redivi.com>


On Jun 16, 2006, at 9:02 AM, Phillip J. Eby wrote:

> At 01:29 AM 6/17/2006 +1000, Nick Coghlan wrote:
>> Kristján V. Jónsson wrote:
>>> A cursory glance at import.c shows that the import mechanism is  
>>> fairly
>>> complicated, and riddled with "char *path" thingies, and manual  
>>> string
>>> arithmetic.  Do you have any suggestions on a clean way to  
>>> unicodify the
>>> import mechanism?
>>
>> Can you install a PEP 302 path hook and importer/loader that can  
>> handle path
>> entries that are Unicode strings? (I think this would end up being  
>> the
>> parallel implementation you were talking about, though)
>>
>> If the code that traverses sys.path and sys.path_hooks is itself
>> unicode-unaware (I don't remember if it is or isn't), then you  
>> might be able
>> to trick it by poking a Unicode-savvy importer directly into the
>> path_importer_cache for affected Unicode paths.
>
> Actually, you would want to put it in sys.path_hooks, and then  
> instances
> would be placed in path_importer_cache automatically.  If you are  
> adding it
> to the path_hooks after the fact, you should simply clear the
> path_importer_cache.  Simply poking stuff into the  
> path_importer_cache is
> not a recommended approach.
>
>
>> One issue is that the package and file names still have to be  
>> valid Python
>> identifiers, which means ASCII. Unicode would be, at best,  
>> permitted only in
>> the path entries.
>
> If I understand the problem correctly, the issue is that if you  
> install
> Python itself to a Unicode directory, you'll be unable to import  
> anything
> from the standard library.  This isn't about module names, it's  
> about the
> places on the path where that stuff goes.

There's a similar issue in that if sys.prefix contains a colon,  
Python is also busted:
http://python.org/sf/1507224

Of course, that's not a Windows issue, but it is everywhere else. The  
offending code in that case is Modules/getpath.c, which probably also  
has to change in order to make unicode directories work on Win32  
(though I think there may be a separate win32 implementation of  
getpath).

-bob


From janssen at parc.com  Fri Jun 16 19:06:08 2006
From: janssen at parc.com (Bill Janssen)
Date: Fri, 16 Jun 2006 10:06:08 PDT
Subject: [Python-Dev] Python 2.4 extensions require VC 7.1?
Message-ID: <06Jun16.100609pdt."58641"@synergy1.parc.xerox.com>

A colleague recently posted this message:

> I'm trying to build a Python extension, and Python 2.4 insists on the MS
> Visual C++ compiler version 7.1, which is included with the MS VC++ 2003
> toolkit.  This toolkit is no longer available for download from
> Microsoft (superseded by the 2005 version), so I'm stuck.

This seems sub-optimal.  I'm afraid I don't follow the Windows track
closely; has this been fixed for 2.5, or should it be reported as a
bug?

Bill


From astrand at lysator.liu.se  Fri Jun 16 21:41:39 2006
From: astrand at lysator.liu.se (Peter Astrand)
Date: Fri, 16 Jun 2006 21:41:39 +0200 (MEST)
Subject: [Python-Dev] Fwd: subprocess.Popen(.... stdout=IGNORE, ...)
In-Reply-To: <8393fff0606122207q75cbef99o15053d869805952f@mail.gmail.com>
References: <8393fff0606111659j16e0ed73wf16d3f7e0892d268@mail.gmail.com>
	<8393fff0606122207q75cbef99o15053d869805952f@mail.gmail.com>
Message-ID: <Pine.GSO.4.51L2.0606162118260.688@koeberg.lysator.liu.se>

On Tue, 13 Jun 2006, Martin Blais wrote:

Hi all. Now let's see if I remember something about my module...


> In the subprocess module, by default the files handles in the child
> are inherited from the parent.  To ignore a child's output, I can use
> the stdout or stderr options to send the output to a pipe::
>
>    p = Popen(command, stdout=PIPE, stderr=PIPE)
>
> However, this is sensitive to the buffer deadlock problem, where for
> example the buffer for stderr might become full and a deadlock occurs
> because the child is blocked on writing to stderr and the parent is
> blocked on reading from stdout or waiting for the child to finish.
>
> For example, using this command will cause deadlock::
>
>    call('cat /boot/vmlinuz'.split(), stdout=PIPE, stderr=PIPE)

Yes, the call convenience function is basically for the case when you are
not interested in redirection.


> Popen.communicate() implements a solution using either select() or
> multiple threads (under Windows) to read from the pipes, and returns
> the strings as a result.  It works out like this::
>
>    p = Popen(command, stdout=PIPE, stderr=PIPE)
>    output, errors = p.communicate()
>    if p.returncode != 0:
>         ...
>
> Now, as a user of the subprocess module, sometimes I just want to
> call some child process and simply ignore its output, and to do so I
> am forced to use communicate() as above and wastefully capture and
> ignore the strings.  This is actually quite a common use case.  "Just
> run something, and check the return code".

Yes, this is a common case, and using communicate() is indeed overkill and
wasteful.


> Right now, in order to do
> this without polluting the parent's output, you cannot use the call()
> convenience (or is there another way?).
>
> A workaround that works under UNIX is to do this::
>
>    FNULL = open('/dev/null', 'w')
>    returncode = call(command, stdout=FNULL, stderr=FNULL)

Yes, this works. You can also do:

returncode = subprocess.call(command, stdout=open('/dev/null', 'w'), stderr=subprocess.STDOUT)


> Some feedback requested, I'd like to know what you think:
>
> 1. Would it not be nice to add a IGNORE constant to subprocess.py
>    that would do this automatically?, i.e. ::
>
>      returncode = call(command, stdout=IGNORE, stderr=IGNORE)
>
>    Rather than capture and accumulate the output, it would find an
>    appropriate OS-specific way to ignore the output (the /dev/null file
>    above works well under UNIX, how would you do this under Windows?
>    I'm sure we can find something.)

I have a vague feeling that this has been discussed before, but I
cannot find a tracker item for it. I guess an IGNORE constant would be nice.
Using open('/dev/null', 'w') is only a few more characters to type, but as
you say, it's not platform independent.

So, feel free to submit a patch or a Feature Request tracker item.


> 2. call() should be modified to not be sensitive to the deadlock
>    problem, since its interface provides no way to return the
>    contents of the output.  The IGNORE value provides a possible
>    solution for this.

How do you suggest the call() should be modified? I'm not really sure it
can do more without being more complicated. Being simple is the main
purpose of call().


> 3. With the /dev/null file solution, the following code actually
>    works without deadlock, because stderr is never blocked on writing
>    to /dev/null::
>
>      p = Popen(command, stdout=PIPE, stderr=IGNORE)
>      text = p.stdout.read()
>      retcode = p.wait()
>
>    Any idea how this idiom could be supported using a more portable
>    solution (i.e. how would I make this idiom under Windows, is there
>    some equivalent to /dev/null)?

Yes, as Terry Reedy points out, NUL: can be used.
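
For what it's worth, os.devnull (new in 2.4) already hides that difference -
it is '/dev/null' on Unix and 'nul' on Windows - so a portable version of the
workaround could look roughly like this:

    import os
    from subprocess import call

    def call_discard(*args, **kwargs):
        """Run a command, discarding its output (sketch only)."""
        devnull = open(os.devnull, 'w')
        try:
            kwargs.setdefault('stdout', devnull)
            kwargs.setdefault('stderr', devnull)
            return call(*args, **kwargs)
        finally:
            devnull.close()

    returncode = call_discard('cat /boot/vmlinuz'.split())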

Regards,
/Peter Åstrand <astrand at lysator.liu.se>


From jcarlson at uci.edu  Fri Jun 16 22:38:16 2006
From: jcarlson at uci.edu (Josiah Carlson)
Date: Fri, 16 Jun 2006 13:38:16 -0700
Subject: [Python-Dev] Fwd: subprocess.Popen(.... stdout=IGNORE, ...)
In-Reply-To: <Pine.GSO.4.51L2.0606162118260.688@koeberg.lysator.liu.se>
References: <8393fff0606122207q75cbef99o15053d869805952f@mail.gmail.com>
	<Pine.GSO.4.51L2.0606162118260.688@koeberg.lysator.liu.se>
Message-ID: <20060616132455.F33A.JCARLSON@uci.edu>


There is a related bit of functionality for subprocess that would allow
for asynchronous handling of IO to/from the called subprocess.  It has
been implemented as a recipe [1], but requires the use of additional
pywin32 functionality on Windows.  As was the case for the original
subprocess module, in order to get the proper functionality on Windows,
we may need to include additional features from pywin32 into the
_subprocess.c driver, or alternatively, convert all _subprocess.c bits
into ctypes calls.

If the inclusion of additional code into _subprocess.c or the use of
ctypes is undesirable, this feature could require the *user* to install
pywin32 on Windows, which would be unfortunate, but perfectly reasonable.

With an asynchronous handler for the subprocess module, a user could
ignore or process output from a subprocess as desired or necessary.
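
On the Unix side the core of such an asynchronous handler is little more than
a select() loop over the pipe; a much-simplified sketch (no Windows support,
no stderr handling, no error handling):

    import os, select, subprocess

    def consume(cmd, handle_chunk):
        """Feed stdout chunks to handle_chunk() as they arrive (Unix only)."""
        p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
        fd = p.stdout.fileno()
        while True:
            ready, _, _ = select.select([fd], [], [], 0.1)
            if ready:
                chunk = os.read(fd, 4096)
                if not chunk:        # EOF: the child closed its stdout
                    break
                handle_chunk(chunk)
        return p.wait()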

 - Josiah

[1] http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/440554

Peter Astrand <astrand at lysator.liu.se> wrote:
> 
> On Tue, 13 Jun 2006, Martin Blais wrote:
> 
> Hi all. Now let's see if I remember something about my module...
> 
> 
> > In the subprocess module, by default the files handles in the child
> > are inherited from the parent.  To ignore a child's output, I can use
> > the stdout or stderr options to send the output to a pipe::
> >
> >    p = Popen(command, stdout=PIPE, stderr=PIPE)
> >
> > However, this is sensitive to the buffer deadlock problem, where for
> > example the buffer for stderr might become full and a deadlock occurs
> > because the child is blocked on writing to stderr and the parent is
> > blocked on reading from stdout or waiting for the child to finish.
> >
> > For example, using this command will cause deadlock::
> >
> >    call('cat /boot/vmlinuz'.split(), stdout=PIPE, stderr=PIPE)
> 
> Yes, the call convenience function is basically for the case when you are
> not interested in redirection.
> 
> 
> > Popen.communicate() implements a solution using either select() or
> > multiple threads (under Windows) to read from the pipes, and returns
> > the strings as a result.  It works out like this::
> >
> >    p = Popen(command, stdout=PIPE, stderr=PIPE)
> >    output, errors = p.communicate()
> >    if p.returncode != 0:
> >         ...
> >
> > Now, as a user of the subprocess module, sometimes I just want to
> > call some child process and simply ignore its output, and to do so I
> > am forced to use communicate() as above and wastefully capture and
> > ignore the strings.  This is actually quite a common use case.  "Just
> > run something, and check the return code".
> 
> Yes, this is a common case, and using communicate() is indeed overkill and
> wasteful.
> 
> 
> > Right now, in order to do
> > this without polluting the parent's output, you cannot use the call()
> > convenience (or is there another way?).
> >
> > A workaround that works under UNIX is to do this::
> >
> >    FNULL = open('/dev/null', 'w')
> >    returncode = call(command, stdout=FNULL, stderr=FNULL)
> 
> Yes, this works. You can also do:
> 
> returncode = subprocess.call(command, stdout=open('/dev/null', 'w'), stderr=subprocess.STDOUT)
> 
> 
> > Some feedback requested, I'd like to know what you think:
> >
> > 1. Would it not be nice to add a IGNORE constant to subprocess.py
> >    that would do this automatically?, i.e. ::
> >
> >      returncode = call(command, stdout=IGNORE, stderr=IGNORE)
> >
> >    Rather than capture and accumulate the output, it would find an
> >    appropriate OS-specific way to ignore the output (the /dev/null file
> >    above works well under UNIX, how would you do this under Windows?
> >    I'm sure we can find something.)
> 
> I have a vague feeling of that this has been discussed before, but I
> cannot find a tracker for this. I guess an IGNORE constant would be nice.
> Using open('/dev/null', 'w') is only a few more characters to type, but as
> you say, it's not platform independent.
> 
> So, feel free to submit a patch or a Feature Request Tracker.
> 
> 
> > 2. call() should be modified to not be sensitive to the deadlock
> >    problem, since its interface provides no way to return the
> >    contents of the output.  The IGNORE value provides a possible
> >    solution for this.
> 
> How do you suggest the call() should be modified? I'm not really sure it
> can do more without being more complicated. Being simple is the main
> purpose of call().
> 
> 
> > 3. With the /dev/null file solution, the following code actually
> >    works without deadlock, because stderr is never blocked on writing
> >    to /dev/null::
> >
> >      p = Popen(command, stdout=PIPE, stderr=IGNORE)
> >      text = p.stdout.read()
> >      retcode = p.wait()
> >
> >    Any idea how this idiom could be supported using a more portable
> >    solution (i.e. how would I make this idiom under Windows, is there
> >    some equivalent to /dev/null)?
> 
> Yes, as Terry Reedy points out, NUL: can be used.
> 
> Regards,
> /Peter Åstrand <astrand at lysator.liu.se>
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/jcarlson%40uci.edu


From mal at egenix.com  Fri Jun 16 23:06:19 2006
From: mal at egenix.com (M.-A. Lemburg)
Date: Fri, 16 Jun 2006 23:06:19 +0200
Subject: [Python-Dev] Beta 1 schedule ? (Bug in stringobject?)
In-Reply-To: <e6uaqc$c5r$1@sea.gmane.org>
References: <e6u6at$sbu$1@sea.gmane.org> <4492ACFB.7050407@egenix.com>
	<e6uaqc$c5r$1@sea.gmane.org>
Message-ID: <44931D4B.5060507@egenix.com>

Fredrik Lundh wrote:
> M.-A. Lemburg wrote:
>> Since replace() only works on string objects, it appears
>> as if a temporary string object would have to be created.
>> However, this would involve an unnecessary allocation
>> and copy process... it appears as if the refactoring
>> during the NFS sprint left out that case.
> 
> what's the beta 1 status ?  fixing this should be trivial, but I don't have any
> cycles to spare today.

Good question. PEP 356 says beta 1 was planned two days
ago...

http://www.python.org/dev/peps/pep-0356/

I'd also like to get the new winerror module in before
beta1 is released - documentation will follow next week:

https://sourceforge.net/tracker/?func=detail&atid=305470&aid=1505257&group_id=5470

Is it OK to first check in a pure Python version and then
replace this with a C implementation having the same interface
later on in the beta cycle ?

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 16 2006)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________

::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! ::::

From martin at v.loewis.de  Fri Jun 16 23:50:35 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Fri, 16 Jun 2006 23:50:35 +0200
Subject: [Python-Dev] Last-minute curses patch
In-Reply-To: <449260E6.3050008@livinglogic.de>
References: <44915BD3.4020109@livinglogic.de> <44923E13.2090203@v.loewis.de>
	<449260E6.3050008@livinglogic.de>
Message-ID: <449327AB.2000206@v.loewis.de>

Walter Dörwald wrote:
>> It can go into beta1 until the beta1 code freeze is announced.
> 
> Great!
> 
>> It does need a configure test, though.
> 
> Unfortunately I have no idea how this whole configure business works!

Unfortunately, this either means you have to learn it, or find somebody
who does it for you.

Regards,
Martin

From pje at telecommunity.com  Sat Jun 17 04:01:05 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Fri, 16 Jun 2006 22:01:05 -0400
Subject: [Python-Dev] An obscene computed goto bytecode hack for "switch" :)
Message-ID: <5.1.1.6.0.20060616210159.04480eb0@sparrow.telecommunity.com>

For folks contemplating what opcodes might need to be added to implement a 
"switch" statement, it turns out that there is a "clever way" (i.e. a 
filthy hack) to implement computed jumps in Python bytecode, using 
WHY_CONTINUE and END_FINALLY.

I discovered this rather by accident, while working on my BytecodeAssembler 
package: I was adding validation code to minimize the likelihood of 
generating incorrect code for blocks and loops, and so I was reading 
ceval.c to make sure I knew how those bytecodes worked.

And at some point it dawned on me that an END_FINALLY opcode that sees 
WHY_CONTINUE on top of the stack *is actually a computed goto*!  It has to 
be inside a SETUP_LOOP/POP_BLOCK pair, but apart from that it's quite 
straightforward.

So, taking the following example code as a basis for the input:

     def foo(x):
         switch x:
             case 1: return 42
             case 2: return 'foo'
             else:   return 27

I created a proof-of-concept implementation that generated the following 
bytecode for the function:

       0           0 SETUP_LOOP              36 (to 39)
                   3 LOAD_CONST               1 (<...method get of dict...>)
                   6 LOAD_FAST                0 (x)
                   9 CALL_FUNCTION            1

                  12 JUMP_IF_FALSE           18 (to 33)
                  15 LOAD_CONST               2 (...)
                  18 END_FINALLY

                  19 LOAD_CONST               3 (42)
                  22 RETURN_VALUE
                  23 JUMP_FORWARD            12 (to 38)

                  26 LOAD_CONST               4 ('foo')
                  29 RETURN_VALUE
                  30 JUMP_FORWARD             5 (to 38)

             >>   33 POP_TOP
                  34 LOAD_CONST               5 (27)
                  37 RETURN_VALUE

             >>   38 POP_BLOCK

             >>   39 LOAD_CONST               0 (None)
                  42 RETURN_VALUE

The code begins with a SETUP_LOOP, so that our pseudo-continues will 
work.  As a pleasant side-effect, any BREAK_LOOP operations in any of the 
suites will exit the entire switch block, jumping to offset 39 and the 
function exit.

At offset 3, I load the 'get' method of the switching dictionary as a 
constant -- this was simpler for my proof-of-concept, but a production 
version should probably load the dictionary and then get its 'get' method, 
because methods aren't marshallable and the above code therefore can't be 
saved in a .pyc file.  The remaining code up to offset 12 does a dictionary 
lookup, defaulting to None if the value of the switch expression isn't found.

At offset 12, I check if the jump target is false, and if so I assume it's 
None, and  jump ahead to the "else" suite.  If it's true, I load a constant 
value equal to the correct value of WHY_CONTINUE for the current Python 
version and fall through to the END_FINALLY.  So the END_FINALLY then pops 
WHY_CONTINUE and the jump target, jumping forward to the correct "case" branch.

The code that follows is then a series of "case" suites, each ending with a 
JUMP_FORWARD to the POP_BLOCK that ends the "loop".  In this case, however, 
those jumps are never actually taken, but if execution "fell out" of any of 
the cases, they would proceed to the end this way.

Anyway, the above function actually *runs* in any version of Python back to 
2.3, as long as the LOAD_CONST at offset 15 uses the right value of 
WHY_CONTINUE for that Python version.  Older Python versions are of course 
not going to have a "switch" statement, but the reason I'm excited about 
this is that I've been wishing for some way to branch within a function in 
order to create fast jump tables for generic functions.  This is pretty 
much what the doctor ordered.
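In case the disassembly is hard to follow, here is a rough pure-Python
equivalent of what the generated code does.  This is only an illustration
of the dispatch-through-a-dictionary idea; the whole point of the bytecode
hack is to jump within a single code object instead of paying for the
extra function calls:

    # Illustration only: dictionary-based dispatch, the slow way.
    def foo(x):
        def case_1():
            return 42
        def case_2():
            return 'foo'
        def default():
            return 27
        # the "switching dictionary" whose .get() the bytecode calls
        cases = {1: case_1, 2: case_2}
        return cases.get(x, default)()

    assert foo(1) == 42
    assert foo(2) == 'foo'
    assert foo(3) == 27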

One thing I'm curious about, if there are any PyPy folks listening: will 
tricks like this drive PyPy or Psyco insane?  :)  It's more than idle 
curiosity, as one of my goals for my next generic function system is that 
it should generate bytecode that's usable by PyPy and Psyco for 
optimization or translation purposes.


From alexander.belopolsky at gmail.com  Sat Jun 17 05:49:23 2006
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Fri, 16 Jun 2006 23:49:23 -0400
Subject: [Python-Dev] setobject code
Message-ID: <36A93C28-6789-4623-ADAA-D15F1950C111@local>

I would like to share a couple of observations that I made as I  
studied the latest setobject implementation.

1. Is there a reason not to have PySet_CheckExact, given that  
PyFrozenSet_CheckExact exists? Similarly, why PyAnySet_Check, but no  
PySet_Check or PyFrozenSet_Check?

2. The types of several data members in the dict-object and dict-entry  
structs were recently changed to Py_ssize_t. Whatever considerations  
prompted that change should probably apply to the similar  
members of the set-object and set-entry structs as well.



From ncoghlan at gmail.com  Sat Jun 17 06:17:23 2006
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sat, 17 Jun 2006 14:17:23 +1000
Subject: [Python-Dev] unicode imports
In-Reply-To: <5.1.1.6.0.20060616115737.0330b378@sparrow.telecommunity.com>
References: <129CEF95A523704B9D46959C922A280002A4CC98@nemesis.central.ccp.cc>
	<129CEF95A523704B9D46959C922A280002A4CC98@nemesis.central.ccp.cc>
	<5.1.1.6.0.20060616115737.0330b378@sparrow.telecommunity.com>
Message-ID: <44938253.2080203@gmail.com>

Phillip J. Eby wrote:
> Actually, you would want to put it in sys.path_hooks, and then instances 
> would be placed in path_importer_cache automatically.  If you are adding 
> it to the path_hooks after the fact, you should simply clear the 
> path_importer_cache.  Simply poking stuff into the path_importer_cache 
> is not a recommended approach.

Oh, I agree - poking it in directly was a desperation measure if the 
path_hooks machinery didn't like Unicode either.
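For anyone who hasn't played with the PEP 302 hooks, the sketch below
shows roughly what registering such a hook looks like.  The class and its
behaviour are entirely hypothetical - a real fix would have to do the
encoding work properly in find_module() - but the protocol (a callable in
sys.path_hooks that either raises ImportError or returns an importer with
find_module()/load_module()) is the real one:

    import imp
    import sys

    class UnicodePathImporter(object):
        """Hypothetical PEP 302 importer for unicode sys.path entries."""
        def __init__(self, path):
            if not isinstance(path, unicode):
                raise ImportError  # decline; let the normal machinery run
            self.path = path

        def find_module(self, fullname):
            try:
                # encode to the file system encoding and reuse the stock
                # search machinery for the actual lookup
                encoded = self.path.encode(sys.getfilesystemencoding())
                args = imp.find_module(fullname.split('.')[-1], [encoded])
            except ImportError:
                return None
            return UnicodePathLoader(args)

    class UnicodePathLoader(object):
        def __init__(self, args):
            self.args = args    # (file, pathname, description)

        def load_module(self, fullname):
            return imp.load_module(fullname, *self.args)

    sys.path_hooks.append(UnicodePathImporter)
    sys.path_importer_cache.clear()   # as Phillip says, clear the cache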

I've since gone and looked, and you may be screwed either way - the standard 
import paths appear to be always put on the system path as encoded 8-bit 
strings, not as Unicode objects.

That said, it also appears that the existing machinery *should* be able to 
handle non-ASCII path items, so long as 'Py_FileSystemDefaultEncoding' is set 
correctly. If it isn't handling it, then there's something else going wrong.

Modules/getpath.c and friends don't encode the results returned by the 
platform APIs, so the strings in sys.path end up in whatever form those 
APIs handed back.

Kristján, can you provide more details on the fault you get when trying to 
import from the path containing the Chinese characters? Specifically:

What is the actual file system path?
What do sys.prefix, sys.exec_prefix and sys.path contain?
What does sys.getdefaultencoding() return?
What do sys.stdin.encoding, sys.stdout.encoding and sys.stderr.encoding say?
What does "python -v" show?
Does adding the standard lib directories manually to sys.path make any difference?
Does setting PYTHONHOME to the appropriate settings make any difference?

Running something like the following would be good:

   import sys
   print "Prefixes:", sys.prefix, sys.exec_prefixes
   print "Path:", sys.path
   print "Default encoding:", sys.getdefaultencoding()
   print "Input encoding:", sys.stdin.encoding,
   print "Output encodings:", sys.stdout.encoding, sys.stderr.encoding
   try:
       import string # Make python -v do something interesting
   except ImportError:
       print "Could not find string module"
   sys.path.append(u"stdlib directory name")
   try:
       import string # Make python -v do something interesting
   except ImportError:
       print "Could not find string module"






-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------
             http://www.boredomandlaziness.org

From ncoghlan at gmail.com  Sat Jun 17 06:44:54 2006
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sat, 17 Jun 2006 14:44:54 +1000
Subject: [Python-Dev] unicode imports
In-Reply-To: <F75C6D7E-2605-466A-8391-EB7D392324E0@redivi.com>
References: <129CEF95A523704B9D46959C922A280002A4CC98@nemesis.central.ccp.cc>
	<129CEF95A523704B9D46959C922A280002A4CC98@nemesis.central.ccp.cc>
	<5.1.1.6.0.20060616115737.0330b378@sparrow.telecommunity.com>
	<F75C6D7E-2605-466A-8391-EB7D392324E0@redivi.com>
Message-ID: <449388C6.30601@gmail.com>

Bob Ippolito wrote:
> There's a similar issue in that if sys.prefix contains a colon, Python 
> is also busted:
> http://python.org/sf/1507224
> 
> Of course, that's not a Windows issue, but it is everywhere else. The 
> offending code in that case is Modules/getpath.c,

Since it has to do with the definition of Py_GetPath as returning a single 
string that is really a DELIM separated list of strings, where DELIM is 
defined by the current platform (';' on Windows, ':' everywhere else), this 
seems more like a platform problem than a Python problem, though - you can't 
have directories containing a colon as an entry in PATH or PYTHONPATH either. 
It's not really Python's fault that the platform defines a legal filename 
character as the delimiter for path entries.
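Just to make the failure mode concrete, the round-trip that bites is
nothing more than this (illustration only):

    # Joining path entries with ':' and splitting them back mangles any
    # entry that itself contains a colon.
    entries = ["/usr/lib/python2.5", "/Volumes/My Disk: Backup/site-packages"]
    joined = ":".join(entries)      # what Py_GetPath effectively hands over
    print joined.split(":")         # three bogus entries instead of two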

The only real alternative I can see is to normalise Py_GetPath to always 
return a ';' delimited list of strings, regardless of platform, and update 
PySys_SetPath accordingly. That'd cause potential compatibility problems for 
embedded interpreters, though.

I guess we could create a Py_GetPathEx and a PySys_SetPathEx that accepted the 
delimiters as arguments, and change the call in pythonrun.c from:

   PySys_SetPath(Py_GetPath())

to:

   PySys_SetPathEx(Py_GetPathEx(';'), ';')

(still an incompatible change, but an easier to manage one since you can 
easily provide different behavior for earlier versions of Python)

> which probably also 
> has to change in order to make unicode directories work on Win32 (though 
> I think there may be a separate win32 implementation of getpath).

There is - PC/getpathp.c

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------
             http://www.boredomandlaziness.org

From nyamatongwe at gmail.com  Sat Jun 17 06:53:08 2006
From: nyamatongwe at gmail.com (Neil Hodgson)
Date: Sat, 17 Jun 2006 14:53:08 +1000
Subject: [Python-Dev] unicode imports
In-Reply-To: <129CEF95A523704B9D46959C922A280002A4CC98@nemesis.central.ccp.cc>
References: <129CEF95A523704B9D46959C922A280002A4CC98@nemesis.central.ccp.cc>
Message-ID: <50862ebd0606162153w47caf10ej38a1c74f92857718@mail.gmail.com>

Kristján V. Jónsson:

> Although Python has had full Unicode support for filenames for a long time
> on selected platforms (e.g. Windows), there is one glaring deficiency:  It
> cannot import from paths containing Unicode.  I've tried creating folders
> with Chinese characters and adding them to path, to no avail.
> The standard install path in Chinese distributions can be a non-ANSI
> path, and installing an embedded Python application there will break it.

   It should be unusual for a Chinese installation to use an install
path that can not be represented in MBCS. Try encoding the install
directory into MBCS before adding it to sys.path.
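Something along these lines, that is (a Windows-only sketch, and the
directory name is of course a placeholder):

    import sys

    install_dir = u"C:\\Program Files\\SomeApp"   # hypothetical unicode path
    # the 'mbcs' codec maps to the ANSI code page and exists only on Windows
    sys.path.append(install_dir.encode('mbcs'))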

   Neil

From nnorwitz at gmail.com  Sat Jun 17 07:41:22 2006
From: nnorwitz at gmail.com (Neal Norwitz)
Date: Fri, 16 Jun 2006 22:41:22 -0700
Subject: [Python-Dev] Beta 1 schedule ? (Bug in stringobject?)
In-Reply-To: <44931D4B.5060507@egenix.com>
References: <e6u6at$sbu$1@sea.gmane.org> <4492ACFB.7050407@egenix.com>
	<e6uaqc$c5r$1@sea.gmane.org> <44931D4B.5060507@egenix.com>
Message-ID: <ee2a432c0606162241t303c173ct681e6bab49760b88@mail.gmail.com>

On 6/16/06, M.-A. Lemburg <mal at egenix.com> wrote:
> Fredrik Lundh wrote:
> >
> > what's the beta 1 status ?  fixing this should be trivial, but I don't have any
> > cycles to spare today.
>
> Good question. PEP 356 says beta 1 was planned two days
> ago...
>
> http://www.python.org/dev/peps/pep-0356/

beta 1 won't be released until the tests pass consistently.  That
hasn't happened much this week.  I updated the PEP's schedule.
Hopefully we can release early next week.  This means the code freeze
is likely to happen as early as Sunday (more likely Monday or
Tuesday).

http://mail.python.org/pipermail/python-checkins/2006-June/054104.html

> I'd also like to get the new winerror module in before
> beta1 is released - documentation will follow next week:
>
> https://sourceforge.net/tracker/?func=detail&atid=305470&aid=1505257&group_id=5470
>
> Is it OK to first check in a pure Python version and then
> replace this with a C implementation having the same interface
> later on in the beta cycle ?

My answer is no.  We've had too much breakage.  There are so many
things already in 2.5.  We really don't need one more thing to break.
There will be a 2.6.  winerror has limited impact.  At this point, I'd
rather not see any checkins except to fix something that's broken:
tests, docs, and bugfixes.  I seem to recall a bunch of checkins
recently that didn't have a test.

n

From rasky at develer.com  Fri Jun 16 21:12:29 2006
From: rasky at develer.com (Giovanni Bajo)
Date: Fri, 16 Jun 2006 21:12:29 +0200
Subject: [Python-Dev] Python 2.4 extensions require VC 7.1?
References: <06Jun16.100609pdt."58641"@synergy1.parc.xerox.com>
Message-ID: <011c01c69178$d0250d30$2b4d2a97@bagio>

Bill Janssen <janssen at parc.com> wrote:

>> I'm trying to build a Python extension, and Python 2.4 insists on
>> the MS
>> Visual C++ compiler version 7.1, which is included with the MS VC++
>> 2003
>> toolkit.  This toolkit is no longer available for download from
>> Microsoft (superseded by the 2005 version), so I'm stuck.
>
> This seems sub-optimal.  I'm afraid I don't follow the Windows track
> closely; has this been fixed for 2.5, or should it be reported as a
> bug?


It was discussed before, and the agreement was to use VS 2003 for another cycle
(i.e. 2.5). But the fact that VS 2003 is no longer available for download is an
important fact, and we might want to rediscuss the issue.

Giovanni Bajo


From fredrik at pythonware.com  Sat Jun 17 08:46:51 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Sat, 17 Jun 2006 08:46:51 +0200
Subject: [Python-Dev] Python 2.4 extensions require VC 7.1?
In-Reply-To: <011c01c69178$d0250d30$2b4d2a97@bagio>
References: <06Jun16.100609pdt."58641"@synergy1.parc.xerox.com>
	<011c01c69178$d0250d30$2b4d2a97@bagio>
Message-ID: <e708gp$dgi$2@sea.gmane.org>

Giovanni Bajo wrote:

> It was discussed before, and the agreement was to use VS 2003 for another cycle
> (i.e. 2.5). But the fact that VS 2003 is no longer available for download is an
> important fact, and we might want to rediscuss the issue.

it's still available in the .net sdk packages (see comp.lang.python), 
and it's still available for MSDN subscribers.

</F>


From bob at redivi.com  Sat Jun 17 08:55:51 2006
From: bob at redivi.com (Bob Ippolito)
Date: Fri, 16 Jun 2006 23:55:51 -0700
Subject: [Python-Dev] unicode imports
In-Reply-To: <449388C6.30601@gmail.com>
References: <129CEF95A523704B9D46959C922A280002A4CC98@nemesis.central.ccp.cc>
	<129CEF95A523704B9D46959C922A280002A4CC98@nemesis.central.ccp.cc>
	<5.1.1.6.0.20060616115737.0330b378@sparrow.telecommunity.com>
	<F75C6D7E-2605-466A-8391-EB7D392324E0@redivi.com>
	<449388C6.30601@gmail.com>
Message-ID: <07F3F437-E7BD-4228-8A87-098BA241907D@redivi.com>


On Jun 16, 2006, at 9:44 PM, Nick Coghlan wrote:

> Bob Ippolito wrote:
>> There's a similar issue in that if sys.prefix contains a colon,  
>> Python is also busted:
>> http://python.org/sf/1507224
>> Of course, that's not a Windows issue, but it is everywhere else.  
>> The offending code in that case is Modules/getpath.c,
>
> Since it has to do with the definition of Py_GetPath as returning a  
> single string that is really a DELIM separated list of strings,  
> where DELIM is defined by the current platform (';' on Windows, ':'  
> everywhere else), this seems more like a platform problem than a  
> Python problem, though - you can't have directories containing a  
> colon as an entry in PATH or PYTHONPATH either. It's not really  
> Python's fault that the platform defines a legal filename character  
> as the delimiter for path entries.
>
> The only real alternative I can see is to normalise Py_GetPath to  
> always return a ';' delimited list of strings, regardless of  
> platform, and update PySys_SetPath accordingly. That'd cause  
> potential compatibility problems for embedded interpreters, though.
>
> I guess we could create a Py_GetPathEx and a PySys_SetPathEx that  
> accepted the delimiters as arguments, and change the call in  
> pythonrun.c from:
>
>   PySys_SetPath(Py_GetPath())
>
> to:
>
>   PySys_SetPathEx(Py_GetPathEx(';'), ';')
>
> (still an incompatible change, but an easier to manage one since  
> you can easily provide different behavior for earlier versions of  
> Python)

No, that doesn't fix anything at all. The right solution is not to  
provide for a different delimiter, but to allow for a list (probably too  
early for PyObject* though) or an array of some kind (e.g. int argc,  
char **argv).

-bob


From scott+python-dev at scottdial.com  Sat Jun 17 09:10:52 2006
From: scott+python-dev at scottdial.com (Scott Dial)
Date: Sat, 17 Jun 2006 03:10:52 -0400
Subject: [Python-Dev] Last-minute curses patch
In-Reply-To: <449260E6.3050008@livinglogic.de>
References: <44915BD3.4020109@livinglogic.de> <44923E13.2090203@v.loewis.de>
	<449260E6.3050008@livinglogic.de>
Message-ID: <4493AAFC.5060703@scottdial.com>

Walter Dörwald wrote:
> Martin v. Löwis wrote:
>> It does need a configure test, though.
> 
> Unfortunately I have no idea how this whole configure business works!

I got bored. I posted a comment to the bug, which will direct you to 
http://scottdial.com/python-dev/curses-resizeterm.diff

-- 
Scott Dial
scott at scottdial.com
scodial at indiana.edu


From scott+python-dev at scottdial.com  Sat Jun 17 09:12:09 2006
From: scott+python-dev at scottdial.com (Scott Dial)
Date: Sat, 17 Jun 2006 03:12:09 -0400
Subject: [Python-Dev] Python 2.4 extensions require VC 7.1?
In-Reply-To: <011c01c69178$d0250d30$2b4d2a97@bagio>
References: <06Jun16.100609pdt."58641"@synergy1.parc.xerox.com>
	<011c01c69178$d0250d30$2b4d2a97@bagio>
Message-ID: <4493AB49.9020105@scottdial.com>

Giovanni Bajo wrote:
> It was discussed before, and the agreement was to use VS 2003 for another cycle
> (i.e. 2.5). But the fact that VS 2003 is no longer available for download is an
> important fact, and we might want to rediscuss the issue.

I don't recall the discussion vividly, but I think the reasoning was 
something like "because it still works."  Maybe I remember wrong, but 
that is not a compelling argument now that the toolkit is hard to get 
hold of.  If there is some kind of legwork involved with getting 
Python moved to VS2005, then I'll volunteer.

Despite the possible illegality of posting the toolkit, I suggest 
curious people google for '"Index Of" VCToolkitSetup.exe' if need be.

-- 
Scott Dial
scott at scottdial.com
scodial at indiana.edu

From martin at v.loewis.de  Sat Jun 17 10:16:09 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Sat, 17 Jun 2006 10:16:09 +0200
Subject: [Python-Dev] Python 2.4 extensions require VC 7.1?
In-Reply-To: <e708gp$dgi$2@sea.gmane.org>
References: <06Jun16.100609pdt."58641"@synergy1.parc.xerox.com>	<011c01c69178$d0250d30$2b4d2a97@bagio>
	<e708gp$dgi$2@sea.gmane.org>
Message-ID: <4493BA49.9050800@v.loewis.de>

Fredrik Lundh wrote:
>> It was discussed before, and the agreement was to use VS 2003 for another cycle
>> (i.e. 2.5). But the fact that VS 2003 is no longer available for download is an
>> important fact, and we might want to rediscuss the issue.
> 
> it's still available in the .net sdk packages (see comp.lang.python), 
> and it's still available for MSDN subscribers.

It's also easy to get a used copy on ebay.

Regards,
Martin

From martin at v.loewis.de  Sat Jun 17 10:25:46 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Sat, 17 Jun 2006 10:25:46 +0200
Subject: [Python-Dev] Python 2.4 extensions require VC 7.1?
In-Reply-To: <4493AB49.9020105@scottdial.com>
References: <06Jun16.100609pdt."58641"@synergy1.parc.xerox.com>	<011c01c69178$d0250d30$2b4d2a97@bagio>
	<4493AB49.9020105@scottdial.com>
Message-ID: <4493BC8A.4040709@v.loewis.de>

Scott Dial wrote:
> I don't recall the discussion vividly, but I think the reasoning was 
> something like "because it still works."  Maybe I remember wrong, but 
> that is not a compelling argument now that the toolkit is hard to get 
> hold of.  If there is some kind of legwork involved with getting 
> Python moved to VS2005, then I'll volunteer.

There were several reasons: it's the same compiler that was used to
compile Python 2.4, so authors of Python extension modules typically
already have a copy. Switching to VS 2005 would require people to
get that first, and it would require people to have three releases
of VC installed just to build modules for Python 2.3, 2.4, and 2.5.

Another reason is that I consider VS 2005 buggy, I hope that some
of the breakage that Microsoft has done to the C library is reverted
in a future release. VS2005 managed to break compatibility with
C89 and C99 in a way that made Python fail to start up, also, it
was possible to have the CRT abort just by calling the builtin open
with the wrong arguments. There is now a work-around for that breakage,
but still, I don't trust that VS 2005 is a good product.

I'm hoping that Python can skip VS 2005 entirely, and go straight
to VS 2007 (or whatever it will be called) for 2.6.

Regards,
Martin

From g.brandl at gmx.net  Sat Jun 17 11:17:09 2006
From: g.brandl at gmx.net (Georg Brandl)
Date: Sat, 17 Jun 2006 11:17:09 +0200
Subject: [Python-Dev] Improve error msgs?
In-Reply-To: <e6obkn$6ao$1@sea.gmane.org>
References: <e6obkn$6ao$1@sea.gmane.org>
Message-ID: <e70f2d$ssn$1@sea.gmane.org>

Georg Brandl wrote:
> In abstract.c, there are many error messages like
> 
> type_error("object does not support item assignment");
> 
> It helps debugging if the object's type was prepended.
> Should I go through the code and try to enhance them
> where possible?

So that's a definite "perhaps"?

Anyway, I've posted patch 1507676.

Georg


From martin at v.loewis.de  Sat Jun 17 10:41:53 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Sat, 17 Jun 2006 10:41:53 +0200
Subject: [Python-Dev] unicode imports
In-Reply-To: <129CEF95A523704B9D46959C922A280002A4CC98@nemesis.central.ccp.cc>
References: <129CEF95A523704B9D46959C922A280002A4CC98@nemesis.central.ccp.cc>
Message-ID: <4493C051.7040704@v.loewis.de>

Kristján V. Jónsson wrote:
> The standard install path in Chinese distributions can be a
> non-ANSI path, and installing an embedded Python application there will
> break it.

I very much doubt this. On a Chinese system, the Program Files folder
likely has a non-*ASCII* name, but it will have a fine *ANSI* name,
as the ANSI code page on that system should be either 936 (simplified
Chinese) or 950 (traditional Chinese) - unless the system is
misconfigured.

Can you please report what the path is, what the precise name of the
operating system is, and what the system locale and the system
code page are?

> A completely parallel implementation on the sys.path[i] level?

You should also take a look at what the 8.3 name of the path is.
I really cannot believe that the path is inaccessible to DOS
programs.
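For example, something like this should show it (assuming the pywin32
package is installed; the path is of course just a placeholder):

    import win32api

    path = u"C:\\Program Files\\\u4e2d\u6587"   # hypothetical non-ANSI path
    print win32api.GetShortPathName(path)       # the 8.3 form of the path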

> Are there other platforms beside Windows that would profit from this?

No.

Regards,
Martin

From martin at v.loewis.de  Sat Jun 17 10:48:51 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Sat, 17 Jun 2006 10:48:51 +0200
Subject: [Python-Dev] unicode imports
In-Reply-To: <50862ebd0606162153w47caf10ej38a1c74f92857718@mail.gmail.com>
References: <129CEF95A523704B9D46959C922A280002A4CC98@nemesis.central.ccp.cc>
	<50862ebd0606162153w47caf10ej38a1c74f92857718@mail.gmail.com>
Message-ID: <4493C1F3.7090803@v.loewis.de>

Neil Hodgson wrote:
>    It should be unusual for a Chinese installation to use an install
> path that can not be represented in MBCS. Try encoding the install
> directory into MBCS before adding it to sys.path.

Indeed. Unfortunately, people apparently install an English version
(because they can get that without paying any license fee), and then
create directory names that can't be represented in the ANSI code
page (which would then be 1252). Still, on such a system, the
target folder for programs should be "Program Files".

If people do that, they *should* change the system locale to some
Chinese locale, but being non-admin people, they often don't.

Regards,
Martin

From walter at livinglogic.de  Sat Jun 17 12:33:17 2006
From: walter at livinglogic.de (=?iso-8859-1?Q?Walter_D=F6rwald?=)
Date: Sat, 17 Jun 2006 12:33:17 +0200 (CEST)
Subject: [Python-Dev] Last-minute curses patch
In-Reply-To: <4493AA33.60001@scottdial.com>
References: <44915BD3.4020109@livinglogic.de> <44923E13.2090203@v.loewis.de>
	<449260E6.3050008@livinglogic.de> <4493AA33.60001@scottdial.com>
Message-ID: <61182.89.54.3.103.1150540397.squirrel@isar.livinglogic.de>

Scott Dial wrote:
> Walter Dörwald wrote:
>> Martin v. Löwis wrote:
>>> It does need a configure test, though.
>>
>> Unfortunately I have no idea how this whole configure business works!
>
> I got bored. I posted a comment to the bug, which will direct you to  http://scottdial.com/python-dev/curses-resizeterm.diff

Thanks for the patch!

I'm not sure if is_term_resized() can return ERR or not.

Can't comment on the configure logic.

Neal, can this still go in?

Servus,
   Walter




From mal at egenix.com  Sat Jun 17 13:13:34 2006
From: mal at egenix.com (M.-A. Lemburg)
Date: Sat, 17 Jun 2006 13:13:34 +0200
Subject: [Python-Dev] Beta 1 schedule ? (Bug in stringobject?)
In-Reply-To: <ee2a432c0606162241t303c173ct681e6bab49760b88@mail.gmail.com>
References: <e6u6at$sbu$1@sea.gmane.org> <4492ACFB.7050407@egenix.com>	
	<e6uaqc$c5r$1@sea.gmane.org> <44931D4B.5060507@egenix.com>
	<ee2a432c0606162241t303c173ct681e6bab49760b88@mail.gmail.com>
Message-ID: <4493E3DE.30700@egenix.com>

Neal Norwitz wrote:
> On 6/16/06, M.-A. Lemburg <mal at egenix.com> wrote:
>> Fredrik Lundh wrote:
>> >
>> > what's the beta 1 status ?  fixing this should be trivial, but I
>> don't have any
>> > cycles to spare today.
>>
>> Good question. PEP 356 says beta 1 was planned two days
>> ago...
>>
>> http://www.python.org/dev/peps/pep-0356/
> 
> beta 1 won't be released until the tests pass consistently.  That
> hasn't happened much this week.  I updated the PEP's schedule.
> Hopefully we can release early next week.  This means the code freeze
> is likely to happen as early as Sunday (more likely Monday or
> Tuesday).
> 
> http://mail.python.org/pipermail/python-checkins/2006-June/054104.html
> 
>> I'd also like to get the new winerror module in before
>> beta1 is released - documentation will follow next week:
>>
>> https://sourceforge.net/tracker/?func=detail&atid=305470&aid=1505257&group_id=5470
>>
>>
>> Is it OK to first check in a pure Python version and then
>> replace this with a C implementation having the same interface
>> later on in the beta cycle ?
> 
> My answer is no. 

Is that no to adding the winerror Python module or no to
replacing it with a C module later on in the beta cycle ?

Note that winerror is a new module, so it can't really
break anything.

> We've had too much breakage.  There are so many
> things already in 2.5.  We really don't need one more thing to break.
> There will be a 2.6.  winerror has limited impact.  At this point, I'd
> rather not see any checkins except to fix something that's broken:
> tests, docs, and bugfixes.  I seem to recall a bunch of checkins
> recently that didn't have a test.

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 17 2006)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________
2006-07-03: EuroPython 2006, CERN, Switzerland              15 days left

::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! ::::

From arigo at tunes.org  Sat Jun 17 13:18:41 2006
From: arigo at tunes.org (Armin Rigo)
Date: Sat, 17 Jun 2006 13:18:41 +0200
Subject: [Python-Dev] An obscene computed goto bytecode hack for
	"switch" :)
In-Reply-To: <5.1.1.6.0.20060616210159.04480eb0@sparrow.telecommunity.com>
References: <5.1.1.6.0.20060616210159.04480eb0@sparrow.telecommunity.com>
Message-ID: <20060617111841.GA19995@code0.codespeak.net>

Hi Phillip,

On Fri, Jun 16, 2006 at 10:01:05PM -0400, Phillip J. Eby wrote:
> One thing I'm curious about, if there are any PyPy folks listening: will 
> tricks like this drive PyPy or Psyco insane?  :)

Yes, both :-)

The reason is that the details of the stack behavior of END_FINALLY are
messy in CPython.  The finally blocks are the only place where the depth
of the stack is not known in advance: depending on how the finally block
is entered, there will be between one and three objects pushed (a single
None, or an int and another object, or an exception type, instance and
traceback).  Psyco cheats here and emulates a behavior where there is
always exactly one object instead (which can be a tuple), so if an
END_FINALLY sees values not put there in the "official" way it will just
crash.  PyPy works similarly but always expects three values.

(Hum, Psyco could easily be fixed to support your use case...  For PyPy
it would be harder without a performance hit.)
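To make the "one to three objects" point concrete for bystanders: the
variable stack depth comes from the different ways a finally suite can be
entered.  A plain-Python illustration (nothing PyPy- or Psyco-specific
about it):

    # The same finally suite is reached via break, via return and via an
    # exception; CPython pushes different markers on the stack for each.
    def ways(n):
        for i in range(3):
            try:
                if n == 0:
                    break                      # entered via a loop break
                elif n == 1:
                    return "early"             # entered via return
                elif n == 2:
                    raise ValueError("boom")   # entered via an exception
            finally:
                print "in finally, n =", n     # runs on every path
        return "fell through"

    for n in range(3):
        try:
            print ways(n)
        except ValueError, e:
            print "caught:", e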


A bientot,

Armin

From scott+python-dev at scottdial.com  Sat Jun 17 13:40:24 2006
From: scott+python-dev at scottdial.com (Scott Dial)
Date: Sat, 17 Jun 2006 07:40:24 -0400
Subject: [Python-Dev] Last-minute curses patch
In-Reply-To: <61182.89.54.3.103.1150540397.squirrel@isar.livinglogic.de>
References: <44915BD3.4020109@livinglogic.de>
	<44923E13.2090203@v.loewis.de>	<449260E6.3050008@livinglogic.de>
	<4493AA33.60001@scottdial.com>
	<61182.89.54.3.103.1150540397.squirrel@isar.livinglogic.de>
Message-ID: <4493EA28.1090800@scottdial.com>

Walter Dörwald wrote:
> I'm not sure if is_term_resized() can return ERR or not.

Oh whoops, you are of course correct. I have corrected the diff accordingly.

-- 
Scott Dial
scott at scottdial.com
scodial at indiana.edu

From kristjan at ccpgames.com  Sat Jun 17 13:56:31 2006
From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_V=2E_J=F3nsson?=)
Date: Sat, 17 Jun 2006 11:56:31 -0000
Subject: [Python-Dev] Python 2.4 extensions require VC 7.1?
Message-ID: <129CEF95A523704B9D46959C922A280002A4CD15@nemesis.central.ccp.cc>

I remember you voicing this point at the Texas sprint.  I can't say I agree.
The behaviour of certain functions (like signal and fopen) is undefined for certain arguments.  Undefined is undefined; exiting the program with an admonition is one of the possible outcomes (as is formatting your hard drive).
In my opinion this is to be considered "a good thing", since it helps us adhere to the "defined" parts of the standards and not rely on something which is utterly unknown or unpredictable by the standards (but which happens to work on some platforms, no thanks to the standards).
Besides, we only found this to be an issue in three places: the signal(), fopen() and strftime() functions.  Not a huge thing to fix.

Apart from these things, I must say that in my experience VS2005 is a surprisingly stable product.  Whole-program (link-time) optimization is a boon, and combined with profile-guided optimization (PGO) it works wonders.

VS2005 can also create binaries for the x64 Windows platform - no small point, and the primary reason we started using it in the first place.  I encourage people to look at the PCBuild8 directory in the current Python trunk.  In particular, I would welcome suggestions and comments on how to better automate the PGO build process.

Cheers,
Kristján

-----Original Message-----
From: python-dev-bounces+kristjan=ccpgames.com at python.org [mailto:python-dev-bounces+kristjan=ccpgames.com at python.org] On Behalf Of "Martin v. Löwis"
Sent: 17 June 2006 08:26
To: Scott Dial
Cc: Python Dev
Subject: Re: [Python-Dev] Python 2.4 extensions require VC 7.1?

Another reason is that I consider VS 2005 buggy, I hope that some
of the breakage that Microsoft has done to the C library is reverted
in a future release. VS2005 managed to break compatibility with
C89 and C99 in a way that made Python fail to start up, also, it
was possible to have the CRT abort just by calling the builtin open
with the wrong arguments. There is now a work-around for that breakage,
but still, I don't trust that VS 2005 is a good product.

I'm hoping that Python can skip VS 2005 entirely, and go straight
to VS 2007 (or whatever it will be called) for 2.6.



From martin at v.loewis.de  Sat Jun 17 14:19:19 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Sat, 17 Jun 2006 14:19:19 +0200
Subject: [Python-Dev] Python 2.4 extensions require VC 7.1?
In-Reply-To: <129CEF95A523704B9D46959C922A280002A4CD15@nemesis.central.ccp.cc>
References: <129CEF95A523704B9D46959C922A280002A4CD15@nemesis.central.ccp.cc>
Message-ID: <4493F347.5030306@v.loewis.de>

Kristján V. Jónsson wrote:
> I remember you voicing this point at the Texas sprint.  I can't say I
> agree. The behaviour of certain functions (like signal and fopen) is
> undefined for certain arguments.  Undefined is undefined, exiting the
> program with an admonition is one of the possible outcomes (as is
> formatting your hard drive.)


For fopen(3), you are right. For signal(3), VS2005 is in clear
violation with ISO C, which specifies in 7.14.1.1

       [#8] If the request can  be  honored,  the  signal  function
       returns  the  value  of  func for the most recent successful
       call to signal for the specified signal sig.   Otherwise,  a
       value  of SIG_ERR is returned and a positive value is stored
       in errno.

The set of acceptable signals is deliberately implementation-defined,
and it is consequential that an attempt to set an unsupported signal
gives an error (ISO C cannot specify that the error is EINVAL, since
EINVAL is not part of ISO C).

> In my opinion this is to be considered "a good thing" since it helps
> us adhere to the "defined" parts of the standards and not rely on
> something which is utterly unknown or unpredictable by the standards
> (but which happens to work on some platforms, no thanks to the
> standards.) Besides, we only found this an issue in three places:
> signal(), fopen() and strftime() functions.  Not a huge thing to fix.

For fopen and strftime, they could have achieved the same effect with
just setting errno to EINVAL. The C runtime library just should never
ever terminate the program unless explicitly asked to, just as Python
should never ever terminate execution without giving the application
a chance to intervene (e.g. by catching an exception).

> VS2005 also can create binaries for the X64 windows platform, no
> small point, and the primary reason we started using it in the first
> place.

OTOH, you don't *need* VS2005 to create AMD64 binaries. I have been
creating Itanium binaries for several years now with VS2003, and
started producing AMD64 binaries for Python 2.5.

Regards,
Martin

From scott+python-dev at scottdial.com  Sat Jun 17 14:54:06 2006
From: scott+python-dev at scottdial.com (Scott Dial)
Date: Sat, 17 Jun 2006 08:54:06 -0400
Subject: [Python-Dev] Python 2.4 extensions require VC 7.1?
In-Reply-To: <4493F347.5030306@v.loewis.de>
References: <129CEF95A523704B9D46959C922A280002A4CD15@nemesis.central.ccp.cc>
	<4493F347.5030306@v.loewis.de>
Message-ID: <4493FB6E.5090602@scottdial.com>

Martin v. Löwis wrote:
> For fopen(3), you are right. For signal(3), VS2005 is in clear
> violation with ISO C

I'm nobody but I don't find your argument compelling. I suggest you go 
read: http://msdn2.microsoft.com/en-us/library/ksazx244.aspx

In short, you can tell the CRT to do whatever you like when the 
parameters are invalid, including returning EINVAL.

#include <errno.h>
#include <stdlib.h>     /* _set_invalid_parameter_handler */
#include <crtdbg.h>     /* _CrtSetReportMode, _CRT_ASSERT */

/* Invoked by the VS2005 CRT instead of aborting the process. */
void VS2005_CRT_invalidParamHandler(const wchar_t* expression,
    const wchar_t* function,
    const wchar_t* file,
    unsigned int line,
    uintptr_t pReserved)
{ errno = EINVAL; }

int main() {
    // Disable VS2005's parameter checking aborts
    _set_invalid_parameter_handler(VS2005_CRT_invalidParamHandler);
    // Disable message box assertions
    _CrtSetReportMode(_CRT_ASSERT, 0);

    ...
}

I went back and read more of the older discussion. And I think your 
position is that you just don't want to force another compiler on 
people, but aren't developers used to this? And if the Express Edition 
(free version) is the target, there is no monetary reason to avoid the 
upgrade. And as others have said, a VS2005 version of python is faster.

For reference, http://msdn2.microsoft.com/en-us/library/ms235497.aspx 
contains the list of CRT breakages according to MSFT.

-- 
Scott Dial
scott at scottdial.com
scodial at indiana.edu

From martin at v.loewis.de  Sat Jun 17 15:27:36 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Sat, 17 Jun 2006 15:27:36 +0200
Subject: [Python-Dev] Python 2.4 extensions require VC 7.1?
In-Reply-To: <4493FB6E.5090602@scottdial.com>
References: <129CEF95A523704B9D46959C922A280002A4CD15@nemesis.central.ccp.cc>
	<4493F347.5030306@v.loewis.de> <4493FB6E.5090602@scottdial.com>
Message-ID: <44940348.9000005@v.loewis.de>

Scott Dial wrote:
>> For fopen(3), you are right. For signal(3), VS2005 is in clear
>> violation with ISO C
> 
> I'm nobody but I don't find your argument compelling. I suggest you go
> read: http://msdn2.microsoft.com/en-us/library/ksazx244.aspx
> 
> In short, you can tell the CRT to do whatever you like when the
> parameters are invalid, including returning EINVAL.

Sure, I can *make* the library conform to C 99. I could also write
my own C library entirely to achieve that effect. The fact remains
that VS 2005 violates standard C where VS 2003 and earlier did not:
A conforming program will abort, instead of completing successfully.

> I went back and read more of the older discussion. And I think your
> position is that you just don't want to force another compiler on
> people, 

That also, yes.

> but aren't developers used to this?

They can manage, sure, nobody will get injured. However, since somebody
will be unhappy no matter what I do, I do what makes most people happy,
i.e. no change.

Also, I'm really upset by Microsoft's attitude towards their C compiler.
They shouldn't have broken the C library like that, and they shouldn't
have taken the VS Express 2003 release off the net without any prior
warning.

> For reference, http://msdn2.microsoft.com/en-us/library/ms235497.aspx
> contains the list of CRT breakages according to MSFT.

Unfortunately, they don't list the specific breakage that the parameter
validation causes. They don't even document the effect of parameter
validation on their signal() documentation:

http://msdn.microsoft.com/library/default.asp?url=/library/en-us/vclib/html/_CRT_signal.asp

In any case, I see little chance for a change in the build procedure
for Python 2.5. Notice that none of the Python committers have spoken
in favour of changing the procedure (and some against).

Regards,
Martin


From andorxor at gmx.de  Sat Jun 17 16:06:54 2006
From: andorxor at gmx.de (Stephan Tolksdorf)
Date: Sat, 17 Jun 2006 16:06:54 +0200
Subject: [Python-Dev] Python 2.4 extensions require VC 7.1?
In-Reply-To: <4493FB6E.5090602@scottdial.com>
References: <129CEF95A523704B9D46959C922A280002A4CD15@nemesis.central.ccp.cc>	<4493F347.5030306@v.loewis.de>
	<4493FB6E.5090602@scottdial.com>
Message-ID: <44940C7E.6040207@gmx.de>

One reason for not switching to VC 8, which hasn't yet been explicitly 
mentioned here, is that MinGW does not yet easily support linking to the 
msvcr80 runtime library. Some C extension modules, for instance those in 
SciPy, are primarily developed under Linux with GCC and hence are most 
easily built on Windows with MinGW. If the official Python distribution 
was linked to msvcr80.dll, many extension modules probably could not be 
built "out of the box" on Windows (with MinGW) anymore.

The 64bit compiler in VS2005 is pretty handy, though.

Regards,
   Stephan


From ronaldoussoren at mac.com  Sat Jun 17 18:04:54 2006
From: ronaldoussoren at mac.com (Ronald Oussoren)
Date: Sat, 17 Jun 2006 18:04:54 +0200
Subject: [Python-Dev] unicode imports
In-Reply-To: <449388C6.30601@gmail.com>
References: <129CEF95A523704B9D46959C922A280002A4CC98@nemesis.central.ccp.cc>
	<129CEF95A523704B9D46959C922A280002A4CC98@nemesis.central.ccp.cc>
	<5.1.1.6.0.20060616115737.0330b378@sparrow.telecommunity.com>
	<F75C6D7E-2605-466A-8391-EB7D392324E0@redivi.com>
	<449388C6.30601@gmail.com>
Message-ID: <38FAA148-471D-4F99-A636-95BAED35B325@mac.com>


On 17-jun-2006, at 6:44, Nick Coghlan wrote:

> Bob Ippolito wrote:
>> There's a similar issue in that if sys.prefix contains a colon,  
>> Python
>> is also busted:
>> http://python.org/sf/1507224
>>
>> Of course, that's not a Windows issue, but it is everywhere else. The
>> offending code in that case is Modules/getpath.c,
>
> Since it has to do with the definition of Py_GetPath as returning a  
> single
> string that is really a DELIM separated list of strings, where  
> DELIM is
> defined by the current platform (';' on Windows, ':' everywhere  
> else), this
> seems more like a platform problem than a Python problem, though -  
> you can't
> have directories containing a colon as an entry in PATH or  
> PYTHONPATH either.
> It's not really Python's fault that the platform defines a legal  
> filename
> character as the delimiter for path entries.

On unix-y systems any character except the NUL byte can be used in a  
legal filesystem path, which leaves awfully few characters to use as a  
delimiter without risking issues like the one in the bug Bob mentioned.

>
> The only real alternative I can see is to normalise Py_GetPath to  
> always
> return a ';' delimited list of strings, regardless of platform, and  
> update
> PySys_SetPath accordingly. That'd cause potential compatibility  
> problems for
> embedded interpreters, though.

That wouldn't help: ';' is also a valid character in filenames on  
Unix.  Except for accepting the status quo (which is a perfectly fine  
alternative) there seem to be two valid ways to solve this problem.  
You can either define a Py_GetPath2 that returns a Python list or  
tuple, or introduce some way of quoting the delimiter. Both would be  
backward incompatible.

Ronald

From pje at telecommunity.com  Sat Jun 17 18:38:19 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Sat, 17 Jun 2006 12:38:19 -0400
Subject: [Python-Dev] An obscene computed goto bytecode hack for
 "switch" :)
In-Reply-To: <20060617111841.GA19995@code0.codespeak.net>
References: <5.1.1.6.0.20060616210159.04480eb0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060616210159.04480eb0@sparrow.telecommunity.com>
Message-ID: <5.1.1.6.0.20060617115323.03858148@sparrow.telecommunity.com>

At 01:18 PM 6/17/2006 +0200, Armin Rigo wrote:
>Psyco cheats here and emulates a behavior where there is
>always exactly one object instead (which can be a tuple), so if an
>END_FINALLY sees values not put there in the "official" way it will just
>crash.  PyPy works similarly but always expects three values.
>
>(Hum, Psyco could easily be fixed to support your use case...  For PyPy
>it would be harder without performance hit)

I suppose if the code knew it was running under PyPy or Psyco, the code 
could push three items or a tuple?  It's the knowing whether that's the 
case that would be difficult.  :)

I'm a bit surprised, though, since I thought PyPy was supposed to be an 
interpreter of CPython bytecode.  That is, that it runs unmodified Python 
bytecode.

Or are you guys just changing POP_BLOCK's semantics so it puts two extra 
None's on the stack when popping a SETUP_FINALLY block?  [Looks at the code]
Ah, yes.  But you're not emulating the control mechanism.  I see why you're 
saying it would be harder without a performance hit.  I could change the 
bytecode so it would work under PyPy as far as stack levels go, but I'd 
need to also be able to put wrapped unrollers on the stack (which seems 
impossible from within the interpreter), or else PyPy would have to check 
whether the unroller is an integer.

I guess it would probably make better sense to have a JUMP_TOP operation to 
implement the switch statement, and to use that under PyPy, keeping the 
hack only for implementing jump tables in older Python versions.

Anyway, if I do use this for older Python versions, would you accept a 
patch for Psyco to support it?  That would let us have JIT-compiled 
predicate dispatch for older Pythons, which sounds rather 
exciting.  :)  The current version of RuleDispatch is an interpreter that 
follows a tree data structure, but I am working on a new package, 
PEAK-Rules, that is planned to be able to translate dispatch trees directly 
into bytecode, thus removing one level of interpretation.

I do have some other questions, but I suppose this is getting off-topic for 
python-dev now, so I'll jump over to psyco-devel once I've gotten a bit 
further along.  Right now, I've only just got BytecodeAssembler up to 
building simple expression trees, and the "computed goto" demo.


From noamraph at gmail.com  Sat Jun 17 20:38:32 2006
From: noamraph at gmail.com (Noam Raphael)
Date: Sat, 17 Jun 2006 21:38:32 +0300
Subject: [Python-Dev] Pre-PEP: Allow Empty Subscript List Without
	Parentheses
In-Reply-To: <20060616091435.F332.JCARLSON@uci.edu>
References: <200606161215.41941.gmccaughan@synaptics-uk.com>
	<b348a0850606160710x4a8eb80bv683771c82bde24a7@mail.gmail.com>
	<20060616091435.F332.JCARLSON@uci.edu>
Message-ID: <b348a0850606171138o36576009sde006466095eb7bb@mail.gmail.com>

Hello,

2006/6/16, Josiah Carlson <jcarlson at uci.edu>:
> I'm not a mathematician, and I don't really work with arrays of any
> dimensionality, so the need for 0-D subscripting via arr[] while being
> cute, isn't compelling to my uses for Python.

Thanks for appreciating its cuteness...
>
> Now, I appreciate the desire to reduce code length and complexity, but
> from what I understand, the ultimate result of such a change to your
> code would be to go from:
>     arr[()]
> to:
>     arr[]
>
> I don't see how this can reduce lines of code in implementation or use.
> At most it is a two characters per use, and a change in documentation
> (specifying how you subscript 0-D arrays).  If you can show an example
> where actual code line count is reduced with this change, I can't
> guarantee that I would get behind this proposal in a few months (if the
> conversation starts up again), but it may make me feel less that your
> proposal is essentially about aesthetics.

I meant the extra code for writing a special class to handle scalars,
if I decide that the "x[()]" syntax is too ugly or too hard to type,
so I write a special class which will allow the syntax "x.value".

The extra parentheses might not seem to matter for code using that
library, but I intend for people to use it directly, in an interactive
way, just like you type an expression in a spreadsheet. I expect that
for such a usage, the extra parentheses will be slightly unfun.

I know that it's not such a big difference, but I'm not talking about
a big change to the language either - it affects less than 20 lines of
code (probably could be done with even less), and doesn't cause any
problems with anything.

I can imagine Guido designing the grammar, thinking, "Should I allow
an empty subscript list? No, why should anyone want such a thing?
Besides, if someone wants them, we can always add them later." - at
least, that may be how I would think if I designed a language. So now, a
use has been found. Indeed, it is fairly rare. But why not allow it now?

Have a good week,
Noam

From martin at v.loewis.de  Sat Jun 17 21:09:06 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Sat, 17 Jun 2006 21:09:06 +0200
Subject: [Python-Dev] Pre-PEP: Allow Empty Subscript List
	Without	Parentheses
In-Reply-To: <b348a0850606171138o36576009sde006466095eb7bb@mail.gmail.com>
References: <200606161215.41941.gmccaughan@synaptics-uk.com>	<b348a0850606160710x4a8eb80bv683771c82bde24a7@mail.gmail.com>	<20060616091435.F332.JCARLSON@uci.edu>
	<b348a0850606171138o36576009sde006466095eb7bb@mail.gmail.com>
Message-ID: <44945352.2000003@v.loewis.de>

Noam Raphael wrote:
> I meant the extra code for writing a special class to handle scalars,
> if I decide that the "x[()]" syntax is too ugly or too hard to type,
> so I write a special class which will allow the syntax "x.value".

What I cannot understand is why you use a zero-dimensional array to
represent a scalar. Scalars are directly supported in Python:

x = 5

Also, in an assignment, what are you putting on the right-hand side?
A read access from another zero-dimensional array?

I think this feature is so esoteric that it would actually hurt the
language to have it.

Regards,
Martin

From greg at electricrain.com  Sat Jun 17 22:09:50 2006
From: greg at electricrain.com (Gregory P. Smith)
Date: Sat, 17 Jun 2006 13:09:50 -0700
Subject: [Python-Dev] Python 2.4 extensions require VC 7.1?
In-Reply-To: <44940348.9000005@v.loewis.de>
References: <129CEF95A523704B9D46959C922A280002A4CD15@nemesis.central.ccp.cc>
	<4493F347.5030306@v.loewis.de> <4493FB6E.5090602@scottdial.com>
	<44940348.9000005@v.loewis.de>
Message-ID: <20060617200950.GD7182@zot.electricrain.com>

On Sat, Jun 17, 2006 at 03:27:36PM +0200, "Martin v. Löwis" wrote:
> Scott Dial wrote:
> >> For fopen(3), you are right. For signal(3), VS2005 is in clear
> >> violation with ISO C
> > 
> > I'm nobody but I don't find your argument compelling. I suggest you go
> > read: http://msdn2.microsoft.com/en-us/library/ksazx244.aspx
> > 
> > In short, you can tell the CRT to do whatever you like when the
> > parameters are invalid, including returning EINVAL.
> 
> Sure, I can *make* the library conform to C 99. I could also write
> my own C library entirely to achieve that effect. The fact remains
> that VS 2005 violates standard C where VS 2003 and earlier did not:
> A conforming program will abort, instead of completing successfully.

A note from the sidelines on this:

Don't assume Microsoft is ever going to "fix" their compilers.  C and
modern C standards are not important to them.  MS is large enough
that they can choose not to conform to, or to break from, the standards and
you'll just have to live with it.  MS uses C++ and C# for everything
internally so they have little internal incentive.

> > I went back and read more of the older discussion. And I think your
> > position is that you just don't want to force another compiler on
> > people, 
> 
> That also, yes.
>
> > but aren't developers used to this?
> 
> They can manage, sure, nobody will get injured. However, since somebody
> will be unhappy no matter what I do, I do what makes most people happy,
> i.e. no change.
> 
> Also, I'm really upset by Microsoft's attitude towards their C compiler.
> They shouldn't have broken the C library like that, and they shouldn't
> have taken the VS Express 2003 release off the net without any prior
> warning.

Agreed.  Regardless, I don't see this as something that the world
being pissed off at them is going to change.  There are other C compilers for
Windows if you don't want to be at their mercy.


From lists at janc.be  Sat Jun 17 22:31:53 2006
From: lists at janc.be (Jan Claeys)
Date: Sat, 17 Jun 2006 22:31:53 +0200
Subject: [Python-Dev] Python 2.4 extensions require VC 7.1?
In-Reply-To: <4493BC8A.4040709@v.loewis.de>
References: <06Jun16.100609pdt."58641"@synergy1.parc.xerox.com>
	<011c01c69178$d0250d30$2b4d2a97@bagio> <4493AB49.9020105@scottdial.com>
	<4493BC8A.4040709@v.loewis.de>
Message-ID: <1150576316.28709.207.camel@localhost.localdomain>

On Sat, 17-06-2006 at 10:25 +0200, "Martin v. Löwis" wrote:
> Another reason is that I consider VS 2005 buggy, I hope that some
> of the breakage that Microsoft has done to the C library is reverted
> in a future release. VS2005 managed to break compatibility with
> C89 and C99 in a way that made Python fail to start up, also, it
> was possible to have the CRT abort just by calling the builtin open
> with the wrong arguments. There is now a work-around for that breakage,
> but still, I don't trust that VS 2005 is a good product. 

Why should a C++ compiler be able to compile C89 and/or C99 code?


-- 
Jan Claeys


From martin at v.loewis.de  Sat Jun 17 23:17:52 2006
From: martin at v.loewis.de (=?ISO-8859-15?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Sat, 17 Jun 2006 23:17:52 +0200
Subject: [Python-Dev] Python 2.4 extensions require VC 7.1?
In-Reply-To: <1150576316.28709.207.camel@localhost.localdomain>
References: <06Jun16.100609pdt."58641"@synergy1.parc.xerox.com>	<011c01c69178$d0250d30$2b4d2a97@bagio>
	<4493AB49.9020105@scottdial.com>	<4493BC8A.4040709@v.loewis.de>
	<1150576316.28709.207.camel@localhost.localdomain>
Message-ID: <44947180.4060804@v.loewis.de>

Jan Claeys wrote:
> On Sat, 17-06-2006 at 10:25 +0200, "Martin v. Löwis" wrote:
>> Another reason is that I consider VS 2005 buggy, I hope that some
>> of the breakage that Microsoft has done to the C library is reverted
>> in a future release. VS2005 managed to break compatibility with
>> C89 and C99 in a way that made Python fail to start up, also, it
>> was possible to have the CRT abort just by calling the builtin open
>> with the wrong arguments. There is now a work-around for that breakage,
>> but still, I don't trust that VS 2005 is a good product. 
> 
> Why should a C++ compiler be able to compile C89 and/or C99 code?

It shouldn't. It appears you think VC 7.1 is a C++ compiler only;
that is not the case. It also offers support for (some sort of) C.

Regards,
Martin

From noamraph at gmail.com  Sat Jun 17 23:44:52 2006
From: noamraph at gmail.com (Noam Raphael)
Date: Sun, 18 Jun 2006 00:44:52 +0300
Subject: [Python-Dev] Pre-PEP: Allow Empty Subscript List Without
	Parentheses
In-Reply-To: <44945352.2000003@v.loewis.de>
References: <200606161215.41941.gmccaughan@synaptics-uk.com>
	<b348a0850606160710x4a8eb80bv683771c82bde24a7@mail.gmail.com>
	<20060616091435.F332.JCARLSON@uci.edu>
	<b348a0850606171138o36576009sde006466095eb7bb@mail.gmail.com>
	<44945352.2000003@v.loewis.de>
Message-ID: <b348a0850606171444ndf9bdfcq8afcbdea13127873@mail.gmail.com>

2006/6/17, "Martin v. L?wis" <martin at v.loewis.de>:
> Noam Raphael wrote:
> > I meant the extra code for writing a special class to handle scalars,
> > if I decide that the "x[()]" syntax is too ugly or too hard to type,
> > so I write a special class which will allow the syntax "x.value".
>
> What I cannot understand is why you use a zero-dimensional array to
> represent a scalar. Scalars are directly supported in Python:
>
> x = 5

I need a zero-dimensional array as a single cell - an object that
holds a value that can change over time. It works just like a cell in
a spreadsheet: For example, say that if you change the value of cell
A1 to 0.18, cell A2 changes to 5. When using the library I design, you
would write "sheet1[0, 0] = 0.18", and, magically, "sheet1[0, 1]" will
become 5. But in my library, everything is meaningful and doesn't have
to be two-dimensional. So, if in the spreadsheet example, A1 meant the
income tax rate, you would write "income_tax[] = 0.18", and,
magically, "profit['Jerusalem', 2005]" will become 5.

I hope I managed to explain myself - my use case and why the simplest
way to treat scalars like income_tax is as zero-dimensional arrays.
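If a concrete sketch helps, this is roughly what such a cell looks like
with the current syntax (the class is invented purely for illustration;
the real library would also recompute dependent cells on assignment):

    class Cell(object):
        """A zero-dimensional 'array': one value, no indices."""
        def __init__(self, value=None):
            self._value = value

        def __getitem__(self, key):
            if key != ():
                raise IndexError("zero-dimensional cell takes no indices")
            return self._value

        def __setitem__(self, key, value):
            if key != ():
                raise IndexError("zero-dimensional cell takes no indices")
            self._value = value

    income_tax = Cell()
    income_tax[()] = 0.18   # today; the proposal would allow income_tax[] = 0.18
    print income_tax[()]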

> Also, in an assignment, what are you putting on the right-hand side?
> A read access from another zero-dimensional array?
>
I hope my example explained that, but you can put any object there -
for example, you can write "income_tax[] = 0.18".

(If I didn't yet manage to explain myself, please say so - it seems
that it's not a very simple example and I'm not a very good explainer,
at least in English.)

Noam

From noamraph at gmail.com  Sun Jun 18 00:39:50 2006
From: noamraph at gmail.com (Noam Raphael)
Date: Sun, 18 Jun 2006 01:39:50 +0300
Subject: [Python-Dev] Pre-PEP: Allow Empty Subscript List Without
	Parentheses
In-Reply-To: <b348a0850606171138o36576009sde006466095eb7bb@mail.gmail.com>
References: <200606161215.41941.gmccaughan@synaptics-uk.com>
	<b348a0850606160710x4a8eb80bv683771c82bde24a7@mail.gmail.com>
	<20060616091435.F332.JCARLSON@uci.edu>
	<b348a0850606171138o36576009sde006466095eb7bb@mail.gmail.com>
Message-ID: <b348a0850606171539j49dd564aj23294d7d32eb1ede@mail.gmail.com>

Hi, sorry for my repeated posts. I just wanted to say that I improved
my patch a little bit, so it does exactly the same thing, but with
smaller code: you can see for yourself at
http://python.pastebin.com/715221 - it changed exactly 10 lines of
code, and adds additional 8 lines, all of them really short and
obvious.

I thought that it might convince someone that it's just a little
generalization of syntax, nothing frightening...

Noam

2006/6/17, Noam Raphael <noamraph at gmail.com>:
> I know that it's not such a big difference, but I'm not talking about
> a big change to the language either - it affects less than 20 lines of
> code (probably could be done with even less), and doesn't cause any
> problems with anything.

From talin at acm.org  Sun Jun 18 01:30:45 2006
From: talin at acm.org (Talin)
Date: Sat, 17 Jun 2006 16:30:45 -0700
Subject: [Python-Dev] Pre-PEP: Allow Empty Subscript
	List	Without	Parentheses
In-Reply-To: <44945352.2000003@v.loewis.de>
References: <200606161215.41941.gmccaughan@synaptics-uk.com>	<b348a0850606160710x4a8eb80bv683771c82bde24a7@mail.gmail.com>	<20060616091435.F332.JCARLSON@uci.edu>	<b348a0850606171138o36576009sde006466095eb7bb@mail.gmail.com>
	<44945352.2000003@v.loewis.de>
Message-ID: <449490A5.7090404@acm.org>

Martin v. L?wis wrote:
> Noam Raphael wrote:
> 
>>I meant the extra code for writing a special class to handle scalars,
>>if I decide that the "x[()]" syntax is too ugly or too hard to type,
>>so I write a special class which will allow the syntax "x.value".
> 
> 
> What I cannot understand is why you use a zero-dimensional array to
> represent a scalar. Scalars are directly supported in Python:
> 
> x = 5
> 
> Also, in an assignment, what are you putting on the right-hand side?
> A read access from another zero-dimensional array?

Ok, so in order to clear up the confusion here, I am going to take a 
moment to try and explain Noam's proposal in clearer language.

Note that I have no opinions about the merits of the proposal itself; 
however, the lack of understanding here bothers me :)

The motivation, as I understand it, is one of mathematical consistency. 
Let's take a moment and think about arrays in terms of geometry. We all 
learned in school that 3 dimensions defines a volume, 2 dimensions 
defines a plane, 1 dimension defines a line, and 0 dimensions defines a 
point.

Moreover, each N-dimensional entity can be converted to one of lower 
order by setting one of its dimensions to 0. So a volume with one 
dimension set to zero becomes a plane, and so on.

Now think about this with respect to arrays. A 3-dimensional array can 
be converted into a 2-dimensional array by setting one of its dimensions 
to 1. So a 5 x 5 array is equivalent to a 5 x 5 x 1 array.

Similarly, a 3-dimensional array can be converted into a 1-dimensional 
array by setting two of its dimensions to 1: So an array of length 5 is 
equivalent to a 5 x 1 x 1 array.

We see, then, a general rule that an N-dimensional array can be reduced 
to M dimensions by setting (N-M) of its dimensions to 1.

So by this rule, if we reduce a 3d array to zero dimensions, we would 
have an array that has one element: 1 x 1 x 1.

Similarly, each time we reduce the dimension by 1, we also reduce the 
number of indices needed to access the elements of the array. So a 3-d 
array requires 3 coordinates, a 2-d array requires 2 coordinates, and so on.

It should be noted that this zero-dimensional array is not exactly a 
normal scalar. It is a scalar in the sense that it has no dimensions, 
but it is still an array in the sense that it contains a value which 
is distinct from the array itself. The zero-dimensional array is still a 
container of other values; however, it can only hold one value. This is 
different from a normal scalar, which is simply a value, and not a 
container.
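
For a concrete illustration, this is exactly how NumPy's zero-dimensional
arrays behave (assuming NumPy is available; minor details may vary by
version): an empty shape, indexed with an empty tuple of coordinates, and
mutable where a plain scalar is not.

import numpy

a = numpy.array(5)   # a zero-dimensional array
print a.ndim         # 0
print a.shape        # ()
print a[()]          # 5  -- indexed with an empty tuple of coordinates
a[()] = 7            # the container is mutable, unlike the scalar 5 itself
print a[()]          # 7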

Now, as to the specifics of Noam's problem: Apparently what he is trying 
to do is what many other people have done, which is to use Python as a 
base for some other high-level language, building on top of Python 
syntax and using the various operator overloads to define the semantics 
of the language.

However, what he's discovering is that there are cases where his 
syntactical requirements and the syntactical rules of Python don't match.

Now, typically when this occurs, the person who is creating the language 
knows that there is a rationale for why that particular syntax makes 
sense in their language. What they often do in this case is to try and 
convince the Python community that this rationale also applies to Python 
in addition to their own made-up language. This is especially the case 
when the proposed change gives meaning to what would formerly have been 
an error. (I sometimes suspect that the guiding design principle of Perl 
is that all possible permutations of ASCII input characters should 
eventually be assigned some syntactically valid meaning.)

Historically, I can say that such efforts are almost always rebuffed - 
while Python may be good at being a base for other languages, this is 
not one of the primary design goals of the language as I understand it.

My advice to people in this situation is to consider that perhaps some 
level of translation between their syntax and Python syntax may be in 
order. It would not be hard for the interactive interpreter to convert 
instances of [] into [()], for example.

-- Talin

From aahz at pythoncraft.com  Sun Jun 18 02:26:31 2006
From: aahz at pythoncraft.com (Aahz)
Date: Sat, 17 Jun 2006 17:26:31 -0700
Subject: [Python-Dev] Pre-PEP: Allow Empty Subscript List Without
	Parentheses
In-Reply-To: <b348a0850606171539j49dd564aj23294d7d32eb1ede@mail.gmail.com>
References: <200606161215.41941.gmccaughan@synaptics-uk.com>
	<b348a0850606160710x4a8eb80bv683771c82bde24a7@mail.gmail.com>
	<20060616091435.F332.JCARLSON@uci.edu>
	<b348a0850606171138o36576009sde006466095eb7bb@mail.gmail.com>
	<b348a0850606171539j49dd564aj23294d7d32eb1ede@mail.gmail.com>
Message-ID: <20060618002631.GB27666@panix.com>

On Sun, Jun 18, 2006, Noam Raphael wrote:
>
> Hi, sorry for my repeated posts. I just wanted to say that I improved
> my patch a little bit, so it does exactly the same thing, but with
> smaller code: you can see for yourself at
> http://python.pastebin.com/715221 - it changed exactly 10 lines of
> code, and adds additional 8 lines, all of them really short and
> obvious.
> 
> I thought that it might convince someone that it's just a little
> generalization of syntax, nothing frightening...

Not really.  After reading this thread, my opinion is that you have a
relatively narrow corner case and should find another way to get what
you want.  -1
-- 
Aahz (aahz at pythoncraft.com)           <*>         http://www.pythoncraft.com/

"I saw `cout' being shifted "Hello world" times to the left and stopped
right there."  --Steve Gonedes

From anthony at interlink.com.au  Sun Jun 18 04:18:47 2006
From: anthony at interlink.com.au (Anthony Baxter)
Date: Sun, 18 Jun 2006 12:18:47 +1000
Subject: [Python-Dev] Beta 1 schedule ? (Bug in stringobject?)
In-Reply-To: <44931D4B.5060507@egenix.com>
References: <e6u6at$sbu$1@sea.gmane.org> <e6uaqc$c5r$1@sea.gmane.org>
	<44931D4B.5060507@egenix.com>
Message-ID: <200606181218.50898.anthony@interlink.com.au>

On Saturday 17 June 2006 07:06, M.-A. Lemburg wrote:
> Fredrik Lundh wrote:
> > M.-A. Lemburg wrote:
> >> Since replace() only works on string objects, it appears
> >> as if a temporary string object would have to be created.
> >> However, this would involve an unnecessary allocation
> >> and copy process... it appears as if the refactoring
> >> during the NFS sprint left out that case.
> >
> > what's the beta 1 status ?  fixing this should be trivial, but I
> > don't have any cycles to spare today.

I just confirmed that Martin and Fred are ready to do this on Tuesday. 
We slipped a couple of days because of pretty bad buildbot breakage. 

> I'd also like to get the new winerror module in before
> beta1 is released - documentation will follow next week:

Hm. A new python module should be OK - but I was under the impression 
that then large piles of the standard library would be updated to use 
this new module. I'm less happy (much less) about this happening for 
2.5. 

> Is it OK to first check in a pure Python version and then
> replace this with a C implementation having the same interface
> later on in the beta cycle ?

How big is it likely to be? How much of a pain will it be to make it 
work with various versions of Windows?

Anthony
-- 
Anthony Baxter     <anthony at interlink.com.au>
It's never too late to have a happy childhood.

From anthony at interlink.com.au  Sun Jun 18 04:19:39 2006
From: anthony at interlink.com.au (Anthony Baxter)
Date: Sun, 18 Jun 2006 12:19:39 +1000
Subject: [Python-Dev] Bug in stringobject?
In-Reply-To: <e6uaqc$c5r$1@sea.gmane.org>
References: <e6u6at$sbu$1@sea.gmane.org> <4492ACFB.7050407@egenix.com>
	<e6uaqc$c5r$1@sea.gmane.org>
Message-ID: <200606181219.40316.anthony@interlink.com.au>

On Friday 16 June 2006 23:13, Fredrik Lundh wrote:
> M.-A. Lemburg wrote:
> > Since replace() only works on string objects, it appears
> > as if a temporary string object would have to be created.
> > However, this would involve an unnecessary allocation
> > and copy process... it appears as if the refactoring
> > during the NFS sprint left out that case.
>
> what's the beta 1 status ?  fixing this should be trivial, but I
> don't have any cycles to spare today.

If you don't get time before the code freeze for beta 1, I don't have 
a problem with this bug fix going in for beta 2.

Anthony
-- 
Anthony Baxter     <anthony at interlink.com.au>
It's never too late to have a happy childhood.

From dynkin at gmail.com  Sun Jun 18 05:16:13 2006
From: dynkin at gmail.com (George Yoshida)
Date: Sun, 18 Jun 2006 12:16:13 +0900
Subject: [Python-Dev] uuid backward compatibility
Message-ID: <2f188ee80606172016y52ed858ep2c9b62972684b3fe@mail.gmail.com>

Python 2.5 ships with uuid.py and now I want to ask you
about its backward compatibility.

uuid.py says in its docstring:

  This module works with Python 2.3 or higher.

And my question is:
  Do we plan to make it 2.3 compatible in future releases?

If so, uuid needs to be listed in PEP 291.
Otherwise, the comment is misleading.

Current uuid.py uses ctypes, but it checks for ctypes availability,
and I couldn't find any other newly added features, so backward
compatibility is still retained.

I just want to make sure.

Thanks in advance.

-- 
george

From martin at v.loewis.de  Sun Jun 18 08:20:05 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Sun, 18 Jun 2006 08:20:05 +0200
Subject: [Python-Dev] Pre-PEP: Allow Empty
	Subscript	List	Without	Parentheses
In-Reply-To: <449490A5.7090404@acm.org>
References: <200606161215.41941.gmccaughan@synaptics-uk.com>	<b348a0850606160710x4a8eb80bv683771c82bde24a7@mail.gmail.com>	<20060616091435.F332.JCARLSON@uci.edu>	<b348a0850606171138o36576009sde006466095eb7bb@mail.gmail.com>	<44945352.2000003@v.loewis.de>
	<449490A5.7090404@acm.org>
Message-ID: <4494F095.4000806@v.loewis.de>

Talin wrote:
> The motivation, as I understand it, is one of mathematical consistency. 

Noam told me in private email that this is *not* the motivation.
Instead, he wants mutable values. This, in turn, he wants so he
can catch modifications.

Regards,
Martin

From shane at hathawaymix.org  Sun Jun 18 09:54:36 2006
From: shane at hathawaymix.org (Shane Hathaway)
Date: Sun, 18 Jun 2006 01:54:36 -0600
Subject: [Python-Dev] Pre-PEP: Allow Empty Subscript List
	Without	Parentheses
In-Reply-To: <b348a0850606171444ndf9bdfcq8afcbdea13127873@mail.gmail.com>
References: <200606161215.41941.gmccaughan@synaptics-uk.com>	<b348a0850606160710x4a8eb80bv683771c82bde24a7@mail.gmail.com>	<20060616091435.F332.JCARLSON@uci.edu>	<b348a0850606171138o36576009sde006466095eb7bb@mail.gmail.com>	<44945352.2000003@v.loewis.de>
	<b348a0850606171444ndf9bdfcq8afcbdea13127873@mail.gmail.com>
Message-ID: <449506BC.3070303@hathawaymix.org>

Noam Raphael wrote:
> 2006/6/17, "Martin v. L?wis" <martin at v.loewis.de>:
>> Noam Raphael wrote:
>>> I meant the extra code for writing a special class to handle scalars,
>>> if I decide that the "x[()]" syntax is too ugly or too hard to type,
>>> so I write a special class which will allow the syntax "x.value".
>> What I cannot understand is why you use a zero-dimensional array to
>> represent a scalar. Scalars are directly supported in Python:
>>
>> x = 5
> 
> I need a zero-dimensional array as a single cell - an object that
> holds a value that can change over time. It works just like a cell in
> a spreadsheet: For example, say that if you change the value of cell
> A1 to 0.18, cell A2 changes to 5. When using the library I design, you
> would write "sheet1[0, 0] = 0.18", and, magically, "sheet1[0, 1]" will
> become 5. But in my library, everything is meaningful and doesn't have
> to be two-dimensional. So, if in the spreadsheet example, A1 meant the
> income tax rate, you would write "income_tax[] = 0.18", and,
> magically, "profit['Jerusalem', 2005]" will become 5.

Try to think more about how users will use your API.  You haven't
specified where those names (sheet1, income_tax, and profit) are coming
from.  What do you expect users of your library to do to bring those
names into their namespace?

Let me take a wild guess so you can see what I'm asking:

import spreadsheetlib
sheet1 = spreadsheetlib.sheet('sheet1')
income_tax = spreadsheetlib.cell('income_tax')
profit = spreadsheetlib.cell('profit')

So far, that's a mess!  What are you really going to do?  Will it be
better?  This could be a much greater concern than optimizing away
parentheses.

A possible way to solve the namespace problem is to make all names an
attribute of some object.

from spreadsheetlib import sp
sp.sheet1[0, 0] = 0.18
assert sp.sheet1[0, 1] == 5
sp.income_tax = 0.18
assert sp.profit['Jerusalem', 2005] == 5

That would be a pretty usable API, IMHO, and you'd be able to write it
now without any changes to Python.

Shane


From python-dev at zesty.ca  Sun Jun 18 10:05:58 2006
From: python-dev at zesty.ca (Ka-Ping Yee)
Date: Sun, 18 Jun 2006 03:05:58 -0500 (CDT)
Subject: [Python-Dev] uuid backward compatibility
In-Reply-To: <2f188ee80606172016y52ed858ep2c9b62972684b3fe@mail.gmail.com>
References: <2f188ee80606172016y52ed858ep2c9b62972684b3fe@mail.gmail.com>
Message-ID: <Pine.LNX.4.58.0606180259550.698@server1.LFW.org>

On Sun, 18 Jun 2006, George Yoshida wrote:
> uuid.py says in its docstring:
>   This module works with Python 2.3 or higher.
>
> And my question is:
>   Do we plan to make it 2.3 compatible in future releases?
>
> If so, uuid needs to be listed in PEP 291.
> Otherwise, the comment is misleading.

The comment isn't misleading, because the module actually does work
with Python 2.3.  It would only become misleading if it were later
changed to break compatibility with Python 2.3 without updating the
comment.

I intentionally avoided breaking compatibility with Python 2.3 so
that there would be just one current version of uuid.py, both in
the svn repository and available for use with existing installations
of Python, since Python 2.3 is so widely deployed right now.

Anyway, it looks like someone has added this module to the list of
backward-compatible modules in PEP 291.  Regarding whether we want
it to be on that list (i.e. whether or not this backward-compatibility
should be retained as Python moves forward), i'm happy to have it
either way.


-- ?!ng

From martin at v.loewis.de  Sun Jun 18 10:23:53 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Sun, 18 Jun 2006 10:23:53 +0200
Subject: [Python-Dev] uuid backward compatibility
In-Reply-To: <Pine.LNX.4.58.0606180259550.698@server1.LFW.org>
References: <2f188ee80606172016y52ed858ep2c9b62972684b3fe@mail.gmail.com>
	<Pine.LNX.4.58.0606180259550.698@server1.LFW.org>
Message-ID: <44950D99.9000606@v.loewis.de>

Ka-Ping Yee wrote:
> Anyway, it looks like someone has added this module to the list of
> backward-compatible modules in PEP 291.  Regarding whether we want
> it to be on that list (i.e. whether or not this backward-compatibility
> should be retained as Python moves forward), i'm happy to have it
> either way.

In that case, I think we shouldn't require 2.3 compatibility. There
is no reason to deliberately break it either, of course.

As for the comment: It apparently *is* misleading: George mistakenly
took it as a requirement for future changes, rather than as a factual
statement about the present (even though it uses the simple present
tense). Anybody breaking 2.3 compatibility will have to remember
to remove the comment, which he likely won't.

Regards,
Martin

From g.brandl at gmx.net  Sun Jun 18 11:39:09 2006
From: g.brandl at gmx.net (Georg Brandl)
Date: Sun, 18 Jun 2006 11:39:09 +0200
Subject: [Python-Dev] Improve error msgs?
In-Reply-To: <e70f2d$ssn$1@sea.gmane.org>
References: <e6obkn$6ao$1@sea.gmane.org> <e70f2d$ssn$1@sea.gmane.org>
Message-ID: <e734nn$4qf$1@sea.gmane.org>

Georg Brandl wrote:
> Georg Brandl wrote:
>> In abstract.c, there are many error messages like
>> 
>> type_error("object does not support item assignment");
>> 
>> It helps debugging if the object's type was prepended.
>> Should I go through the code and try to enhance them
>> where possible?
> 
> So that's definite "perhaps"?
> 
> Anyway, posted patch 1507676.

Sigh. I guess I'll have to commit it to get a second (actually,
third, thanks Armin) opinion...

Georg


From mal at egenix.com  Sun Jun 18 13:07:18 2006
From: mal at egenix.com (M.-A. Lemburg)
Date: Sun, 18 Jun 2006 13:07:18 +0200
Subject: [Python-Dev] Adding winerror module (Beta 1 schedule ?)
In-Reply-To: <200606181218.50898.anthony@interlink.com.au>
References: <e6u6at$sbu$1@sea.gmane.org>
	<e6uaqc$c5r$1@sea.gmane.org>	<44931D4B.5060507@egenix.com>
	<200606181218.50898.anthony@interlink.com.au>
Message-ID: <449533E6.2060808@egenix.com>

Anthony Baxter wrote:
>> I'd also like to get the new winerror module in before
>> beta1 is released - documentation will follow next week:
> 
> Hm. A new python module should be OK - but I was under the impression 
> that then large piles of the standard library would be updated to use 
> this new module. I'm less happy (much less) about this happening for 
> 2.5. 

I don't think that a lot of code currently uses the Windows
error codes. If code does use the Windows error codes, then
they usually hard-code the values in the module, so replacing
those values with ones from winerror wouldn't cause breakage.
winerror only contains mappings from error codes to error
names and vice-versa.

I agree, that it's a bit late in the 2.5 release process
to add a new module. Perhaps it should wait until 2.6.

Note that the module was motivated by a change Martin
implemented which caused several os module APIs
to return Windows error codes rather than POSIX ones
(r45925). This was later reverted by Martin (r45964);
the Windows error codes are now available through
the .winerror attribute instead.

The winerror module is intended to let code that uses
.winerror refer to symbolic error names (much like errno
serves this purpose for .errno).
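
To illustrate the intended usage pattern (the winerror half is of course
hypothetical until the module is actually checked in; ERROR_FILE_NOT_FOUND
is the WinError.h name for error code 2):

# This works today with errno:
import errno
try:
    open('no-such-file')
except IOError, e:
    if e.errno == errno.ENOENT:
        print 'file is missing'

# The winerror analogue would look like this on Windows:
#
#   import winerror
#   try:
#       open('no-such-file')
#   except WindowsError, e:
#       if e.winerror == winerror.ERROR_FILE_NOT_FOUND:
#           print 'file is missing'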

>> Is it OK to first check in a pure Python version and then
>> replace this with a C implementation having the same interface
>> later on in the beta cycle ?
> 
> How big is it likely to be? How much of a pain will it be to make it 
> work with various versions of Windows?

Not much: the C module will contain the same values as
the Python file (which are extracted from the standard
Windows WinError.h file).

The only difference is that the C module will use a static C
array to store the values, which results in less heap memory
being used.

Both modules are (will be) generated from the WinError.h file
(see the SF patch).

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 18 2006)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________

::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! ::::

From fredrik at pythonware.com  Sun Jun 18 15:17:31 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Sun, 18 Jun 2006 15:17:31 +0200
Subject: [Python-Dev] Adding winerror module (Beta 1 schedule ?)
In-Reply-To: <449533E6.2060808@egenix.com>
References: <e6u6at$sbu$1@sea.gmane.org>	<e6uaqc$c5r$1@sea.gmane.org>	<44931D4B.5060507@egenix.com>	<200606181218.50898.anthony@interlink.com.au>
	<449533E6.2060808@egenix.com>
Message-ID: <e73jp8$b31$1@sea.gmane.org>

M.-A. Lemburg wrote:

> The winerror module is intended to let code that uses
> .winerror refer to symbolic error names (much like errno
> serves this purpose for .errno).

couldn't this be implemented as an extra table in errno instead ?

</F>


From ncoghlan at iinet.net.au  Sun Jun 18 16:42:08 2006
From: ncoghlan at iinet.net.au (Nick Coghlan)
Date: Mon, 19 Jun 2006 00:42:08 +1000
Subject: [Python-Dev] PEP 338 vs PEP 328 - a limitation of the -m switch
Message-ID: <44956640.3010003@iinet.net.au>

The implementations of PEP 328 (explicit relative imports) and PEP 338 
(executing modules as scripts) currently have a fight over the __name__ 
attribute of a module.

The -m switch sets __name__ to '__main__', even though it knows the module's 
real name. This is so that "if __name__ == '__main__':" blocks get executed 
properly in the main module.

Relative imports, however, use __name__ to figure out the parent package, 
which obviously won't work if the -m switch has clobbered it.
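
(As a simplified Python sketch of that lookup - not the actual import.c
code - a relative import derives the parent package from __name__ roughly
like this:)

def parent_package(module_name, level=1):
    # A module's package is __name__ minus its last 'level' components,
    # e.g. 'pkg.sub.mod' -> 'pkg.sub' for a single leading dot.
    return '.'.join(module_name.split('.')[:-level])

print parent_package('pkg.sub.mod')   # 'pkg.sub'
print parent_package('__main__')      # ''  - nothing to anchor the import to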

I think this is a solvable problem, but with beta 1 going out in a couple of 
days, I don't know if it's too late in the 2.5 cycle to fix it.

If Anthony's willing to class this as a bug fix, it should be possible to get 
it sorted out by beta 2. I think fixing it will actually be easier than trying 
to write documentation explaining why it doesn't work right ;)

The 'bug fix' solution would be:

   1. Change main.c and PySys_SetPath so that '' is NOT prepended to sys.path 
when the -m switch is used
   2. Change runpy.run_module to add a __pkg_name__ attribute if the module 
being executed is inside a package
   3. Change import.c to check for __pkg_name__ if (and only if) __name__ == 
'__main__' and use __pkg_name__ if it is found.

If we don't fix it, I'd like to document somewhere that you can't currently 
rely on relative imports if you want to be able to execute your module with 
the '-m' switch.

However, the question I have then is. . . where? It's pretty esoteric, so I 
don't really want to put it in the tutorial, but I can't think of any other 
real documentation we have that covers how to launch the interpreter or the 
"if __name__ == '__main__':" trick.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------
             http://www.boredomandlaziness.org

From nnorwitz at gmail.com  Sun Jun 18 19:18:55 2006
From: nnorwitz at gmail.com (Neal Norwitz)
Date: Sun, 18 Jun 2006 10:18:55 -0700
Subject: [Python-Dev] Adding winerror module (Beta 1 schedule ?)
In-Reply-To: <449533E6.2060808@egenix.com>
References: <e6u6at$sbu$1@sea.gmane.org> <e6uaqc$c5r$1@sea.gmane.org>
	<44931D4B.5060507@egenix.com>
	<200606181218.50898.anthony@interlink.com.au>
	<449533E6.2060808@egenix.com>
Message-ID: <ee2a432c0606181018y65cfd921q6ab48548e919cb38@mail.gmail.com>

On 6/18/06, M.-A. Lemburg <mal at egenix.com> wrote:
> Anthony Baxter wrote:
> >> I'd also like to get the new winerror module in before
> >> beta1 is released - documentation will follow next week:
> >
> > Hm. A new python module should be OK - but I was under the impression
> > that then large piles of the standard library would be updated to use
> > this new module. I'm less happy (much less) about this happening for
> > 2.5.
>
> I don't think that a lot of code currently uses the Windows
> error codes. If code does use the Windows error codes, then
> they usually hard-code the values in the module, so replacing
> those values with ones from winerror wouldn't cause breakage.
> winerror only contains mappings from error codes to error
> names and vice-versa.
>
> I agree, that it's a bit late in the 2.5 release process
> to add a new module. Perhaps it should wait until 2.6.

That's a big part of the reason I'd like to wait.  I think Martin had
something about it he didn't like.  If the API isn't right, it's hard
to change.  If we wait for 2.6, we can have more confidence the API
will be good and we won't have to rush anything.

n

From noamraph at gmail.com  Sun Jun 18 20:02:29 2006
From: noamraph at gmail.com (Noam Raphael)
Date: Sun, 18 Jun 2006 21:02:29 +0300
Subject: [Python-Dev] Pre-PEP: Allow Empty Subscript List Without
	Parentheses
In-Reply-To: <449506BC.3070303@hathawaymix.org>
References: <200606161215.41941.gmccaughan@synaptics-uk.com>
	<b348a0850606160710x4a8eb80bv683771c82bde24a7@mail.gmail.com>
	<20060616091435.F332.JCARLSON@uci.edu>
	<b348a0850606171138o36576009sde006466095eb7bb@mail.gmail.com>
	<44945352.2000003@v.loewis.de>
	<b348a0850606171444ndf9bdfcq8afcbdea13127873@mail.gmail.com>
	<449506BC.3070303@hathawaymix.org>
Message-ID: <b348a0850606181102i61a01dedy721ffcf3b25e9a85@mail.gmail.com>

2006/6/18, Shane Hathaway <shane at hathawaymix.org>:
> Try to think more about how users will use your API.  You haven't
> specified where those names (sheet1, income_tax, and profit) are coming
> from.  What do you expect users of your library to do to bring those
> names into their namespace?
>
That's a good question. I'm going to do some bytecode hacks! Something
like this:

from spreadsheetlib import SourceCube, CalculatedCube
income_tax = SourceCube([])
income_tax[] = 0.18
years = set([2003, 2004, 2005])
profit = SourceCube([years])
profit[2003] = 1000; profit[2004] = 2000; profit[2005] = 2500
real_profit = CalculatedCube([years],
    lambda year: profit[year] / (1 + income_tax[]))
print real_profit[2004]
(1694.9152542372883)

It may be what Talin meant about a "higher level language", but I
don't really change the language - I only inspect the function to see
what other changeable objects it depends on. Those changeable objects
implement some sort of change notification protocol, which allows the
system to automatically recalculate the result when one of the values
it depends on changes.

(Actually, I intend to change the function to depend directly on the
changeable object instead of looking it up every time in the global
namespace, but I don't think that changes the explanation.)
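
(To show what I mean by a change notification protocol, here is a very
rough sketch - the class names are made up for the example, and my real
code discovers dependencies via bytecode inspection rather than taking
them explicitly:)

class SourceCell(object):
    def __init__(self, value):
        self.value = value
        self.dependents = []
    def set(self, value):
        self.value = value
        for dependent in self.dependents:
            dependent.recalculate()

class CalculatedCell(object):
    def __init__(self, func, *sources):
        self.func = func
        self.sources = sources
        for source in sources:
            source.dependents.append(self)
        self.recalculate()
    def recalculate(self):
        self.value = self.func(*[s.value for s in self.sources])

income_tax = SourceCell(0.18)
profit_2004 = SourceCell(2000)
real_profit = CalculatedCell(lambda p, t: p / (1 + t), profit_2004, income_tax)
print real_profit.value   # 1694.91525424
income_tax.set(0.20)
print real_profit.value   # recalculated automatically: 1666.66666667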

Note that requiring all changeable objects to be attributes of
some other object won't remove the need for bytecode hacking: the only
alternative is to explicitly specify a list of all the objects that the
function depends on, and then supply a function that takes these as
arguments. That would really be inconvenient.

But thanks for the suggestion!

Noam

From guido at python.org  Sun Jun 18 20:07:32 2006
From: guido at python.org (Guido van Rossum)
Date: Sun, 18 Jun 2006 11:07:32 -0700
Subject: [Python-Dev] Pre-PEP: Allow Empty Subscript List Without
	Parentheses
In-Reply-To: <4494F095.4000806@v.loewis.de>
References: <200606161215.41941.gmccaughan@synaptics-uk.com>
	<b348a0850606160710x4a8eb80bv683771c82bde24a7@mail.gmail.com>
	<20060616091435.F332.JCARLSON@uci.edu>
	<b348a0850606171138o36576009sde006466095eb7bb@mail.gmail.com>
	<44945352.2000003@v.loewis.de> <449490A5.7090404@acm.org>
	<4494F095.4000806@v.loewis.de>
Message-ID: <ca471dc20606181107v77ea473t1cba324ea923aadc@mail.gmail.com>

On 6/17/06, "Martin v. L?wis" <martin at v.loewis.de> wrote:
> Talin wrote:
> > The motivation, as I understand it, is one of mathematical consistency.
>
> Noam told me in private email that this is *not* the motivation.
> Instead, he wants mutable values. This, in turn, he wants so he
> can catch modifications.

That cannot be the only motivation. He can have mutable values today
without any new syntax. (Either he can use x[()] or he can use
attribute assignment.)

But more to the point, this discussion is pointless, since I won't
accept the syntax change.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Sun Jun 18 20:18:39 2006
From: guido at python.org (Guido van Rossum)
Date: Sun, 18 Jun 2006 11:18:39 -0700
Subject: [Python-Dev] PEP 338 vs PEP 328 - a limitation of the -m switch
In-Reply-To: <44956640.3010003@iinet.net.au>
References: <44956640.3010003@iinet.net.au>
Message-ID: <ca471dc20606181118g7af0d90fu1c620f5fc7b120ba@mail.gmail.com>

On 6/18/06, Nick Coghlan <ncoghlan at iinet.net.au> wrote:
> The implementations of PEP 328 (explicit relative imports) and PEP 338
> (executing modules as scripts) currently have a fight over the __name__
> attribute of a module.
>
> The -m switch sets __name__ to '__main__', even though it knows the module's
> real name. This is so that "if __name__ == '__main__':" blocks get executed
> properly in the main module.
>
> Relative imports, however, use __name__ to figure out the parent package,
> which obviously won't work if the -m switch has clobbered it.
>
> I think this is a solvable problem, but with beta 1 going out in a couple of
> days, I don't know if it's too late in the 2.5 cycle to fix it.
>
> If Anthony's willing to class this as a bug fix, it should be possible to get
> it sorted out by beta 2. I think fixing it will actually be easier than trying
> to write documentation explaining why it doesn't work right ;)
>
> The 'bug fix' solution would be:
>
>    1. Change main.c and PySys_SetPath so that '' is NOT prepended to sys.path
> when the -m switch is used
>    2. Change runpy.run_module to add a __pkg_name__ attribute if the module
> being executed is inside a package
>    3. Change import.c to check for __pkg_name__ if (and only if) __name__ ==
> '__main__' and use __pkg_name__ if it is found.

That's pretty heavy-handed for a pretty esoteric use case. (Except #1,
which I think should be done regardless as otherwise we'd get a
messed-up sys.path.)

I'd like to understand the use case better. Why can't a "script"
module inside a package use absolute imports to reference other parts
of the package?

> If we don't fix it, I'd like to document somewhere that you can't currently
> rely on relative imports if you want to be able to execute your module with
> the '-m' switch.
>
> However, the question I have then is. . . where? It's pretty esoteric, so I
> don't really want to put it in the tutorial, but I can't think of any other
> real documentation we have that covers how to launch the interpreter or the
> "if __name__ == '__main__':" trick.

With the docs for -m, obviously.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Sun Jun 18 20:23:32 2006
From: guido at python.org (Guido van Rossum)
Date: Sun, 18 Jun 2006 11:23:32 -0700
Subject: [Python-Dev] An obscene computed goto bytecode hack for
	"switch" :)
In-Reply-To: <20060617111841.GA19995@code0.codespeak.net>
References: <5.1.1.6.0.20060616210159.04480eb0@sparrow.telecommunity.com>
	<20060617111841.GA19995@code0.codespeak.net>
Message-ID: <ca471dc20606181123l58a29e7ajf7c56ecea6feba75@mail.gmail.com>

On 6/17/06, Armin Rigo <arigo at tunes.org> wrote:
> The reason is that the details of the stack behavior of END_FINALLY are
> messy in CPython.  The finally blocks are the only place where the depth
> of the stack is not known in advance: depending on how the finally block
> is entered, there will be between one and three objects pushed (a single
> None, or an int and another object, or an exception type, instance and
> traceback).

FWIW, I see this as an unintended accident and would gratefully accept
fixes to the bytecode that made this behavior more regular.

I'm not in favor of abusing this to generate a computed goto, and I
don't see a need for that -- if we decide to add that (either as
syntax or as an automatic optimization) I see no problem adding a new
bytecode. Python's bytecode is not sacred or frozen like Java's.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From mal at egenix.com  Sun Jun 18 20:57:39 2006
From: mal at egenix.com (M.-A. Lemburg)
Date: Sun, 18 Jun 2006 20:57:39 +0200
Subject: [Python-Dev] Adding winerror module (Beta 1 schedule ?)
In-Reply-To: <ee2a432c0606181018y65cfd921q6ab48548e919cb38@mail.gmail.com>
References: <e6u6at$sbu$1@sea.gmane.org> <e6uaqc$c5r$1@sea.gmane.org>	
	<44931D4B.5060507@egenix.com>	
	<200606181218.50898.anthony@interlink.com.au>	
	<449533E6.2060808@egenix.com>
	<ee2a432c0606181018y65cfd921q6ab48548e919cb38@mail.gmail.com>
Message-ID: <4495A223.2000202@egenix.com>

Neal Norwitz wrote:
> On 6/18/06, M.-A. Lemburg <mal at egenix.com> wrote:
>> Anthony Baxter wrote:
>> >> I'd also like to get the new winerror module in before
>> >> beta1 is released - documentation will follow next week:
>> >
>> > Hm. A new python module should be OK - but I was under the impression
>> > that then large piles of the standard library would be updated to use
>> > this new module. I'm less happy (much less) about this happening for
>> > 2.5.
>>
>> I don't think that a lot of code currently uses the Windows
>> error codes. If code does use the Windows error codes, then
>> they usually hard-code the values in the module, so replacing
>> those values with ones from winerror wouldn't cause breakage.
>> winerror only contains mappings from error codes to error
>> names and vice-versa.
>>
>> I agree, that it's a bit late in the 2.5 release process
>> to add a new module. Perhaps it should wait until 2.6.
> 
> That's a big part of the reason I'd like to wait.  I think Martin had
> something about it he didn't like.  If the API isn't right, it's hard
> to change.  If we wait for 2.6, we can have more confidence the API
> will be good and we won't have to rush anything.

Ok, let's wait for Python 2.6...

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 18 2006)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________

::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! ::::

From brett at python.org  Sun Jun 18 21:50:00 2006
From: brett at python.org (Brett Cannon)
Date: Sun, 18 Jun 2006 12:50:00 -0700
Subject: [Python-Dev] uuid backward compatibility
In-Reply-To: <44950D99.9000606@v.loewis.de>
References: <2f188ee80606172016y52ed858ep2c9b62972684b3fe@mail.gmail.com>
	<Pine.LNX.4.58.0606180259550.698@server1.LFW.org>
	<44950D99.9000606@v.loewis.de>
Message-ID: <bbaeab100606181249o1540989fod318bba817dde348@mail.gmail.com>

On 6/18/06, "Martin v. L?wis" <martin at v.loewis.de> wrote:
> Ka-Ping Yee wrote:
> > Anyway, it looks like someone has added this module to the list of
> > backward-compatible modules in PEP 291.  Regarding whether we want
> > it to be on that list (i.e. whether or not this backward-compatibility
> > should be retained as Python moves forward), i'm happy to have it
> > either way.
>
> In that case, I think we shouldn't require 2.3 compatibility. There
> is no reason to deliberately break it either, of course.
>

I agree with Martin.  We can try to avoid the issue (and usually people
should, to make backporting fixes easier), but adding that hindrance can be
a real pain, especially as we get farther and farther away from 2.3.

> As for the comment: It apparently *is* misleading: George mistakenly
> took it as a requirement for future changes, rather than as a factual
> statement about the present (even though it uses the simple present
> tense). Anybody breaking 2.3 compatibility will have to remember
> to remove the comment, which he likely won't.
>


I think it is better to add a comment somewhere in the external release
saying that it is backwards compatible, but leave it out of the core.

-Brett

From brett at python.org  Sun Jun 18 21:52:07 2006
From: brett at python.org (Brett Cannon)
Date: Sun, 18 Jun 2006 12:52:07 -0700
Subject: [Python-Dev] Improve error msgs?
In-Reply-To: <e734nn$4qf$1@sea.gmane.org>
References: <e6obkn$6ao$1@sea.gmane.org> <e70f2d$ssn$1@sea.gmane.org>
	<e734nn$4qf$1@sea.gmane.org>
Message-ID: <bbaeab100606181252v382f58cfhdfd30a53d394fa64@mail.gmail.com>

On 6/18/06, Georg Brandl <g.brandl at gmx.net> wrote:
>
> Georg Brandl wrote:
> > Georg Brandl wrote:
> >> In abstract.c, there are many error messages like
> >>
> >> type_error("object does not support item assignment");
> >>
> >> It helps debugging if the object's type was prepended.
> >> Should I go through the code and try to enhance them
> >> where possible?
> >
> > So that's definite "perhaps"?
> >
> > Anyway, posted patch 1507676.
>
> Sigh. I guess I'll have to commit it to get a second (actually,
> third, thanks Armin) opinion...



If you want an opinion on whether it is useful, then yes, it is useful.
Honestly, I thought that was kind of obvious, since more informative
error messages are always better as long as the verbosity is not insane.

As for looking at the patch, that is just the usual time/priority problem.
=)

-Brett

From pje at telecommunity.com  Sun Jun 18 22:10:00 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Sun, 18 Jun 2006 16:10:00 -0400
Subject: [Python-Dev] An obscene computed goto bytecode hack for
 "switch" :)
In-Reply-To: <ca471dc20606181123l58a29e7ajf7c56ecea6feba75@mail.gmail.co
 m>
References: <20060617111841.GA19995@code0.codespeak.net>
	<5.1.1.6.0.20060616210159.04480eb0@sparrow.telecommunity.com>
	<20060617111841.GA19995@code0.codespeak.net>
Message-ID: <5.1.1.6.0.20060618155944.01ea1d98@sparrow.telecommunity.com>

At 11:23 AM 6/18/2006 -0700, Guido van Rossum wrote:
>I'm not in favor of abusing this to generate a computed goto, and I
>don't see a need for that -- if we decide to add that (either as
>syntax or as an automatic optimization) I see no problem adding a new
>bytecode.

Me either -- I suggest simply adding a JUMP_TOP -- but I wanted to point 
out that people wouldn't need to add a new opcode in order to experiment 
with possible "switch" syntaxes.


From pje at telecommunity.com  Sun Jun 18 22:14:05 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Sun, 18 Jun 2006 16:14:05 -0400
Subject: [Python-Dev] PEP 338 vs PEP 328 - a limitation of the -m switch
In-Reply-To: <ca471dc20606181118g7af0d90fu1c620f5fc7b120ba@mail.gmail.co
 m>
References: <44956640.3010003@iinet.net.au>
 <44956640.3010003@iinet.net.au>
Message-ID: <5.1.1.6.0.20060618161140.031cb008@sparrow.telecommunity.com>

At 11:18 AM 6/18/2006 -0700, Guido van Rossum wrote:
>On 6/18/06, Nick Coghlan <ncoghlan at iinet.net.au> wrote:
> > The 'bug fix' solution would be:
> >
> >    1. Change main.c and PySys_SetPath so that '' is NOT prepended to 
> sys.path
> > when the -m switch is used
> >    2. Change runpy.run_module to add a __pkg_name__ attribute if the module
> > being executed is inside a package
> >    3. Change import.c to check for __pkg_name__ if (and only if) 
> __name__ ==
> > '__main__' and use __pkg_name__ if it is found.
>
>That's pretty heavy-handed for a pretty esoteric use case. (Except #1,
>which I think should be done regardless as otherwise we'd get a
>messed-up sys.path.)

Since the -m module is being run as a script, shouldn't it put the module's 
directory as the first entry on sys.path?  I don't think we should change 
the fact that *some* directory is always inserted at the beginning of 
sys.path -- and all the precedents at the moment say "script directory", if 
you consider -c and the interactive interpreter to be scripts in the 
current directory.  :)


From noamraph at gmail.com  Sun Jun 18 22:57:24 2006
From: noamraph at gmail.com (Noam Raphael)
Date: Sun, 18 Jun 2006 23:57:24 +0300
Subject: [Python-Dev] Pre-PEP: Allow Empty Subscript List Without
	Parentheses
In-Reply-To: <ca471dc20606181107v77ea473t1cba324ea923aadc@mail.gmail.com>
References: <200606161215.41941.gmccaughan@synaptics-uk.com>
	<b348a0850606160710x4a8eb80bv683771c82bde24a7@mail.gmail.com>
	<20060616091435.F332.JCARLSON@uci.edu>
	<b348a0850606171138o36576009sde006466095eb7bb@mail.gmail.com>
	<44945352.2000003@v.loewis.de> <449490A5.7090404@acm.org>
	<4494F095.4000806@v.loewis.de>
	<ca471dc20606181107v77ea473t1cba324ea923aadc@mail.gmail.com>
Message-ID: <b348a0850606181357mf7ab338s4126632032a7bd62@mail.gmail.com>

2006/6/18, Guido van Rossum <guido at python.org>:
> But more to the point, this discussion is pointless, since I won't
> accept the syntax change.

OK, too bad!

But don't say I haven't warned you, when you will all use my fabulous
package and get tired from typing all those extra parentheses! :)

Noam

From guido at python.org  Sun Jun 18 23:03:05 2006
From: guido at python.org (Guido van Rossum)
Date: Sun, 18 Jun 2006 14:03:05 -0700
Subject: [Python-Dev] PEP 338 vs PEP 328 - a limitation of the -m switch
In-Reply-To: <5.1.1.6.0.20060618161140.031cb008@sparrow.telecommunity.com>
References: <44956640.3010003@iinet.net.au>
	<5.1.1.6.0.20060618161140.031cb008@sparrow.telecommunity.com>
Message-ID: <ca471dc20606181403h32b93d58y4705810ebbcbb291@mail.gmail.com>

On 6/18/06, Phillip J. Eby <pje at telecommunity.com> wrote:
> At 11:18 AM 6/18/2006 -0700, Guido van Rossum wrote:
> >On 6/18/06, Nick Coghlan <ncoghlan at iinet.net.au> wrote:
> > > The 'bug fix' solution would be:
> > >
> > >    1. Change main.c and PySys_SetPath so that '' is NOT prepended to
> > sys.path
> > > when the -m switch is used
> > >    2. Change runpy.run_module to add a __pkg_name__ attribute if the module
> > > being executed is inside a package
> > >    3. Change import.c to check for __pkg_name__ if (and only if)
> > __name__ ==
> > > '__main__' and use __pkg_name__ if it is found.
> >
> >That's pretty heavy-handed for a pretty esoteric use case. (Except #1,
> >which I think should be done regardless as otherwise we'd get a
> >messed-up sys.path.)
>
> Since the -m module is being run as a script, shouldn't it put the module's
> directory as the first entry on sys.path?

Yes for a top-level module. No if it's executing a module inside a
package; it's really evil to have a package directory on sys.path.

> I don't think we should change
> the fact that *some* directory is always inserted at the beginning of
> sys.path -- and all the precedents at the moment say "script directory", if
> you consider -c and the interactive interpreter to be scripts in the
> current directory.  :)

You have a point about sys.path[0] being special. It could be the
current directory instead of the package directory.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Sun Jun 18 23:04:44 2006
From: guido at python.org (Guido van Rossum)
Date: Sun, 18 Jun 2006 14:04:44 -0700
Subject: [Python-Dev] Pre-PEP: Allow Empty Subscript List Without
	Parentheses
In-Reply-To: <b348a0850606181357mf7ab338s4126632032a7bd62@mail.gmail.com>
References: <200606161215.41941.gmccaughan@synaptics-uk.com>
	<b348a0850606160710x4a8eb80bv683771c82bde24a7@mail.gmail.com>
	<20060616091435.F332.JCARLSON@uci.edu>
	<b348a0850606171138o36576009sde006466095eb7bb@mail.gmail.com>
	<44945352.2000003@v.loewis.de> <449490A5.7090404@acm.org>
	<4494F095.4000806@v.loewis.de>
	<ca471dc20606181107v77ea473t1cba324ea923aadc@mail.gmail.com>
	<b348a0850606181357mf7ab338s4126632032a7bd62@mail.gmail.com>
Message-ID: <ca471dc20606181404w7dd4e479nb51756e8499c155@mail.gmail.com>

On 6/18/06, Noam Raphael <noamraph at gmail.com> wrote:
> 2006/6/18, Guido van Rossum <guido at python.org>:
> > But more to the point, this discussion is pointless, since I won't
> > accept the syntax change.
>
> OK, too bad!
>
> But don't say I haven't warned you, when you will all use my fabulous
> package and get tired from typing all those extra parentheses! :)

The bet is on. :)

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From pje at telecommunity.com  Sun Jun 18 23:37:30 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Sun, 18 Jun 2006 17:37:30 -0400
Subject: [Python-Dev] PEP 338 vs PEP 328 - a limitation of the -m switch
In-Reply-To: <ca471dc20606181403h32b93d58y4705810ebbcbb291@mail.gmail.co
 m>
References: <5.1.1.6.0.20060618161140.031cb008@sparrow.telecommunity.com>
	<44956640.3010003@iinet.net.au>
	<5.1.1.6.0.20060618161140.031cb008@sparrow.telecommunity.com>
Message-ID: <5.1.1.6.0.20060618173509.01eab5c8@sparrow.telecommunity.com>

At 02:03 PM 6/18/2006 -0700, Guido van Rossum wrote:
>On 6/18/06, Phillip J. Eby <pje at telecommunity.com> wrote:
> > At 11:18 AM 6/18/2006 -0700, Guido van Rossum wrote:
> > >On 6/18/06, Nick Coghlan <ncoghlan at iinet.net.au> wrote:
> > > > The 'bug fix' solution would be:
> > > >
> > > >    1. Change main.c and PySys_SetPath so that '' is NOT prepended to
> > > sys.path
> > > > when the -m switch is used
> > > >    2. Change runpy.run_module to add a __pkg_name__ attribute if 
> the module
> > > > being executed is inside a package
> > > >    3. Change import.c to check for __pkg_name__ if (and only if)
> > > __name__ ==
> > > > '__main__' and use __pkg_name__ if it is found.
> > >
> > >That's pretty heavy-handed for a pretty esoteric use case. (Except #1,
> > >which I think should be done regardless as otherwise we'd get a
> > >messed-up sys.path.)
> >
> > Since the -m module is being run as a script, shouldn't it put the module's
> > directory as the first entry on sys.path?
>
>Yes for a top-level module. No if it's executing a module inside a
>package; it's really evil to have a package directory on sys.path.
>
> > I don't think we should change
> > the fact that *some* directory is always inserted at the beginning of
> > sys.path -- and all the precedents at the moment say "script directory", if
> > you consider -c and the interactive interpreter to be scripts in the
> > current directory.  :)
>
>You have a point about sys.path[0] being special. It could be the
>current directory instead of the package directory.

Mightn't that be a security risk, in that it introduces an import hole for 
secure scripts run with -m?  Not that I know of any such scripts existing 
as yet...

If it's not the package directory, perhaps it could be a copy of whatever 
sys.path entry the package was found under - that wouldn't do anything but 
make "nearby" imports faster.


From guido at python.org  Sun Jun 18 23:49:48 2006
From: guido at python.org (Guido van Rossum)
Date: Sun, 18 Jun 2006 14:49:48 -0700
Subject: [Python-Dev] PEP 338 vs PEP 328 - a limitation of the -m switch
In-Reply-To: <5.1.1.6.0.20060618173509.01eab5c8@sparrow.telecommunity.com>
References: <44956640.3010003@iinet.net.au>
	<5.1.1.6.0.20060618161140.031cb008@sparrow.telecommunity.com>
	<5.1.1.6.0.20060618173509.01eab5c8@sparrow.telecommunity.com>
Message-ID: <ca471dc20606181449o60f75837q69efb050d55371f@mail.gmail.com>

On 6/18/06, Phillip J. Eby <pje at telecommunity.com> wrote:
> >You have a point about sys.path[0] being special. It could be the
> >current directory instead of the package directory.
>
> Mightn't that be a security risk, in that it introduces an import hole for
> secure scripts run with -m?  Not that I know of any such scripts existing
> as yet...

That sounds like an invented use case if I ever heard of one. YAGNI, please!

> If it's not the package directory, perhaps it could be a copy of whatever
> sys.path entry the package was found under - that wouldn't do anything but
> make "nearby" imports faster.

But it could theoretically affect search order for other modules. I
still see nothing wrong with "". After all that's also the default if
you run a script using python <path/to/file.py .

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From tom at vector-seven.com  Sun Jun 11 03:04:10 2006
From: tom at vector-seven.com (Thomas Lee)
Date: Sun, 11 Jun 2006 11:04:10 +1000
Subject: [Python-Dev] Switch statement
In-Reply-To: <17547.19802.361151.705599@montanaro.dyndns.org>
References: <20060610142736.GA19094@21degrees.com.au>
	<17547.19802.361151.705599@montanaro.dyndns.org>
Message-ID: <20060611010410.GA5723@21degrees.com.au>

On Sat, Jun 10, 2006 at 05:53:14PM -0500, skip at pobox.com wrote:
> 
>     Thomas> As the subject of this e-mail says, the attached patch adds a
>     Thomas> "switch" statement to the Python language.
> 
> Thanks for the contribution.  I patched my sandbox and it built just fine.
> I'm going out of town for a couple weeks, so I'll point out what everyone
> else is thinking then duck out of the way:
> 
>     * Aside from the modified Grammar file there is no documentation.
>     * There are no test cases.
>     * Can you submit a patch on SourceForge?
>

You're right, of course. I'll sort the documentation and test cases out
as soon as I get a chance.
 
> You mentioned:
> 
>     Thomas> I got a bit lost as to why the SWITCH opcode is necessary for
>     Thomas> the implementation of the PEP. The reasoning seems to be
>     Thomas> improving performance, but I'm not sure how a new opcode could
>     Thomas> improve performance.
> 
> Your implementation is straightforward, but uses a series of DUP_TOP and
> COMPARE_OP instructions to compare each alternative expression to the
> initial expression.  In many other languages the expression associated with
> the case would be restricted to be a constant expression so that at compile
> time a jump table or dictionary lookup could be used to jump straight to the
> desired case.
> 

I see. But restricting the switch to constants in the name of
performance may not make sense in a language like Python. Maybe this is
something for the PEP to discuss, but it seems such an implementation
would be confusing, and sometimes it may not be possible to use a switch
statement in place of if/elif/else statements at all.

Consider the following:

#!/usr/bin/python

FAUX_CONST_A = 'a'
FAUX_CONST_B = 'b'

some_value = 'a'

switch some_value:
	case FAUX_CONST_A:
		print 'got a'
	case FAUX_CONST_B:
		print 'got b'
	else:
		print ':('

# EOF

Although, conceptually, FAUX_CONST_A and FAUX_CONST_B are constants, a
'constants only' implementation would likely give a syntax error (see
expr_constant in Python/compile.c).
		
IMHO, this will lead to one of two things:

a) unnecessary duplication of constant values for the purpose of using
them as case values
b) reverting back to if/elif/else

I do get the distinction; I'm just wondering if the usefulness of the
semantics (or lack thereof) is going to negate any potential performance
enhancements: if a switch statement is never used because it's only
useful in a narrow set of circumstances, then maybe we're looking to
improve performance in the wrong place?

Just thinking about it, maybe there could be two different code paths
for switch statements: one where all the case values are constants (the
'fast' one) and one where one or more are expressions. This would mean a
slightly longer compile time for switch statements while ensuring that
runtime execution is as fast as possible, without placing any major
restrictions on what can be used as a case value.
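
(As a rough illustration of what the 'fast' path amounts to at runtime -
essentially a dictionary lookup built from the constant case values; the
function and the names below are made up for the example:)

def describe(some_value):
    # What a constants-only switch could compile down to: a dict mapping
    # each case value to its body (here represented by a function).
    dispatch = {
        'a': lambda: 'got a',
        'b': lambda: 'got b',
    }
    handler = dispatch.get(some_value)
    if handler is None:       # the 'else' clause
        return ':('
    return handler()

print describe('a')   # got a
print describe('c')   # :(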

Cheers,
Tom

-- 
Tom Lee
http://www.vector-seven.com


From bioinformed at gmail.com  Sun Jun 11 15:23:02 2006
From: bioinformed at gmail.com (Kevin Jacobs <jacobs@bioinformed.com>)
Date: Sun, 11 Jun 2006 09:23:02 -0400
Subject: [Python-Dev] Segmentation fault in collections.defaultdict
In-Reply-To: <448BB751.1050300@gmail.com>
References: <2e1434c10606101051w4f00ee08j5da64ff0577a3935@mail.gmail.com>
	<448BB751.1050300@gmail.com>
Message-ID: <2e1434c10606110623j247f2839n57065a342a7f51f0@mail.gmail.com>

On 6/11/06, Nick Coghlan <ncoghlan at gmail.com> wrote:
>
> Kevin Jacobs <jacobs at bioinformed.com> wrote:
> > Try this at home:
> > import collections
> > d=collections.defaultdict(int)
> > d.iterkeys().next()  # Seg fault
> > d.iteritems().next() # Seg fault
> > d.itervalues().next() # Fine and dandy
>
> This all worked fine for me in rev 46739 and 46849 (Kubuntu 6.06, gcc
> 4.0.3).
>
> > Python version:
> > Python 2.5a2 (trunk:46822M, Jun 10 2006, 13:14:15)
> > [GCC 4.0.2 20050901 (prerelease) (SUSE Linux)] on linux2
>
> Either something got broken and then fixed again between the two revs I
> tried,
> there's a problem specific to GCC 4.0.2, or there's a problem with
> whatever
> local modifications you have in your working copy :)
>


Looks like pilot error on this one.  I'm working on a 64 bit system and did
not do a distclean after my svn update.  Tim updated dictobject's mask from
an int to Py_ssize_t in rev 46594 (
http://svn.python.org/view?rev=46594&view=rev), which changed the memory
layout of dictionaries.  I can only assume that collectionsmodule.c was not
recompiled to reflect this change and the dict iterator was using a garbled
mask.

Resolution: Always run distclean when updating from the trunk.

Sorry for the noise,
-Kevin

From t.broyer at gmail.com  Mon Jun 12 00:27:00 2006
From: t.broyer at gmail.com (Thomas Broyer)
Date: Mon, 12 Jun 2006 00:27:00 +0200
Subject: [Python-Dev] Bug: xml.dom.pulldom never gives you END_DOCUMENT
	events with an Expat parser
Message-ID: <a9699fd20606111527s46d882b3nddf27d0e6da3632a@mail.gmail.com>

Hi,

First, sorry for posting this here, I closed my SourceForge account a
few months ago and I can't get it reopened...

I'm using python 2.2.1 but a diff on SVN showed that there was no
change at this level, so the following bug should still be there in
current versions (I'll try with a 2.4 at work tomorrow). On my
machine, xml.sax.make_parser returns an
xml.sax.expatreader.ExpatParser instance.

The problem is: I'm never given END_DOCUMENT events.

Code to reproduce:

from xml.dom.pulldom import parseString
reader = parseString('<element attribute="value">text</element>')
# The following 2 lines will produce, in order:
# START_DOCUMENT, START_ELEMENT, TEXT, END_ELEMENT
# Note the lack of the END_DOCUMENT event.
for event,node in reader:
   print event
# The following line will get an END_DOCUMENT event
print reader.getEvent()[0]
# The following line will throw a SAXParseException,
# because the SAX parser's close method has been
# called twice
print reader.getEvent()[0]


Cause:

The xml.dom.pulldom.DOMEventStream.getEvent method, when it has no
more events in its internal stack, calls the SAX parser's close()
method (which is OK) and then immediately returns 'None', ignoring any
event that could have been generated by the call to the close()
method. If you call getEvent later, it will send you the remaining
events until there are none left, and then will call the SAX
parser's close() method again, causing a SAXParseException.
Because expat (and maybe other parsers too) has no way to know when the
document ends, it generates the endDocument/END_DOCUMENT event only
when explicitly told that the XML chunk is the final one (i.e. when
the close() method is called).


Proposed fix:

Add a "parser_closed" attribute to the DOMEventStream class,
initialized to "False". After having called self.parser.close() in the
xml.dom.pulldom.DOMEventStream.getEvent method, immediately set this
"parser_closed" attribute to True and proceed. Finally, at the
beginning of the "while" loop, immediately returns "None" if
"parser_closed" is "True" to prevent a second call to
self.parser.close().
With this change, any call to getEvent when there are no event left
will return None and never throw an exception, which I think is the
expected behavior.


Proposed code:

The "closed" attribute is initialized in the "__init__" method:
    def __init__(self, stream, parser, bufsize):
        self.stream = stream
        self.parser = parser
        self.parser_closed = False
        self.bufsize = bufsize
        if not hasattr(self.parser, 'feed'):
            self.getEvent = self._slurp
        self.reset()

The "getEvent" method becomes:
    def getEvent(self):
        # use IncrementalParser interface, so we get the desired
        # pull effect
        if not self.pulldom.firstEvent[1]:
            self.pulldom.lastEvent = self.pulldom.firstEvent
        while not self.pulldom.firstEvent[1]:
            if self.parser_closed:
                return None
            buf = self.stream.read(self.bufsize)
            if buf:
                self.parser.feed(buf)
            else:
                self.parser.close()
                self.parser_closed = True
        rc = self.pulldom.firstEvent[1][0]
        self.pulldom.firstEvent[1] = self.pulldom.firstEvent[1][1]
        return rc

The same problem seems to exist in the
xml.dom.pulldom.DOMEventStream._slurp method, when the SAX parser is
not an IncrementalParser, as the parser's close() method is never
called. I suggest adding a call to the close() method in there.
However, as I only have expat as an option, which implements
IncrementalParser, I can't test it...
The _slurp method would become:
    def _slurp(self):
        """ Fallback replacement for getEvent() using the
            standard SAX2 interface, which means we slurp the
            SAX events into memory (no performance gain, but
            we are compatible to all SAX parsers).
        """
        self.parser.parse(self.stream)
        self.parser.close()
        self.getEvent = self._emit
        return self._emit()
The _emit method raises exceptions when there are no events left, so I
propose changing it to:
    def _emit(self):
        """ Fallback replacement for getEvent() that emits
            the events that _slurp() read previously.
        """
        if not self.pulldom.firstEvent[1]:
            return None
        rc = self.pulldom.firstEvent[1][0]
        self.pulldom.firstEvent[1] = self.pulldom.firstEvent[1][1]
        return rc

Hope this helps.

-- 
Thomas Broyer

From gward-1337f07a94b43060ff5c1ea922ed93d6 at python.net  Tue Jun 13 04:29:35 2006
From: gward-1337f07a94b43060ff5c1ea922ed93d6 at python.net (Greg Ward)
Date: Mon, 12 Jun 2006 22:29:35 -0400
Subject: [Python-Dev] Dropping externally maintained packages (Was:
 Please stop changing wsgiref on the trunk)
In-Reply-To: <448D9EA1.9000209@v.loewis.de>
References: <5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612121658.03255358@sparrow.telecommunity.com>
	<ca471dc20606120943o3f9d97ccle62db7539edec1c1@mail.gmail.com>
	<448D9EA1.9000209@v.loewis.de>
Message-ID: <20060613022935.GA28870@cthulhu.gerg.ca>

[Guido]
> While I am an enthusiastic supporter of several of those additions, I
> am *not* in favor of the special status granted to software
> contributed by certain developers, since it is a burden for all other
> developers.

[Martin]
> Each maintainer should indicate whether he is happy with a "this is
> part of Python" approach. If so, the entry should be removed from PEP
> 360 (*); if not, the code should be removed from Python before beta 1.

I very much identify with Phillip's gripe, but I gotta admit Guido has
the compelling argument here.  Personally, I have noticed very few
"rogue" changes to optparse.py over the years, and have quietly grumbled
and backported most of them over to Optik for eventual re-merge to
optparse.  (One or two got dropped because I had already fixed the
problem differently in Optik.)

I think this is just the price that must be paid for maintaining two
copies of something.  It's nice that Optik has an existence of its own
(every time I break compatibility with Python 2.0, someone comes along
with a patch to restore it), but I think it's better that the burden of
keeping track of what is effectively a closely-tracked fork should be on
the person who knows the code best (me), not on a horde of Python
developers who may occasionally stumble across a style issue or shallow
bug and just fix it.

In retrospect, I'm *really* glad that I didn't release textwrap
separately, and just stuffed it in the stdlib.  And while I have been
thinking about Optik 2.0 for a couple of years now, I think that I have
just decided that there will be no Optik 2.0: there will only be
optparse2.  (Unless I can figure out a backwards-compatible way of
making the changes I've been thinking about all these years, in which
case there will only be optparse.  Unlikely.)

        Greg
-- 
Greg Ward <gward at python.net>                         http://www.gerg.ca/
Reality is for people who can't handle science fiction.

From nicolas.chauvat at logilab.fr  Tue Jun 13 15:43:18 2006
From: nicolas.chauvat at logilab.fr (Nicolas Chauvat)
Date: Tue, 13 Jun 2006 15:43:18 +0200
Subject: [Python-Dev] Source control tools
In-Reply-To: <1px7bnk1z0ccy.dlg@usenet.alexanderweb.de>
References: <9e804ac0606121431o5a801df5w829a9c65f3d3d855@mail.gmail.com>
	<1px7bnk1z0ccy.dlg@usenet.alexanderweb.de>
Message-ID: <20060613134318.GH17259@crater.logilab.fr>

On Tue, Jun 13, 2006 at 10:27:17AM +0200, Alexander Schremmer wrote:
> Maybe you benchmarked a Tailor deficiency here, but Mercurial scales very
> well. People use it for work on the Linux kernel etc.
> Compared to that, Bazaar-NG seems to reach limits already when working on
> it's own code/repository.

We happen to have switched to mercurial a month ago for all of our
code and are happy with it. It scales well, as opposed to darcs. It is
very similar to git. Mercurial is used for OpenSolaris if I recall
correctly.

-- 
Nicolas Chauvat

logilab.fr - services en informatique avancée et gestion de connaissances

From claird at phaseit.net  Tue Jun 13 18:51:31 2006
From: claird at phaseit.net (Cameron Laird)
Date: Tue, 13 Jun 2006 16:51:31 +0000
Subject: [Python-Dev] Documentation enhancement:  "MS free compiler"?
Message-ID: <20060613165130.GA6401@lairds.us>

I'm channeling a correspondent, who tells me that Python documentation
(Python 2.5 announcement, and so on) mentions compatibility of sources
with "the MS free compiler"; that's the default toolchain for Windows.

Apparently we're in conflict with Microsoft on that:  some hyperlinks
refer to <URL: http://msdn.microsoft.com/visualc/vctoolkit2003/ >, which
begins,
  The Visual C++ Toolkit 2003 has been
  replaced by Visual C++ 2005 Express
  Edition.
The latter is available at no charge, incidentally.

We need to update things, I believe.

From alexander.belopolsky at gmail.com  Wed Jun 14 23:23:30 2006
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Wed, 14 Jun 2006 17:23:30 -0400
Subject: [Python-Dev] Misleading error message from PyObject_GenericSetAttr
Message-ID: <d38f5330606141423r3a03478hdc1729a6aac44735@mail.gmail.com>

When an extension type Foo defines tp_getattr, but leaves tp_setattr
NULL, an attempt to set an attribute bar results in an AttributeError
with the message "'Foo' object has no attribute 'bar'".  This message
is misleading because the object may have the attribute 'bar' as
implemented in tp_getattr.  It would be better to change the message
to "'Foo' object has only read-only attributes (assign to .bar)" as in
the case tp_setattro == tp_setattr == NULL in  PyObject_SetAttr .

I've also noticed that the exceptions raised from PyObject_SetAttr are
TypeErrors. Shouldn't PyObject_GenericSetAttr raise a TypeError if
tp_setattr is null but tp_getattr is not?  This would be consistent
with the errors from read-only descriptors.

From titus at caltech.edu  Thu Jun 15 19:19:35 2006
From: titus at caltech.edu (Titus Brown)
Date: Thu, 15 Jun 2006 10:19:35 -0700
Subject: [Python-Dev] Code coverage reporting.
Message-ID: <20060615171935.GA26179@caltech.edu>

Folks,

I've just run a code coverage report for the python2.4 branch:

	http://vallista.idyll.org/~t/temp/python2.4-svn/

This report uses my figleaf code,

	http://darcs.idyll.org/~t/projects/figleaf-latest.tar.gz

I'm interested in feedback on a few things --

 * what more would you want to see in this report?

 * is there anything obviously wrong about the report?

In other words... comments solicited ;).

By the by, I'm also planning to integrate this into buildbot on some
projects.  I'll post the scripts when I get there, and I'd be happy
to help Python itself set it up, of course.

cheers,
--titus

From dignor.sign at gmail.com  Thu Jun 15 21:01:57 2006
From: dignor.sign at gmail.com (dsign)
Date: Thu, 15 Jun 2006 12:01:57 -0700
Subject: [Python-Dev] About dynamic module loading
Message-ID: <cbbabc2d0606151201l1a89a4bj53e6127225d414ef@mail.gmail.com>

   I saw in this list, or one of its relatives, an old discussion between
David Abrahams, the developer of Boost.Python, and the Python development
team about loading modules with RTLD_GLOBAL. There were many useful comments
and a lot of insight, but I didn't find a solution to the question David
posed. I don't want to wrap my imports of C++ modules in sys.setdlopenflags()
calls, nor to change 300 C++ files with template instantiations just to be
able to export some functionality to Python. All I need is a user-transparent
way to load the extensions with RTLD_GLOBAL, so if there already is a way of
doing that, please tell me (bear in mind that I read the complete discussion
thread from back then; if I missed the solution, let me know).

If not, here's a modified version of the dynload_shlib.c file in:

http://dignor.sign.googlepages.com/dynloadpatchforpython2.4.1

that checks for the existence of foo.so.global alongside a module foo.so in
the same directory and, if present, loads the module using RTLD_GLOBAL. An
ugly hack, I know, but it works for me. Maybe other users with this problem
can use it too.

Best Regards

From nmm1 at cus.cam.ac.uk  Thu Jun 15 23:21:17 2006
From: nmm1 at cus.cam.ac.uk (Nick Maclaren)
Date: Thu, 15 Jun 2006 22:21:17 +0100
Subject: [Python-Dev] Numerical robustness, IEEE etc.
Message-ID: <E1FqzHB-00003X-UV@draco.cus.cam.ac.uk>

As I have posted to comp.lang.python, I am not happy with Python's
numerical robustness - because it basically propagates the 'features'
of IEEE 754 and (worse) C99.  Yes, it's better, but I would like to
make it a LOT better.  I already have a more robust version of 2.4.2,
but there are some problems, technical and political.  I should
appreciate advice.

1) Should I start off by developing a testing version, to give people
a chance to scream at me, or write a PEP?  Because I am no Python
development expert, the former would help to educate me into its
conventions, technical and political.

2) Because some people are dearly attached to the current behaviour,
warts and all, and there is a genuine quandary of whether the 'right'
behaviour is trap-and-diagnose, propagate-NaN or whatever-IEEE-754R-
finally-specifies (let's ignore C99 and Java as beyond redemption),
there might well need to be options.  These can obviously be done by
a command-line option, an environment variable or a float method.
There are reasons to disfavour the last, but all are possible.  Which
is the most Pythonesque approach?

3) I am rather puzzled by the source control mechanism.  Are commit
privileges needed to start a project like this in the main tree?
Note that I am thinking of starting a test subtree only.

4) Is there a Python hacking document?  Specifically, if I want to
add a new method to a built-in type, is there any guide on where to
start?

5) I am NOT offering to write a full floating-point emulator, though
it would be easy enough and could provide repeatable, robust results.
"Easy" does not mean "quick" :-(  Maybe when I retire.  Incidentally,
experience from times of yore is that emulated floating-point would
be fast enough that few, if any, Python users would notice.


Regards,
Nick Maclaren,
University of Cambridge Computing Service,
New Museums Site, Pembroke Street, Cambridge CB2 3QH, England.
Email:  nmm1 at cam.ac.uk
Tel.:  +44 1223 334761    Fax:  +44 1223 334679

From andrew at puzzling.org  Fri Jun 16 04:48:08 2006
From: andrew at puzzling.org (Andrew Bennetts)
Date: Fri, 16 Jun 2006 12:48:08 +1000
Subject: [Python-Dev] Source control tools
In-Reply-To: <oiyy2xppuzka$.dlg@usenet.alexanderweb.de>
References: <9e804ac0606121431o5a801df5w829a9c65f3d3d855@mail.gmail.com>
	<1px7bnk1z0ccy.dlg@usenet.alexanderweb.de>
	<1150390810.28709.41.camel@localhost.localdomain>
	<oiyy2xppuzka$.dlg@usenet.alexanderweb.de>
Message-ID: <20060616024808.GN31839@steerpike.home.puzzling.org>

On Thu, Jun 15, 2006 at 10:33:49PM +0200, Alexander Schremmer wrote:
> On Thu, 15 Jun 2006 19:00:09 +0200, Jan Claeys wrote:
> 
> > Op di, 13-06-2006 te 10:27 +0200, schreef Alexander Schremmer:
> >> Bazaar-NG seems to reach limits already when working on
> >> it's own code/repository. 
> > 
> > Canonical uses bzr to develop launchpad.net, which is a "little bit"
> > larger dan bzr itself, I suspect...?
> 
> I don't think so, without having seen the Launchpad code. I assume that
> Launchpad has less comitters (closed source!) and therefore less change
> sets and less parallel branches.

Actually, Launchpad's got twice as many lines of source (as measured by
sloccount), nearly 10 times as many versioned files, and about twice as many
revisions as bzr.

We typically have 10-20 parallel branches going through our review process at
any one time (branches are generally reviewed before they land on the trunk),
and probably many others being worked on at any given time.

> Once I pulled the bzr changesets (1-3 months ago) and it needed 3 hours on
> a 900 MHz machine with a high-speed (> 50 MBit) internet connection (and it
> was CPU bound). Note that bzr has gained a lot of speed since then, though.

That would have been when it was in "weave" format?  The current "knit" format
doesn't suffer the CPU problems in my experience.  It's still very slow over a
network because it does a very large number of round trips.  There's work to
greatly reduce that problem, by pipelining and by reducing the number of HTTP
requests (by issuing one request with a range header with many ranges, rather
than one request per range!).  There are also plans to write a smart server.

There's a big focus on performance improvements on the bzr list at the moment,
and they seem to be making rapid progress.

-Andrew.


From pete at cenqua.com  Fri Jun 16 16:33:01 2006
From: pete at cenqua.com (Peter Moore)
Date: Sat, 17 Jun 2006 00:33:01 +1000
Subject: [Python-Dev] FishEye on Python CVS Repository
In-Reply-To: <4443E975.4000208@v.loewis.de>
References: <52431c5005060820217cb1f1fb@mail.gmail.com>
	<4443E975.4000208@v.loewis.de>
Message-ID: <8EFADC77-1276-4C6A-972F-46CF0A791712@cenqua.com>

Hi Martin,

The FishEye'd Python Subversion repository is now available here:

  http://fisheye3.cenqua.com/browse/python

A big sorry for the delay in actioning this, lost in the email pile :(

Cheers,
Pete.


On 18/04/2006, at 5:16 AM, Martin v. Löwis wrote:

> Peter Moore wrote:
>> I'm responsible for setting up free FishEye hosting for community
>> projects. As a long time python user I of course added Python up
>> front.  You can see it here:
>>
>>   http://fisheye.cenqua.com/viewrep/python/
>
> Can you please move that to the subversion repository
> (http://svn.python.org/projects/python), or, failing that,
> remove that entry? The CVS repository is no longer used.
>
> Regards,
> Martin


From nmm1 at cus.cam.ac.uk  Sun Jun 18 11:35:29 2006
From: nmm1 at cus.cam.ac.uk (Nick Maclaren)
Date: Sun, 18 Jun 2006 10:35:29 +0100
Subject: [Python-Dev] Pre-PEP: Allow Empty Subscript List Without
	Parentheses
In-Reply-To: Your message of "Sat, 17 Jun 2006 16:30:45 PDT."
	<449490A5.7090404@acm.org> 
Message-ID: <E1Frtgn-00026k-8A@draco.cus.cam.ac.uk>

Talin <talin at acm.org> wrote:
> 
> Ok, so in order to clear up the confusion here, I am going to take a 
> moment to try and explain Noam's proposal in clearer language.
> 
> Now, as to the specifics of Noam's problem: Apparently what he is trying 
> to do is what many other people have done, which is to use Python as a 
> base for some other high-level language, building on top of Python 
> syntax and using the various operator overloads to define the semantics 
> of the language.

No, that's too restrictive.  Back in the 1970s, Genstat (a statistical
language) and perhaps others introduced the concept of an array type
with an indefinite number of dimensions.  This is a requirement for
implementing such things as contingency tables, analysis of variance
etc., and was and is traditionally handled by some ghastly code.  It
always was easy to handle in LISP and, as far as this goes, Python is
a descendant of LISP rather than of Algol, CPL or Fortran.

Now, I thought of how conventional "3rd GL" languages (Algol 68,
Fortran, C etc.) could be extended to support those - it is very
simple, and is precisely what Noam is proposing.  An index becomes
a single-dimensional vector of integers, and all is hunky-dory.
When you look at it, you realise that you DO want to allow zero-length
index vectors, to avoid having to write separate code for the scalar
case.
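
(A quick Python-level illustration, using a dict as a toy stand-in for such
an array; nothing here depends on any new spelling:)

# The same lookup code handles the scalar (zero-dimensional) case and the
# N-dimensional case, because an index is just a vector of zero or more ints.
table = {(): 42, (3,): 10, (1, 2): 99}

def lookup(tbl, *index):
    return tbl[index]        # index is a tuple, possibly empty

print lookup(table)          # scalar case, empty index vector -> 42
print lookup(table, 1, 2)    # two-dimensional case            -> 99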

So it is not just a matter of mapping another language, but that of
meeting a specific requirement, that is largely language-independent.


Regards,
Nick Maclaren,
University of Cambridge Computing Service,
New Museums Site, Pembroke Street, Cambridge CB2 3QH, England.
Email:  nmm1 at cam.ac.uk
Tel.:  +44 1223 334761    Fax:  +44 1223 334679

From aahz at pythoncraft.com  Mon Jun 19 03:26:21 2006
From: aahz at pythoncraft.com (Aahz)
Date: Sun, 18 Jun 2006 18:26:21 -0700
Subject: [Python-Dev] About dynamic module loading
In-Reply-To: <cbbabc2d0606151201l1a89a4bj53e6127225d414ef@mail.gmail.com>
References: <cbbabc2d0606151201l1a89a4bj53e6127225d414ef@mail.gmail.com>
Message-ID: <20060619012621.GA26334@panix.com>

On Thu, Jun 15, 2006, dsign wrote:
>
> If not, here's a modified version of the dynload_shlib.c file in:
> 
> http://dignor.sign.googlepages.com/dynloadpatchforpython2.4.1
> 
> that checks the existence of foo.so.global for module foo.so in the
> same dir and if so, loads it using RTLD_GLOBAL. An ugly hack, I know,
> but it works for me. Maybe there are other users with this problem and
> they can use this.

Because of the upcoming beta for 2.5, you are not likely to get much
attention right now.  Please make a SourceForge patch so that we can
track this later.
-- 
Aahz (aahz at pythoncraft.com)           <*>         http://www.pythoncraft.com/

"I saw `cout' being shifted "Hello world" times to the left and stopped
right there."  --Steve Gonedes

From jcarlson at uci.edu  Mon Jun 19 03:56:01 2006
From: jcarlson at uci.edu (Josiah Carlson)
Date: Sun, 18 Jun 2006 18:56:01 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <20060611010410.GA5723@21degrees.com.au>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<20060611010410.GA5723@21degrees.com.au>
Message-ID: <20060618184500.F34E.JCARLSON@uci.edu>


Thomas Lee <tom at vector-seven.com> wrote:
> I see. But restricting the switch to constants in the name of
> performance may not make sense in a language like Python. Maybe this is
> something for the PEP to discuss, but it seems such an implementation
> would be confusing and sometimes it may not be possible to use a switch
> case in place of if/elif/else statements at all.

The PEP already discussed it.  Offering arbitrary expressions whose
meaning can vary at runtime would kill any potential speedup (the
ultimate purpose of having a switch statement), leaving us with a
switch statement that, paradoxically, would generally run slower than
the equivalent if/elif/else.

[snip]

> a) unnecessarily duplication of constant values for the purpose of using
> them as case values

The vast majority of use-cases in C/C++ use single-character or small
integer constants.  This hasn't slowed down switch/case use in C/C++,
and I haven't even seen an over-abundance of const or macro definitions
of constants in C/C++ switch/case statements.


> I do get the distinction, I'm just wondering if the usefulness of the semantics
> (or lack thereof) are going to negate any potential performance
> enhancements: if a switch statement is never used because it's only
> useful in a narrow set of circumstances, then maybe we're looking to
> improve performance in the wrong place?

Please re-read the PEP, more specifically the 'Problem' section:

    "A nice example of this is the state machine implemented in
    pickle.py which is used to serialize Python objects. Other
    prominent cases include XML SAX parsers and Internet protocol
    handlers."

> Just thinking about it, maybe there could be two different code paths
> for switch statements: one when all the case values are constants (the
> 'fast' one) and one where one or more are expressions. This would mean a
> slightly longer compile time for switch statements while ensuring that
> runtime execution is the maximum possible without placing any major
> restrictions on what can be used as a case value.

The non-fast version couldn't actually work if it referenced any names,
given current Python semantics for arbitrary name binding replacements.

 - Josiah


From brett at python.org  Mon Jun 19 04:58:39 2006
From: brett at python.org (Brett Cannon)
Date: Sun, 18 Jun 2006 19:58:39 -0700
Subject: [Python-Dev] Numerical robustness, IEEE etc.
In-Reply-To: <E1FqzHB-00003X-UV@draco.cus.cam.ac.uk>
References: <E1FqzHB-00003X-UV@draco.cus.cam.ac.uk>
Message-ID: <bbaeab100606181958pf9a060cm4903520e6a9161b8@mail.gmail.com>

[skipping answering the numeric-specific questions since I am no math
expert  =) ]

On 6/15/06, Nick Maclaren <nmm1 at cus.cam.ac.uk> wrote:
>
> As I have posted to comp.lang.python, I am not happy with Python's
> numerical robustness - because it basically propagates the 'features'
> of IEEE 754 and (worse) C99.  Yes, it's better, but I would like to
> make it a LOT better.  I already have a more robust version of 2.4.2,
> but there are some problems, technical and political.  I should
> appreciate advice.
>
> 1) Should I start off by developing a testing version, to give people
> a chance to scream at me, or write a PEP?  Because I am no Python
> development expert, the former would help to educate me into its
> conventions, technical and political.


I would do both.  It is a lot easier to get something accepted when you have
working code.  But a PEP to vent possible arguments against the change along
with any backwards-compatibility issues will be needed for something as
major as changing how math works.

> 2) Because some people are dearly attached to the current behaviour,
> warts and all, and there is a genuine quandary of whether the 'right'
> behaviour is trap-and-diagnose, propagate-NaN or whatever-IEEE-754R-
> finally-specifies (let's ignore C99 and Java as beyond redemption),
> there might well need to be options.  These can obviously be done by
> a command-line option, an environment variable or a float method.
> There are reasons to disfavour the last, but all are possible.  Which
> is the most Pythonesque approach?
>
> 3) I am rather puzzled by the source control mechanism.  Are commit
> privileges needed to start a project like this in the main tree?
> Note that I am thinking of starting a test subtree only.


To work directly in Python's repository, yes, checkin privileges are
needed.  In order to get these, though, you usually either need to have been
involved in python-dev for a while and be known to the group or have someone
everyone trusts to watch over you as you do your work in a branch.

> 4) Is there a Python hacking document?  Specifically, if I want to
> add a new method to a built-in type, is there any guide on where to
> start?


The C API docs are at http://docs.python.org/ and there are some docs at
http://www.python.org/dev/ introducing how development for Python tends
to take place.

-Brett

> 5) I am NOT offering to write a full floating-point emulator, though
> it would be easy enough and could provide repeatable, robust results.
> "Easy" does not mean "quick" :-(  Maybe when I retire.  Incidentally,
> experience from times of yore is that emulated floating-point would
> be fast enough that few, if any, Python users would notice.
>
>
>

From brett at python.org  Mon Jun 19 05:12:39 2006
From: brett at python.org (Brett Cannon)
Date: Sun, 18 Jun 2006 20:12:39 -0700
Subject: [Python-Dev] Code coverage reporting.
In-Reply-To: <20060615171935.GA26179@caltech.edu>
References: <20060615171935.GA26179@caltech.edu>
Message-ID: <bbaeab100606182012g6aeb7ab5q5ec87d87a00107d9@mail.gmail.com>

On 6/15/06, Titus Brown <titus at caltech.edu> wrote:
>
> Folks,
>
> I've just run a code coverage report for the python2.4 branch:
>
>         http://vallista.idyll.org/~t/temp/python2.4-svn/
>
> This report uses my figleaf code,
>
>         http://darcs.idyll.org/~t/projects/figleaf-latest.tar.gz


Very nice, Titus!

> I'm interested in feedback on a few things --
>
> * what more would you want to see in this report?
>
> * is there anything obviously wrong about the report?
>
> In other words... comments solicited ;).


Making the comments in the code stand out less (i.e., not black) might be
handy since my eye still gets drawn to the comments a lot.

It would also be nice to be able to sort on different things, such as
filename.

But it does seem accurate; random checks of some modules that got high but
not perfect coverage all seem to be instances where dependency injection
would be required to get the tests to work, since they depend on
platform-specific things.

> By the by, I'm also planning to integrate this into buildbot on some
> projects.  I'll post the scripts when I get there, and I'd be happy
> to help Python itself set it up, of course.


I don't know if we need it hooked into the buildbots (unless it is dirt
cheap to generate the report).  But hooking it up to the script in
Misc/build.sh that Neal has running to report reference leaks and
fundamental test failures would be wonderful.

-Brett

From nnorwitz at gmail.com  Mon Jun 19 05:29:07 2006
From: nnorwitz at gmail.com (Neal Norwitz)
Date: Sun, 18 Jun 2006 20:29:07 -0700
Subject: [Python-Dev] Numerical robustness, IEEE etc.
In-Reply-To: <bbaeab100606181958pf9a060cm4903520e6a9161b8@mail.gmail.com>
References: <E1FqzHB-00003X-UV@draco.cus.cam.ac.uk>
	<bbaeab100606181958pf9a060cm4903520e6a9161b8@mail.gmail.com>
Message-ID: <ee2a432c0606182029saac55er70da427384f7bbe2@mail.gmail.com>

You should be aware of PEP 754 and address it.

http://www.python.org/dev/peps/pep-0754/

Also note that Python conforms to C89, not C99.  Any solution should
work on all Python platforms.  Some of those platforms are here:

http://www.python.org/dev/buildbot/all/

n
--
On 6/18/06, Brett Cannon <brett at python.org> wrote:
> [skipping answering the numeric-specific questions since I am no math expert
>  =) ]
>
>
> On 6/15/06, Nick Maclaren <nmm1 at cus.cam.ac.uk > wrote:
> > As I have posted to comp.lang.python, I am not happy with Python's
> > numerical robustness - because it basically propagates the 'features'
> > of IEEE 754 and (worse) C99.  Yes, it's better, but I would like to
> > make it a LOT better.  I already have a more robust version of 2.4.2,
> > but there are some problems, technical and political.  I should
> > appreciate advice.
> >
> > 1) Should I start off by developing a testing version, to give people
> > a chance to scream at me, or write a PEP?  Because I am no Python
> > development expert, the former would help to educate me into its
> > conventions, technical and political.
>
>
> I would do both.  It is a lot easier to get something accepted when you have
> working code.  But a PEP to vent possible arguments against the change along
> with any backwards-compatibility issues will be needed for something as
> major as changing how math works.
>
> > 2) Because some people are dearly attached to the current behaviour,
> > warts and all, and there is a genuine quandary of whether the 'right'
> > behaviour is trap-and-diagnose, propagate-NaN or whatever-IEEE-754R-
> > finally-specifies (let's ignore C99 and Java as beyond redemption),
> > there might well need to be options.  These can obviously be done by
> > a command-line option, an environment variable or a float method.
> > There are reasons to disfavour the last, but all are possible.  Which
> > is the most Pythonesque approach?
> >
> > 3) I am rather puzzled by the source control mechanism.  Are commit
> > privileges needed to start a project like this in the main tree?
> > Note that I am thinking of starting a test subtree only.
>
>
> To work directly in Python's repository, yes, checkin privileges are needed.
>  In order to get these, though, you usually either need to have been
> involved in python-dev for a while and be known to the group or have someone
> everyone trusts to watch over you as you do your work in a branch.
>
> > 4) Is there a Python hacking document?  Specifically, if I want to
> > add a new method to a built-in type, is there any guide on where to
> > start?
>
>
> The C API docs are at http://docs.python.org/ and there are some docs at
> http://www.python.org/dev/ in terms of intro to how development for Python
> tends to take place.
>
> -Brett
>
>
> > 5) I am NOT offering to write a full floating-point emulator, though
> > it would be easy enough and could provide repeatable, robust results.
> > "Easy" does not mean "quick" :-(  Maybe when I retire.  Incidentally,
> > experience from times of yore is that emulated floating-point would
> > be fast enough that few, if any, Python users would notice.
> >
> >
> >
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/nnorwitz%40gmail.com
>
>
>

From nnorwitz at gmail.com  Mon Jun 19 05:49:47 2006
From: nnorwitz at gmail.com (Neal Norwitz)
Date: Sun, 18 Jun 2006 20:49:47 -0700
Subject: [Python-Dev] Dropping externally maintained packages (Was:
	Please stop changing wsgiref on the trunk)
In-Reply-To: <20060613022935.GA28870@cthulhu.gerg.ca>
References: <5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060612121658.03255358@sparrow.telecommunity.com>
	<ca471dc20606120943o3f9d97ccle62db7539edec1c1@mail.gmail.com>
	<448D9EA1.9000209@v.loewis.de>
	<20060613022935.GA28870@cthulhu.gerg.ca>
Message-ID: <ee2a432c0606182049k714ab527rb81340fcaa1309e5@mail.gmail.com>

On 6/12/06, Greg Ward <gward-1337f07a94b43060ff5c1ea922ed93d6 at python.net> wrote:
> [Guido]
> > While I am an enthusiastic supporter of several of those additions, I
> > am *not* in favor of the special status granted to software
> > contributed by certain developers, since it is a burden for all other
> > developers.
>
> [Martin]
> > Each maintainer should indicate whether he is happy with a "this is
> > part of Python" approach. If so, the entry should be removed from PEP
> > 360 (*); if not, the code should be removed from Python before beta 1.
>
> I very much identify with Phillip's gripe, but I gotta admit Guido has
> the compelling argument here.  Personally, I have noticed very few
> "rogue" changes to optparse.py over the years, and have quietly grumbled
> and backported most of them over to Optik for eventual re-merge to
> optparse.  (One or two got dropped because I had already fixed the
> problem differently in Optik.)

Also note it sucks for users.  For example, I recently closed an
optparse bug without even looking at it, since it has to go into the
optik tracker afaik (I think it was a doc issue).  Did the person
bother to go through the effort to submit the bug a second time?  I
don't know, but we know many people don't submit bugs the first time.
The process adds work to users too.

n

From pje at telecommunity.com  Mon Jun 19 05:59:21 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Sun, 18 Jun 2006 23:59:21 -0400
Subject: [Python-Dev] Switch statement
In-Reply-To: <20060618184500.F34E.JCARLSON@uci.edu>
References: <20060611010410.GA5723@21degrees.com.au>
	<17547.19802.361151.705599@montanaro.dyndns.org>
	<20060611010410.GA5723@21degrees.com.au>
Message-ID: <5.1.1.6.0.20060618235252.03a71680@sparrow.telecommunity.com>

At 06:56 PM 6/18/2006 -0700, Josiah Carlson wrote:
>The non-fast version couldn't actually work if it referenced any names,
>given current Python semantics for arbitrary name binding replacements.

Actually, one could consider "case" expressions to be computed at function 
definition time, the way function defaults are.  That would solve the 
problem of symbolic constants, or indeed any sort of expressions.

An alternate possibility would be to have them computed at first use and 
cached thereafter.

Either way would work, and both would allow multiple versions of the same 
switch statement to be spun off as closures without losing their "constant" 
nature or expressiveness.  It's just a question of which one is easier to 
explain.  Python already has both types of one-time initialization: 
function defaults are computed at definition time, and modules are only 
loaded once, the first time you import them.
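
To make that concrete, a rough sketch of the first option in today's Python
(not proposed syntax; RED and GREEN are made-up symbolic constants standing
in for case expressions):

# The case table is built once, at function definition time, exactly like a
# default argument -- rebinding RED or GREEN later doesn't affect it.
RED, GREEN = object(), object()

def handle(colour, _cases={RED: "stop", GREEN: "go"}):
    try:
        return _cases[colour]
    except KeyError:
        return "unknown"     # plays the role of the 'else' clause

print handle(RED)            # -> stop
print handle(object())       # -> unknown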


From ncoghlan at gmail.com  Mon Jun 19 06:05:33 2006
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 19 Jun 2006 14:05:33 +1000
Subject: [Python-Dev] PEP 338 vs PEP 328 - a limitation of the -m switch
In-Reply-To: <ca471dc20606181449o60f75837q69efb050d55371f@mail.gmail.com>
References: <44956640.3010003@iinet.net.au>	<5.1.1.6.0.20060618161140.031cb008@sparrow.telecommunity.com>	<5.1.1.6.0.20060618173509.01eab5c8@sparrow.telecommunity.com>
	<ca471dc20606181449o60f75837q69efb050d55371f@mail.gmail.com>
Message-ID: <4496228D.4070907@gmail.com>

Guido van Rossum wrote:
>> If it's not the package directory, perhaps it could be a copy of whatever
>> sys.path entry the package was found under - that wouldn't do anything but
>> make "nearby" imports faster.
> 
> But it could theoretically affect search order for other modules. I
> still see nothing wrong with "". After all that's also the default if
> you run a script using python <path/to/file.py .

No problem - inserting '' is what the switch does currently. A security 
conscious script should really be clobbering sys.path anyway so that it only 
contains the locations the script needs.

As for the other part (requiring absolute imports), I can put a footnote in 
the tutorial somewhere.

If anyone complains bitterly about the limitation, there's always 2.6 :)

Cheers,
Nick.


-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------
             http://www.boredomandlaziness.org

From ncoghlan at gmail.com  Mon Jun 19 06:21:04 2006
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 19 Jun 2006 14:21:04 +1000
Subject: [Python-Dev] Code coverage reporting.
In-Reply-To: <bbaeab100606182012g6aeb7ab5q5ec87d87a00107d9@mail.gmail.com>
References: <20060615171935.GA26179@caltech.edu>
	<bbaeab100606182012g6aeb7ab5q5ec87d87a00107d9@mail.gmail.com>
Message-ID: <44962630.4060806@gmail.com>

Brett Cannon wrote:
> But it does seem accurate; random checking of some modules that got high 
> but not perfect covereage all seem to be instances where dependency 
> injection would be required to get the tests to work since they were 
> based on platform-specific things.

There's something odd going on with __future__.py, though. The module level 
code all shows up as not executed, but the bodies of the two _Feature methods 
both show up as being run.

I'm curious as to how a function body can be executed without executing the 
function definition first :)

As far as making the comments/docstrings less obvious goes, grey is usually a 
good option for that.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------
             http://www.boredomandlaziness.org

From ncoghlan at gmail.com  Mon Jun 19 06:29:59 2006
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 19 Jun 2006 14:29:59 +1000
Subject: [Python-Dev] Numerical robustness, IEEE etc.
In-Reply-To: <E1FqzHB-00003X-UV@draco.cus.cam.ac.uk>
References: <E1FqzHB-00003X-UV@draco.cus.cam.ac.uk>
Message-ID: <44962847.50104@gmail.com>

Nick Maclaren wrote:
> 5) I am NOT offering to write a full floating-point emulator, though
> it would be easy enough and could provide repeatable, robust results.
> "Easy" does not mean "quick" :-(  Maybe when I retire.  Incidentally,
> experience from times of yore is that emulated floating-point would
> be fast enough that few, if any, Python users would notice.

Python 2.4's decimal module is, in essence, a floating point emulator based on 
the General Decimal Arithmetic specification.

If you want floating point mathematics that doesn't have insane platform 
dependent behaviour, the decimal module is the recommended approach. By the 
time Python 2.6 rolls around, we will hopefully have an optimized version 
implemented in C (that's being worked on already).
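
For example (a quick sketch; the precision and trap settings here are just
illustrative):

from decimal import Decimal, getcontext, DivisionByZero

getcontext().prec = 28                  # same result on every platform
print Decimal(1) / Decimal(7)           # -> 0.1428571428571428571428571429

getcontext().traps[DivisionByZero] = 0  # don't trap; return a special value
print Decimal(1) / Decimal(0)           # -> Infinity instead of an exception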

That said, I'm not clear on exactly what changes you'd like to make to the 
binary floating point type, so I don't know if I think they're a good idea or 
not :)

Regards,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------
             http://www.boredomandlaziness.org

From anthony at interlink.com.au  Mon Jun 19 06:38:07 2006
From: anthony at interlink.com.au (Anthony Baxter)
Date: Mon, 19 Jun 2006 14:38:07 +1000
Subject: [Python-Dev] When to branch release25-maint?
Message-ID: <200606191438.10751.anthony@interlink.com.au>

A question has been asked about branching release25-maint at the time 
of beta1. I was actually thinking about doing this for 2.5rc1 - once 
we're in release candidate stage we want to really be careful about 
checkins. I'm not sure it's worth branching at beta1 - it's a bit 
more work all round, vs what I suspect will be a small amount of 2.6 
work landing on the trunk nownownow. Also, I'd prefer people's cycles 
be spent on bughunting 2.5 rather than worrying about shiny new 
features for the release that's, what, 18 months away? 

Anyway, thought I'd open this up for discussion...

Anthony
-- 
Anthony Baxter     <anthony at interlink.com.au>
It's never too late to have a happy childhood.

From tjreedy at udel.edu  Mon Jun 19 07:01:33 2006
From: tjreedy at udel.edu (tjreedy)
Date: Mon, 19 Jun 2006 01:01:33 -0400
Subject: [Python-Dev] Numerical robustness, IEEE etc.
References: <E1FqzHB-00003X-UV@draco.cus.cam.ac.uk>
Message-ID: <e75b3m$o99$1@sea.gmane.org>


"Nick Maclaren" <nmm1 at cus.cam.ac.uk> wrote in message 
news:E1FqzHB-00003X-UV at draco.cus.cam.ac.uk...
> experience from times of yore is that emulated floating-point would
> be fast enough that few, if any, Python users would notice.

Perhaps you should enquire on the Python numerical and scientific computing 
lists to see how many feel differently.  I don't see how someone crunching 
numbers hours per day could not notice a slowdown.

tjr






From brett at python.org  Mon Jun 19 07:04:44 2006
From: brett at python.org (Brett Cannon)
Date: Sun, 18 Jun 2006 22:04:44 -0700
Subject: [Python-Dev] When to branch release25-maint?
In-Reply-To: <200606191438.10751.anthony@interlink.com.au>
References: <200606191438.10751.anthony@interlink.com.au>
Message-ID: <bbaeab100606182204g47715fb1rf8b0b1942722a1de@mail.gmail.com>

On 6/18/06, Anthony Baxter <anthony at interlink.com.au> wrote:
>
> A question has been asked about branching release25-maint at the time
> of beta1. I was actually thinking about doing this for 2.5rc1 - once
> we're in release candidate stage we want to really be careful about
> checkins. I'm not sure it's worth branching at beta1 - it's a bit
> more work all round, vs what I suspect will be a small amount of 2.6
> work landing on the trunk nownownow. Also, I'd prefer people's cycles
> be spent on bughunting 2.5 rather than worrying about shiny new
> features for the release that's, what, 18 months away?
>
> Anyway, thought I'd open this up for discussion...


Sounds reasonable to me.  Betas could still have more semantic changes for
bug fixes than a release candidate would allow, which would make branching
at b1 too early a step.

-Brett

From nnorwitz at gmail.com  Mon Jun 19 07:50:37 2006
From: nnorwitz at gmail.com (Neal Norwitz)
Date: Sun, 18 Jun 2006 22:50:37 -0700
Subject: [Python-Dev] setobject code
In-Reply-To: <36A93C28-6789-4623-ADAA-D15F1950C111@local>
References: <36A93C28-6789-4623-ADAA-D15F1950C111@local>
Message-ID: <ee2a432c0606182250o29e816c5hd41584ef49e9f33c@mail.gmail.com>

On 6/16/06, Alexander Belopolsky <alexander.belopolsky at gmail.com> wrote:
> I would like to share a couple of observations that I made as I
> studied the latest setobject implementation.

...

> 2. Type of several data members in dict-object and dict-entry structs
> were recently changed to Py_ssize_t . Whatever considerations
> prompted the change, they should probably apply to the similar
> members of set-object and set-entry structs as well.

Thanks for pointing that out.  Fixed in 47018.

n

From nnorwitz at gmail.com  Mon Jun 19 08:12:26 2006
From: nnorwitz at gmail.com (Neal Norwitz)
Date: Sun, 18 Jun 2006 23:12:26 -0700
Subject: [Python-Dev] current 2.5 issues
Message-ID: <ee2a432c0606182312q134033cci778c96f4d7842283@mail.gmail.com>

valgrind reports a problem when running test_doctest.  I haven't
spotted a problem with the code, but the report is consistent (hmm, I
thought there were 3 warnings, but now there's only 1):

==19291== Conditional jump or move depends on uninitialised value(s)
==19291==    at 0x49D8B5: maybe_call_line_trace (ceval.c:3260)
==19291==    by 0x493334: PyEval_EvalFrameEx (ceval.c:857)
==19291==    by 0x49C617: PyEval_EvalCodeEx (ceval.c:2832)
==19291==    by 0x492E61: PyEval_EvalCode (ceval.c:494)
==19291==    by 0x4C50B7: run_mod (pythonrun.c:1232)
==19291==    by 0x4C4F89: PyRun_StringFlags (pythonrun.c:1199)
==19291==    by 0x4A04F0: exec_statement (ceval.c:4196)
==19291==    by 0x4977BC: PyEval_EvalFrameEx (ceval.c:1665)
==19291==    by 0x49C617: PyEval_EvalCodeEx (ceval.c:2832)
==19291==    by 0x49EABE: fast_function (ceval.c:3661)
==19291==    by 0x49E763: call_function (ceval.c:3586)
==19291==    by 0x49A3B5: PyEval_EvalFrameEx (ceval.c:2269)

*********

Buildbot failures:

openbsd: test_ctypes fails
tru64 alpha: test_signal sometimes hangs
s/390: test_socket_ssl fails probably due to firewall issue in
cygwin: hopeless

debian hppa sometimes hangs when running a forking test (fork1,
wait[34], possibly subprocess).  I am working on a patch to help debug
this situation (already checked in once and reverted).  I can't
reproduce this failure.

*********

Some tests fail when run under regrtest.py -R 4:3: .  There are at
least these problems:

* test_logging
* test_optparse
* test_sqlite
* test_threaded_import


test test_logging crashed -- <type 'exceptions.KeyError'>:
<logging.StreamHandler instance at 0x184d420>

test test_optparse failed -- Traceback (most recent call last):
  File "/home/neal/build/python/svn/trunk-clean/Lib/test/test_optparse.py",
line 622, in test_float_default
    self.assertHelp(self.parser, expected_help)
  File "/home/neal/build/python/svn/trunk-clean/Lib/test/test_optparse.py",
line 202, in assertHelp
    actual_help + '"\n')
AssertionError: help text failure; expected:

test_sqlite
Traceback (most recent call last):
  File "/home/neal/build/python/svn/trunk-clean/Lib/sqlite3/test/userfunctions.py",
line 41, in func_raiseexception
    5/0
ZeroDivisionError: integer division or modulo by zero

test test_threaded_import crashed -- <type 'exceptions.KeyError'>:
'test.threaded_import_hangers'

From jcarlson at uci.edu  Mon Jun 19 09:28:03 2006
From: jcarlson at uci.edu (Josiah Carlson)
Date: Mon, 19 Jun 2006 00:28:03 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <5.1.1.6.0.20060618235252.03a71680@sparrow.telecommunity.com>
References: <20060618184500.F34E.JCARLSON@uci.edu>
	<5.1.1.6.0.20060618235252.03a71680@sparrow.telecommunity.com>
Message-ID: <20060619001044.F351.JCARLSON@uci.edu>


"Phillip J. Eby" <pje at telecommunity.com> wrote:
> At 06:56 PM 6/18/2006 -0700, Josiah Carlson wrote:
> >The non-fast version couldn't actually work if it referenced any names,
> >given current Python semantics for arbitrary name binding replacements.
> 
> Actually, one could consider "case" expressions to be computed at function 
> definition time, the way function defaults are.  That would solve the 
> problem of symbolic constants, or indeed any sort of expressions.

Using if/elif/else optimization precludes any non-literal constants, so
we would necessarily have to go with a switch/case for this semantic. It
seems as though it would work well, and wouldn't be fraught with
any of the gotchas that catch users like:
    def fcn(..., dflt={}, dflt2=[]):
...


> An alternate possibility would be to have them computed at first use and 
> cached thereafter.
> 
> Either way would work, and both would allow multiple versions of the same 
> switch statement to be spun off as closures without losing their "constant" 
> nature or expressiveness.  It's just a question of which one is easier to 
> explain.  Python already has both types of one-time initialization: 
> function defaults are computed at definition time, and modules are only 
> loaded once, the first time you import them.

I would go with the former rather than the latter, if only for
flexibility.

 - Josiah


From anthony at interlink.com.au  Mon Jun 19 10:04:09 2006
From: anthony at interlink.com.au (Anthony Baxter)
Date: Mon, 19 Jun 2006 18:04:09 +1000
Subject: [Python-Dev] TRUNK FREEZE IMMINENT FOR 2.5 BETA 1 - 00:00 UTC,
	20-JUNE-2006
Message-ID: <200606191804.13248.anthony@interlink.com.au>

The trunk will be FROZEN for 2.5b1 from 00:00UTC on Tuesday, 20th of 
June. That's about 16 hours from now. Please don't checkin while the 
trunk is frozen, unless you're one of the release team (me, Martin, 
Fred, Ronald). 

I'll send another note once we're done with the release. Please note 
that once this release is done, the trunk is in FEATURE FREEZE. No 
new features should be checked in without prior approval - checkins 
that violate this will quite probably get backed out. 

Once the beta is out, I expect that we'll get quite a bit more anal 
about any checkins that break the buildbots. Please, please make sure 
you run the test suite before checking in - and if you're at all 
concerned that your checkin might have strange platform dependencies, 
check the buildbot status page 
(http://www.python.org/dev/buildbot/trunk/) after your checkin to 
make sure it didn't break anything.

The plan at the moment is to branch the trunk for release25-maint when 
the first release candidate for 2.5 final is cut. This is currently 
scheduled for August 1st - about 6 weeks away. 

Thanks,
Anthony
-- 
Anthony Baxter     <anthony at interlink.com.au>
It's never too late to have a happy childhood.

From kristjan at ccpgames.com  Mon Jun 19 12:26:00 2006
From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_V=2E_J=F3nsson?=)
Date: Mon, 19 Jun 2006 10:26:00 -0000
Subject: [Python-Dev] Python 2.4 extensions require VC 7.1?
Message-ID: <129CEF95A523704B9D46959C922A280002A4CD8E@nemesis.central.ccp.cc>

This is, in fact, exactly what the Python trunk does right now.  This is done in exceptions.c
Kristján

-----Original Message-----
From: Scott Dial [mailto:scott+python-dev at scottdial.com] 
Sent: 17. júní 2006 12:54
To: Python Dev
Cc: "Martin v. Löwis"; Kristján V. Jónsson
Subject: Re: [Python-Dev] Python 2.4 extensions require VC 7.1?

I'm nobody but I don't find your argument compelling. I suggest you go
read: http://msdn2.microsoft.com/en-us/library/ksazx244.aspx

From kristjan at ccpgames.com  Mon Jun 19 12:43:55 2006
From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_V=2E_J=F3nsson?=)
Date: Mon, 19 Jun 2006 10:43:55 -0000
Subject: [Python-Dev] Python 2.4 extensions require VC 7.1?
Message-ID: <129CEF95A523704B9D46959C922A280002A4CD9B@nemesis.central.ccp.cc>

The signal() doc is rather vague on this point, since it doesn't define the available set
of signals.  It doesn't even say that a signal identifier is an integer.  But it says that it should return EINVAL if it "cannot satisfy the request".  It doesn't say "if the request is invalid", but I don't want to go into hairsplitting here.  So I could agree with you there.
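
(In C terms, the pattern the standard has in mind is roughly this -- just a
sketch, not Python code:)

#include <signal.h>
#include <errno.h>
#include <stdio.h>

/* Sketch only: "cannot satisfy the request" as seen by the caller --
   signal() returns SIG_ERR and sets errno to EINVAL for an unknown signal. */
static int
set_handler(int sig, void (*handler)(int))
{
    if (signal(sig, handler) == SIG_ERR) {
        if (errno == EINVAL)
            fprintf(stderr, "signal %d not supported here\n", sig);
        return -1;
    }
    return 0;
}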

But I completely disagree when you insist that Microsoft has broken the C library.  What
they have done is add parameter validation, and thus simply add code in the "undefined"
domain.
I would also like to point out that, again apart from signal(), you are relying on undefined behaviour of fopen and others.  It may well cause a crash on one of your other platforms one day; you have no way of knowing.  VS2005 just pointed that out to you.

So, my suggestion is that instead of going all defensive and shouting "breakage", why not simply fix those very dubious CRT usage patterns?  Think of it as lint.

Also, consider this:  in the case of file() and strftime() we are passing in dynamic strings.  The strings are not under Python's control.  Normally these are static strings, under the control of the developer, who has the function reference on hand, knows what he wants, and so on.  Yet here we are passing in any old strings.  There is a huge undefined domain there, and we should be very concerned about that.  It is a wonder we haven't seen these functions crash before.

I would like to see the question of whether or not to use VS2005 be decided purely on the merit of what is most practical (and useful) for people, rather than on emotional arguments with loaded terms like "breakage" and personal feelings towards Microsoft.

(And by the way, why does pythoncore.dll mess with signal() anyway?  Shouldn't that be python.exe?  I don't want a DLL that I embed to mess with my signal handling.)

Cheers,

Kristján

-----Original Message-----
From: "Martin v. L?wis" [mailto:martin at v.loewis.de] 
Sent: 17. j?n? 2006 13:28
To: Scott Dial
Cc: Python Dev; Kristj?n V. J?nsson
Subject: Re: [Python-Dev] Python 2.4 extensions require VC 7.1?


Sure, I can *make* the library conform to C 99. I could also write my own C library entirely to achieve that effect. The fact remains that VS 2005 violates standard C where VS 2003 and earlier did not:
A conforming program will abort, instead of completing successfully.

From kristjan at ccpgames.com  Mon Jun 19 13:12:34 2006
From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_V=2E_J=F3nsson?=)
Date: Mon, 19 Jun 2006 11:12:34 -0000
Subject: [Python-Dev] unicode imports
Message-ID: <129CEF95A523704B9D46959C922A280002A4CDB4@nemesis.central.ccp.cc>


Ideally, I would like Python to "simply work."  It seems to me that it is mostly a question of time before all modern platforms offer unicode filesystems and hence unicode APIs.  IMHO, stuff like the importer should really be written in native unicode and fall back to ASCII only on platforms that don't support it.  Is WITH_UNICODE ever left undefined these days?

And sure, module names need to be Python identifiers (thus ASCII), although I wouldn't be surprised if that restriction were lifted in the not too distant future :)  After all, we support the utf-8 encoding of source files, but I cannot write "kristján = 1".  But that's for a future PEP.

Kristján

-----Original Message-----
From: Nick Coghlan [mailto:ncoghlan at gmail.com] 
Sent: 16. júní 2006 15:30
To: Kristján V. Jónsson
Cc: Python Dev
Subject: Re: [Python-Dev] unicode imports

Kristján V. Jónsson wrote:
> A cursory glance at import.c shows that the import mechanism is fairly 
> complicated, and riddled with "char *path" thingies, and manual string 
> arithmetic.  Do you have any suggestions on a clean way to unicodify 
> the import mechanism?

Can you install a PEP 302 path hook and importer/loader that can handle path entries that are Unicode strings? (I think this would end up being the parallel implementation you were talking about, though)

If the code that traverses sys.path and sys.path_hooks is itself unicode-unaware (I don't remember if it is or isn't), then you might be able to trick it by poking a Unicode-savvy importer directly into the path_importer_cache for affected Unicode paths.

One issue is that the package and file names still have to be valid Python identifiers, which means ASCII. Unicode would be, at best, permitted only in the path entries.
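
(For what it's worth, a bare-bones sketch of the shape such a hook might
take -- untested, the class name is invented, and it only handles plain
top-level .py modules, no packages or extension modules:)

import imp, os, sys

class UnicodePathImporter(object):
    """Bare-bones PEP 302 importer for unicode sys.path entries."""

    def __init__(self, path_entry):
        if not isinstance(path_entry, unicode):
            raise ImportError          # decline; let the normal machinery run
        self.path_entry = path_entry

    def _source_for(self, fullname):
        return os.path.join(self.path_entry, fullname.split('.')[-1] + u'.py')

    def find_module(self, fullname, path=None):
        if os.path.isfile(self._source_for(fullname)):
            return self
        return None

    def load_module(self, fullname):
        if fullname in sys.modules:
            return sys.modules[fullname]
        filename = self._source_for(fullname)
        source = open(filename, 'rU').read()   # open() copes with unicode names
        mod = sys.modules.setdefault(fullname, imp.new_module(fullname))
        mod.__file__ = filename
        mod.__loader__ = self
        exec compile(source, filename.encode('utf-8'), 'exec') in mod.__dict__
        return mod

sys.path_hooks.append(UnicodePathImporter)
sys.path_importer_cache.clear()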

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------
             http://www.boredomandlaziness.org

From kristjan at ccpgames.com  Mon Jun 19 13:27:36 2006
From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_V=2E_J=F3nsson?=)
Date: Mon, 19 Jun 2006 11:27:36 -0000
Subject: [Python-Dev] unicode imports
Message-ID: <129CEF95A523704B9D46959C922A280002A4CDC7@nemesis.central.ccp.cc>

Well, my particular test uses  u'c:/tmp/\u814c'
If that cannot be encoded in mbcs, then mbcs isn't useful.
Note that this is both an issue of python being able to run from an arbitrary install position, and also the ability of users to import and run scripts from any other arbitrary directory.
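
(For concreteness, the check I'm doing amounts to this -- Windows-only,
since it relies on the "mbcs" codec:)

import sys

path = u'c:/tmp/\u814c'
encoded = path.encode('mbcs')          # best-effort ANSI encoding on Windows
if encoded.decode('mbcs') != path:     # round trip fails if chars were replaced
    print 'path cannot be represented in the ANSI code page'
else:
    sys.path.append(encoded)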

Kristján

-----Original Message-----
From: Neil Hodgson [mailto:nyamatongwe at gmail.com] 
Sent: 17. júní 2006 04:53
To: Kristján V. Jónsson
Cc: Python Dev
Subject: Re: [Python-Dev] unicode imports

Kristján V. Jónsson:

> Although python has had full unicode support for filenames for a long 
> time on selected platforms (e.g. Windows), there is one glaring 
> deficiency:  It cannot import from paths containing unicode.  I've 
> tried creating folders with chinese characters and adding them to path, to no avail.
> The standard install path in chinese distributions can be with a 
> non-ANSI path, and installing an embedded python application there will break it.

   It should be unusual for a Chinese installation to use an install path that can not be represented in MBCS. Try encoding the install directory into MBCS before adding it to sys.path.

   Neil

From kristjan at ccpgames.com  Mon Jun 19 13:36:58 2006
From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_V=2E_J=F3nsson?=)
Date: Mon, 19 Jun 2006 11:36:58 -0000
Subject: [Python-Dev] unicode imports
Message-ID: <129CEF95A523704B9D46959C922A280002A4CDD1@nemesis.central.ccp.cc>

I don't have specific information on the machines.  We didn't try very hard to get things to work with 2.3 since we simply assumed it would work automatically when we upgraded to a more mature 2.4.
I could try to get more info, but it would be 2.3-specific.  Have there been any changes since then?

Note that it may not go into Program Files at all.  Someone may want to install his modules in a folder named in honour of his mother.

Also, I really would like to see a general solution that doesn't assume that the path name can somehow be transmuted to an ASCII name.  Users are unpredictable.  When you have a wide distribution, you come up against all kinds of problems (currently we have around 500,000 users in China).
Also, relying on locale settings is not acceptable.  My machine here has the Icelandic locale, yet I need to be able to set up and use a Chinese install.  Likewise, many machines in China will have an English locale.  A default encoding and locale is essentially an evil hack in our increasingly global environment.  We have converted more or less our entire code base to unicode because keeping track of encoded strings is simply unworkable in a large project.

Funny that no other platforms could benefit from a unicode import path.  Does that mean that Windows will reign supreme?  Please explain.

Cheers,

Kristján

-----Original Message-----
From: "Martin v. L?wis" [mailto:martin at v.loewis.de] 
Sent: 17. j?n? 2006 08:42
To: Kristj?n V. J?nsson
Cc: Python Dev
Subject: Re: [Python-Dev] unicode imports

Kristján V. Jónsson wrote:
> The standard install path in chinese distributions can be with a 
> non-ANSI path, and installing an embedded python application there 
> will break it.

I very much doubt this. On a Chinese system, the Program Files folder likely has a non-*ASCII* name, but it will have a fine *ANSI* name, as the ANSI code page on that system should be either 936 (simplified
Chinese) or 950 (traditional Chinese) - unless the system is misconfigured.

Can you please report what the path is, what the precise name of the operating system is, and what the system locale and the system code page are?

> A completely parallel implementation on the sys.path[i] level?

You should also take a look at what the 8.3 name of the path is.
I really cannot believe that the path is inaccessible to DOS programs.

> Are there other platforms beside Windows that would profit from this?

No.

Regards,
Martin

From theller at python.net  Mon Jun 19 13:58:15 2006
From: theller at python.net (Thomas Heller)
Date: Mon, 19 Jun 2006 13:58:15 +0200
Subject: [Python-Dev] unicode imports
In-Reply-To: <129CEF95A523704B9D46959C922A280002A4CDD1@nemesis.central.ccp.cc>
References: <129CEF95A523704B9D46959C922A280002A4CDD1@nemesis.central.ccp.cc>
Message-ID: <e763gu$3aq$1@sea.gmane.org>

It should be noted that I once started to convert the import machinery
to be fully unicode aware.  As far as I can tell, a *lot* has to be changed
to make this work.

I started with refactoring Python/import.c, but nobody responded to the question
whether such a refactoring patch would be accepted or not.

Thomas


From kristjan at ccpgames.com  Mon Jun 19 13:59:41 2006
From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_V=2E_J=F3nsson?=)
Date: Mon, 19 Jun 2006 11:59:41 -0000
Subject: [Python-Dev] PyString_FromFormat
Message-ID: <129CEF95A523704B9D46959C922A280002A4CDE1@nemesis.central.ccp.cc>

One thing I have often lamented not having in PyString_FromFormat (and cousins, like PyErr_Format) is the ability to integrate PyObject pointers.  Adding something like %S and %R (for str() and repr() respectively) seems very useful to me.  Is there any reason why this isn't there?

Cheers,

Kristján

From mwh at python.net  Mon Jun 19 14:04:46 2006
From: mwh at python.net (Michael Hudson)
Date: Mon, 19 Jun 2006 13:04:46 +0100
Subject: [Python-Dev] Numerical robustness, IEEE etc.
In-Reply-To: <E1FqzHB-00003X-UV@draco.cus.cam.ac.uk> (Nick Maclaren's
	message of "Thu, 15 Jun 2006 22:21:17 +0100")
References: <E1FqzHB-00003X-UV@draco.cus.cam.ac.uk>
Message-ID: <2mmzc9l5b5.fsf@starship.python.net>

Nick Maclaren <nmm1 at cus.cam.ac.uk> writes:

> As I have posted to comp.lang.python, I am not happy with Python's
> numerical robustness - because it basically propagates the 'features'
> of IEEE 754 and (worse) C99. 

That's not really how I would describe the situation today.

> Yes, it's better, but I would like to make it a LOT better.  I
> already have a more robust version of 2.4.2, but there are some
> problems, technical and political.  I should appreciate advice.

I would like to see Tim Peters' opinion on all this.

> 1) Should I start off by developing a testing version, to give people
> a chance to scream at me, or write a PEP?

"Yes"

Or did you want advice on which?  I think a PEP would make a lot of
sense.

> 2) Because some people are dearly attached to the current behaviour,
> warts and all, and there is a genuine quandary of whether the 'right'
> behaviour is trap-and-diagnose, propagate-NaN or whatever-IEEE-754R-
> finally-specifies (let's ignore C99 and Java as beyond redemption),

Why?  Maybe it's clear to you, but it's not totally clear to me, and
in any case the discussion would be better informed for not being too
dismissive.

> there might well need to be options.  These can obviously be done by
> a command-line option, an environment variable or a float method.
> There are reasons to disfavour the last, but all are possible.  Which
> is the most Pythonesque approach?

I have heard Tim say that there are people who would dearly like to be
able to choose.  Environment variables and command line switches are
not Pythonic.

> 3) I am rather puzzled by the source control mechanism.  Are commit
> privileges needed to start a project like this in the main tree?

Yes.  You can also use svk, I believe, but I don't really know
anything about that.

> 4) Is there a Python hacking document?  Specifically, if I want to
> add a new method to a built-in type, is there any guide on where to
> start?

Don't think so.  There's some stuff under http://www.python.org/dev/
but nothing that would cover this.

> 5) I am NOT offering to write a full floating-point emulator, though
> it would be easy enough and could provide repeatable, robust results.
> "Easy" does not mean "quick" :-(  Maybe when I retire.  Incidentally,
> experience from times of yore is that emulated floating-point would
> be fast enough that few, if any, Python users would notice.

Maybe you're right, but I personally doubt this last bit.



Speaking more generally, it would be nice if you gave more
explanations of why the changes you want to make are desirable -- and
for that matter, more details about what they actually are.

I'm interested in making Python's floating point story better, and
have worked on a few things for Python 2.5 -- such as
pickling/marshalling of special values -- but I'm not really a
numerical programmer and don't like to guess what they need.

Cheers,
mwh

-- 
  Python enjoys making tradeoffs that drive *someone* crazy <wink>.
                                       -- Tim Peters, comp.lang.python

From mwh at python.net  Mon Jun 19 14:06:06 2006
From: mwh at python.net (Michael Hudson)
Date: Mon, 19 Jun 2006 13:06:06 +0100
Subject: [Python-Dev] When to branch release25-maint?
In-Reply-To: <200606191438.10751.anthony@interlink.com.au> (Anthony Baxter's
	message of "Mon, 19 Jun 2006 14:38:07 +1000")
References: <200606191438.10751.anthony@interlink.com.au>
Message-ID: <2mirmxl58x.fsf@starship.python.net>

Anthony Baxter <anthony at interlink.com.au> writes:

> A question has been asked about branching release25-maint at the time 
> of beta1. I was actually thinking about doing this for 2.5rc1 - once 
> we're in release candidate stage we want to really be careful about 
> checkins. I'm not sure it's worth branching at beta1 - it's a bit 
> more work all round, vs what I suspect will be a small amount of 2.6 
> work landing on the trunk nownownow. Also, I'd prefer people's cycles 
> be spent on bughunting 2.5 rather than worrying about shiny new 
> features for the release that's, what, 18 months away? 
> 
> Anyway, thought I'd open this up for discussion...

I agree with you.  If people want to work on new features, they can
create branches for that -- by default, bug fixes should go into 2.5
without further effort.

Cheers,
mwh

-- 
  I wouldn't trust the Anglo-Saxons for much anything else.  Given
  the way English is spelled, who could trust them on _anything_ that
  had to do with writing things down, anyway?
                                        -- Erik Naggum, comp.lang.lisp

From mal at egenix.com  Mon Jun 19 14:09:02 2006
From: mal at egenix.com (M.-A. Lemburg)
Date: Mon, 19 Jun 2006 14:09:02 +0200
Subject: [Python-Dev] unicode imports
In-Reply-To: <e763gu$3aq$1@sea.gmane.org>
References: <129CEF95A523704B9D46959C922A280002A4CDD1@nemesis.central.ccp.cc>
	<e763gu$3aq$1@sea.gmane.org>
Message-ID: <449693DE.90709@egenix.com>

Thomas Heller wrote:
> It should be noted that I once started to convert the import machinery
> to be fully unicode aware.  As far as I can tell, a *lot* has to be changed
> to make this work.
> 
> I started with refactoring Python/import.c, but nobody responded to the question
> whether such a refactoring patch would be accepted or not.

Perhaps someone should start a PEP on this subject ?!
(not me, though :-)

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 19 2006)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________

::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! ::::

From benji at benjiyork.com  Mon Jun 19 14:37:30 2006
From: benji at benjiyork.com (Benji York)
Date: Mon, 19 Jun 2006 08:37:30 -0400
Subject: [Python-Dev] Code coverage reporting.
In-Reply-To: <bbaeab100606182012g6aeb7ab5q5ec87d87a00107d9@mail.gmail.com>
References: <20060615171935.GA26179@caltech.edu>
	<bbaeab100606182012g6aeb7ab5q5ec87d87a00107d9@mail.gmail.com>
Message-ID: <44969A8A.6000401@benjiyork.com>

Brett Cannon wrote:
> But it does seem accurate; random checking of some modules that got high 
> but not perfect covereage all seem to be instances where dependency 
> injection would be required to get the tests to work since they were 
> based on platform-specific things.

> I don't know if we need it hooked into the buildbots (unless it is dirt 
> cheap to generate the report).

It would be interesting to combine the coverage over several platforms 
and report that.
--
Benji York

From pje at telecommunity.com  Mon Jun 19 14:51:40 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Mon, 19 Jun 2006 08:51:40 -0400
Subject: [Python-Dev] Switch statement
In-Reply-To: <20060619001044.F351.JCARLSON@uci.edu>
References: <5.1.1.6.0.20060618235252.03a71680@sparrow.telecommunity.com>
	<20060618184500.F34E.JCARLSON@uci.edu>
	<5.1.1.6.0.20060618235252.03a71680@sparrow.telecommunity.com>
Message-ID: <5.1.1.6.0.20060619084658.03266a90@sparrow.telecommunity.com>

At 12:28 AM 6/19/2006 -0700, Josiah Carlson wrote:

>"Phillip J. Eby" <pje at telecommunity.com> wrote:
> > At 06:56 PM 6/18/2006 -0700, Josiah Carlson wrote:
> > >The non-fast version couldn't actually work if it referenced any names,
> > >given current Python semantics for arbitrary name binding replacements.
> >
> > Actually, one could consider "case" expressions to be computed at function
> > definition time, the way function defaults are.  That would solve the
> > problem of symbolic constants, or indeed any sort of expressions.
>
>Using if/elif/else optimization precludes any non-literal constants, so
>we would necessarily have to go with a switch/case for this semantic. It
>seems as though it would work well, and wouldn't be fraught with
>any of the gotchas that catch users like:
>     def fcn(..., dflt={}, dflt2=[]):
>...
>
>
> > An alternate possibility would be to have them computed at first use and
> > cached thereafter.
> >
> > Either way would work, and both would allow multiple versions of the same
> > switch statement to be spun off as closures without losing their 
> "constant"
> > nature or expressiveness.  It's just a question of which one is easier to
> > explain.  Python already has both types of one-time initialization:
> > function defaults are computed at definition time, and modules are only
> > loaded once, the first time you import them.
>
>I would go with the former rather than the latter, if only for
>flexibility.

There's no difference in flexibility.  In either case, the dictionary 
should be kept in a cell in the function's closure, not with the code 
object.  It would simply be a difference in *when* the values were 
computed, and by which code object.  To be done at function definition 
time, the enclosing code block would have to do it, which would be sort of 
weird from a compiler perspective, and there would be an additional problem 
with getting the line number tables correct.  But that's going to be tricky 
no matter which way it's done.
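
In pure-Python terms the difference is roughly this (just a sketch of *when* 
the case dict gets built, with made-up case values, not of anything the 
compiler would actually emit):

     # Definition time: the case values are evaluated when the function
     # is defined, exactly like default argument values.
     def dispatch_def_time(value, _cases={1: 'one', 2: 'two'}):
         return _cases.get(value, 'default')

     # First use: the case values are evaluated the first time the switch
     # is reached, then cached in what amounts to a closure cell.
     def make_dispatcher():
         cache = []                      # stands in for the closure cell
         def dispatch(value):
             if not cache:
                 cache.append({1: 'one', 2: 'two'})
             return cache[0].get(value, 'default')
         return dispatch

Either way the dict outlives any single call, which is what makes the 
dispatch fast after the first time through.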


From walter at livinglogic.de  Mon Jun 19 15:07:47 2006
From: walter at livinglogic.de (=?ISO-8859-1?Q?Walter_D=F6rwald?=)
Date: Mon, 19 Jun 2006 15:07:47 +0200
Subject: [Python-Dev] Code coverage reporting.
In-Reply-To: <44969A8A.6000401@benjiyork.com>
References: <20060615171935.GA26179@caltech.edu>	<bbaeab100606182012g6aeb7ab5q5ec87d87a00107d9@mail.gmail.com>
	<44969A8A.6000401@benjiyork.com>
Message-ID: <4496A1A3.9070009@livinglogic.de>

Benji York wrote:

> Brett Cannon wrote:
>> But it does seem accurate; random checking of some modules that got high 
>> but not perfect coverage all seem to be instances where dependency 
>> injection would be required to get the tests to work since they were 
>> based on platform-specific things.
> 
>> I don't know if we need it hooked into the buildbots (unless it is dirt 
>> cheap to generate the report).
> 
> It would be interesting to combine the coverage over several platforms 
> and report that.

The code coverage report should include how often a line got executed,
not just whether it got executed at all. This makes it possible to see
hotspots in the code.
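
For what it's worth, the stdlib trace module can already collect per-line
execution counts; a rough sketch (the target module and output directory
here are made up):

    import trace

    tracer = trace.Trace(count=1, trace=0)
    tracer.run('import test.test_string')    # whatever we want to measure
    results = tracer.results()
    # Writes *.cover files with the execution count in the left margin.
    results.write_results(show_missing=True, coverdir='/tmp/pycoverage')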

BTW, if there's interest, I can change the code behind
http://coverage.livinglogic.de so that instead of importing the data
into a database, static HTML files are created, so that we can run the
job more often on one of the Python servers. Currently the job runs once
a month with

   ./python Lib/test/regrtest.py -T -N -R :: -uurlfetch,largefile,network,decimal

and takes about one hour to run the tests.

The source code is available from
http://styx.livinglogic.de/~walter/python/coverage/PythonCodeCoverage.py

Servus,
   Walter

From ncoghlan at gmail.com  Mon Jun 19 15:46:13 2006
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 19 Jun 2006 23:46:13 +1000
Subject: [Python-Dev] unicode imports
In-Reply-To: <129CEF95A523704B9D46959C922A280002A4CDD1@nemesis.central.ccp.cc>
References: <129CEF95A523704B9D46959C922A280002A4CDD1@nemesis.central.ccp.cc>
Message-ID: <4496AAA5.3010701@gmail.com>

Kristján V. Jónsson wrote:
> Funny that no other platforms could benefit from a unicode import path.
> Does that mean that windows will reign supreme?  Please explain.

As near as I can tell, other platforms use encoded strings with the normal 
(byte-based) posix file API, so the Python interpreter and the file system 
simply need to agree on the encoding (typically utf-8) in order for both 
filesystem access and importing from non-ASCII paths to work.

On Windows, though, most of the file system interaction code has had to be 
updated to use the wide-character API where possible. import.c is one of the 
few holdouts that relies entirely on the byte-based posix API.

If I had to put money on what's currently happening on your test machine, it's 
that import.c is trying to do u'c:/tmp/\u814c'.encode('mbcs'), getting 
'c:/tmp/?' and proceeding to do nothing useful with that path entry. Checking 
the result of sys.getfilesystemencoding() should be able to confirm that.
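
Concretely, something like this at the interactive prompt (the output is what 
I'd expect on a Western code page, not something verified on your machine):

    >>> import sys
    >>> sys.getfilesystemencoding()
    'mbcs'
    >>> u'c:/tmp/\u814c'.encode('mbcs')
    'c:/tmp/?'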

So it looks like it ain't really gonna work properly on Windows unless 
import.c is rewritten to use the Unicode-aware platform independent IO 
implementation in posixmodule.c.

Until that happens (hopefully by Python 2.6), I like MvL's suggestion - look 
at the 8.3 DOS name on the command prompt and put that into sys.path. ctypes 
and/or pywin32 should let you get at that information programmatically.
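
For example, something like this with ctypes (an untested sketch, Windows 
only; 260 is just MAX_PATH, and the final encode assumes the short name fits 
in the ANSI code page):

    import ctypes

    def dos_name(long_path):
        # Ask Windows for the short (8.3) form of a unicode path, which
        # should then be usable as a plain byte string on sys.path.
        buf = ctypes.create_unicode_buffer(260)
        if not ctypes.windll.kernel32.GetShortPathNameW(long_path, buf, 260):
            raise OSError("GetShortPathNameW failed for %r" % long_path)
        return buf.value.encode('mbcs')

    # sys.path.append(dos_name(u'c:/tmp/\u814c'))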

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------
             http://www.boredomandlaziness.org

From ncoghlan at gmail.com  Mon Jun 19 16:10:09 2006
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 20 Jun 2006 00:10:09 +1000
Subject: [Python-Dev] Switch statement
In-Reply-To: <5.1.1.6.0.20060619084658.03266a90@sparrow.telecommunity.com>
References: <5.1.1.6.0.20060618235252.03a71680@sparrow.telecommunity.com>	<20060618184500.F34E.JCARLSON@uci.edu>	<5.1.1.6.0.20060618235252.03a71680@sparrow.telecommunity.com>
	<5.1.1.6.0.20060619084658.03266a90@sparrow.telecommunity.com>
Message-ID: <4496B041.1010406@gmail.com>

Phillip J. Eby wrote:
>>> Either way would work, and both would allow multiple versions of the same
>>> switch statement to be spun off as closures without losing their 
>> "constant"
>>> nature or expressiveness.  It's just a question of which one is easier to
>>> explain.  Python already has both types of one-time initialization:
>>> function defaults are computed at definition time, and modules are only
>>> loaded once, the first time you import them.
>> I would go with the former rather than the latter, if only for
>> flexibility.
> 
> There's no difference in flexibility.  In either case, the dictionary 
> should be kept in a cell in the function's closure, not with the code 
> object.  It would simply be a difference in *when* the values were 
> computed, and by which code object.  To be done at function definition 
> time, the enclosing code block would have to do it, which would be sort of 
> weird from a compiler perspective, and there would be an additional problem 
> with getting the line number tables correct.  But that's going to be tricky 
> no matter which way it's done.

Caching on first use would be the easiest to explain I think. Something like:

     if jump_dict is NULL:
         jump_dict = {FIRST_CASE  : JUMP_TARGET_1,
                      SECOND_CASE : JUMP_TARGET_2,
                      THIRD_CASE  : JUMP_TARGET_3}
     jump_to_case(value, jump_dict)
     ELSE_CLAUSE
     jump_to_end()

'jump_dict' would be held in a cell on the function's closure (since the 
calculated case values might depend on global or closure variables).
'jump_to_case' would be the new opcode, taking two arguments off the stack 
(the jump dictionary and the switch value), executing the matching case (if 
any) and jumping to the end of the switch statement.
If no case is matched, then fall through to the else clause and then jump to 
the end of the statement.

Then the optimisation of the case where all of the case expressions are 
literals would come under the purview of a constant-folding compiler 
automatically when it figures out that the dictionary literal only contains 
constants.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------
             http://www.boredomandlaziness.org

From pje at telecommunity.com  Mon Jun 19 16:29:49 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Mon, 19 Jun 2006 10:29:49 -0400
Subject: [Python-Dev] Switch statement
In-Reply-To: <4496B041.1010406@gmail.com>
References: <5.1.1.6.0.20060619084658.03266a90@sparrow.telecommunity.com>
	<5.1.1.6.0.20060618235252.03a71680@sparrow.telecommunity.com>
	<20060618184500.F34E.JCARLSON@uci.edu>
	<5.1.1.6.0.20060618235252.03a71680@sparrow.telecommunity.com>
	<5.1.1.6.0.20060619084658.03266a90@sparrow.telecommunity.com>
Message-ID: <5.1.1.6.0.20060619101721.01ea8108@sparrow.telecommunity.com>

At 12:10 AM 6/20/2006 +1000, Nick Coghlan wrote:
>Caching on first use would be the easiest to explain I think. Something like:
>
>      if jump_dict is NULL:
>          jump_dict = {FIRST_CASE  : JUMP_TARGET_1,
>                       SECOND_CASE : JUMP_TARGET_2,
>                       THIRD_CASE  : JUMP_TARGET_3}
>      jump_to_case(value, jump_dict)
>      ELSE_CLAUSE
>      jump_to_end()

Sadly, it's not *quite* that simple, due to the fact that co_lnotab requires 
line numbers to increase as bytecode offsets increase.  It would actually 
look more like:

      LOAD_DEREF jumpdictN
      JUMP_IF_FALSE  initfirstcase

do_switch:
      ...

initfirstcase:
      DUP_TOP
      # compute case value
      LOAD_CONST firstcaseoffset
      ROT_THREE
      STORE_SUBSCR
      JUMP_FORWARD initsecondcase

firstcaseoffset:
      first case goes here
      ...

initsecondcase:
      DUP_TOP
      # compute case value
      LOAD_CONST secondcaseoffset
      ROT_THREE
      STORE_SUBSCR
      JUMP_FORWARD initthirdcase

secondcaseoffset:
      second case goes here
      ...

...

initlastcase:
      DUP_TOP
      # compute case value
      LOAD_CONST lastcaseoffset
      ROT_THREE
      STORE_SUBSCR
      JUMP_ABSOLUTE doswitch

lastcaseoffset:
      last case goes here



The above shenanigans are necessary because the line numbers of the code 
for computing the case expressions have to be interleaved with the line 
numbers for the code for the case suites.

Of course, we could always change how co_lnotab works, which might be a 
good idea anyway.  As our compilation techniques become more sophisticated, 
it starts to get less and less likely that we will always want bytecode and 
source code to be in exactly the same sequence within a given code object.


From guido at python.org  Mon Jun 19 16:37:01 2006
From: guido at python.org (Guido van Rossum)
Date: Mon, 19 Jun 2006 07:37:01 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <20060618184500.F34E.JCARLSON@uci.edu>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<20060611010410.GA5723@21degrees.com.au>
	<20060618184500.F34E.JCARLSON@uci.edu>
Message-ID: <ca471dc20606190737p2ca1ebf4m5091321b6d9ee2e@mail.gmail.com>

On 6/18/06, Josiah Carlson <jcarlson at uci.edu> wrote:
> [...] Offering arbitrary expressions whose
> meaning can vary at runtime would kill any potential speedup (the
> ultimate purpose for having a switch statement), [...]

Um, is this dogma? Wouldn't a switch statement also be a welcome
addition to the readability? I haven't had the time to follow this
thread (still catching up on my Google 50%) but I'm not sure I agree
with the idea that a switch should only exist for speedup.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From kristjan at ccpgames.com  Mon Jun 19 16:39:17 2006
From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_V=2E_J=F3nsson?=)
Date: Mon, 19 Jun 2006 14:39:17 -0000
Subject: [Python-Dev] unicode imports
Message-ID: <129CEF95A523704B9D46959C922A280002A4CE4A@nemesis.central.ccp.cc>

Wouldn't it be possible then to emulate the Unix way?  Simply encode any unicode paths to utf-8, process them as normal, and then decode them just prior to the actual Windows I/O call?  It would make sense to just use the utf-8 encoding all the way for all platforms (since it is easy to work with), and then convert to the most appropriate encoding for the platform in question right at the end, e.g. unicode for Windows, mbcs for Windows without unicode support (Win98, which relies on the LC_LOCALE setting), and whatever 8-bit encoding is appropriate for the particular Unix platform.

Of course, once there, why not do it in unicode all the way up to that last point?  Unless there are platforms without wchar_t, that would make sense.

At any rate, I am trying to find a coding path of least resistance here.  Regardless of the timeline or acceptance in mainstream python for this feature, it is something I will have to patch in for our application.

Cheers,
Kristján

-----Original Message-----
From: Nick Coghlan [mailto:ncoghlan at gmail.com] 
Sent: 19. júní 2006 13:46
To: Kristján V. Jónsson
Cc: "Martin v. Löwis"; Python Dev
Subject: Re: [Python-Dev] unicode imports

Kristján V. Jónsson wrote:
> Funny that no other platforms could benefit from a unicode import path.
> Does that mean that windows will reign supreme?  Please explain.

As near as I can tell, other platforms use encoded strings with the normal
(byte-based) posix file API, so the Python interpreter and the file system simply need to agree on the encoding (typically utf-8) in order for both filesystem access and importing from non-ASCII paths to work.

From guido at python.org  Mon Jun 19 16:45:39 2006
From: guido at python.org (Guido van Rossum)
Date: Mon, 19 Jun 2006 07:45:39 -0700
Subject: [Python-Dev] unicode imports
In-Reply-To: <129CEF95A523704B9D46959C922A280002A4CC98@nemesis.central.ccp.cc>
References: <129CEF95A523704B9D46959C922A280002A4CC98@nemesis.central.ccp.cc>
Message-ID: <ca471dc20606190745k48bce466kd4b2d78b3e33bead@mail.gmail.com>

On 6/16/06, Kristján V. Jónsson <kristjan at ccpgames.com> wrote:
> Although python has had full unicode support for filenames for a long time
> on selected platforms (e.g. Windows), there is one glaring deficiency:  It
> cannot import from paths containing unicode.  I've tried creating folders
> with chinese characters and adding them to path, to no avail.

I don't know exactly where this discussion is heading at this point,
but I think it's clear that there's a real (though -- yet -- rare)
problem, for which currently only ugly work-arounds exist. I'm not
convinced that it occurs on other platforms than Windows -- everyone
else seems to use UTF-8 for pathnames, while Windows is stuck with
code pages and other crap, and the only reasonable way to access
Unicode pathnames is via the Windows-specific Unicode API (which is
why import is the last place where this isn't easily solved, as the
import machinery is completely 8-bit-based).

Has it been determined yet whether the DOS 8+3 filename cannot be used
as a workaround?

Perhaps it would be good enough to wait for Py3k? That will have pure
Unicode strings and the import machinery will be completely rewritten
anyway. (And I wouldn't be surprised if that rewrite were to use pure
Python code.) Py3k will be released later than Python 2.6, but most
likely before 2.7.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Mon Jun 19 16:59:57 2006
From: guido at python.org (Guido van Rossum)
Date: Mon, 19 Jun 2006 07:59:57 -0700
Subject: [Python-Dev] PyString_FromFormat
In-Reply-To: <129CEF95A523704B9D46959C922A280002A4CDE1@nemesis.central.ccp.cc>
References: <129CEF95A523704B9D46959C922A280002A4CDE1@nemesis.central.ccp.cc>
Message-ID: <ca471dc20606190759x9c42894xd1e7a60bebc668f@mail.gmail.com>

On 6/19/06, Kristján V. Jónsson <kristjan at ccpgames.com> wrote:
> One thing I have often lamented not having in PyString_FromFormat (and cousins,
> like PyErr_Format) is the ability to integrate PyObject pointers.  Adding
> something like %S and %R (for str() and repr() respectively) seems very
> useful to me.  Is there any reason why this isn't there?

Asking "why" a particular feature is omitted rarely is a good starting
point. We collectively probably don't remember, or misremember the
discussion if there was any; the most likely reason is simply that
nobody thought it was useful at the time, and nobody who thought it
*was* useful put enough effort in to provide the feature.

If I had to make a guess, %S and %R as you propose have the very real
possibility to fail. PyString_FromFormat() currently only fails if it
runs out of memory. This is especially helpful for PyErr_Format() --
the last thing you want to happen during formatting of an error message
is to get an error in the formatting. But there are always ways to
handle that if the use case is strong enough.

If you want to gather use cases, you could scour the Python source
code for calls to either API immediately preceded by a call to
PyObject_Str/Repr() to produce a string to be included into the
message. If you find many, you are not alone and you have a good use
case.

Personally, I think it's not worth the trouble. But I wouldn't
necessarily reject a patch (not that I'm in the business of accepting
individual patches any more -- others will weigh in there).

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Mon Jun 19 17:18:15 2006
From: guido at python.org (Guido van Rossum)
Date: Mon, 19 Jun 2006 08:18:15 -0700
Subject: [Python-Dev] Misleading error message from
	PyObject_GenericSetAttr
In-Reply-To: <d38f5330606141423r3a03478hdc1729a6aac44735@mail.gmail.com>
References: <d38f5330606141423r3a03478hdc1729a6aac44735@mail.gmail.com>
Message-ID: <ca471dc20606190818t2c4d84d0lc16cb7ac025436ec@mail.gmail.com>

On 6/14/06, Alexander Belopolsky <alexander.belopolsky at gmail.com> wrote:
> When an extension type Foo defines tp_getattr, but leaves tp_setattr
> NULL, an attempt to set an attribute bar results in an AttributeError
> with the message "'Foo' object has no attribute 'bar'".  This message
> is misleading because the object may have the attribute 'bar' as
> implemented in tp_getattr.  It would be better to change the message
> to "'Foo' object has only read-only attributes (assign to .bar)" as in
> the case tp_setattro == tp_setattr == NULL in PyObject_SetAttr.

I agree. Can you submit a patch to SF please?

> I've also noticed that the exceptions raised from PyObject_SetAttr are
> TypeErrors. Shouldn't PyObject_GenericSetAttr raise a TypeError if
> tp_setattr is null but tp_getattr is not?  This would be consistent
> with the errors from read-only descriptors.

Attempting to obtain complete consistency between TypeError and
AttributeError is hopeless. But if you want to submit a patch to
reduce a particular bit of inconsistency (without increasing it
elsewhere) it might well be accepted.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Mon Jun 19 17:21:40 2006
From: guido at python.org (Guido van Rossum)
Date: Mon, 19 Jun 2006 08:21:40 -0700
Subject: [Python-Dev] Bug: xml.dom.pulldom never gives you END_DOCUMENT
	events with an Expat parser
In-Reply-To: <a9699fd20606111527s46d882b3nddf27d0e6da3632a@mail.gmail.com>
References: <a9699fd20606111527s46d882b3nddf27d0e6da3632a@mail.gmail.com>
Message-ID: <ca471dc20606190821t3b5c771aw4f43fa5cba073258@mail.gmail.com>

Hm... Perhaps the xml-sig would be a better place to discuss this?

On 6/11/06, Thomas Broyer <t.broyer at gmail.com> wrote:
> Hi,
>
> First, sorry for posting this here, I closed my SourceForge account a
> few months ago and I can't get it reopened...
>
> I'm using python 2.2.1 but a diff on SVN showed that there was no
> change at this level, so the following bug should still be there in
> current versions (I'll try with a 2.4 at work tomorrow). On my
> machine, xml.sax.make_parser returns an
> xml.sax.expatreader.ExpatParser instance.
>
> The problem is: I'm never given END_DOCUMENT events.
>
> Code to reproduce:
>
> from xml.dom.pulldom import parseString
> reader = parseString('<element attribute="value">text</element>')
> # The following 2 lines will produce, in order:
> # START_DOCUMENT, START_ELEMENT, TEXT, END_ELEMENT
> # Note the lack of the END_DOCUMENT event.
> for event,node in reader:
>    print event
> # The following line will get an END_DOCUMENT event
> print reader.getEvent()[0]
> # The following line will throw a SAXParseException,
> # because the SAX parser's close method has been
> # called twice
> print reader.getEvent()[0]
>
>
> Cause:
>
> The xml.dom.pulldom.DOMEventStream.getEvent method, when it has no
> more event in its internal stack, calls the SAX parser's close()
> method (which is OK) then immediately returns 'None', ignoring any
> event that could have been generated by the call to the close()
> method. If you call getEvent later, it will send you the remaining
> events until there are no more left, and then will call the SAX
> parser's close() method again, causing a SAXParseException.
> Because expat (and maybe other parsers too) has no way to know when the
> document ends, it generates the endDocument/END_DOCUMENT event only
> when explicitly told that the XML chunk is the final one (i.e. when
> the close() method is called).
>
>
> Proposed fix:
>
> Add a "parser_closed" attribute to the DOMEventStream class,
> initialized to "False". After having called self.parser.close() in the
> xml.dom.pulldom.DOMEventStream.getEvent method, immediately set this
> "parser_closed" attribute to True and proceed. Finally, at the
> beginning of the "while" loop, immediately return "None" if
> "parser_closed" is "True" to prevent a second call to
> self.parser.close().
> With this change, any call to getEvent when there are no event left
> will return None and never throw an exception, which I think is the
> expected behavior.
>
>
> Proposed code:
>
> The "closed" attribute is initialized in the "__init__" method:
>     def __init__(self, stream, parser, bufsize):
>         self.stream = stream
>         self.parser = parser
>         self.parser_closed = False
>         self.bufsize = bufsize
>         if not hasattr(self.parser, 'feed'):
>             self.getEvent = self._slurp
>         self.reset()
>
> The "getEvent" method becomes:
>     def getEvent(self):
>         # use IncrementalParser interface, so we get the desired
>         # pull effect
>         if not self.pulldom.firstEvent[1]:
>             self.pulldom.lastEvent = self.pulldom.firstEvent
>         while not self.pulldom.firstEvent[1]:
>             if self.parser_closed:
>                 return None
>             buf = self.stream.read(self.bufsize)
>             if buf:
>                 self.parser.feed(buf)
>             else:
>                 self.parser.close()
>                 self.parser_closed = True
>         rc = self.pulldom.firstEvent[1][0]
>         self.pulldom.firstEvent[1] = self.pulldom.firstEvent[1][1]
>         return rc
>
> The same problem seems to exist in the
> xml.dom.pulldom.DOMEventStream._slurp method, when the SAX parser is
> not an IncrementalParser, as the parser's close() method is never
> called. I suggest adding a call to the close() method in there.
> However, as I only have expat as an option, which implements
> IncrementalParser, I can't test it...
> The _slurp method would become:
>     def _slurp(self):
>         """ Fallback replacement for getEvent() using the
>             standard SAX2 interface, which means we slurp the
>             SAX events into memory (no performance gain, but
>             we are compatible to all SAX parsers).
>         """
>         self.parser.parse(self.stream)
>         self.parser.close()
>         self.getEvent = self._emit
>         return self._emit()
> The _emit method raises exceptions when there are no events left, so I
> propose changing it to:
>     def _emit(self):
>         """ Fallback replacement for getEvent() that emits
>             the events that _slurp() read previously.
>         """
>         if not self.pulldom.firstEvent[1]:
>             return None
>         rc = self.pulldom.firstEvent[1][0]
>         self.pulldom.firstEvent[1] = self.pulldom.firstEvent[1][1]
>         return rc
>
> Hope this helps.
>
> --
> Thomas Broyer
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org
>


-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From rhettinger at ewtllc.com  Mon Jun 19 17:52:00 2006
From: rhettinger at ewtllc.com (Raymond Hettinger)
Date: Mon, 19 Jun 2006 08:52:00 -0700
Subject: [Python-Dev] setobject code
In-Reply-To: <36A93C28-6789-4623-ADAA-D15F1950C111@local>
References: <36A93C28-6789-4623-ADAA-D15F1950C111@local>
Message-ID: <4496C820.4010908@ewtllc.com>

Alexander Belopolsky wrote:

>1. Is there a reason not to have PySet_CheckExact, given that  
>PyFrozenSet_CheckExact exists? Similarly, why PyAnySet_Check, but no  
>PySet_Check or PyFrozenSet_Check?
>  
>
If you NEED PySet_CheckExact, then say so.  Adding it is trivial.
Each of the six combinations needs to be evaluated on its own
merits.  Do you have a use case where it is important to know that
you have a set, that it is not frozen, and that it is not a subtype?


Raymond


From rhettinger at ewtllc.com  Mon Jun 19 18:09:09 2006
From: rhettinger at ewtllc.com (Raymond Hettinger)
Date: Mon, 19 Jun 2006 09:09:09 -0700
Subject: [Python-Dev] Improve error msgs?
In-Reply-To: <bbaeab100606181252v382f58cfhdfd30a53d394fa64@mail.gmail.com>
References: <e6obkn$6ao$1@sea.gmane.org>
	<e70f2d$ssn$1@sea.gmane.org>	<e734nn$4qf$1@sea.gmane.org>
	<bbaeab100606181252v382f58cfhdfd30a53d394fa64@mail.gmail.com>
Message-ID: <4496CC25.8040107@ewtllc.com>

An HTML attachment was scrubbed...
URL: http://mail.python.org/pipermail/python-dev/attachments/20060619/3c88cea0/attachment-0001.html 

From rhettinger at ewtllc.com  Mon Jun 19 18:13:32 2006
From: rhettinger at ewtllc.com (Raymond Hettinger)
Date: Mon, 19 Jun 2006 09:13:32 -0700
Subject: [Python-Dev] When to branch release25-maint?
In-Reply-To: <200606191438.10751.anthony@interlink.com.au>
References: <200606191438.10751.anthony@interlink.com.au>
Message-ID: <4496CD2C.1020300@ewtllc.com>

Anthony Baxter wrote:

>A question has been asked about branching release25-maint at the time 
>of beta1. I was actually thinking about doing this for 2.5rc1 - once 
>we're in release candidate stage we want to really be careful about 
>checkins. I'm not sure it's worth branching at beta1 - it's a bit 
>more work all round, vs what I suspect will be a small amount of 2.6 
>work landing on the trunk nownownow. Also, I'd prefer people's cycles 
>be spent on bughunting 2.5 rather than worrying about shiny new 
>features for the release that's, what, 18 months away? 
>  
>
I recommend holding-off on a 2.6 branch until someone actually
has some non-trivial amount of 2.6 code ready for a commit.
My guess is that we are all focused on 2.5 or are participating
in intoxicating Py3k discussions.


Raymond


From rhettinger at ewtllc.com  Mon Jun 19 18:27:26 2006
From: rhettinger at ewtllc.com (Raymond Hettinger)
Date: Mon, 19 Jun 2006 09:27:26 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <ca471dc20606190737p2ca1ebf4m5091321b6d9ee2e@mail.gmail.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>	<20060611010410.GA5723@21degrees.com.au>	<20060618184500.F34E.JCARLSON@uci.edu>
	<ca471dc20606190737p2ca1ebf4m5091321b6d9ee2e@mail.gmail.com>
Message-ID: <4496D06E.7070106@ewtllc.com>

An HTML attachment was scrubbed...
URL: http://mail.python.org/pipermail/python-dev/attachments/20060619/a0fa4069/attachment.htm 

From pje at telecommunity.com  Mon Jun 19 18:49:07 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Mon, 19 Jun 2006 12:49:07 -0400
Subject: [Python-Dev] Switch statement
In-Reply-To: <4496D06E.7070106@ewtllc.com>
References: <ca471dc20606190737p2ca1ebf4m5091321b6d9ee2e@mail.gmail.com>
	<17547.19802.361151.705599@montanaro.dyndns.org>
	<20060611010410.GA5723@21degrees.com.au>
	<20060618184500.F34E.JCARLSON@uci.edu>
	<ca471dc20606190737p2ca1ebf4m5091321b6d9ee2e@mail.gmail.com>
Message-ID: <5.1.1.6.0.20060619123535.01ea6e70@sparrow.telecommunity.com>

At 09:27 AM 6/19/2006 -0700, Raymond Hettinger wrote:
>Guido van Rossum wrote:
>>Um, is this dogma? Wouldn't a switch statement also be a welcome
>>addition to the readability? I haven't had the time to follow this
>>thread (still catching up on my Google 50%) but I'm not sure I agree
>>with the idea that a switch should only exist for speedup.
>>
>
>A switch-statement offers only a modest readability improvement over 
>if-elif chains.  If a proposal introduces a switch-statement but doesn't 
>support fast dispatch, then it loses much of its appeal.

I would phrase that a lot differently.  A switch statement is *very* 
attractive for its readability.  The main problem is that if the most 
expressive way to do something in Python is also very slow -- i.e., people 
use it when they should be using a dictionary of functions -- then it adds 
to the "Python is slow" meme by attractive nuisance.  :)

Therefore, a switch statement should be made to perform at least as well as 
a dictionary of functions, or having it might actually be a bad thing.
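
That is, the workaround people reach for today looks roughly like this 
(sketch with made-up handlers):

     def one(): return 'one'
     def two(): return 'two'
     def too_many(): return 'lots'

     dispatch = {1: one, 2: two}

     def handle(x):
         # O(1) hash-based dispatch, but every case pays a function call
         # and loses direct access to the caller's local variables.
         return dispatch.get(x, too_many)()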

In any case, we *can* make it perform as well as a dictionary of functions, 
and we can do it with or without another opcode.  What really needs to be 
decided on (i.e. by the BDFL) is the final syntax of the statement itself, 
and the semantics of evaluation time for the 'case' expressions, either at 
first execution of the switch statement, or at function definition time.

If explaining the evaluation time is too difficult, however, it might be an 
argument against the optimization.  But, I don't think that either 
first-use evaluation or definition-time evaluation are too hard to explain, 
since Python has both kinds of evaluation already.


From guido at python.org  Mon Jun 19 18:53:44 2006
From: guido at python.org (Guido van Rossum)
Date: Mon, 19 Jun 2006 09:53:44 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <4496D06E.7070106@ewtllc.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<20060611010410.GA5723@21degrees.com.au>
	<20060618184500.F34E.JCARLSON@uci.edu>
	<ca471dc20606190737p2ca1ebf4m5091321b6d9ee2e@mail.gmail.com>
	<4496D06E.7070106@ewtllc.com>
Message-ID: <ca471dc20606190953m36151e11k64c506d5d5fa4896@mail.gmail.com>

On 6/19/06, Raymond Hettinger <rhettinger at ewtllc.com> wrote:
>  Guido van Rossum wrote:
>  On 6/18/06, Josiah Carlson <jcarlson at uci.edu> wrote:
>
>  [...] Offering arbitrary expressions whose
> meaning can vary at runtime would kill any potential speedup (the
> ultimate purpose for having a switch statement), [...]
>
>  Um, is this dogma? Wouldn't a switch statement also be a welcome
> addition to the readability? I haven't had the time to follow this
> thread (still catching up on my Google 50%) but I'm not sure I agree
> with the idea that a switch should only exist for speedup.
>
>  A switch-statement offers only a modest readability improvement over
> if-elif chains.

Probably, which is why it hasn't been added yet. :-)

But there is a definite readability improvement in that you *know*
that it's always the same variable that is being compared and that no
other conditions are snuck into some branches.

> If a proposal introduces a switch-statement but doesn't
> support fast dispatch, then it loses much of its appeal.  Historically, the
> switch-statement discussions centered around fast dispatch without function
> call overhead or loss of direct read/write to local variables (see
> sre_compile.py and sre_parse.py for code that would see a speed benefit but
> almost no improvement in readability).

Well yes duh, of course a switch syntax should be as fast as a
corresponding if/elif dispatch series. (I look upon
dict-of-functions-based dispatch as a tool for a completely different
use case.)

Perhaps I misunderstood Josiah's comment; I thought he was implying
that a switch should be significantly *faster* than if/elif, and was
arguing against features that would jeopardize that speedup. I would
think that it would be fine if some switches could be compiled into
some kind of lookup table while others would just be translated into a
series of if/elifs. As long as the compiler can tell the difference.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From brett at python.org  Mon Jun 19 19:17:50 2006
From: brett at python.org (Brett Cannon)
Date: Mon, 19 Jun 2006 10:17:50 -0700
Subject: [Python-Dev] Code coverage reporting.
In-Reply-To: <44969A8A.6000401@benjiyork.com>
References: <20060615171935.GA26179@caltech.edu>
	<bbaeab100606182012g6aeb7ab5q5ec87d87a00107d9@mail.gmail.com>
	<44969A8A.6000401@benjiyork.com>
Message-ID: <bbaeab100606191017r7a246a0av7829727fd8546868@mail.gmail.com>

On 6/19/06, Benji York <benji at benjiyork.com> wrote:
>
> Brett Cannon wrote:
> > But it does seem accurate; random checking of some modules that got high
> > but not perfect coverage all seem to be instances where dependency
> > injection would be required to get the tests to work since they were
> > based on platform-specific things.
>
> > I don't know if we need it hooked into the buildbots (unless it is dirt
> > cheap to generate the report).
>
> It would be interesting to combine the coverage over several platforms
> and report that.



Ah, do the union of the coverage!  Yeah, that would be nice and give the
most accurate coverage data in terms of what is actually being tested.  But
as Titus says in another email, the question is how to get that data sent back
to be correlated against.

-Brett

From rhettinger at ewtllc.com  Mon Jun 19 19:21:51 2006
From: rhettinger at ewtllc.com (Raymond Hettinger)
Date: Mon, 19 Jun 2006 10:21:51 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <ca471dc20606190953m36151e11k64c506d5d5fa4896@mail.gmail.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>	
	<20060611010410.GA5723@21degrees.com.au>	
	<20060618184500.F34E.JCARLSON@uci.edu>	
	<ca471dc20606190737p2ca1ebf4m5091321b6d9ee2e@mail.gmail.com>	
	<4496D06E.7070106@ewtllc.com>
	<ca471dc20606190953m36151e11k64c506d5d5fa4896@mail.gmail.com>
Message-ID: <4496DD2F.30501@ewtllc.com>


>>  A switch-statement offers only a modest readability improvement over
>> if-elif chains.
>
>
> Probably, which is why it hasn't been added yet. :-)
>
> But there is a definite readability improvement in that you *know*
> that it's always the same variable that is being compared and that no
> other conditions are snuck into some branches.

Hmm, when I saw that "arbitrary expressions" were being proposed, I took 
that to mean that the variable would have to be repeated in the branches:

   switch x:
      case  x.endswith('wart'):  salicylic_acid()
      case x.endswith('roid'):  preparation_h()
      default:  chicken_soup()




> I would
> think that it would be fine if some switches could be compiled into
> some kind of lookup table while others would just be translated into a
> series of if/elifs. As long as the compiler can tell the difference.
>
That's a worthy goal; of course, the devil is in the details.  Given:

 switch x:
    case 1:  one()
    case 2:  two()
    case 3:  three()
    default:  too_many()

Do we require that x be hashable so that the compiler can use a lookup 
table?


Raymond


From benji at benjiyork.com  Mon Jun 19 19:27:40 2006
From: benji at benjiyork.com (Benji York)
Date: Mon, 19 Jun 2006 13:27:40 -0400
Subject: [Python-Dev] Code coverage reporting.
In-Reply-To: <bbaeab100606191017r7a246a0av7829727fd8546868@mail.gmail.com>
References: <20060615171935.GA26179@caltech.edu>	<bbaeab100606182012g6aeb7ab5q5ec87d87a00107d9@mail.gmail.com>	<44969A8A.6000401@benjiyork.com>
	<bbaeab100606191017r7a246a0av7829727fd8546868@mail.gmail.com>
Message-ID: <4496DE8C.9060302@benjiyork.com>

Brett Cannon wrote:
> Ah, do the union of the coverage!  Yeah, that would be nice and give the 
> most accurate coverage data in terms of what is actually being tested.  
> But as Titus says in another email, question is how to get that data 
> sent back to be correlated against.

It might be interesting as a BuildBot extension: you already know the 
definitive identity of the thing that you're running (svn path and 
revision), a central server with established communication channel, plus 
all the other BuildBot machinery.
--
Benji York

From guido at python.org  Mon Jun 19 19:35:20 2006
From: guido at python.org (Guido van Rossum)
Date: Mon, 19 Jun 2006 10:35:20 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <4496DD2F.30501@ewtllc.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<20060611010410.GA5723@21degrees.com.au>
	<20060618184500.F34E.JCARLSON@uci.edu>
	<ca471dc20606190737p2ca1ebf4m5091321b6d9ee2e@mail.gmail.com>
	<4496D06E.7070106@ewtllc.com>
	<ca471dc20606190953m36151e11k64c506d5d5fa4896@mail.gmail.com>
	<4496DD2F.30501@ewtllc.com>
Message-ID: <ca471dc20606191035p40d5e85j7a8bb957a2dfb0ec@mail.gmail.com>

On 6/19/06, Raymond Hettinger <rhettinger at ewtllc.com> wrote:
>
> >>  A switch-statement offers only a modest readability improvement over
> >> if-elif chains.
> >
> >
> > Probably, which is why it hasn't been added yet. :-)
> >
> > But there is a definite readability improvement in that you *know*
> > that it's always the same variable that is being compared and that no
> > other conditions are snuck into some branches.
>
> Hmm, when I saw that "arbitrary expressions" were being proposed, I took
> that to mean that the variable would have to be repeated in the branches:
>
>    switch x:
>       case  x.endswith('wart'):  salicylic_acid()
>       case x.endswith('roid'):  preparation_h()
>       default:  chicken_soup()

That seems insane, since then it would be *just* different syntax for
if/elif. The example looks deceptive: surely the 'switch' expression
should allow an arbitrary expression, so the 'case' wouldn't be able
to refer to the switch part by a name unless there was syntax (or a
convention) for defining a name by which it could be referenced. I
think Perl 6 is defining a very general "matching" syntax which people
interested in this might want to study, just to see how far one can
stretch the insanity.

> > I would
> > think that it would be fine if some switches could be compiled into
> > some kind of lookup table while others would just be translated into a
> > series of if/elifs. As long as the compiler can tell the difference.
> >
> That's a worthy goal; of course, the devil is in the details.  Given:
>
>  switch x:
>     case 1:  one()
>     case 2:  two()
>     case 3:  three()
>     default:  too_many()
>
> Do we require that x be hashable so that the compiler can use a lookup
> table?

That's a good question. We could define switch/case in terms of a hash
table created by the compiler, and then raising an exception if x is
unhashable is fair game. Or we could define it in terms of successive
'==' comparisons, and then the compiler would have to create code for
a slow path in case x is unhashable. I don't think I'm in favor of
always taking the default path when x is unhashable; that would cause
some surprises if an object defines __eq__ to be equal to ints (say)
but not __hash__.

Note that we currently don't have a strong test for hashable; it's
basically "if hash(x) doesn't raise an exception" which means that we
would have to catch this exception (or perhaps only TypeError) in
order to implement the slow path for the successive-comparisons
semantics.
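
In other words, the best test available today is something along the lines of:

    def is_hashable(x):
        # "hashable" currently just means "hash() doesn't blow up"
        try:
            hash(x)
        except TypeError:
            return False
        return True

and a user-defined __hash__ is of course free to raise something other than 
TypeError, which is why the "perhaps only TypeError" caveat matters.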

I note that C doesn't require any particular implementation for
switch/case; there's no rule that says the numbers must fit in an
array of pointers or anything like that. So I would be careful before
we define this in terms of hash tables. OTOH the hash table semantics
don't require us to commit to a definition of hashable, which is an
advantage.

How's that for a wishy-washy answer. :-)

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From python-dev at zesty.ca  Mon Jun 19 21:29:07 2006
From: python-dev at zesty.ca (Ka-Ping Yee)
Date: Mon, 19 Jun 2006 14:29:07 -0500 (CDT)
Subject: [Python-Dev] Switch statement
In-Reply-To: <ca471dc20606190737p2ca1ebf4m5091321b6d9ee2e@mail.gmail.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<20060611010410.GA5723@21degrees.com.au>
	<20060618184500.F34E.JCARLSON@uci.edu>
	<ca471dc20606190737p2ca1ebf4m5091321b6d9ee2e@mail.gmail.com>
Message-ID: <Pine.LNX.4.58.0606191424550.17937@server1.LFW.org>

On Mon, 19 Jun 2006, Guido van Rossum wrote:
> Um, is this dogma? Wouldn't a switch statement also be a welcome
> addition to the readability? I haven't had the time to follow this
> thread (still catching up on my Google 50%) but I'm not sure I agree
> with the idea that a switch should only exist for speedup.

I feel quite strongly that readability should be the primary motivator
for just about any new syntax.

Choosing an implementation that runs at a reasonable speed is also
a worthwhile consideration, but readability is where it starts, IMHO.


-- ?!ng

From rhettinger at ewtllc.com  Mon Jun 19 21:30:28 2006
From: rhettinger at ewtllc.com (Raymond Hettinger)
Date: Mon, 19 Jun 2006 12:30:28 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <ca471dc20606191035p40d5e85j7a8bb957a2dfb0ec@mail.gmail.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>	
	<20060611010410.GA5723@21degrees.com.au>	
	<20060618184500.F34E.JCARLSON@uci.edu>	
	<ca471dc20606190737p2ca1ebf4m5091321b6d9ee2e@mail.gmail.com>	
	<4496D06E.7070106@ewtllc.com>	
	<ca471dc20606190953m36151e11k64c506d5d5fa4896@mail.gmail.com>	
	<4496DD2F.30501@ewtllc.com>
	<ca471dc20606191035p40d5e85j7a8bb957a2dfb0ec@mail.gmail.com>
Message-ID: <4496FB54.5060800@ewtllc.com>


>> > But there is a definite readability improvement in that you *know*
>> > that it's always the same variable that is being compared and that no
>> > other conditions are snuck into some branches.
>>
>> Hmm, when I saw that "arbitrary expressions" were being proposed, I took
>> that to mean that the variable would have to be repeated in the 
>> branches:
>>
>>    switch x:
>>       case  x.endswith('wart'):  salicylic_acid()
>>       case x.endswith('roid'):  preparation_h()
>>       default:  chicken_soup()
>
>
> That seems insane, since then it would be *just* different syntax for
> if/elif. The example looks deceptive: surely the 'switch' expression
> should allow an arbitrary expression, so the 'case' wouldn't be able
> to refer to the switch part by a name unless there was syntax (or a
> convention) for defining a name by which it could be referenced. I
> think Perl 6 is defining a very general "matching" syntax which people
> interested in this might want to study, just to see how far one can
> stretch the insanity.

I share that view 100%.  Can we conclude that arbitrary expressions are 
fine for the switch value but that the case values must be constants?  
That would neatly dispense with some proposed hypergeneralizations and 
keep the discussion focused.


>>  Given:
>>
>>  switch x:
>>     case 1:  one()
>>     case 2:  two()
>>     case 3:  three()
>>     default:  too_many()
>>
>> Do we require that x be hashable so that the compiler can use a lookup
>> table?
>
>
> That's a good question. We could define switch/case in terms of a hash
> table created by the compiler, and then raising an exception if x is
> unhashable is fair game.


+1

> Or we could define it in terms of successive
> '==' comparisons, and then the compiler would have to create code for
> a slow path in case x is unhashable.

Too perilous.  I would not like to put us in a position of generating 
duplicate code or funky new opcodes for the case suites.  Also, it is 
better for the user to know that __hash__ is going to be called, that 
the default-clause will execute when the key is not found, and that a 
KeyError would be raised if x is unhashable.  This is simple, 
explainable, consistent behavior.  Besides, if we've agreed that the 
case values are required to be constants, then there isn't much in the 
way of use cases for x being unhashable.


> I don't think I'm in favor of
> always taking the default path when x is unhashable; that would cause
> some surprises if an object defines __eq__ to be equal to ints (say)
> but not __hash__.


That would be unpleasant.


>
> Note that we currently don't have a strong test for hashable; it's
> basically "if hash(x) doesn't raise an exception" which means that we
> would have to catch this exception (or perhaps only TypeError) in
> order to implement the slow path for the successive-comparisons
> semantics.
>
> I note that C doesn't require any particular implementation for
> switch/case; there's no rule that says the numbers must fit in an
> array of pointers or anything like that. So I would be careful before
> we define this in terms of hash tables. OTOH the hash table semantics
> don't require us to commit to a definition of hashable, which is an
> advantage.
>
> How's that for a wishy-washy answer. :-)
>
Perfect.  Wishy-washy answers reflect an open mind and they contain the 
seeds of complete agreement.

My thought is that we *should* define switching in terms of hash 
tables.  It builds off of existing knowledge and therefore has a near 
zero learning curve.  The implementation is straight-forward and there 
are none of the hidden surprises that we would have with 
fastpath/slowpath approaches which use different underlying magic 
methods and do not guarantee order of execution.

If use cases eventually emerge for an alternative path using successive 
== comparisons, then it can always be considered and added later.  For 
now, YAGNI (neither the functionality, nor the implementation headaches, 
nor the complexity of explaining what it does under all the various cases).



Raymond


From python-dev at zesty.ca  Mon Jun 19 21:46:03 2006
From: python-dev at zesty.ca (Ka-Ping Yee)
Date: Mon, 19 Jun 2006 14:46:03 -0500 (CDT)
Subject: [Python-Dev] uuid backward compatibility
In-Reply-To: <bbaeab100606181249o1540989fod318bba817dde348@mail.gmail.com>
References: <2f188ee80606172016y52ed858ep2c9b62972684b3fe@mail.gmail.com>
	<Pine.LNX.4.58.0606180259550.698@server1.LFW.org>
	<44950D99.9000606@v.loewis.de>
	<bbaeab100606181249o1540989fod318bba817dde348@mail.gmail.com>
Message-ID: <Pine.LNX.4.58.0606191432160.17937@server1.LFW.org>

On 6/18/06, "Martin v. Löwis" <martin at v.loewis.de> wrote:
> As for the comment: It apparently *is* misleading, George mistakenly
> took it as a requirement for future changes, rather than a factual
> statement about the present (even though it uses the tense of simple
> present). Anybody breaking 2.3 compatibility will have to remember
> to remove the comment, which he likely won't.

This sentiment is puzzling to me.  It seems you assume that we can trust
future developers to change the code but we can't trust them to update
the documentation.  So we can't have documentation even if it's factually
true just because someone might forget to update it?  Why is the mere
possibility of incorrect documentation in the future more significant
than actual correct documentation in the present?  Couldn't the same
argument be used to support removing all documentation from all code?

If you see a better way to word the comment to reduce the possibility
of misunderstanding, that's cool with me.  I'd just like people who
get their hands on the module to know that they can use it with 2.3.


-- ?!ng

From guido at python.org  Mon Jun 19 21:47:26 2006
From: guido at python.org (Guido van Rossum)
Date: Mon, 19 Jun 2006 12:47:26 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <4496FB54.5060800@ewtllc.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<20060611010410.GA5723@21degrees.com.au>
	<20060618184500.F34E.JCARLSON@uci.edu>
	<ca471dc20606190737p2ca1ebf4m5091321b6d9ee2e@mail.gmail.com>
	<4496D06E.7070106@ewtllc.com>
	<ca471dc20606190953m36151e11k64c506d5d5fa4896@mail.gmail.com>
	<4496DD2F.30501@ewtllc.com>
	<ca471dc20606191035p40d5e85j7a8bb957a2dfb0ec@mail.gmail.com>
	<4496FB54.5060800@ewtllc.com>
Message-ID: <ca471dc20606191247l183eaf0eh6012600500ea311b@mail.gmail.com>

On 6/19/06, Raymond Hettinger <rhettinger at ewtllc.com> wrote:
> [...] Can we conclude that arbitrary expressions are
> fine for the switch value but that the case values must be constants?

That's too strong I believe. If some or all of the cases are arbitrary
expressions the compiler should try to deal. (Although we might have
to add a rule that if more than one case matches there's no guarantee
which branch is taken.)

In particular I expect that named constants are an important use case
(e.g. all of sre_compile.py uses names to compare the op with). The
compiler can't really tell with any degree of certainty that a name
won't ever be rebound (it would take a pretty smart global code
analyzer to prove that).
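
A tiny illustration (the name is a stand-in, not a real sre constant):

    LITERAL = 1          # a stand-in for the kind of named constant
                         # that sre_compile compares against

    def handle(op):
        if op == LITERAL:
            return 'literal'
        return 'something else'

    LITERAL = 99         # perfectly legal rebinding, so the compiler
                         # cannot safely freeze the old value into a table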

> That would neatly dispense with some proposed hypergeneralizations and
> keep the discussion focused.
>
>
> >>  Given:
> >>
> >>  switch x:
> >>     case 1:  one()
> >>     case 2:  two()
> >>     case 3:  three()
> >>     default:  too_many()
> >>
> >> Do we require that x be hashable so that the compiler can use a lookup
> >> table?
> >
> >
> > That's a good question. We could define switch/case in terms of a hash
> > table created by the compiler, and then raising an exception if x is
> > unhashable is fair game.
>
>
> +1
>
> > Or we could define it in terms of successive
> > '==' comparisons, and then the compiler would have to create code for
> > a slow path in case x is unhashable.
>
> Too perilous.  I would not like to put us in a position of generating
> duplicate code or funky new opcodes for the case suites.  Also, it is
> better for the user to know that __hash__ is going to be called, that
> the default-clause will execute when the key is not found, and that a
> KeyError would be raised if x is unhashable.  This is simple,
> explainable, consistent behavior.  Besides, if we've agreed that the
> case values are required to be constants, then there isn't much in the
> way of use cases for x being unhashable.

Well, the hypothetical use case is one where we have an arbitrary
object of unknown origin or type, and we want to special-case
treatment for a few known values.

I wonder if there should be two default clauses, or some other
syntactic way to indicate whether we expect all x to be hashable?

OTOH maybe doing the simplest thing that could possibly work is the
right thing here, so I'm not going to push back hard. I guess
practicality beats purity and all that.

Actually there are quite a few Zen of Python rules that endorse the
view that requiring x to be hashable is Pythonic, so I'm being swayed
as I write this. ;-)

> > I don't think I'm in favor of
> > always taking the default path when x is unhashable; that would cause
> > some surprises if an object defines __eq__ to be equal to ints (say)
> > but not __hash__.
>
>
> That would be unpleasant.
>
>
> >
> > Note that we currently don't have a strong test for hashable; it's
> > basically "if hash(x) doesn't raise an exception" which means that we
> > would have to catch this exception (or perhaps only TypeError) in
> > order to implement the slow path for the successive-comparisons
> > semantics.
> >
> > I note that C doesn't require any particular implementation for
> > switch/case; there's no rule that says the numbers must fit in an
> > array of pointers or anything like that. So I would be careful before
> > we define this in terms of hash tables. OTOH the hash table semantics
> > don't require us to commit to a definition of hashable, which is an
> > advantage.
> >
> > How's that for a wishy-washy answer. :-)
> >
> Perfect.  Wishy-washy answers reflect an open mind and they contain the
> seeds of complete agreement.

Thanks. Lawyers have different reasons for being wishy-washy but among
geeks there can be clarity in wishy-washiness. :-)

> My thought is that we *should* define switching in terms of hash
> tables.  It builds off of existing knowledge and therefore has a near
> zero learning curve.  The implementation is straight-forward and there
> are none of the hidden surprises that we would have with
> fastpath/slowpath approaches which use different underlying magic
> methods and do not guarantee order of execution.

I'm not so sure about there being no hidden surprises. I betcha that
there are quite a few bits of code that currently use the if/elif
style and seem to beg for a switch statement that depend on the
ordering of the tests. A typical example would be to have one of the
earlier tests express an exception to a later test that is a range
test. (Surely we're going to support range tests... sre_compile.py
uses 'in' almost as often as 'is'.)

> If use cases eventually emerge for an alternative path using successive
> == comparisons, then it can always be considered and added later.  For
> now, YAGNI (neither the functionality, nor the implementation headaches,
> nor the complexity of explaining what it does under all the various cases).

I say, let someone give a complete implementation a try, and then try
to modify as much standard library code as possible to use it. Then
report back. That would be a very interesting experiment to do. (And
thanks for the pointer to sre_compile.py as a use case!)

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Mon Jun 19 21:54:15 2006
From: guido at python.org (Guido van Rossum)
Date: Mon, 19 Jun 2006 12:54:15 -0700
Subject: [Python-Dev] uuid backward compatibility
In-Reply-To: <Pine.LNX.4.58.0606191432160.17937@server1.LFW.org>
References: <2f188ee80606172016y52ed858ep2c9b62972684b3fe@mail.gmail.com>
	<Pine.LNX.4.58.0606180259550.698@server1.LFW.org>
	<44950D99.9000606@v.loewis.de>
	<bbaeab100606181249o1540989fod318bba817dde348@mail.gmail.com>
	<Pine.LNX.4.58.0606191432160.17937@server1.LFW.org>
Message-ID: <ca471dc20606191254t4cd12757t11114e377eb601d4@mail.gmail.com>

On 6/19/06, Ka-Ping Yee <python-dev at zesty.ca> wrote:
> On 6/18/06, "Martin v. Löwis" <martin at v.loewis.de> wrote:
> > As for the comment: It apparently *is* misleading, George mistakenly
> > took it as a requirement for future changes, rather than a factual
> > statement about the present (even though it uses the tense of simple
> > present). Anybody breaking 2.3 compatibility will have to remember
> > to remove the comment, which he likely won't.
>
> This sentiment is puzzling to me.  It seems you assume that we can trust
> future developers to change the code but we can't trust them to update
> the documentation.

It's sad but true that comments often are out of date for several
releases until someone notices them.

> So we can't have documentation even if it's factually
> true just because someone might forget to update it?  Why is the mere
> possibility of incorrect documentation in the future more significant
> than actual correct documentation in the present?  Couldn't the same
> argument be used to support removing all documentation from all code?

I think it has to be weighed in each case. In *this* particular case
the relevance of the comment seems quite minimal and removing it seems
appropriate.

> If you see a better way to word the comment to reduce the possibility
> of misunderstanding, that's cool with me.  I'd just like people who
> get their hands on the module to know that they can use it with 2.3.

Well even if the comment remains, they are going to have to try it
before they can trust the comment (see above). There is lots of code
in the stdlib that is compatible with Python 2.3 (or 1.5.2 for that
matter). How important is it to record that fact? I'd say not at all.

The Python standard library of a particular Python version shouldn't
be seen as an additional way to distribute code that's intended for
other versions. If you want to encourage people to use your module
with older versions, the right path is to have a distribution (can be
very light-weight) on your own website and add it to PyPI (Cheese
Shop). You can put the same version distributed with Python 2.5 there;
this isn't going to be something with maintenance and feature
evolution, presumably, since it's only needed until they catch up with
2.5 or later.
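
Such a distribution really is tiny; a minimal setup.py along these lines
would do (the metadata here is made up, just to show the shape):

    # setup.py -- minimal sketch, the metadata is made up
    from distutils.core import setup

    setup(name='uuid',
          version='1.0',
          py_modules=['uuid'])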

If you still aren't convinced, how about a comment like this:

# At the time of writing this module was compatible with Python 2.3 and later.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From shponglespore at gmail.com  Mon Jun 19 21:56:43 2006
From: shponglespore at gmail.com (John Williams)
Date: Mon, 19 Jun 2006 14:56:43 -0500
Subject: [Python-Dev] Numerical robustness, IEEE etc.
In-Reply-To: <2mmzc9l5b5.fsf@starship.python.net>
References: <E1FqzHB-00003X-UV@draco.cus.cam.ac.uk>
	<2mmzc9l5b5.fsf@starship.python.net>
Message-ID: <e6c6cc500606191256w38411cdcif6386881a727b97c@mail.gmail.com>

On 6/19/06, Michael Hudson <mwh at python.net> wrote:
> Nick Maclaren <nmm1 at cus.cam.ac.uk> writes:
> > 2) Because some people are dearly attached to the current behaviour,
> > warts and all, and there is a genuine quandary of whether the 'right'
> > behaviour is trap-and-diagnose, propagate-NaN or whatever-IEEE-754R-
> > finally-specifies (let's ignore C99 and Java as beyond redemption),
>
> Why?  Maybe it's clear to you, but it's not totally clear to me, and
> it any case the discussion would be better informed for not being too
> dismissive.

I just happened to be reading this, which I found very convincing:

How Java's Floating-Point Hurts Everyone Everywhere
http://www.cs.berkeley.edu/~wkahan/JAVAhurt.pdf

From martin at v.loewis.de  Mon Jun 19 22:19:38 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Mon, 19 Jun 2006 22:19:38 +0200
Subject: [Python-Dev] Python 2.4 extensions require VC 7.1?
In-Reply-To: <129CEF95A523704B9D46959C922A280002A4CD9B@nemesis.central.ccp.cc>
References: <129CEF95A523704B9D46959C922A280002A4CD9B@nemesis.central.ccp.cc>
Message-ID: <449706DA.20906@v.loewis.de>

Kristján V. Jónsson wrote:
> The signal() doc is rather vague on the point, since it doesn't
> define the available set of signals.  It doesn't even say that a
> signal identifier is an integer.  But it says that it should return
> EINVAL if it "cannot satisfy the request".

What "signal() doc" are you looking at? I'm looking at

http://www.open-std.org/JTC1/SC22/WG14/www/docs/n1124.pdf

section 7.14. This is ISO C99 (actually, technical corrigendum 2
of that), and it does not mention EINVAL. (BTW, signal does
not *return* EINVAL, it returns SIG_ERR and sets errno).

signal() isn't vague about the set of available signals. 7.14/3
lists some, then 7.14/4 says

# The complete set of signals, their semantics, and their default
# handling is implementation-defined; all signal numbers shall be
# positive.

> It doesn't say "if the request is invalid"

Ah, so you are looking at the Microsoft documentation? As the
implementation isn't compliant to standard C, I would not expect
their documentation to faithfully reproduce standard C.

> but I don't want to go into hairsplitting here.

It's an important point. If Python does not follow some relevant
standard, and therefore breaks, it is Python that must be fixed.
If it breaks on some system which in itself violates some standard,
we have the choice of either working around or ignoring the system.

> But I completely disagree when you insist that microsoft has broken
> the C library.

But they have. A program that is compliant to standard C used to
work with earlier versions of the C library, and stops working with
the current version.

> What they have done is added parameter validation,
> and thus simply added code in the "undefined" domain.

Except that the set of supported signals is not "undefined", it's
"implementation-defined". See 3.4.1 for a definition of
"implementation-defined behaviour", and 3.4.3 for a definition
of "undefined behaviour".

> I would also
> like to point out that, again apart from signal(), you are relying on
> undefined behaviour of fopen and others.

That is true, so we should fix Python here.

> So, it is my suggestion that in stead of going all defensive, and
> shouting "breakage", why not simply fix those very dubious CRT usage
> patterns?  Think of it as lint.

Again, for fopen: sure. For signal, this is not possible: We want to
set *all* signal handlers on a system, but we have no way of
determining at compile time what "all signal handlers" are. Standard C
is deliberately designed to allow applications to do that, and
with msvcr80.dll, we can't.
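
At the Python level the same limitation looks roughly like this (only a
sketch; we have to try each number and catch whatever the runtime throws
back at us):

    import signal

    def grab_all_signals(handler):
        # we can only discover the usable signals by trying them
        # one by one and skipping the ones that are rejected
        for sig in range(1, signal.NSIG):
            try:
                signal.signal(sig, handler)
            except (ValueError, RuntimeError):
                pass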

> Also, consider this:  in the case of file() and strftime() we are
> passing in dynamic strings.  The strings are not within control of
> python.  Normally these are static strings, within the control of the
> developer which has the function reference on hand, knows what he
> wants and so on.  Yet, here we are passing in any old strings.  There
> is a huge undefined domain there, and we should be very concerned
about that.  It is a wonder we haven't seen these functions crash
> before.

No, that's not a wonder. It's actually unfortunate that standard
C did not make it an error, but they likely didn't do it because
of existing practice. However, the usual, natural, straight-forward
way of processing the mode string (in a loop with a switch statement)
can't possibly cause crashes.
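
In Python terms, that style of check amounts to no more than the following
sketch (illustrative only, not the actual CRT logic):

    def plausible_mode(mode):
        # each character is merely compared and rejected if unknown,
        # so nothing here can crash no matter what string comes in
        if not mode or mode[0] not in 'rwa':
            return False
        for ch in mode[1:]:
            if ch not in 'b+tU':
                return False
        return True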

> (And by the way, why does pythoncore.dll mess with signal() anyway?

So we can redirect all signals handlers to Python code if the
user wishes so.

Regards,
Martin

From rhettinger at ewtllc.com  Mon Jun 19 22:24:02 2006
From: rhettinger at ewtllc.com (Raymond Hettinger)
Date: Mon, 19 Jun 2006 13:24:02 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <ca471dc20606191247l183eaf0eh6012600500ea311b@mail.gmail.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>	
	<20060611010410.GA5723@21degrees.com.au>	
	<20060618184500.F34E.JCARLSON@uci.edu>	
	<ca471dc20606190737p2ca1ebf4m5091321b6d9ee2e@mail.gmail.com>	
	<4496D06E.7070106@ewtllc.com>	
	<ca471dc20606190953m36151e11k64c506d5d5fa4896@mail.gmail.com>	
	<4496DD2F.30501@ewtllc.com>	
	<ca471dc20606191035p40d5e85j7a8bb957a2dfb0ec@mail.gmail.com>	
	<4496FB54.5060800@ewtllc.com>
	<ca471dc20606191247l183eaf0eh6012600500ea311b@mail.gmail.com>
Message-ID: <449707E2.7060803@ewtllc.com>


>> My thought is that we *should* define switching in terms of hash
>> tables.  It builds off of existing knowledge and therefore has a near
>> zero learning curve.  The implementation is straight-forward and there
>> are none of the hidden surprises that we would have with
>> fastpath/slowpath approaches which use different underlying magic
>> methods and do not guarantee order of execution.
>
>
> I'm not so sure about there being no hidden surprises. I betcha that
> there are quire a few bits of code that curerntly use the if/elif
> style and seem to beg for a switch statement that depend on the
> ordering of the tests. A typical example would be to have one of the
> earlier tests express an exception to a later test that is a range
> test. 


That's a tricky issue.  Whenever the cases overlap, I would expect a 
successive comparison approach to jump to the first match while a hash 
table approach would jump to the last match (just like a dictionary 
retains only the latest (key,value) pair when fed successive pairs with 
identical keys and differing values).
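
For example:

    >>> {1: 'first', 1: 'second'}
    {1: 'second'}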


> (Surely we're going to support range tests... sre_compile.py
> uses 'in' almost as often as 'is'.)

When the ranges have a short length as they do in sre, I expect that the 
syntax would allow the range to be captured on one line but have 
multiple entries in the hash table which each dispatch to the same 
target code suite:

    switch x:
    case 0, 2, 4, 6:  handle_even()
    case 1, 3, 5, 9:  handle_odd()
    default:  handle_fractions()
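
Under the hash-table model, that form would expand to roughly the following
(a sketch, with stand-ins for the handlers in the example):

    # stand-ins for the handlers in the example above
    handle_even = lambda: 'even'
    handle_odd = lambda: 'odd'
    handle_fractions = lambda: 'neither'

    table = {}
    for key in (0, 2, 4, 6):
        table[key] = handle_even
    for key in (1, 3, 5, 9):
        table[key] = handle_odd

    print table.get(3, handle_fractions)()    # -> odd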

Unfortunately, that approach is less than ideal for bigger ranges:

   switch x:
   case xrange(0,sys.maxint,2): handle_even()
   case xrange(1,sys.maxint,2): handle_odd()
   default: handle_fractions()

Other types of range checks get us back into the area of arbitrary 
expressions in case values and having to repeat the variable name:

   switch x:
   case x < 60:  return "too cold"
   case 60 <= x < 80:  return "just right"
   case 80 <= x: return "too hot"

Choose your poison.  How much range flexibility do you want and how much 
implementation and behavioral complexity are you willing to pay for it.



>> If use cases eventually emerge for an alternative path using successive
>> == comparisons, then it can always be considered and added later.  For
>> now, YAGNI (neither the functionality, nor the implementation headaches,
>> nor the complexity of explaining what it does under all the various 
>> cases).
>
>
> I say, let someone give a complete implementation a try, and then try
> to modify as much standard library code as possible to use it. Then
> report back. That would be a very interesting experiment to do. (And
> thanks for the pointer to sre_compile.py as a use case!)


Hmm, it could be time for the Georg bot to graduate to big game.
Georg, are you up to it?



Raymond


From guido at python.org  Mon Jun 19 22:29:55 2006
From: guido at python.org (Guido van Rossum)
Date: Mon, 19 Jun 2006 13:29:55 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <449707E2.7060803@ewtllc.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<20060618184500.F34E.JCARLSON@uci.edu>
	<ca471dc20606190737p2ca1ebf4m5091321b6d9ee2e@mail.gmail.com>
	<4496D06E.7070106@ewtllc.com>
	<ca471dc20606190953m36151e11k64c506d5d5fa4896@mail.gmail.com>
	<4496DD2F.30501@ewtllc.com>
	<ca471dc20606191035p40d5e85j7a8bb957a2dfb0ec@mail.gmail.com>
	<4496FB54.5060800@ewtllc.com>
	<ca471dc20606191247l183eaf0eh6012600500ea311b@mail.gmail.com>
	<449707E2.7060803@ewtllc.com>
Message-ID: <ca471dc20606191329p1a978983g2c3ea7401213d7dd@mail.gmail.com>

On 6/19/06, Raymond Hettinger <rhettinger at ewtllc.com> wrote:
[Guido]
> > I'm not so sure about there being no hidden surprises. I betcha that
> > there are quite a few bits of code that currently use the if/elif
> > style and seem to beg for a switch statement that depend on the
> > ordering of the tests. A typical example would be to have one of the
> > earlier tests express an exception to a later test that is a range
> > test.
>
> That's a tricky issue.  Whenever the cases overlap, I would expect a
> successive comparison approach to jump to the first match while a hash
> table approach would jump to the last match (just like a dictionary
> retains only the latest (key,value) pair when fed successive pairs with
> identical keys and differing values).

But it would be easy enough to define a dict-filling function that
updates only new values. (PyDict_Merge has an option to do this,
although it's not currently exposed to Python.)
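
Something like this sketch at the Python level (the name is invented):

    def merge_keep_first(table, pairs):
        # fill in only the keys that are not already present, so the
        # earliest case wins, mirroring if/elif "first match wins"
        for key, value in pairs:
            table.setdefault(key, value)
        return table

e.g. merge_keep_first({1: 'a'}, [(1, 'b'), (2, 'c')]) leaves {1: 'a', 2: 'c'}.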

> > (Surely we're going to support range tests... sre_compile.py
> > uses 'in' almost as often as 'is'.)
>
> When the ranges have a short length as they do in sre, I expect that the
> syntax would allow the range to be captured on one line but have
> multiple entries in the hash table which each dispatch to the same
> target code suite:
>
>     switch x:
>     case 0, 2, 4, 6:  handle_even()
>     case 1, 3, 5, 9:  handle_odd()
>     default:  handle_fractions()

Was it decided yet how to write the cases for a switch that tests for
tuples of values? Requiring parentheses might be sufficient,
essentially making what follows a case *always* take on sequence
syntax.

> Unfortunately, that approach is less than ideal for bigger ranges:
>
>    switch x:
>    case xrange(0,sys.maxint,2): handle_even()
>    case xrange(1,sys.maxint,2): handle_odd()
>    default: handle_fractions()

Right. This would be fine syntactically but clearly breaks the dict
implementation...

> Other types of range checks get us back into the area of arbitrary
> expressions in case values and having to repeat the variable name:
>
>    switch x:
>    case x < 60:  return "too cold"
>    case 60 <= x < 80:  return "just right"
>    case 80 <= x: return "too hot"
>
> Choose your poison.  How much range flexibility do you want and how much
> implementation and behavioral complexity are you willing to pay for it.

In order to decide, we should look at current usage of long if/elif chains.

> >> If use cases eventually emerge for an alternative path using successive
> >> == comparisons, then it can always be considered and added later.  For
> >> now, YAGNI (neither the functionality, nor the implementation headaches,
> >> nor the complexity of explaining what it does under all the various
> >> cases).
> >
> > I say, let someone give a complete implementation a try, and then try
> > to modify as much standard library code as possible to use it. Then
> > report back. That would be a very interesting experiment to do. (And
> > thanks for the pointer to sre_compile.py as a use case!)
>
> Hmm, it could be time for the Georg bot to graduate to big game.
> Georg, are you up to it?

Georg is a bot? :-)

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From martin at v.loewis.de  Mon Jun 19 22:31:45 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Mon, 19 Jun 2006 22:31:45 +0200
Subject: [Python-Dev] unicode imports
In-Reply-To: <129CEF95A523704B9D46959C922A280002A4CDD1@nemesis.central.ccp.cc>
References: <129CEF95A523704B9D46959C922A280002A4CDD1@nemesis.central.ccp.cc>
Message-ID: <449709B1.9030606@v.loewis.de>

Kristján V. Jónsson wrote:
> I don't have specific information on the machines.  We didn't try
> very hard to get things to work with 2.3 since we simply assumed it
> would work automatically when we upgraded to a more mature 2.4. I
> could try to get more info, but it would be 2.3 specific.  Have there
> been any changes since then?

Not in that respect, no.

> Note that it may not go into program files at all.  Someone may want
> to install his modules in a folder named in the honour of his mother.

It's certainly possible to set this up in a way that it won't work,
on any localized version: just use a path name that isn't supported
in the ANSI code page. However, that should rarely happen: the
name of his mother should still be expressable in the ANSI code
page, if the system is setup correctly.

> Also, I really would like to see a general solution that doesn't
> assume that the path name can somehow be transmuted to an ascii name.

(Please don't say ASCII here. Windows *A APIs are named that way
 because Microsoft Windows has the notion of an "ANSI code page",
 which, in turn, is just a code page indirection so some selected
 code page meant to support the characters of the user's locale)

> Users are unpredictable.  When you have a wide distribution, you
> come up against all kinds of problems (Currently we have around
> 500,000 users in China.) Also, relying on some locale settings is not
> acceptable.

Sure, but stating that doesn't really help. Code contributions
would help, but that part of Python has been left out of using
the *W API, because it is particularly messy to fix.

> Funny that no other platforms could benefit from a unicode import
> path.  Does that mean that windows will reign supreme?

That is the case, more or less. Or, more precisely:
- On Linux, Solaris, and most other Unices, file names are bytes
  on the system API, and are expected to be encoded in the user's
  locale. So if your locale does not support a character, you
  can't name a file that way, on Unix. There is a trend towards
  using UTF-8 locales, so that the locale contains all Unicode
  characters.
- On Mac OS X, all file names are UTF-8, always (unless the
  user managed to mess it up), so you can have arbitrary
  Unicode file names

That means that the approach of converting a Unicode sys.path
element to the file system encoding will always do the right
thing on Linux and OS X: the file system encoding will be
the locale's encoding on Linux, and will be UTF-8 on OS X.

It's only Windows which has valid file names that cannot
be represented in the current locale.
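
The conversion itself is trivial; a Python-level sketch (the real work
happens in C in import.c, and error handling is omitted here):

    import sys

    def narrow_path_element(element):
        # narrow a Unicode sys.path element to the file system encoding
        if isinstance(element, unicode):
            return element.encode(sys.getfilesystemencoding() or 'ascii')
        return element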

Regards,
Martin

From martin at v.loewis.de  Mon Jun 19 22:41:27 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Mon, 19 Jun 2006 22:41:27 +0200
Subject: [Python-Dev] unicode imports
In-Reply-To: <129CEF95A523704B9D46959C922A280002A4CE4A@nemesis.central.ccp.cc>
References: <129CEF95A523704B9D46959C922A280002A4CE4A@nemesis.central.ccp.cc>
Message-ID: <44970BF7.7020806@v.loewis.de>

Kristján V. Jónsson wrote:
> Wouldn't it be possible then to emulate the unix way?  Simply encode
> any unicode paths to utf-8, process them as normal, and then decode
> them just prior to the actual windows io call?

That won't work. People also put path names from the ANSI code page
onto sys.path and expect that to work - it always worked, and is
a nearly-complete work-around to put directories with funny characters
onto sys.path. sys.path is a list, so we have little control over
what gets put onto it.

> Of course, once there, why not do it unicode all the way up to that
> last point?  Unless there are platforms without wchar_t that would
> make sense.

Again, we can't really control that. Also, most platforms have no
wchar_t API for file IO. We would have to encode each sys.path
element for each stat() call, which would be quite expensive.

> At any rate, I am trying to find a coding path of least resistance
> here.  Regardless of the timeline or acceptance in mainstream python
> for this feature, it is something I will have to patch in for our
> application.

The path with least resistance should be usage of 8.3 directory names.
The one to implement in future Python versions should be the rewrite
of import.c, to operate on PyObject* instead of char*, and perform
conversion to the native API only just before calling the native API.

Regards,
Martin

From martin at v.loewis.de  Mon Jun 19 22:49:48 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Mon, 19 Jun 2006 22:49:48 +0200
Subject: [Python-Dev] uuid backward compatibility
In-Reply-To: <Pine.LNX.4.58.0606191432160.17937@server1.LFW.org>
References: <2f188ee80606172016y52ed858ep2c9b62972684b3fe@mail.gmail.com>
	<Pine.LNX.4.58.0606180259550.698@server1.LFW.org>
	<44950D99.9000606@v.loewis.de>
	<bbaeab100606181249o1540989fod318bba817dde348@mail.gmail.com>
	<Pine.LNX.4.58.0606191432160.17937@server1.LFW.org>
Message-ID: <44970DEC.9020808@v.loewis.de>

Ka-Ping Yee wrote:
> This sentiment is puzzling to me.  It seems you assume that we can trust
> future developers to change the code but we can't trust them to update
> the documentation.

That's precisely my expectation. Suppose Python 3.0 unifies int and
long, and deprecates the L suffix. Then,

   if not 0 <= time_low < 1<<32L:

will change to

   if not 0 <= time_low < 1<<32:

While this will work fine in Python 2.4 and onwards, it will break
2.3. Whoever is making the change won't even think of the necessity
of a documentation change - after all, this is supposed to be a
style change only. People do make wholesale style changes to the
entire library from time to time.

> So we can't have documentation even if it's factually
> true just because someone might forget to update it?

Sure, we can, and if you want that, we should (you are the author,
so your view is quite important), and I'll shut up. I just wanted
to caution about a risk here.

> If you see a better way to word the comment to reduce the possibility
> of misunderstanding, that's cool with me.  I'd just like people who
> get their hands on the module to know that they can use it with 2.3.

I personally didn't find it misleading at all, and see no need to
change it for *that* reason. I see a potential risk in it wrt.
future changes, but perhaps I'm paranoid.

Regards,
Martin

From martin at v.loewis.de  Mon Jun 19 22:50:52 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Mon, 19 Jun 2006 22:50:52 +0200
Subject: [Python-Dev] uuid backward compatibility
In-Reply-To: <ca471dc20606191254t4cd12757t11114e377eb601d4@mail.gmail.com>
References: <2f188ee80606172016y52ed858ep2c9b62972684b3fe@mail.gmail.com>	
	<Pine.LNX.4.58.0606180259550.698@server1.LFW.org>	
	<44950D99.9000606@v.loewis.de>	
	<bbaeab100606181249o1540989fod318bba817dde348@mail.gmail.com>	
	<Pine.LNX.4.58.0606191432160.17937@server1.LFW.org>
	<ca471dc20606191254t4cd12757t11114e377eb601d4@mail.gmail.com>
Message-ID: <44970E2C.9080005@v.loewis.de>

Guido van Rossum wrote:
> # At the time of writing this module was compatible with Python 2.3 and
> later.

:-)

Martin

From greg at electricrain.com  Mon Jun 19 23:17:35 2006
From: greg at electricrain.com (Gregory P. Smith)
Date: Mon, 19 Jun 2006 14:17:35 -0700
Subject: [Python-Dev] os.getmtime now returns a float?
Message-ID: <20060619211735.GH7182@zot.electricrain.com>

os.path.getmtime returns a float on linux (2.5a2/b1 HEAD); in 2.4 it
returned an int.  this change makes sense, it's what time.time returns.

should there be a note in Misc/NEWS or whatsnew mentioning this minor
change (or did i miss it)?  It breaks code that unintentionally
depended on it returning an int.
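
for code that got bitten, the obvious local workaround is an explicit
truncation, e.g. (sketch):

    import os
    mtime = int(os.path.getmtime('/tmp'))   # truncate back to whole seconds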

-g


From trentm at activestate.com  Mon Jun 19 23:19:08 2006
From: trentm at activestate.com (Trent Mick)
Date: Mon, 19 Jun 2006 14:19:08 -0700
Subject: [Python-Dev] beta1 coming real soon
In-Reply-To: <ee2a432c0606082323t1ee013b6nd91771fb461acede@mail.gmail.com>
References: <ee2a432c0606082323t1ee013b6nd91771fb461acede@mail.gmail.com>
Message-ID: <449714CC.7020009@activestate.com>

[Neal Norwitz wrote on June 8th]
> It's June 9 in most parts of the world.  The schedule calls for beta 1
> on June 14.  That means there's less than 4 days until the expected
> code freeze.  Please don't rush to get everything in at the last
> minute.  The buildbots should remain green to keep Anthony happy and
> me sane (or is it the other way around).
> 
> If you plan to make a checkin adding a feature (even a simple one),
> you oughta let people know by responding to this message.  Please get
> the bug fixes in ASAP.  Remember to add tests!

[and then Anthony Baxter wrote today]
 > Subject: TRUNK FREEZE IMMINENT FOR 2.5 BETA 1 - 00:00 UTC,20-JUNE-2006
 >
 > The trunk will be FROZEN for 2.5b1 from 00:00UTC on Tuesday, 20th of
 > June. That's about 16 hours from now. Please don't checkin while the
 > trunk is frozen, unless you're one of the release team (me, Martin,
 > Fred, Ronald).


Can I, or will I be able to get these PyExpat fixes?

* [ 1462338 ] upgrade pyexpat to expat 2.0.0
   http://python.org/sf/1462338

* [ 1295808 ] expat symbols should be namespaced in pyexpat
   http://python.org/sf/1295808

The second one is the one I care about (and it is well tested in Komodo on 
four platforms). It will be very important to have in the Python/Mozilla 
world (with Mark Hammond's recent work for mozilla.org making Python a 
first-class language in the browser alongside JavaScript), because the 
namespacing is required to avoid crashing conflicts with another version 
of the expat symbols in the Mozilla process.

Martin v. L. wanted the namespacing fix to be preceded by the upgrade to 
expat 2.0.0 -- which I have a patch for also.

I haven't checked in yet, because I dropped the ball for a few weeks here.

I'm going to start working on checking it in right now and will probably 
just go for it (because I have a few hours until Anthony's deadline ;)) 
unless I hear some screams.

Honestly I didn't intentionally wait until the pending trunk-freeze 
email came.

Cheers,
Trent

-- 
Trent Mick
trentm at activestate.com

From tdelaney at avaya.com  Mon Jun 19 23:40:10 2006
From: tdelaney at avaya.com (Delaney, Timothy (Tim))
Date: Tue, 20 Jun 2006 07:40:10 +1000
Subject: [Python-Dev] Switch statement
Message-ID: <2773CAC687FD5F4689F526998C7E4E5F07438B@au3010avexu1.global.avaya.com>

Guido van Rossum wrote:

> I wonder if there should be two default clauses, or some other
> syntactic way to indicate whether we expect all x to be hashable?

    switch expr:
        case 1:
            statements
        case 2:
            statements
        else:
            statements
        except KeyError:
            statements
        finally:
            statements

    switch expr:
        case 1:
            statements
        case 2:
            statements
        else:
            statements
    except KeyError:
        statements
    finally:
        statements

:)

Seriously, I think I'd rather be explicit and just have KeyError
propagate. If someone is expecting occasional unhashable values, they
can just wrap it in try/except.

    try:
        switch expr:
            case 1:
                statements
            case 2:
                statements
            else:
                statements
    except KeyError:
        statements
    finally:
        statements

The first syntax though does have the advantage that it could catch only
KeyErrors raised from the switch statement. That could be easily handled
by a separate SwitchKeyError exception (inheriting from KeyError).
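
A sketch of that last idea (just the exception class, nothing clever):

    class SwitchKeyError(KeyError):
        """Raised when a switch value matches no case (sketch only)."""

    try:
        raise SwitchKeyError('no matching case')
    except KeyError:
        pass    # existing KeyError handlers would still catch it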

Tim Delaney

From gh at ghaering.de  Mon Jun 19 23:52:45 2006
From: gh at ghaering.de (=?ISO-8859-1?Q?Gerhard_H=E4ring?=)
Date: Mon, 19 Jun 2006 23:52:45 +0200
Subject: [Python-Dev] Dropping externally maintained packages (Was:
 Please stop changing wsgiref on the trunk)
In-Reply-To: <e6khjs$qqv$1@sea.gmane.org>
References: <5.1.1.6.0.20060611233143.01b616b0@sparrow.telecommunity.com>	<5.1.1.6.0.20060612121658.03255358@sparrow.telecommunity.com>	<ca471dc20606120943o3f9d97ccle62db7539edec1c1@mail.gmail.com>	<448D9EA1.9000209@v.loewis.de>
	<e6khjs$qqv$1@sea.gmane.org>
Message-ID: <44971CAD.2060808@ghaering.de>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Thomas Heller wrote:
> Martin v. Löwis wrote:
>> Guido van Rossum wrote:
>>>> 4 of the 6 modules in PEP 360 were added to Python in 2.5, so if you want
>>>> to get rid of it, *now* would be the time.
>>> I'm all for it.
>>>
>>> While I am an enthusiastic supporter of several of those additions, I
>>> am *not* in favor of the special status granted to software
>>> contributed by certain developers, since it is a burden for all other
>>> developers.
>> Then I guess we should deal with before 2.5b1, and delay 2.5b1 until the
>> status of each of these has been clarified.
>>
>> Each maintainer should indicate whether he is happy with a "this is
>> part of Python" approach. If so, the entry should be removed from PEP
>> 360 (*); if not, the code should be removed from Python before beta 1.
> 
> I will be happy to say "ctypes is part of Python" (although I *fear* it
> is not one of the packages enthusiastically supported by Guido ;-).

The same goes for the sqlite3 module. I see it as part of Python and also
see it as my job to synchronize bugfixes with the external version both ways.

I'll also add statements to the source files to ask developers to keep
Python 2.3 compatibility.

> [...]
> I am *very* thankful for the fixes, the code review, the suggestions,
> and the encouragement I got by various python-devers. [...]

Ditto :-)

So, could somebody please adjust the PEPs for the sqlite3 module accordingly?

- -- Gerhard
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.2.2 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFElxytdIO4ozGCH14RAs8PAJ9/+zGGFj3nLyKPNN+B+UmG3gaJeQCfV7Uc
g0PjyvOfXVkA2cohQjJrzeI=
=nM4W
-----END PGP SIGNATURE-----

From benji at benjiyork.com  Mon Jun 19 23:59:12 2006
From: benji at benjiyork.com (Benji York)
Date: Mon, 19 Jun 2006 17:59:12 -0400
Subject: [Python-Dev] Switch statement
In-Reply-To: <2773CAC687FD5F4689F526998C7E4E5F07438B@au3010avexu1.global.avaya.com>
References: <2773CAC687FD5F4689F526998C7E4E5F07438B@au3010avexu1.global.avaya.com>
Message-ID: <44971E30.2000204@benjiyork.com>

Delaney, Timothy (Tim) wrote:
 > Guido van Rossum wrote:
 >
 >
 >>I wonder if there should be two default clauses, or some other
 >>syntactic way to indicate whether we expect all x to be hashable?
 >
 >
 >     switch expr:
 >         case 1:
 >             statements
 >         case 2:
 >             statements
 >         else:
 >             statements
 >         except KeyError:
 >             statements
 >         finally:
 >             statements

Small variation:

switch expr:
     case 1:
         statements
     case 2:
         statements
     else:
         statements
     except:
         statements
--
Benji York

From martin at v.loewis.de  Tue Jun 20 00:20:45 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Tue, 20 Jun 2006 00:20:45 +0200
Subject: [Python-Dev] Documentation enhancement:  "MS free compiler"?
In-Reply-To: <20060613165130.GA6401@lairds.us>
References: <20060613165130.GA6401@lairds.us>
Message-ID: <4497233D.9050005@v.loewis.de>

Cameron Laird wrote:
> I'm channeling a correspondent, who tells me that Python documentation
> (Python 2.5 announcement, and so on) mentions compatibility of sources
> with "the MS free compiler"; that's the default toolchain for Windows.
> 
> Apparently we're in conflict with Microsoft on that:  some hyperlinks
> refer to <URL: http://msdn.microsoft.com/visualc/vctoolkit2003/ >, which
> begins,
>   The Visual C++ Toolkit 2003 has been
>   replaced by Visual C++ 2005 Express
>   Edition.
> The latter is available at no charge, incidentally.

It would be good to know where the hyperlink supposedly is, so we know
who can update it.

In any case, changing the reference to VS 2005 is the wrong thing to
do - VS 2005 is *not* the default tool chain on Windows. So the update
should be that there is no free compiler from MS anymore (or perhaps
it should point to the .NET SDK, provided that has a free compiler).

Regards,
Martin

From brett at python.org  Tue Jun 20 00:30:21 2006
From: brett at python.org (Brett Cannon)
Date: Mon, 19 Jun 2006 15:30:21 -0700
Subject: [Python-Dev] XP build failing
Message-ID: <bbaeab100606191530m3ae8b683s77b537aad17d12c9@mail.gmail.com>

http://www.python.org/dev/buildbot/all/x86%20XP-2%20trunk/builds/676/step-compile/0

Looks like Tim's XP box is crapping out on a header file included from
Tcl/Tk.  Did the Tcl/Tk folk just break something and we are doing an
external svn pull and thus got bit by it?

-Brett
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.python.org/pipermail/python-dev/attachments/20060619/104a6319/attachment.html 

From p.f.moore at gmail.com  Tue Jun 20 00:33:12 2006
From: p.f.moore at gmail.com (Paul Moore)
Date: Mon, 19 Jun 2006 23:33:12 +0100
Subject: [Python-Dev] Documentation enhancement: "MS free compiler"?
In-Reply-To: <4497233D.9050005@v.loewis.de>
References: <20060613165130.GA6401@lairds.us> <4497233D.9050005@v.loewis.de>
Message-ID: <79990c6b0606191533u6bd92be3ya5ff8b0c6cc76c49@mail.gmail.com>

On 6/19/06, "Martin v. Löwis" <martin at v.loewis.de> wrote:
> Cameron Laird wrote:
> > I'm channeling a correspondent, who tells me that Python documentation
> > (Python 2.5 announcement, and so on) mentions compatibility of sources
> > with "the MS free compiler"; that's the default toolchain for Windows.
> >
> > Apparently we're in conflict with Microsoft on that:  some hyperlinks
> > refer to <URL: http://msdn.microsoft.com/visualc/vctoolkit2003/ >, which
> > begins,
> >   The Visual C++ Toolkit 2003 has been
> >   replaced by Visual C++ 2005 Express
> >   Edition.
> > The latter is available at no charge, incidentally.
>
> It would be good to know where the hyperlink supposedly is, so we know
> who can update it.

There's one in PCBuild/README - I don't know if that's the one
referred to. However, there is no valid replacement link, so I'm not
sure what it should be replaced with (other than a suggestion that the
only way of getting the toolkit compiler is by "unofficial" means).

> In any case, changing the reference to VS 2005 is the wrong thing to
> do - VS 2005 is *not* the default tool chain on Windows. So the update
> should be that there is no free compiler from MS anymore (or perhaps
> it should point to the .NET SDK, provided that has a free compiler).

As far as I know, there is *no* replacement for the Toolkit compiler
(where replacement implies builds to link with msvcr71.dll, and is an
optimising compiler).

MS withdrew the toolkit compiler right after my patch to document how
to use it was committed :-( I was sufficiently annoyed and demotivated
by this, that I never did anything to fix the documentation,
particularly as I don't have a good suggestion. I'll see if I have
time to look at the README and suggest suitable words.

Paul.

From martin at v.loewis.de  Tue Jun 20 00:35:55 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Tue, 20 Jun 2006 00:35:55 +0200
Subject: [Python-Dev] unicode imports
In-Reply-To: <e763gu$3aq$1@sea.gmane.org>
References: <129CEF95A523704B9D46959C922A280002A4CDD1@nemesis.central.ccp.cc>
	<e763gu$3aq$1@sea.gmane.org>
Message-ID: <449726CB.8080304@v.loewis.de>

Thomas Heller wrote:
> It should be noted that I once started to convert the import machinery
> to be fully unicode aware.  As far as I can tell, a *lot* has to be changed
> to make this work.

Is that code available somewhere still? Does it still work?

> I started with refactoring Python/import.c, but nobody responded to the question
> whether such a refactoring patch would be accepted or not.

I would like to see minimal changes only. I don't see why massive
refactoring would be necessary: the structure of the code should
persist - only the data types should change from char* to PyObject*.
Calls like stat() and open() should be generalized to accept
PyObject*, and otherwise keep their interface.

Regards,
Martin

From martin at v.loewis.de  Tue Jun 20 00:42:40 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Tue, 20 Jun 2006 00:42:40 +0200
Subject: [Python-Dev] os.getmtime now returns a float?
In-Reply-To: <20060619211735.GH7182@zot.electricrain.com>
References: <20060619211735.GH7182@zot.electricrain.com>
Message-ID: <44972860.2060503@v.loewis.de>

Gregory P. Smith wrote:
> os.path.getmtime returns a float on linux (2.5a2/b1 HEAD); in 2.4 it
> returned an int.  this change makes sense, it's what time.time returns.
> 
> should there be a note in Misc/NEWS or whatsnew mentioning this minor
> change (or did i miss it)?  It breaks code that unintentionally
> depended on it returning an int.

There is an entry in Misc/NEWS:

- stat_float_times is now True.

The change was originally announced in

http://www.python.org/doc/2.3/whatsnew/node18.html

which says

During testing, it was found that some applications will break if time
stamps are floats. For compatibility, when using the tuple interface of
the stat_result time stamps will be represented as integers. When using
named fields (a feature first introduced in Python 2.2), time stamps are
still represented as integers, unless os.stat_float_times() is invoked
to enable float return values:

>>> os.stat("/tmp").st_mtime
1034791200
>>> os.stat_float_times(True)
>>> os.stat("/tmp").st_mtime
1034791200.6335014

In Python 2.4, the default will change to always returning floats.

Regards,
Martin

From python-dev at zesty.ca  Tue Jun 20 00:47:26 2006
From: python-dev at zesty.ca (Ka-Ping Yee)
Date: Mon, 19 Jun 2006 17:47:26 -0500 (CDT)
Subject: [Python-Dev] uuid backward compatibility
In-Reply-To: <ca471dc20606191254t4cd12757t11114e377eb601d4@mail.gmail.com>
References: <2f188ee80606172016y52ed858ep2c9b62972684b3fe@mail.gmail.com>
	<Pine.LNX.4.58.0606180259550.698@server1.LFW.org>
	<44950D99.9000606@v.loewis.de>
	<bbaeab100606181249o1540989fod318bba817dde348@mail.gmail.com>
	<Pine.LNX.4.58.0606191432160.17937@server1.LFW.org>
	<ca471dc20606191254t4cd12757t11114e377eb601d4@mail.gmail.com>
Message-ID: <Pine.LNX.4.58.0606191626470.17937@server1.LFW.org>

On Mon, 19 Jun 2006, Guido van Rossum wrote:
> If you want to encourage people to use your module
> with older versions, the right path is to have a distribution (can be
> very light-weight) on your own website and add it to PyPI

Okay, i've removed the comment and submitted the package to PyPI.


-- ?!ng

From martin at v.loewis.de  Tue Jun 20 00:48:31 2006
From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=)
Date: Tue, 20 Jun 2006 00:48:31 +0200
Subject: [Python-Dev] XP build failing
In-Reply-To: <bbaeab100606191530m3ae8b683s77b537aad17d12c9@mail.gmail.com>
References: <bbaeab100606191530m3ae8b683s77b537aad17d12c9@mail.gmail.com>
Message-ID: <449729BF.2030405@v.loewis.de>

Brett Cannon wrote:
> http://www.python.org/dev/buildbot/all/x86%20XP-2%20trunk/builds/676/step-compile/0
> 
> Looks like Tim's XP box is crapping out on a header file included from
> Tcl/Tk.  Did the Tcl/Tk folk just break something and we are doing an
> external svn pull and thus got bit by it?

No, that comes straight out of

http://svn.python.org/projects/external/tcl8.4.12/generic/tclDecls.h

at least in theory: there is a build process for tcl running if it wasn't
built before. Could just as well be hard disk corruption.

Regards,
Martin

From martin at v.loewis.de  Tue Jun 20 00:58:05 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Tue, 20 Jun 2006 00:58:05 +0200
Subject: [Python-Dev] PyString_FromFormat
In-Reply-To: <129CEF95A523704B9D46959C922A280002A4CDE1@nemesis.central.ccp.cc>
References: <129CEF95A523704B9D46959C922A280002A4CDE1@nemesis.central.ccp.cc>
Message-ID: <44972BFD.9060503@v.loewis.de>

Kristján V. Jónsson wrote:
> One thing I have often lamented having in PyString_FromFormat (and
> cousins, like PyErr_Format) is to be able to integrate PyObject
> pointers.  Adding something like %S and %R (for str() and repr()
> respectively) seems very useful to me.  Is there any reason why this
> isn't there?

Not sure what the specific use case is, but I think I would use
PyString_Format instead, and use everything you can use in a %
operator. If you want to avoid explicit argument tuple building,
you can also write

  static PyObject *fmt = NULL;
  if (!fmt) fmt = PyString_FromString("Foo %s bar %s foobar %d");
  res = PyObject_CallMethod(fmt, "__mod__", "((OOi))", o1, o2, 42);

Regards,
Martin

From tim.peters at gmail.com  Tue Jun 20 01:49:03 2006
From: tim.peters at gmail.com (Tim Peters)
Date: Mon, 19 Jun 2006 19:49:03 -0400
Subject: [Python-Dev] XP build failing
In-Reply-To: <449729BF.2030405@v.loewis.de>
References: <bbaeab100606191530m3ae8b683s77b537aad17d12c9@mail.gmail.com>
	<449729BF.2030405@v.loewis.de>
Message-ID: <1f7befae0606191649m1c31c3bbxb5f479272d6ebb87@mail.gmail.com>

[Brett]
>> Looks like Tim's XP box is crapping out on a header file included from
>> Tcl/Tk.  Did the Tcl/Tk folk just break something and we are doing an
>> external svn pull and thus got bit by it?

[Martin]
> No, that comes straight out of
>
> http://svn.python.org/projects/external/tcl8.4.12/generic/tclDecls.h
>
> at least in theory: there is a build process for tcl running if it wasn't
> built before. Could just as well be hard disk corruption.

It's a Mystery, and I couldn't find anything wrong in tclDecls.h by
eyeball either.  Blowing away  some directories to force a rebuild of
the tcltk directory appears to have cured it.  There's no other
evidence of disk problems here, but after the current test run
finishes I'm taking the box down to run some pre-boot disk
diagnostics.  I probably left the 2.4 buildbot tree in a broken state,
BTW -- if I don't remember to fix that, somebody poke me :-)

From tim.peters at gmail.com  Tue Jun 20 01:55:57 2006
From: tim.peters at gmail.com (Tim Peters)
Date: Mon, 19 Jun 2006 19:55:57 -0400
Subject: [Python-Dev] XP build failing
In-Reply-To: <1f7befae0606191649m1c31c3bbxb5f479272d6ebb87@mail.gmail.com>
References: <bbaeab100606191530m3ae8b683s77b537aad17d12c9@mail.gmail.com>
	<449729BF.2030405@v.loewis.de>
	<1f7befae0606191649m1c31c3bbxb5f479272d6ebb87@mail.gmail.com>
Message-ID: <1f7befae0606191655q5bffeb6k9206ac82864c212d@mail.gmail.com>

FYI, the tests all pass on my box again.  Going offline to check the disk.

> ...
> I probably left the 2.4 buildbot tree in a broken state,
> BTW -- if I don't remember to fix that, somebody poke me :-)

I should clarify that that's _my_ 2.4 buildbot tree, only on my
machine.  I didn't break your 2.4 buildbot tree, let alone "the" 2.4
buildbot tree.  I don't even know what that means :-)

From trentm at activestate.com  Tue Jun 20 02:04:14 2006
From: trentm at activestate.com (Trent Mick)
Date: Mon, 19 Jun 2006 17:04:14 -0700
Subject: [Python-Dev] beta1 coming real soon
In-Reply-To: <449714CC.7020009@activestate.com>
References: <ee2a432c0606082323t1ee013b6nd91771fb461acede@mail.gmail.com>
	<449714CC.7020009@activestate.com>
Message-ID: <44973B7E.5020707@activestate.com>

Trent Mick wrote:
> * [ 1462338 ] upgrade pyexpat to expat 2.0.0
>    http://python.org/sf/1462338
> 
> * [ 1295808 ] expat symbols should be namespaced in pyexpat
>    http://python.org/sf/1295808

These are in now. I don't see any failures yet, either on the buildbots 
or on the Windows/Linux/Mac OS X boxes I tested on.

Trent



-- 
Trent Mick
trentm at activestate.com

From aahz at pythoncraft.com  Tue Jun 20 04:01:19 2006
From: aahz at pythoncraft.com (Aahz)
Date: Mon, 19 Jun 2006 19:01:19 -0700
Subject: [Python-Dev] ETree: xml vs xmlcore
Message-ID: <20060620020119.GA14570@panix.com>

Did we make a final decision about whether the canonical location for
ElementTree should be xml or xmlcore?  Also, there's no ElementTree or
xmlcore that I can find at http://docs.python.org/dev/ under global
module index or library reference.
-- 
Aahz (aahz at pythoncraft.com)           <*>         http://www.pythoncraft.com/

"I saw `cout' being shifted "Hello world" times to the left and stopped
right there."  --Steve Gonedes

From trentm at activestate.com  Tue Jun 20 05:32:38 2006
From: trentm at activestate.com (Trent Mick)
Date: Mon, 19 Jun 2006 20:32:38 -0700
Subject: [Python-Dev] beta1 coming real soon
In-Reply-To: <44973B7E.5020707@activestate.com>
References: <ee2a432c0606082323t1ee013b6nd91771fb461acede@mail.gmail.com>
	<449714CC.7020009@activestate.com>
	<44973B7E.5020707@activestate.com>
Message-ID: <44976C56.3070205@activestate.com>

Trent Mick wrote:
> Trent Mick wrote:
>> * [ 1462338 ] upgrade pyexpat to expat 2.0.0
>>    http://python.org/sf/1462338
>>
>> * [ 1295808 ] expat symbols should be namespaced in pyexpat
>>    http://python.org/sf/1295808
> 
> These are in now. I don't see any failures yet, either on the buildbots 
> or on the Windows/Linux/Mac OS X boxes I tested on.

It looks like this broke the x86 cygwin build:

http://python.org/dev/buildbot/trunk/x86%20cygwin%20trunk/builds/859/step-test/0

> gcc -shared -Wl,--enable-auto-image-base build/temp.cygwin-1.5.19-i686-2.5/home/anthony/Buildbot/trunk.baxter-cygwin/build/Modules/pyexpat.o build/temp.cygwin-1.5.19-i686-2.5/home/anthony/Buildbot/trunk.baxter-cygwin/build/Modules/expat/xmlparse.o build/temp.cygwin-1.5.19-i686-2.5/home/anthony/Buildbot/trunk.baxter-cygwin/build/Modules/expat/xmlrole.o build/temp.cygwin-1.5.19-i686-2.5/home/anthony/Buildbot/trunk.baxter-cygwin/build/Modules/expat/xmltok.o -L/usr/local/lib -L. -lpython2.5 -o build/lib.cygwin-1.5.19-i686-2.5/pyexpat.dll
> build/temp.cygwin-1.5.19-i686-2.5/home/anthony/Buildbot/trunk.baxter-cygwin/build/Modules/pyexpat.o: In function `set_error':
> /home/anthony/Buildbot/trunk.baxter-cygwin/build/Modules/pyexpat.c:126: undefined reference to `_XML_GetCurrentLineNumber'
> /home/anthony/Buildbot/trunk.baxter-cygwin/build/Modules/pyexpat.c:127: undefined reference to `_XML_GetCurrentColumnNumber'
> /home/anthony/Buildbot/trunk.baxter-cygwin/build/Modules/pyexpat.c:131: undefined reference to `_XML_ErrorString'
> ...

I don't have this environment set up right now, though I'll try to check 
in the morning if someone hasn't beaten me to it. (Showing my ignorance 
here and grasping at straws: this wouldn't be because of some extern 
C-name-mangling-with-underscore-prefix thing would it?)

Modules/pyexpat.c:5
	#include "expat.h"
Modules/expat/expat.h:18
	#include "expat_external.h"
Modules/expat/expat_external.h:12
	#include "pyexpatns.h"
Modules/expat/pyexpatns.h:52
	#define XML_GetCurrentLineNumber        PyExpat_XML_GetCurrentLineNumber

I don't see where the disconnect is.


Trent

-- 
Trent Mick
trentm at activestate.com

From nnorwitz at gmail.com  Tue Jun 20 05:49:43 2006
From: nnorwitz at gmail.com (Neal Norwitz)
Date: Mon, 19 Jun 2006 20:49:43 -0700
Subject: [Python-Dev] beta1 coming real soon
In-Reply-To: <44976C56.3070205@activestate.com>
References: <ee2a432c0606082323t1ee013b6nd91771fb461acede@mail.gmail.com>
	<449714CC.7020009@activestate.com> <44973B7E.5020707@activestate.com>
	<44976C56.3070205@activestate.com>
Message-ID: <ee2a432c0606192049h378e0fbci490c6967d5c9d176@mail.gmail.com>

On 6/19/06, Trent Mick <trentm at activestate.com> wrote:
> Trent Mick wrote:
> > Trent Mick wrote:
> >> * [ 1462338 ] upgrade pyexpat to expat 2.0.0
> >>    http://python.org/sf/1462338
> >>
> >> * [ 1295808 ] expat symbols should be namespaced in pyexpat
> >>    http://python.org/sf/1295808
> >
> > These are in now. I don't see any failures yet, either on the buildbots
> > or on the Windows/Linux/Mac OS X boxes I tested on.
>
> It looks like this broke the x86 cygwin build:
>
> http://python.org/dev/buildbot/trunk/x86%20cygwin%20trunk/builds/859/step-test/0

Unfortunately, it's always red, although this failure is new.

> I don't have this environment setup right now, though I'll try to check
> in the morning if someone hasn't beaten me to it. (Showing my ignorance
> here and grasping at straws: this wouldn't be because of some extern
> C-name-mangling-with-underscore-prefix thing would it?)

I didn't see where pyexpat.c got rebuilt in the compile/test steps.
Which would explain why it can't find new names.  Not sure if that's
the problem or not.  make distclean is supposed to clean that up, hmm
I wonder if distclean was run before this build.  I'm gonna try to
force another build on cygwin.  Anthony might need to do a manual make
distclean.

n

From jcarlson at uci.edu  Tue Jun 20 07:26:36 2006
From: jcarlson at uci.edu (Josiah Carlson)
Date: Mon, 19 Jun 2006 22:26:36 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <ca471dc20606190953m36151e11k64c506d5d5fa4896@mail.gmail.com>
References: <4496D06E.7070106@ewtllc.com>
	<ca471dc20606190953m36151e11k64c506d5d5fa4896@mail.gmail.com>
Message-ID: <20060619213651.1DAA.JCARLSON@uci.edu>


"Guido van Rossum" <guido at python.org> wrote:
> Perhaps I misunderstood Josiah's comment; I thought he was implying
> that a switch should be significantly *faster* than if/elif, and was
> arguing against features that would jeopardize that speedup. I would
> think that it would be fine if some switches could be compiled into
> some kind of lookup table while others would just be translated into a
> series of if/elifs. As long as the compiler can tell the difference.

I personally don't find switch/case statements to be significantly (if
at all) easier to read than if/elif/else chains, but that is subjective,
and I note that Ka-Ping finds switch/case to be significantly easier to
read.

Regardless of readability (I know that readability counts), TOOWTDI. If
we allow somewhat arbitrary cases, then any potential speedup may be
thrown away (which would bother those of us who use dispatching), and we
essentially get a different syntax for if/elif/else.  I don't know about
everyone else, but I'm not looking for a different syntax for
if/elif/else, I'm looking for fast dispatch with some reasonable syntax.

In my opinion, the most reasonable syntax is a semantic change for fast
dispatch inside of specifically crafted if/elif chains of the form:
    if/elif non_dotted_name == constant_expression:
As stated various ways by various people, you can generate a hash table
during function definition (or otherwise), verify that the value of
non_dotted_name is hashable, and jump to particular offsets.  If you are
careful with your offsets, you can even have parallel if/elif/else tests
that fall through in the case of a 'non-hashable'.

There is a drawback to the non-syntax if/elif/else optimization,
specifically that someone could find that their dispatch mysteriously
got slower when they went from x==1 to including some other comparison
operator in the chain somewhere.  Well, that and the somewhat restricted
set of optimizations, but we seem to be headed into that restricted set
of optimizations anyways.

One benefit to the non-syntax optimization is that it seems like it could
be implemented as a bytecode hack, allowing us to punt on the entire
discussion, and really decide on whether such a decorator should be in
the standard library (assuming someone is willing to write the decorator).
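
To make that concrete, here is a minimal sketch of what such a decorator
might look like (all names invented for illustration; it builds an
ordinary dict at decoration time rather than doing any actual bytecode
rewriting):

    def dispatch(**cases):
        # Build the lookup table once, when the function is decorated.
        # The decorated function itself is only the fallback for unknown
        # (or unhashable) values.
        def decorate(default):
            table = dict(cases)
            def dispatcher(value, *args, **kwds):
                try:
                    handler = table[value]
                except (KeyError, TypeError):   # unknown or unhashable
                    handler = default
                return handler(value, *args, **kwds)
            return dispatcher
        return decorate

    @dispatch(GET=lambda v: 'getting', PUT=lambda v: 'putting')
    def handle(value):
        return 'unknown verb %r' % (value,)

    print handle('GET')      # getting
    print handle(['boom'])   # unknown verb ['boom'] (unhashable -> fallback)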

 - Josiah


From fredrik at pythonware.com  Tue Jun 20 07:34:44 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Tue, 20 Jun 2006 07:34:44 +0200
Subject: [Python-Dev] ETree: xml vs xmlcore
In-Reply-To: <20060620020119.GA14570@panix.com>
References: <20060620020119.GA14570@panix.com>
Message-ID: <e781dj$nbr$1@sea.gmane.org>

Aahz wrote:

> Did we make a final decision about whether the canonical location for
> ElementTree should be xml or xmlcore?

the original idea was to use "xml" to get the latest and greatest from 
either the core or PyXML, "xmlcore" to use the bundled version.  I see 
no reason to change this just because people have short memory ;-)

(in other words, the modules should be documented as xml.xxx, with a 
note that says that they're also available as xmlcore.xxx.  this applies 
to all modules under xml, not just etree.).

> Also, there's no ElementTree or
> xmlcore that I can find at http://docs.python.org/dev/ under global
> module index or library reference.

the docs are still sitting in the patch tracker.  looks like the "just 
post text files; we'll fix the markup" approach doesn't work.  oh well, 
there's plenty of time to fix that before 2.5 final.

</F>


From mal at egenix.com  Tue Jun 20 10:57:37 2006
From: mal at egenix.com (M.-A. Lemburg)
Date: Tue, 20 Jun 2006 10:57:37 +0200
Subject: [Python-Dev] Switch statement
In-Reply-To: <ca471dc20606191329p1a978983g2c3ea7401213d7dd@mail.gmail.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>	<20060618184500.F34E.JCARLSON@uci.edu>	<ca471dc20606190737p2ca1ebf4m5091321b6d9ee2e@mail.gmail.com>	<4496D06E.7070106@ewtllc.com>	<ca471dc20606190953m36151e11k64c506d5d5fa4896@mail.gmail.com>	<4496DD2F.30501@ewtllc.com>	<ca471dc20606191035p40d5e85j7a8bb957a2dfb0ec@mail.gmail.com>	<4496FB54.5060800@ewtllc.com>	<ca471dc20606191247l183eaf0eh6012600500ea311b@mail.gmail.com>	<449707E2.7060803@ewtllc.com>
	<ca471dc20606191329p1a978983g2c3ea7401213d7dd@mail.gmail.com>
Message-ID: <4497B881.3060902@egenix.com>

This discussion appears to repeat everything we already have
in PEP 275:

http://www.python.org/dev/peps/pep-0275/

FWIW, below is a real-life use case that would
benefit from a switch statement or an optimization of the
existing if-elif-else case. It's the unpickler inner loop
of an XML pickle mechanism for Python objects.

Many parsers that follow the usual tokenize-first, then parse-the-tokens
approach would benefit in the same way by avoiding the
function call overhead.

Note that you rarely have the situation where you need
a single case for huge ranges of values (and
these can easily be handled in a separate if-else in the
else branch of the switch).

You do sometimes need identical code for a few cases, so
allowing multiple values per case would reduce code duplication
in such use cases.

However, using tuple syntax for
this would not be ideal, since a tuple may well be a valid value
to test for. This was discussed last time around: the only
way to cover this case is to always use tuple notation
for the values (see the syntax example in the PEP).

The code currently relies on Python interning
constants that appear in code, making the 'is' test slightly
faster than the '==' test.
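
(A quick illustration of that detail, since it trips people up; note
that interning of identifier-like literals is a CPython implementation
detail, not a language guarantee:)

    a = 'int'
    b = 'int'
    print a is b             # True: both literals share one interned string

    c = ''.join(['i', 'n', 't'])
    print c == a             # True
    print c is a             # False: built at runtime, not interned
    print intern(c) is a     # True: explicit interning restores identity

The inner loop itself: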

        for tag in taglist:
            node = tag.name
            tagtype = tag.type

            if tagtype == DATA:
                if readdata:
                    data = tag.tag
                    readdata = 0
                continue

            # This is where the switch would start...

            elif node is 'int':

                if tagtype == STARTTAG:
                    readdata = 1
                    continue

                elif tagtype == ENDTAG:
                    stack.append(int(data))
                    continue

            elif node is 'float':

                if tagtype == STARTTAG:
                    readdata = 1
                    continue

                elif tagtype == ENDTAG:
                    stack.append(float(data))
                    continue

            elif node is 'long':

                if tagtype == STARTTAG:
                    readdata = 1
                    continue

                elif tagtype == ENDTAG:
                    stack.append(long(data))
                    continue

            elif node is 'string':

                if tagtype == STARTTAG:
                    refid = int(tag.attributes['id'])
                    readdata = 1
                    continue

                elif tagtype == ENDTAG:
                    data = xmlunescape(data, xmlentities)
                    obj =  data.encode('latin-1')
                    stack.append(obj)
                    memo[refid] = obj
                    continue

            elif node is 'tuple':

                if tagtype == STARTTAG:
                    refid = int(tag.attributes['id'])
                    pushframe((node, stack, refid))
                    stack = []
                    continue

                elif tagtype == ENDTAG:
                    obj = tuple(stack)
                    node, stack, refid = popframe()
                    memo[refid] = obj
                    stack.append(obj)
                    continue

            elif node is 'list':

                if tagtype == STARTTAG:
                    refid = int(tag.attributes['id'])
                    pushframe((node, stack, refid))
                    stack = []
                    continue

                elif tagtype == ENDTAG:
                    obj = list(stack)
                    node, stack, refid = popframe()
                    memo[refid] = obj
                    stack.append(obj)
                    continue

            elif node is 'dict':

                if tagtype == STARTTAG:
                    refid = int(tag.attributes['id'])
                    pushframe((node, stack, refid))
                    stack = []
                    continue

                elif tagtype == ENDTAG:
                    items = stack
                    node, stack, refid = popframe()
                    obj = {}
                    for k,v in items:
                        obj[k] = v
                    memo[refid] = obj
                    stack.append(obj)
                    continue

            elif node is 'item':

                if tagtype == STARTTAG:
                    continue

                elif tagtype == ENDTAG:
                    key = stack[-2]
                    value = stack[-1]
                    stack[-2] = (key, value)
                    del stack[-1]
                    continue

            elif node is 'key' or \
                 node is 'value':

                if tagtype == STARTTAG:
                    continue

                elif tagtype == ENDTAG:
                    continue

            elif node is 'none':

                if tagtype == STARTTAG:
                    stack.append(None)
                    continue

                elif tagtype == ENDTAG:
                    continue

            elif node is 'unicode':

                if tagtype == STARTTAG:
                    refid = int(tag.attributes['id'])
                    readdata = 1
                    continue

                elif tagtype == ENDTAG:
                    data = xmlunescape(data, xmlentities)
                    stack.append(obj)
                    memo[refid] = obj
                    continue

            elif node is 'ref':

                if tagtype == STARTTAG:
                    readdata = 1
                    continue

                elif tagtype == ENDTAG:
                    stack.append(memo[int(data)])
                    continue

            elif node is 'instance':

                if tagtype == STARTTAG:
                    attr = tag.attributes
                    refid = int(attr['id'])
                    classname = str(attr['class'])
                    #print 'instance:', repr(refid), repr(classname)
                    pushframe((node, stack, refid, classname))
                    stack = []
                    continue

                elif tagtype == ENDTAG:
                    initargs, state = stack
                    node, stack, refid, classname = popframe()
                    obj = self.create_instance(classname,
                                               initargs,
                                               state)
                    memo[refid] = obj
                    stack.append(obj)
                    continue

            elif node is 'initArgs':

                if tagtype == STARTTAG:
                    pushframe((node, stack))
                    stack = []
                    continue

                elif tagtype == ENDTAG:
                    obj = tuple(stack)
                    node, stack = popframe()
                    stack.append(obj)
                    continue

            elif node is 'dynamic':

                if tagtype == STARTTAG:
                    attr = tag.attributes
                    refid = int(attr['id'])
                    pushframe((node, stack, refid))
                    stack = []
                    continue

                elif tagtype == ENDTAG:
                    callable, args = stack[:2]
                    if len(stack) >= 3:
                        state = stack[2]
                    else:
                        state = None
                    node, stack, refid = popframe()
                    obj = self.create_object(callable, args, state)
                    memo[refid] = obj
                    stack.append(obj)
                    continue

            elif node is 'state' or \
                 node is 'callable' or \
                 node is 'args' or \
                 node is 'imag' or \
                 node is 'real':

                if tagtype in (STARTTAG, ENDTAG):
                    continue

            elif node is 'global':

                if tagtype == STARTTAG:
                    attr = tag.attributes
                    refid = int(attr['id'])
                    fullname = attr['name']
                    obj = self.find_global(fullname)
                    memo[refid] = obj
                    stack.append(obj)
                    continue

                elif tagtype == ENDTAG:
                    continue

            elif node is 'complex':

                if tagtype == STARTTAG:
                    continue

                elif tagtype == ENDTAG:
                    real, imag = stack[-2:]
                    stack[-2] = complex(real, imag)
                    del stack[-1]
                    continue

            elif node is 'xmlPickle':

                if tagtype == STARTTAG:
                    stack = []
                    continue

                elif tagtype == ENDTAG:
                    obj = stack[-1]
                    break

            # If we get here, something is wrong
            raise UnpicklingError, \
                  'unrecognized input data: tag %s' % tag.tag



-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 20 2006)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________
2006-07-03: EuroPython 2006, CERN, Switzerland              12 days left

::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! ::::

From python-dev at zesty.ca  Tue Jun 20 11:01:47 2006
From: python-dev at zesty.ca (Ka-Ping Yee)
Date: Tue, 20 Jun 2006 04:01:47 -0500 (CDT)
Subject: [Python-Dev] Switch statement
In-Reply-To: <20060619213651.1DAA.JCARLSON@uci.edu>
References: <4496D06E.7070106@ewtllc.com>
	<ca471dc20606190953m36151e11k64c506d5d5fa4896@mail.gmail.com>
	<20060619213651.1DAA.JCARLSON@uci.edu>
Message-ID: <Pine.LNX.4.58.0606200358530.17937@server1.LFW.org>


On Mon, 19 Jun 2006, Josiah Carlson wrote:
> I personally don't find switch/case statements to be significantly (if
> at all) easier to read than if/elif/else chains, but that is subjective,
> and I note that Ka-Ping finds switch/case to be significantly easier to
> read.

Uh, i didn't mean to say that.  I said readability should be the primary
motivator for new syntax (in general).  Whether switch/case provides a
significant readability improvement, and how often it turns out to be
useful -- these things depend on the semantics we choose.


-- ?!ng

From p.f.moore at gmail.com  Tue Jun 20 11:18:50 2006
From: p.f.moore at gmail.com (Paul Moore)
Date: Tue, 20 Jun 2006 10:18:50 +0100
Subject: [Python-Dev] Documentation enhancement: "MS free compiler"?
In-Reply-To: <79990c6b0606191533u6bd92be3ya5ff8b0c6cc76c49@mail.gmail.com>
References: <20060613165130.GA6401@lairds.us> <4497233D.9050005@v.loewis.de>
	<79990c6b0606191533u6bd92be3ya5ff8b0c6cc76c49@mail.gmail.com>
Message-ID: <79990c6b0606200218t47814b89w1da6d69ae6588129@mail.gmail.com>

On 6/19/06, Paul Moore <p.f.moore at gmail.com> wrote:
> I'll see if I have time to look at the README and suggest suitable words.

I've uploaded http://www.python.org/sf/1509163 and assigned it to you,
Martin. I hope that's OK.

Paul.

From mwh at python.net  Tue Jun 20 12:52:57 2006
From: mwh at python.net (Michael Hudson)
Date: Tue, 20 Jun 2006 11:52:57 +0100
Subject: [Python-Dev] Numerical robustness, IEEE etc.
In-Reply-To: <E1FsLgS-0001fa-Cc@virgo.cus.cam.ac.uk>
References: <E1FsLgS-0001fa-Cc@virgo.cus.cam.ac.uk>
Message-ID: <C30C8CE4-E688-49A6-BF33-2BE3C811820D@python.net>

This mail never appeared on python-dev as far as I can tell, so I'm  
not snipping anything.

On 19 Jun 2006, at 16:29, Nick Maclaren wrote:

> Michael Hudson <mwh at python.net> wrote:
>>
>>> As I have posted to comp.lang.python, I am not happy with Python's
>>> numerical robustness - because it basically propagates the  
>>> 'features'
>>> of IEEE 754 and (worse) C99.
>>
>> That's not really how I would describe the situation today.
>
> It is certainly the case in 2.4.2, however you would describe it.

I guess you could say it reflects the features of C89.  It certainly  
doesn't do anything C99 specific.

But I wouldn't characterize anything Python does in the floating  
point area as "designed", particularly.  Portability makes that hard.

>>> 2) Because some people are dearly attached to the current behaviour,
>>> warts and all, and there is a genuine quandary of whether the  
>>> 'right'
>>> behaviour is trap-and-diagnose, propagate-NaN or whatever-IEEE-754R-
>>> finally-specifies (let's ignore C99 and Java as beyond redemption),
>>
>> Why?  Maybe it's clear to you, but it's not totally clear to me, and
>> in any case the discussion would be better informed for not being too
>> dismissive.
>
> Why which?

Why are C99 and Java beyond redemption?  I know some of the mistakes  
Java makes here, but still, you could at least hint at which you are  
thinking of.

> There are several things that you might be puzzled over.
> And where can I start?  Part of the problem is that I have spent a LOT
> of time in these areas in the past decades, and have been involved
> with many of the relevant standards, and I don't know what to assume.

Well, if you can't explain what your intentions are to *me*, as a  
mathematics-degree holding core Python developer that has done at  
least some work in this area, I posit that you aren't going to get  
very far.

I'm not intimately familiar with the standards like 754 but I have  
some idea what they contain, and I've read appendix F of C99, if that  
helps you target your explanations.

>>> there might well need to be options.  These can obviously be done by
>>> a command-line option, an environment variable or a float method.
>>> There are reasons to disfavour the last, but all are possible.   
>>> Which
>>> is the most Pythonesque approach?
>>
>> I have heard Tim say that there are people who would dearly like  
>> to be
>> able to choose.  Environment variables and command line switches are
>> not Pythonic.
>
> All right, but what is?  Firstly, for something that needs to be
> program-global?

Why does it need to be program global?  In my not-really-thought-out  
plans for straightening out CPython's floating point story I had  
envisioned code to be written something like this:

with fp_context(non_stop_context):
     a = 1e308*1e308 # a is now +inf
     b = a/a         # b is now a quiet nan

with fp_context(all_traps_context):
     a = 1.0/3.0 # raises some Inexact exception

(and have a default context which raises for Overflow, DivideByZero  
and InvalidOperation and ignores Underflow and Inexact).

This could be implemented by having a field in the threadstate of FPU  
flags to check after each fp operation (or each set of fp operations,  
possibly).  I don't think I have the guts to try to implement  
anything sensible using HW traps (which are thread-local as well,  
aren't they?).

Does this look anything at all like what you had in mind?

> Secondly, for things that don't need to be brings
> up my point of adding methods to a built-in class.

This isn't very hard, really; in fact float has class methods in 2.5...

>> I'm interested in making Python's floating point story better, and
>> have worked on a few things for Python 2.5 -- such as
>> pickling/marshalling of special values -- but I'm not really a
>> numerical programmer and don't like to guess what they need.
>
> Ah.  I must get a snapshot, then.  That was one of the lesser things
> on my list.

It was fairly straightforward, and still caused portability problems...

> I have spent a lot of the past few decades in the numerical
> programming arena, from many aspects.

Well, I hope I can help you bring some of that experience to Python.

Cheers,
mwh



From g.brandl at gmx.net  Tue Jun 20 16:04:11 2006
From: g.brandl at gmx.net (Georg Brandl)
Date: Tue, 20 Jun 2006 16:04:11 +0200
Subject: [Python-Dev] Switch statement
In-Reply-To: <ca471dc20606191329p1a978983g2c3ea7401213d7dd@mail.gmail.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>	<20060618184500.F34E.JCARLSON@uci.edu>	<ca471dc20606190737p2ca1ebf4m5091321b6d9ee2e@mail.gmail.com>	<4496D06E.7070106@ewtllc.com>	<ca471dc20606190953m36151e11k64c506d5d5fa4896@mail.gmail.com>	<4496DD2F.30501@ewtllc.com>	<ca471dc20606191035p40d5e85j7a8bb957a2dfb0ec@mail.gmail.com>	<4496FB54.5060800@ewtllc.com>	<ca471dc20606191247l183eaf0eh6012600500ea311b@mail.gmail.com>	<449707E2.7060803@ewtllc.com>
	<ca471dc20606191329p1a978983g2c3ea7401213d7dd@mail.gmail.com>
Message-ID: <e78v8s$one$1@sea.gmane.org>

Guido van Rossum wrote:
> On 6/19/06, Raymond Hettinger <rhettinger at ewtllc.com> wrote:
>> > I say, let someone give a complete implementation a try, and then try
>> > to modify as much standard library code as possible to use it. Then
>> > report back. That would be a very interesting experiment to do. (And
>> > thanks for the pointer to sre_compile.py as a use case!)
>>
>> Hmm, it could be time for the Georg bot to graduate to big game.
>> Georg, are you up to it?

I feel I am, and I'll have enough time until 2.6 anyway.
However, I first want to know that the syntax and semantics have been
properly discussed and fixed.

One thing I'd like to propose, which could resolve the ambiguity of
"case 1,2,3:", is:

switch x:
   case (1, 2, 3):
      # if x is the tuple (1,2,3)
   case in 1, 2, 3:
      # if x is 1, 2 or 3
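
For clarity, here is the intended semantics spelled out in today's
Python (plain if/elif, no new syntax):

    def match(x):
        if x == (1, 2, 3):        # case (1, 2, 3):
            return 'x is the tuple (1, 2, 3)'
        elif x in (1, 2, 3):      # case in 1, 2, 3:
            return 'x is 1, 2 or 3'
        else:
            return 'no match'

    print match((1, 2, 3))   # x is the tuple (1, 2, 3)
    print match(2)           # x is 1, 2 or 3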

> Georg is a bot? :-)

Yes, I was initiated in Reykjavik ;)

Georg


From ncoghlan at gmail.com  Tue Jun 20 16:50:48 2006
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 21 Jun 2006 00:50:48 +1000
Subject: [Python-Dev] Numerical robustness, IEEE etc.
In-Reply-To: <E1FsFXs-0005ZD-LA@virgo.cus.cam.ac.uk>
References: <E1FsFXs-0005ZD-LA@virgo.cus.cam.ac.uk>
Message-ID: <44980B48.2020303@gmail.com>

Nick Maclaren wrote:
> Brett Cannon's and Neal Norwitz's replies appreciated and noted, but
> responses sent by mail.
> 
> 
> Nick Coghlan <ncoghlan at gmail.com> wrote:
>> Python 2.4's decimal module is, in essence, a floating point emulator based on 
>> the General Decimal Arithmetic specification.
> 
> Grrk.  Format and all?  Because, in software, encoding, decoding and
> dealing with the special cases accounts for the vast majority of the
> time.  Using a format and specification designed for implementation
> in software is a LOT faster (often 5-20 times).

If by format you mean storing the number as sign, coefficient, exponent, then 
yes the decimal module took that part from Cowlishaw. Facundo (sensibly) 
ignored some of the hardware-specific details, though. The initial Python 
implementation was designed to be reasonably efficient from a speed point of 
view without being too hard to follow. Memory efficiency wasn't really a 
priority (the only concession to it that I can recall is the use of __slots__ 
on the decimal objects). So most of the time the decimal coefficients are 
stored as tuples of digits, with an internal conversion to long integers in 
order to do arbitrary coefficient arithmetic, but staying with the tuples of 
digits for multiplication and division by powers of ten.

And yes, we ended up adding an '_is_special' flag to the decimal objects 
simply because profiling showed that a whole heap of time was being spent 
deciding if any of the special cases applied to an operation. Having a single 
flag to check cut that time down appreciably.

The intent was always to replace the internal use of tuples and longs with a 
more efficient C implementation - that particular step simply wasn't needed 
for the original use case that led Facundo to write and implement PEP 327.

>> If you want floating point mathematics that doesn't have insane platform 
>> dependent behaviour, the decimal module is the recommended approach. By the 
>> time Python 2.6 rolls around, we will hopefully have an optimized version 
>> implemented in C (that's being worked on already).
> 
> Yes.  There is no point in building a wheel if someone else is doing it.
> Please pass my name on to the people doing the optimisation, as I have
> a lot of experience in this area and may be able to help.  But it is a
> fairly straightforward (if tricky) task.

Mateusz Rucowicz has taken up the challenge for Google's Summer of Code 
(mentored by Facundo Batista, the original author of PEP 327 and the decimal 
module).

I've cc'ed Facundo, so hopefully he will see this thread and chime in :)

> Mode A:  follow IEEE 754R slavishly, if and when it ever gets into print.
> There is no point in following C99, as it is too ill-defined, even if it
> were felt desirable.  This should not be the default, because of the
> flaws I mention above (see Kahan on Java).

If the C-coded decimal module proves fast enough (for a given definition of 
'fast enough'), I'll certainly be pushing for it to be made the standard float 
type in Py3k. Although I'm sure Tim will miss having to explain the result of 
repr(1.1) every few months ;)

That said, the fact that 754R *isn't* finished yet, is one of the big reasons 
why the decimal module is based on the General Decimal Arithmetic 
Specification instead.

> Mode B:  all numerically ambiguous or invalid operations should raise
> an exception - including pow(0,0), int(NaN) etc. etc.  There is a moot
> point over whether overflow is such a case in an arithmetic that has
> infinities, but let's skip over that one for now.

Let's not skip it, because the decimal module already seems to do pretty much 
what you describe here :)

(The below example code was tested with 2.5a2, but the same code should work 
in Python 2.4. Uninteresting traceback details have been omitted)

 >>> from decimal import Decimal as d
 >>> nan = d('NaN')
 >>> zero = d(0)
 >>>
 >>> pow(zero, zero)
Traceback (most recent call last):
   ...
decimal.InvalidOperation: 0 ** 0
 >>>
 >>> int(nan)
Traceback (most recent call last):
   ...
decimal.InvalidOperation
 >>>
 >>> d('1e999999999') * 10
Traceback (most recent call last):
   ...
decimal.Overflow: above Emax
 >>>
 >>> from decimal import getcontext, Overflow
 >>> ctx = getcontext()
 >>> ctx.traps[Overflow] = False
 >>> d('1e999999999') * 10
Decimal("Infinity")

Emax can be changed if the default limit is too low for a given application :)
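
(For instance, continuing the session above; just a sketch:)

    ctx.Emax = 10 ** 12           # raise the overflow threshold
    print d('1e999999999') * 10   # finite result now, instead of Infinity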

> Mode C:  all numerically ambiguous or invalid operations should return
> a NaN (or infinity, if appropriate).  Anything that would lose the error
> indication would raise an exception.  The selection between modes B and
> C could be done by a method on the class - with mode B being selected
> if any argument had it set, and mode C otherwise.

 >>> from decimal import InvalidOperation
 >>> ctx.traps[InvalidOperation] = False
 >>> pow(zero, zero)
Decimal("NaN")
 >>> int(nan)
Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
TypeError: __int__ returned non-int (type Decimal)

The selection of the current mode is on a per-thread basis. Starting with
Python 2.5, you can use the with statement to easily manage the current
decimal context and ensure it gets rolled back appropriately.
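
A minimal sketch of that style (decimal.localcontext is new in 2.5; the
with statement itself still needs the __future__ import there):

    from __future__ import with_statement
    import decimal

    with decimal.localcontext() as ctx:
        ctx.traps[decimal.InvalidOperation] = False
        print decimal.Decimal(0) ** decimal.Decimal(0)   # NaN, no exception

    # back outside the block, the previous context (and its traps) applies
    try:
        decimal.Decimal(0) ** decimal.Decimal(0)
    except decimal.InvalidOperation:
        print 'InvalidOperation raised again outside the block'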

> Heaven help us, there could be a mode D, which would be mode C but
> with trace buffers.  They are another sadly neglected software
> engineering technique, but let's not add every bell and whistle on
> the shelf :-)

 >>> ctx.flags[InvalidOperation]
8

It's not a trace buffer, but the decimal context does at least track how many 
times the different signals have been triggered in the current thread (or 
since they were last reset), regardless of whether or not the traps were 
actually enabled.

Hopefully Facundo will respond, and you can work with him regarding reviewing 
what Mateusz is working on, and possibly offering some pointers and 
suggestions. The work-in-progress can be seen in Python's SVN sandbox:

http://svn.python.org/view/sandbox/trunk/decimal-c/

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------
             http://www.boredomandlaziness.org

From theller at python.net  Tue Jun 20 17:07:28 2006
From: theller at python.net (Thomas Heller)
Date: Tue, 20 Jun 2006 17:07:28 +0200
Subject: [Python-Dev] unicode imports
In-Reply-To: <449726CB.8080304@v.loewis.de>
References: <129CEF95A523704B9D46959C922A280002A4CDD1@nemesis.central.ccp.cc>
	<e763gu$3aq$1@sea.gmane.org> <449726CB.8080304@v.loewis.de>
Message-ID: <44980F30.9010308@python.net>

Martin v. Löwis schrieb:
> Thomas Heller wrote:
>> It should be noted that I once started to convert the import machinery
>> to be fully unicode aware.  As far as I can tell, a *lot* has to be changed
>> to make this work.
> 
> Is that code available somewhere still? Does it still work?

Available as patch 1093253; I have not checked whether it still works.
> 
>> I started with refactoring Python/import.c, but nobody responded to the question
>> whether such a refactoring patch would be accepted or not.
> 
> I would like to see minimal changes only. I don't see why massive
> refactoring would be necessary: the structure of the code should
> persist - only the data types should change from char* to PyObject*.
> Calls like stat() and open() should be generalized to accept
> PyObject*, and otherwise keep their interface.

To be really useful, wide char versions of other things must also be
made available: command line arguments, environment variables
(PYTHONPATH), and maybe other stuff.

Thomas

From cce at clarkevans.com  Tue Jun 20 16:36:21 2006
From: cce at clarkevans.com (Clark C. Evans)
Date: Tue, 20 Jun 2006 10:36:21 -0400
Subject: [Python-Dev] Dropping __init__.py requirement for subpackages
In-Reply-To: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com>
References: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com>
Message-ID: <20060620143621.GA25300@prometheusresearch.com>

+1 Excellent Change
+1 Minimal Backward Compatibility Difficulties

I think this would also help quite a bit with newbie adoption of Python.
I've had to explain this un-feature on numerous occasions, and given
how smart Python is otherwise, I've wondered why it has this
requirement.  If you look in various open source packages, you'll find
that 95% of these __init__.py files are empty.  The ones at my work
actually say:

  # stupid Python requirement, don't remove this file

Why?  Someone decided to remove files of length 0 in our repository
without realizing the consequences.  Since the __init__.pyc file was
still around, it kept working... until someone brought down a fresh
copy of the repository and then it just "stopped" working.  Quite a
bit of hair pulling that one caused us.
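
If anyone wants to see the failure mode without digging through a
repository, here is a small self-contained sketch (Python 2, throwaway
temp directory, invented names):

    import os, sys, tempfile

    top = tempfile.mkdtemp()
    os.mkdir(os.path.join(top, 'pkg'))
    open(os.path.join(top, 'pkg', 'mod.py'), 'w').write('x = 1\n')
    sys.path.insert(0, top)

    try:
        import pkg.mod                    # no pkg/__init__.py yet
    except ImportError, e:
        print 'without __init__.py:', e

    open(os.path.join(top, 'pkg', '__init__.py'), 'w').close()
    import pkg.mod                        # now the import succeeds
    print 'with __init__.py: x =', pkg.mod.x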

The only case where this might cause a problem is with "resource"
directories that only contain ".html", ".jpg" and other non-Python
files.  So, perhaps this feature should only turn a directory into a
package if it actually contains some .py files.  It could also trigger
only when the package is explicitly imported?

Good luck /w the pitch-fork wielding users and telling the old-timers
where they can keep their backward compatibility.

Clark

From martin at v.loewis.de  Tue Jun 20 18:38:56 2006
From: martin at v.loewis.de (Martin v. Löwis)
Date: Tue, 20 Jun 2006 18:38:56 +0200
Subject: [Python-Dev] unicode imports
In-Reply-To: <44980F30.9010308@python.net>
References: <129CEF95A523704B9D46959C922A280002A4CDD1@nemesis.central.ccp.cc>
	<e763gu$3aq$1@sea.gmane.org> <449726CB.8080304@v.loewis.de>
	<44980F30.9010308@python.net>
Message-ID: <449824A0.1000603@v.loewis.de>

Thomas Heller wrote:
>> Is that code available somewhere still? Does it still work?
> 
> Available as patch 1093253; I have not checked whether it still works.

I see. It's quite a huge change; that's probably why nobody has found
the time to review it yet.

> To be really useful, wide char versions of other things must also be
> made available: command line arguments, environment variables
> (PYTHONPATH), and maybe other stuff.

While I think these things should eventually be done, I don't think
they are that related to import.c.

If W9x support gets dropped, we can rewrite PC/getpathp.c to use the
Unicode API throughout; that would allow putting non-ANSI path
names onto PYTHONPATH.

Making os.environ support Unicode is an entirely different issue.
I would like to see os.environ return Unicode if the key is Unicode;
another option would be to introduce os.uenviron.

Regards,
Martin

From trentm at activestate.com  Tue Jun 20 19:05:34 2006
From: trentm at activestate.com (Trent Mick)
Date: Tue, 20 Jun 2006 10:05:34 -0700
Subject: [Python-Dev] test_ctypes failure on Mac OS X/PowerPC 10.3.9
	(Panther)
Message-ID: <44982ADE.5070404@activestate.com>

Thomas and others,

Has anyone else seen failures in test_ctypes on older Mac OS X/PowerPC? 
Results are below. This is running a build of the trunk from last night:

	./configure && make && ./python.exe Lib/test/test_ctypes.py

Note that the test does NOT fail on the Mac OS X/x86 10.4.6 box that I have.

Trent

> $ ./python.exe Lib/test/test_ctypes.py
> /Users/trentm/src/python/Lib/ctypes/__init__.py:10: ImportWarning: Not importing directory '/Users/trentm/src/python/Modules/_ctypes': missing __init__.py
>   from _ctypes import Union, Structure, Array
> test_anon (ctypes.test.test_anon.AnonTest) ... ok
> test_anon_nonmember (ctypes.test.test_anon.AnonTest) ... ok
> test_anon_nonseq (ctypes.test.test_anon.AnonTest) ... ok
> test_nested (ctypes.test.test_anon.AnonTest) ... ok
> test (ctypes.test.test_array_in_pointer.Test) ... ok
> test_2 (ctypes.test.test_array_in_pointer.Test) ... ok
> test_classcache (ctypes.test.test_arrays.ArrayTestCase) ... ok
> test_from_address (ctypes.test.test_arrays.ArrayTestCase) ... ok
> test_from_addressW (ctypes.test.test_arrays.ArrayTestCase) ... ok
> test_numeric_arrays (ctypes.test.test_arrays.ArrayTestCase) ... ok
> test_simple (ctypes.test.test_arrays.ArrayTestCase) ... ok
> test_longlong (ctypes.test.test_bitfields.BitFieldTest) ... ok
> test_mixed_1 (ctypes.test.test_bitfields.BitFieldTest) ... ok
> test_mixed_2 (ctypes.test.test_bitfields.BitFieldTest) ... ok
> test_mixed_3 (ctypes.test.test_bitfields.BitFieldTest) ... ok
> test_multi_bitfields_size (ctypes.test.test_bitfields.BitFieldTest) ... ok
> test_nonint_types (ctypes.test.test_bitfields.BitFieldTest) ... ok
> test_signed (ctypes.test.test_bitfields.BitFieldTest) ... ok
> test_single_bitfield_size (ctypes.test.test_bitfields.BitFieldTest) ... ok
> test_ulonglong (ctypes.test.test_bitfields.BitFieldTest) ... ok
> test_unsigned (ctypes.test.test_bitfields.BitFieldTest) ... ok
> test_ints (ctypes.test.test_bitfields.C_Test) ... ok
> test_shorts (ctypes.test.test_bitfields.C_Test) ... ok
> test_buffer (ctypes.test.test_buffers.StringBufferTestCase) ... ok
> test_string_conversion (ctypes.test.test_buffers.StringBufferTestCase) ... ok
> test_unicode_buffer (ctypes.test.test_buffers.StringBufferTestCase) ... ok
> test_unicode_conversion (ctypes.test.test_buffers.StringBufferTestCase) ... ok
> test_endian_double (ctypes.test.test_byteswap.Test) ... ok
> test_endian_float (ctypes.test.test_byteswap.Test) ... ok
> test_endian_int (ctypes.test.test_byteswap.Test) ... ok
> test_endian_longlong (ctypes.test.test_byteswap.Test) ... ok
> test_endian_other (ctypes.test.test_byteswap.Test) ... ok
> test_endian_short (ctypes.test.test_byteswap.Test) ... ok
> test_struct_fields_1 (ctypes.test.test_byteswap.Test) ... ok
> test_struct_fields_2 (ctypes.test.test_byteswap.Test) ... ok
> test_struct_struct (ctypes.test.test_byteswap.Test) ... ok
> test_unaligned_native_struct_fields (ctypes.test.test_byteswap.Test) ... ok
> test_unaligned_nonnative_struct_fields (ctypes.test.test_byteswap.Test) ... ok
> test_byte (ctypes.test.test_callbacks.Callbacks) ... ok
> test_char (ctypes.test.test_callbacks.Callbacks) ... ok
> test_double (ctypes.test.test_callbacks.Callbacks) ... ok
> test_float (ctypes.test.test_callbacks.Callbacks) ... ok
> test_int (ctypes.test.test_callbacks.Callbacks) ... ok
> test_long (ctypes.test.test_callbacks.Callbacks) ... ok
> test_longlong (ctypes.test.test_callbacks.Callbacks) ... ok
> test_pyobject (ctypes.test.test_callbacks.Callbacks) ... ok
> test_short (ctypes.test.test_callbacks.Callbacks) ... ok
> test_ubyte (ctypes.test.test_callbacks.Callbacks) ... ok
> test_uint (ctypes.test.test_callbacks.Callbacks) ... ok
> test_ulong (ctypes.test.test_callbacks.Callbacks) ... ok
> test_ulonglong (ctypes.test.test_callbacks.Callbacks) ... ok
> test_ushort (ctypes.test.test_callbacks.Callbacks) ... ok
> test_integrate (ctypes.test.test_callbacks.SampleCallbacksTestCase) ... ok
> test_address2pointer (ctypes.test.test_cast.Test) ... ok
> test_array2pointer (ctypes.test.test_cast.Test) ... ok
> test_other (ctypes.test.test_cast.Test) ... ok
> test_p2a_objects (ctypes.test.test_cast.Test) ... ok
> test_byte (ctypes.test.test_cfuncs.CFunctions) ... ok
> test_byte_plus (ctypes.test.test_cfuncs.CFunctions) ... ok
> test_callwithresult (ctypes.test.test_cfuncs.CFunctions) ... ok
> test_double (ctypes.test.test_cfuncs.CFunctions) ... ok
> test_double_plus (ctypes.test.test_cfuncs.CFunctions) ... ok
> test_float (ctypes.test.test_cfuncs.CFunctions) ... ok
> test_float_plus (ctypes.test.test_cfuncs.CFunctions) ... ok
> test_int (ctypes.test.test_cfuncs.CFunctions) ... ok
> test_int_plus (ctypes.test.test_cfuncs.CFunctions) ... ok
> test_long (ctypes.test.test_cfuncs.CFunctions) ... ok
> test_long_plus (ctypes.test.test_cfuncs.CFunctions) ... ok
> test_longlong (ctypes.test.test_cfuncs.CFunctions) ... ok
> test_longlong_plus (ctypes.test.test_cfuncs.CFunctions) ... ok
> test_short (ctypes.test.test_cfuncs.CFunctions) ... ok
> test_short_plus (ctypes.test.test_cfuncs.CFunctions) ... ok
> test_ubyte (ctypes.test.test_cfuncs.CFunctions) ... ok
> test_ubyte_plus (ctypes.test.test_cfuncs.CFunctions) ... ok
> test_uint (ctypes.test.test_cfuncs.CFunctions) ... ok
> test_uint_plus (ctypes.test.test_cfuncs.CFunctions) ... ok
> test_ulong (ctypes.test.test_cfuncs.CFunctions) ... ok
> test_ulong_plus (ctypes.test.test_cfuncs.CFunctions) ... ok
> test_ulonglong (ctypes.test.test_cfuncs.CFunctions) ... ok
> test_ulonglong_plus (ctypes.test.test_cfuncs.CFunctions) ... ok
> test_ushort (ctypes.test.test_cfuncs.CFunctions) ... ok
> test_ushort_plus (ctypes.test.test_cfuncs.CFunctions) ... ok
> test_void (ctypes.test.test_cfuncs.CFunctions) ... ok
> test_checkretval (ctypes.test.test_checkretval.Test) ... ok
> test_gl (ctypes.test.test_find.Test_OpenGL_libs) ... ok
> test_glu (ctypes.test.test_find.Test_OpenGL_libs) ... ok
> test_glut (ctypes.test.test_find.Test_OpenGL_libs) ... ok
> test_basic (ctypes.test.test_funcptr.CFuncPtrTestCase) ... ok
> test_dllfunctions (ctypes.test.test_funcptr.CFuncPtrTestCase) ... ok
> test_first (ctypes.test.test_funcptr.CFuncPtrTestCase) ... ok
> test_structures (ctypes.test.test_funcptr.CFuncPtrTestCase) ... ok
> test_byval (ctypes.test.test_functions.FunctionTestCase) ... ok
> test_callbacks (ctypes.test.test_functions.FunctionTestCase) ... ok
> test_callbacks_2 (ctypes.test.test_functions.FunctionTestCase) ... ok
> test_doubleresult (ctypes.test.test_functions.FunctionTestCase) ... ok
> test_errors (ctypes.test.test_functions.FunctionTestCase) ... ok
> test_floatresult (ctypes.test.test_functions.FunctionTestCase) ... ok
> test_intresult (ctypes.test.test_functions.FunctionTestCase) ... ok
> test_longlong_callbacks (ctypes.test.test_functions.FunctionTestCase) ... ok
> test_longlongresult (ctypes.test.test_functions.FunctionTestCase) ... ok
> test_mro (ctypes.test.test_functions.FunctionTestCase) ... ok
> test_pointers (ctypes.test.test_functions.FunctionTestCase) ... ok
> test_shorts (ctypes.test.test_functions.FunctionTestCase) ... ok
> test_stringresult (ctypes.test.test_functions.FunctionTestCase) ... ok
> test_struct_return_2H (ctypes.test.test_functions.FunctionTestCase) ... ok
> test_struct_return_8H (ctypes.test.test_functions.FunctionTestCase) ... ok
> test_voidresult (ctypes.test.test_functions.FunctionTestCase) ... ok
> test_wchar_parm (ctypes.test.test_functions.FunctionTestCase) ... ok
> test_wchar_result (ctypes.test.test_functions.FunctionTestCase) ... ok
> test_incomplete_example (ctypes.test.test_incomplete.MyTestCase) ... ok
> test_get (ctypes.test.test_init.InitTest) ... ok
> test_c_char_p (ctypes.test.test_internals.ObjectsTestCase) ... ok
> test_embedded_structs (ctypes.test.test_internals.ObjectsTestCase) ... ok
> test_ints (ctypes.test.test_internals.ObjectsTestCase) ... ok
> test_ptr_struct (ctypes.test.test_internals.ObjectsTestCase) ... ok
> test_simple_struct (ctypes.test.test_internals.ObjectsTestCase) ... ok
> test_xxx (ctypes.test.test_internals.ObjectsTestCase) ... ok
> test_cint_array (ctypes.test.test_keeprefs.ArrayTestCase) ... ok
> test_p_cint (ctypes.test.test_keeprefs.PointerTestCase) ... ok
> test (ctypes.test.test_keeprefs.PointerToStructure) ... ok
> test_ccharp (ctypes.test.test_keeprefs.SimpleTestCase) ... ok
> test_cint (ctypes.test.test_keeprefs.SimpleTestCase) ... ok
> test_ccharp_struct (ctypes.test.test_keeprefs.StructureTestCase) ... ok
> test_cint_struct (ctypes.test.test_keeprefs.StructureTestCase) ... ok
> test_struct_struct (ctypes.test.test_keeprefs.StructureTestCase) ... ok
> test_qsort (ctypes.test.test_libc.LibTest) ... ok
> test_sqrt (ctypes.test.test_libc.LibTest) ... ok
> test_find (ctypes.test.test_loading.LoaderTest) ... ERROR
> test_load (ctypes.test.test_loading.LoaderTest) ... ERROR
> test_find (ctypes.test.test_macholib.MachOTest) ... ok
> test_cast (ctypes.test.test_memfunctions.MemFunctionsTest) ... ok
> test_memmove (ctypes.test.test_memfunctions.MemFunctionsTest) ... ok
> test_memset (ctypes.test.test_memfunctions.MemFunctionsTest) ... ok
> test_string_at (ctypes.test.test_memfunctions.MemFunctionsTest) ... ok
> test_wstring_at (ctypes.test.test_memfunctions.MemFunctionsTest) ... ok
> test_alignments (ctypes.test.test_numbers.NumberTestCase) ... ok
> test_as_parameter (ctypes.test.test_numbers.NumberTestCase) ... ok
> test_byref (ctypes.test.test_numbers.NumberTestCase) ... ok
> test_char_from_address (ctypes.test.test_numbers.NumberTestCase) ... ok
> test_default_init (ctypes.test.test_numbers.NumberTestCase) ... ok
> test_float_from_address (ctypes.test.test_numbers.NumberTestCase) ... ok
> test_floats (ctypes.test.test_numbers.NumberTestCase) ... ok
> test_from_param (ctypes.test.test_numbers.NumberTestCase) ... ok
> test_init (ctypes.test.test_numbers.NumberTestCase) ... ok
> test_int_from_address (ctypes.test.test_numbers.NumberTestCase) ... ok
> test_integers (ctypes.test.test_numbers.NumberTestCase) ... ok
> test_signed_values (ctypes.test.test_numbers.NumberTestCase) ... ok
> test_sizes (ctypes.test.test_numbers.NumberTestCase) ... ok
> test_typeerror (ctypes.test.test_numbers.NumberTestCase) ... ok
> test_unsigned_values (ctypes.test.test_numbers.NumberTestCase) ... ok
> test (ctypes.test.test_objects.TestCase) ... ok
> test_array_pointers (ctypes.test.test_parameters.SimpleTypesTestCase) ... ok
> test_byref_pointer (ctypes.test.test_parameters.SimpleTypesTestCase) ... ok
> test_byref_pointerpointer (ctypes.test.test_parameters.SimpleTypesTestCase) ... ok
> test_cstrings (ctypes.test.test_parameters.SimpleTypesTestCase) ... ok
> test_cw_strings (ctypes.test.test_parameters.SimpleTypesTestCase) ... ok
> test_int_pointers (ctypes.test.test_parameters.SimpleTypesTestCase) ... ok
> test_subclasses (ctypes.test.test_parameters.SimpleTypesTestCase) ... ok
> test_basic (ctypes.test.test_pointers.PointersTestCase) ... ok
> test_basics (ctypes.test.test_pointers.PointersTestCase) ... ok
> test_bug_1467852 (ctypes.test.test_pointers.PointersTestCase) ... ok
> test_callbacks_with_pointers (ctypes.test.test_pointers.PointersTestCase) ... ok
> test_change_pointers (ctypes.test.test_pointers.PointersTestCase) ... ok
> Test that a character pointer-to-pointer is correctly passed ... ok
> test_from_address (ctypes.test.test_pointers.PointersTestCase) ... ok
> test_other (ctypes.test.test_pointers.PointersTestCase) ... ok
> test_pass_pointers (ctypes.test.test_pointers.PointersTestCase) ... ok
> test_pointer_crash (ctypes.test.test_pointers.PointersTestCase) ... ok
> test (ctypes.test.test_prototypes.ArrayTest) ... ok
> test_POINTER_c_char_arg (ctypes.test.test_prototypes.CharPointersTestCase) ... ok
> test_c_char_p_arg (ctypes.test.test_prototypes.CharPointersTestCase) ... ok
> test_c_void_p_arg (ctypes.test.test_prototypes.CharPointersTestCase) ... ok
> test_int_pointer_arg (ctypes.test.test_prototypes.CharPointersTestCase) ... ok
> test_POINTER_c_wchar_arg (ctypes.test.test_prototypes.WCharPointersTestCase) ... ok
> test_c_wchar_p_arg (ctypes.test.test_prototypes.WCharPointersTestCase) ... ok
> test_PyOS_snprintf (ctypes.test.test_python_api.PythonAPITestCase) ... ok
> test_PyObj_FromPtr (ctypes.test.test_python_api.PythonAPITestCase) ... ok
> test_PyString_FromString (ctypes.test.test_python_api.PythonAPITestCase) ... ok
> test_PyString_FromStringAndSize (ctypes.test.test_python_api.PythonAPITestCase) ... ok
> test_FloatDivisionError (ctypes.test.test_random_things.CallbackTracbackTestCase) ... ok
> test_IntegerDivisionError (ctypes.test.test_random_things.CallbackTracbackTestCase) ... ok
> test_TypeErrorDivisionError (ctypes.test.test_random_things.CallbackTracbackTestCase) ... ok
> test_ValueError (ctypes.test.test_random_things.CallbackTracbackTestCase) ... ok
> test_callback (ctypes.test.test_refcounts.AnotherLeak) ... ok
> test_1 (ctypes.test.test_refcounts.RefcountTestCase) ... ok
> test_refcount (ctypes.test.test_refcounts.RefcountTestCase) ... ok
> test_char (ctypes.test.test_repr.ReprTest) ... ok
> test_numbers (ctypes.test.test_repr.ReprTest) ... ok
> test_with_prototype (ctypes.test.test_returnfuncptrs.ReturnFuncPtrTestCase) ... ok
> test_without_prototype (ctypes.test.test_returnfuncptrs.ReturnFuncPtrTestCase) ... ok
> test_compare (ctypes.test.test_simplesubclasses.Test) ... ok
> test_ignore_retval (ctypes.test.test_simplesubclasses.Test) ... ok
> test_int_callback (ctypes.test.test_simplesubclasses.Test) ... ok
> test_int_struct (ctypes.test.test_simplesubclasses.Test) ... ok
> test_16 (ctypes.test.test_sizes.SizesTestCase) ... ok
> test_32 (ctypes.test.test_sizes.SizesTestCase) ... ok
> test_64 (ctypes.test.test_sizes.SizesTestCase) ... ok
> test_8 (ctypes.test.test_sizes.SizesTestCase) ... ok
> test_size_t (ctypes.test.test_sizes.SizesTestCase) ... ok
> test_char_array (ctypes.test.test_slicing.SlicesTestCase) ... ok
> test_char_ptr (ctypes.test.test_slicing.SlicesTestCase) ... ok
> test_char_ptr_with_free (ctypes.test.test_slicing.SlicesTestCase) ... ok
> test_getslice_cint (ctypes.test.test_slicing.SlicesTestCase) ... ok
> test_setslice_cint (ctypes.test.test_slicing.SlicesTestCase) ... ok
> test_wchar_ptr (ctypes.test.test_slicing.SlicesTestCase) ... ok
> test__POINTER_c_char (ctypes.test.test_stringptr.StringPtrTestCase) ... ok
> test__c_char_p (ctypes.test.test_stringptr.StringPtrTestCase) ... ok
> test_functions (ctypes.test.test_stringptr.StringPtrTestCase) ... ok
> test (ctypes.test.test_strings.StringArrayTestCase) ... ok
> test_c_buffer_raw (ctypes.test.test_strings.StringArrayTestCase) ... ok
> test_c_buffer_value (ctypes.test.test_strings.StringArrayTestCase) ... ok
> test_param_1 (ctypes.test.test_strings.StringArrayTestCase) ... ok
> test_param_2 (ctypes.test.test_strings.StringArrayTestCase) ... ok
> test (ctypes.test.test_strings.WStringArrayTestCase) ... ok
> test_wchar (ctypes.test.test_strings.WStringTestCase) ... ok
> test_1_A (ctypes.test.test_struct_fields.StructFieldsTestCase) ... ok
> test_1_B (ctypes.test.test_struct_fields.StructFieldsTestCase) ... ok
> test_2 (ctypes.test.test_struct_fields.StructFieldsTestCase) ... ok
> test_3 (ctypes.test.test_struct_fields.StructFieldsTestCase) ... ok
> test_4 (ctypes.test.test_struct_fields.StructFieldsTestCase) ... ok
> test (ctypes.test.test_structures.PointerMemberTestCase) ... ok
> test_abstract_class (ctypes.test.test_structures.StructureTestCase) ... ok
> test_emtpy (ctypes.test.test_structures.StructureTestCase) ... ok
> test_fields (ctypes.test.test_structures.StructureTestCase) ... ok
> test_init_errors (ctypes.test.test_structures.StructureTestCase) ... ok
> test_initializers (ctypes.test.test_structures.StructureTestCase) ... ok
> test_intarray_fields (ctypes.test.test_structures.StructureTestCase) ... ok
> test_invalid_field_types (ctypes.test.test_structures.StructureTestCase) ... ok
> test_keyword_initializers (ctypes.test.test_structures.StructureTestCase) ... ok
> test_methods (ctypes.test.test_structures.StructureTestCase) ... ok
> test_nested_initializers (ctypes.test.test_structures.StructureTestCase) ... ok
> test_packed (ctypes.test.test_structures.StructureTestCase) ... ok
> test_simple_structs (ctypes.test.test_structures.StructureTestCase) ... ok
> test_struct_alignment (ctypes.test.test_structures.StructureTestCase) ... ok
> test_structures_with_wchar (ctypes.test.test_structures.StructureTestCase) ... ok
> test_unions (ctypes.test.test_structures.StructureTestCase) ... ok
> test_subclass (ctypes.test.test_structures.SubclassesTest) ... ok
> test_subclass_delayed (ctypes.test.test_structures.SubclassesTest) ... ok
> test_native (ctypes.test.test_unaligned_structures.TestStructures) ... ok
> test_swapped (ctypes.test.test_unaligned_structures.TestStructures) ... ok
> test_ascii_ignore (ctypes.test.test_unicode.StringTestCase) ... ok
> test_ascii_replace (ctypes.test.test_unicode.StringTestCase) ... ok
> test_ascii_strict (ctypes.test.test_unicode.StringTestCase) ... ok
> test_buffers (ctypes.test.test_unicode.StringTestCase) ... ok
> test_latin1_strict (ctypes.test.test_unicode.StringTestCase) ... ok
> test_ascii_ignore (ctypes.test.test_unicode.UnicodeTestCase) ... ok
> test_ascii_replace (ctypes.test.test_unicode.UnicodeTestCase) ... ok
> test_ascii_strict (ctypes.test.test_unicode.UnicodeTestCase) ... ok
> test_buffers (ctypes.test.test_unicode.UnicodeTestCase) ... ok
> test_latin1_strict (ctypes.test.test_unicode.UnicodeTestCase) ... ok
> test_an_integer (ctypes.test.test_values.ValuesTestCase) ... ok
> test_undefined (ctypes.test.test_values.ValuesTestCase) ... ok
> test_array_invalid_length (ctypes.test.test_varsize_struct.VarSizeTest) ... ok
> test_resize (ctypes.test.test_varsize_struct.VarSizeTest) ... ok
> test_vararray_is_sane (ctypes.test.test_varsize_struct.VarSizeTest) ... ok
> test_varsized_array (ctypes.test.test_varsize_struct.VarSizeTest) ... ok
> test_zerosized_array (ctypes.test.test_varsize_struct.VarSizeTest) ... ok
> test_struct_by_value (ctypes.test.test_win32.Structures) ... ok
> 
> ======================================================================
> ERROR: test_find (ctypes.test.test_loading.LoaderTest)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "/Users/trentm/src/python/Lib/ctypes/test/test_loading.py", line 41, in test_find
>     cdll.LoadLibrary(lib)
>   File "/Users/trentm/src/python/Lib/ctypes/__init__.py", line 372, in LoadLibrary
>     return self._dlltype(name)
>   File "/Users/trentm/src/python/Lib/ctypes/__init__.py", line 290, in __init__
>     self._handle = _dlopen(self._name, mode)
> OSError: dlcompat: unable to open this file with RTLD_LOCAL
> 
> ======================================================================
> ERROR: test_load (ctypes.test.test_loading.LoaderTest)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "/Users/trentm/src/python/Lib/ctypes/test/test_loading.py", line 26, in test_load
>     CDLL(libc_name)
>   File "/Users/trentm/src/python/Lib/ctypes/__init__.py", line 290, in __init__
>     self._handle = _dlopen(self._name, mode)
> OSError: dlcompat: unable to open this file with RTLD_LOCAL
> 
> ----------------------------------------------------------------------
> Ran 258 tests in 0.377s
> 
> FAILED (errors=2)
> Traceback (most recent call last):
>   File "Lib/test/test_ctypes.py", line 12, in <module>
>     test_main()
>   File "Lib/test/test_ctypes.py", line 9, in test_main
>     run_suite(unittest.TestSuite(suites))
>   File "/Users/trentm/src/python/Lib/test/test_support.py", line 406, in run_suite
>     raise TestFailed(msg)
> test.test_support.TestFailed: errors occurred; run in verbose mode for details



-- 
Trent Mick
trentm at activestate.com

From dynkin at gmail.com  Tue Jun 20 19:08:24 2006
From: dynkin at gmail.com (George Yoshida)
Date: Wed, 21 Jun 2006 02:08:24 +0900
Subject: [Python-Dev] uuid backward compatibility
In-Reply-To: <Pine.LNX.4.58.0606191432160.17937@server1.LFW.org>
References: <2f188ee80606172016y52ed858ep2c9b62972684b3fe@mail.gmail.com>
	<Pine.LNX.4.58.0606180259550.698@server1.LFW.org>
	<44950D99.9000606@v.loewis.de>
	<bbaeab100606181249o1540989fod318bba817dde348@mail.gmail.com>
	<Pine.LNX.4.58.0606191432160.17937@server1.LFW.org>
Message-ID: <2f188ee80606201008l427e2570qd9eec5ae87eb7eeb@mail.gmail.com>

All your replies clarify what your comment was intended to
mean, especially this one:

> I'd just like people who get their hands on the
> module to know that they can use it with 2.3.

When I first read the comment, I interpreted it too broadly
and took it as a requirement for compatibility. But you didn't
mean it that way at all.

My apologies for not asking you beforehand.

uuid is now removed from the dont-break-compatibility list.

-- 

george

From theller at python.net  Tue Jun 20 20:06:54 2006
From: theller at python.net (Thomas Heller)
Date: Tue, 20 Jun 2006 20:06:54 +0200
Subject: [Python-Dev] test_ctypes failure on Mac OS X/PowerPC 10.3.9
	(Panther)
In-Reply-To: <44982ADE.5070404@activestate.com>
References: <44982ADE.5070404@activestate.com>
Message-ID: <4498393E.1020101@python.net>

Trent Mick schrieb:
> Thomas and others,
> 
> Has anyone else seen failures in test_ctypes on older Mac OS X/PowerPC?
> Results are below. This is running a build of the trunk from last night:
> 
> 	./configure && make && ./python.exe Lib/test/test_ctypes.py
> 
> Note that the test does NOT fail on the Mac OS X/x86 10.4.6 box that I have.

It also works on 10.4.?? PowerPC.  I guess the fix has to wait until
I'm able to install 10.3 on my Mac; I have the DVDs already but have
not yet had the time.  If anyone is willing to give me ssh access to a
10.3 box, I can try to fix this earlier.

Thomas


From facundobatista at gmail.com  Tue Jun 20 20:25:26 2006
From: facundobatista at gmail.com (Facundo Batista)
Date: Tue, 20 Jun 2006 15:25:26 -0300
Subject: [Python-Dev] Numerical robustness, IEEE etc.
In-Reply-To: <44980B48.2020303@gmail.com>
References: <E1FsFXs-0005ZD-LA@virgo.cus.cam.ac.uk>
	<44980B48.2020303@gmail.com>
Message-ID: <e04bdf310606201125q3bfbd7a3kc9ec8c96099c1285@mail.gmail.com>

2006/6/20, Nick Coghlan <ncoghlan at gmail.com>:

> Nick Maclaren wrote:
> > Brett Cannon's and Neal Norwitz's replies appreciated and noted, but
> > responses sent by mail.

Damn, the most difficult way to keep a thread...


> The intent was always to replace the internal use of tuples and longs with a
> more efficient C implementation - that particular step simply wasn't needed
> for the original use case that lead Facundo to write and implement PEP 327.

Right. We never addressed speed. I mean, we made Decimal as fast as we
could in the limited time we had (Raymond H. also helped a lot here),
but it was NOT designed for speed.

BTW, prove to me that Decimal is not fast enough ;)
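
(A rough way to measure it, if anyone wants to try; no numbers implied:)

    import timeit

    float_t = timeit.Timer('x * y + x / y', 'x = 1.1; y = 2.2')
    dec_t   = timeit.Timer('x * y + x / y',
                           'from decimal import Decimal; '
                           'x = Decimal("1.1"); y = Decimal("2.2")')

    print 'float  :', min(float_t.repeat(3, 100000))
    print 'Decimal:', min(dec_t.repeat(3, 100000))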


> Mateusz Rucowicz has taken up the challenge for Google's Summer of Code
> (mentored by Facundo Batista, the original author of PEP 327 and the decimal
> module).
>
> I've cc'ed Facundo, so hopefully he will see this thread and chime in :)

I was reading the thread, yes, but it's so difficult to follow when
half the messages are not in the list... :(


> > Mode A:  follow IEEE 754R slavishly, if and when it ever gets into print.
> > There is no point in following C99, as it is too ill-defined, even if it
> > were felt desirable.  This should not be the default, because of the
> > flaws I mention above (see Kahan on Java).

See Cowlishaw's specification for how you can configure contexts to
achieve different "modes", and the reasoning behind them.

Easier way: just read the decimal docs.


> Let's not skip it, because the decimal module already seems to do pretty much
> what you describe here :)

Well... I think I missed it... but a very good way to sum it up would
be: explain to us what you need to do that isn't achievable with
Decimal...

Regards,

-- 
.    Facundo

Blog: http://www.taniquetil.com.ar/plog/
PyAr: http://www.python.org/ar/

From ronaldoussoren at mac.com  Tue Jun 20 20:50:56 2006
From: ronaldoussoren at mac.com (Ronald Oussoren)
Date: Tue, 20 Jun 2006 20:50:56 +0200
Subject: [Python-Dev] test_ctypes failure on Mac OS X/PowerPC 10.3.9
	(Panther)
In-Reply-To: <4498393E.1020101@python.net>
References: <44982ADE.5070404@activestate.com> <4498393E.1020101@python.net>
Message-ID: <71C6D0BF-B569-48BD-9F9D-05D3660FF2AA@mac.com>


On 20-jun-2006, at 20:06, Thomas Heller wrote:

> Trent Mick schrieb:
>> Thomas and others,
>>
>> Has anyone else seen failures in test_ctypes on older Mac OS X/ 
>> PowerPC?
>> Results are below. This is running a build of the trunk from last  
>> night:
>>
>> 	./configure && make && ./python.exe Lib/test/test_ctypes.py
>>
>> Note that the test does NOT fail on the Mac OS X/x86 10.4.6 box  
>> that I have.
>
> It also works on 10.4.?? Power PC.  I guess the fix has to wait until
> I'm able to install 10.3 on my mac, I have the DVDs already but  
> have not
> yet had the time.  If anyone is willing to give me ssh access to a  
> 10.3
> box I can try to fix this earlier.

I had some problems with my 10.3-capable box, but happily enough it  
decided to come alive again. I'm currently booted into 10.3.9 and  
will have a look.

Ronald


From guido at python.org  Tue Jun 20 21:53:20 2006
From: guido at python.org (Guido van Rossum)
Date: Tue, 20 Jun 2006 12:53:20 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <20060619213651.1DAA.JCARLSON@uci.edu>
References: <4496D06E.7070106@ewtllc.com>
	<ca471dc20606190953m36151e11k64c506d5d5fa4896@mail.gmail.com>
	<20060619213651.1DAA.JCARLSON@uci.edu>
Message-ID: <ca471dc20606201253q2f5314fawcce357ee873d1e5@mail.gmail.com>

On 6/19/06, Josiah Carlson <jcarlson at uci.edu> wrote:
>
> "Guido van Rossum" <guido at python.org> wrote:
> > Perhaps I misunderstood Josiah's comment; I thought he was implying
> > that a switch should be significantly *faster* than if/elif, and was
> > arguing against features that would jeopardize that speedup. I would
> > think that it would be fine if some switches could be compiled into
> > some kind of lookup table while others would just be translated into a
> > series of if/elifs. As long as the compiler can tell the difference.
>
> I personally don't find switch/case statements to be significantly (if
> at all) easier to read than if/elif/else chains, but that is subjective,
> and I note that Ka-Ping finds switch/case to be significantly easier to
> read.
>
> Regardless of readability (I know that readability counts), TOOWTDI. If
> we allow somewhat arbitrary cases, then any potential speedup may be
> thrown away (which would bother those of us who use dispatching), and we
> essentially get a different syntax for if/elif/else.  I don't know about
> everyone else, but I'm not looking for a different syntax for
> if/elif/else, I'm looking for fast dispatch with some reasonable syntax.

Careful though. Even the most efficient switch can't meet all your
dispatch needs; a switch requires that you be able to list all the
cases statically in your source code. If that works for you, great.
But if you need some kind of dynamic extensibility, switch will never
cut it, and you'll have to use a dict of functions or the standard
getattr(self, '_Prefix_' + name) dynamic-dispatch-without-dict
approach.
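
For concreteness, here is a minimal sketch of the two dynamic-dispatch
styles mentioned above (the handler names are made up); both can be
extended at runtime in ways a statically compiled switch could not:

def handle_spam(arg):
    return 'spam: %r' % (arg,)

def handle_eggs(arg):
    return 'eggs: %r' % (arg,)

# Style 1: a dict of functions, extensible by adding or removing keys.
dispatch = {'spam': handle_spam, 'eggs': handle_eggs}

def run(name, arg):
    return dispatch[name](arg)

# Style 2: getattr-based prefix dispatch, extensible by subclassing.
class Handler(object):
    def _do_spam(self, arg):
        return handle_spam(arg)
    def dispatch(self, name, arg):
        return getattr(self, '_do_' + name)(arg)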

> In my opinion, the most reasonable syntax is a semantic change for fast
> dispatch inside of specifically crafted if/elif chains of the form:
>     if/elif non_dotted_name == constant_expression:
> As stated various ways by various people, you can generate a hash table
> during function definition (or otherwise), verify that the value of
> non_dotted_name is hashable, and jump to particular offsets.  If you are
> careful with your offsets, you can even have parallel if/elif/else tests
> that fall through in the case of a 'non-hashable'.

Note that this is just Solution 1 from PEP 275.

> There is a drawback to the non-syntax if/elif/else optimization,
> specifically that someone could find that their dispatch mysteriously
> got slower when they went from x==1 to including some other comparison
> operator in the chain somewhere.  Well, that and the somewhat restricted
> set of optimizations, but we seem to be headed into that restricted set
> of optimizations anyways.

Remember that most things we usually *think* of as constants aren't
recognized as such by the compiler. For example the constants used by
sre_compile.py.

> One benefit to the non-syntax optimization is that it seems like it could
> be implemented as a bytecode hack, allowing us to punt on the entire
> discussion, and really decide on whether such a decorator should be in
> the standard library (assuming someone is willing to write the decorator).

I expect the scope of the optimization to be much less than you think,
and I expect that people will waste lots of time trying to figure out
why the optimization doesn't happen.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From kristjan at ccpgames.com  Tue Jun 20 23:02:53 2006
From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_V=2E_J=F3nsson?=)
Date: Tue, 20 Jun 2006 21:02:53 -0000
Subject: [Python-Dev] Python 2.4 extensions require VC 7.1?
Message-ID: <129CEF95A523704B9D46959C922A280002A4D0B6@nemesis.central.ccp.cc>

Actually, I was looking at the 1989 standard, which is what we are supposed to be using, right?  But the exact wording in C99 is:
"If the request can be honored, the signal function returns the value of func for the
most recent successful call to signal for the specified signal sig. Otherwise, a value of
SIG_ERR is returned and a positive value is stored in errno."  Sorry for mentioning EINVAL, my mistake.

Continuing the hairsplitting:
Note that the standard doesn't specify the valid signals; they are implementation-defined, as you say.  In that sense, any old integer cannot be considered a "signal sig" in the above quote, so strictly speaking (in my interpretation) there is nothing wrong with aborting for an integer that isn't defined to be a signal by the implementation.

However:  Let's just agree that signal() isn't up to spec, and leave it out of the discussion.  There are provisions all over the code for particular quirks of particular platforms.

Everything else appears to be up to spec.

> However, the usual, natural, straight-forward way of processing the mode string (in a loop with a switch statement) can't possibly cause crashes.
Making assumptions about how someone implements the CRT is not good practice.  Relying on the implementors to go above and beyond the spec to ensure stability or some "common sense" behaviour is inviting trouble.  We should be wary, not just on Microsoft platforms, of treading into domains that aren't defined.  Anything can happen.  Literally.
But if you expect that, then in that sense the VC8 implementation is probably better than most, because they have gone out of their way to identify all the ways you can violate the domain of those functions, and to do something defined when you do.  So the behavior of these functions is defined for a much wider range of input than in most other implementations; in other words, they have gone above and beyond the spec.


> So we can redirect all signals handlers to Python code if the user wishes so.
I wonder.  Setting process-wide handlers like that seems odd if you are embedding Python to do scripting for you.  The main app is usually the one that decides on signal handling for various things.  Seems like a python.exe thing to me.  But never mind.  At the very least one should then only set those handlers that are valid for each implementation.
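
As a rough illustration of "only set the handlers the implementation
accepts" (the no-op handler below is just a placeholder, not what Python
actually installs), one could probe each signal and tolerate rejections:

import signal

def _placeholder(signum, frame):
    pass

for name in dir(signal):
    # skip SIG_DFL / SIG_IGN and anything that isn't a signal name
    if not name.startswith('SIG') or name.startswith('SIG_'):
        continue
    signum = getattr(signal, name)
    try:
        signal.signal(signum, _placeholder)
    except (ValueError, RuntimeError, OSError):
        pass  # the implementation refuses to let us catch this one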

I'd also like to draw your attention to timemodule.c, lines 1162 onwards and lines 410 onwards.  Here is an example of us forcing values into an acceptable range, although the standard says: "If any of the specified values is outside the normal range, the characters stored are unspecified."  But on some platforms it would crash, not just store unspecified values.
So we are already compensating for implementations that break the standard (I think we can agree that this is breakage).

So, if I agree with you that signal() is broken (by default, but mended by our fix), is there any other technical reason why VC8 should not be taken into the fold?  After all, it has already pointed out to us that we are dangerously allowing user format strings to propagate.

Cheers,
Kristján

From ronaldoussoren at mac.com  Tue Jun 20 23:08:28 2006
From: ronaldoussoren at mac.com (Ronald Oussoren)
Date: Tue, 20 Jun 2006 23:08:28 +0200
Subject: [Python-Dev] test_ctypes failure on Mac OS X/PowerPC 10.3.9
	(Panther)
In-Reply-To: <71C6D0BF-B569-48BD-9F9D-05D3660FF2AA@mac.com>
References: <44982ADE.5070404@activestate.com> <4498393E.1020101@python.net>
	<71C6D0BF-B569-48BD-9F9D-05D3660FF2AA@mac.com>
Message-ID: <3B96F3FF-B847-4E57-ACC2-F7D979DCA5BA@mac.com>


On 20-jun-2006, at 20:50, Ronald Oussoren wrote:

>
> On 20-jun-2006, at 20:06, Thomas Heller wrote:
>
>> Trent Mick schrieb:
>>> Thomas and others,
>>>
>>> Has anyone else seen failures in test_ctypes on older Mac OS X/
>>> PowerPC?
>>> Results are below. This is running a build of the trunk from last
>>> night:
>>>
>>> 	./configure && make && ./python.exe Lib/test/test_ctypes.py
>>>
>>> Note that the test does NOT fail on the Mac OS X/x86 10.4.6 box
>>> that I have.
>>
>> It also works on 10.4.?? Power PC.  I guess the fix has to wait until
>> I'm able to install 10.3 on my mac, I have the DVDs already but
>> have not
>> yet had the time.  If anyone is willing to give me ssh access to a
>> 10.3
>> box I can try to fix this earlier.
>
> I had some problems with my 10.3-capable box, but happily enough it
> decided to come alive again. I'm currently booted into 10.3.9 and
> will have a look.

It is a platform bug: RTLD_LOCAL doesn't work on 10.3. The following
C snippet fails with the same error as ctypes ("FAIL: dlcompat: unable
to open this file with RTLD_LOCAL"). This seems to be confirmed by this
source test file from Darwin:
http://darwinsource.opendarwin.org/10.4.1/dyld-43/unit-tests/test-cases/dlopen-RTLD_LOCAL/main.c

/* Begin of file */
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
         void* lib;

         lib = dlopen("/usr/lib/libz.dylib", RTLD_LOCAL);
         if (lib == NULL) {
                 printf("FAIL: %s\n", dlerror());
         } else {
                 printf("OK\n");
         }
         return 0;
}
/* End of file */


>
> Ronald
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/ 
> ronaldoussoren%40mac.com


From jcarlson at uci.edu  Tue Jun 20 23:12:04 2006
From: jcarlson at uci.edu (Josiah Carlson)
Date: Tue, 20 Jun 2006 14:12:04 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <ca471dc20606201253q2f5314fawcce357ee873d1e5@mail.gmail.com>
References: <20060619213651.1DAA.JCARLSON@uci.edu>
	<ca471dc20606201253q2f5314fawcce357ee873d1e5@mail.gmail.com>
Message-ID: <20060620134220.1DBA.JCARLSON@uci.edu>


"Guido van Rossum" <guido at python.org> wrote:
> 
> On 6/19/06, Josiah Carlson <jcarlson at uci.edu> wrote:
> > Regardless of readability (I know that readability counts), TOOWTDI. If
> > we allow somewhat arbitrary cases, then any potential speedup may be
> > thrown away (which would bother those of us who use dispatching), and we
> > essentially get a different syntax for if/elif/else.  I don't know about
> > everyone else, but I'm not looking for a different syntax for
> > if/elif/else, I'm looking for fast dispatch with some reasonable syntax.
> 
> Careful though. Even the most efficient switch can't meet all your
> dispatch needs; a switch requires that you be able to list all the
> cases statically in your source code. If that works for you, great.
> But if you need some kind of dynamic extensibility, switch will never
> cut it, and you'll have to use a dict of functions or the standard
> getattr(self, '_Prefix_' + name) dynamic-dispatch-without-dict
> approach.

Indeed.  Not all dispatch needs are met by if/elif/else blocks, nor
would they be by switch/case statements, especially not in the case of
subclassing.  The subclassing case can, though, be handled with something
like "if cur_case not in self.handles: ...".
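
A rough sketch of that pattern (all names are hypothetical): the dispatch
table lives on the instance, so a subclass can add or override cases in a
way no statically compiled switch could.

class Base(object):
    def __init__(self):
        self.handles = {1: self.one, 2: self.two}
    def one(self):
        return 'one'
    def two(self):
        return 'two'
    def default(self, cur_case):
        return 'default: %r' % (cur_case,)
    def dispatch(self, cur_case):
        if cur_case not in self.handles:
            return self.default(cur_case)
        return self.handles[cur_case]()

class Derived(Base):
    def __init__(self):
        Base.__init__(self)
        self.handles[3] = self.three   # extend the "switch" in a subclass
    def three(self):
        return 'three'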


> Note that this is just Solution 1 from PEP 275.

Yes it is.  I was describing it for those who hadn't read the PEP (there
were a few earlier in this thread).


> Remember that most things we usually *think* of as constants aren't
> recognized as such by the compiler. For example the constants used by
> sre_compile.py.

Right, which is one of two reasons why I'm not writing the bytecode hack
or even an AST transformation (the other being time).  With the 'names
in case matches are bound at time X' semantic inside switch/case
statements, we gain more than could be offered with if/elif/else
optimization.


> > One benefit to the non-syntax optimization is that it seems like it could
> > be implemented as a bytecode hack, allowing us to punt on the entire
> > discussion, and really decide on whether such a decorator should be in
> > the standard library (assuming someone is willing to write the decorator).
> 
> I expect the scope of the optimization to be much less than you think,
> and I expect that people will waste lots of time trying to figure out
> why the optimization doesn't happen.

It depends.  The simple case would offer very limited optimization, but
with a bit of help from Raymond's global binding decorator it suddenly
becomes almost as useful as the switch/case statement currently being
discussed, offering one of the two options previously discussed for name
binding.

 - Josiah


From gh at ghaering.de  Tue Jun 20 23:22:43 2006
From: gh at ghaering.de (=?ISO-8859-1?Q?Gerhard_H=E4ring?=)
Date: Tue, 20 Jun 2006 23:22:43 +0200
Subject: [Python-Dev] Small sqlite3 test suite fix (Python 2.5b1 candidate)
Message-ID: <44986723.4040808@ghaering.de>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

http://www.python.org/sf/1509584

Please apply if you think it should go in Python 2.5b1, otherwise I'll
commit it after the freeze.

I'd personally postpone it, because it's only cosmetic (but maybe it's
related to the strange sqlite3 regression test failure Neil reported).

Also, somebody please add me as Python developer on Sourceforge (I cannot
assign items to myself there).

- -- Gerhard
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.2.2 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFEmGcjdIO4ozGCH14RAjfuAJ451ElbxqDqi6O+cGV3nVXgp0qLNwCgp6pI
usZh93QtNgRz5Es3WmaX2W8=
=5owZ
-----END PGP SIGNATURE-----

From tim.peters at gmail.com  Tue Jun 20 23:30:10 2006
From: tim.peters at gmail.com (Tim Peters)
Date: Tue, 20 Jun 2006 17:30:10 -0400
Subject: [Python-Dev] Small sqlite3 test suite fix (Python 2.5b1
	candidate)
In-Reply-To: <44986723.4040808@ghaering.de>
References: <44986723.4040808@ghaering.de>
Message-ID: <1f7befae0606201430h43cf7391l3abb72342c13ca64@mail.gmail.com>

[Gerhard Häring]
> ...
> Also, somebody please add me as Python developer on Sourceforge (I cannot
> assign items to myself there).

If you still can't, scream at me ;-)

From martin at v.loewis.de  Tue Jun 20 23:43:56 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Tue, 20 Jun 2006 23:43:56 +0200
Subject: [Python-Dev] Python 2.4 extensions require VC 7.1?
In-Reply-To: <129CEF95A523704B9D46959C922A280002A4D0B6@nemesis.central.ccp.cc>
References: <129CEF95A523704B9D46959C922A280002A4D0B6@nemesis.central.ccp.cc>
Message-ID: <44986C1C.1050503@v.loewis.de>

Kristján V. Jónsson wrote:
>> However, the usual, natural, straight-forward way of processing the
>> mode string (in a loop with a switch statement) can't possibly
>> cause crashes.
> Making assumptions about how someone implements the CRT is not good
> practice.

I'm not arguing about that, and I think Python should be changed.

Notice, however, that this would be a behavioural change: currently,
Python passes through the mode argument for open(), exposing
the platform-specific behaviour. If Python were to reject mode
arguments that are not supported by standard C, programs that
currently work might break.

> I wonder.  Setting process wide handlers like that seems to be odd if
> you are embedding python to do scripting for you.  The main app is
> usually the one that decides on signal handling for various things.
> Seems like a python.exe thing to me.  But never mind.  At the very
> least one should then only set those handlers that are valid for each
> implementation.

There is no portable way to find out what those are.

> So, if I agree with you that signal() is broken (by default, but
> mended by our fix) is there any any other technical reason why VC8
> should not be taken into the fold?  After all, it has already pointed
> out to us that we are dangerously allowing user format strings to
> propagate.

I can't say for sure - there should be some testing first.
One issue I can think of is the packaging: Microsoft wants
us to install msvcr80.dll using their SxS technology, with
manifests and everything. That needs to be considered in the
build process, and dealt with in the MSI production.

I have no experience with side-by-side installation of
"native assemblies" yet, so I would have to learn this first,
or wait for somebody to provide patches. This probably
also impacts exe builders, which have to pick up the
DLL from SxS.

Another technical issue is the absence of support for
msvcr80.dll in MinGW - one currently cannot build
Python extensions that link with the right CRT.

Not purely technical, but somebody would also need to find
out what the licensing conditions on msvcr80.dll are:
what are the conditions for redistribution if I have
a licensed copy of VS 2005? What if I have VS Express?
What if I have neither, and just want to package it
as a side effect of using, say, py2exe?

Regards,
Martin

From gh at ghaering.de  Wed Jun 21 00:06:17 2006
From: gh at ghaering.de (=?ISO-8859-1?Q?Gerhard_H=E4ring?=)
Date: Wed, 21 Jun 2006 00:06:17 +0200
Subject: [Python-Dev] Small sqlite3 test suite fix (Python
	2.5b1	candidate)
In-Reply-To: <1f7befae0606201430h43cf7391l3abb72342c13ca64@mail.gmail.com>
References: <44986723.4040808@ghaering.de>
	<1f7befae0606201430h43cf7391l3abb72342c13ca64@mail.gmail.com>
Message-ID: <44987159.3000309@ghaering.de>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Tim Peters wrote:
> [Gerhard Häring]
>> ...
>> Also, somebody please add me as Python developer on Sourceforge (I cannot
>> assign items to myself there).
> 
> If you still can't, scream at me ;-)

Bwaaaaaaaaaaaaah!!! :-P

I still cannot see myself in the "Assigned to" dropdown ...

- -- Gerhard
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.2.2 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFEmHFZdIO4ozGCH14RArtWAJwPpJ/BmnCR34UgnsJxbEieU/MdeQCdFTu2
nht30gADuguOlWvhnn5Tj7E=
=QZlI
-----END PGP SIGNATURE-----

From tim.peters at gmail.com  Wed Jun 21 00:27:26 2006
From: tim.peters at gmail.com (Tim Peters)
Date: Tue, 20 Jun 2006 18:27:26 -0400
Subject: [Python-Dev] Small sqlite3 test suite fix (Python 2.5b1
	candidate)
In-Reply-To: <44987159.3000309@ghaering.de>
References: <44986723.4040808@ghaering.de>
	<1f7befae0606201430h43cf7391l3abb72342c13ca64@mail.gmail.com>
	<44987159.3000309@ghaering.de>
Message-ID: <1f7befae0606201527l5b19c473t21921d57c367828e@mail.gmail.com>

[Gerhard]
>>> ...
>>> Also, somebody please add me as Python developer on Sourceforge (I cannot
>>> assign items to myself there).

[Tim]
>> If you still can't, scream at me ;-)

[Gerhard]
> Bwaaaaaaaaaaaaah!!! :-P
>
> I still cannot see myself in the "Assigned to" dropdown ...

Screaming apparently helped!  I can see you now ;-)

From greg.ewing at canterbury.ac.nz  Wed Jun 21 01:54:45 2006
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Wed, 21 Jun 2006 11:54:45 +1200
Subject: [Python-Dev] Switch statement
In-Reply-To: <20060618184500.F34E.JCARLSON@uci.edu>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<20060611010410.GA5723@21degrees.com.au>
	<20060618184500.F34E.JCARLSON@uci.edu>
Message-ID: <44988AC5.40101@canterbury.ac.nz>

Josiah Carlson wrote:
> Offering arbitrary expressions whose
> meaning can vary at runtime would kill any potential speedup (the
> ultimate purpose for having a switch statement)

I don't agree that speedup is *the* ultimate purpose
of a switch statement. There's also the matter of
providing a construct that expresses the high-level
intent of the code more clearly than an if-else
chain. I think both of these are equally important.

--
Greg

From greg.ewing at canterbury.ac.nz  Wed Jun 21 02:01:50 2006
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Wed, 21 Jun 2006 12:01:50 +1200
Subject: [Python-Dev] Switch statement
In-Reply-To: <5.1.1.6.0.20060618235252.03a71680@sparrow.telecommunity.com>
References: <20060611010410.GA5723@21degrees.com.au>
	<17547.19802.361151.705599@montanaro.dyndns.org>
	<20060611010410.GA5723@21degrees.com.au>
	<5.1.1.6.0.20060618235252.03a71680@sparrow.telecommunity.com>
Message-ID: <44988C6E.4080806@canterbury.ac.nz>

Phillip J. Eby wrote:

> Actually, one could consider "case" expressions to be computed at function 
> definition time, the way function defaults are.  That would solve the 
> problem of symbolic constants, or indeed any sort of expressions.

That's an excellent idea!

> It's just a question of which one is easier to explain. 

I think the function-definition-time one is easiest to
both explain and also to reason about when writing code,
since definition time is well-defined, whereas "the first
time it's executed" is somewhat fuzzy.
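
The language already has a precedent for that well-defined point: default
argument expressions are evaluated exactly once, when the def statement
executes.  A tiny sketch of that behaviour:

import time

def stamp(t=time.time()):   # evaluated once, at function definition time
    return t

assert stamp() == stamp()   # every call sees the same captured value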

It's also a lot clearer how it interacts with closures,
which is another good point.

I recommend adding this option to the relevant PEP
(whichever it is).

--
Greg

From greg.ewing at canterbury.ac.nz  Wed Jun 21 02:13:16 2006
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Wed, 21 Jun 2006 12:13:16 +1200
Subject: [Python-Dev] Switch statement
In-Reply-To: <5.1.1.6.0.20060619101721.01ea8108@sparrow.telecommunity.com>
References: <5.1.1.6.0.20060619084658.03266a90@sparrow.telecommunity.com>
	<5.1.1.6.0.20060618235252.03a71680@sparrow.telecommunity.com>
	<20060618184500.F34E.JCARLSON@uci.edu>
	<5.1.1.6.0.20060618235252.03a71680@sparrow.telecommunity.com>
	<5.1.1.6.0.20060619084658.03266a90@sparrow.telecommunity.com>
	<5.1.1.6.0.20060619101721.01ea8108@sparrow.telecommunity.com>
Message-ID: <44988F1C.7050502@canterbury.ac.nz>

Phillip J. Eby wrote:

> Sadly, it's not *quite* that simple, due to the fact that line numbers
> in co_lnotab must increase as bytecode offsets increase.

I think it's high time all the cleverness was ripped out
of the lnotab and it just became a plain array of independent
entries.

--
Greg

From greg.ewing at canterbury.ac.nz  Wed Jun 21 02:16:26 2006
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Wed, 21 Jun 2006 12:16:26 +1200
Subject: [Python-Dev] Switch statement
In-Reply-To: <4496DD2F.30501@ewtllc.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<20060611010410.GA5723@21degrees.com.au>
	<20060618184500.F34E.JCARLSON@uci.edu>
	<ca471dc20606190737p2ca1ebf4m5091321b6d9ee2e@mail.gmail.com>
	<4496D06E.7070106@ewtllc.com>
	<ca471dc20606190953m36151e11k64c506d5d5fa4896@mail.gmail.com>
	<4496DD2F.30501@ewtllc.com>
Message-ID: <44988FDA.5000809@canterbury.ac.nz>

Raymond Hettinger wrote:

>  switch x:
>     case 1:  one()
>     case 2:  two()
>     case 3:  three()
>     default:  too_many()
> 
> Do we require that x be hashable so that the compiler can use a lookup 
> table?

That sounds reasonable.

--
Greg

From guido at python.org  Wed Jun 21 02:17:40 2006
From: guido at python.org (Guido van Rossum)
Date: Tue, 20 Jun 2006 17:17:40 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <44988C6E.4080806@canterbury.ac.nz>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<20060611010410.GA5723@21degrees.com.au>
	<5.1.1.6.0.20060618235252.03a71680@sparrow.telecommunity.com>
	<44988C6E.4080806@canterbury.ac.nz>
Message-ID: <ca471dc20606201717j4336b21bpb0904c5c7dea3455@mail.gmail.com>

On 6/20/06, Greg Ewing <greg.ewing at canterbury.ac.nz> wrote:
> Phillip J. Eby wrote:
>
> > Actually, one could consider "case" expressions to be computed at function
> > definition time, the way function defaults are.  That would solve the
> > problem of symbolic constants, or indeed any sort of expressions.
>
> That's an excellent idea!

Seconded. (I somehow missed Phillip's post the first time around -- apologies.)

> > It's just a question of which one is easier to explain.
>
> I think the function-definition-time one is easiest to
> both explain and also to reason about when writing code,
> since definition time is well-defined, whereas "the first
> time it's executed" is somewhat fuzzy.
>
> It's also a lot clearer how it interacts with closures,
> which is another good point.
>
> I recommend adding this option to the relevant PEP
> (whichever it is).

I think we should consider adding to PEP 275 rather than starting
over; it's still mostly relevant.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From greg.ewing at canterbury.ac.nz  Wed Jun 21 02:23:12 2006
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Wed, 21 Jun 2006 12:23:12 +1200
Subject: [Python-Dev] Switch statement
In-Reply-To: <ca471dc20606191247l183eaf0eh6012600500ea311b@mail.gmail.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<20060611010410.GA5723@21degrees.com.au>
	<20060618184500.F34E.JCARLSON@uci.edu>
	<ca471dc20606190737p2ca1ebf4m5091321b6d9ee2e@mail.gmail.com>
	<4496D06E.7070106@ewtllc.com>
	<ca471dc20606190953m36151e11k64c506d5d5fa4896@mail.gmail.com>
	<4496DD2F.30501@ewtllc.com>
	<ca471dc20606191035p40d5e85j7a8bb957a2dfb0ec@mail.gmail.com>
	<4496FB54.5060800@ewtllc.com>
	<ca471dc20606191247l183eaf0eh6012600500ea311b@mail.gmail.com>
Message-ID: <44989170.2000008@canterbury.ac.nz>

Guido van Rossum wrote:

> Well, the hypothetical use case is one where we have an arbitrary
> object of unknown origin or type, and we want to special-case
> treatment for a few known values.

I'd need convincing that this use case is anything more
than hypothetical.

Also, you can always put your own test for hashability
around it if you want.
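
Something along these lines, presumably (the switch syntax is still
hypothetical, so the fallback is written as a plain if/elif):

def classify(x):
    try:
        hash(x)
    except TypeError:
        return 'unhashable - handle it some other way'
    if x == 1:
        return 'one'
    elif x == 2:
        return 'two'
    else:
        return 'no match'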

--
Greg

From greg.ewing at canterbury.ac.nz  Wed Jun 21 02:26:55 2006
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Wed, 21 Jun 2006 12:26:55 +1200
Subject: [Python-Dev] Switch statement
In-Reply-To: <ca471dc20606191329p1a978983g2c3ea7401213d7dd@mail.gmail.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<20060618184500.F34E.JCARLSON@uci.edu>
	<ca471dc20606190737p2ca1ebf4m5091321b6d9ee2e@mail.gmail.com>
	<4496D06E.7070106@ewtllc.com>
	<ca471dc20606190953m36151e11k64c506d5d5fa4896@mail.gmail.com>
	<4496DD2F.30501@ewtllc.com>
	<ca471dc20606191035p40d5e85j7a8bb957a2dfb0ec@mail.gmail.com>
	<4496FB54.5060800@ewtllc.com>
	<ca471dc20606191247l183eaf0eh6012600500ea311b@mail.gmail.com>
	<449707E2.7060803@ewtllc.com>
	<ca471dc20606191329p1a978983g2c3ea7401213d7dd@mail.gmail.com>
Message-ID: <4498924F.5000508@canterbury.ac.nz>

Guido van Rossum wrote:

> But it would be easy enough to define a dict-filling function that
> updates only new values.

Or evaluate the case expressions in reverse order.

> Was it decided yet how to write the cases for a switch that tests for
> tuples of values? Requiring parentheses might be sufficient,
> essentially making what follows a case *always* take on sequence
> syntax.

Sounds good to me.

--
Greg

From pje at telecommunity.com  Wed Jun 21 03:06:07 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Tue, 20 Jun 2006 21:06:07 -0400
Subject: [Python-Dev] Switch statement
In-Reply-To: <4498924F.5000508@canterbury.ac.nz>
References: <ca471dc20606191329p1a978983g2c3ea7401213d7dd@mail.gmail.com>
	<17547.19802.361151.705599@montanaro.dyndns.org>
	<20060618184500.F34E.JCARLSON@uci.edu>
	<ca471dc20606190737p2ca1ebf4m5091321b6d9ee2e@mail.gmail.com>
	<4496D06E.7070106@ewtllc.com>
	<ca471dc20606190953m36151e11k64c506d5d5fa4896@mail.gmail.com>
	<4496DD2F.30501@ewtllc.com>
	<ca471dc20606191035p40d5e85j7a8bb957a2dfb0ec@mail.gmail.com>
	<4496FB54.5060800@ewtllc.com>
	<ca471dc20606191247l183eaf0eh6012600500ea311b@mail.gmail.com>
	<449707E2.7060803@ewtllc.com>
	<ca471dc20606191329p1a978983g2c3ea7401213d7dd@mail.gmail.com>
Message-ID: <5.1.1.6.0.20060620210402.01e9bc78@sparrow.telecommunity.com>

At 12:26 PM 6/21/2006 +1200, Greg Ewing wrote:
>Guido van Rossum wrote:
>
> > But it would be easy enough to define a dict-filling function that
> > updates only new values.
>
>Or evaluate the case expressions in reverse order.

-1; stepping through the code in a debugger is going to be weird enough, 
what with the case statements being executed at function definition time, 
without the reverse order stuff.  I'd rather make it an error to list the 
same value more than once; we can just check if the key is present before 
defining that value.


From guido at python.org  Wed Jun 21 06:56:33 2006
From: guido at python.org (Guido van Rossum)
Date: Tue, 20 Jun 2006 21:56:33 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <44989170.2000008@canterbury.ac.nz>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<20060618184500.F34E.JCARLSON@uci.edu>
	<ca471dc20606190737p2ca1ebf4m5091321b6d9ee2e@mail.gmail.com>
	<4496D06E.7070106@ewtllc.com>
	<ca471dc20606190953m36151e11k64c506d5d5fa4896@mail.gmail.com>
	<4496DD2F.30501@ewtllc.com>
	<ca471dc20606191035p40d5e85j7a8bb957a2dfb0ec@mail.gmail.com>
	<4496FB54.5060800@ewtllc.com>
	<ca471dc20606191247l183eaf0eh6012600500ea311b@mail.gmail.com>
	<44989170.2000008@canterbury.ac.nz>
Message-ID: <ca471dc20606202156k3f668dfdjc7cec68d5d2fb7fe@mail.gmail.com>

On 6/20/06, Greg Ewing <greg.ewing at canterbury.ac.nz> wrote:
> Guido van Rossum wrote:
>
> > Well, the hypothetical use case is one where we have an arbitrary
> > object of unknown origin or type, and we want to special-case
> > treatment for a few known values.
>
> I'd need convincing that this use case is anything more
> than hypothetical.
>
> Also, you can always put your own test for hashability
> around it if you want.

Right. I wasn't defending the use case. That's why I called it "hypothetical".

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Wed Jun 21 07:14:51 2006
From: guido at python.org (Guido van Rossum)
Date: Tue, 20 Jun 2006 22:14:51 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <5.1.1.6.0.20060620210402.01e9bc78@sparrow.telecommunity.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<ca471dc20606190953m36151e11k64c506d5d5fa4896@mail.gmail.com>
	<4496DD2F.30501@ewtllc.com>
	<ca471dc20606191035p40d5e85j7a8bb957a2dfb0ec@mail.gmail.com>
	<4496FB54.5060800@ewtllc.com>
	<ca471dc20606191247l183eaf0eh6012600500ea311b@mail.gmail.com>
	<449707E2.7060803@ewtllc.com>
	<ca471dc20606191329p1a978983g2c3ea7401213d7dd@mail.gmail.com>
	<4498924F.5000508@canterbury.ac.nz>
	<5.1.1.6.0.20060620210402.01e9bc78@sparrow.telecommunity.com>
Message-ID: <ca471dc20606202214j613a5e1aob1241870d49235@mail.gmail.com>

On 6/20/06, Phillip J. Eby <pje at telecommunity.com> wrote:
> At 12:26 PM 6/21/2006 +1200, Greg Ewing wrote:
> >Guido van Rossum wrote:
> >
> > > But it would be easy enough to define a dict-filling function that
> > > updates only new values.
> >
> >Or evaluate the case expressions in reverse order.
>
> -1; stepping through the code in a debugger is going to be weird enough,
> what with the case statements being executed at function definition time,
> without the reverse order stuff.

Agreed.

> I'd rather make it an error to list the
> same value more than once; we can just check if the key is present before
> defining that value.

That makes sense too.

I was thinking of a use case where you'd have a couple of sets of
cases that need the same treatment per set (sre_compile.py has a few
of these) but one of the sets has an exception. With the if/elif style
you could write this as

  if x is exception:
    ...exceptional case...
  elif x in set1:
    ...case for set1...
  elif x in set2:
    ..case for set2...
  etc.

But the prospect of something like this passing without error:

  switch x:
  case 1: ...case 1...
  case 1: ...another case 1?!?!...

makes me think that it's better to simply reject overlapping cases.

BTW I think the several-sets use case above is important and would
like to have syntax for it.
Earlier it was proposed to allow saying

  case 1, 2, 3: ...executed if x==1 or x==2 or x==3...

but now that we're agreed to evaluate the expression at function
definition time, I want to support

  case S: ...executed if x in S...

but that would be ambiguous. So, thinking aloud, here are a few possibilities:

  case A: ... if x == A...
  cases S: ...if x in A...

or perhaps (saving a keyword):

  case A: ... if x == A...
  case in S: ...if x in A...

This would also let us write cases for ranges:

  case in range(10): ...if x in range(10)...

I propose that the expression used for a single-value should not allow
commas, so that one is forced to write

  case (1, 2): ...if x == (1, 2)...

if you really want a case to be a tuple value, but you should be able to write

  case in 1, 2: ...if x in (1, 2)...

since this really doesn't pose the same kind of ambiguity. If you
forget the 'in' it's a syntax error.

Hm, so this still doesn't help if you write

  case S: ...

(where S is an immutable set or sequence) when you meant

  case in S: ...

so I'm not sure if it's worth the subtleties.
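
For concreteness, the if/elif chain the spellings above would roughly
correspond to (A and S are hypothetical constants):

A = 42
S = frozenset([1, 2, 3])

def example(x):
    if x == A:            # case A:       single-value match
        return 'matched A'
    elif x in S:          # case in S:    membership match
        return 'member of S'
    elif x == (1, 2):     # case (1, 2):  a tuple treated as one value
        return 'the tuple (1, 2)'
    else:
        return 'no match'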

Time for bed here,

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Wed Jun 21 07:16:21 2006
From: guido at python.org (Guido van Rossum)
Date: Tue, 20 Jun 2006 22:16:21 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <ca471dc20606202214j613a5e1aob1241870d49235@mail.gmail.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<4496DD2F.30501@ewtllc.com>
	<ca471dc20606191035p40d5e85j7a8bb957a2dfb0ec@mail.gmail.com>
	<4496FB54.5060800@ewtllc.com>
	<ca471dc20606191247l183eaf0eh6012600500ea311b@mail.gmail.com>
	<449707E2.7060803@ewtllc.com>
	<ca471dc20606191329p1a978983g2c3ea7401213d7dd@mail.gmail.com>
	<4498924F.5000508@canterbury.ac.nz>
	<5.1.1.6.0.20060620210402.01e9bc78@sparrow.telecommunity.com>
	<ca471dc20606202214j613a5e1aob1241870d49235@mail.gmail.com>
Message-ID: <ca471dc20606202216y5bf95068s9227c7122ab5b9e6@mail.gmail.com>

On 6/20/06, Guido van Rossum <guido at python.org> wrote:
>   case A: ... if x == A...
>   cases S: ...if x in A...
>
> or perhaps (saving a keyword):
>
>   case A: ... if x == A...
>   case in S: ...if x in A...

I was too quick with cut/paste here; I meant

  case S: ...if x in S...

or

  case in S: ...if x in S...

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From pje at telecommunity.com  Wed Jun 21 07:31:35 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Wed, 21 Jun 2006 01:31:35 -0400
Subject: [Python-Dev] Switch statement
In-Reply-To: <ca471dc20606202214j613a5e1aob1241870d49235@mail.gmail.com>
References: <5.1.1.6.0.20060620210402.01e9bc78@sparrow.telecommunity.com>
	<17547.19802.361151.705599@montanaro.dyndns.org>
	<ca471dc20606190953m36151e11k64c506d5d5fa4896@mail.gmail.com>
	<4496DD2F.30501@ewtllc.com>
	<ca471dc20606191035p40d5e85j7a8bb957a2dfb0ec@mail.gmail.com>
	<4496FB54.5060800@ewtllc.com>
	<ca471dc20606191247l183eaf0eh6012600500ea311b@mail.gmail.com>
	<449707E2.7060803@ewtllc.com>
	<ca471dc20606191329p1a978983g2c3ea7401213d7dd@mail.gmail.com>
	<4498924F.5000508@canterbury.ac.nz>
	<5.1.1.6.0.20060620210402.01e9bc78@sparrow.telecommunity.com>
Message-ID: <5.1.1.6.0.20060621012733.01e8f660@sparrow.telecommunity.com>

At 10:14 PM 6/20/2006 -0700, Guido van Rossum wrote:
>Hm, so this still doesn't help if you write
>
>   case S: ...
>
>(where S is an immutable set or sequence) when you meant
>
>   case in S: ...
>
>so I'm not sure if it's worth the subtleties.

Well, EIBTI and all that:

     switch x:
         case == 1: foo(x)
         case in S: bar(x)

It even lines up nicely.  :)


From theller at python.net  Wed Jun 21 09:42:56 2006
From: theller at python.net (Thomas Heller)
Date: Wed, 21 Jun 2006 09:42:56 +0200
Subject: [Python-Dev] test_ctypes failure on Mac OS X/PowerPC 10.3.9
	(Panther)
In-Reply-To: <3B96F3FF-B847-4E57-ACC2-F7D979DCA5BA@mac.com>
References: <44982ADE.5070404@activestate.com> <4498393E.1020101@python.net>
	<71C6D0BF-B569-48BD-9F9D-05D3660FF2AA@mac.com>
	<3B96F3FF-B847-4E57-ACC2-F7D979DCA5BA@mac.com>
Message-ID: <4498F880.4010401@python.net>

Ronald Oussoren schrieb:
> 
> On 20-jun-2006, at 20:50, Ronald Oussoren wrote:
> 
>>
>> On 20-jun-2006, at 20:06, Thomas Heller wrote:
>>
>>> Trent Mick schrieb:
>>>> Thomas and others,
>>>>
>>>> Has anyone else seen failures in test_ctypes on older Mac OS X/
>>>> PowerPC?
>>>> Results are below. This is running a build of the trunk from last
>>>> night:
>>>>
>>>>     ./configure && make && ./python.exe Lib/test/test_ctypes.py
>>>>
>>>> Note that the test does NOT fail on the Mac OS X/x86 10.4.6 box
>>>> that I have.
>>>
>>> It also works on 10.4.?? Power PC.  I guess the fix has to wait until
>>> I'm able to install 10.3 on my mac, I have the DVDs already but
>>> have not
>>> yet had the time.  If anyone is willing to give me ssh access to a
>>> 10.3
>>> box I can try to fix this earlier.
>>
>> I had some problems with my 10.3-capable box, but happily enough it
>> decided to come alive again. I'm currently booted into 10.3.9 and
>> will have a look.
> 
> It is a platform bug, RTLD_LOCAL doesn't work on 10.3. The following C 
> snippet fails with the same error as ctypes: FAIL: dlcompat: unable to 
> open this file with RTLD_LOCAL. This seems to be confirmed by this 
> source test file from Darwin: 
> http://darwinsource.opendarwin.org/10.4.1/dyld-43/unit-tests/test-cases/dlopen-RTLD_LOCAL/main.c. 
> 

What does this mean?  Would it work with RTLD_GLOBAL, is there any other 
way to repair it, or does loading dylibs not work at all on Panther?

Thomas

From ronaldoussoren at mac.com  Wed Jun 21 10:01:21 2006
From: ronaldoussoren at mac.com (Ronald Oussoren)
Date: Wed, 21 Jun 2006 10:01:21 +0200
Subject: [Python-Dev] test_ctypes failure on Mac OS X/PowerPC
 10.3.9(Panther)
In-Reply-To: <4498F880.4010401@python.net>
References: <44982ADE.5070404@activestate.com> <4498393E.1020101@python.net>
	<71C6D0BF-B569-48BD-9F9D-05D3660FF2AA@mac.com>
	<3B96F3FF-B847-4E57-ACC2-F7D979DCA5BA@mac.com>
	<4498F880.4010401@python.net>
Message-ID: <6339755.1150876881434.JavaMail.ronaldoussoren@mac.com>

 
On Wednesday, June 21, 2006, at 09:43AM, Thomas Heller <theller at python.net> wrote:

>Ronald Oussoren schrieb:
>>> will have a look.
>> 
>> It is a platform bug, RTLD_LOCAL doesn't work on 10.3. The following C 
>> snippet fails with the same error as ctypes: FAIL: dlcompat: unable to 
>> open this file with RTLD_LOCAL. This seems to be confirmed by this 
>> source test file from Darwin: 
>> http://darwinsource.opendarwin.org/10.4.1/dyld-43/unit-tests/test-cases/dlopen-RTLD_LOCAL/main.c. 
>> 
>
>What does this mean?  Would it work with RTLD_GLOBAL, is there any other 
>way to repair it, or does loading dylibs not work at all on Panther?

Using RTLD_GLOBAL does work. This should also be fairly safe, as RTLD_GLOBAL seems to behave the same as RTLD_LOCAL when using two-level namespaces (which is the default on OSX and is what Python uses).
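
At the ctypes level that amounts to something like the following sketch
(the libz path is just the library used in the C test above):

import ctypes

libz = ctypes.CDLL("/usr/lib/libz.dylib", mode=ctypes.RTLD_GLOBAL)
libz.zlibVersion.restype = ctypes.c_char_p
print(libz.zlibVersion())   # e.g. '1.2.3' - shows the library really loaded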

Ronald

From python-dev at zesty.ca  Wed Jun 21 10:38:57 2006
From: python-dev at zesty.ca (Ka-Ping Yee)
Date: Wed, 21 Jun 2006 03:38:57 -0500 (CDT)
Subject: [Python-Dev] Switch statement
In-Reply-To: <5.1.1.6.0.20060621012733.01e8f660@sparrow.telecommunity.com>
References: <5.1.1.6.0.20060620210402.01e9bc78@sparrow.telecommunity.com>
	<17547.19802.361151.705599@montanaro.dyndns.org>
	<ca471dc20606190953m36151e11k64c506d5d5fa4896@mail.gmail.com>
	<4496DD2F.30501@ewtllc.com>
	<ca471dc20606191035p40d5e85j7a8bb957a2dfb0ec@mail.gmail.com>
	<4496FB54.5060800@ewtllc.com>
	<ca471dc20606191247l183eaf0eh6012600500ea311b@mail.gmail.com>
	<449707E2.7060803@ewtllc.com>
	<ca471dc20606191329p1a978983g2c3ea7401213d7dd@mail.gmail.com>
	<4498924F.5000508@canterbury.ac.nz>
	<5.1.1.6.0.20060620210402.01e9bc78@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621012733.01e8f660@sparrow.telecommunity.com>
Message-ID: <Pine.LNX.4.58.0606210334470.17937@server1.LFW.org>

On Wed, 21 Jun 2006, Phillip J. Eby wrote:
> Well, EIBTI and all that:
>
>      switch x:
>          case == 1: foo(x)
>          case in S: bar(x)
>
> It even lines up nicely.  :)

Hmm, this is rather nice.  I can imagine possible use cases for

    switch x:
        case > 3: foo(x)
        case is y: spam(x)
        case == z: eggs(x)

An interesting use case for which this offers no corresponding
syntax is

        case instanceof ClassA: ham(x)

which doesn't work because Python spells a type test as
isinstance(a, b) rather than with an operator.  (I suppose
whether we want it to be an operator might be another
question to think about for Python 3000.)
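
Spelled with today's tools, that case would presumably stay an ordinary
isinstance() test outside the switch (ClassA and ham are hypothetical):

class ClassA(object):
    pass

def ham(x):
    return 'ham'

def handle(x):
    if isinstance(x, ClassA):
        return ham(x)
    return 'something else'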


-- ?!ng

From g.brandl at gmx.net  Wed Jun 21 11:23:19 2006
From: g.brandl at gmx.net (Georg Brandl)
Date: Wed, 21 Jun 2006 11:23:19 +0200
Subject: [Python-Dev] Switch statement
In-Reply-To: <Pine.LNX.4.58.0606210334470.17937@server1.LFW.org>
References: <5.1.1.6.0.20060620210402.01e9bc78@sparrow.telecommunity.com>	<17547.19802.361151.705599@montanaro.dyndns.org>	<ca471dc20606190953m36151e11k64c506d5d5fa4896@mail.gmail.com>	<4496DD2F.30501@ewtllc.com>	<ca471dc20606191035p40d5e85j7a8bb957a2dfb0ec@mail.gmail.com>	<4496FB54.5060800@ewtllc.com>	<ca471dc20606191247l183eaf0eh6012600500ea311b@mail.gmail.com>	<449707E2.7060803@ewtllc.com>	<ca471dc20606191329p1a978983g2c3ea7401213d7dd@mail.gmail.com>	<4498924F.5000508@canterbury.ac.nz>	<5.1.1.6.0.20060620210402.01e9bc78@sparrow.telecommunity.com>	<5.1.1.6.0.20060621012733.01e8f660@sparrow.telecommunity.com>
	<Pine.LNX.4.58.0606210334470.17937@server1.LFW.org>
Message-ID: <e7b367$ikg$1@sea.gmane.org>

Ka-Ping Yee wrote:
> On Wed, 21 Jun 2006, Phillip J. Eby wrote:
>> Well, EIBTI and all that:
>>
>>      switch x:
>>          case == 1: foo(x)
>>          case in S: bar(x)
>>
>> It even lines up nicely.  :)
> 
> Hmm, this is rather nice.  I can imagine possible use cases for
> 
>     switch x:
>         case > 3: foo(x)
>         case is y: spam(x)

Ha, a slight reminiscence of BASIC...

>         case == z: eggs(x)
> 
> An interesting use case for which this offers no corresponding
> syntax is
> 
>         case instanceof ClassA: ham(x)
> 
> which doesn't work because Python spells a type test as
> isinstance(a, b) rather than with an operator.  (I suppose
> whether we want it to be an operator might be another
> question to think about for Python 3000.)

FWIW, I like "is a" most, but there's no way to spell this
as one word without confusing readers.

Georg


From ncoghlan at gmail.com  Wed Jun 21 12:34:12 2006
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 21 Jun 2006 20:34:12 +1000
Subject: [Python-Dev] Switch statement
In-Reply-To: <44988C6E.4080806@canterbury.ac.nz>
References: <20060611010410.GA5723@21degrees.com.au>	<17547.19802.361151.705599@montanaro.dyndns.org>	<20060611010410.GA5723@21degrees.com.au>	<5.1.1.6.0.20060618235252.03a71680@sparrow.telecommunity.com>
	<44988C6E.4080806@canterbury.ac.nz>
Message-ID: <449920A4.7040008@gmail.com>

Greg Ewing wrote:
> Phillip J. Eby wrote:
> 
>> Actually, one could consider "case" expressions to be computed at function 
>> definition time, the way function defaults are.  That would solve the 
>> problem of symbolic constants, or indeed any sort of expressions.
> 
> That's an excellent idea!
> 
>> It's just a question of which one is easier to explain. 
> 
> I think the function-definition-time one is easiest to
> both explain and also to reason about when writing code,
> since definition time is well-defined, whereas "the first
> time it's executed" is somewhat fuzzy.

There's some benefit to "first time it's executed" though:
   a. it allows access to the local namespace
   b. it uses the same semantics at module level as it does in a function

If we go with 'at function definition time', then neither of those is true. 
I'm actually curious how a module-level switch statement would work at all in 
that case, without either falling back on the "first time it's executed" 
definition, or else not permitting switch statements in module-level code.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------
             http://www.boredomandlaziness.org

From ncoghlan at gmail.com  Wed Jun 21 13:22:14 2006
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 21 Jun 2006 21:22:14 +1000
Subject: [Python-Dev] Numerical robustness, IEEE etc.
In-Reply-To: <e04bdf310606201125q3bfbd7a3kc9ec8c96099c1285@mail.gmail.com>
References: <E1FsFXs-0005ZD-LA@virgo.cus.cam.ac.uk>	
	<44980B48.2020303@gmail.com>
	<e04bdf310606201125q3bfbd7a3kc9ec8c96099c1285@mail.gmail.com>
Message-ID: <44992BE6.1050302@gmail.com>

Facundo Batista wrote:
> 2006/6/20, Nick Coghlan <ncoghlan at gmail.com>:
>> The intent was always to replace the internal use of tuples and longs 
>> with a
>> more efficient C implementation - that particular step simply wasn't 
>> needed
>> for the original use case that lead Facundo to write and implement PEP 
>> 327.
> 
> Right. We never addressed speed. I mean, we made Decimal as fast as we
> could in the limited time we had (Raymond H. helped a lot also here),
> but it was NOT designed for speed.

As I recall, the design flow was pretty much 'make it work to spec' then 'make 
it run the telco benchmark and the tests faster while still keeping the 
implementation reasonably simple'. Helping Raymond with that tuning process 
was actually my first real contribution to CPython, so I got a lot of reading 
done while waiting for the benchmark and the decimal arithmetic tests to run 
with the Python profiler enabled ;)

Even then, I believe only two particularly significant changes were made to 
the implementation - adding the boolean flag so special values could be 
detected easily, and copping the conversion costs to & from longs for 
coefficient arithmetic, because we made the time back in the long run by 
getting to use the C-coded long arithmetic operations.

> BTW, prove to me that Decimal is not fast enough ;)

C:\Python24>python -m timeit -s "x = 1.0" "x+x"
10000000 loops, best of 3: 0.137 usec per loop

C:\Python24>python -m timeit -s "from decimal import Decimal as d; x = d(1)" "x+x"
10000 loops, best of 3: 48.3 usec per loop

I don't really know my definition of 'fast enough to be the basic floating 
point type', but I'm pretty sure that a couple of orders of magnitude slower 
isn't it. I guess I'll find out what my definition is if the C implementation 
manages to get there ;)

(Hmm - a couple of spot checks make it look like the decimal module's slowed 
down by a few percent in Python 2.5. It's probably worth trying out the new 
profiler on the module to see if there are any simple fixes to be made before 
beta 2. . .)

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------
             http://www.boredomandlaziness.org

From pje at telecommunity.com  Wed Jun 21 14:29:54 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Wed, 21 Jun 2006 08:29:54 -0400
Subject: [Python-Dev] Switch statement
In-Reply-To: <Pine.LNX.4.58.0606210334470.17937@server1.LFW.org>
References: <5.1.1.6.0.20060621012733.01e8f660@sparrow.telecommunity.com>
	<5.1.1.6.0.20060620210402.01e9bc78@sparrow.telecommunity.com>
	<17547.19802.361151.705599@montanaro.dyndns.org>
	<ca471dc20606190953m36151e11k64c506d5d5fa4896@mail.gmail.com>
	<4496DD2F.30501@ewtllc.com>
	<ca471dc20606191035p40d5e85j7a8bb957a2dfb0ec@mail.gmail.com>
	<4496FB54.5060800@ewtllc.com>
	<ca471dc20606191247l183eaf0eh6012600500ea311b@mail.gmail.com>
	<449707E2.7060803@ewtllc.com>
	<ca471dc20606191329p1a978983g2c3ea7401213d7dd@mail.gmail.com>
	<4498924F.5000508@canterbury.ac.nz>
	<5.1.1.6.0.20060620210402.01e9bc78@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621012733.01e8f660@sparrow.telecommunity.com>
Message-ID: <5.1.1.6.0.20060621082808.01e90d18@sparrow.telecommunity.com>

At 03:38 AM 6/21/2006 -0500, Ka-Ping Yee wrote:
>On Wed, 21 Jun 2006, Phillip J. Eby wrote:
> > Well, EIBTI and all that:
> >
> >      switch x:
> >          case == 1: foo(x)
> >          case in S: bar(x)
> >
> > It even lines up nicely.  :)
>
>Hmm, this is rather nice.  I can imagine possible use cases for
>
>     switch x:
>         case > 3: foo(x)
>         case is y: spam(x)
>         case == z: eggs(x)
>
>An interesting use case for which this offers no corresponding
>syntax is
>
>         case instanceof ClassA: ham(x)

Actually, I was assuming that any other operator besides == and 'in' would 
be relegated to an if-elif chain in the default case, although it's almost 
possible to do that automatically, I suppose.


From aahz at pythoncraft.com  Wed Jun 21 15:07:35 2006
From: aahz at pythoncraft.com (Aahz)
Date: Wed, 21 Jun 2006 06:07:35 -0700
Subject: [Python-Dev] Numerical robustness, IEEE etc.
In-Reply-To: <44992BE6.1050302@gmail.com>
References: <E1FsFXs-0005ZD-LA@virgo.cus.cam.ac.uk>
	<44980B48.2020303@gmail.com>
	<e04bdf310606201125q3bfbd7a3kc9ec8c96099c1285@mail.gmail.com>
	<44992BE6.1050302@gmail.com>
Message-ID: <20060621130735.GA24851@panix.com>

On Wed, Jun 21, 2006, Nick Coghlan wrote:
> Facundo Batista wrote:
>> 
>> BTW, prove to me that Decimal is not fast enough ;)
> 
> C:\Python24>python -m timeit -s "x = 1.0" "x+x"
> 10000000 loops, best of 3: 0.137 usec per loop
> 
> C:\Python24>python -m timeit -s "from decimal import Decimal as d; x = d(1)" "x+x"
> 10000 loops, best of 3: 48.3 usec per loop
> 
> I don't really know my definition of 'fast enough to be the basic
> floating point type', but I'm pretty sure that a couple of orders of
> magnitude slower isn't it. I guess I'll find out what my definition is
> if the C implementation manages to get there ;)

Why isn't that fast enough?  Relative speed is *not* the issue when
talking about real-world applications.  More to the point, the
expectation is that the C implementation of Decimal will have faster
conversion to/from string, which in many real world applications forms a
significant part of the processing load.
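
A quick way to measure that conversion cost, in the same spirit as Nick's
numbers above (illustrative only; the exact figures will vary by machine):

import timeit

to_dec = timeit.Timer("Decimal('1.2345')",
                      "from decimal import Decimal")
from_dec = timeit.Timer("str(d)",
                        "from decimal import Decimal; d = Decimal('1.2345')")
print(to_dec.timeit(number=100000))     # string -> Decimal
print(from_dec.timeit(number=100000))   # Decimal -> string
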
-- 
Aahz (aahz at pythoncraft.com)           <*>         http://www.pythoncraft.com/

"I saw `cout' being shifted "Hello world" times to the left and stopped
right there."  --Steve Gonedes

From fredrik at pythonware.com  Wed Jun 21 16:51:53 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Wed, 21 Jun 2006 16:51:53 +0200
Subject: [Python-Dev] Numerical robustness, IEEE etc.
In-Reply-To: <44992BE6.1050302@gmail.com>
References: <E1FsFXs-0005ZD-LA@virgo.cus.cam.ac.uk>		<44980B48.2020303@gmail.com>	<e04bdf310606201125q3bfbd7a3kc9ec8c96099c1285@mail.gmail.com>
	<44992BE6.1050302@gmail.com>
Message-ID: <e7bme7$nd0$1@sea.gmane.org>

Nick Coghlan wrote:

>> BTW, prove to me that Decimal is not fast enough ;)
> 
> C:\Python24>python -m timeit -s "x = 1.0" "x+x"
> 10000000 loops, best of 3: 0.137 usec per loop
> 
> C:\Python24>python -m timeit -s "from decimal import Decimal as d; x = d(1)" "x+x"
> 10000 loops, best of 3: 48.3 usec per loop
> 
> I don't really know my definition of 'fast enough to be the basic floating 
> point type', but I'm pretty sure that a couple of orders of magnitude slower 
> isn't it.

how fast does the corresponding C program run ?

</F>


From p.f.moore at gmail.com  Wed Jun 21 17:11:17 2006
From: p.f.moore at gmail.com (Paul Moore)
Date: Wed, 21 Jun 2006 16:11:17 +0100
Subject: [Python-Dev] Numerical robustness, IEEE etc.
In-Reply-To: <e7bme7$nd0$1@sea.gmane.org>
References: <E1FsFXs-0005ZD-LA@virgo.cus.cam.ac.uk>
	<44980B48.2020303@gmail.com>
	<e04bdf310606201125q3bfbd7a3kc9ec8c96099c1285@mail.gmail.com>
	<44992BE6.1050302@gmail.com> <e7bme7$nd0$1@sea.gmane.org>
Message-ID: <79990c6b0606210811j21e592c4u1951e64d42111ae3@mail.gmail.com>

On 6/21/06, Fredrik Lundh <fredrik at pythonware.com> wrote:
> Nick Coghlan wrote:
>
> >> BTW, prove to me that Decimal is not fast enough ;)
> >
> > C:\Python24>python -m timeit -s "x = 1.0" "x+x"
> > 10000000 loops, best of 3: 0.137 usec per loop
> >
> > C:\Python24>python -m timeit -s "from decimal import Decimal as d; x = d(1)" "x+x"
> > 10000 loops, best of 3: 48.3 usec per loop
> >
> > I don't really know my definition of 'fast enough to be the basic floating
> > point type', but I'm pretty sure that a couple of orders of magnitude slower
> > isn't it.
>
> how fast does the corresponding C program run ?

A horribly crude test using Regina Rexx (which implements the Decimal
standard in C, but I know nothing else about how it does so):

x = 1.0
do 10000000
  y = x + x
end

This takes about 5 sec on my PC, which (if I've calculated correctly)
comes out at 0.5 usec per loop.

I suppose it gives a ballpark figure for the sorts of speeds we can
expect from a C implementation.

Paul.

From anthony at python.org  Wed Jun 21 17:45:18 2006
From: anthony at python.org (Anthony Baxter)
Date: Thu, 22 Jun 2006 01:45:18 +1000
Subject: [Python-Dev] RELEASED Python 2.5 (beta 1)
Message-ID: <200606220145.37444.anthony@python.org>

On behalf of the Python development team and the Python community, I'm 
happy to announce the first BETA release of Python 2.5.

This is a *beta* release of Python 2.5. As such, it is not suitable 
for a production environment. It is being released to solicit 
feedback and hopefully discover bugs, as well as allowing you to 
determine how changes in 2.5 might impact you. If you find things 
broken or incorrect, please log a bug on Sourceforge. 

I'd like to really encourage you to try out this version and check 
that your code still works - if not, and you think it's a bug, please 
log a bug. Hopefully this will make it easier for you to upgrade once 
the final release of Python 2.5 is done.

Please note that changes to improve Python's support for 64 bit 
systems might require authors of C extensions to change their code. 
See the website for more, including a link to a posting discussing 
this issue in particular.

More information on the release (as well as source distributions and 
Windows and Mac OSX installers) is available from the 2.5 website:

    http://www.python.org/2.5/

Since the alpha releases, a slew of bug fixes and smaller new
features have been added. See the release notes (available from the
2.5 webpage) for more. The first beta also includes the results of the 
Iceland NeedForSpeed sprint, resulting in some significant speedups.

As of this release, Python 2.5 is now in *feature freeze*. No new
features are planned - only bugfixes for the code already in the 
codebase.

The plan from here is for one more beta release followed by one or 
more release candidates as needed, leading to a 2.5 final release 
early August.  PEP 356 includes the schedule and will be updated as 
the schedule evolves.

The new features in Python 2.5 are described in Andrew Kuchling's 
What's New In Python 2.5. It's available from the 2.5 web page.

Amongst the language features added are conditional expressions, 
the with statement, the merge of try/except and try/finally into 
try/except/finally, enhancements to generators to produce a coroutine 
kind of functionality, and a brand new AST-based compiler 
implementation.

New modules added include hashlib, ElementTree, sqlite3, wsgiref and
ctypes. We also have a new profiling module "cProfile".

Enjoy this new release (another step on the path to Python 2.5 final)
Anthony

-- 
Anthony Baxter
anthony at python.org
Python Release Manager
(on behalf of the entire python-dev team)
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 191 bytes
Desc: not available
Url : http://mail.python.org/pipermail/python-dev/attachments/20060622/9185126b/attachment.pgp 

From guido at python.org  Wed Jun 21 18:16:42 2006
From: guido at python.org (Guido van Rossum)
Date: Wed, 21 Jun 2006 09:16:42 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <449920A4.7040008@gmail.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<20060611010410.GA5723@21degrees.com.au>
	<5.1.1.6.0.20060618235252.03a71680@sparrow.telecommunity.com>
	<44988C6E.4080806@canterbury.ac.nz> <449920A4.7040008@gmail.com>
Message-ID: <ca471dc20606210916y6033ba0dx3c3a25dcfb291258@mail.gmail.com>

On 6/21/06, Nick Coghlan <ncoghlan at gmail.com> wrote:
> There's some benefit to "first time it's executed" though:
>    a. it allows access to the local namespace

And how would that be a good thing? It just begs for confusion if the
local variable doesn't always have the same value. (Yes, globals may
vary too, but less likely, since global *variables* (i.e. that
actually vary) are generally considered a bad idea. There's no such
taboo for local variables. :-)

>    b. it uses the same semantics at module level as it does in a function

Hm, I hadn't thought of that one yet.

> If we go with 'at function definition time', then neither of those is true.
> I'm actually curious how a module level switch statement would work at all in
> that case, without either falling back on the "first time it's executed"
> definition, or else not permitting switch statements in module level code.

After thinking about it a bit I think that if it's not immediately
contained in a function, it should be implemented as alternative
syntax for an if/elif chain.
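
A rough sketch of that reading (the switch syntax itself is hypothetical
and shown only in comments; the expansion is ordinary Python):

    color = "red"

    # switch color:
    #     case "red":  print "stop"
    #     case "blue": print "cold"
    #
    # ...at module level would simply behave as:

    if color == "red":
        print "stop"
    elif color == "blue":
        print "cold"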

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Wed Jun 21 18:26:38 2006
From: guido at python.org (Guido van Rossum)
Date: Wed, 21 Jun 2006 09:26:38 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <5.1.1.6.0.20060621082808.01e90d18@sparrow.telecommunity.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<4496FB54.5060800@ewtllc.com>
	<ca471dc20606191247l183eaf0eh6012600500ea311b@mail.gmail.com>
	<449707E2.7060803@ewtllc.com>
	<ca471dc20606191329p1a978983g2c3ea7401213d7dd@mail.gmail.com>
	<4498924F.5000508@canterbury.ac.nz>
	<5.1.1.6.0.20060620210402.01e9bc78@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621012733.01e8f660@sparrow.telecommunity.com>
	<Pine.LNX.4.58.0606210334470.17937@server1.LFW.org>
	<5.1.1.6.0.20060621082808.01e90d18@sparrow.telecommunity.com>
Message-ID: <ca471dc20606210926i2681b27erff515fe4b96022cf@mail.gmail.com>

On 6/21/06, Phillip J. Eby <pje at telecommunity.com> wrote:
> At 03:38 AM 6/21/2006 -0500, Ka-Ping Yee wrote:
> >On Wed, 21 Jun 2006, Phillip J. Eby wrote:
> > > Well, EIBTI and all that:
> > >
> > >      switch x:
> > >          case == 1: foo(x)
> > >          case in S: bar(x)
> > >
> > > It even lines up nicely.  :)
> >
> >Hmm, this is rather nice.  I can imagine possible use cases for
> >
> >     switch x:
> >         case > 3: foo(x)
> >         case is y: spam(x)
> >         case == z: eggs(x)
> >
> >An interesting use case for which this offers no corresponding
> >syntax is
> >
> >         case instanceof ClassA: ham(x)
>
> Actually, I was assuming that any other operator besides == and 'in' would
> be relegated to an if-elif chain in the default case, although it's almost
> possible to do that automatically, I suppose.

I've been thinking about generalization to other operators too, but
decided that it would be a mistake. It would be quite clumsy to
explain the exact semantics: if all operators are "==" or "in", an
efficient hash table gets pre-constructed at function definition time;
otherwise, um..., what exactly?

(Note how I've switched to the switch-for-efficiency camp, since it
seems better to have clear semantics and a clear reason for the syntax
to be different from if/elif chains.)
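
A minimal sketch of what "pre-constructing an efficient hash table"
could amount to (assumed mechanics only, not an actual implementation;
all names are made up):

    FOO, BAR, BAZ = 1, 2, 3

    def _build_table():
        table = {}
        table[FOO] = "handle foo"        # case == FOO
        for key in (BAR, BAZ):           # case in (BAR, BAZ)
            table[key] = "handle bar/baz"
        return table

    _TABLE = _build_table()              # built once, up front

    def dispatch(x):
        return _TABLE.get(x, "default")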

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From pje at telecommunity.com  Wed Jun 21 18:33:55 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Wed, 21 Jun 2006 12:33:55 -0400
Subject: [Python-Dev] Switch statement
In-Reply-To: <ca471dc20606210916y6033ba0dx3c3a25dcfb291258@mail.gmail.com>
References: <449920A4.7040008@gmail.com>
	<17547.19802.361151.705599@montanaro.dyndns.org>
	<20060611010410.GA5723@21degrees.com.au>
	<5.1.1.6.0.20060618235252.03a71680@sparrow.telecommunity.com>
	<44988C6E.4080806@canterbury.ac.nz> <449920A4.7040008@gmail.com>
Message-ID: <5.1.1.6.0.20060621121821.03275ca8@sparrow.telecommunity.com>

At 09:16 AM 6/21/2006 -0700, Guido van Rossum wrote:
>After thinking about it a bit I think that if it's not immediately
>contained in a function, it should be implemented as alternative
>syntax for an if/elif chain.

That worries me a little.  Suppose I write a one-off script like this:

for line in sys.stdin:
     words = line.split()
     if words:
         switch words[0]:
             case "foo": blah
             case words[-1]: print "mirror image!"

Then, if I later move the switch into a function, it's not going to mean 
the same thing any more.  If the values are frozen at first use or 
definition time (which are the same thing for module-level code), then I'll 
find the lurking bug sooner.

OTOH, breaking it sooner doesn't seem like such a great idea either; seems 
like a recipe for a newbie-FAQ, actually.  ISTM that the only sane way to 
deal with this would be to ban the switch statement at module level, which 
then seems to be an argument for not including the switch statement at all.  :(

I suppose the other possibility would be to require at compilation time 
that a case expression include only non-local variables.  That would mean 
that you couldn't use *any* variables in a case expression at module-level 
switch, but wording the error message for that to not be misleading might 
be tricky.

I suppose an error message for the above could simply point to the fact 
that 'words' is being rebound in the current scope, and thus can't be 
considered a constant.  This is only an error at the top-level if the 
switch appears in a loop, and the variable is rebound somewhere within that 
loop or is rebound more than once in the module as a whole (including 
'global' assignments in functions).


From anthony at interlink.com.au  Wed Jun 21 18:38:48 2006
From: anthony at interlink.com.au (Anthony Baxter)
Date: Thu, 22 Jun 2006 02:38:48 +1000
Subject: [Python-Dev] TRUNK is UNFROZEN, but in FEATURE FREEZE
Message-ID: <200606220238.52248.anthony@interlink.com.au>

2.5b1 is out, so I'm declaring the SVN trunk unfrozen. Note, though, 
that as we're now post-beta, we're in FEATURE FREEZE. 

Really. This means you. :-)

No new features should be checked in without prior approval - checkins 
that violate this will quite probably get backed out.

I expect that we will also now be quite a bit more anal about any 
checkins that break the buildbots. Please, please make sure you run 
the test suite before checking in - and if you're at all concerned 
that your checkin might have strange platform dependencies, check the 
buildbot status page (http://www.python.org/dev/buildbot/trunk/) 
after your checkin to make sure it didn't break anything. Similarly, 
if you're fixing a bug, if at all possible write a test and check 
that in as well. 

The buildbots and a focus on testing should mean that 2.5 ends up 
being one of the most solid Python releases so far. Please help us 
achieve this goal.

The feature freeze on the trunk will continue until we branch for 
release candidate 1 of 2.5 - sometime in the second half of July, 
probably. If you really have the need to do new work on the trunk 
before then, please work on a branch. 

Thanks,
Anthony
-- 
Anthony Baxter     <anthony at interlink.com.au>
It's never too late to have a happy childhood.

From fredrik at pythonware.com  Wed Jun 21 18:41:54 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Wed, 21 Jun 2006 18:41:54 +0200
Subject: [Python-Dev] Switch statement
In-Reply-To: <ca471dc20606210926i2681b27erff515fe4b96022cf@mail.gmail.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>	<4496FB54.5060800@ewtllc.com>	<ca471dc20606191247l183eaf0eh6012600500ea311b@mail.gmail.com>	<449707E2.7060803@ewtllc.com>	<ca471dc20606191329p1a978983g2c3ea7401213d7dd@mail.gmail.com>	<4498924F.5000508@canterbury.ac.nz>	<5.1.1.6.0.20060620210402.01e9bc78@sparrow.telecommunity.com>	<5.1.1.6.0.20060621012733.01e8f660@sparrow.telecommunity.com>	<Pine.LNX.4.58.0606210334470.17937@server1.LFW.org>	<5.1.1.6.0.20060621082808.01e90d18@sparrow.telecommunity.com>
	<ca471dc20606210926i2681b27erff515fe4b96022cf@mail.gmail.com>
Message-ID: <e7bssg$hke$1@sea.gmane.org>

Guido van Rossum wrote:

> (Note how I've switched to the switch-for-efficiency camp, since it
> seems better to have clear semantics and a clear reason for the syntax
> to be different from if/elif chains.)

if you're now in the efficiency camp, why not just solve this on the 
code generator level ?  given

     var = some expression
     if var == constant:
         ...
     elif var == constant:
         ...

let the compiler use a dispatch table, if it can and wants to.

</F>


From guido at python.org  Wed Jun 21 18:47:21 2006
From: guido at python.org (Guido van Rossum)
Date: Wed, 21 Jun 2006 09:47:21 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <e7bssg$hke$1@sea.gmane.org>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<449707E2.7060803@ewtllc.com>
	<ca471dc20606191329p1a978983g2c3ea7401213d7dd@mail.gmail.com>
	<4498924F.5000508@canterbury.ac.nz>
	<5.1.1.6.0.20060620210402.01e9bc78@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621012733.01e8f660@sparrow.telecommunity.com>
	<Pine.LNX.4.58.0606210334470.17937@server1.LFW.org>
	<5.1.1.6.0.20060621082808.01e90d18@sparrow.telecommunity.com>
	<ca471dc20606210926i2681b27erff515fe4b96022cf@mail.gmail.com>
	<e7bssg$hke$1@sea.gmane.org>
Message-ID: <ca471dc20606210947s1490bb66s6f040c78239dc623@mail.gmail.com>

On 6/21/06, Fredrik Lundh <fredrik at pythonware.com> wrote:
> Guido van Rossum wrote:
>
> > (Note how I've switched to the switch-for-efficiency camp, since it
> > seems better to have clear semantics and a clear reason for the syntax
> > to be different from if/elif chains.)
>
> if you're now in the efficiency camp, why not just solve this on the
> code generator level ?  given
>
>      var = some expression
>      if var == constant:
>          ...
>      elif var == constant:
>          ...
>
> let the compiler use a dispatch table, if it can and wants to.

But in most cases the 'constant' is actually an expression involving a
global, often even a global in another module. (E.g. sre_compile.py)
The compiler will have a hard time proving that this is really a
constant, so it won't optimize the code.

The proposed switch semantics (create the table when the containing
function is defined) get around this by "defining" what it means by
"constant".

BTW I would like references to locals shadowing globals to be flagged
as errors (or at least warnings) so that users who deduced the wrong
mental model for a switch statement are caught out sooner.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Wed Jun 21 18:55:00 2006
From: guido at python.org (Guido van Rossum)
Date: Wed, 21 Jun 2006 09:55:00 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <5.1.1.6.0.20060621121821.03275ca8@sparrow.telecommunity.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<20060611010410.GA5723@21degrees.com.au>
	<5.1.1.6.0.20060618235252.03a71680@sparrow.telecommunity.com>
	<44988C6E.4080806@canterbury.ac.nz> <449920A4.7040008@gmail.com>
	<5.1.1.6.0.20060621121821.03275ca8@sparrow.telecommunity.com>
Message-ID: <ca471dc20606210955o7844e85dy40fcc1875d5b53fa@mail.gmail.com>

On 6/21/06, Phillip J. Eby <pje at telecommunity.com> wrote:
> At 09:16 AM 6/21/2006 -0700, Guido van Rossum wrote:
> >After thinking about it a bit I think that if it's not immediately
> >contained in a function, it should be implemented as alternative
> >syntax for an if/elif chain.
>
> That worries me a little.  Suppose I write a one-off script like this:
>
> for line in sys.stdin:
>      words = line.split()
>      if words:
>          switch words[0]:
>              case "foo": blah
>              case words[-1]: print "mirror image!"

Why would you write a script like that? If you've learned the
"idiomatic" use of a switch statement first, that would never occur to
you. If you're totally clueless, I don't really care that much.

> Then, if I later move the switch into a function, it's not going to mean
> the same thing any more.

And it will be a clear compile-time warning because in the refactored
version you'd be attempting to use a local variable in a case.

> If the values are frozen at first use or
> definition time (which are the same thing for module-level code), then I'll
> find the lurking bug sooner.

Or not, depending on how easily the misbehavior is spotted from a
cursory glance at the output.

> OTOH, breaking it sooner doesn't seem like such a great idea either; seems
> like a recipe for a newbie-FAQ, actually.  ISTM that the only sane way to
> deal with this would be to ban the switch statement at module level, which
> then seems to be an argument for not including the switch statement at all.  :(

I don't understand this line of reasoning. The semantics I propose are
totally well-defined.

> I suppose the other possibility would be to require at compilation time
> that a case expression include only non-local variables.  That would mean
> that you couldn't use *any* variables in a case expression at module-level
> switch, but wording the error message for that to not be misleading might
> be tricky.

That seems overly restrictive given that I expect *most* cases to use
named constants, not literals.

> I suppose an error message for the above could simply point to the fact
> that 'words' is being rebound in the current scope, and thus can't be
> considered a constant.  This is only an error at the top-level if the
> switch appears in a loop, and the variable is rebound somewhere within that
> loop or is rebound more than once in the module as a whole (including
> 'global' assignments in functions).

Let's not focus on the error message. I think your assumption that
every switch at the global level ought to be able to be moved into a
function and work the same way is not a particularly important
requirement.

(As a compromise, a switch at the global level with only literal cases
could be efficiently optimized. This should include "compile-time
constant expressions".)

BTW a switch in a class should be treated the same as a global switch.
But what about a switch in a class in a function?

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From pje at telecommunity.com  Wed Jun 21 19:05:19 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Wed, 21 Jun 2006 13:05:19 -0400
Subject: [Python-Dev] Switch statement
In-Reply-To: <e7bssg$hke$1@sea.gmane.org>
References: <ca471dc20606210926i2681b27erff515fe4b96022cf@mail.gmail.com>
	<17547.19802.361151.705599@montanaro.dyndns.org>
	<4496FB54.5060800@ewtllc.com>
	<ca471dc20606191247l183eaf0eh6012600500ea311b@mail.gmail.com>
	<449707E2.7060803@ewtllc.com>
	<ca471dc20606191329p1a978983g2c3ea7401213d7dd@mail.gmail.com>
	<4498924F.5000508@canterbury.ac.nz>
	<5.1.1.6.0.20060620210402.01e9bc78@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621012733.01e8f660@sparrow.telecommunity.com>
	<Pine.LNX.4.58.0606210334470.17937@server1.LFW.org>
	<5.1.1.6.0.20060621082808.01e90d18@sparrow.telecommunity.com>
	<ca471dc20606210926i2681b27erff515fe4b96022cf@mail.gmail.com>
Message-ID: <5.1.1.6.0.20060621125729.031df008@sparrow.telecommunity.com>

At 06:41 PM 6/21/2006 +0200, Fredrik Lundh wrote:
>Guido van Rossum wrote:
>
> > (Note how I've switched to the switch-for-efficiency camp, since it
> > seems better to have clear semantics and a clear reason for the syntax
> > to be different from if/elif chains.)
>
>if you're now in the efficiency camp, why not just solve this on the
>code generator level ?  given
>
>      var = some expression
>      if var == constant:
>          ...
>      elif var == constant:
>          ...
>
>let the compiler use a dispatch table, if it can and wants to.

Two reasons:

1. Having special syntax is an assertion that 'var' will be usable as a 
dictionary key.  Without this assertion, the generated code would need to 
trap hashing failure.

2. Having special syntax is likewise an assertion that the 'constants' will 
remain constant, if they're symbolic constants like:

FOO = "foo"



From pje at telecommunity.com  Wed Jun 21 19:09:49 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Wed, 21 Jun 2006 13:09:49 -0400
Subject: [Python-Dev] Switch statement
In-Reply-To: <ca471dc20606210955o7844e85dy40fcc1875d5b53fa@mail.gmail.com>
References: <5.1.1.6.0.20060621121821.03275ca8@sparrow.telecommunity.com>
	<17547.19802.361151.705599@montanaro.dyndns.org>
	<20060611010410.GA5723@21degrees.com.au>
	<5.1.1.6.0.20060618235252.03a71680@sparrow.telecommunity.com>
	<44988C6E.4080806@canterbury.ac.nz> <449920A4.7040008@gmail.com>
	<5.1.1.6.0.20060621121821.03275ca8@sparrow.telecommunity.com>
Message-ID: <5.1.1.6.0.20060621130606.0373c788@sparrow.telecommunity.com>

At 09:55 AM 6/21/2006 -0700, Guido van Rossum wrote:
>BTW a switch in a class should be treated the same as a global switch.
>But what about a switch in a class in a function?

Okay, now my head hurts.  :)

A switch in a class doesn't need to be treated the same as a global switch, 
because locals()!=globals() in that case.

I think the top-level is the only thing that really needs a special case 
vs. the general "error if you use a local variable in the expression" rule.

Actually, it might be simpler just to always reject local variables -- even 
at the top-level -- and be done with it.


From guido at python.org  Wed Jun 21 19:27:23 2006
From: guido at python.org (Guido van Rossum)
Date: Wed, 21 Jun 2006 10:27:23 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <5.1.1.6.0.20060621130606.0373c788@sparrow.telecommunity.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<20060611010410.GA5723@21degrees.com.au>
	<5.1.1.6.0.20060618235252.03a71680@sparrow.telecommunity.com>
	<44988C6E.4080806@canterbury.ac.nz> <449920A4.7040008@gmail.com>
	<5.1.1.6.0.20060621121821.03275ca8@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621130606.0373c788@sparrow.telecommunity.com>
Message-ID: <ca471dc20606211027q652c84cdp997c2ef92c0eab42@mail.gmail.com>

On 6/21/06, Phillip J. Eby <pje at telecommunity.com> wrote:
> At 09:55 AM 6/21/2006 -0700, Guido van Rossum wrote:
> >BTW a switch in a class should be treated the same as a global switch.
> >But what about a switch in a class in a function?
>
> Okay, now my head hurts.  :)

Welcome to the club. There's a Monty Python sketch appropriate...

> A switch in a class doesn't need to be treated the same as a global switch,
> because locals()!=globals() in that case.

But that's not the discerning rule in my mind; the rule is, how to
define "at function definition time".

> I think the top-level is the only thing that really needs a special case
> vs. the general "error if you use a local variable in the expression" rule.

To the contrary, at the top level my preferred semantics don't care
because they don't use a hash.

The strict rules about locals apply when it occurs inside a function,
since then we eval the case expressions at function definition time,
when the locals are undefined. This would normally be good enough, but
I worry (a bit) about this case:

  y = 12
  def foo(x, y):
    switch x:
    case y: print "something"

which to the untrained observer (I care about untrained readers much
more than about untrained writers!) looks like it would print
something if x equals y, the argument, while in fact it prints
something if x equals 12.

> Actually, it might be simpler just to always reject local variables -- even
> at the top-level -- and be done with it.

Can't because locals at the top-level are also globals.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From fredrik at pythonware.com  Wed Jun 21 19:53:42 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Wed, 21 Jun 2006 19:53:42 +0200
Subject: [Python-Dev] Switch statement
In-Reply-To: <ca471dc20606210947s1490bb66s6f040c78239dc623@mail.gmail.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>	<449707E2.7060803@ewtllc.com>	<ca471dc20606191329p1a978983g2c3ea7401213d7dd@mail.gmail.com>	<4498924F.5000508@canterbury.ac.nz>	<5.1.1.6.0.20060620210402.01e9bc78@sparrow.telecommunity.com>	<5.1.1.6.0.20060621012733.01e8f660@sparrow.telecommunity.com>	<Pine.LNX.4.58.0606210334470.17937@server1.LFW.org>	<5.1.1.6.0.20060621082808.01e90d18@sparrow.telecommunity.com>	<ca471dc20606210926i2681b27erff515fe4b96022cf@mail.gmail.com>	<e7bssg$hke$1@sea.gmane.org>
	<ca471dc20606210947s1490bb66s6f040c78239dc623@mail.gmail.com>
Message-ID: <e7c135$4ql$1@sea.gmane.org>

Guido van Rossum wrote:

> But in most cases the 'constant' is actually an expression involving a
> global, often even a global in another module. (E.g. sre_compile.py)
> The compiler will have a hard time proving that this is really a
> constant, so it won't optimize the code.

unless we come up with a way to make it possible to mark a variable as 
a constant.

> The proposed switch semantics (create the table when the containing
> function is defined) get around this by "defining" what it means by
> "constant".

well, given that people find it really confusing that the two X's in

    def func(value=X):
        print X

are evaluated at different times, I'm not sure it's a good idea to 
introduce more "evaluation scopes".

but sure, I'm sure people doing certification tests would love questions 
like:

     Q: If a program calls the 'func' function below as 'func()'
        and ONE and TWO are both integer objects, what does 'func'
        print ?

     ONE = 1
     TWO = 2

     def func(value=ONE):
         switch value:
         case ONE:
             print value, "is", ONE
         case TWO:
             print value, "is", TWO

     a: "1 is 1"
     b: "2 is 2"
     c: nothing at all
     d: either "1 is 1" or nothing at all
     e: who knows ?

but I cannot say I find it especially Pythonic, really...

</F>


From pje at telecommunity.com  Wed Jun 21 20:01:34 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Wed, 21 Jun 2006 14:01:34 -0400
Subject: [Python-Dev] Switch statement
In-Reply-To: <ca471dc20606211027q652c84cdp997c2ef92c0eab42@mail.gmail.com>
References: <5.1.1.6.0.20060621130606.0373c788@sparrow.telecommunity.com>
	<17547.19802.361151.705599@montanaro.dyndns.org>
	<20060611010410.GA5723@21degrees.com.au>
	<5.1.1.6.0.20060618235252.03a71680@sparrow.telecommunity.com>
	<44988C6E.4080806@canterbury.ac.nz> <449920A4.7040008@gmail.com>
	<5.1.1.6.0.20060621121821.03275ca8@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621130606.0373c788@sparrow.telecommunity.com>
Message-ID: <5.1.1.6.0.20060621133734.03179858@sparrow.telecommunity.com>

At 10:27 AM 6/21/2006 -0700, Guido van Rossum wrote:
>On 6/21/06, Phillip J. Eby <pje at telecommunity.com> wrote:
> > At 09:55 AM 6/21/2006 -0700, Guido van Rossum wrote:
> > >BTW a switch in a class should be treated the same as a global switch.
> > >But what about a switch in a class in a function?
> >
> > Okay, now my head hurts.  :)
>
>Welcome to the club. There's a Monty Python sketch appropriate...

Aha!  So *that's* why Jim Fulton is always going "Waaaa".  :)


> > A switch in a class doesn't need to be treated the same as a global switch,
> > because locals()!=globals() in that case.
>
>But that's not the discerning rule in my mind; the rule is, how to
>define "at function definition time".

Waaaaa!  (i.e., my head hurts again :)


> > I think the top-level is the only thing that really needs a special case
> > vs. the general "error if you use a local variable in the expression" rule.
>
>To the contrary, at the top level my preferred semantics don't care
>because they don't use a hash.
>
>The strict rules about locals apply when it occurs inside a function,
>since then we eval the case expressions at function definition time,
>when the locals are undefined. This would normally be good enough, but
>I worry (a bit) about this case:
>
>   y = 12
>   def foo(x, y):
>     switch x:
>     case y: print "something"
>
>which to the untrained observer (I care about untrained readers much
>more than about untrained writers!) looks like it would print
>something if x equals y, the argument, while in fact it prints
>something if x equals 12.

I was thinking this should be rejected due to a local being in the 'case' 
expression.


> > Actually, it might be simpler just to always reject local variables -- even
> > at the top-level -- and be done with it.
>
>Can't because locals at the top-level are also globals.

But you could also just use literals, and the behavior would then be 
consistent.  But I'm neither so enamored of that solution nor so against 
if/elif behavior that I care to argue further.

One minor point, though: what happens if we generate an if/elif for the 
switch, and there's a repeated case value?  The advantage of still using 
the hash-based code at the top level is that you still get an error for 
duplicating keys.

Ugh.  It still seems like the simplest implementation is to say that the 
lookup table is built "at first use" and that the case expressions may not 
refer to variables that are known to be bound in the current scope, or 
rebound in the case of the top level.  So the 'case y' example would be a 
compile-time error, as would my silly "words" example.  But code that only 
used "constants" at the top level would work.


From fredrik at pythonware.com  Wed Jun 21 20:59:03 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Wed, 21 Jun 2006 20:59:03 +0200
Subject: [Python-Dev] Switch statement
In-Reply-To: <e7c135$4ql$1@sea.gmane.org>
References: <17547.19802.361151.705599@montanaro.dyndns.org>	<449707E2.7060803@ewtllc.com>	<ca471dc20606191329p1a978983g2c3ea7401213d7dd@mail.gmail.com>	<4498924F.5000508@canterbury.ac.nz>	<5.1.1.6.0.20060620210402.01e9bc78@sparrow.telecommunity.com>	<5.1.1.6.0.20060621012733.01e8f660@sparrow.telecommunity.com>	<Pine.LNX.4.58.0606210334470.17937@server1.LFW.org>	<5.1.1.6.0.20060621082808.01e90d18@sparrow.telecommunity.com>	<ca471dc20606210926i2681b27erff515fe4b96022cf@mail.gmail.com>	<e7bssg$hke$1@sea.gmane.org>	<ca471dc20606210947s1490bb66s6f040c78239dc623@mail.gmail.com>
	<e7c135$4ql$1@sea.gmane.org>
Message-ID: <e7c4tl$kq7$1@sea.gmane.org>

Fredrik Lundh wrote:

>> But in most cases the 'constant' is actually an expression involving a
>> global, often even a global in another module. (E.g. sre_compile.py)
>> The compiler will have a hard time proving that this is really a
>> constant, so it won't optimize the code.
> 
> unless we come up with a way to make it possible to mark a variable as
> a constant.

such as the primary

     'constant' expr

which simply means that expr will be evaluated at function definition 
time, rather than at runtime.  example usage:

     var = expression
     if var == constant sre.FOO:
         ...
     elif var == constant sre.BAR:
         ...
     elif var in constant (sre.FIE, sre.FUM):
         ...

</F>


From guido at python.org  Wed Jun 21 22:16:59 2006
From: guido at python.org (Guido van Rossum)
Date: Wed, 21 Jun 2006 13:16:59 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <5.1.1.6.0.20060621133734.03179858@sparrow.telecommunity.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<20060611010410.GA5723@21degrees.com.au>
	<5.1.1.6.0.20060618235252.03a71680@sparrow.telecommunity.com>
	<44988C6E.4080806@canterbury.ac.nz> <449920A4.7040008@gmail.com>
	<5.1.1.6.0.20060621121821.03275ca8@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621130606.0373c788@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621133734.03179858@sparrow.telecommunity.com>
Message-ID: <ca471dc20606211316n4cc6dc1dofacd30bc55560b9d@mail.gmail.com>

On 6/21/06, Phillip J. Eby <pje at telecommunity.com> wrote:
> >But that's not the discerning rule in my mind; the rule is, how to
> >define "at function definition time".
>
> Waaaaa!  (i.e., my head hurts again :)

Um, wasn't this your proposal (to freeze the case expressions at
function definition time)?

> > > I think the top-level is the only thing that really needs a special case
> > > vs. the general "error if you use a local variable in the expression" rule.
> >
> >To the contrary, at the top level my preferred semantics don't care
> >because they don't use a hash.
> >
> >The strict rules about locals apply when it occurs inside a function,
> >since then we eval the case expressions at function definition time,
> >when the locals are undefined. This would normally be good enough, but
> >I worry (a bit) about this case:
> >
> >   y = 12
> >   def foo(x, y):
> >     switch x:
> >     case y: print "something"
> >
> >which to the untrained observer (I care about untrained readers much
> >more than about untrained writers!) looks like it would print
> >something if x equals y, the argument, while in fact it prints
> >something if x equals 12.
>
> I was thinking this should be rejected due to a local being in the 'case'
> expression.

Me too. I guess I was just pointing out that "just" evaluating it in
the global scope would not give an error, just like this is valid (but
confusing):

y = 12
def foo(y=y):
  print y
y = 13
foo()  # prints 12

> > > Actually, it might be simpler just to always reject local variables -- even
> > > at the top-level -- and be done with it.
> >
> >Can't because locals at the top-level are also globals.
>
> But you could also just use literals, and the behavior would then be
> consistent.  But I'm neither so enamored of that solution nor so against
> if/elif behavior that I care to argue further.

Yeah, but if you have names for your constants it would be a shame if
you couldn't use them because they happen to be defined in the same
scope.

> One minor point, though: what happens if we generate an if/elif for the
> switch, and there's a repeated case value?  The advantage of still using
> the hash-based code at the top level is that you still get an error for
> duplicating keys.

Sure. But the downside is that it's now actually *slower* than the
if/elif version, because it must evaluate all the case expressions.

> Ugh.  It still seems like the simplest implementation is to say that the
> lookup table is built "at first use" and that the case expressions may not
> refer to variables that are known to be bound in the current scope, or
> rebound in the case of the top level.  So the 'case y' example would be a
> compile-time error, as would my silly "words" example.  But code that only
> used "constants" at the top level would work.

I don't like "first use" because it seems to invite tricks.

The 'case y' example can be flagged as a compile time error with
enough compile-time analysis (we *know* all the locals after all).

IMO your silly words example should just pass (at the global level);
it's silly but not evil, and it's totally clear what it does if it
does anything at all (using the if/elif translation semantics; not
using the first-use semantics). That it doesn't refactor cleanly into
a function body isn't enough reason to forbid it.

I feel some kind of rule of thumb coming up regarding language design,
but I'm having a hard time saying it clearly. It's something about
making commonly written idioms easy to understand even for people
without a full understanding of the language, so that (a) people
generalizing from a few examples without too much help or prior
understanding won't go too far off, and (b) people who *do* care to
read and understand the language spec can always clearly find out what
any particular thing means and know the pitfalls.

An example is assignment. Python lets you do things like

  x = 42
  y = x

and it all sounds completely reasonable. But Fortran/C/C++ programmers
beware: although the syntax is familiar, this is really a name-binding
statement, not a value-copying statement.

There are many other examples. Function and class definitions, for
example (they look like definitions but are run-time constructs, unlike
in most other languages). Etc.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Wed Jun 21 22:21:18 2006
From: guido at python.org (Guido van Rossum)
Date: Wed, 21 Jun 2006 13:21:18 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <e7c4tl$kq7$1@sea.gmane.org>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<5.1.1.6.0.20060620210402.01e9bc78@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621012733.01e8f660@sparrow.telecommunity.com>
	<Pine.LNX.4.58.0606210334470.17937@server1.LFW.org>
	<5.1.1.6.0.20060621082808.01e90d18@sparrow.telecommunity.com>
	<ca471dc20606210926i2681b27erff515fe4b96022cf@mail.gmail.com>
	<e7bssg$hke$1@sea.gmane.org>
	<ca471dc20606210947s1490bb66s6f040c78239dc623@mail.gmail.com>
	<e7c135$4ql$1@sea.gmane.org> <e7c4tl$kq7$1@sea.gmane.org>
Message-ID: <ca471dc20606211321k624fb425l9174efb9bd43f3e2@mail.gmail.com>

On 6/21/06, Fredrik Lundh <fredrik at pythonware.com> wrote:
> Fredrik Lundh wrote:
>
> >> But in most cases the 'constant' is actually an expression involving a
> >> global, often even a global in another module. (E.g. sre_compile.py)
> >> The compiler will have a hard time proving that this is really a
> >> constant, so it won't optimize the code.
> >
> > unless we come up with a way to make it possible to mark a variable as
> > a constant.
>
> such as the primary
>
>      'constant' expr
>
> which simply means that expr will be evaluated at function definition
> time, rather than at runtime.  example usage:
>
>      var = expression
>      if var == constant sre.FOO:
>          ...
>      elif var == constant sre.BAR:
>          ...
>      elif var in constant (sre.FIE, sre.FUM):
>          ...

This gets pretty repetitive. One might suggest that 'case' could imply
'constant'...?

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From pje at telecommunity.com  Wed Jun 21 22:50:23 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Wed, 21 Jun 2006 16:50:23 -0400
Subject: [Python-Dev] Switch statement
In-Reply-To: <ca471dc20606211316n4cc6dc1dofacd30bc55560b9d@mail.gmail.com>
References: <5.1.1.6.0.20060621133734.03179858@sparrow.telecommunity.com>
	<17547.19802.361151.705599@montanaro.dyndns.org>
	<20060611010410.GA5723@21degrees.com.au>
	<5.1.1.6.0.20060618235252.03a71680@sparrow.telecommunity.com>
	<44988C6E.4080806@canterbury.ac.nz> <449920A4.7040008@gmail.com>
	<5.1.1.6.0.20060621121821.03275ca8@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621130606.0373c788@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621133734.03179858@sparrow.telecommunity.com>
Message-ID: <5.1.1.6.0.20060621163853.0392d168@sparrow.telecommunity.com>

At 01:16 PM 6/21/2006 -0700, Guido van Rossum wrote:
>On 6/21/06, Phillip J. Eby <pje at telecommunity.com> wrote:
> > >But that's not the discerning rule in my mind; the rule is, how to
> > >define "at function definition time".
> >
> > Waaaaa!  (i.e., my head hurts again :)
>
>Um, wasn't this your proposal (to freeze the case expressions at
>function definition time)?

Actually, I proposed that either that *or* first use could work, and in 
subsequent discussion I sided with first use.

Greg didn't quote the rest of my original post or any of the subsequent 
discussion in the post you picked up on, so that probably gave you the 
impression I was still in favor of function definition time, when I had 
already begun leaning towards "first use" as easier to define.


>Yeah, but if you have names for your constants it would be a shame if
>you couldn't use them because they happen to be defined in the same
>scope.

Maybe the real answer is to have a "const" declaration, not necessarily the 
way that Fredrik suggested, but a way to pre-declare constants e.g.:

     const FOO = 27

And then require case expressions to be either literals or constants.  The 
constants need not be computable at compile time, just runtime.  If a 
constant is defined using a foldable expression (e.g. FOO = 27 + 43), then 
the compiler can always optimize it down to a code level 
constant.  Otherwise, it can just put constants into cells that the 
functions use as part of their closure.  (For that matter, the switch 
statement jump tables, if any, can be put in a cell too.)
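
A very rough sketch of the "constants in cells" idea -- the value is
computed once and carried in a closure cell, so later rebinding of the
global name doesn't affect it (assumed mechanics, not an actual 'const'
implementation):

    FOO = 27 + 43                  # imagine 'const FOO = 27 + 43'

    def _freeze(value):
        def read():
            return value           # 'value' lives in a closure cell
        return read

    _const_FOO = _freeze(FOO)

    def handler(x):
        return x == _const_FOO()   # uses the frozen value, not the global

    FOO = 0                        # rebinding the global no longer matters
    print handler(70)              # -> True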


>I don't like "first use" because it seems to invite tricks.

Okay, then I think we need a way to declare a global as being constant.  It 
seems like all the big problems with switch/case basically amount to us 
trying to wiggle around the need to explicitly declare constants.


From jcarlson at uci.edu  Wed Jun 21 23:58:19 2006
From: jcarlson at uci.edu (Josiah Carlson)
Date: Wed, 21 Jun 2006 14:58:19 -0700
Subject: [Python-Dev] TRUNK is UNFROZEN, but in FEATURE FREEZE
In-Reply-To: <200606220238.52248.anthony@interlink.com.au>
References: <200606220238.52248.anthony@interlink.com.au>
Message-ID: <20060621143000.1DC3.JCARLSON@uci.edu>


Anthony Baxter <anthony at interlink.com.au> wrote:
> 2.5b1 is out, so I'm declaring the SVN trunk unfrozen. Note, though, 
> that as we're now post-beta, we're in FEATURE FREEZE. 

Hey Raymond, any word on those binascii additions, or should I clean up
that struct patch and add in some tests?

 - Josiah


From titus at caltech.edu  Mon Jun 19 07:53:01 2006
From: titus at caltech.edu (Titus Brown)
Date: Sun, 18 Jun 2006 22:53:01 -0700
Subject: [Python-Dev] Code coverage reporting.
In-Reply-To: <44962630.4060806@gmail.com>
References: <20060615171935.GA26179@caltech.edu>
	<bbaeab100606182012g6aeb7ab5q5ec87d87a00107d9@mail.gmail.com>
	<44962630.4060806@gmail.com>
Message-ID: <20060619055301.GA5000@caltech.edu>

On Mon, Jun 19, 2006 at 02:21:04PM +1000, Nick Coghlan wrote:
-> Brett Cannon wrote:
-> >But it does seem accurate; random checking of some modules that got high 
-> >but not perfect coverage all seem to be instances where dependency 
-> >injection would be required to get the tests to work since they were 
-> >based on platform-specific things.
-> 
-> There's something odd going on with __future__.py, though. The module level 
-> code all shows up as not executed, but the bodies of the two _Feature 
-> methods both show up as being run.
-> 
-> I'm curious as to how a function body can be executed without executing the 
-> function definition first :)

Coverage recording probably wasn't on at the time the module was
imported; I only turn on recording in the 'runtest' function, and
not before.  I should probably start it earlier ;)

-> As far as making the comments/docstrings less obvious goes, grey is usually 
-> a good option for that.

I'll try it out and see...

thanks,
--titus

From nmm1 at cus.cam.ac.uk  Mon Jun 19 10:55:44 2006
From: nmm1 at cus.cam.ac.uk (Nick Maclaren)
Date: Mon, 19 Jun 2006 09:55:44 +0100
Subject: [Python-Dev] Numerical robustness, IEEE etc.
Message-ID: <E1FsFXs-0005ZD-LA@virgo.cus.cam.ac.uk>

Brett Cannon's and Neal Norwitz's replies appreciated and noted, but
responses sent by mail.


Nick Coghlan <ncoghlan at gmail.com> wrote:
>
> Python 2.4's decimal module is, in essence, a floating point emulator based on 
> the General Decimal Arithmetic specification.

Grrk.  Format and all?  Because, in software, encoding, decoding and
dealing with the special cases accounts for the vast majority of the
time.  Using a format and specification designed for implementation
in software is a LOT faster (often 5-20 times).

> If you want floating point mathematics that doesn't have insane platform 
> dependent behaviour, the decimal module is the recommended approach. By the 
> time Python 2.6 rolls around, we will hopefully have an optimized version 
> implemented in C (that's being worked on already).

Yes.  There is no point in building a wheel if someone else is doing it.
Please pass my name on to the people doing the optimisation, as I have
a lot of experience in this area and may be able to help.  But it is a
fairly straightforward (if tricky) task.

> That said, I'm not clear on exactly what changes you'd like to make to the 
> binary floating point type, so I don't know if I think they're a good idea or 
> not :)

Now, here it is worth posting a response :-)

The current behaviour follows C99 (sic) with some extra checking (e.g.
division by zero raises an exception).  However, this means that a LOT
of errors will give nonsense answers without comment, and there are a
lot of ways to 'lose' NaN values quietly - e.g. int(NaN).  That is NOT
good software engineering.  So:

Mode A:  follow IEEE 754R slavishly, if and when it ever gets into print.
There is no point in following C99, as it is too ill-defined, even if it
were felt desirable.  This should not be the default, because of the
flaws I mention above (see Kahan on Java).

Mode B:  all numerically ambiguous or invalid operations should raise
an exception - including pow(0,0), int(NaN) etc. etc.  There is a moot
point over whether overflow is such a case in an arithmetic that has
infinities, but let's skip over that one for now.

Mode C:  all numerically ambiguous or invalid operations should return
a NaN (or infinity, if appropriate).  Anything that would lose the error
indication would raise an exception.  The selection between modes B and
C could be done by a method on the class - with mode B being selected
if any argument had it set, and mode C otherwise.

Now, both modes B and C are traditional approaches to numerical safety,
and have the property that error indications can't be lost "by accident",
though they make no guarantees that the answers make sense.  I am
agnostic about which is better, though mode B is a LOT better from the
debugging point of view, as you discover an error closer to where it
was made.

Heaven help us, there could be a mode D, which would be mode C but
with trace buffers.  They are another sadly neglected software
engineering technique, but let's not add every bell and whistle on
the shelf :-)
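
A crude sketch of how modes B and C might differ for a single
operation ('safediv' is a made-up helper, not a proposal for the real
implementation; note that even constructing a NaN portably is a
problem on some platforms):

    def safediv(x, y, mode="B"):
        """Divide x by y; raise in mode B, propagate NaN/inf in mode C."""
        if y == 0.0:
            if mode == "B":                  # mode B: trap at the point
                raise ArithmeticError("x/0.0 flagged here")   # of error
            if x == 0.0:
                return float("nan")          # mode C: 0/0 -> NaN
            if x > 0:
                return float("inf")          # mode C: x/0 -> +/- infinity
            return -float("inf")
        return x / y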


"tjreedy" <tjreedy at udel.edu> wrote:
> 
> > experience from times of yore is that emulated floating-point would
> > be fast enough that few, if any, Python users would notice.
>         
> Perhaps you should enquire on the Python numerical and scientific computing 
> lists to see how many feel differently.  I don't see how someone crunching 
> numbers hours per day could not notice a slowdown.

Oh, certainly, almost EVERYONE will "feel" differently!  But that is
not the point.  Those few of us remaining (and there are damn few) who
know how a fast emulated floating-point performs know that the common
belief that it is very slow is wrong.  I have both used and implemented
it :-)

The point is, as I mention above, you MUST use a software-friendly
format AND specification if you want performance.  IEEE 754 and IBM's
decimal pantechnicon are both extremely software-hostile.


Regards,
Nick Maclaren,
University of Cambridge Computing Service,
New Museums Site, Pembroke Street, Cambridge CB2 3QH, England.
Email:  nmm1 at cam.ac.uk
Tel.:  +44 1223 334761    Fax:  +44 1223 334679

From titus at caltech.edu  Mon Jun 19 17:18:09 2006
From: titus at caltech.edu (Titus Brown)
Date: Mon, 19 Jun 2006 08:18:09 -0700
Subject: [Python-Dev] Code coverage reporting.
In-Reply-To: <44969A8A.6000401@benjiyork.com>
References: <20060615171935.GA26179@caltech.edu>
	<bbaeab100606182012g6aeb7ab5q5ec87d87a00107d9@mail.gmail.com>
	<44969A8A.6000401@benjiyork.com>
Message-ID: <20060619151809.GB2539@caltech.edu>

On Mon, Jun 19, 2006 at 08:37:30AM -0400, Benji York wrote:
-> Brett Cannon wrote:
-> >But it does seem accurate; random checking of some modules that got high 
-> >but not perfect coverage all seem to be instances where dependency 
-> >injection would be required to get the tests to work since they were 
-> >based on platform-specific things.
-> 
-> >I don't know if we need it hooked into the buildbots (unless it is dirt 
-> >cheap to generate the report).
-> 
-> It would be interesting to combine the coverage over several platforms 
-> and report that.

Yes, I noticed that the platform specific stuff doesn't get covered, of
course.  It's very easy to do, *if* there's any way to get the coverage
database from a central location (or send it back to a central location).

It might be interesting to run coverage analysis -- either figleaf or
Ned Batchelder's module[0] -- once a week on select buildbot machines
(one linux, one windows, one mac, or some such) and make the coverage
databases available via something like a downloadable static file.  Then
anyone could download those files and do Interesting Things with them.

--titus

[0] I'm sorry, I don't know how Walter Dorwald generates his coverage;
if it's OSS, then it'd be better to use because it shows C code coverage
as well.

p.s. Here's the diff for regrtest:

Index: Lib/test/regrtest.py
===================================================================
--- Lib/test/regrtest.py        (revision 46972)
+++ Lib/test/regrtest.py        (working copy)
@@ -1,4 +1,5 @@
 #! /usr/bin/env python
+import figleaf

 """Regression test.

@@ -333,7 +334,11 @@
             tracer.runctx('runtest(test, generate, verbose, quiet, testdir)',
                           globals=globals(), locals=vars())
         else:
+            figleaf.start(False)
             ok = runtest(test, generate, verbose, quiet, testdir, huntrleaks)
+            figleaf.stop()
+            figleaf.write_coverage('.figleaf')
+
             if ok > 0:
                 good.append(test)
             elif ok == 0:

From nmm1 at cus.cam.ac.uk  Mon Jun 19 17:29:00 2006
From: nmm1 at cus.cam.ac.uk (Nick Maclaren)
Date: Mon, 19 Jun 2006 16:29:00 +0100
Subject: [Python-Dev] Numerical robustness, IEEE etc.
In-Reply-To: Your message of "Mon, 19 Jun 2006 13:04:46 BST."
	<2mmzc9l5b5.fsf@starship.python.net> 
Message-ID: <E1FsLgS-0001fa-Cc@virgo.cus.cam.ac.uk>

Michael Hudson <mwh at python.net> wrote:
> 
> > As I have posted to comp.lang.python, I am not happy with Python's
> > numerical robustness - because it basically propagates the 'features'
> > of IEEE 754 and (worse) C99. 
> 
> That's not really now I would describe the situation today.

It is certainly the case in 2.4.2, however you would describe it.

> > 2) Because some people are dearly attached to the current behaviour,
> > warts and all, and there is a genuine quandary of whether the 'right'
> > behaviour is trap-and-diagnose, propagate-NaN or whatever-IEEE-754R-
> > finally-specifies (let's ignore C99 and Java as beyond redemption),
> 
> Why?  Maybe it's clear to you, but it's not totally clear to me, and
> it any case the discussion would be better informed for not being too
> dismissive.

Why which?  There are several things that you might be puzzled over.
And where can I start?  Part of the problem is that I have spent a LOT
of time in these areas in the past decades, and have been involved
with many of the relevant standards, and I don't know what to assume.

> > there might well need to be options.  These can obviously be done by
> > a command-line option, an environment variable or a float method.
> > There are reasons to disfavour the last, but all are possible.  Which
> > is the most Pythonesque approach?
> 
> I have heard Tim say that there are people who would dearly like to be
> able to choose.  Environment variables and command line switches are
> not Pythonic.

All right, but what is?  Firstly, for something that needs to be
program-global?  Secondly, the case of things that don't need to be brings
up my point of adding methods to a built-in class.

> I'm interested in making Python's floating point story better, and
> have worked on a few things for Python 2.5 -- such as
> pickling/marshalling of special values -- but I'm not really a
> numerical programmer and don't like to guess what they need.

Ah.  I must get a snapshot, then.  That was one of the lesser things
on my list.  I have spent a lot of the past few decades in the numerical
programming arena, from many aspects.


Regards,
Nick Maclaren,
University of Cambridge Computing Service,
New Museums Site, Pembroke Street, Cambridge CB2 3QH, England.
Email:  nmm1 at cam.ac.uk
Tel.:  +44 1223 334761    Fax:  +44 1223 334679

From titus at caltech.edu  Mon Jun 19 17:41:00 2006
From: titus at caltech.edu (Titus Brown)
Date: Mon, 19 Jun 2006 08:41:00 -0700
Subject: [Python-Dev] Code coverage reporting.
In-Reply-To: <bbaeab100606182012g6aeb7ab5q5ec87d87a00107d9@mail.gmail.com>
References: <20060615171935.GA26179@caltech.edu>
	<bbaeab100606182012g6aeb7ab5q5ec87d87a00107d9@mail.gmail.com>
Message-ID: <20060619154100.GA17492@caltech.edu>

On Sun, Jun 18, 2006 at 08:12:39PM -0700, Brett Cannon wrote:
-> On 6/15/06, Titus Brown <titus at caltech.edu> wrote:
-> >
-> >Folks,
-> >
-> >I've just run a code coverage report for the python2.4 branch:
-> >
-> >        http://vallista.idyll.org/~t/temp/python2.4-svn/
-> >
-> >This report uses my figleaf code,
-> >
-> >        http://darcs.idyll.org/~t/projects/figleaf-latest.tar.gz
-> 
-> 
-> Very nice, Titus!
-> 
-> I'm interested in feedback on a few things --
-> >
-> >* what more would you want to see in this report?
-> >
-> >* is there anything obviously wrong about the report?
-> >
-> >In other words... comments solicited ;).
-> 
-> Making the comments in the code stand out less (i.e., not black) might be
-> handy since my eye still gets drawn to the comments a lot.

I think I'd have to use the tokenizer to do this, no?  The comments
aren't kept in the AST, and I don't want to write a half-arsed regexp
because I'm sure I'll stumble on comments in strings etc ;)

-> It would also be nice to be able to sort on different things, such as
-> filename.

Easy enough; just the index needs to be generated in multiple ways.

-> But it does seem accurate; random checking of some modules that got high but
-> not perfect coverage all seem to be instances where dependency injection
-> would be required to get the tests to work since they were based on
-> platform-specific things.

Great!

-> By the by, I'm also planning to integrate this into buildbot on some
-> >projects.  I'll post the scripts when I get there, and I'd be happy
-> >to help Python itself set it up, of course.
-> 
-> 
-> I don't know if we need it hooked into the buildbots (unless it is dirt
-> cheap to generate the report).  But hooking it up to the script in
-> Misc/build.sh that Neal has running to report reference leaks and
-> fundamental test failures would be wonderful.

Hmm, ok, I'll take a look.

The general cost is a ~2x slowdown for running with trace enabled, and the
HTML generation itself takes less than 5 minutes (all of that in AST
parsing/traversing to figure out what lines *should* be looked at).

cheers,
--titus

From nmm1 at cus.cam.ac.uk  Wed Jun 21 23:22:24 2006
From: nmm1 at cus.cam.ac.uk (Nick Maclaren)
Date: Wed, 21 Jun 2006 22:22:24 +0100
Subject: [Python-Dev] Numerical robustness, IEEE etc.
In-Reply-To: Your message of "Tue, 20 Jun 2006 11:52:57 BST."
	<C30C8CE4-E688-49A6-BF33-2BE3C811820D@python.net> 
Message-ID: <E1FtA9Y-0004r8-47@draco.cus.cam.ac.uk>

Michael Hudson <mwh at python.net> wrote:
>
> This mail never appeared on python-dev as far as I can tell, so I'm  
> not snipping anything.

And it still hasn't :-(  I am on the list of recipients without posting
rights, and the moderator appears to be on holiday.

> >>> As I have posted to comp.lang.python, I am not happy with Python's
> >>> numerical robustness - because it basically propagates the  
> >>> 'features'
> >>> of IEEE 754 and (worse) C99.
> >>
> >> That's not really now I would describe the situation today.
> >
> > It is certainly the case in 2.4.2, however you would describe it.
> 
> I guess you could say it reflects the features of C89.  It certainly  
> doesn't do anything C99 specific.

Oh, yes, it does!  If you look at floatobject.c, you will find it solid
with constructions that make limited sense in C99 but next to no sense
in C89.

> But I wouldn't characterize anything Python does in the floating  
> point area as "designed", particularly.  Portability makes that hard.

Not really.  We managed even back in the 1970s, when there was a LOT
more variation.  Writing code that would work, unchanged, on an IBM 360,
an ICL 1900 and a CDC 6600 was, er, interesting ....

> Why are C99 and Java beyond redemption?  I know some of the mistakes  
> Java makes here, but still, you could at least hint at which you are  
> thinking of.

Where do I start?  Look at Kahan's "Why Java's Floating-Point Hurts
Everyone, Everywhere" and then at the UK's reasons for voting "NO"
to C99.  There wasn't even an agreement on the INTENT of most of the
new features in SC22WG14, and God alone knows what the C99 standard
means (if anything).  There is informative and optional text that
overrides normative; there is wording that contradicts itself, or is
otherwise meaningless; and so on.  Plus the fact that the C99 standard
is simultaneously unusable and unimplementable on many architectures.
And the fact that most of it is numerically insane.

I could go on ....

> Well, if you can't explain what your intentions are to *me*, as a  
> mathematics-degree holding core Python developer that has done at  
> least some work in this area, I posit that you aren't going to get  
> very far.    

My intentions are to provide some numerically robust semantics,
preferably of the form where straightforward numeric code (i.e. code
that doesn't play any bit-twiddling tricks) will never invoke
mathematically undefined behaviour without it being flagged.  See
Kahan on that.

> I'm not intimately familiar with the standards like 754 but I have  
> some idea what they contain, and I've read appendix F of C99, if that  
> helps you target your explanations.

Not a lot.  Annex F in itself is only numerically insane.  You need to
know the rest of the standard, including that which is documented only
in SC22WG14 messages, to realise the full horror.

> Why does it need to be program global?  In my not-really-thought-out  
> plans for straightening out CPython's floating point story I had  
> envisioned code to be written something like this:

No, you are thinking at too low a level.  The problem with such things
is that they relate to the interfaces between types, and it is those
aspects where object-orientation falls down so badly.  For example,
consider conversion between float and long - which class should control
the semantics?

> This could be implemented by having a field in the threadstate of FPU  
> flags to check after each fp operation (or each set of fp operations,  
> possibly).  I don't think I have the guts to try to implement  
> anything sensible using HW traps (which are thread-local as well,  
> aren't they?).

Gods, NO!!!  Sorry, but I have implemented such things (but that was
on a far architecture, and besides the system is dead).  Modern CPU
architectures don't even DEFINE whether interrupt handling is local
to the core or chip, and document that only in the release notes,
but what is clear is that some BLACK incantations are needed in
either case.  Think of taking a machine check interrupt on a multi-
core, highly-pipelined architecture and blench.  And, if that is an
Itanic, gibber hysterically before taking early retirement on the
grounds of impending insanity.

Oh, that's the calm, moderate description.  The reality is worse.

> > Secondly, the case of things that don't need to be brings
> > up my point of adding methods to a built-in class.
> 
> This isn't very hard, really, in fact float has class methods in 2.5...

Thanks.  I will look, but remember this is being done at the C level.

> >> I'm interested in making Python's floating point story better, and
> >> have worked on a few things for Python 2.5 -- such as
> >> pickling/marshalling of special values -- but I'm not really a
> >> numerical programmer and don't like to guess what they need.
> >
> > Ah.  I must get a snapshot, then.  That was one of the lesser things
> > on my list.
> 
> It was fairly straightforward, and still caused portability problems...

Now, why did I predict that?  Did you, by any chance, include
System/390 and VAX support in your code :-)


Regards,
Nick Maclaren,
University of Cambridge Computing Service,
New Museums Site, Pembroke Street, Cambridge CB2 3QH, England.
Email:  nmm1 at cam.ac.uk
Tel.:  +44 1223 334761    Fax:  +44 1223 334679

From walter at livinglogic.de  Thu Jun 22 00:56:02 2006
From: walter at livinglogic.de (=?ISO-8859-1?Q?Walter_D=F6rwald?=)
Date: Thu, 22 Jun 2006 00:56:02 +0200
Subject: [Python-Dev] Code coverage reporting.
In-Reply-To: <20060619151809.GB2539@caltech.edu>
References: <20060615171935.GA26179@caltech.edu>	<bbaeab100606182012g6aeb7ab5q5ec87d87a00107d9@mail.gmail.com>	<44969A8A.6000401@benjiyork.com>
	<20060619151809.GB2539@caltech.edu>
Message-ID: <4499CE82.6050301@livinglogic.de>

Titus Brown wrote:

> On Mon, Jun 19, 2006 at 08:37:30AM -0400, Benji York wrote:
> -> Brett Cannon wrote:
> -> >But it does seem accurate; random checking of some modules that got high 
> -> >but not perfect coverage all seem to be instances where dependency 
> -> >injection would be required to get the tests to work since they were 
> -> >based on platform-specific things.
> -> 
> -> >I don't know if we need it hooked into the buildbots (unless it is dirt 
> -> >cheap to generate the report).
> -> 
> -> It would be interesting to combine the coverage over several platforms 
> -> and report that.
> 
> Yes, I noticed that the platform specific stuff doesn't get covered, of
> course.  It's very easy to do, *if* there's any way to get the coverage
> database from a central location (or send it back to a central location).
> 
> It might be interesting to run coverage analysis -- either figleaf or
> Ned Batchelder's module[0] -- once a week on select buildbot machines
> (one linux, one windows, one mac, or some such) and make the coverage
> databases available via something like a downloadable static file.  Then
> anyone could download those files and do Interesting Things with them.
> 
> --titus
> 
> [0] I'm sorry, I don't know how Walter Dorwald generates his coverage;
> if it's OSS, then it'd be better to use because it shows C code coverage
> as well.

The script at 
http://styx.livinglogic.de/~walter/python/coverage/PythonCodeCoverage.py 
definitely is open source, so feel free to use it in any way you want. The 
web application front end, though, isn't open source. The SQL script to 
recreate the database can be found here: 
http://styx.livinglogic.de/~walter/python/coverage/coverage.sql

Servus,
    Walter


From rwgk at yahoo.com  Thu Jun 22 01:38:48 2006
From: rwgk at yahoo.com (Ralf W. Grosse-Kunstleve)
Date: Wed, 21 Jun 2006 16:38:48 -0700 (PDT)
Subject: [Python-Dev] PyRange_New() alternative?
Message-ID: <20060621233848.89342.qmail@web31504.mail.mud.yahoo.com>

http://docs.python.org/dev/whatsnew/ports.html says:

  The PyRange_New() function was removed. It was never documented, never used
in the core code, and had dangerously lax error checking.

I use this function (don't remember how I found it; this was years ago),
therefore my code doesn't compile with 2.5b1 (it did OK before with 2.5a2). Is
there an alternative spelling for PyRange_New()?

Thank you in advance!

Ralf



From brett at python.org  Thu Jun 22 02:33:38 2006
From: brett at python.org (Brett Cannon)
Date: Wed, 21 Jun 2006 17:33:38 -0700
Subject: [Python-Dev] doc for new restricted execution design for Python
Message-ID: <bbaeab100606211733x3dd58226g15e2919e8e2cdd7e@mail.gmail.com>

I have been working on a design doc for restricted execution of Python
as part of my dissertation for getting Python into Firefox to replace
JavaScript on the web.  Since this is dealing with security and
messing that up can be costly, I am sending it to the list for any
possible feedback.

I have already run the ideas past Neal, Guido, Jeremy, and Alex and
everyone seemed to think the design was sound (thanks to them and Will
for attending my meeting on it and giving me feedback that helped to
shape this doc), so hopefully there are no major issues with the
design itself.  There are a couple of places (denoted with XXX) where
there is an open issue still.  Feedback on those would be great.

Anyway, here it is.  I am going to be offline most of tomorrow so I
probably won't get back to comments until Friday.

And just in case people are wondering, I plan on doing the
implementation in the open on a branch within Python's repository so
if this design works out it will end up in the core (as for when that
would land, I don't know, but hopefully for 2.6).

---------------------------------------------------------------------------------------------


      Restricted Execution for Python
#######################################

About This Document
=============================

This document is meant to lay out the general design for re-introducing a
restricted execution model for Python.  This document should provide one with
enough information to understand the goals for restricted execution, what
considerations were made for the design, and the actual design itself.  Design
decisions should be clear and explain not only why they were chosen but also
the possible drawbacks of taking that approach.


Goal
=============================

A good restricted execution model provides enough protection to prevent
malicious harm from coming to the system, and no more.  Barriers should be
minimized so as to allow most code that does not do anything that would be
regarded as harmful to run unmodified.

An important point to take into consideration when reading this document is
that it is part of my (Brett Cannon's) Ph.D. dissertation.  This means it is
heavily geared toward restricted execution when the interpreter is working
with Python code embedded in a web page.  While great strides have been taken
to keep the design general enough to allow all previous uses of the 'rexec'
module [#rexec]_ to work under the new design, that is not the primary goal.
This means that if a design decision must favor either the embedded use case
or sandboxing Python code in a Python application, the former will win out.

Throughout this document, the term "resource" is used to represent anything
that deserves possible protection.  This ranges from things that have a
physical representation (e.g., memory) to things that are more abstract and
specific to the interpreter (e.g., sys.path).

When referring to the state of an interpreter, it is either "trusted" or
"untrusted".  A trusted interpreter has no restrictions imposed upon any
resource.  An untrusted interpreter has at least one, possibly more, resource
with a restriction placed upon it.


.. contents::


Use Cases
/////////////////////////////

All use cases are based on how many untrusted or trusted interpreters are
running in a single process.


When the Interpreter Is Embedded
================================

Single Untrusted Interpreter
----------------------------

This use case is when an application embeds the interpreter and never has more
than one interpreter running.

The main security issue to watch out for is accidentally providing default
abilities to the interpreter.  There must also be protection against leaking
resources that the interpreter needs for general use under the covers into the
untrusted interpreter.


Multiple Untrusted Interpreters
-------------------------------

When multiple interpreters, all untrusted at varying levels, need to be running
within a single application.  This is the key use case that this proposed
design is targeted at.

On top of the security issues from a single untrusted interpreter, there is one
additional worry: resources cannot be allowed to leak into other interpreters
where they would be given escalated rights.


Stand-Alone Python
==================

When someone has written a Python program that wants to execute Python code in
an untrusted interpreter(s).  This is the use case that 'rexec' attempted to
fulfill.

The added security issue for this use case (on top of the ones for the other
use cases) is preventing something from the trusted interpreter leaking into an
untrusted interpreter and having elevated permissions there.  With multiple
untrusted interpreters one did not have to worry about the distinction between
trusted and untrusted interpreters.  With this use case you do have to worry
about the binary distinction between trusted and untrusted interpreters running
in the same process.


Resources to Protect
/////////////////////////////

XXX Threading?
XXX CPU?

Filesystem
===================

The most obvious facet of a filesystem to protect is reading from it.  One does
not want what is stored in ``/etc/passwd`` to get out.  And one also does not
want writing to the disk to be possible unless explicitly allowed, for
basically the same reason; if someone can write ``/etc/passwd`` then they can
set the password for the root account.

But one must also protect information about the filesystem.  This includes both
the filesystem layout and permissions on files.  This means pathnames need to
be properly hidden from an untrusted interpreter.


Physical Resources
===================

Memory should be protected.  It is a limited resource on the system that can
have an impact on other running programs if it is exhausted.  Being able to
restrict the use of memory would help alleviate issues from denial-of-service
(DoS) attacks.


Networking
===================

Networking is somewhat like the filesystem in terms of wanting similar
protections.  You do not want to let untrusted code make tons of socket
connections or accept them to do possibly nefarious things (e.g., acting as a
zombie).

You also want to prevent finding out information about the network you are
connected to.  This includes doing DNS resolution since that allows one to find
out what addresses your intranet has or what subnets you use.


Interpreter
===================

One must make sure that the interpreter is not harmed in any way.  There are
several ways to possibly do this.  One is generating hostile bytecode.  Another
is some buffer overflow.  In general any ability to crash the interpreter is
unacceptable.

There is also the issue of taking it over.  If one is able to gain control of
the overall process through the interpreter then heightened abilities could be
gained.


Types of Security
///////////////////////////////////////

As with most things, there are multiple approaches one can take to tackle a
problem.  Security is no exception.  In general there seem to be two approaches
to protecting resources.


Resource Hiding
=============================

By never giving code a chance to access a resource, you prevent it from being
(ab)used.  This is the idea behind resource hiding.  It can minimize security
checks to the single question of whether someone should be given a resource.
By having possession of a resource be what determines whether one is allowed
to use it, checks are needed only at the point where a resource is handed out.

This can be viewed as a passive system for security.  Once a resource has been
given to code there are no more checks to make sure the security model is not
being violated.

The most common implementation of resource hiding is capabilities.  In this
type of system a reference to a resource acts as a ticket that represents the
right to use it.  Once code has a reference it is considered to have full use
of the resource the reference represents, and no further security checks are
performed.

To allow customizable restrictions one can pass references to wrappers of
resources.  This allows one to provide custom security to resources instead of
requiring an all-or-nothing approach.
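
As a rough illustration (a sketch only; the class and names here are invented
and are not part of the proposed design), a wrapper at the Python level could
hand out read access to a file without handing out the file itself::

    class ReadOnlyFile(object):
        """Capability-style wrapper: holding a reference to this object
        grants read access only; the wrapped file is never handed out."""

        def __init__(self, fileobj):
            self._file = fileobj          # private by convention only

        def read(self, size=-1):
            return self._file.read(size)

        def readline(self):
            return self._file.readline()

        # No write() and no method that returns the underlying file.

    handle = ReadOnlyFile(open('/tmp/example.txt'))  # assumes the file exists

Note that the wrapped object is still reachable through ``handle._file``,
which is exactly the reference-control problem discussed next.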

The problem with capabilities is that it requires a way to control access to
references.  In languages such as Java that use a capability-based security
system, namespaces provide the protection.  By having private attributes and
compartmentalized namespaces, references cannot be reached without explicit
permission.

For instance, Java has a ClassLoader class that one can call to obtain a
desired reference.  The class does a security check to make sure the calling
code should be allowed to access the resource, and then returns a reference as
appropriate.  And with private attributes in objects and packages not providing
global attributes, references can be effectively hidden to prevent security
breaches.

To use an analogy, imagine you are providing security for your home.  With
capabilities, security comes from there being no way to know where your house
is without being told its location; a reference.  You might be able to ask a
guard (e.g., Java's ClassLoader) for a map, but if they refuse there is no way
for you to guess its location.  But once you know where it is, you have
complete use of the house.

And that complete access is an issue with a capability system.  If someone
plays a little loose with a reference to a resource then you run the risk of
it getting out.  Once a reference leaves your hands it becomes difficult to
revoke the right to use that resource.  A capability system can be designed to
do a check every time a reference is handed to a new object, but that can be
difficult to do properly when grafting a new way of handling resources on to an
existing system such as Python, since the check is no longer only at the point
where a reference is requested but also at plain assignment time.


Resource Crippling
=============================

Another approach to security is to provide constant, proactive security
checking of rights to use a resource.  One can have a resource perform a
security check every time someone tries to use a method on that resource.  This
pushes the security check to a lower level; from a reference level to the
method level.

By performing the security check every time a resource's method is called, the
worry of a resource's reference leaking out to insecure code is alleviated:
the resource cannot be used without authorization, regardless of whether
possession of the reference was ever granted.  This does add extra overhead,
though, by having to do so many security checks.
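
A rough sketch of the difference (purely illustrative; the policy object and
its method names are invented for this example)::

    class Policy(object):
        """Toy policy: the set of (path, mode) pairs that are allowed."""

        def __init__(self, allowed):
            self._allowed = set(allowed)

        def allows(self, path, mode):
            return (path, mode) in self._allowed

    class CrippledFile(object):
        """Every operation re-checks the policy, so leaking a reference
        to this object does not by itself grant any rights."""

        def __init__(self, path, policy):
            if not policy.allows(path, 'r'):
                raise IOError('opening %r for reading is not allowed' % path)
            self._path, self._policy = path, policy
            self._file = open(path)

        def read(self, size=-1):
            # The check happens on every call, not just at hand-out time.
            if not self._policy.allows(self._path, 'r'):
                raise IOError('reading %r is not allowed' % self._path)
            return self._file.read(size)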

FreeBSD's jail system provides something similar to this.  Various system calls
allow for basic usage, but knowing of a system call is not enough to be granted
its use.  Every invocation of a system call requires checking that the proper
rights have been granted before the call is allowed to perform its action.

An even better example in FreeBSD's jail system is its protection of sockets.
One can only bind a single IP address to a jail.  Any attempt to bind more, or
to perform disallowed uses of the one IP address that is granted, is prevented.
The check is performed at every call involving the granted IP address.

Using our home analogy, everyone in the world can know where your home is.  But
to access any door in your home, you have to pass a security check.  The
overhead is higher and slows down your movement in your home, but since it does
not matter whether perfect strangers know where your home is, there is no worry
about your address leaking out to the world.


The 'rexec' Module
///////////////////////////////////////

The 'rexec' module [#rexec]_ was based on the design used by Safe-Tcl
[#safe-tcl]_.  The design was essentially a capability system.  Safe-Tcl
allowed you to launch a separate interpreter whose global functions were
specified at creation time.  This prevented one from having any abilities that
were not explicitly provided.

For 'rexec', the Safe-Tcl model was tweaked to better match Python's situation.
An RExec object represented a restricted environment.  Imports were checked
against a whitelist of modules.  You could also restrict the type of modules to
import based on whether they were Python source, bytecode, or C extensions.
Built-ins were allowed except for a blacklist of built-ins to not provide.
Several other protections were provided; see documentation for the complete
list.

With an RExec object created, one could pass in strings of code to be executed
and have the result returned.  One could execute code based on whether stdin,
stdout, and stderr were provided or not.

The ultimate undoing of the 'rexec' module was how it handled access to objects
that in normal Python require no direct action to reach.  Importing modules
requires a direct action, and thus can be protected against directly in the
import machinery.  But built-ins are accessible by default and require no
direct action to access in normal Python; you just use their name since they
are provided in all namespaces.

For instance, in a restricted interpreter, one only had to do
``del __builtins__`` to gain access to the full set of built-ins.  Another way
is through using the gc module:
``gc.get_referrers(''.__class__.__bases__[0])[6]['file']``.  While both of
these could be fixed (the former being a bug in 'rexec' and the latter by not
allowing gc to be imported), they are examples of how things that require no
proactive action on the part of the programmer to reach in normal Python tend
to leak out.  This is an unfortunate side-effect of having all of that
wonderful reflection in Python.
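
To make the second example concrete, the following is the sort of walk it
relies on (a sketch for Python 2; the hard-coded index in the recipe above is
fragile, so this version simply searches the referrers)::

    import gc

    # Start from a harmless string literal, climb to a built-in base type,
    # and ask the garbage collector what still refers to it.  One of the
    # referrers is typically the __builtin__ namespace dict, which holds a
    # reference to the 'file' type.
    base = ''.__class__.__bases__[0]
    for referrer in gc.get_referrers(base):
        if isinstance(referrer, dict) and 'file' in referrer:
            leaked_file = referrer['file']
            break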

There is also the issue that 'rexec' was written in Python, which presents its
own problems.

Much has been learned since 'rexec' was written about how Python tends to be
used and where security issues tend to appear.  Essentially Python's dynamic
nature does not lend itself very well to passive security measures since the
reflection abilities in the language lend themselves to getting around
non-proactive security checks.


The Proposed Approach
///////////////////////////////////////

In light of where 'rexec' succeeded and failed along with what is known about
the two main types of security and how Python tends to operate, the following
is a proposal on how to secure Python for restricted execution.

First, security will be provided at the C level.  By taking advantage of the
language barrier of accessing C code from Python without explicit allowance
(i.e., ignoring ctypes [#ctypes]_), direct manipulation of the various security
checks can be substantially reduced and controlled.

Second, all proactive actions that code can do to gain access to resources will
be protected through resource hiding.  By having to go through Python to get to
something (e.g., modules), a security check can be put in place to deny access
as appropriate (this also ties into the separation between interpreters,
discussed below).

Third, any resource that is usually accessible by default will use resource
crippling.  Instead of worrying about hiding a resource that is available by
default (e.g., 'file' type), security checks within the resource will prevent
misuse.  Crippling can also be used for resources where an object could be
desired, but not at its full capacity (e.g., sockets).

Performance should not be too much of an issue for resource crippling.  Its
main use is for I/O types: files and sockets.  Since operations on these types
are I/O bound and not CPU bound, the overhead for doing the security check
should be a wash overall.

Fourth, the restrictions separating multiple interpreters within a single
process will be utilized.  This helps prevent the leaking of objects into
different interpreters with escalated privileges.  Python source code
modules are reloaded for each interpreter, preventing an object that does not
have resource crippling from being leaked into another interpreter unless
explicitly allowed.  C extension modules are shared by not reloading them
between interpreters, but this is considered in the security design.

Fifth, Python source code is always trusted.  Damage to a system is considered
to be done from either hostile bytecode or at the C level.  Thus protecting the
interpreter and extension modules is the great worry, not Python source code.
Python bytecode files, on the other hand, are considered inherently unsafe and
will never be imported directly.

Attempts to perform an action that is not allowed by the security policy will
raise an XXX exception (or subclass thereof) as appropriate.


Implementation Details
===============================

XXX prefix/module name; Restrict, Secure, Sandbox?  Different tense?
XXX C APIs use abstract names (e.g., string, integer) since have not decided if
Python objects or C types (e.g., PyStringObject vs. char *) will be used

Support for untrusted interpreters will be a compilation flag.  This allows the
more common case, where people do not care about these protections, to avoid
any performance hindrance.  And even when Python is compiled with support for
untrusted interpreter restrictions, when the running interpreter *is* trusted,
there will be no accidental triggers of protections.  This means that
developers can be liberal with the security protections without worrying
about causing issues for interpreters that do not need/want the protection.

At the Python level, the __restricted__ built-in will be set based on whether
the interpreter is untrusted or not.  This will be set for *all* interpreters,
regardless of whether untrusted interpreter support was compiled in or not.

For setting what is to be protected, the XXX<pointer to interpreter> for the
untrusted interpreter must be passed in.  This makes the protection very
explicit and helps make sure you set protections for the exact interpreter you
mean to.

The functions for checking permissions are actually macros that take, at
minimum, an error return value for the function calling the macro.  This
allows the macro to make the caller return if the check failed and cause the
XXX exception to be propagated.  This helps eliminate coding errors from
incorrectly checking a return value on a rights-checking function call.  For
the rare case where this functionality is disliked, just make the check in a
utility function and check that function's return value (but this is strongly
discouraged!).


API
--------------

* interpreter PyXXX_NewInterpreter()
    Return a new interpreter that is considered untrusted.  There is no
    corresponding PyXXX_EndInterpreter() as Py_EndInterpreter() will be taught
    how to handle untrusted interpreters.

* PyXXX_Trusted(error_return)
    Macro that has the caller return with 'error_return' if the interpreter is
    not a trusted one.


Memory
=============================

Protection
--------------

A memory cap will be allowed.

Modifications to pymalloc will be needed to properly keep track of the
allocation and freeing of memory.  The same goes for the macros around the
system malloc/free calls.  This provides a platform-independent system for
protection instead of relying on the operating system to provide a service for
capping the memory usage of a process.  It also allows the protection to be at
the interpreter level instead of at the process level.
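
The bookkeeping itself is simple.  A pure Python model of what the
PyXXX_MemoryAlloc()/PyXXX_MemoryFree() macros below would track per
interpreter (illustrative only; the real accounting lives in C)::

    class MemoryLedger(object):
        """Toy model of per-interpreter memory accounting."""

        def __init__(self, cap):
            self.cap = cap      # bytes the untrusted interpreter may use
            self.used = 0

        def alloc(self, nbytes):
            # MemoryError stands in for the XXX exception described below.
            if self.used + nbytes > self.cap:
                raise MemoryError('allocation would exceed the cap')
            self.used += nbytes

        def free(self, nbytes):
            if self.used - nbytes < 0:
                raise ValueError('freeing more memory than was allocated')
            self.used -= nbytes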


Why
--------------

Protecting excessive memory usage allows one to make sure that a DoS attack
against the system's memory is prevented.


Possible Security Flaws
-----------------------

If code makes direct calls to malloc/free instead of using the proper PyMem_*()
macros then the security check will be circumvented.  But C code is *supposed*
to use the proper macros or pymalloc and thus this issue is not with the
security model but with code not following Python coding standards.


API
--------------

* int PyXXX_SetMemoryCap(interpreter, integer)
    Set the memory cap for an untrusted interpreter.  If the interpreter is not
    an untrusted interpreter, return NULL.

* PyXXX_MemoryAlloc(integer, error_return)
    Macro to increase the amount of memory that the running untrusted
    interpreter is reported to be using.  If the increase puts the total count
    past the set limit, raise an XXX exception and cause the calling function
    to return with the value of error_return.  For trusted interpreters or
    untrusted interpreters where a cap has not been set, the macro does
    nothing.

* int PyXXX_MemoryFree(integer)
    Decrease the current running interpreter's allocated memory.  If this puts
    the memory returned to below 0, raise an XXX exception and return NULL.
    For trusted interpreters or untrusted interpreters where there is no memory
    cap, the macro does nothing.


CPU
=============================
XXX Needed?  Difficult to get right for all platforms.  Would have to be very
platform-specific.


Reading/Writing Files
=============================

Protection
--------------

The 'file' type will be resource crippled.  The user may specify files or
directories that are acceptable to be opened for reading, writing, or both.

All operations that either read, write, or provide info on a file will require
a security check to make sure that it is allowed for the file that the 'file'
object represents.  This includes having the 'file' type's constructor raise
XXX instead of an IOError stating that a file does not exist, so that
information about the filesystem is not improperly exposed.

The security check will be done for all 'file' objects regardless of where the
'file' object originated.  This prevents issues if the 'file' type or an
instance of it was accidentally made available to an untrusted interpreter.
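
The check itself amounts to a prefix test against the allowed paths.  A rough
Python model of what PyXXX_CheckPath() would verify (illustrative only; the
real check is done in C)::

    import os.path

    def path_allowed(path, mode, allowed):
        """Return True if 'path' may be used with 'mode'.

        'allowed' maps absolute file or directory paths to the modes
        granted for them, e.g. {'/tmp/sandbox': 'rw'}.
        """
        real = os.path.realpath(path)
        for prefix, modes in allowed.items():
            prefix = os.path.realpath(prefix)
            if real == prefix or real.startswith(prefix + os.sep):
                return mode in modes
        return False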


Why
--------------

Allowing anyone to arbitrarily read, write, or learn about the layout of your
filesystem is extremely dangerous.  It can lead to loss of data or to data
being exposed to people who should not have access.


Possible Security Flaws
-----------------------

Assuming that the method-level checks are correct and that control over which
files/directories are allowed is not exposed, 'file' object protection is
secure, even when a 'file' object is leaked from a trusted interpreter to an
untrusted one.


API
--------------

* int PyXXX_AllowFile(interpreter, path, mode)
    Add a file that is allowed to be opened in 'mode' by the 'file' object.  If
    the interpreter is not untrusted then return NULL.

* int PyXXX_AllowDirectory(interpreter, path, mode)
    Add a directory that is allowed to have files opened in 'mode' by the
    'file' object.  This includes both pre-existing files and any new files
    created by the 'file' object.
    XXX allow for creating/reading subdirectories?

* PyXXX_CheckPath(path, mode, error_return)
    Macro that causes the caller to return with 'error_return' and XXX as the
    exception if the specified path with 'mode' is not allowed.  For trusted
    interpreters, the macro does nothing.


Extension Module Importation
============================

Protection
--------------

A whitelist of extension modules that may be imported must be provided.  A
default set is given for stdlib modules known to be safe.

A check in the import machinery will verify that a specified module name is
allowed, based on the type of module (Python source, Python bytecode, or
extension module).  Python bytecode files are never directly imported because
of the possibility of hostile bytecode being present.  Python source is always
trusted, based on the assumption that all resource harm is eventually done at
the C level, and thus Python code cannot directly cause harm.  Thus only C
extension modules need to be checked against the whitelist.

The requested extension module name is checked to make sure that it is on the
whitelist if it is a C extension module.  If the name is not on the whitelist,
an XXX exception is raised.  Otherwise the import is allowed.

Even if a Python source code module imports a C extension module in a trusted
interpreter it is not a problem since the Python source code module is reloaded
in the untrusted interpreter.  When that Python source module is freshly
imported the normal import check will be triggered to prevent the C extension
module from becoming available to the untrusted interpreter.
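
A Python-level model of the check (the real check lives in C inside the import
machinery; the whitelist contents and the ImportError are stand-ins here)::

    import imp

    EXTENSION_WHITELIST = set(['math', 'time'])   # hypothetical default set

    def check_import(name, path=None):
        """Refuse bytecode files and non-whitelisted C extension modules."""
        fileobj, pathname, (suffix, mode, kind) = imp.find_module(name, path)
        if fileobj is not None:
            fileobj.close()
        if kind == imp.PY_COMPILED:
            raise ImportError('bytecode files are never imported directly')
        if kind == imp.C_EXTENSION and name not in EXTENSION_WHITELIST:
            raise ImportError('%r is not on the extension whitelist' % name)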

For the 'os' module, a special restricted version will be used if the proper
C extension module providing the correct abilities is not allowed.  This will
default to '/' as the path separator and provide as many reasonable abilities
as possible from a pure Python module.

The 'sys' module is specially addressed in
`Changing the Behaviour of the Interpreter`_.

By default, the whitelisted modules are:

* XXX work off of rexec whitelist?


Why
--------------

Because C code is considered unsafe, its use should be regulated.  Using a
whitelist allows one to explicitly decide that a C extension module should be
considered safe.


Possible Security Flaws
-----------------------

If a trusted C extension module imports an untrusted C extension module and
makes it an attribute of the trusted module, there will be a breach in
security.  Luckily this is a rarity in extension modules.

There is also the issue of a C extension module calling the C API of an
untrusted C extension module.

Lastly, if a trusted C extension module is loaded in a trusted interpreter and
then loaded into an untrusted interpreter, any security checks in its init*()
function are not re-run, so there is no check during module initialization for
security issues with resources opened at that time.

All of these issues can be handled by never blindly whitelisting a C extension
module.  Added support for dealing with C extension modules comes in the form
of `Extension Module Crippling`_.

API
--------------

* int PyXXX_AllowModule(interpreter, module_name)
    Allow the untrusted interpreter to import 'module_name'.  If the
    interpreter is not untrusted, return NULL.
    XXX sub-modules in packages allowed implicitly?  Or have to list all
    modules explicitly?

* int PyXXX_BlockModule(interpreter, module_name)
    Remove the specified module from the whitelist.  Used to remove modules
    that are allowed by default.  If called on a trusted interpreter, returns
    NULL.

* PyXXX_CheckModule(module_Name, error_return)
    Macro that causes the caller to return with 'error_return' and sets the
    exception XXX if the specified module cannot be imported.  For trusted
    interpreters the macro does nothing.


Extension Module Crippling
==========================

Protection
--------------

By providing a C API for checking for allowed abilities, a module that has some
useful functionality can do proper security checks in those functions that
could provide insecure abilities, while still allowing its safe code to be used
(and thus not be fully denied importation).


Why
--------------

Consider a module that provides a string processing ability.  If that module
provides a single convenience function that reads its input string from a file
(with a specified path), the whole module should not be blocked from being
used, just that convenience function.  By whitelisting the module but having a
security check on the one problem function, the user can still gain access to
the safe functions.  Even better, the unsafe function can be allowed if the
security checks pass.


Possible Security Flaws
-----------------------

If a C extension module developer incorrectly implements the security checks
for the unsafe functions it could lead to undesired abilities.


API
--------------

Use PyXXX_Trusted() to protect unsafe code from being executed.


Hostile Bytecode
=============================

Protection
--------------

The code object's constructor is not callable from Python.  Importation of .pyc
and .pyo files is also prohibited.


Why
--------------

Without implementing a bytecode verification tool, there is no way of making
sure that bytecode does not jump outside its bounds, thus possibly executing
malicious code.  It also presents the possibility of crashing the interpreter.


Possible Security Flaws
-----------------------

None known.


API
--------------

None.


Changing the Behaviour of the Interpreter
=========================================

Protection
--------------

Only a subset of the 'sys' module will be made available to untrusted
interpreters.  Things to allow from the sys module:

* byteorder
* subversion
* copyright
* displayhook
* excepthook
* __displayhook__
* __excepthook__
* exc_info
* exc_clear
* exit
* getdefaultencoding
* _getframe
* hexversion
* last_type
* last_value
* last_traceback
* maxint
* maxunicode
* modules
* stdin  # See `Stdin, Stdout, and Stderr`_.
* stdout
* stderr
* __stdin__  # See `Stdin, Stdout, and Stderr`_  XXX Perhaps not needed?
* __stdout__
* __stderr__
* version
* api_version


Why
--------------

Filesystem information must be removed.  Any settings that could
possibly lead to a DoS attack (e.g., sys.setrecursionlimit()) or risk crashing
the interpreter must also be removed.


Possible Security Flaws
-----------------------

Exposing something that could lead to future security problems (e.g., a way to
crash the interpreter).


API
--------------

None.


Socket Usage
=============================

Protection
--------------

Allow sending and receiving data to/from specific IP addresses on specific
ports.
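
In Python terms the grant reduces to a membership test on (address, port)
pairs; a toy model of what PyXXX_CheckIPAddress() verifies (the addresses and
the IOError are invented for the example)::

    ALLOWED_ENDPOINTS = set([('192.0.2.10', 80), ('192.0.2.10', 443)])

    def check_endpoint(ip, port):
        # IOError stands in for the XXX exception described below.
        if (ip, port) not in ALLOWED_ENDPOINTS:
            raise IOError('communication with %s:%d is not allowed'
                          % (ip, port))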


Why
--------------

Allowing arbitrary sending of data over sockets can lead to DoS attacks on the
network and on other machines.  Limiting the acceptance of data prevents your
machine from being attacked through malicious network connections.  It also
allows you to know exactly where communication is going to and coming from.


Possible Security Flaws
-----------------------

If someone managed to compromise the DNS server in use, they could influence
what IP addresses are returned by a DNS lookup.


API
--------------

* int PyXXX_AllowIPAddress(interpreter, IP, port)
    Allow the untrusted interpreter to send/receive to the specified IP
    address on the specified port.  If the interpreter is not untrusted,
    return NULL.

* PyXXX_CheckIPAddress(IP, port, error_return)
    Macro to verify that the specified IP address on the specified port is
    allowed to be communicated with.  If not, cause the caller to return with
    'error_return' and XXX exception set.  If the interpreter is trusted then
    do nothing.

* PyXXX_AllowHost(interpreter, host, port)
    Allow the untrusted interpreter to send/receive to the specified host on
    the specified port.  If the interpreter is not untrusted, return NULL.
    XXX resolve to IP at call time to prevent DNS man-in-the-middle attacks?

* PyXXX_CheckHost(host, port, error_return)
    Check that the specified host on the specified port is allowed to be
    communicated with.  If not, set an XXX exception and cause the caller to
    return 'error_return'.  If the interpreter is trusted then do nothing.


Network Information
=============================

Protection
--------------

Limit what information can be gleaned about the network the system is running
on.  This does not include restricting information on IP addresses and hosts
that have been explicitly allowed for the untrusted interpreter to communicate
with.


Why
--------------

With enough information from the network several things could occur.  One is
that someone could possibly figure out where your machine is on the Internet.
Another is that enough information about the network you are connected to could
be used against it in an attack.


Possible Security Flaws
-----------------------

As long as usage is restricted to only what is needed to work with allowed
addresses, there are no security issues to speak of.


API
--------------

* int PyXXX_AllowNetworkInfo(interpreter)
    Allow the untrusted interpreter to get network information regardless of
    whether the IP or host address is explicitly allowed.  If the interpreter
    is not untrusted, return NULL.

* PyXXX_CheckNetworkInfo(error_return)
    Macro that will return 'error_return' for the caller and set XXX exception
    if the untrusted interpreter does not allow checking for arbitrary network
    information.  For a trusted interpreter this does nothing.


Filesystem Information
=============================

Protection
--------------

Do not allow information about the filesystem layout from various parts of
Python to be exposed.  This means blocking exposure at the Python level to:

* __file__ attribute on modules
* __path__ attribute on packages
* co_filename attribute on code objects


Why
--------------

Exposing information about the filesystem is not allowed since it can reveal
what operating system one is on, which can lead to vulnerabilities specific to
that operating system being exploited.


Possible Security Flaws
-----------------------

Not finding every single place where a file path is exposed.


API
--------------

* int PyXXX_AllowFilesystemInfo(interpreter)
    Allow the untrusted interpreter to expose filesystem information.  If the
    passed-in interpreter is not untrusted, return NULL.

* PyXXX_CheckFilesystemInfo(error_return)
    Macro that checks if exposing filesystem information is allowed.  If it is
    not, cause the caller to return with the value of 'error_return' and raise
    XXX.


Threading
=============================

XXX  Needed?


Stdin, Stdout, and Stderr
=============================

Protection
--------------

By default, sys.__stdin__, sys.__stdout__, and sys.__stderr__ will be set to
instances of cStringIO.  Use of the real stdin, stdout, and stderr can be
explicitly allowed.
XXX Or perhaps __stdin__ and friends should just be blocked and all you get is
sys.stdin and friends set to cStringIO.
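
At the Python level the effect is roughly the following (a sketch; the actual
swap would be done in C when the untrusted interpreter is created)::

    import sys
    import cStringIO

    sys.stdin = cStringIO.StringIO()     # reads return an empty string
    sys.stdout = cStringIO.StringIO()    # writes are captured, not printed
    sys.stderr = cStringIO.StringIO()

    print "hello"                        # goes into the in-memory buffer
    captured = sys.stdout.getvalue()     # 'hello\n'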


Why
--------------

Interference with stdin, stdout, or stderr should not be allowed unless
desired.


Possible Security Flaws
-----------------------

Unless cStringIO instances can be used maliciously, none to speak of.
XXX Use StringIO instances instead for even better security?


API
--------------

* int PyXXX_UseTrueStdin(interpreter)
  int PyXXX_UseTrueStdout(interpreter)
  int PyXXX_UseTrueStderr(interpreter)
    Set the specific stream for the interpreter to the true version of the
    stream and not to the default instance of cStringIO.  If the interpreter is
    not untrusted, return NULL.


Adding New Protections
=============================

Protection
--------------

Allow for extensibility in the security model by being able to add new types of
checks.  This allows not only Python itself to add new security protections in
a backwards-compatible fashion, but also extension modules to add their own.

An extension module can introduce a group for its various values to check, with
a type being a specific value within a group.  The "Python" group is
specifically reserved for use by the Python core itself.


Why
--------------

We are all human.  There is the possibility that a need for a new type of
protection for the interpreter will present itself and thus need support.
Providing an extensible way to add new protections helps to future-proof the
system.

It also allows extension modules to present their own set of security
protections.  That way one extension module can use the protection scheme
presented by another that it is dependent upon.


Possible Security Flaws
------------------------

Poor definitions by extension module authors of how their protections should be
used would allow for possible exploitation.


API
--------------

XXX Could also have PyXXXExtended prefix instead for the following functions

+ Bool
    * int PyXXX_ExtendedSetTrue(interpreter, group, type)
        Set a group-type to be true.  Expected use is for when a binary
        possibility of something is needed and the default is to not allow
        use of the resource (e.g., network information).  Returns NULL if the
        interpreter is not untrusted.

    * PyXXX_ExtendedCheckTrue(group, type, error_return)
        Macro that if the group-type is not set to true, cause the caller to
        return with 'error_return' with XXX exception raised.  For trusted
        interpreters the check does nothing.

+ Numeric Range
    * int PyXXX_ExtendedValueCap(interpreter, group, type, cap)
        Set a group-type to a capped value, with the initial value set to 0.
        Expected use is when a resource has a capped amount of use (e.g.,
        memory).  Returns NULL if the interpreter is not untrusted.

    * PyXXX_ExtendedValueAlloc(increase, error_return)
        Macro to raise the amount of a resource that is used by 'increase'.  If
        the increase pushes the resource allocation past the set cap, then have
        the caller return 'error_return' and set XXX as the exception.

    * PyXXX_ExtendedValueFree(decrease, error_return)
        Macro to lower the amount of a resource that is used by 'decrease'.  If
        the decrease pushes the allotment to below 0, then have the caller
        return 'error_return' and set XXX as the exception.


+ Membership
    * int PyXXX_ExtendedAddMembership(interpreter, group, type, string)
        Add a string to be considered a member of a group-type (e.g., allowed
        file paths).  If the interpreter is not an untrusted interpreter,
        return NULL.

    * PyXXX_ExtendedCheckMembership(group, type, string, error_return)
        Macro that checks 'string' is a member of the values set for the
        group-type.  If it is not, then have the caller return 'error_return'
        and set an exception for XXX.  For trusted interpreters the call does
        nothing.

+ Specific Value
    * int PyXXX_ExtendedSetValue(interpreter, group, type, string)
        Set a group-type to a specific string.  If the interpreter is not
        untrusted, return NULL.

    * PyXXX_ExtendedCheckValue(group, type, string, error_return)
        Macro to check that the group-type is set to 'string'.  If it is not,
        then have the caller return 'error_return' and set an exception for
        XXX.  If the interpreter is trusted then nothing is done.


References
///////////////////////////////////////

.. [#rexec] The 'rexec' module
   (http://docs.python.org/lib/module-rexec.html)

.. [#safe-tcl] The Safe-Tcl Security Model
   (http://research.sun.com/technical-reports/1997/abstract-60.html)

.. [#ctypes] 'ctypes' module
   (http://docs.python.org/dev/lib/module-ctypes.html)

From rwgk at yahoo.com  Thu Jun 22 02:58:23 2006
From: rwgk at yahoo.com (Ralf W. Grosse-Kunstleve)
Date: Wed, 21 Jun 2006 17:58:23 -0700 (PDT)
Subject: [Python-Dev] ImportWarning flood
Message-ID: <20060622005823.62381.qmail@web31515.mail.mud.yahoo.com>

http://docs.python.org/dev/whatsnew/other-lang.html says:

> One error that Python programmers sometimes make is forgetting to 
> include an __init__.py module in a package directory. Debugging this
> mistake can be confusing, and usually requires running Python with the 
> -v switch to log all the paths searched. In Python 2.5, a new
> ImportWarning warning is raised when an import would have picked up a
> directory as a package but no __init__.py was found.

I am getting tons of "ImportWarning: Not importing directory". See below for
examples. It is impractical for me to reorganize our directory structure. I'd
be busy for a week or more and people would probably scream at me because all
the paths have changed. Are there other options to get rid of the warnings?

Thanks!

Ralf



/net/rosie/scratch1/rwgk/dist/libtbx/libtbx/command_line/scons.py:1:
ImportWarning: Not importing directory '/net/rosie/scratch1/rwgk/dist/libtbx':
missing __init__.py
  from libtbx.utils import Sorry
/net/rosie/scratch1/rwgk/py25b1/build/python/lib/python2.5/random.py:43:
ImportWarning: Not importing directory
'/net/rosie/scratch1/rwgk/dist/cctbx/math': missing __init__.py
  from math import log as _log, exp as _exp, pi as _pi, e as _e, ceil as _ceil
/net/rosie/scratch1/rwgk/py25b1/build/python/lib/python2.5/random.py:43:
ImportWarning: Not importing directory
'/net/rosie/scratch1/rwgk/dist/scitbx/math': missing __init__.py
  from math import log as _log, exp as _exp, pi as _pi, e as _e, ceil as _ceil
scons: Reading SConscript files ...
/net/rosie/scratch1/rwgk/dist/scons/src/engine/SCons/Tool/__init__.py:112:
ImportWarning: Not importing directory
'/net/rosie/scratch1/rwgk/dist/scons/src/engine/SCons/Tool/CVS': missing
__init__.py
  file, path, desc = imp.find_module(self.name, smpath)


/net/rosie/scratch1/rwgk/dist/phenix/phenix/__init__.py:1: ImportWarning: Not
importing directory '/net/rosie/scratch1/rwgk/dist/libtbx': missing __init__.py
  try: import libtbx.forward_compatibility
/net/rosie/scratch1/rwgk/dist/phenix/phenix/refinement/__init__.py:1:
ImportWarning: Not importing directory '/net/rosie/scratch1/rwgk/dist/iotbx':
missing __init__.py
  import iotbx.phil
/net/rosie/scratch1/rwgk/dist/iotbx/iotbx/phil.py:1: ImportWarning: Not
importing directory '/net/rosie/scratch1/rwgk/dist/cctbx': missing __init__.py
  from cctbx import sgtbx
/net/rosie/scratch1/rwgk/dist/cctbx/cctbx/array_family/flex.py:1:
ImportWarning: Not importing directory '/net/rosie/scratch1/rwgk/dist/scitbx':
missing __init__.py
  import scitbx.array_family.flex
/net/rosie/scratch1/rwgk/dist/scitbx/scitbx/array_family/flex.py:2:
ImportWarning: Not importing directory '/net/rosie/scratch1/rwgk/dist/boost':
missing __init__.py
  import boost.optional
/net/rosie/scratch1/rwgk/dist/libtbx/libtbx/utils.py:226: ImportWarning: Not
importing directory '/net/rosie/scratch1/rwgk/dist/mmtbx': missing __init__.py
  try: module = __import__(module_path)

etc. etc.



From talin at acm.org  Thu Jun 22 03:56:49 2006
From: talin at acm.org (Talin)
Date: Wed, 21 Jun 2006 18:56:49 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <5.1.1.6.0.20060621130606.0373c788@sparrow.telecommunity.com>
References: <5.1.1.6.0.20060621121821.03275ca8@sparrow.telecommunity.com>	<17547.19802.361151.705599@montanaro.dyndns.org>	<20060611010410.GA5723@21degrees.com.au>	<5.1.1.6.0.20060618235252.03a71680@sparrow.telecommunity.com>	<44988C6E.4080806@canterbury.ac.nz>
	<449920A4.7040008@gmail.com>	<5.1.1.6.0.20060621121821.03275ca8@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621130606.0373c788@sparrow.telecommunity.com>
Message-ID: <4499F8E1.2020103@acm.org>

Phillip J. Eby wrote:
> At 09:55 AM 6/21/2006 -0700, Guido van Rossum wrote:
> 
>>BTW a switch in a class should be treated the same as a global switch.
>>But what about a switch in a class in a function?
> 
> 
> Okay, now my head hurts.  :)
> 
> A switch in a class doesn't need to be treated the same as a global switch, 
> because locals()!=globals() in that case.
> 
> I think the top-level is the only thing that really needs a special case 
> vs. the general "error if you use a local variable in the expression" rule.
> 
> Actually, it might be simpler just to always reject local variables -- even 
> at the top-level -- and be done with it.

I don't get what the problem is here. A switch constant should have 
exactly the behavior of a default value of a function parameter. We 
don't seem to have too many problems defining functions at the module 
level, do we?
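
For reference, a tiny example of the behavior I mean; the default is
evaluated once, at def time, in the enclosing scope:

    x = 1
    def f(value=x):     # binds the value x has right now
        return value
    x = 2
    print f()           # prints 1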

-- Talin

From talin at acm.org  Thu Jun 22 04:34:00 2006
From: talin at acm.org (Talin)
Date: Wed, 21 Jun 2006 19:34:00 -0700
Subject: [Python-Dev] Allow assignments in 'global' statements?
Message-ID: <449A0198.1060601@acm.org>

I'm sure I am not the first person to say this, but how about:

    global x = 12

(In other words, declare a global and assign a value to it - or another 
way of saying it is that the 'global' keyword acts as an assignment 
modifier.)

-- Talin

From guido at python.org  Thu Jun 22 04:45:58 2006
From: guido at python.org (Guido van Rossum)
Date: Wed, 21 Jun 2006 19:45:58 -0700
Subject: [Python-Dev] ImportWarning flood
In-Reply-To: <20060622005823.62381.qmail@web31515.mail.mud.yahoo.com>
References: <20060622005823.62381.qmail@web31515.mail.mud.yahoo.com>
Message-ID: <ca471dc20606211945r3a2dadbeof694a51c6e333e6@mail.gmail.com>

On 6/21/06, Ralf W. Grosse-Kunstleve <rwgk at yahoo.com> wrote:
> I am getting tons of "ImportWarning: Not importing directory". See below for
> examples. It is impractical for me to reorganize our directory structure. I'd
> be busy for a week or more and people would probably scream at me because all
> the paths have changed. Are there other options to get rid of the warnings?

Check out the -W command line option and the warnings module. These
document how to suppress warnings.
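
For example, a filter along these lines (assuming the message prefix shown in
your tracebacks) could go in a sitecustomize.py or at the top of the entry
script:

    import warnings
    warnings.filterwarnings('ignore',
                            message='Not importing directory',
                            category=ImportWarning)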

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From rwgk at yahoo.com  Thu Jun 22 07:34:53 2006
From: rwgk at yahoo.com (Ralf W. Grosse-Kunstleve)
Date: Wed, 21 Jun 2006 22:34:53 -0700 (PDT)
Subject: [Python-Dev] ImportWarning flood
In-Reply-To: <ca471dc20606211945r3a2dadbeof694a51c6e333e6@mail.gmail.com>
Message-ID: <20060622053453.6483.qmail@web31509.mail.mud.yahoo.com>

--- Guido van Rossum <guido at python.org> wrote:

> On 6/21/06, Ralf W. Grosse-Kunstleve <rwgk at yahoo.com> wrote:
> > I am getting tons of "ImportWarning: Not importing directory". See below
> for
> > examples. It is impractical for me to reorganize our directory structure.
> I'd
> > be busy for a week or more and people would probably scream at me because
> all
> > the paths have changed. Are there other options to get rid of the warnings?
> 
> Check out the -W command line option and the warnings module. These
> document how to suppress warnings.

Thanks!

This does the trick:

python -W'ignore:Not importing directory'

But this doesn't:

python -W'ignore:Not importing directory:ImportWarning'

I tried a bunch of variations without success. A few examples here would be
very valuable:

http://docs.python.org/lib/warning-filter.html

Also, the magic incantation to silence the warnings would be very helpful here:

http://docs.python.org/dev/whatsnew/other-lang.html

Is there a way to set the warning options via an environment variable?
Otherwise I am forced to use a wrapper or aliases.



From kbk at shore.net  Thu Jun 22 07:39:59 2006
From: kbk at shore.net (Kurt B. Kaiser)
Date: Thu, 22 Jun 2006 01:39:59 -0400 (EDT)
Subject: [Python-Dev] Weekly Python Patch/Bug Summary
Message-ID: <200606220540.k5M5dxTF016810@bayview.thirdcreek.com>

Patch / Bug Summary
___________________

Patches :  378 open ( +3) /  3298 closed (+34) /  3676 total (+37)
Bugs    :  886 open (-24) /  5926 closed (+75) /  6812 total (+51)
RFE     :  224 open ( +7) /   227 closed ( +7) /   451 total (+14)

New / Reopened Patches
______________________

Improve super() objects support for implicit method calls  (2006-05-31)
       http://python.org/sf/1498363  opened by  Collin Winter

Improve itertools.starmap  (2006-05-31)
       http://python.org/sf/1498370  opened by  Collin Winter

Change *args from a tuple to list  (2006-05-31)
       http://python.org/sf/1498441  opened by  Collin Winter

Correctly unpickle exceptions  (2006-06-01)
CLOSED http://python.org/sf/1498571  opened by  Žiga Seilnacht

Fault in XMLRPC not always returned to client  (2006-05-31)
CLOSED http://python.org/sf/1498627  opened by  Daniel Savard

Generate from Unicode database instead of manualy coding.  (2006-06-01)
       http://python.org/sf/1498930  opened by  Anders Chrigström

Optimise "in" operations on tuples of consts  (2006-06-01)
CLOSED http://python.org/sf/1499095  opened by  Collin Winter

Fix for memory leak in WindowsError_str  (2006-06-02)
CLOSED http://python.org/sf/1499797  opened by  Žiga Seilnacht

Alternate RFC 3896 compliant URI parsing module  (2006-06-05)
       http://python.org/sf/1500504  opened by  Nick Coghlan

Remove dependencies on the sets module  (2006-06-04)
       http://python.org/sf/1500609  opened by  Collin Winter

(py3k) Remove the sets module  (2006-06-04)
       http://python.org/sf/1500611  opened by  Collin Winter

Remove the repr()-backticks  (2006-06-04)
       http://python.org/sf/1500623  opened by  Collin Winter

wm_attributes doesn't take keyword arguments  (2006-06-04)
       http://python.org/sf/1500773  opened by  Greg Couch

AF_NETLINK support for socket module  (2006-06-05)
CLOSED http://python.org/sf/1500815  opened by  lplatypus

Cyclic garbage collection support for slices  (2006-06-05)
CLOSED http://python.org/sf/1501180  opened by  Žiga Seilnacht

Fix Bug #1339007 - shelve.open('non-existant-file', 'r')  (2006-06-06)
CLOSED http://python.org/sf/1501534  opened by  Jeung Mun Sic

syntax errors on continuation lines  (2006-06-06)
       http://python.org/sf/1501979  opened by  Roger Miller

Remove randomness from test_exceptions  (2006-06-07)
CLOSED http://python.org/sf/1501987  opened by  Žiga Seilnacht

Conditional compilation of zlib.(de)compressobj.copy  (2006-06-08)
CLOSED http://python.org/sf/1503046  opened by  Chris AtLee

Allow Empty Subscript List Without Parentheses  (2006-06-09)
CLOSED http://python.org/sf/1503556  opened by  Noam Raphael

Tiny patch to stop make spam  (2006-06-09)
       http://python.org/sf/1503717  opened by  Chris AtLee

Rough documentation for xml.etree.ElementTree  (2006-06-10)
       http://python.org/sf/1504046  opened by  Fredrik Lundh

Patch for 1496501 tarfile opener order  (2006-06-10)
       http://python.org/sf/1504073  opened by  Jack Diederich

Switch syntax (partial PEP 275)  (2006-06-11)
       http://python.org/sf/1504199  opened by  Thomas Lee

winerror module  (2006-06-13)
       http://python.org/sf/1505257  opened by  M.-A. Lemburg

curses.resizeterm()  (2006-06-15)
CLOSED http://python.org/sf/1506645  opened by  Walter Dörwald

Patch for 1506758 - popen2/subprocess MAXFD MemoryErrors  (2006-06-15)
       http://python.org/sf/1506760  opened by  Peter Vetere

Use a set to keep interned strings  (2006-06-15)
       http://python.org/sf/1507011  opened by  Alexander Belopolsky

tarfile extraction does not honor umask  (2006-06-16)
       http://python.org/sf/1507247  opened by  Faik Uygur

improve object.c and abstract.c exception messages  (2006-06-17)
CLOSED http://python.org/sf/1507676  opened by  Georg Brandl

transparent gzip compression in liburl2  (2006-06-19)
       http://python.org/sf/1508475  opened by  Jakob Truelsen

uuid documentation  (2006-06-20)
       http://python.org/sf/1508766  opened by  George Yoshida

Make Lib/test/regrtest.py NetBSD 3 aware.  (2006-06-20)
CLOSED http://python.org/sf/1509001  opened by  Matt Fleming

MS Toolkit Compiler no longer available  (2006-06-20)
       http://python.org/sf/1509163  opened by  Paul Moore

skip tests in test_socket__ssl when connection refused  (2006-06-20)
CLOSED http://python.org/sf/1509404  reopened by  bcannon

skip tests in test_socket__ssl when connection refused  (2006-06-20)
CLOSED http://python.org/sf/1509404  opened by  Brett Cannon

Small fix for sqlite3 test suite  (2006-06-20)
CLOSED http://python.org/sf/1509584  opened by  Gerhard Häring

tarfile stops iteration with some longfiles  (2006-06-21)
CLOSED http://python.org/sf/1509889  opened by  Faik Uygur

Patches Closed
______________

Possible fix to #1334662 (int() wrong answers)  (2006-03-31)
       http://python.org/sf/1462361  closed by  gbrandl

Correctly unpickle exceptions  (2006-05-31)
       http://python.org/sf/1498571  closed by  gbrandl

Fault in XMLRPC not always returned to client  (2006-06-01)
       http://python.org/sf/1498627  closed by  gbrandl

Fix test_exceptions.py  (2006-05-27)
       http://python.org/sf/1496135  closed by  gbrandl

Let dicts propagate the exceptions in user __eq__  (2006-05-29)
       http://python.org/sf/1497053  closed by  arigo

Optimise "in" operations on tuples of consts  (2006-06-01)
       http://python.org/sf/1499095  closed by  rhettinger

potential crash and free memory read  (2005-11-15)
       http://python.org/sf/1357836  closed by  nnorwitz

Fix for memory leak in WindowsError_str  (2006-06-02)
       http://python.org/sf/1499797  closed by  nnorwitz

Better dead code elimination for the AST compiler  (2005-11-02)
       http://python.org/sf/1346214  closed by  gbrandl

Speed charmap encoder  (2005-11-18)
       http://python.org/sf/1359618  closed by  loewis

AF_NETLINK support for socket module  (2006-06-05)
       http://python.org/sf/1500815  closed by  loewis

patch for SIGSEGV in arraymodule.c  (2006-03-20)
       http://python.org/sf/1454485  closed by  loewis

Cyclic garbage collection support for slices  (2006-06-05)
       http://python.org/sf/1501180  closed by  zseil

Fix Bug #1339007 - shelve.open('non-existant-file', 'r')  (2006-06-06)
       http://python.org/sf/1501534  closed by  gbrandl

Remove randomness from test_exceptions  (2006-06-06)
       http://python.org/sf/1501987  closed by  tim_one

Corrupt Berkeley DB using Modify bsddb.dbtables  (2006-01-17)
       http://python.org/sf/1408584  closed by  greg

Conditional compilation of zlib.(de)compressobj.copy  (2006-06-08)
       http://python.org/sf/1503046  closed by  nnorwitz

Allow Empty Subscript List Without Parentheses  (2006-06-09)
       http://python.org/sf/1503556  closed by  gbrandl

Add help reference on Mac  (04/25/06)
       http://python.org/sf/1476578  closed by  sf-robot

Windows CE support (part 2)  (2006-05-27)
       http://python.org/sf/1495999  closed by  loewis

Update documentation for __builtins__  (2005-09-24)
       http://python.org/sf/1303595  closed by  fdrake

Suggested Additional Material for urllib2 docs  (2005-06-08)
       http://python.org/sf/1216942  closed by  fdrake

AssertionError when building rpm under RedHat 9.1  (2003-05-02)
       http://python.org/sf/731328  closed by  jafo

patch for mbcs codecs  (2006-03-22)
       http://python.org/sf/1455898  closed by  loewis

BaseWidget.destroy updates master's childern too early  (2006-05-25)
       http://python.org/sf/1494750  closed by  loewis

curses.resizeterm()  (2006-06-15)
       http://python.org/sf/1506645  closed by  doerwalter

improve object.c and abstract.c exception messages  (2006-06-17)
       http://python.org/sf/1507676  closed by  gbrandl

Minor enhancements to Variable class  (2003-07-01)
       http://python.org/sf/763580  closed by  loewis

Fix for wm_iconbitmap to allow .ico files under Windows.  (2005-01-05)
       http://python.org/sf/1096231  closed by  loewis

upgrade pyexpat to expat 2.0.0  (2006-03-31)
       http://python.org/sf/1462338  closed by  tmick

Make Lib/test/regrtest.py NetBSD 3 aware.  (2006-06-20)
       http://python.org/sf/1509001  closed by  gbrandl

skip tests in test_socket__ssl when connection refused  (2006-06-20)
       http://python.org/sf/1509404  closed by  bcannon

skip tests in test_socket__ssl when connection refused  (2006-06-20)
       http://python.org/sf/1509404  closed by  bcannon

Small fix for sqlite3 test suite  (2006-06-20)
       http://python.org/sf/1509584  closed by  ghaering

tarfile stops iteration with some longfiles  (2006-06-21)
       http://python.org/sf/1509889  closed by  gbrandl

New / Reopened Bugs
___________________

Pickling exceptions crashes Python  (2006-05-30)
CLOSED http://python.org/sf/1497319  opened by  Žiga Seilnacht

__self reserved WATCOM 10.6 word  (2006-05-30)
CLOSED http://python.org/sf/1497414  opened by  kbob_ru

Leak in tarfile.py  (2006-05-31)
CLOSED http://python.org/sf/1497962  opened by  Jens Jørgen Mortensen

MSVC compiler problems with .NET v2.0  (2006-05-31)
CLOSED http://python.org/sf/1498051  opened by  Retief Gerber

optparse does not hande unicode help strings  (2006-05-31)
CLOSED http://python.org/sf/1498146  opened by  Tom Cato Amundsen

tp_alloc for subtypes of  PyComplex_Type is not called  (2006-05-31)
CLOSED http://python.org/sf/1498638  opened by  Travis Oliphant

lstrip does not work properly  (2006-06-01)
CLOSED http://python.org/sf/1499049  opened by  rpache

lstrip does not work properly  (2006-06-02)
CLOSED http://python.org/sf/1499316  opened by  rpache

interpret termination, object deleting  (2006-06-03)
CLOSED http://python.org/sf/1500167  opened by  Jan Martinek

re.escape incorrectly escape literal.  (2006-06-03)
CLOSED http://python.org/sf/1500179  opened by  Baptiste Lepilleur

Memory leak in subprocess module  (2006-06-04)
CLOSED http://python.org/sf/1500293  opened by  Žiga Seilnacht

Lang ref '<' description in 5.9 not consistent with __lt__  (2006-06-05)
CLOSED http://python.org/sf/1501122  opened by  Andy Harrington

Possible buffer overflow in Python/sysmodule.c  (2006-06-05)
CLOSED http://python.org/sf/1501223  opened by  Brett Cannon

python/ncurses bug in 2.4.3 with extended ascii  (2006-06-05)
       http://python.org/sf/1501291  opened by  UnixOps

failure of test_ossaudiodev; elapsed time .1 sec faster  (2006-06-05)
       http://python.org/sf/1501330  opened by  Brett Cannon

method format of logging.Formatter caches incorrectly  (2006-06-06)
       http://python.org/sf/1501699  opened by  Boris Lorbeer

incorrect LOAD/STORE_GLOBAL generation  (2006-06-07)
       http://python.org/sf/1501934  opened by  Thomas Wouters

crash in expat when compiling with --enable-profiling  (2006-06-07)
       http://python.org/sf/1502517  opened by  Ronald Oussoren

HP-UX shared library does not reference librt  (2006-06-08)
CLOSED http://python.org/sf/1502728  opened by  Göran Uddeborg

PyArg_ParseTuple(args, "i") and sys.maxint  (2006-06-08)
CLOSED http://python.org/sf/1502750  reopened by  lemburg

PyArg_ParseTuple(args, "i") and sys.maxint  (2006-06-08)
CLOSED http://python.org/sf/1502750  opened by  M.-A. Lemburg

'with' sometimes eats exceptions  (2006-06-08)
CLOSED http://python.org/sf/1502805  opened by  Armin Rigo

"/".join() throws OverflowError  (2006-06-08)
CLOSED http://python.org/sf/1503157  opened by  Wummel

'make install' fails on OS X 10.4 when running compileall  (2006-06-08)
CLOSED http://python.org/sf/1503294  opened by  Brett Cannon

Pdb doesn't call flush on its stdout file descriptor  (2006-06-09)
       http://python.org/sf/1503502  opened by  Matt Fleming

logger.config problems with comma separated lists  (2006-06-09)
       http://python.org/sf/1503765  opened by  cuppatea

stdin directory causes crash (SIGSEGV)  (2006-06-09)
CLOSED http://python.org/sf/1503780  opened by  Ben Liblit

Cannot write source code in UTF16  (2006-06-09)
       http://python.org/sf/1503789  opened by  Wai Yip Tung

csv.Sniffer - says "1 method", shows 2  (2006-06-10)
CLOSED http://python.org/sf/1503883  opened by  Frank Millman

sgmlib should allow angle brackets in quoted values  (2006-06-11)
       http://python.org/sf/1504333  opened by  Sam Ruby

xmlcore needs to be documented  (2006-06-11)
       http://python.org/sf/1504456  opened by  Fred L. Drake, Jr.

Make sgmllib char and entity references pluggable  (2006-06-12)
CLOSED http://python.org/sf/1504676  opened by  Sam Ruby

There should be a Python build using Visual Studio 2005  (2006-06-12)
       http://python.org/sf/1504947  opened by  Vincent Manis

under Windows XP, os.walk problem with path >256? chars  (2006-06-12)
CLOSED http://python.org/sf/1504998  opened by  Mike Coleman

Wrong grammar  (2006-06-12)
CLOSED http://python.org/sf/1505081  opened by  Milind

Incorrect comment in socket.py  (2006-06-12)
CLOSED http://python.org/sf/1505095  opened by  Bruce Christensen

Add support for GNU --long options (interpreter)  (2006-06-14)
CLOSED http://python.org/sf/1505841  opened by  Jari Aalto

If MAXFD too high, popen2/subprocess produce MemoryErrors  (2006-06-15)
       http://python.org/sf/1506758  opened by  Peter Vetere

Misleading error message from PyObject_GenericSetAttr    (2006-06-15)
CLOSED http://python.org/sf/1506776  opened by  Alexander Belopolsky

list bug  (2006-06-15)
CLOSED http://python.org/sf/1506799  opened by  SPlyer

pydoc fails on package in ZIP archive  (2006-06-15)
CLOSED http://python.org/sf/1506945  opened by  Christopher Dunn

pydoc fails on package in ZIP archive  (2006-06-15)
       http://python.org/sf/1506951  opened by  Christopher Dunn

HTTPResponse instance has no attribute 'code'  (2006-06-16)
CLOSED http://python.org/sf/1507166  opened by  yodalf

sys.path issue if sys.prefix contains a colon  (2006-06-16)
       http://python.org/sf/1507224  opened by  Ronald Oussoren

Broken example in optparse module documentation  (2006-06-16)
CLOSED http://python.org/sf/1507379  opened by  Michal Krenek

logging fileConfig swallows handler exception   (2006-06-18)
       http://python.org/sf/1508253  opened by  tdir

logging module formatter problem with %(filename)s  (2006-06-18)
       http://python.org/sf/1508369  opened by  David Hess

"...." (four dots) confuses doctest's ellipsis matching  (2006-06-19)
CLOSED http://python.org/sf/1508564  opened by  Andrew Bennetts

os.spawnv fails when argv is a length 1 tuple  (2006-06-19)
CLOSED http://python.org/sf/1508833  opened by  ncloud

failed to load tuxedo libs  (2006-06-19)
CLOSED http://python.org/sf/1508848  opened by  William Ding

threading.Timer breaks when you change system time on win32  (2006-06-19)
       http://python.org/sf/1508864  opened by  Russell Warren

compiler module builds incorrect AST for TryExceptFinally  (2006-06-20)
CLOSED http://python.org/sf/1509132  opened by  Adrien Di Mascio

Absolute/relative import not working?  (2006-06-21)
       http://python.org/sf/1510172  opened by  Mitch Chapman

Bugs Closed
___________

Pickling exceptions crashes Python  (2006-05-30)
       http://python.org/sf/1497319  closed by  gbrandl

__self reserved WATCOM 10.6 word  (2006-05-30)
       http://python.org/sf/1497414  closed by  akuchling

hyper-threading locks up sleeping threads  (2006-05-08)
       http://python.org/sf/1484172  closed by  tim_one

strptime: wrong default values used to fill in missing data  (2006-05-28)
       http://python.org/sf/1496315  closed by  bcannon

Leak in tarfile.py  (2006-05-30)
       http://python.org/sf/1497962  closed by  sf-robot

MSVC compiler problems with .NET v2.0  (2006-05-31)
       http://python.org/sf/1498051  closed by  loewis

optparse does not hande unicode help strings  (2006-05-31)
       http://python.org/sf/1498146  closed by  gward

SimpleXMLRPCServer responds to any path  (2006-04-19)
       http://python.org/sf/1473048  closed by  akuchling

tp_alloc for subtypes of  PyComplex_Type is not called  (2006-06-01)
       http://python.org/sf/1498638  closed by  gbrandl

dictobject.c:dictresize() vulnerability  (2006-03-22)
       http://python.org/sf/1456209  closed by  arigo

dict key comparison swallows exceptions  (2005-08-29)
       http://python.org/sf/1275608  closed by  arigo

Traceback error when compiling Regex   (2006-03-22)
       http://python.org/sf/1456280  closed by  gbrandl

lstrip does not work properly  (2006-06-01)
       http://python.org/sf/1499049  closed by  gbrandl

lstrip does not work properly  (2006-06-02)
       http://python.org/sf/1499316  closed by  gbrandl

sgmllib do_tag description error  (2006-04-17)
       http://python.org/sf/1472084  closed by  akuchling

time module missing from global mod index  (2006-05-16)
       http://python.org/sf/1489648  closed by  akuchling

Poorly worded description for socket.makefile()  (2006-04-24)
       http://python.org/sf/1475554  closed by  akuchling

Omission in docs for urllib2.urlopen()  (2006-03-02)
       http://python.org/sf/1441864  closed by  akuchling

interpret termination, object deleting  (2006-06-03)
       http://python.org/sf/1500167  closed by  loewis

re.escape incorrectly escape literal.  (2006-06-03)
       http://python.org/sf/1500179  closed by  blep

Memory leak in subprocess module  (2006-06-04)
       http://python.org/sf/1500293  closed by  gbrandl

Lang ref '<' description in 5.9 not consistent with __lt__  (2006-06-05)
       http://python.org/sf/1501122  closed by  gbrandl

distutils.core: link to list of Trove classifiers  (2006-04-13)
       http://python.org/sf/1470026  closed by  akuchling

Possible buffer overflow in Python/sysmodule.c  (2006-06-05)
       http://python.org/sf/1501223  closed by  bcannon

bsddb: db never opened for writing forgets its size  (2006-05-22)
       http://python.org/sf/1493322  closed by  greg

Make logging consistent in the standard library  (2003-08-19)
       http://python.org/sf/791410  closed by  gbrandl

Distutils does not use logging  (2005-07-19)
       http://python.org/sf/1241006  closed by  gbrandl

argvemulator doesn't work on intel mac  (2006-05-19)
       http://python.org/sf/1491468  closed by  ronaldoussoren

OS X framework build for python 2.5 fails, configure is odd  (2006-05-11)
       http://python.org/sf/1486897  closed by  ronaldoussoren

bsddb3 hash craps out with threads  (2003-07-21)
       http://python.org/sf/775414  closed by  greg

HP-UX shared library does not reference librt  (2006-06-08)
       http://python.org/sf/1502728  closed by  gbrandl

PyArg_ParseTuple(args, "i") and sys.maxint  (2006-06-08)
       http://python.org/sf/1502750  closed by  gbrandl

PyArg_ParseTuple(args, "i") and sys.maxint  (2006-06-08)
       http://python.org/sf/1502750  closed by  gbrandl

'with' sometimes eats exceptions  (2006-06-08)
       http://python.org/sf/1502805  closed by  gbrandl

built-in method .__cmp__  (2005-11-07)
       http://python.org/sf/1350060  closed by  arigo

int/long assume that the buffer ends in \0 => CRASH  (2006-05-25)
       http://python.org/sf/1495033  closed by  bcannon

"/".join() throws OverflowError  (2006-06-08)
       http://python.org/sf/1503157  closed by  gbrandl

xml.sax.saxutils.XMLGenerator mangles \r\n\t in attributes  (2006-04-19)
       http://python.org/sf/1472827  closed by  akuchling

Recursive class instance "error"  (2002-03-20)
       http://python.org/sf/532646  closed by  bcannon

stdin from directory causes crash (SIGSEGV)  (2006-06-10)
       http://python.org/sf/1503780  closed by  gbrandl

-Wi causes a fatal Python error  (2006-06-09)
       http://python.org/sf/1503294  closed by  arigo

Install under osx 10.4.6 breaks shell.  (2006-05-25)
       http://python.org/sf/1495210  closed by  sf-robot

csv.Sniffer - says "1 method", shows 2  (2006-06-10)
       http://python.org/sf/1503883  closed by  frankmillman

optparse: extending actions missing ALWAYS_TYPED_ACTIONS  (2006-03-13)
       http://python.org/sf/1449311  closed by  gward

dest parameter in optparse  (2005-04-15)
       http://python.org/sf/1183972  closed by  gward

textwrap.dedent() expands tabs  (2005-11-19)
       http://python.org/sf/1361643  closed by  gward

packman upgrade issue  (2004-05-31)
       http://python.org/sf/963494  closed by  ronaldoussoren

incorrect documentation for optparse  (2005-11-25)
       http://python.org/sf/1366250  closed by  gward

Make sgmllib char and entity references pluggable  (2006-06-12)
       http://python.org/sf/1504676  closed by  fdrake

under Windows XP, os.walk problem with path >256? chars  (2006-06-12)
       http://python.org/sf/1504998  closed by  loewis

Wrong grammar  (2006-06-12)
       http://python.org/sf/1505081  closed by  tim_one

Incorrect comment in socket.py  (2006-06-13)
       http://python.org/sf/1505095  closed by  gbrandl

code that generates a segfault on Python 2.1-2.3  (2004-07-15)
       http://python.org/sf/992017  closed by  bcannon

shelve.Shelf.__del__ throws exceptions  (2005-10-26)
       http://python.org/sf/1339007  closed by  gbrandl

"u#" doesn't check object type  (2002-11-13)
       http://python.org/sf/637547  closed by  gbrandl

str.join() intercepts TypeError raised by iterator  (2004-02-26)
       http://python.org/sf/905389  closed by  gbrandl

Add support for GNU --long options (interpreter)  (2006-06-14)
       http://python.org/sf/1505841  closed by  gbrandl

reflected operator not used when operands have the same type  (2005-02-28)
       http://python.org/sf/1153163  closed by  gbrandl

mimetypes.py does not find mime.types on Mac OS X  (2005-05-14)
       http://python.org/sf/1202018  closed by  gbrandl

SimpleHTTPServer and mimetypes: almost together  (2005-02-06)
       http://python.org/sf/1117556  closed by  gbrandl

No struct.pack exception for some out of range integers  (2005-06-28)
       http://python.org/sf/1229380  closed by  gbrandl

SIGSEGV causes hung threads (Linux)  (2003-06-18)
       http://python.org/sf/756924  closed by  gbrandl

os.listdir fails for pathprefix \\?\d:...  (2004-02-12)
       http://python.org/sf/895567  closed by  gbrandl

PyUnicode_FromEncodedObject  (2003-09-12)
       http://python.org/sf/805015  closed by  gbrandl

raw_input() displays wrong unicode prompt  (2005-01-10)
       http://python.org/sf/1099364  closed by  gbrandl

Misleading error message from PyObject_GenericSetAttr    (2006-06-15)
       http://python.org/sf/1506776  closed by  gbrandl

list bug  (2006-06-15)
       http://python.org/sf/1506799  closed by  tim_one

pydoc fails on package in ZIP archive  (2006-06-15)
       http://python.org/sf/1506945  deleted by  christopherdunn

HTTPResponse instance has no attribute 'code'  (2006-06-16)
       http://python.org/sf/1507166  closed by  yodalf

Broken example in optparse module documentation  (2006-06-16)
       http://python.org/sf/1507379  closed by  nnorwitz

turtle.py deferres exec of stmnts with tracer(0)  (2003-09-26)
       http://python.org/sf/812986  closed by  loewis

tkMessageBox functions reject type and ico  (2003-10-01)
       http://python.org/sf/815924  closed by  loewis

"...." (four dots) confuses doctest's ellipsis matching  (2006-06-19)
       http://python.org/sf/1508564  closed by  gbrandl

os.spawnv fails when argv is a length 1 tuple  (2006-06-19)
       http://python.org/sf/1508833  closed by  gbrandl

failed to load tuxedo libs  (2006-06-19)
       http://python.org/sf/1508848  closed by  nnorwitz

expat symbols should be namespaced in pyexpat  (2005-09-19)
       http://python.org/sf/1295808  closed by  tmick

compiler module builds incorrect AST for TryExceptFinally  (2006-06-20)
       http://python.org/sf/1509132  closed by  gbrandl

New / Reopened RFE
__________________

C API to retain GIL during Python Callback  (2006-05-30)
       http://python.org/sf/1497532  opened by  Martin Gfeller

Add write buffering to gzip  (2006-06-05)
       http://python.org/sf/1501108  opened by  Raymond Hettinger

Add "compose" function to the functools  (2006-06-14)
       http://python.org/sf/1506122  opened by  Gregory Petrosyan

Add "methodcaller" to the operator module  (2006-06-14)
       http://python.org/sf/1506171  opened by  Gregory Petrosyan

Add "methodcaller" to the operator module  (2006-06-14)
CLOSED http://python.org/sf/1506190  opened by  Gregory Petrosyan

Add "methodcaller" to the operator module  (2006-06-14)
CLOSED http://python.org/sf/1506211  opened by  Gregory Petrosyan

Add "methodcaller" to the operator module  (2006-06-14)
CLOSED http://python.org/sf/1506216  opened by  Gregory Petrosyan

Add some dicts to datetime module  (2006-06-14)
       http://python.org/sf/1506296  opened by  Gregory Petrosyan

Add some dicts to datetime module  (2006-06-15)
CLOSED http://python.org/sf/1506313  opened by  Gregory Petrosyan

Add some dicts to datetime module  (2006-06-15)
CLOSED http://python.org/sf/1506324  opened by  Gregory Petrosyan

Add some dicts to datetime module  (2006-06-15)
       http://python.org/sf/1506340  opened by  Gregory Petrosyan

Interrupt/kill threads w/exception  (2006-06-20)
       http://python.org/sf/1509060  opened by  Oliver Bock

replace dist/src/Tools/scripts/which.py with tmick's which  (2006-06-21)
       http://python.org/sf/1509798  opened by  wrstl prmpft

RFE Closed
__________

str.startswith/endswith could take a tuple?  (2006-05-19)
       http://python.org/sf/1491485  closed by  gbrandl

Add "methodcaller" to the operator module  (2006-06-14)
       http://python.org/sf/1506190  closed by  gbrandl

Add "methodcaller" to the operator module  (2006-06-14)
       http://python.org/sf/1506211  closed by  gbrandl

Add "methodcaller" to the operator module  (2006-06-14)
       http://python.org/sf/1506216  closed by  gbrandl

Add some dicts to datetime module  (2006-06-14)
       http://python.org/sf/1506313  closed by  gbrandl

Add some dicts to datetime module  (2006-06-14)
       http://python.org/sf/1506324  closed by  gbrandl

Use new expat version 2.0  (2006-02-17)
       http://python.org/sf/1433435  closed by  gbrandl


From martin at v.loewis.de  Thu Jun 22 08:05:12 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Thu, 22 Jun 2006 08:05:12 +0200
Subject: [Python-Dev] ImportWarning flood
In-Reply-To: <20060622053453.6483.qmail@web31509.mail.mud.yahoo.com>
References: <20060622053453.6483.qmail@web31509.mail.mud.yahoo.com>
Message-ID: <449A3318.7050804@v.loewis.de>

Ralf W. Grosse-Kunstleve wrote:
> Is there a way to set the warning options via an environment variable?

This is off-topic for python-dev, but: why not switch off the warnings
in the code?

Regards,
Martin

From g.brandl at gmx.net  Thu Jun 22 08:07:59 2006
From: g.brandl at gmx.net (Georg Brandl)
Date: Thu, 22 Jun 2006 08:07:59 +0200
Subject: [Python-Dev] PyRange_New() alternative?
In-Reply-To: <20060621233848.89342.qmail@web31504.mail.mud.yahoo.com>
References: <20060621233848.89342.qmail@web31504.mail.mud.yahoo.com>
Message-ID: <e7dc3v$ojo$1@sea.gmane.org>

Ralf W. Grosse-Kunstleve wrote:
> http://docs.python.org/dev/whatsnew/ports.html says:
> 
>   The PyRange_New() function was removed. It was never documented, never used
> in the core code, and had dangerously lax error checking.
> 
> I use this function (don't remember how I found it; this was years ago),
> therefore my code doesn't compile with 2.5b1 (it did OK before with 2.5a2). Is
> there an alternative spelling for PyRange_New()?

You can call PyRange_Type with the appropriate parameters.

Georg


From nnorwitz at gmail.com  Thu Jun 22 08:35:07 2006
From: nnorwitz at gmail.com (Neal Norwitz)
Date: Wed, 21 Jun 2006 23:35:07 -0700
Subject: [Python-Dev] Things to remember when adding *packages* to stdlib
Message-ID: <ee2a432c0606212335n735716eelae07f6e1d51ceeb@mail.gmail.com>

I believe this change is all that's necessary on the Unix side to
install wsgiref.  Can someone please update the Windows build files to
ensure wsgiref is installed in b2?  Don't forget to update the NEWS
entry too.

Also, all committers and reviewers, try to remember that when a
package (meaning directory) is added to the stdlib, we need to update
Makefile.pre.in and the Windows build files.  (I forgot this time
too.)

Is there some documentation that should be updated as a reminder?
Maybe someone could come up with a heuristic to add to Misc/build.sh
that we could run there as a check.
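
A rough sketch of one possible check (my own illustration, not existing
code; it assumes the installed package list lives in the LIBSUBDIRS
variable of Makefile.pre.in and only looks at top-level packages
under Lib/):

    import os

    def libsubdirs(makefile='Makefile.pre.in'):
        # collect the backslash-continued LIBSUBDIRS assignment
        dirs, collecting = [], False
        for line in open(makefile):
            line = line.rstrip('\n')
            if line.startswith('LIBSUBDIRS='):
                collecting = True
                line = line[len('LIBSUBDIRS='):]
            if collecting:
                more = line.endswith('\\')
                dirs.extend(line.rstrip('\\').split())
                if not more:
                    break
        return set(dirs)

    def missing_packages(srcdir='.'):
        # report Lib/ packages whose directories "make install" would skip
        listed = libsubdirs(os.path.join(srcdir, 'Makefile.pre.in'))
        missing = []
        for name in sorted(os.listdir(os.path.join(srcdir, 'Lib'))):
            pkg = os.path.join(srcdir, 'Lib', name)
            if (os.path.isdir(pkg)
                    and os.path.exists(os.path.join(pkg, '__init__.py'))
                    and name not in listed):
                missing.append(name)
        return missing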

n
--

On 6/21/06, neal.norwitz <python-checkins at python.org> wrote:
> Author: neal.norwitz
> Date: Thu Jun 22 08:30:50 2006
> New Revision: 47064
>
> Modified:
>    python/trunk/Makefile.pre.in
>    python/trunk/Misc/NEWS
> Log:
> Copy the wsgiref package during make install.
>
>
>
> Modified: python/trunk/Makefile.pre.in
> ==============================================================================
> --- python/trunk/Makefile.pre.in        (original)
> +++ python/trunk/Makefile.pre.in        Thu Jun 22 08:30:50 2006
> @@ -708,7 +708,7 @@
>                 encodings compiler hotshot \
>                 email email/mime email/test email/test/data \
>                 sqlite3 sqlite3/test \
> -               logging bsddb bsddb/test csv \
> +               logging bsddb bsddb/test csv wsgiref \
>                 ctypes ctypes/test ctypes/macholib idlelib idlelib/Icons \
>                 distutils distutils/command distutils/tests $(XMLLIBSUBDIRS) \
>                 setuptools setuptools/command setuptools/tests setuptools.egg-info \
>
> Modified: python/trunk/Misc/NEWS
> ==============================================================================
> --- python/trunk/Misc/NEWS      (original)
> +++ python/trunk/Misc/NEWS      Thu Jun 22 08:30:50 2006
> @@ -19,6 +19,8 @@
>  - The compiler module now correctly compiles the new try-except-finally
>    statement (bug #1509132).
>
> +- The wsgiref package is now installed properly on Unix.
> +
>
>
>  What's New in Python 2.5 beta 1?

From greg.ewing at canterbury.ac.nz  Thu Jun 22 09:24:14 2006
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Thu, 22 Jun 2006 19:24:14 +1200
Subject: [Python-Dev] Switch statement
In-Reply-To: <e7c135$4ql$1@sea.gmane.org>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<449707E2.7060803@ewtllc.com>
	<ca471dc20606191329p1a978983g2c3ea7401213d7dd@mail.gmail.com>
	<4498924F.5000508@canterbury.ac.nz>
	<5.1.1.6.0.20060620210402.01e9bc78@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621012733.01e8f660@sparrow.telecommunity.com>
	<Pine.LNX.4.58.0606210334470.17937@server1.LFW.org>
	<5.1.1.6.0.20060621082808.01e90d18@sparrow.telecommunity.com>
	<ca471dc20606210926i2681b27erff515fe4b96022cf@mail.gmail.com>
	<e7bssg$hke$1@sea.gmane.org>
	<ca471dc20606210947s1490bb66s6f040c78239dc623@mail.gmail.com>
	<e7c135$4ql$1@sea.gmane.org>
Message-ID: <449A459E.5040108@canterbury.ac.nz>

Fredrik Lundh wrote:
>      Q: If a program calls the 'func' function below as 'func()'
>         and ONE and TWO are both integer objects, what does 'func'
                                                               ^^^^^^
Nothing at all, because you didn't call it!

--
Greg

From jcarlson at uci.edu  Thu Jun 22 09:30:37 2006
From: jcarlson at uci.edu (Josiah Carlson)
Date: Thu, 22 Jun 2006 00:30:37 -0700
Subject: [Python-Dev] Allow assignments in 'global' statements?
In-Reply-To: <449A0198.1060601@acm.org>
References: <449A0198.1060601@acm.org>
Message-ID: <20060622002712.1DD1.JCARLSON@uci.edu>


Talin <talin at acm.org> wrote:
> 
> I'm sure I am not the first person to say this, but how about:
> 
>     global x = 12
> 
> (In other words, declare a global and assign a value to it - or another 
> way of saying it is that the 'global' keyword acts as an assignment 
> modifier.)

If we allow that, then we necessarily need to allow:

    global x, y, z = 12, 13, 14

... while I have also thought to myself 'it would be really nice to be
able to assign to a global when I declare a value to be global', it has
usually been a case of premature optimization for me.  I don't really
know how I feel about either example above; I'd probably be +0.

 - Josiah


From rogermiller at alum.mit.edu  Thu Jun 22 10:06:45 2006
From: rogermiller at alum.mit.edu (Roger Miller)
Date: Wed, 21 Jun 2006 22:06:45 -1000
Subject: [Python-Dev] Switch statement
In-Reply-To: <mailman.27347.1150881823.27774.python-dev@python.org>
References: <mailman.27347.1150881823.27774.python-dev@python.org>
Message-ID: <449A4F95.4070608@alum.mit.edu>

Ka-Ping Yee wrote:
 > Hmm, this is rather nice.  I can imagine possible use cases for
 >
 >    switch x:
 >        case > 3: foo(x)
 >        case is y: spam(x)
 >        case == z: eggs(x)

Part of the readability advantage of a switch over an if/elif chain is 
the semantic parallelism, which would make me question mixing different 
tests in the same switch.  What if the operator moved into the switch 
header?

     switch x ==:
         case 1: foo(x)
         case 2, 3: bar(x)

     switch x in:
         case (1, 3, 5): do_odd(x)
         case (2, 4, 6): do_even(x)

"switch x:" could be equivalent to "switch x ==:", for the common case.

I've also been wondering whether the 'case' keyword is really necessary.
Would any ambiguities or other parsing problems arise if you wrote:

     switch x:
         1: foo(x)
         2: bar(x)

It is debatable whether this is more or less readable, but it seemed 
like an interesting question for the language lawyers.

From gh at ghaering.de  Thu Jun 22 10:16:12 2006
From: gh at ghaering.de (=?ISO-8859-1?Q?Gerhard_H=E4ring?=)
Date: Thu, 22 Jun 2006 10:16:12 +0200
Subject: [Python-Dev] doc for new restricted execution design for Python
In-Reply-To: <bbaeab100606211733x3dd58226g15e2919e8e2cdd7e@mail.gmail.com>
References: <bbaeab100606211733x3dd58226g15e2919e8e2cdd7e@mail.gmail.com>
Message-ID: <449A51CC.3070108@ghaering.de>

Brett Cannon wrote:
> I have been working on a design doc for restricted execution of Python
> [...]

All the rest of the API made sense to me, but I couldn't understand why

PyXXX_MemoryFree

is needed. How could memory usage possibly fall below 0?

-- Gerhard

From rwgk at yahoo.com  Thu Jun 22 10:26:15 2006
From: rwgk at yahoo.com (Ralf W. Grosse-Kunstleve)
Date: Thu, 22 Jun 2006 01:26:15 -0700 (PDT)
Subject: [Python-Dev] ImportWarning flood
In-Reply-To: <449A3318.7050804@v.loewis.de>
Message-ID: <20060622082615.73007.qmail@web31509.mail.mud.yahoo.com>

--- "Martin v. L?wis" <martin at v.loewis.de> wrote:

> Ralf W. Grosse-Kunstleve wrote:
> > Is there a way to set the warning options via an environment variable?
> 
> This is off-topic for python-dev,

Which channel should I use? (I am testing beta 1.)

> but: why don't switch off the warnings
> in the code?

We support installation from sources with the native Python if available. Any
Python >= 2.2.1 works. It would be frustrating if we had to give up on this
just because of a warning designed for newcomers.

In our applications we typically address this type of problem with informative
exceptions. For example, if a Boost.Python wrapped C++ object doesn't support
pickling:

  RuntimeError: Pickling of "cctbx_sgtbx_ext.space_group_symbols" instances is
not enabled (http://www.boost.org/libs/python/doc/v2/pickle.html)

Something like this could help newcomers just the same without impacting
experienced users with large, complex package structures. E.g.:

>>> import mypackage.foo
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
ImportError: No module named mypackage.foo
    Note that subdirectories are searched for imports only if they contain an
    __init__.py file: http://www.python.org/doc/essays/packages.html



From greg.ewing at canterbury.ac.nz  Thu Jun 22 09:08:43 2006
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Thu, 22 Jun 2006 19:08:43 +1200
Subject: [Python-Dev] Switch statement
In-Reply-To: <5.1.1.6.0.20060621012733.01e8f660@sparrow.telecommunity.com>
References: <5.1.1.6.0.20060620210402.01e9bc78@sparrow.telecommunity.com>
	<17547.19802.361151.705599@montanaro.dyndns.org>
	<ca471dc20606190953m36151e11k64c506d5d5fa4896@mail.gmail.com>
	<4496DD2F.30501@ewtllc.com>
	<ca471dc20606191035p40d5e85j7a8bb957a2dfb0ec@mail.gmail.com>
	<4496FB54.5060800@ewtllc.com>
	<ca471dc20606191247l183eaf0eh6012600500ea311b@mail.gmail.com>
	<449707E2.7060803@ewtllc.com>
	<ca471dc20606191329p1a978983g2c3ea7401213d7dd@mail.gmail.com>
	<4498924F.5000508@canterbury.ac.nz>
	<5.1.1.6.0.20060620210402.01e9bc78@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621012733.01e8f660@sparrow.telecommunity.com>
Message-ID: <449A41FB.8050706@canterbury.ac.nz>

Phillip J. Eby wrote:

>      switch x:
>          case == 1: foo(x)

Aesthetically, I don't like that.

--
Greg

From mwh at python.net  Thu Jun 22 11:40:02 2006
From: mwh at python.net (Michael Hudson)
Date: Thu, 22 Jun 2006 10:40:02 +0100
Subject: [Python-Dev] Numerical robustness, IEEE etc.
In-Reply-To: <E1FtA9Y-0004r8-47@draco.cus.cam.ac.uk> (Nick Maclaren's
	message of "Wed, 21 Jun 2006 22:22:24 +0100")
References: <E1FtA9Y-0004r8-47@draco.cus.cam.ac.uk>
Message-ID: <2mr71hjzpp.fsf@starship.python.net>

Nick Maclaren <nmm1 at cus.cam.ac.uk> writes:

> Michael Hudson <mwh at python.net> wrote:
>>
>> This mail never appeared on python-dev as far as I can tell, so I'm  
>> not snipping anything.
>
> And it still hasn't :-(  I am on the list of recipients without posting
> rights, and the moderator appears to be on holiday.

They've appeared now, it seems the moderator is back :-)

>> But I wouldn't characterize anything Python does in the floating  
>> point area as "designed", particularly.  Portability makes that hard.
>
> Not really.  We managed even back in the 1970s, when there was a LOT
> more variation.  Writing code that would work, unchanged, on an IBM 360,
> an ICL 1900 and a CDC 6600 was, er, interesting ....

Maybe append " for me, at least" to what I wrote then.  But really, it
is hard: because Python runs on so many platforms, and platforms that
no current Python developer has access to.  If you're talking about
implementing FP in software (are you?), then I guess it gets easier.

>> Well, if you can't explain what your intentions are to *me*, as a  
>> mathematics-degree holding core Python developer that has done at  
>> least some work in this area, I posit that you aren't going to get  
>> very far.    
>
> My intentions are to provide some numerically robust semantics,
> preferably of the form where straightforward numeric code (i.e. code
> that doesn't play any bit-twiddling tricks) will never invoke
> mathematically undefined behaviour without it being flagged.  See
> Kahan on that.

That doesn't actually explain the details of your intent very much.

>> I'm not intimately familiar with the standards like 754 but I have  
>> some idea what they contain, and I've read appendix F of C99, if that  
>> helps you target your explanations.
>
> Not a lot.  Annex F in itself is only numerically insane.  You need to
> know the rest of the standard, including that which is documented only
> in SC22WG14 messages, to realise the full horror.

That's not why I was mentioning it.  I was mentioning it to give the
idea that I'm not a numerical expert but, for example, I know what a
denorm is.

>> Why does it need to be program global?  In my not-really-thought-out  
>> plans for straightening out CPython's floating point story I had  
>> envisioned code to be written something like this:
> 
> No, you are thinking at too low a level.

Well, I was just trying to get you to actually explain your intent :-)

> The problem with such things is that they relate to the interfaces
> between types, and it is those aspects where object-orientation
> falls down so badly.  For example, consider conversion between float
> and long - which class should control the semantics?

This comment seems not to relate to anything I said, or at least not
obviously.

>> This could be implemented by having a field in the threadstate of FPU  
>> flags to check after each fp operation (or each set of fp operations,  
>> possibly).  I don't think I have the guts to try to implement  
>> anything sensible using HW traps (which are thread-local as well,  
>> aren't they?).
>
> Gods, NO!!!

Good :-)

> Sorry, but I have implemented such things (but that was on a far
> architecture, and besides the system is dead).  Modern CPU
> architectures don't even DEFINE whether interrupt handling is local
> to the core or chip, and document that only in the release notes,
> but what is clear is that some BLACK incantations are needed in
> either case.

Well, I only really know about the PowerPC at this level...

> Think of taking a machine check interrupt on a multi-core,
> highly-pipelined architecture and blench.  And, if that is an
> Itanic, gibber hysterically before taking early retirement on the
> grounds of impending insanity.

What does a machine check interrupt have to do with anything?

> Oh, that's the calm, moderate description.  The reality is worse.

Yes, but fortunately irrelevant...

>> > Secondly, for things that don't need to be brings
>> > up my point of adding methods to a built-in class.
>> 
>> This isn't very hard, really, in fact float has class methods in 2.5...
>
> Thanks.  I will look, but remember this is being done at the C level.

So is my code.

>> >> I'm interested in making Python's floating point story better, and
>> >> have worked on a few things for Python 2.5 -- such as
>> >> pickling/marshalling of special values -- but I'm not really a
>> >> numerical programmer and don't like to guess what they need.
>> >
>> > Ah.  I must get a snapshot, then.  That was one of the lesser things
>> > on my list.
>> 
>> It was fairly straightforward, and still caused portability problems...
>
> Now, why did I predict that?  Did you, by any chance, include
> System/390 and VAX support in your code :-)

God no, it was just mundane stuff like SIZEOF_FLOAT not being defined
on windows.

Now, a more general reply: what are you actually trying to achieve
with these posts?  I presume it's more than just make wild claims
about how much more you know about numerical programming than anyone
else...

I get the impression that you would like to see floatobject.c
rewritten to make little or no use of the FPU, is that right?  Also,
you seem to have an alternate model for floating point calculations in
mind, but seem very reluctant to actually explain what this is.

I think you should probably write a PEP.

Cheers,
mwh

-- 
  GET   *BONK*
  BACK  *BONK*
  IN    *BONK*
  THERE *BONK*             -- Naich using the troll hammer in cam.misc

From ncoghlan at gmail.com  Thu Jun 22 12:11:15 2006
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 22 Jun 2006 20:11:15 +1000
Subject: [Python-Dev] Numerical robustness, IEEE etc.
In-Reply-To: <2mr71hjzpp.fsf@starship.python.net>
References: <E1FtA9Y-0004r8-47@draco.cus.cam.ac.uk>
	<2mr71hjzpp.fsf@starship.python.net>
Message-ID: <449A6CC3.4040309@gmail.com>

Michael Hudson wrote:
> I get the impression that you would like to see floatobject.c
> rewritten to make little or no use of the FPU, is that right?  Also,
> you seem to have an alternate model for floating point calculations in
> mind, but seem very reluctant to actually explain what this is.
> 
> I think you should probably write a PEP.

I suggest using PEP 327 as a starting point, though. A few of the issues 
Cowlishaw talks about in the General Decimal Arithmetic spec are specific to 
hardware implementations, sure, but it doesn't appear to involve anything 
numerically insane (granted, they've added exp, ln and log10 to the spec since 
PEP 327 was implemented. . .).

The only downside I know of when it comes to trying to do serious number 
crunching with the current decimal implementation is the loss of raw speed 
relative to hardware floating point, and Mateusz is working on improving that 
situation.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------
             http://www.boredomandlaziness.org

From ncoghlan at gmail.com  Thu Jun 22 12:54:46 2006
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 22 Jun 2006 20:54:46 +1000
Subject: [Python-Dev] Switch statement
In-Reply-To: <4499F8E1.2020103@acm.org>
References: <5.1.1.6.0.20060621121821.03275ca8@sparrow.telecommunity.com>	<17547.19802.361151.705599@montanaro.dyndns.org>	<20060611010410.GA5723@21degrees.com.au>	<5.1.1.6.0.20060618235252.03a71680@sparrow.telecommunity.com>	<44988C6E.4080806@canterbury.ac.nz>	<449920A4.7040008@gmail.com>	<5.1.1.6.0.20060621121821.03275ca8@sparrow.telecommunity.com>	<5.1.1.6.0.20060621130606.0373c788@sparrow.telecommunity.com>
	<4499F8E1.2020103@acm.org>
Message-ID: <449A76F6.6030606@gmail.com>

Talin wrote:
> I don't get what the problem is here. A switch constant should have 
> exactly the behavior of a default value of a function parameter. We 
> don't seem to have too many problems defining functions at the module 
> level, do we?

Because in function definitions, if you put them inside another function, the 
defaults of the inner function get reevaluated every time the outer function 
is run. Doing that for the switch statement would kinda defeat the whole 
point. . .
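
For illustration only (a minimal sketch of the existing default-argument
behaviour, not of the proposed switch):

    evaluations = []

    def compute_default():
        evaluations.append(1)        # record each evaluation of the default
        return len(evaluations)

    def outer():
        # the default expression runs again on *every* call of outer()
        def inner(x=compute_default()):
            return x
        return inner

    print outer()(), outer()()       # -> 1 2: one evaluation per outer() call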

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------
             http://www.boredomandlaziness.org

From mal at egenix.com  Thu Jun 22 13:08:56 2006
From: mal at egenix.com (M.-A. Lemburg)
Date: Thu, 22 Jun 2006 13:08:56 +0200
Subject: [Python-Dev] Switch statement
In-Reply-To: <5.1.1.6.0.20060621163853.0392d168@sparrow.telecommunity.com>
References: <5.1.1.6.0.20060621133734.03179858@sparrow.telecommunity.com>	<17547.19802.361151.705599@montanaro.dyndns.org>	<20060611010410.GA5723@21degrees.com.au>	<5.1.1.6.0.20060618235252.03a71680@sparrow.telecommunity.com>	<44988C6E.4080806@canterbury.ac.nz>
	<449920A4.7040008@gmail.com>	<5.1.1.6.0.20060621121821.03275ca8@sparrow.telecommunity.com>	<5.1.1.6.0.20060621130606.0373c788@sparrow.telecommunity.com>	<5.1.1.6.0.20060621133734.03179858@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621163853.0392d168@sparrow.telecommunity.com>
Message-ID: <449A7A48.5060404@egenix.com>

Phillip J. Eby wrote:
> Maybe the real answer is to have a "const" declaration, not necessarily the 
> way that Fredrik suggested, but a way to pre-declare constants e.g.:
> 
>      const FOO = 27
> 
> And then require case expressions to be either literals or constants.  The 
> constants need not be computable at compile time, just runtime.  If a 
> constant is defined using a foldable expression (e.g. FOO = 27 + 43), then 
> the compiler can always optimize it down to a code level 
> constant.  Otherwise, it can just put constants into cells that the 
> functions use as part of their closure.  (For that matter, the switch 
> statement jump tables, if any, can be put in a cell too.)
> 
>> I don't like "first use" because it seems to invite tricks.
> 
> Okay, then I think we need a way to declare a global as being constant.  It 
> seems like all the big problems with switch/case basically amount to us 
> trying to wiggle around the need to explicitly declare constants.

I don't think that this would help us much:

If you want the compiler to "see" that a name binds to a constant,
it would need to have access to the actual value at compile time
(e.g. code object definition time).

However, it is common practice to put constants which you'd use
in e.g. parsers into a separate module, and you certainly don't
want to have the compiler import the module and apply attribute
lookups.

This means that you'd have to declare a symbol constant in the
scope where you want to use it as such. Which would result in
long sections of e.g.

const case1
const case2
...
const caseN

In the end, making this implicit in the case part of the switch
statement would save us a lot of typing.

However, there's another catch: if we do allow arbitrary expressions
in the case parts we still need to evaluate them at some point:

a. If we do so at compile time, the results may be a lot different
than at execution time (e.g. say you use time.time() in one of the
case value expressions).

b. If we evaluate them at code object execution time (e.g. module
import), then we'd run into similar problems, but at least
the compiler wouldn't have to have access to the used symbols.

c. If we evaluate at first-use time, results of the evaluation
become unpredictable and you'd also lose a lot of the
speedup since building the hash table would consume cycles
that you'd rather spend on doing other things.

d. Ideally, you'd want to create the hash table at compile time
and this is only possible using literals or by telling the
compiler to regard a specific set of globals as constant, e.g.
by passing a dictionary (mapping globals to values) to compile().

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 22 2006)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________
2006-07-03: EuroPython 2006, CERN, Switzerland              10 days left

::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! ::::

From amk at amk.ca  Thu Jun 22 14:24:01 2006
From: amk at amk.ca (A.M. Kuchling)
Date: Thu, 22 Jun 2006 08:24:01 -0400
Subject: [Python-Dev] ImportWarning flood
In-Reply-To: <20060622053453.6483.qmail@web31509.mail.mud.yahoo.com>
References: <ca471dc20606211945r3a2dadbeof694a51c6e333e6@mail.gmail.com>
	<20060622053453.6483.qmail@web31509.mail.mud.yahoo.com>
Message-ID: <20060622122401.GA18059@localhost.localdomain>

On Wed, Jun 21, 2006 at 10:34:53PM -0700, Ralf W. Grosse-Kunstleve wrote:
> But this doesn't:
> python -W'ignore:Not importing directory:ImportWarning'

This is a bug.  I've filed bug #1510580 and assigned it to Brett.  I
think the problem was exposed by the new-style exception change, but
the actual bug is in warnings.py; the check for a legal category is
wrong.

> Also, the magic incantation to silence the warnings would be very helpful here:

Good idea; I'll add it.

You could add the following to your site.py or your .pythonrc.py:

import warnings
warnings.filterwarnings('ignore', 'Not importing directory', ImportWarning)

--amk

From pje at telecommunity.com  Thu Jun 22 15:36:23 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Thu, 22 Jun 2006 09:36:23 -0400
Subject: [Python-Dev] Switch statement
In-Reply-To: <449A7A48.5060404@egenix.com>
References: <5.1.1.6.0.20060621163853.0392d168@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621133734.03179858@sparrow.telecommunity.com>
	<17547.19802.361151.705599@montanaro.dyndns.org>
	<20060611010410.GA5723@21degrees.com.au>
	<5.1.1.6.0.20060618235252.03a71680@sparrow.telecommunity.com>
	<44988C6E.4080806@canterbury.ac.nz> <449920A4.7040008@gmail.com>
	<5.1.1.6.0.20060621121821.03275ca8@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621130606.0373c788@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621133734.03179858@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621163853.0392d168@sparrow.telecommunity.com>
Message-ID: <5.1.1.6.0.20060622092032.0338e2f0@sparrow.telecommunity.com>

At 01:08 PM 6/22/2006 +0200, M.-A. Lemburg wrote:
>Phillip J. Eby wrote:
> > Maybe the real answer is to have a "const" declaration, not necessarily 
> the
> > way that Fredrik suggested, but a way to pre-declare constants e.g.:
> >
> >      const FOO = 27
> >
> > And then require case expressions to be either literals or constants.  The
> > constants need not be computable at compile time, just runtime.  If a
> > constant is defined using a foldable expression (e.g. FOO = 27 + 43), then
> > the compiler can always optimize it down to a code level
> > constant.  Otherwise, it can just put constants into cells that the
> > functions use as part of their closure.  (For that matter, the switch
> > statement jump tables, if any, can be put in a cell too.)
> >
> >> I don't like "first use" because it seems to invite tricks.
> >
> > Okay, then I think we need a way to declare a global as being 
> constant.  It
> > seems like all the big problems with switch/case basically amount to us
> > trying to wiggle around the need to explicitly declare constants.
>
>I don't think that this would help us much:
>
>If you want the compiler to "see" that a name binds to a constant,
>it would need to have access to the actual value at compile time
>(e.g. code object definition time).

No, it wouldn't.  This hypothetical "const" would be a *statement*, 
executed like any other statement.  It binds a name to a value -- and 
produces an error if the value changes.  The compiler doesn't need to know 
what it evaluates to at runtime; that's what LOAD_NAME or LOAD_DEREF are 
for.  ;)


>However, it is common practice to put constants which you'd use
>in e.g. parsers into a separate module and you certainly don't
>want to have the compiler import the module and apply attribute
>lookups.

Not necessary, but I see it does produce a different problem.


>This means that you'd have to declare a symbol constant in the
>scope where you want to use it as such. Which would result in
>long sections of e.g.
>
>const case1
>const case2
>...
>const caseN

Actually, under my proposal it'd be:

const FOO = somemodule.FOO
const BAR = somemodule.BAR

etc.  Which is probably actually worse.  But I see your point.


>In the end, making this implicit in the case part of the switch
>statement would save us a lot of typing.
>
>However, there's another catch: if we do allow arbitrary expressions
>in the case parts we still need to evaluate them at some point:
>
>a. If we do so at compile time, the results may be a lot different
>than at execution time (e.g. say you use time.time() in one of the
>case value expressions).

We can't do that at compile time.

>b. If we evaluate them at code object execution time (e.g. module
>import), then we'd run into similar problems, but at least
>the compiler wouldn't have to have access to the used symbols.
>
>c. If we evaluate at first-use time, results of the evaluation
>become unpredictable and you'd also lose a lot of the
>speedup since building the hash table would consume cycles
>that you'd rather spend on doing other things.

Assuming that a sequential search takes N/2 equality tests on average, 
you'll come out ahead by the third switch execution, assuming that the 
time to add a dictionary entry or do a hash lookup is roughly equal to an 
if/else test.  The first execution would put N entries in the dictionary, 
and do 1 lookup.  The second execution does 1 lookup, so we're now at N+2 
operations, vs. N operations on average for sequential search.  At the third 
execution, we're at N+3 vs. 1.5N, so for more than 6 entries we're already 
ahead.
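
To make the bookkeeping concrete, here is a rough sketch (my own
illustration in today's Python, not the proposed syntax or its
implementation) of a first-use jump table: the first call pays the N
insertions, every later call pays a single hash lookup:

    def _one(x):  return 'one: %r' % (x,)
    def _few(x):  return 'few: %r' % (x,)

    _table = {}                      # filled on first use

    def dispatch(x):
        if not _table:               # first execution: N insertions
            _table.update({1: _one, 2: _few, 3: _few})
        handler = _table.get(x)      # later executions: one lookup each
        if handler is None:
            return 'default: %r' % (x,)
        return handler(x)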


>d. Ideally, you'd want to create the hash table at compile time
>and this is only possible using literals or by telling the
>compiler to regard a specific set of globals as constant, e.g.
>by passing a dictionary (mapping globals to values) to compile().

I still think that it suffices to assume that an expression produced using 
only symbols that aren't rebound is sufficiently static for use in a case 
expression.  If a symbol is bound by a single import statement (or other 
definition), or isn't bound at all (e.g. it's a builtin), it's easy enough 
to assume that it's going to remain the same.

Combine that compile-time restriction with a first-use build of the 
dictionary, and I think you have the best that we can hope to do in 
balancing implementation simplicity with usefulness and 
non-confusingness.  If it's not good enough, it's not good enough, but I 
don't think there's anything we've thought of so far that comes out with a 
better set of tradeoffs.


From brett at python.org  Thu Jun 22 15:46:29 2006
From: brett at python.org (Brett Cannon)
Date: Thu, 22 Jun 2006 06:46:29 -0700
Subject: [Python-Dev] doc for new restricted execution design for Python
In-Reply-To: <449A51CC.3070108@ghaering.de>
References: <bbaeab100606211733x3dd58226g15e2919e8e2cdd7e@mail.gmail.com>
	<449A51CC.3070108@ghaering.de>
Message-ID: <bbaeab100606220646u75444289wa7a7abdfbac18ece@mail.gmail.com>

On 6/22/06, Gerhard Häring <gh at ghaering.de> wrote:
>
> Brett Cannon wrote:
> > I have been working on a design doc for restricted execution of Python
> > [...]
>
> All the rest of the API made sense to me, but I couldn't understand why
>
> PyXXX_MemoryFree
>
> is needed. How could memory usage possibly fall below 0?


It can't in real life, but people could call MemoryFree() too many times.
Plus you need some way to lower the amount when memory is freed.  No need to
penalize a script that does a bunch of malloc/free calls compared to one
that just does a bunch of malloc calls.
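
As a conceptual sketch only (hypothetical names, not the actual API from
the design doc): allocations raise a counter against a cap, frees lower
it, and the counter is clamped at zero so calling the "free" side too
often cannot create extra headroom:

    class MemoryBudget:
        def __init__(self, cap):
            self.cap = cap          # bytes the restricted code may hold at once
            self.used = 0

        def alloc(self, nbytes):
            if self.used + nbytes > self.cap:
                raise MemoryError('allocation would exceed the configured cap')
            self.used += nbytes

        def free(self, nbytes):
            # never drop below zero, even if free() is called too many times
            self.used = max(0, self.used - nbytes)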

-Brett

From guido at python.org  Thu Jun 22 17:15:59 2006
From: guido at python.org (Guido van Rossum)
Date: Thu, 22 Jun 2006 08:15:59 -0700
Subject: [Python-Dev] Allow assignments in 'global' statements?
In-Reply-To: <449A0198.1060601@acm.org>
References: <449A0198.1060601@acm.org>
Message-ID: <ca471dc20606220815w54366784o69e60d45d1e54d3b@mail.gmail.com>

On 6/21/06, Talin <talin at acm.org> wrote:
> I'm sure I am not the first person to say this, but how about:
>
>     global x = 12
>
> (In other words, declare a global and assign a value to it - or another
> way of saying it is that the 'global' keyword acts as an assignment
> modifier.)

-1. If you get this feeling, surely you're overusing global.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Thu Jun 22 17:20:48 2006
From: guido at python.org (Guido van Rossum)
Date: Thu, 22 Jun 2006 08:20:48 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <449A4F95.4070608@alum.mit.edu>
References: <mailman.27347.1150881823.27774.python-dev@python.org>
	<449A4F95.4070608@alum.mit.edu>
Message-ID: <ca471dc20606220820p3b96a2cqbbe050cc97570650@mail.gmail.com>

On 6/22/06, Roger Miller <rogermiller at alum.mit.edu> wrote:
> Part of the readability advantage of a switch over an if/elif chain is
> the semantic parallelism, which would make me question mixing different
> tests in the same switch.  What if the operator moved into the switch
> header?
>
>      switch x ==:
>          case 1: foo(x)
>         case 2, 3: bar(x)
>
>      switch x in:
>         case (1, 3, 5): do_odd(x)
>         case (2, 4, 6): do_even(x)
>
> "switch x:" could be equivalent to "switch x ==:", for the common case.

That's difficult (I mean impossible) for Python's parser, since x ==
is also the legal start of an expression.

> I've also been wondering whether the 'case' keyword is really necessary?
>   Would any ambiguities or other parsing problems arise if you wrote:
>
>      switch x:
>          1: foo(x)
>         2: bar(x)
>
> It is debatable whether this is more or less readable, but it seemed
> like an interesting question for the language lawyers.

That's no problem for the parser, as long as the expressions are
indented. ABC did this.

But I think I like an explicit case keyword better; it gives a better
error message if the indentation is forgotten.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From g.brandl at gmx.net  Thu Jun 22 17:29:27 2006
From: g.brandl at gmx.net (Georg Brandl)
Date: Thu, 22 Jun 2006 17:29:27 +0200
Subject: [Python-Dev] Switch statement
In-Reply-To: <ca471dc20606220820p3b96a2cqbbe050cc97570650@mail.gmail.com>
References: <mailman.27347.1150881823.27774.python-dev@python.org>	<449A4F95.4070608@alum.mit.edu>
	<ca471dc20606220820p3b96a2cqbbe050cc97570650@mail.gmail.com>
Message-ID: <e7ed0n$9ng$1@sea.gmane.org>

Guido van Rossum wrote:

>> I've also been wondering whether the 'case' keyword is really necessary?
>>   Would any ambiguities or other parsing problems arise if you wrote:
>>
>>      switch x:
>>          1: foo(x)
>>         2: bar(x)
>>
>> It is debatable whether this is more or less readable, but it seemed
>> like an interesting question for the language lawyers.
> 
> That's no problem for the parser, as long as the expressions are
> indented. ABC did this.
> 
> But I think I like an explicit case keyword better; it gives a better
> error message if the indentation is forgotten.

It also overthrows the notion that suites are started by statements, not
by expressions.

Georg


From guido at python.org  Thu Jun 22 17:52:37 2006
From: guido at python.org (Guido van Rossum)
Date: Thu, 22 Jun 2006 08:52:37 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <5.1.1.6.0.20060621163853.0392d168@sparrow.telecommunity.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<20060611010410.GA5723@21degrees.com.au>
	<5.1.1.6.0.20060618235252.03a71680@sparrow.telecommunity.com>
	<44988C6E.4080806@canterbury.ac.nz> <449920A4.7040008@gmail.com>
	<5.1.1.6.0.20060621121821.03275ca8@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621130606.0373c788@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621133734.03179858@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621163853.0392d168@sparrow.telecommunity.com>
Message-ID: <ca471dc20606220852u549388c2na5568a3bf3bfb5a7@mail.gmail.com>

On 6/21/06, Phillip J. Eby <pje at telecommunity.com> wrote:
> At 01:16 PM 6/21/2006 -0700, Guido van Rossum wrote:
> >Yeah, but if you have names for your constants it would be a shame if
> >you couldn't use them because they happen to be defined in the same
> >scope.
>
> Maybe the real answer is to have a "const" declaration, not necessarily the
> way that Fredrik suggested, but a way to pre-declare constants e.g.:
>
>      const FOO = 27

The problem with this is that I don't see how to export this property
from one module to another. If module A has the above statement and
module B does "from A import FOO" then how does the parser know that
FOO is a constant when it's parsing B? (Remeber the parser only sees
one module at a time; I don't want to drop this separate compilation
facility.) It seems you would still end up with lots of duplicate
const declarations (e.g. "from A import const FOO"). And it should
also be possible to say "import A" and then use "A.FOO" as a constant.

I really don't think this ad-hoc solution is going to work.

> And then require case expressions to be either literals or constants.  The
> constants need not be computable at compile time, just runtime.  If a
> constant is defined using a foldable expression (e.g. FOO = 27 + 43), then
> the compiler can always optimize it down to a code level
> constant.  Otherwise, it can just put constants into cells that the
> functions use as part of their closure.  (For that matter, the switch
> statement jump tables, if any, can be put in a cell too.)
>
> >I don't like "first use" because it seems to invite tricks.
>
> Okay, then I think we need a way to declare a global as being constant.  It
> seems like all the big problems with switch/case basically amount to us
> trying to wiggle around the need to explicitly declare constants.

And I don't believe declaring constants is going to work; not without
a much bigger change to the language and the way we think about it.
This is because "const-ness" isn't a property that you can encode as a
new type of object. It is a compile-time property of *names*. The only
similar thing in Python is globals. But global declarations are
intentionally a bit clunky because we believe overuse of the feature
would be a mistake. But we wouldn't want to discourage constant
declarations if we had them, so having to repeat the 'constant' keyword
in every module that uses a particular constant would be a painful
wart.
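
(For comparison, this is how the one existing name-level declaration
works today -- a tiny sketch with invented names, just to show where
the per-scope repetition comes from:

     counter = 0

     def bump():
         global counter      # a property of the *name*, restated per scope
         counter += 1

A parallel 'const' would need the same kind of restatement in every
module and function that uses the constant.)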

Is your objection purely based on the problems of getting switch to
behave the same way inside and outside a function? I'd rather forbid
or cripple switch outside functions than either add constant
declarations or switch to first-use semantics.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Thu Jun 22 17:53:34 2006
From: guido at python.org (Guido van Rossum)
Date: Thu, 22 Jun 2006 08:53:34 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <449A41FB.8050706@canterbury.ac.nz>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<ca471dc20606191035p40d5e85j7a8bb957a2dfb0ec@mail.gmail.com>
	<4496FB54.5060800@ewtllc.com>
	<ca471dc20606191247l183eaf0eh6012600500ea311b@mail.gmail.com>
	<449707E2.7060803@ewtllc.com>
	<ca471dc20606191329p1a978983g2c3ea7401213d7dd@mail.gmail.com>
	<4498924F.5000508@canterbury.ac.nz>
	<5.1.1.6.0.20060620210402.01e9bc78@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621012733.01e8f660@sparrow.telecommunity.com>
	<449A41FB.8050706@canterbury.ac.nz>
Message-ID: <ca471dc20606220853l3bef93e2o418a9011049ad009@mail.gmail.com>

On 6/22/06, Greg Ewing <greg.ewing at canterbury.ac.nz> wrote:
> Phillip J. Eby wrote:
>
> >      switch x:
> >          case == 1: foo(x)
>
> Aesthetically, I don't like that.

Me neither.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Thu Jun 22 17:56:17 2006
From: guido at python.org (Guido van Rossum)
Date: Thu, 22 Jun 2006 08:56:17 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <449A76F6.6030606@gmail.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<20060611010410.GA5723@21degrees.com.au>
	<5.1.1.6.0.20060618235252.03a71680@sparrow.telecommunity.com>
	<44988C6E.4080806@canterbury.ac.nz> <449920A4.7040008@gmail.com>
	<5.1.1.6.0.20060621121821.03275ca8@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621130606.0373c788@sparrow.telecommunity.com>
	<4499F8E1.2020103@acm.org> <449A76F6.6030606@gmail.com>
Message-ID: <ca471dc20606220856l2a1ed62fl637723636b222a39@mail.gmail.com>

On 6/22/06, Nick Coghlan <ncoghlan at gmail.com> wrote:
> Talin wrote:
> > I don't get what the problem is here. A switch constant should have
> > exactly the behavior of a default value of a function parameter. We
> > don't seem to have too many problems defining functions at the module
> > level, do we?
>
> Because in function definitions, if you put them inside another function, the
> defaults of the inner function get reevaluated every time the outer function
> is run. Doing that for the switch statement would kinda defeat the whole
> point. . .

Really? Then where would you store the dict? You can't store it on the
code object because that's immutable. You can't store it on the
function object (if you don't want it to be re-evaluated when the
function is redefined) because a new function object is created by
each redefinition. There needs to be *some* kind of object with a
well-defined life cycle in which to store the dict.

I'd say that we should just add a warning against switches in nested
functions that are called only once per definition.
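
(A rough sketch of the kind of life cycle I mean, using today's
default-argument trick -- the handler values are made up:

     def handle(x, _table={
             "spam": "got spam",
             "eggs": "got eggs",
     }):
         # _table is built once, when the def statement executes, and
         # then lives on the function object as a default value.
         return _table.get(x, "no match")

A frozen switch dict would need a home with a similarly well-defined
lifetime.)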

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From gh at ghaering.de  Thu Jun 22 18:12:01 2006
From: gh at ghaering.de (=?UTF-8?B?R2VyaGFyZCBIw6RyaW5n?=)
Date: Thu, 22 Jun 2006 18:12:01 +0200
Subject: [Python-Dev] doc for new restricted execution design for Python
In-Reply-To: <bbaeab100606220646u75444289wa7a7abdfbac18ece@mail.gmail.com>
References: <bbaeab100606211733x3dd58226g15e2919e8e2cdd7e@mail.gmail.com>	
	<449A51CC.3070108@ghaering.de>
	<bbaeab100606220646u75444289wa7a7abdfbac18ece@mail.gmail.com>
Message-ID: <449AC151.4030500@ghaering.de>

Brett Cannon wrote:
> On 6/22/06, *Gerhard H?ring* <gh at ghaering.de <mailto:gh at ghaering.de>> wrote:
> 
>     Brett Cannon wrote:
>      > I have been working on a design doc for restricted execution of
>     Python
>      > [...]
> 
>     All the rest of the API made sense to me, but I couldn't understand why
> 
>     PyXXX_MemoryFree
> 
>     is needed. How could memory usage possibly fall below 0?
> 
> It can't in real life, but people could call MemoryFree() too many 
> times.  Plus you need some way to lower the amount when memory is 
> freed.  No need to penalize a script that does a bunch of malloc/free 
> calls compared to one that just does a bunch of malloc calls.

But if you want to limit the amount of memory a Python interpreter can 
use, wouldn't you have to integrate that resource checking into the 
standard Alloc/Dealloc functions instead of only enforcing the resource 
limit when some new API functions are called?

Existing extension modules and existing C code in the Python interpreter 
have no idea of any PyXXX_ calls, so I don't understand how new API 
functions help here.

-- Gerhard

From fredrik at pythonware.com  Thu Jun 22 18:22:17 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Thu, 22 Jun 2006 18:22:17 +0200
Subject: [Python-Dev] Switch statement
In-Reply-To: <ca471dc20606211321k624fb425l9174efb9bd43f3e2@mail.gmail.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>	<5.1.1.6.0.20060620210402.01e9bc78@sparrow.telecommunity.com>	<5.1.1.6.0.20060621012733.01e8f660@sparrow.telecommunity.com>	<Pine.LNX.4.58.0606210334470.17937@server1.LFW.org>	<5.1.1.6.0.20060621082808.01e90d18@sparrow.telecommunity.com>	<ca471dc20606210926i2681b27erff515fe4b96022cf@mail.gmail.com>	<e7bssg$hke$1@sea.gmane.org>	<ca471dc20606210947s1490bb66s6f040c78239dc623@mail.gmail.com>	<e7c135$4ql$1@sea.gmane.org>
	<e7c4tl$kq7$1@sea.gmane.org>
	<ca471dc20606211321k624fb425l9174efb9bd43f3e2@mail.gmail.com>
Message-ID: <e7eg3m$lrh$1@sea.gmane.org>

Guido van Rossum wrote:

>> which simply means that expr will be evaluated at function definition
>> time, rather than at runtime.  example usage:
>>
>>      var = expression
>>      if var == constant sre.FOO:
>>          ...
>>      elif var == constant sre.BAR:
>>          ...
>>      elif var in constant (sre.FIE, sre.FUM):
>>          ...
> 
> This gets pretty repetitive. One might suggest that 'case' could imply
> 'constant'...?

possibly, but I find that a tad too magic for my taste.

a "constant" (or perhaps better, "const") primary would also be useful 
in several other cases, including:

- as a replacement for default-argument object binding

- local dispatch tables, and other generated-but-static data structures

- explicit (but still anonymous) constant/expression "folding"

an alternative would be to add a const declaration that can only be used 
in local scopes; i.e.

     def foo(value):
        const bar = fie.fum
        if value == bar:
           ...

which would behave like

     def foo(value, bar=fie.fum):
        if value == bar:
            ...

but without the "what if we pass in more than one argument?" issue.

yet another alternative would be a const declaration that you could use 
on a global level, but I fail to see how you could propagate the "const- 
ness" property to whoever wants to use a const object -- unless, of 
course, you implement

     const bar = fie.fum

     def foo(value):
        if value == bar:
           ...

as

     class constant_wrapper(object):
         def __init__(self, value):
             self.value = value

     bar = constant_wrapper(fie.fum)

     def foo(value, bar=bar.value):
         if value == bar:
             ...

(except for the default argument thing; see above).  the result is a 
kind of semi-constant objects that would be useful, but perhaps not 
constant enough...)

it might be too much C# exposure, but I think I prefer the "explicit 
when using" approach...

</F>


From brett at python.org  Thu Jun 22 18:28:21 2006
From: brett at python.org (Brett Cannon)
Date: Thu, 22 Jun 2006 09:28:21 -0700
Subject: [Python-Dev] doc for new restricted execution design for Python
In-Reply-To: <449AC151.4030500@ghaering.de>
References: <bbaeab100606211733x3dd58226g15e2919e8e2cdd7e@mail.gmail.com>
	<449A51CC.3070108@ghaering.de>
	<bbaeab100606220646u75444289wa7a7abdfbac18ece@mail.gmail.com>
	<449AC151.4030500@ghaering.de>
Message-ID: <bbaeab100606220928p5fc74612id0d51a155261835e@mail.gmail.com>

On 6/22/06, Gerhard H?ring <gh at ghaering.de> wrote:
>
> Brett Cannon wrote:
> > On 6/22/06, *Gerhard H?ring* <gh at ghaering.de <mailto:gh at ghaering.de>>
> wrote:
> >
> >     Brett Cannon wrote:
> >      > I have been working on a design doc for restricted execution of
> >     Python
> >      > [...]
> >
> >     All the rest of the API made sense to me, but I couldn't understand
> why
> >
> >     PyXXX_MemoryFree
> >
> >     is needed. How could memory usage possibly fall below 0?
> >
> > It can't in real life, but people could call MemoryFree() too many
> > times.  Plus you need some way to lower the amount when memory is
> > freed.  No need to penalize a script that does a bunch of malloc/free
> > calls compared to one that just does a bunch of malloc calls.
>
> But if you want to limit the amount of memory a Python interpreter can
> use, wouldn't you have to integrate that resource checking into the
> standard Alloc/Dealloc functions instead of only enforcing the resource
> limit when some new API functions are called?


Yep.  That API will be used directly in the changes to pymalloc and
PyMem_*() macros (or at least the basic idea).  It is not *only* for
extension modules but for the core as well.

Existing extension modules and existing C code in the Python interpreter
> have no idea of any PyXXX_ calls, so I don't understand how new API
> functions help here.


The calls get added to pymalloc and PyMem_*() under the hood, so that
existing extension modules use the memory check automatically without a
change.  The calls are just there in case someone has some random need to
do their own malloc but still wants to participate in the cap.  Plus it
helped me think everything through by giving an API to everything I would
need to change internally.
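
Conceptually the accounting is just this (a Python sketch of the idea
only -- the real thing would live in C inside pymalloc, and the names
and cap value here are made up):

     class MemoryCap(object):
         def __init__(self, limit):
             self.limit = limit   # bytes the sandboxed interpreter may use
             self.used = 0

         def alloc(self, nbytes):
             if self.used + nbytes > self.limit:
                 raise MemoryError("capped interpreter out of memory")
             self.used += nbytes

         def free(self, nbytes):
             # clamp at zero so over-reported frees can't drive usage negative
             self.used = max(0, self.used - nbytes)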

-Brett

From guido at python.org  Thu Jun 22 18:37:41 2006
From: guido at python.org (Guido van Rossum)
Date: Thu, 22 Jun 2006 09:37:41 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <5.1.1.6.0.20060622092032.0338e2f0@sparrow.telecommunity.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<5.1.1.6.0.20060618235252.03a71680@sparrow.telecommunity.com>
	<44988C6E.4080806@canterbury.ac.nz> <449920A4.7040008@gmail.com>
	<5.1.1.6.0.20060621121821.03275ca8@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621130606.0373c788@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621133734.03179858@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621163853.0392d168@sparrow.telecommunity.com>
	<449A7A48.5060404@egenix.com>
	<5.1.1.6.0.20060622092032.0338e2f0@sparrow.telecommunity.com>
Message-ID: <ca471dc20606220937n72a42423h6f05ec0c5ae695ef@mail.gmail.com>

On 6/22/06, Phillip J. Eby <pje at telecommunity.com> wrote:
> This hypothetical "const" would be a *statement*,
> executed like any other statement.  It binds a name to a value -- and
> produces an error if the value changes.  The compiler doesn't need to know
> what it evaluates to at runtime; that's what LOAD_NAME or LOAD_DEREF are
> for.  ;)

Please think this through more. How do you implement the "produces an
error if the value changes" part? Is the const property you're
thinking of part of the name or of the object it refers to?

The only way I can see it work is if const-ness is a compile-time
property of names, just like global. But that requires too much
repetition when a constant is imported.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Thu Jun 22 18:45:18 2006
From: guido at python.org (Guido van Rossum)
Date: Thu, 22 Jun 2006 09:45:18 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <e7eg3m$lrh$1@sea.gmane.org>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<Pine.LNX.4.58.0606210334470.17937@server1.LFW.org>
	<5.1.1.6.0.20060621082808.01e90d18@sparrow.telecommunity.com>
	<ca471dc20606210926i2681b27erff515fe4b96022cf@mail.gmail.com>
	<e7bssg$hke$1@sea.gmane.org>
	<ca471dc20606210947s1490bb66s6f040c78239dc623@mail.gmail.com>
	<e7c135$4ql$1@sea.gmane.org> <e7c4tl$kq7$1@sea.gmane.org>
	<ca471dc20606211321k624fb425l9174efb9bd43f3e2@mail.gmail.com>
	<e7eg3m$lrh$1@sea.gmane.org>
Message-ID: <ca471dc20606220945v6ba2565at7a7a82f9cbd01d65@mail.gmail.com>

On 6/22/06, Fredrik Lundh <fredrik at pythonware.com> wrote:
> a "constant" (or perhaps better, "const") primary would also be useful
> in several other cases, including:
>
> - as a replacement for default-argument object binding
>
> - local dispatch tables, and other generated-but-static data structures
>
> - explicit (but still anonymous) constant/expression "folding"
>
> an alternative would be to add a const declaration that can only be used
> in local scopes; i.e.
>
>      def foo(value):
>         const bar = fie.fum
>         if value == bar:
>            ...
>
> which would behave like
>
>      def foo(value, bar=fie.fum):
>         if value == bar:
>             ...
>
> but without the "what if we pass in more than one argument?" issue.

So the constant would be evaluated at function definition time? I find
that rather confusing. Especially since common uses will probably
include

  const true = True
  while true:
    ...code...

This is a well-meaning attempt to let the compiler optimize this into a
test-less infinite loop; it works, but it throws the baby out with the
bathwater.

> yet another alternative would be a const declaration that you could use
> on a global level, but I fail to see how you could propagate the "const-
> ness" property to whoever wants to use a const object -- unless, of
> course, you implement
>
>      const bar = fie.fum
>
>      def foo(value):
>         if value == bar:
>            ...
>
> as
>
>      class constant_wrapper(object):
>          def __init__(self, value):
>              self.value = value
>
>      bar = constant_wrapper(fie.fum)
>
>      def foo(value, bar=bar.value):
>          if value == bar:
>              ...
>
> (except for the default argument thing; see above).  the result is a
> kind of semi-constant objects that would be useful, but perhaps not
> constant enough...)

I fail to see the usefulness of this wrapper. The wrapper isn't
completely transparent, so some code that uses type checks may need to
be modified. The wrapper doesn't get removed by a simple assignment;
after

  const a = 1
  b = a

how do we prevent b from being treated as a constant?

> it might be too much C# exposure, but I think I prefer the "explicit
> when using" approach...

It may be not enough C# exposure, but I don't know exactly which
approach you are referring to.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Thu Jun 22 18:46:15 2006
From: guido at python.org (Guido van Rossum)
Date: Thu, 22 Jun 2006 09:46:15 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <e7ed0n$9ng$1@sea.gmane.org>
References: <mailman.27347.1150881823.27774.python-dev@python.org>
	<449A4F95.4070608@alum.mit.edu>
	<ca471dc20606220820p3b96a2cqbbe050cc97570650@mail.gmail.com>
	<e7ed0n$9ng$1@sea.gmane.org>
Message-ID: <ca471dc20606220946l67e3d1c1t7103c3fd107a22da@mail.gmail.com>

On 6/22/06, Georg Brandl <g.brandl at gmx.net> wrote:
> Guido van Rossum wrote:
>
> >> I've also been wondering whether the 'case' keyword is really necessary?
> >>   Would any ambiguities or other parsing problems arise if you wrote:
> >>
> >>      switch x:
> >>          1: foo(x)
> >>         2: bar(x)
> >>
> >> It is debatable whether this is more or less readable, but it seemed
> >> like an interesting question for the language lawyers.
> >
> > That's no problem for the parser, as long as the expressions are
> > indented. ABC did this.
> >
> > But I think I like an explicit case keyword better; it gives a better
> > error message if the indentation is forgotten.
>
> It also overthrows the notion that suites are started by statements, not
> by expressions.

I'm not sure I care about that. Do you use this in teaching? How does
it help you?

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From g.brandl at gmx.net  Thu Jun 22 18:56:56 2006
From: g.brandl at gmx.net (Georg Brandl)
Date: Thu, 22 Jun 2006 18:56:56 +0200
Subject: [Python-Dev] Switch statement
In-Reply-To: <ca471dc20606220946l67e3d1c1t7103c3fd107a22da@mail.gmail.com>
References: <mailman.27347.1150881823.27774.python-dev@python.org>	<449A4F95.4070608@alum.mit.edu>	<ca471dc20606220820p3b96a2cqbbe050cc97570650@mail.gmail.com>	<e7ed0n$9ng$1@sea.gmane.org>
	<ca471dc20606220946l67e3d1c1t7103c3fd107a22da@mail.gmail.com>
Message-ID: <e7ei4p$tgu$1@sea.gmane.org>

Guido van Rossum wrote:
> On 6/22/06, Georg Brandl <g.brandl at gmx.net> wrote:
>> Guido van Rossum wrote:
>>
>> >> I've also been wondering whether the 'case' keyword is really necessary?
>> >>   Would any ambiguities or other parsing problems arise if you wrote:
>> >>
>> >>      switch x:
>> >>          1: foo(x)
>> >>         2: bar(x)
>> >>
>> >> It is debatable whether this is more or less readable, but it seemed
>> >> like an interesting question for the language lawyers.
>> >
>> > That's no problem for the parser, as long as the expressions are
>> > indented. ABC did this.
>> >
>> > But I think I like an explicit case keyword better; it gives a better
>> > error message if the indentation is forgotten.
>>
>> It also overthrows the notion that suites are started by statements, not
>> by expressions.
> 
> I'm not sure I care about that. Do you use this in teaching? How does
> it help you?

I just realized that my post could be misunderstood: The sentence referred
to the "case"-less form. (And it's just a "feeling" thing)

Georg


From pje at telecommunity.com  Thu Jun 22 19:02:29 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Thu, 22 Jun 2006 13:02:29 -0400
Subject: [Python-Dev] Switch statement
In-Reply-To: <ca471dc20606220937n72a42423h6f05ec0c5ae695ef@mail.gmail.co
 m>
References: <5.1.1.6.0.20060622092032.0338e2f0@sparrow.telecommunity.com>
	<17547.19802.361151.705599@montanaro.dyndns.org>
	<5.1.1.6.0.20060618235252.03a71680@sparrow.telecommunity.com>
	<44988C6E.4080806@canterbury.ac.nz> <449920A4.7040008@gmail.com>
	<5.1.1.6.0.20060621121821.03275ca8@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621130606.0373c788@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621133734.03179858@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621163853.0392d168@sparrow.telecommunity.com>
	<449A7A48.5060404@egenix.com>
	<5.1.1.6.0.20060622092032.0338e2f0@sparrow.telecommunity.com>
Message-ID: <5.1.1.6.0.20060622125053.02f3d308@sparrow.telecommunity.com>

At 09:37 AM 6/22/2006 -0700, Guido van Rossum wrote:
>On 6/22/06, Phillip J. Eby <pje at telecommunity.com> wrote:
> > This hypothetical "const" would be a *statement*,
> > executed like any other statement.  It binds a name to a value -- and
> > produces an error if the value changes.  The compiler doesn't need to know
> > what it evaluates to at runtime; that's what LOAD_NAME or LOAD_DEREF are
> > for.  ;)
>
>Please think this through more. How do you implement the "produces an
>error if the value changes" part? Is the const property you're
>thinking of part of the name or of the object it refers to?
>
>The only way I can see it work is if const-ness is a compile-time
>property of names, just like global. But that requires too much
>repetition when a constant is imported.

Right; MAL pointed that out in the message I was replying to, and I 
conceded his point.  Of course, if you consider constness to be an implicit 
property of imported names that aren't rebound, the repetition problem goes 
away.

And if you then require all "case" expressions to be either literals or 
constant names, we can also duck the "when does the expression get 
evaluated?" question.  The obvious answer is that it's evaluated wherever 
you bound the name, and the compiler can either optimize the switch 
statement (or not), depending on where the assignment took place.  A switch 
that's in a loop or a function call can only be optimized if all its 
constants are declared outside the loop or function body; otherwise it 
degrades to an if/elif chain.

There's actually an in-between possibility, too: you could generate if's 
for constants declared in the loop or function body, and use a dictionary 
for any literals or constants declared outside the loop or function 
body.  The only problem that raises is the possibility of an "inner 
constant" being equal to an "outer constant", creating an ambiguity.  But 
we could just say that nearer constants take precedence over later ones, or 
force you to introduce the cases such that inner constants appear first.

(This approach doesn't really need an explicit "const foo=bar" declaration, 
though; it just restricts cases to using names that are bound only once in 
the code of the scope they're obtained from.)
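
(Roughly, the split can be imitated by hand today -- all names here are
invented, and the hypothetical switch is spelled out explicitly:

     OUTER = "spam"                      # bound once at module level

     def classify(flag, text):
         local_marker = text.strip()     # bound in the local scope
         table = {OUTER: "outer", "eggs": "literal"}   # dict-lookup cases
         if flag == local_marker:        # local-name case needs a real test
             return "local"
         return table.get(flag, "no match")

In the optimized form the dict would be built once and reused, while the
local-name case always needs a per-call test.)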


From guido at python.org  Thu Jun 22 19:03:51 2006
From: guido at python.org (Guido van Rossum)
Date: Thu, 22 Jun 2006 10:03:51 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <e7ei4p$tgu$1@sea.gmane.org>
References: <mailman.27347.1150881823.27774.python-dev@python.org>
	<449A4F95.4070608@alum.mit.edu>
	<ca471dc20606220820p3b96a2cqbbe050cc97570650@mail.gmail.com>
	<e7ed0n$9ng$1@sea.gmane.org>
	<ca471dc20606220946l67e3d1c1t7103c3fd107a22da@mail.gmail.com>
	<e7ei4p$tgu$1@sea.gmane.org>
Message-ID: <ca471dc20606221003j70bae322iba4eb6c0adeaab4e@mail.gmail.com>

On 6/22/06, Georg Brandl <g.brandl at gmx.net> wrote:
> Guido van Rossum wrote:
> > On 6/22/06, Georg Brandl <g.brandl at gmx.net> wrote:
> >> Guido van Rossum wrote:
> >>
> >> >> I've also been wondering whether the 'case' keyword is really necessary?
> >> >>   Would any ambiguities or other parsing problems arise if you wrote:
> >> >>
> >> >>      switch x:
> >> >>          1: foo(x)
> >> >>         2: bar(x)
> >> >>
> >> >> It is debatable whether this is more or less readable, but it seemed
> >> >> like an interesting question for the language lawyers.
> >> >
> >> > That's no problem for the parser, as long as the expressions are
> >> > indented. ABC did this.
> >> >
> >> > But I think I like an explicit case keyword better; it gives a better
> >> > error message if the indentation is forgotten.
> >>
> >> It also overthrows the notion that suites are started by statements, not
> >> by expressions.
> >
> > I'm not sure I care about that. Do you use this in teaching? How does
> > it help you?
>
> I just realized that my post could be misunderstood: The sentence referred
> to the "case"-less form. (And it's just a "feeling" thing)

I understood that. And I don't have the same feeling. :-)

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Thu Jun 22 19:44:52 2006
From: guido at python.org (Guido van Rossum)
Date: Thu, 22 Jun 2006 10:44:52 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <5.1.1.6.0.20060622125053.02f3d308@sparrow.telecommunity.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<44988C6E.4080806@canterbury.ac.nz> <449920A4.7040008@gmail.com>
	<5.1.1.6.0.20060621121821.03275ca8@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621130606.0373c788@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621133734.03179858@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621163853.0392d168@sparrow.telecommunity.com>
	<449A7A48.5060404@egenix.com>
	<5.1.1.6.0.20060622092032.0338e2f0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622125053.02f3d308@sparrow.telecommunity.com>
Message-ID: <ca471dc20606221044g6701d2c9wd155ab003753249@mail.gmail.com>

On 6/22/06, Phillip J. Eby <pje at telecommunity.com> wrote:
> At 09:37 AM 6/22/2006 -0700, Guido van Rossum wrote:
> >On 6/22/06, Phillip J. Eby <pje at telecommunity.com> wrote:
> > > This hypothetical "const" would be a *statement*,
> > > executed like any other statement.  It binds a name to a value -- and
> > > produces an error if the value changes.  The compiler doesn't need to know
> > > what it evaluates to at runtime; that's what LOAD_NAME or LOAD_DEREF are
> > > for.  ;)
> >
> >Please think this through more. How do you implement the "produces an
> >error if the value changes" part? Is the const property you're
> >thinking of part of the name or of the object it refers to?
> >
> >The only way I can see it work is if const-ness is a compile-time
> >property of names, just like global. But that requires too much
> >repetition when a constant is imported.
>
> Right; MAL pointed that out in the message I was replying to, and I
> conceded his point.  Of course, if you consider constness to be an implicit
> property of imported names that aren't rebound, the repetition problem goes
> away.

Um, technically names are never imported, only objects. Suppose module
A defines const X = 1, and module B imports A. How does the compiler
know that A.X is a constant?

I still don't see how const declarations can be made to work, and
unless you can show me how I don't see how they can help.

> And if you then require all "case" expressions to be either literals or
> constant names, we can also duck the "when does the expression get
> evaluated?" question.  The obvious answer is that it's evaluated wherever
> you bound the name, and the compiler can either optimize the switch
> statement (or not), depending on where the assignment took place.

I don't understand what you're proposing. In particular I don't
understand what you mean by "wherever you bound the name".

So (evading the import problem for a moment) suppose we have

const T = int(time.time())

def foo(x):
  switch x:
    case T: print "Yes"
    else: print "No"

Do you consider that an optimizable switch or not?

> A switch
> that's in a loop or a function call can only be optimized if all its
> constants are declared outside the loop or function body; otherwise it
> degrades to an if/elif chain.

What do you mean by a switch in a function call? Syntactically that
makes no sense. Do you mean in a function definition?

I think you're trying to give a heuristic here for when it's likely
that the switch will be executed multiple times with all cases having
the same values (so the optimized dict can be reused).

I'm afraid this is going to cause problems with predictability. Also,
if it degenerates to an if/elif chain, does it still require the
switch expression to be hashable?

> There's actually an in-between possibility, too: you could generate if's
> for constants declared in the loop or function body, and use a dictionary
> for any literals or constants declared outside the loop or function
> body.  The only problem that raises is the possibility of an "inner
> constant" being equal to an "outer constant", creating an ambiguity.  But
> we could just say that nearer constants take precedence over later ones, or
> force you to introduce the cases such that inner constants appear first.
>
> (This approach doesn't really need an explicit "const foo=bar" declaration,
> though; it just restricts cases to using names that are bound only once in
> the code of the scope they're obtained from.)

Please, think through the notion of const declarations more before
posting again. Without const declarations none of this can work and
the "at-function-definition-time" freezing is the best, because most
predictable, approach IMO.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From fredrik at pythonware.com  Thu Jun 22 20:21:41 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Thu, 22 Jun 2006 20:21:41 +0200
Subject: [Python-Dev] Switch statement
In-Reply-To: <ca471dc20606220945v6ba2565at7a7a82f9cbd01d65@mail.gmail.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>	<Pine.LNX.4.58.0606210334470.17937@server1.LFW.org>	<5.1.1.6.0.20060621082808.01e90d18@sparrow.telecommunity.com>	<ca471dc20606210926i2681b27erff515fe4b96022cf@mail.gmail.com>	<e7bssg$hke$1@sea.gmane.org>	<ca471dc20606210947s1490bb66s6f040c78239dc623@mail.gmail.com>	<e7c135$4ql$1@sea.gmane.org>
	<e7c4tl$kq7$1@sea.gmane.org>	<ca471dc20606211321k624fb425l9174efb9bd43f3e2@mail.gmail.com>	<e7eg3m$lrh$1@sea.gmane.org>
	<ca471dc20606220945v6ba2565at7a7a82f9cbd01d65@mail.gmail.com>
Message-ID: <e7en3j$fr7$1@sea.gmane.org>

Guido van Rossum wrote:

>>      def foo(value):
>>         const bar = fie.fum
>>         if value == bar:
>>            ...
>>
>> which would behave like
>>
>>      def foo(value, bar=fie.fum):
>>         if value == bar:
>>             ...
>>
>> but without the "what if we pass in more than one argument?" issue.
> 
> So the constant would be evaluated at function definition time? I find
> that rather confusing.

well, I find the proposed magic behaviour of "case" at least as confusing...

>> (except for the default argument thing; see above).  the result is a
>> kind of semi-constant objects that would be useful, but perhaps not
>> constant enough...)
> 
> I fail to see the usefulness of this wrapper. The wrapper isn't
> completely transparent, so some code that uses type checks may need to
> be modified. The wrapper doesn't get removed by a simple assignment;
> after
> 
>   const a = 1
>   b = a
> 
> how do we prevent b from being treated as a constant?

we cannot -- this approach assigns (a small amount of) const-ness to 
objects, not names.

>> it might be too much C# exposure, but I think I prefer the "explicit
>> when using" approach...
> 
> It may be not enough C# exposure, but I don't know exactly which
> approach you are referring to.

the original one: if you want to treat an expression as a constant, you 
have to be explicit.  examples:

>>> a "constant" (or perhaps better, "const") primary would also be useful
>>> in several other cases, including:
>>>
>>> - as a replacement for default-argument object binding

this is used when you want to pass an *object* into an inner function, 
rather than a name:

     def foo(value, bar=fie.fum):
         if value == bar:
             ...

can be written

     def foo(value):
         if value == const bar:
             ...

>>> - local dispatch tables, and other generated-but-static data structures

     def foo(value):
         table = const {
             1: "one",
             2: "two",
             3: fie.fum,
         }

(maybe "static" would be a better keyword?)

>>> - explicit (but still anonymous) constant/expression "folding"

     def foo(value):
         if value < const (math.pi / 2):
             ...

and so on.  to implement this, the runtime simply evaluates the "const" 
expressions together with the default value expressions, and assigns the 
result to some func_xxx attribute.  everything else works as usual.
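
a rough way to fake that func_xxx attribute in today's Python (the
decorator and attribute name are invented for the example):

     import math

     def static_values(**precomputed):
         def wrap(func):
             func.func_statics = precomputed   # evaluated once, at def time
             return func
         return wrap

     @static_values(half_pi=math.pi / 2)
     def foo(value):
         return value < foo.func_statics["half_pi"]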

</F>


From pje at telecommunity.com  Thu Jun 22 20:37:20 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Thu, 22 Jun 2006 14:37:20 -0400
Subject: [Python-Dev] Switch statement
In-Reply-To: <ca471dc20606221044g6701d2c9wd155ab003753249@mail.gmail.com
 >
References: <5.1.1.6.0.20060622125053.02f3d308@sparrow.telecommunity.com>
	<17547.19802.361151.705599@montanaro.dyndns.org>
	<44988C6E.4080806@canterbury.ac.nz> <449920A4.7040008@gmail.com>
	<5.1.1.6.0.20060621121821.03275ca8@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621130606.0373c788@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621133734.03179858@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621163853.0392d168@sparrow.telecommunity.com>
	<449A7A48.5060404@egenix.com>
	<5.1.1.6.0.20060622092032.0338e2f0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622125053.02f3d308@sparrow.telecommunity.com>
Message-ID: <5.1.1.6.0.20060622135620.01ea9bf8@sparrow.telecommunity.com>

I think one of the problems I sometimes have in communicating with you is 
that I think out stuff from top to bottom of an email, and sometimes 
discard working assumptions once they're no longer needed.  We then end up 
having arguments over ideas I already discarded, because you find the 
problems with them faster than I do, and you assume that those problems 
carry through to the end of my message.  :)  So, I'm partially reversing 
the order of my reply, so you can see what I'm actually proposing, before 
the minutiae of responding to the objections you raised to stuff I threw out 
either in my previous message or the message before that.   Hopefully this 
will help.


At 10:44 AM 6/22/2006 -0700, Guido van Rossum wrote:
>Please, think through the notion of const declarations more before
>posting again. Without const declarations none of this can work

Actually, the "const" declaration part isn't necessary and I already 
discarded the idea in my  previous reply to you, noting that the 
combination of these facets can be made to work without any explicit const 
declarations:

1. "case (literal|NAME)" is the syntax for equality testing -- you can't 
use an arbitrary expression, not even a dotted name.

2. NAME, if used, must be bound at most once in its defining scope

3. Dictionary optimization can occur only for literals and names not bound 
in the local scope, others must use if-then.

This doesn't require explicit "const" declarations at all.  It does, 
however, prohibit using "import A" and then switching on a bunch of "A.foo" 
values.  You have to "from A import foo, bar, baz" instead.

If you like this, then you may not need to read the rest of this message, 
because most of your remaining comments and questions were based on an 
assumption that "const" declarations were necessary.


>On 6/22/06, Phillip J. Eby <pje at telecommunity.com> wrote:
> > At 09:37 AM 6/22/2006 -0700, Guido van Rossum wrote:
> > >On 6/22/06, Phillip J. Eby <pje at telecommunity.com> wrote:
> > > > This hypothetical "const" would be a *statement*,
> > > > executed like any other statement.  It binds a name to a value -- and
> > > > produces an error if the value changes.  The compiler doesn't need 
> to know
> > > > what it evaluates to at runtime; that's what LOAD_NAME or 
> LOAD_DEREF are
> > > > for.  ;)
> > >
> > >Please think this through more. How do you implement the "produces an
> > >error if the value changes" part? Is the const property you're
> > >thinking of part of the name or of the object it refers to?
> > >
> > >The only way I can see it work is if const-ness is a compile-time
> > >property of names, just like global. But that requires too much
> > >repetition when a constant is imported.
> >
> > Right; MAL pointed that out in the message I was replying to, and I
> > conceded his point.  Of course, if you consider constness to be an implicit
> > property of imported names that aren't rebound, the repetition problem goes
> > away.
>
>Um, technically names are never imported, only objects. Suppose module
>A defines const X = 1, and module B imports A. How does the compiler
>know that A.X is a constant?

It doesn't.  You have to "from A import X".  At that point, you have a name 
that is bound by an import that can be considered constant as long as the 
name isn't rebound later.


> > And if you then require all "case" expressions to be either literals or
> > constant names, we can also duck the "when does the expression get
> > evaluated?" question.  The obvious answer is that it's evaluated wherever
> > you bound the name, and the compiler can either optimize the switch
> > statement (or not), depending on where the assignment took place.
>
>I don't understand what you're proposing. In particular I don't
>understand what you mean by "wherever you bound the name".
>
>So (evading the import problem for a moment) suppose we have
>
>const T = int(time.time())
>
>def foo(x):
>   switch x:
>     case T: print "Yes"
>     else: print "No"
>
>Do you consider that an optimizable switch or not?

Yes.  What I'm trying to do is separate "when the dictionary is 
constructed" from "when the expression is evaluated".  If we restrict the 
names used to names that have at most one binding in their defining scope, 
then we can simply add the dictionary entries whenever the *name is 
bound*.  Ergo, the evaluation time is apparent from simple reading of the 
source - we are never moving the evaluation, only determining how early we 
can add information to the switching dictionary.

Thus, the answer to "when is the expression evaluated" is "when it's 
executed as seen in the source code".  There is thus no magic of either 
first-use or function-definition involved.  What you see is exactly what 
you get.


> > A switch
> > that's in a loop or a function call can only be optimized if all its
> > constants are declared outside the loop or function body; otherwise it
> > degrades to an if/elif chain.
>
>What do you mean by a switch in a function call? Syntactically that
>makes no sense. Do you mean in a function definition?

Yes, sorry.  I probably copied the slip from your previous post.  ;)


>I think you're trying to give a heuristic here for when it's likely
>that the switch will be executed multiple times with all cases having
>the same values (so the optimized dict can be reused).

Actually, no, I'm trying to say that we should restrict case expressions to 
either literals or symbolic constants, trying to give a rigorous definition 
for "symbolic constant", and trying to specify what portions of a switch 
statement can be implemented using a dictionary lookup, as distinguished 
from those parts that require if/elif tests.


>I'm afraid this is going to cause problems with predictability.

Actually, it seems quite clear to me.  Any "case NAME:" where NAME is bound 
in local scope must be implemented internally using "if" instead of a 
dictionary lookup.  Any "case literal:" or "case NAME:" where NAME is *not* 
bound in the local scope can be optimized.  (Because that means NAME is 
bound in an outer scope and can be assumed constant in the current scope.)


>Also,
>if it degenerates to an if/elif chain, does it still require the
>switch expression to be hashable?

Yes, if there are any "case literal:" branches, or "case NAME:" where the 
name isn't bound in the local scope.


From guido at python.org  Thu Jun 22 20:38:55 2006
From: guido at python.org (Guido van Rossum)
Date: Thu, 22 Jun 2006 11:38:55 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <e7en3j$fr7$1@sea.gmane.org>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<ca471dc20606210926i2681b27erff515fe4b96022cf@mail.gmail.com>
	<e7bssg$hke$1@sea.gmane.org>
	<ca471dc20606210947s1490bb66s6f040c78239dc623@mail.gmail.com>
	<e7c135$4ql$1@sea.gmane.org> <e7c4tl$kq7$1@sea.gmane.org>
	<ca471dc20606211321k624fb425l9174efb9bd43f3e2@mail.gmail.com>
	<e7eg3m$lrh$1@sea.gmane.org>
	<ca471dc20606220945v6ba2565at7a7a82f9cbd01d65@mail.gmail.com>
	<e7en3j$fr7$1@sea.gmane.org>
Message-ID: <ca471dc20606221138h25529dbbv977128e20852d682@mail.gmail.com>

On 6/22/06, Fredrik Lundh <fredrik at pythonware.com> wrote:
> Guido van Rossum wrote:
> > So the constant would be evaluated at function definition time? I find
> > that rather confusing.
>
> well, I find the proposed magic behaviour of "case" at least as confusing...

It's not magic if it can be explained. "def goes over all the cases
and evaluates them in the surrounding scope and freezes the meaning of
the cases that way as long as the function object survives" is not
magic.

> >> (except for the default argument thing; see above).  the result is a
> >> kind of semi-constant objects that would be useful, but perhaps not
> >> constant enough...)
> >
> > I fail to see the usefulness of this wrapper. The wrapper isn't
> > completely transparent, so some code that uses type checks may need to
> > be modified. The wrapper doesn't get removed by a simple assignment;
> > after
> >
> >   const a = 1
> >   b = a
> >
> > how do we prevent b from being treated as a constant?
>
> we cannot -- this approach assigns (a small amount of) const-ness to
> objects, not names.

OK, so neither a nor b is really a constant; it's just that they have
a value that is a constant wrapper.

I'm still confused how this wrapper would be used at run time.
(Because at compile time we *don't* generally know whether a
particular value contains a const wrapper or not.)

> >> it might be too much C# exposure, but I think I prefer the "explicit
> >> when using" approach...
> >
> > It may be not enough C# exposure, but I don't know exactly which
> > approach you are referring to.
>
> the original one: if you want to treat an expression as a constant, you
> have to be explicit.  examples:
>
> >>> a "constant" (or perhaps better, "const") primary would also be useful
> >>> in several other cases, including:
> >>>
> >>> - as a replacement for default-argument object binding
>
> this is used when you want to pass an *object* into an inner function,
> rather than a name:
>
>      def foo(value, bar=fie.fum):
>          if value == bar:
>              ...
>
> can be written
>
>      def foo(value):
>          if value == const bar:
>              ...
>
> >>> - local dispatch tables, and other generated-but-static data structures
>
>      def foo(value):
>          table = const {
>              1: "one",
>              2: "two",
>              3: fie.fum,
>          }
>
> (maybe "static" would be a better keyword?)

At least it resembles the corresponding C keyword better than 'const'.

'static' tells me something useful (at least if I know C/C++/Java).
And I have some idea on how to implement it (not so different from the
def-time switch freezing).

However it should be

  static table = {...}

But I don't see how this would require the const-wrapper.

And I still think that this is not as nice as def-time freezing
switches; static or const causes clumsy syntax when importing
constants from another module since you have to repeat the const-ness
for each imported constant in each importing module.

> >>> - explicit (but still anonymous) constant/expression "folding"
>
>      def foo(value):
>          if value < const (math.pi / 2):
>              ...
>
> and so on.  to implement this, the runtime simply evaluates the "const"
> expressions together with the default value expressions, and assigns the
> result to some func_xxx attribute.  everything else works as usual.

Yup, got it. This is clearly implementable and has clear semantics.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Thu Jun 22 20:52:59 2006
From: guido at python.org (Guido van Rossum)
Date: Thu, 22 Jun 2006 11:52:59 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <5.1.1.6.0.20060622135620.01ea9bf8@sparrow.telecommunity.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<449920A4.7040008@gmail.com>
	<5.1.1.6.0.20060621121821.03275ca8@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621130606.0373c788@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621133734.03179858@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621163853.0392d168@sparrow.telecommunity.com>
	<449A7A48.5060404@egenix.com>
	<5.1.1.6.0.20060622092032.0338e2f0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622125053.02f3d308@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622135620.01ea9bf8@sparrow.telecommunity.com>
Message-ID: <ca471dc20606221152j5c3f12c7oe9a36c297b32e0eb@mail.gmail.com>

On 6/22/06, Phillip J. Eby <pje at telecommunity.com> wrote:
> I think one of the problems I sometimes have in communicating with you is
> that I think out stuff from top to bottom of an email, and sometimes
> discard working assumptions once they're no longer needed.  We then end up
> having arguments over ideas I already discarded, because you find the
> problems with them faster than I do, and you assume that those problems
> carry through to the end of my message.  :)

You *do* have a text editor that lets you go back to the top of the
draft to remove discarded ideas, don't you? :-)

It's a reasonable form of discourse to propose an idea only to shoot
it down, but usually this is introduced by some phrase that hints to
the reader what's going to happen. You can't expect the reader to read
the entire email before turning on their brain. :)

> So, I'm partially reversing
> the order of my reply, so you can see what I'm actually proposing, before
> the minutiae of responding the objections you raised to stuff I threw out
> either in my previous message or the message before that.   Hopefully this
> will help.
>
> At 10:44 AM 6/22/2006 -0700, Guido van Rossum wrote:
> >Please, think through the notion of const declarations more before
> >posting again. Without const declarations none of this can work
>
> Actually, the "const" declaration part isn't necessary and I already
> discarded the idea in my  previous reply to you, noting that the
> combination of these facets can be made to work without any explicit const
> declarations:
>
> 1. "case (literal|NAME)" is the syntax for equality testing -- you can't
> use an arbitrary expression, not even a dotted name.

But dotted names are important! E.g. case re.DOTALL. And sometimes
compile-time constant expressions are too. Example: case sys.maxint-1.

> 2. NAME, if used, must be bound at most once in its defining scope

That's fine -- but doesn't extend to dotted names.

> 3. Dictionary optimization can occur only for literals and names not bound
> in the local scope, others must use if-then.

So this wouldn't be optimized?!

NL = "\n"
for line in sys.stdin:
  switch line:
    "abc\n": ...
    NL: ...

> This doesn't require explicit "const" declarations at all.  It does,
> however, prohibit using "import A" and then switching on a bunch of "A.foo"
> values.  You have to "from A import foo, bar, baz" instead.
>
> If you like this, then you may not need to read the rest of this message,
> because most of your remaining comments and questions were based on an
> assumption that "const" declarations were necessary.

I like it better than const declarations, but I don't like it as much
as the def-time-switch-freezing proposal; I find the limitation to
simple literals and names too restrictive, and there isn't anything
else like that in Python. I also don't like the possibility that it
degenerates to if/elif. I like predictability. I like to be able to
switch on dotted names.

Also, when using sets, one should be able to use an expression like
s1|s2 in a case.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From mal at egenix.com  Thu Jun 22 20:54:18 2006
From: mal at egenix.com (M.-A. Lemburg)
Date: Thu, 22 Jun 2006 20:54:18 +0200
Subject: [Python-Dev] Switch statement
In-Reply-To: <ca471dc20606221044g6701d2c9wd155ab003753249@mail.gmail.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>	<44988C6E.4080806@canterbury.ac.nz>
	<449920A4.7040008@gmail.com>	<5.1.1.6.0.20060621121821.03275ca8@sparrow.telecommunity.com>	<5.1.1.6.0.20060621130606.0373c788@sparrow.telecommunity.com>	<5.1.1.6.0.20060621133734.03179858@sparrow.telecommunity.com>	<5.1.1.6.0.20060621163853.0392d168@sparrow.telecommunity.com>	<449A7A48.5060404@egenix.com>	<5.1.1.6.0.20060622092032.0338e2f0@sparrow.telecommunity.com>	<5.1.1.6.0.20060622125053.02f3d308@sparrow.telecommunity.com>
	<ca471dc20606221044g6701d2c9wd155ab003753249@mail.gmail.com>
Message-ID: <449AE75A.7050302@egenix.com>

Guido van Rossum wrote:
> Without const declarations none of this can work and
> the "at-function-definition-time" freezing is the best, because most
> predictable, approach IMO.

If you like this approach best, then how about using the same
approach as we have for function default argument values:

Variables which are to be regarded as constant within the
scope of the function are declared as such by using a "const"
declaration (much like we already have with the
global declaration).

a,b,c,d = range(4)
defvalue = 1

def switch(x=defvalue):
    const a,b,c,d
    switch x:
        case a: return 'foo'
        case b: return 'foo'
        case c: return 'foo'
        case d: return 'foo'
        else: raise ValueError(x)

This declaration would cause the compiler to generate
LOAD_NAME opcodes just like for defvalue which then gets
executed at code object execution time, ie. when the
function is created.
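
(You can see the existing behaviour for default values with dis; the
LOAD_NAME for defvalue is executed when the def statement itself runs:

     >>> import dis
     >>> dis.dis(compile("def f(x=defvalue): pass", "<example>", "exec"))

The const names would simply get the same treatment.)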

This would also work out for the solution 1 case
in the PEP (if-elif-else)... <hint><hint> :-)

def switch(x=defvalue):
    const a,b,c,d
    if x == a: return 'foo'
    elif x == b: return 'bar'
    elif x == c: return 'baz'
    elif x == d: return 'bazbar'
    else: raise ValueError(x)

Furthermore, the compiler could protect the constant
names from assignments (much in the same way it
applies special treatment to variables declared global
in a scope).

A nice side-effect would be that one could easily use the
same approach to replace the often-used default-argument hack,
e.g.

def fraction(x, int=int, float=float):
    return float(x) - int(x)

This would then read:

def fraction(x):
    const int, float
    return float(x) - int(x)

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 22 2006)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________
2006-07-03: EuroPython 2006, CERN, Switzerland              10 days left

::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! ::::

From rwgk at yahoo.com  Thu Jun 22 20:55:25 2006
From: rwgk at yahoo.com (Ralf W. Grosse-Kunstleve)
Date: Thu, 22 Jun 2006 11:55:25 -0700 (PDT)
Subject: [Python-Dev] PyRange_New() alternative?
In-Reply-To: <e7dc3v$ojo$1@sea.gmane.org>
Message-ID: <20060622185525.93734.qmail@web31501.mail.mud.yahoo.com>

--- Georg Brandl <g.brandl at gmx.net> wrote:

> Ralf W. Grosse-Kunstleve wrote:
> > http://docs.python.org/dev/whatsnew/ports.html says:
> > 
> >   The PyRange_New() function was removed. It was never documented, never
> used
> > in the core code, and had dangerously lax error checking.
> > 
> > I use this function (don't remember how I found it; this was years ago),
> > therefore my code doesn't compile with 2.5b1 (it did OK before with 2.5a2).
> Is
> > there an alternative spelling for PyRange_New()?
> 
> You can call PyRange_Type with the appropriate parameters.

Thanks a lot for the hint! However, I cannot find any documentation for
PyRange_*. I tried this page...

  http://docs.python.org/api/genindex.html

and google. Did I miss something?

I am sure I can get this to work with some digging, but I am posting here to
highlight a communication problem. I feel that if a function is removed, the
alternative should be made obvious in the associated documentation,
particularly if there is no existing documentation for the alternative.



From fredrik at pythonware.com  Thu Jun 22 21:05:15 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Thu, 22 Jun 2006 21:05:15 +0200
Subject: [Python-Dev] Switch statement
In-Reply-To: <ca471dc20606221138h25529dbbv977128e20852d682@mail.gmail.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>	<ca471dc20606210926i2681b27erff515fe4b96022cf@mail.gmail.com>	<e7bssg$hke$1@sea.gmane.org>	<ca471dc20606210947s1490bb66s6f040c78239dc623@mail.gmail.com>	<e7c135$4ql$1@sea.gmane.org>
	<e7c4tl$kq7$1@sea.gmane.org>	<ca471dc20606211321k624fb425l9174efb9bd43f3e2@mail.gmail.com>	<e7eg3m$lrh$1@sea.gmane.org>	<ca471dc20606220945v6ba2565at7a7a82f9cbd01d65@mail.gmail.com>	<e7en3j$fr7$1@sea.gmane.org>
	<ca471dc20606221138h25529dbbv977128e20852d682@mail.gmail.com>
Message-ID: <e7epl9$p0g$1@sea.gmane.org>

Guido van Rossum wrote:

>> well, I find the proposed magic behaviour of "case" at least as confusing...
> 
> It's not magic if it can be explained. "def goes over all the cases
> and evaluates them in the surrounding scope and freezes the meaning of
> the cases that way as long as the function object survives" is not
> magic.

well, people find "def goes over all default values and evaluates them 
in the surrounding scope (etc)" pretty confusing, and the default values 
are all part of the function header.  here you're doing the same thing 
for some expressions *inside* the function body, but not all.  it might 
be easy to explain, but I don't think it's easy to internalize.

> I'm still confused how this wrapper would be used at run time.
> (Because at compile time we *don't* generally know whether a
> particular value contains a const wrapper or not.)

oh, it would require the compiler to check for const-ness on globals 
when the function object is created, which would work for simple names, 
and require some yet-to-be-determined-handwaving-hackery for anything 
else...

>>>>> - local dispatch tables, and other generated-but-static data structures
>>      def foo(value):
>>          table = const {
>>              1: "one",
>>              2: "two",
>>              3: fie.fum,
>>          }
>>
>> (maybe "static" would be a better keyword?)
> 
> At least it resembles the corresponding C keyword better than 'const'.
> 
> 'static' tells me something useful (at least if I know C/C++/Java).
 >
> And I have some idea on how to implement it (not so different from the
> def-time switch freezing).
> 
> However it should be
> 
>   static table = {...}

I'm not sure it should, actually -- the primary form is more flexible, 
and it better matches how things work: it's the expression that's 
special, not the variable.

and things like

     radian = degree * static (math.pi / 180)

would be pretty nice, for those of us who like our Python fast.

> But I don't see how this would require the const-wrapper.

it wouldn't.

> And I still think that this is not as nice as def-time freezing
> switches; static or const causes clumsy syntax when importing
> constants from another module since you have to repeat the const-ness
> for each imported constant in each importing module.

well, the point is that you only have to spell it out if you actually 
care about things being constant/static/evaluated once/early, and when 
you do, it's always obvious to the reader what you're doing.

</F>


From python-dev at zesty.ca  Thu Jun 22 21:07:12 2006
From: python-dev at zesty.ca (Ka-Ping Yee)
Date: Thu, 22 Jun 2006 14:07:12 -0500 (CDT)
Subject: [Python-Dev] Switch statement
In-Reply-To: <ca471dc20606211027q652c84cdp997c2ef92c0eab42@mail.gmail.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<20060611010410.GA5723@21degrees.com.au>
	<5.1.1.6.0.20060618235252.03a71680@sparrow.telecommunity.com>
	<44988C6E.4080806@canterbury.ac.nz> <449920A4.7040008@gmail.com>
	<5.1.1.6.0.20060621121821.03275ca8@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621130606.0373c788@sparrow.telecommunity.com>
	<ca471dc20606211027q652c84cdp997c2ef92c0eab42@mail.gmail.com>
Message-ID: <Pine.LNX.4.58.0606221353560.17937@server1.LFW.org>

On Wed, 21 Jun 2006, Guido van Rossum wrote:
> I worry (a bit) about this case:
>
>   y = 12
>   def foo(x, y):
>     switch x:
>     case y: print "something"
>
> which to the untrained observer (I care about untrained readers much
> more than about untrained writers!) looks like it would print
> something if x equals y, the argument, while in fact it prints
> something if x equals 12.

I am quite concerned about this case too.  I think if Python were
to behave this way, it would be a new pitfall for people learning
the language -- like other pitfalls such as using unbound locals,
mutable default arguments, or the historical non-nested scopes.
I'm not saying the other pitfalls don't have good reasons -- some
are outweighed by other design advantages (unbound locals are a
consequence of having no variable declarations) and some have
since been fixed (like nested scopes).  But i'd be wary of adding
a new pitfall to that list without a very substantial win.

> Me too. I guess I was just pointing out that "just" evaluating it in
> the global scope would not give an error, just like this is valid (but
> confusing):
>
> y = 12
> def foo(y=y):
>   print y
> y = 13
> foo()  # prints 12

I see how frozen-cases and default-arguments could have comparable
semantics, but i do think frozen-cases are more confusing.  In this
default-arguments example, there is at least a hint from the syntax
that we're introducing a new local variable, so there is a landmark
where the reader can hang the mental note that a new thing is being
introduced.  Also, it is easier to see that default arguments are
being fixed at function-definition time because their value
expressions are localized in the source code in the "def" line, a
line that makes sense to be evaluating at definition time.

For frozen-cases, you don't have this kind of landmark, and the bits
that are evaluated at function-definition time are scattered and
mixed with the rest of the function evaluated at function-call time.
That's pretty subtle; i can't think of any other Python construct
off the top of my head that mixes evaluation times like that.  (Yes,
the compiler does optimize literals, but it's done in a way that
doesn't affect semantics.)


-- ?!ng

From theller at python.net  Thu Jun 22 21:13:06 2006
From: theller at python.net (Thomas Heller)
Date: Thu, 22 Jun 2006 21:13:06 +0200
Subject: [Python-Dev] test_ctypes failure on Mac OS X/PowerPC 10.3.9
	(Panther)
In-Reply-To: <3B96F3FF-B847-4E57-ACC2-F7D979DCA5BA@mac.com>
References: <44982ADE.5070404@activestate.com>
	<4498393E.1020101@python.net>	<71C6D0BF-B569-48BD-9F9D-05D3660FF2AA@mac.com>
	<3B96F3FF-B847-4E57-ACC2-F7D979DCA5BA@mac.com>
Message-ID: <449AEBC2.1070706@python.net>

Ronald Oussoren schrieb:
> On 20-jun-2006, at 20:50, Ronald Oussoren wrote:
> 
>>
>> On 20-jun-2006, at 20:06, Thomas Heller wrote:
>>
>>> Trent Mick schrieb:
>>>> Thomas and others,
>>>>
>>>> Has anyone else seen failures in test_ctypes on older Mac OS X/
>>>> PowerPC?
>>>> Results are below. This is running a build of the trunk from last
>>>> night:
>>>>
>>>> 	./configure && make && ./python.exe Lib/test/test_ctypes.py
>>>>
>>>> Note that the test does NOT fail on the Mac OS X/x86 10.4.6 box
>>>> that I have.
>>>
>>> It also works on 10.4.?? Power PC.  I guess the fix has to wait until
>>> I'm able to install 10.3 on my mac, I have the DVDs already but
>>> have not
>>> yet had the time.  If anyone is willing to give me ssh access to a
>>> 10.3
>>> box I can try to fix this earlier.
>>
>> I had some problems with my 10.3-capable box, but happily enough it
>> decided to come alive again. I'm currently booted into 10.3.9 and
>> will have a look.
> 
> It is a platform bug, RTLD_LOCAL doesn't work on 10.3. The following
> C snippet fails with the same error as ctypes: FAIL: dlcompat: unable
> to open this file with RTLD_LOCAL. This seems to be confirmed by this
> source test file from darwin:
> http://darwinsource.opendarwin.org/10.4.1/dyld-43/unit-tests/test-cases/dlopen-RTLD_LOCAL/main.c.

Hm, what does this mean, and how can the test be repaired?  Maybe I have 
to wait until I can play with Panther to understand this issue...

Would loading the dylib with RTLD_GLOBAL work (or any other flags)?
Does RTLD_LOCAL not work with dylibs, but with other libraries (are 
there any)?

Thomas


From g.brandl at gmx.net  Thu Jun 22 21:16:30 2006
From: g.brandl at gmx.net (Georg Brandl)
Date: Thu, 22 Jun 2006 21:16:30 +0200
Subject: [Python-Dev] PyRange_New() alternative?
In-Reply-To: <20060622185525.93734.qmail@web31501.mail.mud.yahoo.com>
References: <e7dc3v$ojo$1@sea.gmane.org>
	<20060622185525.93734.qmail@web31501.mail.mud.yahoo.com>
Message-ID: <e7eqae$qsq$1@sea.gmane.org>

Ralf W. Grosse-Kunstleve wrote:
> --- Georg Brandl <g.brandl at gmx.net> wrote:
> 
>> Ralf W. Grosse-Kunstleve wrote:
>> > http://docs.python.org/dev/whatsnew/ports.html says:
>> > 
>> >   The PyRange_New() function was removed. It was never documented, never
>> used
>> > in the core code, and had dangerously lax error checking.
>> > 
>> > I use this function (don't remember how I found it; this was years ago),
>> > therefore my code doesn't compile with 2.5b1 (it did OK before with 2.5a2).
>> Is
>> > there an alternative spelling for PyRange_New()?
>> 
>> You can call PyRange_Type with the appropriate parameters.
> 
> Thanks a lot for the hint! However, I cannot find any documentation for
> PyRange_*. I tried this page...
> 
>   http://docs.python.org/api/genindex.html
> 
> and google. Did I miss something?
> 
> I am sure I can get this to work with some digging, but I am posting here to
> highlight a communication problem. I feel if a function is removed the
> alternative should be made obvious in the associated documentation; in
> particular if there is no existing documentation for the alternative.

Well, PyRange_New *was* undocumented, so there's no place in the documentation
where it would have been.

However, it would perhaps be helpful to add a note to the whatsnew document
for users like yourself. Andrew, does that make sense?

Georg


From bob at redivi.com  Thu Jun 22 21:21:56 2006
From: bob at redivi.com (Bob Ippolito)
Date: Thu, 22 Jun 2006 12:21:56 -0700
Subject: [Python-Dev] PyRange_New() alternative?
In-Reply-To: <20060622185525.93734.qmail@web31501.mail.mud.yahoo.com>
References: <20060622185525.93734.qmail@web31501.mail.mud.yahoo.com>
Message-ID: <A966EC37-FD5D-4E81-AAB3-43284593DE4A@redivi.com>


On Jun 22, 2006, at 11:55 AM, Ralf W. Grosse-Kunstleve wrote:

> --- Georg Brandl <g.brandl at gmx.net> wrote:
>
>> Ralf W. Grosse-Kunstleve wrote:
>>> http://docs.python.org/dev/whatsnew/ports.html says:
>>>
>>>   The PyRange_New() function was removed. It was never  
>>> documented, never
>> used
>>> in the core code, and had dangerously lax error checking.
>>>
>>> I use this function (don't remember how I found it; this was  
>>> years ago),
>>> therefore my code doesn't compile with 2.5b1 (it did OK before  
>>> with 2.5a2).
>> Is
>>> there an alternative spelling for PyRange_New()?
>>
>> You can call PyRange_Type with the appropriate parameters.
>
> Thanks a lot for the hint! However, I cannot find any documentation  
> for
> PyRange_*. I tried this page...
>
>   http://docs.python.org/api/genindex.html
>
> and google. Did I miss something?
>
> I am sure I can get this to work with some digging, but I am  
> posting here to
> highlight a communication problem. I feel if a function is removed the
> alternative should be made obvious in the associated documentation; in
> particular if there is no existing documentation for the alternative.

He means something like this:
PyObject_CallFunction((PyObject *)&PyRange_Type, "lll", ...)

-bob


From guido at python.org  Thu Jun 22 21:24:04 2006
From: guido at python.org (Guido van Rossum)
Date: Thu, 22 Jun 2006 12:24:04 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <e7epl9$p0g$1@sea.gmane.org>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<ca471dc20606210947s1490bb66s6f040c78239dc623@mail.gmail.com>
	<e7c135$4ql$1@sea.gmane.org> <e7c4tl$kq7$1@sea.gmane.org>
	<ca471dc20606211321k624fb425l9174efb9bd43f3e2@mail.gmail.com>
	<e7eg3m$lrh$1@sea.gmane.org>
	<ca471dc20606220945v6ba2565at7a7a82f9cbd01d65@mail.gmail.com>
	<e7en3j$fr7$1@sea.gmane.org>
	<ca471dc20606221138h25529dbbv977128e20852d682@mail.gmail.com>
	<e7epl9$p0g$1@sea.gmane.org>
Message-ID: <ca471dc20606221224w6a48e471ub26ff55c457a1a70@mail.gmail.com>

On 6/22/06, Fredrik Lundh <fredrik at pythonware.com> wrote:
> Guido van Rossum wrote:
> > It's not magic if it can be explained. "def goes over all the cases
> > and evaluates them in the surrounding scope and freezes the meaning of
> > the cases that way as long as the function object survives" is not
> > magic.
>
> well, people find "def goes over all default values and evaluates them
> in the surrounding scope (etc)" pretty confusing, and the default values
> are all part of the function header.

I think it's more surprising than confusing, in the same way as
mutable class variables are, or the sharing that follows from "a = [];
b = a".

The switch proposal has less opportunity for this particular surprise
because the case expressions must be immutable (or at least hashable,
which pretty much boils down to the same thing).

> here you're doing the same thing
> for some expressions *inside* the function body, but not all.  it might
> be easy to explain, but I don't think it's easy to internalize.

It's hard to see how it will lead to actual surprises given even only
moderately decent coding style (which would imply not changing global
variables as implied "parameters").

> > I'm still confused how this wrapper would be used at run time.
> > (Because at compile time we *don't* generally know whether a
> > particular value contains a const wrapper or not.)
>
> oh, it would require the compiler to check for const-ness on globals
> when the function object is created, which would work for simple names,
> and require some yet-to-be-determined-handwaving-hackery for anything
> else...

I'd like to see more examples that show how it works. Some simple
"this works, that doesn't, because..." demos.

> >>>>> - local dispatch tables, and other generated-but-static data structures
> >>      def foo(value):
> >>          table = const {
> >>              1: "one",
> >>              2: "two",
> >>              3: fie.fum,
> >>          }
> >>
> >> (maybe "static" would be a better keyword?)
> >
> > At least it resembles the corresponding C keyword better than 'const'.
> >
> > 'static' tells me something useful (at least if I know C/C++/Java).
> >
> > And I have some idea on how to implement it (not so different from the
> > def-time switch freezing).
> >
> > However it should be
> >
> >   static table = {...}
>
> I'm not sure it should, actually -- the primary form is more flexible,
> and it better matches how things work: it's the expression that's
> special, not the variable.

OK, I think I see how this works. You pre-compute the expression at
def-time, squirrel it away in a hidden field on the function object,
and assign it to a local each time the statement is executed. So this
would be allowed?

  a = static 1
  a = static 2  # same variable

> and things like
>
>      radian = degree * static (math.pi / 180)
>
> would be pretty nice, for those of us who like our Python fast.

No argument there.
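
For concreteness, the effect of that 'static' expression can be roughly
emulated in today's Python with the familiar default-argument hack -- a
minimal sketch, with an invented function name:

     import math

     # the default is evaluated once, at def time, which is roughly what
     # 'static (math.pi / 180)' asks for
     def to_radians(degree, _factor=math.pi / 180):
         return degree * _factor

     assert abs(to_radians(180) - math.pi) < 1e-12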

> > And I still think that this is not as nice as def-time freezing
> > switches; static or const causes clumsy syntax when importing
> > constants from another module since you have to repeat the const-ness
> > for each imported constant in each importing module.
>
> well, the point is that you only have to spell it out if you actually
> care about things being constant/static/evaluated once/early, and when
> you do, it's always obvious for the reader what you're doing.

Unfortunately this would probably cause people to write

  switch x:
    case static re.DOTALL: ...
    case static re.IGNORECASE: ...

which is just more work to get the same effect as the
def-time-switch-freezing proposal.

I'm also unclear on what you propose this would do *without* the
statics. Would it be a compile-time error? Compile the dict each time
the switch is executed? Degenerate to an if/elif chain? Then what if x
is unhashable? What if *some* cases are static and others aren't?

Also, do you still see any use for the const wrapper that you brought
up earlier? I don't at this point.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From g.brandl at gmx.net  Thu Jun 22 21:19:48 2006
From: g.brandl at gmx.net (Georg Brandl)
Date: Thu, 22 Jun 2006 21:19:48 +0200
Subject: [Python-Dev] Switch statement
In-Reply-To: <449AE75A.7050302@egenix.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>	<44988C6E.4080806@canterbury.ac.nz>	<449920A4.7040008@gmail.com>	<5.1.1.6.0.20060621121821.03275ca8@sparrow.telecommunity.com>	<5.1.1.6.0.20060621130606.0373c788@sparrow.telecommunity.com>	<5.1.1.6.0.20060621133734.03179858@sparrow.telecommunity.com>	<5.1.1.6.0.20060621163853.0392d168@sparrow.telecommunity.com>	<449A7A48.5060404@egenix.com>	<5.1.1.6.0.20060622092032.0338e2f0@sparrow.telecommunity.com>	<5.1.1.6.0.20060622125053.02f3d308@sparrow.telecommunity.com>	<ca471dc20606221044g6701d2c9wd155ab003753249@mail.gmail.com>
	<449AE75A.7050302@egenix.com>
Message-ID: <e7eqgk$qsq$2@sea.gmane.org>

M.-A. Lemburg wrote:

> A nice side-effect would be that could easily use the
> same approach to replace the often used default-argument-hack,
> e.g.
> 
> def fraction(x, int=int, float=float):
>     return float(x) - int(x)
> 
> This would then read:
> 
> def fraction(x):
>     const int, float
>     return float(x) - int(x)

There's a certain risk that the premature-optimization fraction will
plaster every function with const declarations, but they write
unreadable code anyway ;)

Aside from this, there's still another point: assume you have quite a
number of module-level string "constants" which you want to use in a switch.
You'd have to repeat all of their names in a "const" declaration in order
to use them this way.

Georg


From pje at telecommunity.com  Thu Jun 22 21:24:30 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Thu, 22 Jun 2006 15:24:30 -0400
Subject: [Python-Dev] Switch statement
In-Reply-To: <ca471dc20606221152j5c3f12c7oe9a36c297b32e0eb@mail.gmail.co
 m>
References: <5.1.1.6.0.20060622135620.01ea9bf8@sparrow.telecommunity.com>
	<17547.19802.361151.705599@montanaro.dyndns.org>
	<449920A4.7040008@gmail.com>
	<5.1.1.6.0.20060621121821.03275ca8@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621130606.0373c788@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621133734.03179858@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621163853.0392d168@sparrow.telecommunity.com>
	<449A7A48.5060404@egenix.com>
	<5.1.1.6.0.20060622092032.0338e2f0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622125053.02f3d308@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622135620.01ea9bf8@sparrow.telecommunity.com>
Message-ID: <5.1.1.6.0.20060622150243.03b0bd70@sparrow.telecommunity.com>

At 11:52 AM 6/22/2006 -0700, Guido van Rossum wrote:
>On 6/22/06, Phillip J. Eby <pje at telecommunity.com> wrote:
>>I think one of the problems I sometimes have in communicating with you is
>>that I think out stuff from top to bottom of an email, and sometimes
>>discard working assumptions once they're no longer needed.  We then end up
>>having arguments over ideas I already discarded, because you find the
>>problems with them faster than I do, and you assume that those problems
>>carry through to the end of my message.  :)
>
>You *do* have a text editor that lets you go back to the top of the
>draft to remove discarded ideas, don't you? :-)

Well, usually the previous idea seems an essential part of figuring out the 
new idea, and showing why the new idea is better.  At least the way I think 
about it.  But now that I've noticed this seems to be a recurring theme in 
our discussions, I'll try to be more careful.


>It's a reasonable form of discourse to propose an idea only to shoot
>it down, but usually this is introduced by some phrase that hints to
>the reader what's going to happen. You can't expect the reader to read
>the entire email before turning on their brain. :)

Well, you can't expect me to know ahead of time what ideas I'm going to 
discard before I've had the ideas that will replace them.  ;-)  But again, 
I'll be more careful in future about retroactively adding such warnings or 
removing the old ideas entirely.


>>1. "case (literal|NAME)" is the syntax for equality testing -- you can't
>>use an arbitrary expression, not even a dotted name.
>
>But dotted names are important! E.g. case re.DOTALL. And sometimes
>compile-time constant expressions are too. Example: case sys.maxint-1.

True - but at least you *can* use them, with "from re import DOTALL" and 
"maxint_less_1 = sys.maxint-1".  You're just required to disambiguate 
*when* the calculation of these values is to be performed.


>>2. NAME, if used, must be bound at most once in its defining scope
>
>That's fine -- but doesn't extend to dotted names.

Right, hence #1.


>>3. Dictionary optimization can occur only for literals and names not bound
>>in the local scope, others must use if-then.
>
>So this wouldn't be optimized?!
>
>NL = "\n"
>for line in sys.stdin:
>  switch line:
>    "abc\n": ...
>    NL: ...

This would result in a switch dictionary with "abc\n" in it, preceded by an 
if line==NL test.  So it's half-optimized.  The more literals, the more 
optimized.  If you put the same switch in a function body, it becomes fully 
optimized if the NL binding stays outside the function definition.
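
To make the intended translation concrete, here is a rough hand-written
sketch of what that half-optimized form would look like (the handler
functions are invented purely for illustration):

     import sys

     NL = "\n"

     def handle_abc(): pass          # invented handlers, illustration only
     def handle_nl(): pass

     _cases = {"abc\n": handle_abc}  # literal cases: precomputed dict

     for line in sys.stdin:
         if line == NL:              # NL is bound in this scope -> if test
             handle_nl()
         elif line in _cases:        # remaining literals: dict lookup
             _cases[line]()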

Note that you previously proposed a switch at top level not be optimized at 
all, so this is an improvement over that.


>I like it better than const declarations, but I don't like it as much
>as the def-time-switch-freezing proposal; I find the limitation to
>simple literals and names too restrictive, and there isn't anything
>else like that in Python.

Well, you can't "def" a dotted name, but I realize this isn't a binding.


>I also don't like the possibility that it
>degenerates to if/elif. I like predictability.

It is predictable: anything defined in the same scope will be if/elif, 
anything defined outside will be dict-switched.


>I like to be able to switch on dotted names.
>Also, when using a set in a case, one should be able to use an
>expression like s1|s2 in a case.

...which then gets us back to the question of when the dots or "|" are 
evaluated.  My proposal forces you to make the evaluation time explicit, 
visible, and unquestionably obvious in the source, rather than relying on 
invisible knowledge about the function definition time.

"First time use" is also a more visible approach, because it does not 
contradict the user's assumption that evaluation takes place where the 
expression appears.  The "invisible" assumption is only that subsequent 
execution will reuse the same expression results without recalculating them 
-- it doesn't *move* the evaluation somewhere else.

I seem to recall that in general, Python prefers to evaluate expressions in 
the order that they appear in source code, and that we try to preserve that 
property as much as possible.  Both the "names and literals only" and 
"first-time use" approaches preserve that property; "function definition 
time" does not.

Of course, it's up to you to weigh the cost and benefit; I just wanted to 
bring this one specific factor (transparency of the source) to your 
attention.  This whole "const" thread was just me trying to find another 
approach besides "first-time use" that preserves that visibility property 
for readers of the code.


From g.brandl at gmx.net  Thu Jun 22 21:20:59 2006
From: g.brandl at gmx.net (Georg Brandl)
Date: Thu, 22 Jun 2006 21:20:59 +0200
Subject: [Python-Dev] Switch statement
In-Reply-To: <e7epl9$p0g$1@sea.gmane.org>
References: <17547.19802.361151.705599@montanaro.dyndns.org>	<ca471dc20606210926i2681b27erff515fe4b96022cf@mail.gmail.com>	<e7bssg$hke$1@sea.gmane.org>	<ca471dc20606210947s1490bb66s6f040c78239dc623@mail.gmail.com>	<e7c135$4ql$1@sea.gmane.org>	<e7c4tl$kq7$1@sea.gmane.org>	<ca471dc20606211321k624fb425l9174efb9bd43f3e2@mail.gmail.com>	<e7eg3m$lrh$1@sea.gmane.org>	<ca471dc20606220945v6ba2565at7a7a82f9cbd01d65@mail.gmail.com>	<e7en3j$fr7$1@sea.gmane.org>	<ca471dc20606221138h25529dbbv977128e20852d682@mail.gmail.com>
	<e7epl9$p0g$1@sea.gmane.org>
Message-ID: <e7eqir$qsq$3@sea.gmane.org>

Fredrik Lundh wrote:

> I'm not sure it should, actually -- the primary form is more flexible, 
> and it better matches how things work: it's the expression that's 
> special, not the variable.
> 
> and things like
> 
>      radian = degree * static (math.pi / 180)
> 
> would be pretty nice, for those of us who like our Python fast.

Nice approach, though I wonder if it's unambiguous enough without
some parenthesizing.

Georg


From python-dev at zesty.ca  Thu Jun 22 21:30:29 2006
From: python-dev at zesty.ca (Ka-Ping Yee)
Date: Thu, 22 Jun 2006 14:30:29 -0500 (CDT)
Subject: [Python-Dev] Switch statement
In-Reply-To: <ca471dc20606210926i2681b27erff515fe4b96022cf@mail.gmail.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<4496FB54.5060800@ewtllc.com>
	<ca471dc20606191247l183eaf0eh6012600500ea311b@mail.gmail.com>
	<449707E2.7060803@ewtllc.com>
	<ca471dc20606191329p1a978983g2c3ea7401213d7dd@mail.gmail.com>
	<4498924F.5000508@canterbury.ac.nz>
	<5.1.1.6.0.20060620210402.01e9bc78@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621012733.01e8f660@sparrow.telecommunity.com>
	<Pine.LNX.4.58.0606210334470.17937@server1.LFW.org>
	<5.1.1.6.0.20060621082808.01e90d18@sparrow.telecommunity.com>
	<ca471dc20606210926i2681b27erff515fe4b96022cf@mail.gmail.com>
Message-ID: <Pine.LNX.4.58.0606221413300.17937@server1.LFW.org>

On Wed, 21 Jun 2006, Guido van Rossum wrote:
> (Note how I've switched to the switch-for-efficiency camp, since it
> seems better to have clear semantics and a clear reason for the syntax
> to be different from if/elif chains.)

I don't think switch-for-efficiency (at least if efficiency is the
primary design motivator) makes sense without some strong evidence
that the use of if/elif constructs often causes a severe speed problem
in many Python programs.  (Premature optimization and all that.)
Long if/elif chains probably don't occur often enough or slow down
programs enough to invent syntax *just* for speed; and even if they
did, i don't think there's any precedent for a Python statement
invented primarily as a speed optimization.

I'm hoping we can talk more about the question: How can a new statement
help programmers express their intent more clearly?

So far i've seen two possible answers to that question:

    1.  The switched-on expression is written and evaluated just once.

    2.  The cases could help you unpack things into local variables.

(There was some discussion about unpacking earlier:
http://mail.python.org/pipermail/python-dev/2005-April/052780.html
which petered out, though there may still be the possibility of
designing something quite useful and readable.)

Any others?


-- ?!ng

From guido at python.org  Thu Jun 22 21:54:29 2006
From: guido at python.org (Guido van Rossum)
Date: Thu, 22 Jun 2006 12:54:29 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <5.1.1.6.0.20060622150243.03b0bd70@sparrow.telecommunity.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<5.1.1.6.0.20060621121821.03275ca8@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621130606.0373c788@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621133734.03179858@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621163853.0392d168@sparrow.telecommunity.com>
	<449A7A48.5060404@egenix.com>
	<5.1.1.6.0.20060622092032.0338e2f0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622125053.02f3d308@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622135620.01ea9bf8@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622150243.03b0bd70@sparrow.telecommunity.com>
Message-ID: <ca471dc20606221254v268b6769t28aa1b1345e69bed@mail.gmail.com>

On 6/22/06, Phillip J. Eby <pje at telecommunity.com> wrote:
> >>[Phillip]
> >>1. "case (literal|NAME)" is the syntax for equality testing -- you can't
> >>use an arbitrary expression, not even a dotted name.
> >[Guido]
> >But dotted names are important! E.g. case re.DOTALL. And sometimes
> >compile-time constant expressions are too. Example: case sys.maxint-1.
> [Phillip]
> True - but at least you *can* use them, with "from re import DOTALL" and
> "maxint_less_1 = sys.maxint-1".  You're just required to disambiguate
> *when* the calculation of these values is to be performed.

Yeah, but the result is a quite crippled case expression that's not
like anything in Python.

> >>2. NAME, if used, must be bound at most once in its defining scope
> >
> >That's fine -- but doesn't extend to dotted names.
>
> Right, hence #1.

Which I don't like.

(I know, I'm repeating myself here. Better than contradicting myself. :-)

> >>3. Dictionary optimization can occur only for literals and names not bound
> >>in the local scope, others must use if-then.
> >
> >So this wouldn't be optimized?!
> >
> >NL = "\n"
> >for line in sys.stdin:
> >  switch line:
> >    "abc\n": ...
> >    NL: ...
>
> This would result in a switch dictionary with "abc\n" in it, preceded by an
> if line==NL test.  So it's half-optimized.  The more literals, the more
> optimized.  If you put the same switch in a function body, it becomes fully
> optimized if the NL binding stays outside the function definition.

That still seems really weird, especially if you consider the whole
thing already being inside a def(). It would optimize references to
non-locals but not references to locals...?

> Note that you previously proposed a switch at top level not be optimized at
> all, so this is an improvement over that.

I don't particularly care about top-level switches; I don't expect
they'll be used much and I don't expect people to care about their
speed much. A for loop using some local variables is also quite slow
outside a function; if anybody complains we just tell them to put it
in a function.

I do care about switch/case being easy to use and flexible in likely
use cases, which include using constants defined in a different
module.

> >I like it better than const declarations, but I don't like it as much
> >as the def-time-switch-freezing proposal; I find the limitation to
> >simple literals and names too restrictive, and there isn't anything
> >else like that in Python.
>
> Well, you can't "def" a dotted name, but I realize this isn't a binding.

You could have left that out of your email and we'd all have been happier. :-)

> >I also don't like the possibility that it
> >degenerates to if/elif. I like predictability.
>
> It is predictable: anything defined in the same scope will be if/elif,
> anything defined outside will be dict-switched.

But that's pretty subtle. I'd much rather see a rule that
*effectively* rules out non-constant cases completely. IMO the
def-time-switch-freeze proposal does this.

> >I like to be able to switch on dotted names.
> >Also, when using a set in a case, one should be able to use an
> >expression like s1|s2 in a case.
>
> ...which then gets us back to the question of when the dots or "|" are
> evaluated.  My proposal forces you to make the evaluation time explicit,
> visible, and unquestionably obvious in the source, rather than relying on
> invisible knowledge about the function definition time.

At the cost of more convoluted code. It means in many cases I'd
probably continue to use if/elif chains because refactoring it into a
switch is too much effort. Thereby relegating switch to something only
used by speed freaks. While I want my switch to be fast, I don't want
it to be a freak.

> "First time use" is also a more visible approach, because it does not
> contradict the user's assumption that evaluation takes place where the
> expression appears.  The "invisible" assumption is only that subsequent
> execution will reuse the same expression results without recalculating them
> -- it doesn't *move* the evaluation somewhere else.

Have you made up your mind yet where the result of the first-time
evaluated value should be stored? On the function object? That implies
that it doesn't help for inner defs that are called only once per
definition (like certain callback patterns).

> I seem to recall that in general, Python prefers to evaluate expressions in
> the order that they appear in source code, and that we try to preserve that
> property as much as possible.  Both the "names and literals only" and
> "first-time use" approaches preserve that property; "function definition
> time" does not.

But first-time has the very big disadvantage IMO that there's no
safeguard to warn you that the value is different on a subsequent
execution -- you just get the old value without warning.
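
A tiny runnable illustration of that pitfall, with invented names (the
list stands in for the hidden first-use cache):

     DEBUG = True
     _frozen = []

     def log(msg):
         if not _frozen:
             _frozen.append(DEBUG)   # captured on the first call only
         if _frozen[0]:
             print msg

     log("shown")                    # freezes DEBUG == True
     DEBUG = False                   # silently ignored from now on
     log("still shown")              # the old value is reused, no warning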

> Of course, it's up to you to weigh the cost and benefit; I just wanted to
> bring this one specific factor (transparency of the source) to your
> attention.  This whole "const" thread was just me trying to find another
> approach besides "first-time use" that preserves that visibility property
> for readers of the code.

Summarizing our disagreement, I think you feel that
freeze-on-first-use is most easily explained and understood while I
feel that freeze-at-def-time is more robust. I'm not sure how to get
past this point except by stating that you haven't convinced me... I
think it's time to sit back and wait for someone else to weigh in with
a new argument.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Thu Jun 22 22:00:15 2006
From: guido at python.org (Guido van Rossum)
Date: Thu, 22 Jun 2006 13:00:15 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <Pine.LNX.4.58.0606221413300.17937@server1.LFW.org>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<449707E2.7060803@ewtllc.com>
	<ca471dc20606191329p1a978983g2c3ea7401213d7dd@mail.gmail.com>
	<4498924F.5000508@canterbury.ac.nz>
	<5.1.1.6.0.20060620210402.01e9bc78@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621012733.01e8f660@sparrow.telecommunity.com>
	<Pine.LNX.4.58.0606210334470.17937@server1.LFW.org>
	<5.1.1.6.0.20060621082808.01e90d18@sparrow.telecommunity.com>
	<ca471dc20606210926i2681b27erff515fe4b96022cf@mail.gmail.com>
	<Pine.LNX.4.58.0606221413300.17937@server1.LFW.org>
Message-ID: <ca471dc20606221300s2044e3b4r331f9c16ff997b77@mail.gmail.com>

On 6/22/06, Ka-Ping Yee <python-dev at zesty.ca> wrote:
> On Wed, 21 Jun 2006, Guido van Rossum wrote:
> > (Note how I've switched to the switch-for-efficiency camp, since it
> > seems better to have clear semantics and a clear reason for the syntax
> > to be different from if/elif chains.)
>
> I don't think switch-for-efficiency (at least if efficiency is the
> primary design motivator) makes sense without some strong evidence
> that the use of if/elif constructs often causes a severe speed problem
> in many Python programs.  (Premature optimization and all that.)
> Long if/elif chains probably don't occur often enough or slow down
> programs enough to invent syntax *just* for speed; and even if they
> did, i don't think there's any precedent for a Python statement
> invented primarily as a speed optimization.

My position is more nuanced than that. I like having a switch because
it can make certain types of code more readable. (I keep referring to
sre_parse.py and sre_compile.py -- has anyone else looked at these at
all?) But I also like switch because it can be implemented using a
single dict lookup with a pre-computed dict, if certain conditions on
the cases are met. I find that those conditions generally met by code
that lends itself for using a switch. I believe that switch would be
better understood if it *always* precomputed the dict, and failed
immediately with an exception (at run time) if the precompilation
wasn't feasible.
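
For concreteness, that behaviour can be mimicked today by building the
table eagerly, so an unhashable case blows up at once instead of silently
degrading (helper name invented):

     def build_switch(cases):
         table = {}
         for value, action in cases:
             table[value] = action    # unhashable value -> TypeError here
         return table

     table = build_switch([(1, "one"), ("x", "letter")])
     # build_switch([([1, 2], "list")])   # would fail immediately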

> I'm hoping we can talk more about the question: How can a new statement
> help programmers express their intent more clearly?
>
> So far i've seen two possible answers to that question:
>
>     1.  The switched-on expression is written and evaluated just once.
>
>     2.  The cases could help you unpack things into local variables.
>
> (There was some discussion about unpacking earlier:
> http://mail.python.org/pipermail/python-dev/2005-April/052780.html
> which petered out, though there may still be the possibility of
> designing something quite useful and readable.)

I'm not convinced that the matching idea (which your URL seems to
refer to) works well enough in Python to consider. Anyway, it's a
completely different approach and should probably be discussed
separately rather than as a variant of switch/case.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From pje at telecommunity.com  Thu Jun 22 21:59:33 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Thu, 22 Jun 2006 15:59:33 -0400
Subject: [Python-Dev] Switch statement
In-Reply-To: <ca471dc20606221224w6a48e471ub26ff55c457a1a70@mail.gmail.co
 m>
References: <e7epl9$p0g$1@sea.gmane.org>
	<17547.19802.361151.705599@montanaro.dyndns.org>
	<ca471dc20606210947s1490bb66s6f040c78239dc623@mail.gmail.com>
	<e7c135$4ql$1@sea.gmane.org> <e7c4tl$kq7$1@sea.gmane.org>
	<ca471dc20606211321k624fb425l9174efb9bd43f3e2@mail.gmail.com>
	<e7eg3m$lrh$1@sea.gmane.org>
	<ca471dc20606220945v6ba2565at7a7a82f9cbd01d65@mail.gmail.com>
	<e7en3j$fr7$1@sea.gmane.org>
	<ca471dc20606221138h25529dbbv977128e20852d682@mail.gmail.com>
	<e7epl9$p0g$1@sea.gmane.org>
Message-ID: <5.1.1.6.0.20060622154737.03bef9c0@sparrow.telecommunity.com>

At 12:24 PM 6/22/2006 -0700, Guido van Rossum wrote:
>OK, I think I see how this works. You pre-compute the expression at
>def-time, squirrel it away in a hidden field on the function object,
>and assign it to a local each time the statement is executed.

More precisely, I'd say that the computation is moved to function 
definition time and becomes an anonymous free variable.  The body of the 
static expression becomes a LOAD_DEREF of the free variable, rather than 
computation of the expression.

The debug trace will show the function definition going to the lines that 
contain the static expressions, but that's understandable.
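
A rough analogy in today's Python, treating the static value as a closure
cell captured at definition time (the wrapper function is invented purely
for illustration):

     import math

     def _define():                      # plays the role of definition time
         _frozen = math.pi / 180         # evaluated once, here
         def to_radians(degree):
             return degree * _frozen     # LOAD_DEREF of the captured cell
         return to_radians

     to_radians = _define()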

I think I like it.  I was confused by what Fredrik meant by "const", but 
your renaming it to "static" makes more sense to me; i.e. it belongs to the 
function, as opposed to each execution of the function.  (Whereas I was 
reading "const" as meaning "immutable" or "non-rebindable", which made no 
sense in the context.)


>Unfortunately this would probably cause people to write
>
>   switch x:
>     case static re.DOTALL: ...
>     case static re.IGNORECASE: ...
>
>which is just more work to get the same effect as the
>def-time-switch-freezing proposal.

Without the "static", the reordering of execution isn't obvious.  But 
perhaps that could be lived with, if the explanation was, "well, static is 
implied by case".


>I'm also unclear on what you propose this would do *without* the
>statics. Would it be a compile-time error? Compile the dict each time
>the switch is executed? Degenerate to an if/elif chain? Then what if x
>is unhashable? What if *some* cases are static and others aren't?

If we allow non-static cases, then they should become "if"s that happen 
prior to a dictionary lookup on the remaining static/literal ones.  Or we 
could just say that each adjacent block of static cases is its own 
dictionary lookup, and the rest happen in definition order.  (i.e., you 
replace contiguous static/literal runs with dictionary lookups, and 
everything else is if-elif.)


From theller at python.net  Thu Jun 22 22:07:09 2006
From: theller at python.net (Thomas Heller)
Date: Thu, 22 Jun 2006 22:07:09 +0200
Subject: [Python-Dev] test_ctypes failure on Mac OS X/PowerPC
	10.3.9(Panther)
In-Reply-To: <6339755.1150876881434.JavaMail.ronaldoussoren@mac.com>
References: <44982ADE.5070404@activestate.com> <4498393E.1020101@python.net>
	<71C6D0BF-B569-48BD-9F9D-05D3660FF2AA@mac.com>
	<3B96F3FF-B847-4E57-ACC2-F7D979DCA5BA@mac.com>
	<4498F880.4010401@python.net>
	<6339755.1150876881434.JavaMail.ronaldoussoren@mac.com>
Message-ID: <449AF86D.6060200@python.net>

Ronald Oussoren schrieb:
>  
> On Wednesday, June 21, 2006, at 09:43AM, Thomas Heller <theller at python.net> wrote:
> 
>>Ronald Oussoren schrieb:
>>>> will have a look.
>>> 
>>> It is a platform bug, RTLD_LOCAL doesn't work on 10.3. The following C 
>>> snippet fails with the same error as ctypes: FAIL: dlcompat: unable to 
>>> open this file with RTLD_LOCAL. This seems to be confirmed by this 
>>> source test file from darwin: 
>>> http://darwinsource.opendarwin.org/10.4.1/dyld-43/unit-tests/test-cases/dlopen-RTLD_LOCAL/main.c. 
>>> 
>>
>>What does this mean?  Would it work with RTLD_GLOBAL, is there any other 
>>way to repair it, or does loading dylibs not work at all on Panther?
> 
> Using RTLD_GLOBAL does work. This should also be fairly safe as RTLD_GLOBAL seems to be the same as RTLD_LOCAL when using two-level namespaces (which is the default on OSX and used by Python).

This sounds like RTLD_GLOBAL should be the default mode on OS X.
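
In ctypes terms that would amount to something like the following sketch
(the library path here is only an example):

     import ctypes

     # ask dlopen() for globally visible symbols instead of the default
     lib = ctypes.CDLL("/usr/lib/libSystem.dylib", mode=ctypes.RTLD_GLOBAL)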

Thomas

From pje at telecommunity.com  Thu Jun 22 22:14:12 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Thu, 22 Jun 2006 16:14:12 -0400
Subject: [Python-Dev] Switch statement
In-Reply-To: <ca471dc20606221254v268b6769t28aa1b1345e69bed@mail.gmail.co
 m>
References: <5.1.1.6.0.20060622150243.03b0bd70@sparrow.telecommunity.com>
	<17547.19802.361151.705599@montanaro.dyndns.org>
	<5.1.1.6.0.20060621121821.03275ca8@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621130606.0373c788@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621133734.03179858@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621163853.0392d168@sparrow.telecommunity.com>
	<449A7A48.5060404@egenix.com>
	<5.1.1.6.0.20060622092032.0338e2f0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622125053.02f3d308@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622135620.01ea9bf8@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622150243.03b0bd70@sparrow.telecommunity.com>
Message-ID: <5.1.1.6.0.20060622160223.03e35500@sparrow.telecommunity.com>

At 12:54 PM 6/22/2006 -0700, Guido van Rossum wrote:
>Summarizing our disagreement, I think you feel that
>freeze-on-first-use is most easily explained and understood while I
>feel that freeze-at-def-time is more robust. I'm not sure how to get
>past this point except by stating that you haven't convinced me... I
>think it's time to sit back and wait for someone else to weigh in with
>a new argument.

Which I think you and Fredrik have found, if "case" implies "static".  It 
also looks attractive as an addition in its own right, independent of "switch".

In any case, my point wasn't to convince you but to make you aware of 
certain costs and benefits that I wasn't sure you'd perceived.  It's clear 
from your response that you *have* perceived them now, so I'm quite 
satisfied by that outcome -- i.e., my goal wasn't to "convince" you to 
adopt a particular proposal, but rather to make sure you understood and 
considered the ramifications of the ones under discussion.

That being said, there isn't anything to "get past"; from my POV, the 
discussion is already a success.  :)


From nmm1 at cus.cam.ac.uk  Thu Jun 22 23:28:13 2006
From: nmm1 at cus.cam.ac.uk (Nick Maclaren)
Date: Thu, 22 Jun 2006 22:28:13 +0100
Subject: [Python-Dev] Numerical robustness, IEEE etc.
In-Reply-To: Your message of "Wed, 21 Jun 2006 00:50:48 +1000."
	<44980B48.2020303@gmail.com> 
Message-ID: <E1FtWij-0003ig-Rl@draco.cus.cam.ac.uk>

Very interesting.  I need to investigate in more depth.

> The work-in-progress can be seen in Python's SVN sandbox:
>
> http://svn.python.org/view/sandbox/trunk/decimal-c/

beelzebub$svn checkout http://svn.python.org/view/sandbox/trunk/decimal-c/
svn: PROPFIND request failed on '/view/sandbox/trunk/decimal-c'
svn: PROPFIND of '/view/sandbox/trunk/decimal-c': Could not read chunk size: connection was closed by server. (http://svn.python.org)


Regards,
Nick Maclaren,
University of Cambridge Computing Service,
New Museums Site, Pembroke Street, Cambridge CB2 3QH, England.
Email:  nmm1 at cam.ac.uk
Tel.:  +44 1223 334761    Fax:  +44 1223 334679

From nmm1 at cus.cam.ac.uk  Fri Jun 23 00:14:09 2006
From: nmm1 at cus.cam.ac.uk (Nick Maclaren)
Date: Thu, 22 Jun 2006 23:14:09 +0100
Subject: [Python-Dev] Numerical robustness, IEEE etc.
In-Reply-To: Your message of "Thu, 22 Jun 2006 10:40:02 BST."
	<2mr71hjzpp.fsf@starship.python.net> 
Message-ID: <E1FtXRB-00041c-Hg@draco.cus.cam.ac.uk>

Michael Hudson <mwh at python.net> wrote:
> 
> Maybe append " for me, at least" to what I wrote then.  But really, it
> is hard: because Python runs on so many platforms, and platforms that
> no current Python developer has access to.  If you're talking about
> implementing FP in software (are you?), then I guess it gets easier.

No, I am not.  And it isn't as hard as is currently made out.

> > My intentions are to provide some numerically robust semantics,
> > preferably of the form where straightforward numeric code (i.e. code
> > that doesn't play any bit-twiddling tricks) will never invoke
> > mathematically undefined behaviour without it being flagged.  See
> > Kahan on that.
> 
> That doesn't actually explain the details of your intent very much.

Let's try again.  You say that you are a mathematician.  The
standard floating-point model is that it maps functions defined on
the reals (sometimes complex) to approximations defined on floating-
point.  The conventional interpretation was that any operation that
was not mathematically continuous in an open region including its
argument values (in the relevant domain) was an error, and that all
such errors should be flagged.  That is what I am talking about.
It's all classic behaviour - nothing unusual.

> > Not a lot.  Annex F in itself is only numerically insane.  You need to
> > know the rest of the standard, including that which is documented only
> > in SC22WG14 messages, to realise the full horror.
> 
> That's not why I was mentioning it.  I was mentioning it to give the
> idea that I'm not a numerical expert but, for example, I know what a
> denorm is.

Unfortunately, that doesn't help, because it is not where the issues
are.  What I don't know is how much you know about numerical models,
IEEE 754 in particular, and C99.  You weren't active on the SC22WG14
reflector, but there were some lurkers.

> > The problem with such things is that they related to the interfaces
> > between types, and it is those aspects where object-orientation
> > falls down so badly.  For example, consider conversion between float
> > and long - which class should control the semantics?
> 
> This comment seems not to relate to anything I said, or at least not
> obviously.

I am afraid that it did.  I pointed out that some of the options
needed to control the behaviour of the implicit conversions between
built-in classes.  Now, right back in the Simula days, those issues
were one of the reasons for the religious war between the multiple
inheritance people and those who thought it was anathema.  My claim
is that such properties need to be program-global, or else you will
have the most almighty confusion.

You can take the Axiom approach of having a superclass to which
such things are bound, but most programming languages have always
had difficulty with properties that aren't clearly associated with
a single class - ESPECIALLY when they affect primitives.

> >> This could be implemented by having a field in the threadstate of FPU  
> >> flags to check after each fp operation (or each set of fp operations,  
> >> possibly).  I don't think I have the guts to try to implement  
> >> anything sensible using HW traps (which are thread-local as well,  
> >> aren't they?).
> >
> > Gods, NO!!!
> 
> Good :-)

!!!!!  I am sorry, but that isn't an appropriate response.  The fact
is that they are unspecified - IDEALLY, things like floating-point
traps would be handled thread-locally (and thus neither change context
not affect other cores, as was done on the Ferranti Atlas and many
other machines), but things like TLB miss traps, device interrupts
and machine-check interrupts need to be CPU-global.  Unfortunately,
modern architectures use a single mechanism for all of them - which
is a serious design error.

> > Sorry, but I have implemented such things (but that was on a far
> > architecture, and besides the system is dead).  Modern CPU
> > architectures don't even DEFINE whether interrupt handling is local
> > to the core or chip, and document that only in the release notes,
> > but what is clear is that some BLACK incantations are needed in
> > either case.
> 
> Well, I only really know about the PowerPC at this level...

Do you?  Can you tell me whether interrupts stop the core or chip,
for each of the classes of interrupt, and exactly what the incantation
is for changing to the other mode?

> > Think of taking a machine check interrupt on a multi- core,
> > highly-pipelined architecture and blench.  And, if that is an
> > Itanic, gibber hysterically before taking early retirement on the
> > grounds of impending insanity.
> 
> What does a machine check interrupt have to do with anything?

Because a machine check is one of the classes of interrupt that you
POSITIVELY want the other cores stopped until you have worked out
whether it impacts just the interrupted core or the CPU as a whole.
Inter alia, the PowerPC architecture takes one when a core has just
gone AWOL - and there is NO WAY that the dead core can handle the
interrupt indicating its own demise!

> > Oh, that's the calm, moderate description.  The reality is worse.
> 
> Yes, but fortunately irrelevant...

Unfortunately, it isn't.  I wish that it were :-(

> Now, a more general reply: what are you actually trying to achieve
> with these posts?  I presume it's more than just make wild claims
> about how much more you know about numerical programming than anyone
> else...

Sigh.  What I am trying to get is floating-point support of the form
that, when a programmer makes a numerical error (see above), he gets
EITHER an exception value returned OR an exception raised.  I do, of
course, need to exclude the cases when the code is testing states
explicitly, twiddling bits and so on.


Regards,
Nick Maclaren,
University of Cambridge Computing Service,
New Museums Site, Pembroke Street, Cambridge CB2 3QH, England.
Email:  nmm1 at cam.ac.uk
Tel.:  +44 1223 334761    Fax:  +44 1223 334679

From rwgk at yahoo.com  Fri Jun 23 01:38:45 2006
From: rwgk at yahoo.com (Ralf W. Grosse-Kunstleve)
Date: Thu, 22 Jun 2006 16:38:45 -0700 (PDT)
Subject: [Python-Dev] PyRange_New() alternative?
In-Reply-To: <e7eqae$qsq$1@sea.gmane.org>
Message-ID: <20060622233845.98659.qmail@web31513.mail.mud.yahoo.com>

--- Georg Brandl <g.brandl at gmx.net> wrote:

> Well, PyRange_New *was* undocumented,

Yes, that was clear.

> However, it would perhaps be helpful to add a note to the whatsnew document
> for users like yourself. Andrew, does that make sense?

I am worried about using an alternative that is *again* not documented. There
is no mention of "PyRange" in the Python C API documentation, not even "range".

Unless I am the only user of PyRange_New() in the whole wide world a few extra
lines in the "changes" document will prevent recurring questions.



From facundobatista at gmail.com  Fri Jun 23 01:41:38 2006
From: facundobatista at gmail.com (Facundo Batista)
Date: Thu, 22 Jun 2006 20:41:38 -0300
Subject: [Python-Dev] Numerical robustness, IEEE etc.
In-Reply-To: <E1FtXRB-00041c-Hg@draco.cus.cam.ac.uk>
References: <2mr71hjzpp.fsf@starship.python.net>
	<E1FtXRB-00041c-Hg@draco.cus.cam.ac.uk>
Message-ID: <e04bdf310606221641r5ca494bbuec0a3830aa3c8f3c@mail.gmail.com>

2006/6/22, Nick Maclaren <nmm1 at cus.cam.ac.uk>:

> > Now, a more general reply: what are you actually trying to achieve
> > with these posts?  I presume it's more than just make wild claims
> > about how much more you know about numerical programming than anyone
> > else...
>
> Sigh.  What I am trying to get is floating-point support of the form
> that, when a programmer makes a numerical error (see above), he gets
> EITHER an exception value returned OR an exception raised.  I do, of
> course, need to exclude the cases when the code is testing states
> explicitly, twiddling bits and so on.

Well, so I'm completely lost... because, if all you want is to be able
to choose a returned value or an exception raised, you actually can
control that in Decimal.
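
For example, the decimal module lets you flip a trap to choose between
the two behaviours -- a minimal sketch:

     from decimal import Decimal, getcontext, DivisionByZero

     ctx = getcontext()

     ctx.traps[DivisionByZero] = False
     result = Decimal(1) / Decimal(0)    # returns Decimal('Infinity')

     ctx.traps[DivisionByZero] = True
     # Decimal(1) / Decimal(0)           # would now raise DivisionByZero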

Regards,

-- 

.    Facundo

Blog: http://www.taniquetil.com.ar/plog/
PyAr: http://www.python.org/ar/

From aahz at pythoncraft.com  Fri Jun 23 02:00:03 2006
From: aahz at pythoncraft.com (Aahz)
Date: Thu, 22 Jun 2006 17:00:03 -0700
Subject: [Python-Dev] Numerical robustness, IEEE etc.
In-Reply-To: <E1FtXRB-00041c-Hg@draco.cus.cam.ac.uk>
References: <2mr71hjzpp.fsf@starship.python.net>
	<E1FtXRB-00041c-Hg@draco.cus.cam.ac.uk>
Message-ID: <20060623000003.GA26173@panix.com>

On Thu, Jun 22, 2006, Nick Maclaren wrote:
>
> Sigh.  What I am trying to get is floating-point support of the form
> that, when a programmer makes a numerical error (see above), he gets
> EITHER an exception value returned OR an exception raised.  

Then you need to write up a detailed design document FOR PYTHON that
specifies how a smart person like Michael Hudson would go about
implementing what you want.  Keep in mind that Python does *not*
currently require C99 (and it's not clear when it will) and that Python
runs on multiple hardware platforms and operating systems, so your scheme
needs to be either independent of hardware/OS or you need to clearly
specify how your scheme can EASILY be made to work on any system.

You can't expect us to do your legwork for you, and you can't expect
that Tim Peters is the only person on the dev team who understands what
you're getting at.

Incidentally, your posts will go directly to python-dev without
moderation if you subscribe to the list, which is a Good Idea if you want
to participate in discussion.
-- 
Aahz (aahz at pythoncraft.com)           <*>         http://www.pythoncraft.com/

"I saw `cout' being shifted "Hello world" times to the left and stopped
right there."  --Steve Gonedes

From guido at python.org  Fri Jun 23 02:12:29 2006
From: guido at python.org (Guido van Rossum)
Date: Thu, 22 Jun 2006 17:12:29 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <5.1.1.6.0.20060622160223.03e35500@sparrow.telecommunity.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<5.1.1.6.0.20060621130606.0373c788@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621133734.03179858@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621163853.0392d168@sparrow.telecommunity.com>
	<449A7A48.5060404@egenix.com>
	<5.1.1.6.0.20060622092032.0338e2f0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622125053.02f3d308@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622135620.01ea9bf8@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622150243.03b0bd70@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622160223.03e35500@sparrow.telecommunity.com>
Message-ID: <ca471dc20606221712x7634c308m673bd8c6e186bb37@mail.gmail.com>

On 6/22/06, Phillip J. Eby <pje at telecommunity.com> wrote:
> At 12:54 PM 6/22/2006 -0700, Guido van Rossum wrote:
> >Summarizing our disagreement, I think you feel that
> >freeze-on-first-use is most easily explained and understood while I
> >feel that freeze-at-def-time is more robust. I'm not sure how to get
> >past this point except by stating that you haven't convinced me... I
> >think it's time to sit back and wait for someone else to weigh in with
> >a new argument.
>
> Which I think you and Fredrik have found, if "case" implies "static".  It
> also looks attractive as an addition in its own right, independent of "switch".
>
> In any case, my point wasn't to convince you but to make you aware of
> certain costs and benefits that I wasn't sure you'd perceived.  It's clear
> from your response that you *have* perceived them now, so I'm quite
> satisfied by that outcome -- i.e., my goal wasn't to "convince" you to
> adopt a particular proposal, but rather to make sure you understood and
> considered the ramifications of the ones under discussion.
>
> That being said, there isn't anything to "get past"; from my POV, the
> discussion is already a success.  :)

That sounds like a good solution all around. I hope that others can
get behind this as well.

(1) An expression of the form 'static' <atom> has the semantics of
evaluating the atom at the same time as the nearest surrounding
function definition. If there is no surrounding function definition,
'static' is a no-op and the expression is evaluated every time.
[Alternative 1: this is an error] [Alternative 2: it is evaluated
before the module is entered; this would imply it can not involve any
imported names but it can involve builtins] [Alternative 3:
it is precomputed the first time the switch is entered]

(2) All case expressions in a switch have an implied 'static'.

(3) A switch is implemented using a dict which is precomputed at the
same time its static expressions are precomputed. The switch
expression must be hashable. Overlap between different cases will
raise an exception at precomputation time.
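
For concreteness, a rough present-day emulation of (2) and (3): case
values, dotted names included, are evaluated once up front, and
overlapping cases are rejected while the table is built (helper names
are invented):

     import re

     def _make_table(pairs):
         table = {}
         for key, label in pairs:
             if key in table:
                 raise ValueError("overlapping cases: %r" % (key,))
             table[key] = label
         return table

     _FLAGS = _make_table([(re.DOTALL, "dotall"), (re.IGNORECASE, "ignorecase")])

     def describe(flag):
         return _FLAGS.get(flag, "no single match")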

Independent from this, I wonder if we also need static names of the form

  static <name> = <expression>

which would be similar to

  <name> = static (<expression>)

but also prevents <name> from being assigned to elsewhere in the same scope.

Also, I haven't heard a lot of thumbs up or down on the idea of using

  case X:

to indicate a single value and

  case in S:

to indicate a sequence of values.

(I'm not counting all the hypergeneralizations that were proposed like

  case == X:
  case < X:
  case is X:
  case isinstance X:

since I'm -1 on all those, no matter how nicely they align.)

:-)

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From nnorwitz at gmail.com  Fri Jun 23 03:46:49 2006
From: nnorwitz at gmail.com (Neal Norwitz)
Date: Thu, 22 Jun 2006 18:46:49 -0700
Subject: [Python-Dev] Numerical robustness, IEEE etc.
In-Reply-To: <20060623000003.GA26173@panix.com>
References: <2mr71hjzpp.fsf@starship.python.net>
	<E1FtXRB-00041c-Hg@draco.cus.cam.ac.uk>
	<20060623000003.GA26173@panix.com>
Message-ID: <ee2a432c0606221846l41ada49aoff1471440ca84707@mail.gmail.com>

On 6/22/06, Aahz <aahz at pythoncraft.com> wrote:
> On Thu, Jun 22, 2006, Nick Maclaren wrote:
> >
> > Sigh.  What I am trying to get is floating-point support of the form
> > that, when a programmer makes a numerical error (see above), he gets
> > EITHER an exception value returned OR an exception raised.
>
> Then you need to write up a detailed design document FOR PYTHON that

The best design doc that I know of is code. :-)

Seriously, there seems to be a fair amount of miscommunication in this
thread.  It would be much easier to communicate using code snippets.
I'd suggest pointing out places in the Python code that are lacking
and how you would correct them.  That will make it easier for everyone
to understand each other.

n

From rwgk at yahoo.com  Fri Jun 23 07:38:20 2006
From: rwgk at yahoo.com (Ralf W. Grosse-Kunstleve)
Date: Thu, 22 Jun 2006 22:38:20 -0700 (PDT)
Subject: [Python-Dev] PyRange_New() alternative?
In-Reply-To: <A966EC37-FD5D-4E81-AAB3-43284593DE4A@redivi.com>
Message-ID: <20060623053820.45516.qmail@web31504.mail.mud.yahoo.com>

--- Bob Ippolito <bob at redivi.com> wrote:

> > I am sure I can get this to work with some digging, but I am  
> > posting here to
> > highlight a communication problem. I feel if a function is removed the
> > alternative should be made obvious in the associated documentation; in
> > particular if there is no existing documentation for the alternative.
> 
> He means something like this:
> PyObject_CallFunction(PyRange_Type, "llli", ...)

Thanks! This does the trick for me:

#if PY_VERSION_HEX >= 0x02030000
        PyObject_CallFunction(
          (PyObject*) &PyRange_Type, "lll", start, start+len*step, step)
#else
        PyRange_New(start, len, step, 1)
#endif
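
A quick Python-level sanity check of the (start, len, step) ->
(start, stop, step) conversion used above, with made-up numbers:

     start, length, step = 5, 4, 3
     values = range(start, start + length * step, step)
     assert values == [5, 8, 11, 14]     # Python 2: range() returns a list
     assert len(values) == length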


I've tested this with Python 2.2.3, 2.3.4, 2.4.3, 2.5b1. Python 2.2.3 (RedHat
WS 3) compiles the PyRange_Type call, but there is a runtime error:

TypeError: cannot create 'xrange' instances


I am compiling the code above with a C++ compiler (in the context of
Boost.Python). Newer g++ versions unfortunately produce a warning if -Wall is
specified:

warning: dereferencing type-punned pointer will break strict-aliasing rules

This refers to the (PyObject*) &PyRange_Type cast.
I believe the warning is bogus, but people still get upset about it (google the
C++-SIG archive). Is there a chance that PyRange_New() could be resurrected,
with the fragment above (plus additional overflow check for start+len*step) as
the implementation? That would fix the problems of the old implementation;
there would be no reason to have the cast in C++, no frustrated end-users, and
one change less to document.



From fredrik at pythonware.com  Fri Jun 23 09:35:19 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Fri, 23 Jun 2006 09:35:19 +0200
Subject: [Python-Dev] Switch statement
In-Reply-To: <5.1.1.6.0.20060622154737.03bef9c0@sparrow.telecommunity.com>
References: <e7epl9$p0g$1@sea.gmane.org>	<17547.19802.361151.705599@montanaro.dyndns.org>	<ca471dc20606210947s1490bb66s6f040c78239dc623@mail.gmail.com>	<e7c135$4ql$1@sea.gmane.org>
	<e7c4tl$kq7$1@sea.gmane.org>	<ca471dc20606211321k624fb425l9174efb9bd43f3e2@mail.gmail.com>	<e7eg3m$lrh$1@sea.gmane.org>	<ca471dc20606220945v6ba2565at7a7a82f9cbd01d65@mail.gmail.com>	<e7en3j$fr7$1@sea.gmane.org>	<ca471dc20606221138h25529dbbv977128e20852d682@mail.gmail.com>	<e7epl9$p0g$1@sea.gmane.org>
	<ca471dc20606221224w6a48e471ub26ff55c457a1a70@mail.gmail.com>
	<5.1.1.6.0.20060622154737.03bef9c0@sparrow.telecommunity.com>
Message-ID: <e7g5jl$nhi$1@sea.gmane.org>

Phillip J. Eby wrote:

> I think I like it.  I was confused by what Fredrik meant by "const", but 
> your renaming it to "static" makes more sense to me;

footnote: I suggested static in my list of use cases; while "const" 
makes sense in many cases, "static" makes more sense for things like this:

>      def foo(value):
>          table = const {
>              1: "one",
>              2: "two",
>              3: fie.fum,
>          }
> 
> (maybe "static" would be a better keyword?)

...at least for C/C++ heads; if you look things up in a dictionary, I'd 
say that the noun "constant", in the meaning

    2. a. A quantity assumed to have a fixed value in a
          specified mathematical context.
       b. An experimental or theoretical condition, factor,
          or quantity that does not vary or that is regarded
          as invariant in specified circumstances.

makes at least as much sense as the adjective "static":

    1. a. Having no motion; being at rest; quiescent.
       b. Fixed; stationary.

>> Unfortunately this would probably cause people to write
>>
>>   switch x:
>>     case static re.DOTALL: ...
>>     case static re.IGNORECASE: ...
>>
>> which is just more work to get the same effect as the
>> def-time-switch-freezing proposal.
> 
> Without the "static", the reordering of execution isn't obvious.  But 
> perhaps that could be lived with, if the explanation was, "well, static
> is implied by case".

I'd still prefer the "explicit is better than implicit" route, approach 
switch/case (if added) is defined in terms of if/elif, and optimizations 
are handled by the optimizer/code generator.

</F>


From fredrik at pythonware.com  Fri Jun 23 10:01:34 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Fri, 23 Jun 2006 10:01:34 +0200
Subject: [Python-Dev] Switch statement
In-Reply-To: <ca471dc20606221712x7634c308m673bd8c6e186bb37@mail.gmail.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>	<5.1.1.6.0.20060621130606.0373c788@sparrow.telecommunity.com>	<5.1.1.6.0.20060621133734.03179858@sparrow.telecommunity.com>	<5.1.1.6.0.20060621163853.0392d168@sparrow.telecommunity.com>	<449A7A48.5060404@egenix.com>	<5.1.1.6.0.20060622092032.0338e2f0@sparrow.telecommunity.com>	<5.1.1.6.0.20060622125053.02f3d308@sparrow.telecommunity.com>	<5.1.1.6.0.20060622135620.01ea9bf8@sparrow.telecommunity.com>	<5.1.1.6.0.20060622150243.03b0bd70@sparrow.telecommunity.com>	<5.1.1.6.0.20060622160223.03e35500@sparrow.telecommunity.com>
	<ca471dc20606221712x7634c308m673bd8c6e186bb37@mail.gmail.com>
Message-ID: <e7g74s$sdk$1@sea.gmane.org>

Guido van Rossum wrote:

> That sounds like a good solution all around. I hope that others can
> also find themselves in this.
> 
> (1) An expression of the form 'static' <atom> has the semantics of
> evaluating the atom at the same time as the nearest surrounding
> function definition. If there is no surrounding function definition,
> 'static' is a no-op and the expression is evaluated every time.
> [Alternative 1: this is an error] [Alternative 2: it is evaluated
> before the module is entered; this would imply it can not involve any
> imported names but it can involve builtins] [Alternative 3:
> precomputed the first time the switch is entered]

+0.5 on this (still looking for some obvious drawback).

as for static in non-local scopes, an error feels more pythonic, but 
that would complicate things if you want to move code from a local to a 
global context (but how often do you do that?).  alternatives 2 and 3 
feel "too magic", again.

> (2) All case expressions in a switch have an implied 'static'.

I'm still -0 on implied static.  and only +0 on switch/case, in general. 
  but it's growing on me.

(now, if you're written "implied 'break'", I'm all for it)

> (3) A switch is implemented using a dict which is precomputed at the
> same time its static expressions are precomputed. The switch
> expression must be hashable. Overlap between different cases will
> raise an exception at precomputation time.

+0 on switch/case, but -0.5 on a "in terms of implementation" rather 
than "in terms of existing language constructs" approach.

as I mentioned before, I'd prefer if the switch/case/case-in/else was 
defined in terms of a corresponding if/elif/else construct (but where 
the controlling expression is only evaluated once).

after all, Python's a dynamic language, and I'm not convinced that I 
would never want to use dynamically evaluated case values.  just map

     switch EXPR:
     case E1:
         ...
     case in E2:
         ...
     else:
         ...

to

     VAR = EXPR
     if VAR == E1:
         ...
     elif VAR in E2:
         ...
     else:
         ...

where VAR is a temporary variable, and case and case-in clauses can be 
freely mixed, and leave the rest to the code generator.  (we could even 
allow "switch EXPR [as VAR]" to match a certain other sugar construct).

I'm also a K&R guy, so switch/case/case-in/else should all have the same 
indent.  anything else is just sloppy design.

> Independent from this, I wonder if we also need static names of the form
> 
>   static <name> = <expression>
> 
> which would be similar to
> 
>   <name> = static (<expression>)
> 
> but also prevents <name> from being assigned to elsewhere in the same scope.

-0 from here; I don't see an obvious need for static names, but it may 
grow on me.

> Also, I haven't heard a lot of thumbs up or down on the idea of using
> 
>   case X:
> 
> to indicate a single value and
> 
>   case in S:
> 
> to indicate a sequence of values.

+1 from here.  it's obvious, useful, and therefore perfectly pythonic.

</F>


From fredrik at pythonware.com  Fri Jun 23 10:05:55 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Fri, 23 Jun 2006 10:05:55 +0200
Subject: [Python-Dev] Switch statement
In-Reply-To: <e7g5jl$nhi$1@sea.gmane.org>
References: <e7epl9$p0g$1@sea.gmane.org>	<17547.19802.361151.705599@montanaro.dyndns.org>	<ca471dc20606210947s1490bb66s6f040c78239dc623@mail.gmail.com>	<e7c135$4ql$1@sea.gmane.org>	<e7c4tl$kq7$1@sea.gmane.org>	<ca471dc20606211321k624fb425l9174efb9bd43f3e2@mail.gmail.com>	<e7eg3m$lrh$1@sea.gmane.org>	<ca471dc20606220945v6ba2565at7a7a82f9cbd01d65@mail.gmail.com>	<e7en3j$fr7$1@sea.gmane.org>	<ca471dc20606221138h25529dbbv977128e20852d682@mail.gmail.com>	<e7epl9$p0g$1@sea.gmane.org>	<ca471dc20606221224w6a48e471ub26ff55c457a1a70@mail.gmail.co
	m>	<5.1.1.6.0.20060622154737.03bef9c0@sparrow.telecommunity.com>
	<e7g5jl$nhi$1@sea.gmane.org>
Message-ID: <e7g7d1$t48$1@sea.gmane.org>

Fredrik Lundh wrote:

> I'd still prefer the "explicit is better than implicit" route, approach 
> switch/case (if added) is defined in terms of if/elif, and optimizations 
> are handled by the optimizer/code generator.

s/approach/where/

</F>


From tim.peters at gmail.com  Fri Jun 23 10:40:12 2006
From: tim.peters at gmail.com (Tim Peters)
Date: Fri, 23 Jun 2006 04:40:12 -0400
Subject: [Python-Dev] PyRange_New() alternative?
In-Reply-To: <20060623053820.45516.qmail@web31504.mail.mud.yahoo.com>
References: <A966EC37-FD5D-4E81-AAB3-43284593DE4A@redivi.com>
	<20060623053820.45516.qmail@web31504.mail.mud.yahoo.com>
Message-ID: <1f7befae0606230140v6a3d2b0eldb78a5d909c15735@mail.gmail.com>

[Ralf W. Grosse-Kunstleve]
> Thanks! This does the trick for me:
>
> #if PY_VERSION_HEX >= 0x02030000
>         PyObject_CallFunction(
>           (PyObject*) &PyRange_Type, "lll", start, start+len*step, step)

Note that this is extremely lax about possible overflow in the
arithmetic.  For that reason it can't be recommended for general use.

> #else
>         PyRange_New(start, len, step, 1)
> #endif
>
> I've tested this with Python 2.2.3, 2.3.4, 2.4.3, 2.5b1. Python 2.2.3 (RedHat
> WS 3) compiles the PyRange_Type call, but there is a runtime error:
>
> TypeError: cannot create 'xrange' instances

Sorry, I didn't follow that.  The only mention of PyRange_Type in the
#if'ed code above is in a block that looks like it should be entirely
ignored in a 2.2.3 Python (provided you're using the right header
files and the C compiler isn't broken).

> I am compiling the code above with a C++ compiler (in the context of
> Boost.Python). Newer g++ versions unfortunately produce a warning if -Wall is
> specified:
>
> warning: dereferencing type-punned pointer will break strict-aliasing rules
>
> This refers to the (PyObject*) &PyRange_Type cast.
> I believe the warning is bogus, but people still get upset about it (google the
> C++-SIG archive).

Compile all of Python that way, and you'll probably see more of those
than you can count ;-)  Python is normally compiled with, and is
_intended_ to be compiled with,

    -fno-strict-aliasing

If you didn't do that, try it.

> Is there a chance that PyRange_New() could be resurrected,
> with the fragment above (plus additional overflow check for start+len*step) as
> the implementation? That would fix the problems of the old implementation,
> there would be no reason to have the cast in C++, no frustrated end-users, and
> one change less to document.

The deprecation of PyRange_New was duly announced in the NEWS file for
Python 2.4:

"""
What's New in Python 2.4 (release candidate 1)
==============================================
*Release date: 18-NOV-2004*

...

C API
-----

- The PyRange_New() function is deprecated.
"""

Since it was never documented to begin with, it was a "use at your own
risk" thing anyway.  As you're currently it's only known user
throughout all of history :-), if you do all the work of
rehabilitating it, I'd be at best a weak -1 anyway:  one of the
problems with PyRange_New was that its signature was wildly different
than the builtin range()'s.  That made it a poor API for "surprise,
surprise!" reasons alone.  That was a mistake, and I'd rather
inconvenience you than pass that mistake on to our precious children
;-)

OTOH, I'd have no objection to a new C API function with a (start,
stop, step) signature.

From fredrik at pythonware.com  Fri Jun 23 11:11:14 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Fri, 23 Jun 2006 11:11:14 +0200
Subject: [Python-Dev] Switch statement
In-Reply-To: <e7g74s$sdk$1@sea.gmane.org>
References: <17547.19802.361151.705599@montanaro.dyndns.org>	<5.1.1.6.0.20060621130606.0373c788@sparrow.telecommunity.com>	<5.1.1.6.0.20060621133734.03179858@sparrow.telecommunity.com>	<5.1.1.6.0.20060621163853.0392d168@sparrow.telecommunity.com>	<449A7A48.5060404@egenix.com>	<5.1.1.6.0.20060622092032.0338e2f0@sparrow.telecommunity.com>	<5.1.1.6.0.20060622125053.02f3d308@sparrow.telecommunity.com>	<5.1.1.6.0.20060622135620.01ea9bf8@sparrow.telecommunity.com>	<5.1.1.6.0.20060622150243.03b0bd70@sparrow.telecommunity.com>	<5.1.1.6.0.20060622160223.03e35500@sparrow.telecommunity.com>	<ca471dc20606221712x7634c308m673bd8c6e186bb37@mail.gmail.com>
	<e7g74s$sdk$1@sea.gmane.org>
Message-ID: <e7gb7f$ai2$1@sea.gmane.org>

Fredrik Lundh wrote:

> (now, if you're written "implied 'break'", I'm all for it)

note to self: the fact that it's a holiday doesn't mean that you should 
post before you'd had enough coffee.

</F>


From theller at python.net  Fri Jun 23 12:02:15 2006
From: theller at python.net (Thomas Heller)
Date: Fri, 23 Jun 2006 12:02:15 +0200
Subject: [Python-Dev] Moving the ctypes repository to python.org
Message-ID: <e7ge79$kg8$1@sea.gmane.org>

Now that ctypes is no longer an externally maintained module, imo the 
repository should be moved from SF cvs to python.org svn.

The current layout is different than the main python trunk, and it 
should be preserved so that I can still do standalone releases.

I think the best would be to import it into a URL like

http://svn.python.org/projects/sandbox/trunk/ctypes/

Is it possible to take the CVS repository files (they can be accessed 
with rsync), and import that, preserving the whole history, into SVN?

I would be responsible for merging changes from the sandbox to the Python 
trunk and vice versa.  On the other hand, it may be possible to use the 
svn:externals facility to have Modules/_ctypes and Lib/ctypes use certain 
directories in the sandbox tree (although I do not know how well this 
works in practice).

Thomas


From rwgk at yahoo.com  Fri Jun 23 12:23:19 2006
From: rwgk at yahoo.com (Ralf W. Grosse-Kunstleve)
Date: Fri, 23 Jun 2006 03:23:19 -0700 (PDT)
Subject: [Python-Dev] PyRange_New() alternative?
In-Reply-To: <1f7befae0606230140v6a3d2b0eldb78a5d909c15735@mail.gmail.com>
Message-ID: <20060623102319.21553.qmail@web31515.mail.mud.yahoo.com>

--- Tim Peters <tim.peters at gmail.com> wrote:
> [Ralf W. Grosse-Kunstleve]
> > Thanks! This does the trick for me:
> >
> > #if PY_VERSION_HEX >= 0x02030000
> >         PyObject_CallFunction(
> >           (PyObject*) &PyRange_Type, "lll", start, start+len*step, step)
> 
> Note that this is extremely lax about possible overflow in the
> arithmetic.  For that reason it can't be recommended for general use.
> 
> > #else
> >         PyRange_New(start, len, step, 1)
> > #endif
> >
> > I've tested this with Python 2.2.3, 2.3.4, 2.4.3, 2.5b1. Python 2.2.3
> (RedHat
> > WS 3) compiles the PyRange_Type call, but there is a runtime error:
> >
> > TypeError: cannot create 'xrange' instances
> 
> Sorry, I didn't follow that.  The only mention of PyRange_Type in the
> #if'ed code above is in a block that looks like it should be entirely
> ignored in a 2.2.3 Python (provided you're using the right header
> files and the C compiler isn't broken).

First I tried the PyRange_Type code with Python 2.2.3 and no #ifdef. I resorted
to the #ifdef and the old PyRange_New() call only because it didn't work.

> Compile all of Python that way, and you'll probably see more of those
> than you can count ;-)  Python is normally compiled with, and is
> _intended_ to be compiled with,
> 
>     -fno-strict-aliasing
> 
> If you didn't do that, try it.

I missed this. Thanks for pointing it out.

> Since it was never documented to begin with, it was a "use at your own
> risk" thing anyway.  As you're currently it's only known user
> throughout all of history :-), if you do all the work of
> rehabilitating it, I'd be at best a weak -1 anyway:  one of the
> problems with PyRange_New was that its signature was wildly different
> than the builtin range()'s.  That made it a poor API for "surprise,
> surprise!" reasons alone.  That was a mistake, and I'd rather
> inconvenience you than pass that mistake on to our precious children
> ;-)

I agree, although I find it hard to believe I am that unique. I'll google for
PyRange_New in a couple of years to see how many more people got stranded. :-)

Although it will bias the measure of my uniqueness, I still believe you should
tell people in the documentation what to do, e.g.

  PyObject_CallFunction((PyObject*) &PyRange_Type, "lll", start, stop, step)

which avoids showing the sloppy start+len*step hack.

BTW: by the time my children pick up programming nobody under 30 will use C
Python anymore. ;-)



From bioinformed at gmail.com  Fri Jun 23 13:57:05 2006
From: bioinformed at gmail.com (Kevin Jacobs <jacobs@bioinformed.com>)
Date: Fri, 23 Jun 2006 07:57:05 -0400
Subject: [Python-Dev] Numerical robustness, IEEE etc.
In-Reply-To: <E1FtXRB-00041c-Hg@draco.cus.cam.ac.uk>
References: <2mr71hjzpp.fsf@starship.python.net>
	<E1FtXRB-00041c-Hg@draco.cus.cam.ac.uk>
Message-ID: <2e1434c10606230457h7af2f39j78a749b5984af7c1@mail.gmail.com>

On 6/22/06, Nick Maclaren <nmm1 at cus.cam.ac.uk> wrote:
>
> > > Not a lot.  Annex F in itself is only numerically insane.  You need to
> > > know the rest of the standard, including that which is documented only
> > > in SC22WG14 messages, to realise the full horror.
> [...]
> >Unfortunately, that doesn't help, because it is not where the issues
> >are.  What I don't know is how much you know about numerical models,
> >IEEE 754 in particular, and C99.  You weren't active on the SC22WG14
> >reflector, but there were some lurkers.
>


Hand wave, hand wave, hand wave.  Many of us here aren't stupid and have
more than passing experience with numerical issues, even if we haven't been
"active on SC22WG14".  Let's stop with the high-level pissing contest and
lay out a clear technical description of exactly what has your knickers in a
twist, how it hurts Python, and how we can all work together to make the
pain go away.

A good place to start: You mentioned earlier that there were some
nonsensical things in floatobject.c.  Can you list some of the most serious
of these?

Thanks,
-Kevin
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.python.org/pipermail/python-dev/attachments/20060623/75cbb77c/attachment.html 

From mwh at python.net  Fri Jun 23 14:32:37 2006
From: mwh at python.net (Michael Hudson)
Date: Fri, 23 Jun 2006 13:32:37 +0100
Subject: [Python-Dev] Numerical robustness, IEEE etc.
In-Reply-To: <E1FtXRB-00041c-Hg@draco.cus.cam.ac.uk> (Nick Maclaren's
	message of "Thu, 22 Jun 2006 23:14:09 +0100")
References: <E1FtXRB-00041c-Hg@draco.cus.cam.ac.uk>
Message-ID: <2mmzc4jbmi.fsf@starship.python.net>

Nick Maclaren <nmm1 at cus.cam.ac.uk> writes:

>> > My intentions are to provide some numerically robust semantics,
>> > preferably of the form where straightforward numeric code (i.e. code
>> > that doesn't play any bit-twiddling tricks) will never invoke
>> > mathematically undefined behaviour without it being flagged.  See
>> > Kahan on that.
>> 
>> That doesn't actually explain the details of your intent very much.
>
> Let's try again.  You say that you are a mathematician.

I don't think I said that; I said I have a mathematics degree (I guess
I'm a computer scientist, these days).

> The standard floating-point model is that it maps functions defined
> on the reals (sometimes complex) to approximations defined on
> floating- point.  The conventional interpretation was that any
> operation that was not mathematically continuous in an open region
> including its argument values (in the relevant domain) was an error,
> and that all such errors should be flagged.  That is what I am
> talking about.  It's all classic behaviour - nothing unusual.

Well, I think you've used longer words than necessary, but thanks for
the explanation.

>> > Not a lot.  Annex F in itself is only numerically insane.  You need to
>> > know the rest of the standard, including that which is documented only
>> > in SC22WG14 messages, to realise the full horror.
>> 
>> That's not why I was mentioning it.  I was mentioning it to give the
>> idea that I'm not a numerical expert but, for example, I know what a
>> denorm is.
>
> Unfortunately, that doesn't help, because it is not where the issues
> are.  What I don't know is how much you know about numerical models,
> IEEE 754 in particular, and C99.  You weren't active on the SC22WG14
> reflector, but there were some lurkers.

I'm in no way deeply enough involved to be reading that sort of email,
which I would have thought would have been obvious from the other
things I have said.

>> >> This could be implemented by having a field in the threadstate of FPU  
>> >> flags to check after each fp operation (or each set of fp operations,  
>> >> possibly).  I don't think I have the guts to try to implement  
>> >> anything sensible using HW traps (which are thread-local as well,  
>> >> aren't they?).
>> >
>> > Gods, NO!!!
>> 
>> Good :-)
>
> !!!!!  I am sorry, but that isn't an appropriate response.

Um, I think we've been misreading each other here.

>> > Sorry, but I have implemented such things (but that was on a far
>> > architecture, and besides the system is dead).  Modern CPU
>> > architectures don't even DEFINE whether interrupt handling is local
>> > to the core or chip, and document that only in the release notes,
>> > but what is clear is that some BLACK incantations are needed in
>> > either case.
>> 
>> Well, I only really know about the PowerPC at this level...
>
> Do you?  Can you tell me whether interrupts stop the core or chip,
> for each of the classes of interrupt, and exactly what the incantation
> is for changing to the other mode?

No.  I've only played on single processor, single core machines.

>> > Think of taking a machine check interrupt on a multi- core,
>> > highly-pipelined architecture and blench.  And, if that is an
>> > Itanic, gibber hysterically before taking early retirement on the
>> > grounds of impending insanity.
>> 
>> What does a machine check interrupt have to do with anything?
>
> Because a machine check is one of the classes of interrupt that you
> POSITIVELY want the other cores stopped until you have worked out
> whether it impacts just the interrupted core or the CPU as a whole.
> Inter alia, the PowerPC architecture takes one when a core has just
> gone AWOL - and there is NO WAY that the dead core can handle the
> interrupt indicating its own demise!

But, a floating point exception isn't a machine check interrupt, it's
a program interrupt...

>> Now, a more general reply: what are you actually trying to acheive
>> with these posts?  I presume it's more than just make wild claims
>> about how much more you know about numerical programming than anyone
>> else...
>
> Sigh.  What I am trying to get is floating-point support of the form
> that, when a programmer makes a numerical error (see above), he gets
> EITHER an exception value returned OR an exception raised.

See, that wasn't so hard!  We'd have saved a lot of heat and light if
you'd said that at the start (and if you think you'd made it clear
already: you hadn't).

Cheers,
mwh

-- 
  ... so the notion that it is meaningful to pass pointers to memory
  objects into which any random function may write random values
  without having a clue where they point, has _not_ been debunked as
  the sheer idiocy it really is.        -- Erik Naggum, comp.lang.lisp

From fredrik at pythonware.com  Fri Jun 23 15:14:54 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Fri, 23 Jun 2006 15:14:54 +0200
Subject: [Python-Dev] Numerical robustness, IEEE etc.
In-Reply-To: <E1FtXRB-00041c-Hg@draco.cus.cam.ac.uk>
References: <2mr71hjzpp.fsf@starship.python.net>
	<E1FtXRB-00041c-Hg@draco.cus.cam.ac.uk>
Message-ID: <e7gpgc$snm$1@sea.gmane.org>

Nick Maclaren wrote:

> Unfortunately, that doesn't help, because it is not where the issues
> are.  What I don't know is how much you know about numerical models,
> IEEE 754 in particular, and C99.  You weren't active on the SC22WG14
> reflector, but there were some lurkers.

SC22WG14?  is that some marketing academy?  not a very good one, obviously.

</F>


From tim.peters at gmail.com  Fri Jun 23 15:24:23 2006
From: tim.peters at gmail.com (Tim Peters)
Date: Fri, 23 Jun 2006 09:24:23 -0400
Subject: [Python-Dev] Numerical robustness, IEEE etc.
In-Reply-To: <2e1434c10606230457h7af2f39j78a749b5984af7c1@mail.gmail.com>
References: <2mr71hjzpp.fsf@starship.python.net>
	<E1FtXRB-00041c-Hg@draco.cus.cam.ac.uk>
	<2e1434c10606230457h7af2f39j78a749b5984af7c1@mail.gmail.com>
Message-ID: <1f7befae0606230624p545518f6h41513e326fea5665@mail.gmail.com>

[Kevin Jacobs]
> ...
> A good place to start: You mentioned earlier that there were some
> nonsensical things in floatobject.c.  Can you list some of the most serious
> of these?

I suspect Nick spends way too much time reading standards ;-)  What he said is:

    If you look at floatobject.c, you will find it solid with constructions that
    make limited sense in C99 but next to no sense in C89.

And, in fact, C89 truly defines almost nothing about floating-point
semantics or pragmatics.  Nevertheless, if a thing "works" under gcc
and under MS C, then "it works" for something like 99.9% of Python's
users, and competitive pressures are huge for other compiler vendors
to play along with those two.

I don't know what specifically Nick had in mind, and join the chorus
asking for specifics.  I _expect_ he's got a keen eye for genuine
coding bugs here, but also expect I'd consider many technically
dubious bits of FP code to be fine under the "de facto standard"
dodge.

For an example much worse than anything in floatobject.c _could_ be, C
doesn't guarantee that an "int" can hold an integer larger than 32767.
 I doubt Python would run _at all_ on a platform with 16-bit ints, and
the "de facto standard" Python actually tries to follow is that an int
holds at least 32 bits.  Even more fundamental than that, we use
-fno-strict-aliasing under gcc because our C code is in fact in
undefined-behavior-land all over the place when casting between
PyObject* and pointers to objects of conceptual PyObject subclasses.
Because of the _way_ these casts are done, we're almost certainly in
no real trouble under any compiler, and the real reason we use
-fno-strict-aliasing under gcc is just to shut up the annoying warning
messages.

WRT floatobject.c, part of Python's assumed "de facto C standard" is
that no FP operation will raise SIGFPE, or cause "abnormal"
termination of the program by any other means.  C89 doesn't say
squat about that of any real use.  C99 kinda does, sometimes, under
some conditions.  The IEEE-754 standard mandates "no-stop mode" for
its default numeric environment, and Python effectively assumes that,
and _forces_ it on platforms where it's not the default.  The only
known platform to date on which it was necessary to do so can be
deduced from Python's main() function:

int
main(int argc, char **argv)
{
...
#ifdef __FreeBSD__
	fp_except_t m;

	m = fpgetmask();
	fpsetmask(m & ~FP_X_OFL);
#endif
	return Py_Main(argc, argv);
}

So, sure, everything we do is undefined, but, no, we don't really care
:-)  If a non-trivial 100%-guaranteed-by-the-standard-to-work C
program exists, I don't think I've seen it.

Changes in float behavior really have to go thru the numerical Python
users, because they have the largest stake.  From that community, by
far the most frequent request I hear (even to the extent that people
seek me out at conferences and hint that they're willing to throw
money at it ;-)) is for a way to stop Python from raising
ZeroDivisionError on float divides.  They may or may not be aware of
the dubious justifications for the no-stop results IEEE-754 mandates
for the various div-by-0 sub-cases, but they're eager to live with
them regardless.  Even those who agree those "should be" errors (seems
to be a minority these days) would rather search result arrays for
NaNs and Infs after-the-fact than litter masses of array-syntax-driven
computation with hard-to-get-right recovery code.  For a stupid
example, in

    a = b / c

you may be dividing millions of array elements:  do you really want to
throw away all the work you've done so far just because division
#1384923 divided by 0?  "Yes" if you think the entire world is beyond
salvation if that happens, but that's rarely the case.  When writing a
_scalar_ loop you generally don't mind catching exceptions, or even
doing

    if c[i]:
        a[i] = b[i] / c[i]

You can treat each computation specially, because you're only _doing_
one computation at a time.  When computing with giant aggregates,
exceptions can be much harder to live with.
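
For concreteness, the "scan afterwards" style looks roughly like this --
a minimal sketch assuming NumPy and its numpy.seterr() error-state
control (the array names are made up):

    import numpy

    b = numpy.array([1.0, 2.0, 3.0])
    c = numpy.array([2.0, 0.0, 4.0])

    # Let divide-by-zero and invalid operations produce inf/nan quietly,
    # then inspect the whole result array after the fact.
    old = numpy.seterr(divide='ignore', invalid='ignore')
    try:
        a = b / c
    finally:
        numpy.seterr(**old)

    suspect = numpy.isinf(a) | numpy.isnan(a)
    print "result:", a
    print "suspect entries at:", suspect.nonzero()[0]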

BTW, Nick, are you aware of Python's fpectl module?  That's
user-contributed code that attempts to catch overflow, div-by-0, and
invalid operation on 754 boxes and transform them  into raising a
Python-level FloatingPointError exception.  Changes were made all over
the place to try to support this at the time.  Every time you see a
PyFPE_START_PROTECT or PyFPE_END_PROTECT macro in Python's C code,
that's the system it's trying to play nicely with.  "Normally", those
macros have empty expansions.
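
For reference, usage is about as simple as it gets -- a minimal sketch,
assuming a Python built with --with-fpectl on one of the platforms the
#ifdef'ery still covers:

    import fpectl
    fpectl.turnon_sigfpe()

    try:
        x = 1e300 * 1e300          # would silently give inf otherwise
    except FloatingPointError, exc:
        print "trapped:", exc
    fpectl.turnoff_sigfpe()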

fpectl is no longer built by default, because repeated attempts failed
to locate any but "ya, I played with it once, I think" users, and the
masses of platform-specific #ifdef'ery in fpectlmodule.c were
suffering fatal bit-rot.  No users + no maintainers means I expect
it's likely that module will go away in the foreseeable future.  You'd
probably hate its _approach_ to this anyway ;-)

From thomas at python.org  Fri Jun 23 16:26:36 2006
From: thomas at python.org (Thomas Wouters)
Date: Fri, 23 Jun 2006 16:26:36 +0200
Subject: [Python-Dev] Moving the ctypes repository to python.org
In-Reply-To: <e7ge79$kg8$1@sea.gmane.org>
References: <e7ge79$kg8$1@sea.gmane.org>
Message-ID: <9e804ac0606230726k602e3c89wecaa7507d11b3193@mail.gmail.com>

On 6/23/06, Thomas Heller <theller at python.net> wrote:

> Is it possible to take the CVS repository files (they can be accessed
> with rsync), and import that, preserving the whole history, into SVN?


I don't remember a svn-specific tool for it (although I am fairly out of
touch in that regard). However, if no one else comes up with a better
alternative, I believe you could use 'tailor' to recreate the history in
subversion. It'd do a checkin for every checkin in CVS, and take a while, so
you'd probably want to test it in a private subversion tree, and run it 'for
real' in a directory that won't generate checkins to python-checkins ;)

-- 
Thomas Wouters <thomas at python.org>

Hi! I'm a .signature virus! copy me into your .signature file to help me
spread!
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.python.org/pipermail/python-dev/attachments/20060623/8bc8f1c0/attachment.html 

From gustavo at niemeyer.net  Fri Jun 23 16:56:02 2006
From: gustavo at niemeyer.net (Gustavo Niemeyer)
Date: Fri, 23 Jun 2006 11:56:02 -0300
Subject: [Python-Dev] Moving the ctypes repository to python.org
In-Reply-To: <e7ge79$kg8$1@sea.gmane.org>
References: <e7ge79$kg8$1@sea.gmane.org>
Message-ID: <20060623145602.GB10250@niemeyer.net>

> I think the best would be to import it into an url like
> 
> http://svn.python.org/projects/sandbox/trunk/ctypes/
> 
> Is it possible to take the CVS repository files (they can be accessed 
> with rsync), and import that, preserving the whole history, into SVN?

It should be possible to use cvs2svn itself, creating a dump file,
and loading that dump file into an existing repository with
'svnadmin load --parent-dir'.

-- 
Gustavo Niemeyer
http://niemeyer.net

From theller at python.net  Fri Jun 23 17:42:22 2006
From: theller at python.net (Thomas Heller)
Date: Fri, 23 Jun 2006 17:42:22 +0200
Subject: [Python-Dev] Moving the ctypes repository to python.org
In-Reply-To: <20060623145602.GB10250@niemeyer.net>
References: <e7ge79$kg8$1@sea.gmane.org> <20060623145602.GB10250@niemeyer.net>
Message-ID: <e7h24t$t7c$1@sea.gmane.org>

>> I think the best would be to import it into an url like
>> 
>> http://svn.python.org/projects/sandbox/trunk/ctypes/
>> 
>> Is it possible to take the CVS repository files (they can be accessed 
>> with rsync), and import that, preserving the whole history, into SVN?
> 

Gustavo Niemeyer schrieb:

> It should be possible to use cvs2svn itself, creating a dump file,
> and loading that dump file into an existing repository with
> 'svnadmin load --parent-dir'.
> 

Yes, that is what I'm experimenting with currently.

One problem that occurs is the following.  It seems that cvs2svn
does not handle a branch correctly...

Here is a partial output of 'cvs log _ctypes.c' running
on the SF CVS repository:


RCS file: /cvsroot/ctypes/ctypes/source/_ctypes.c,v
Working file: _ctypes.c
head: 1.340
branch:
locks: strict
access list:
symbolic names:
	...
	branch_1_0: 1.226.0.2
	...
keyword substitution: kv
total revisions: 431;	selected revisions: 431
...
----------------------------
revision 1.307
date: 2006/03/03 20:17:15;  author: theller;  state: Exp;  lines: +1373 -815
Moving files from branch_1_0 to HEAD.
----------------------------
revision 1.306
date: 2006/03/03 19:47:24;  author: theller;  state: dead;  lines: +0 -0
Remove all files, will add them from 'branch_1_0' again.
----------------------------
revision 1.305
date: 2005/05/11 19:17:15;  author: theller;  state: Exp;  lines: +7 -1
oops - patch was incomplete.
----------------------------
revision 1.304
date: 2005/05/11 19:11:35;  author: theller;  state: Exp;  lines: +24 -5
Don't call __init__, only __new__, when an ctypes object is retrieved
from a base object.
...

What I did was, at a certain point, develop in the 'branch_1_0' branch, leaving
HEAD for experimental work.  Later I decided that this was wrong, so I cvs-removed
all files in HEAD and added them back from a branch_1_0 checkout.  Maybe doing
this was another bad idea: the trunk in the converted SVN repository
only lists _ctypes.c revisions corresponding to the CVS version numbers
1.307 up to the current CVS head 1.340.  All the older versions, from 1.1 up to
1.226.2.55, show up in the branch_1_0 branch that cvs2svn has created - although
in CVS only the versions 1.226.0.2 up to 1.226.2.55 were ever in the branch_1_0
branch.  Is that a bug in cvs2svn?

Oh well, maybe I deserve that - but 'cvs log' shows much more info than
'svn log' does now (when run in the trunk checkout).

Thomas


From jcarlson at uci.edu  Fri Jun 23 18:28:44 2006
From: jcarlson at uci.edu (Josiah Carlson)
Date: Fri, 23 Jun 2006 09:28:44 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <ca471dc20606221712x7634c308m673bd8c6e186bb37@mail.gmail.com>
References: <5.1.1.6.0.20060622160223.03e35500@sparrow.telecommunity.com>
	<ca471dc20606221712x7634c308m673bd8c6e186bb37@mail.gmail.com>
Message-ID: <20060623085714.1DEE.JCARLSON@uci.edu>


"Guido van Rossum" <guido at python.org> wrote:
> (1) An expression of the form 'static' <atom> has the semantics of
> evaluating the atom at the same time as the nearest surrounding
> function definition. If there is no surrounding function definition,
> 'static' is a no-op and the expression is evaluated every time.
> [Alternative 1: this is an error] [Alternative 2: it is evaluated
> before the module is entered; this would imply it can not involve any
> imported names but it can involve builtins] [Alternative 3:
> precomputed the first time the switch is entered]

I'm +1 on alternative 1, but -1 on all others, with a small caveat that
I'll mark with * later on.


> (2) All case expressions in a switch have an implied 'static'.

+1

> (3) A switch is implemented using a dict which is precomputed at the
> same time its static expressions are precomputed. The switch
> expression must be hashable. Overlap between different cases will
> raise an exception at precomputation time.

+1 (As I was catching up on the flurry of emails from yesterday this
morning, I noticed to my surprise that you came around to precisely what
I had hoped for in a switch/case statement; I'm going to have to try not
posting on a subject more often <.5 wink>)

> Independent from this, I wonder if we also need static names of the form
> 
>   static <name> = <expression>
> 
> which would be similar to
> 
>   <name> = static (<expression>)
> 
> but also prevents <name> from being assigned to elsewhere in the same scope.

* I don't particularly care for the literal syntax of static.  I like
the /idea/, but I don't believe that it is as useful as people think it
is.  Let me explain.  Given your previously proposed 'all cases have an
implied static', that handles the switch/case statement, the purpose of
'static' is to explain what happens to the switch/case statement, not
that people would actually use it as such in the switch/case example (as
it is implied, and people are insane about trying to reduce how much
they type).

For general variables, like say; math.pi, re.DOTALL, sys.maxint, len,
int, Exception, etc., many of them are merely references to their values,
that is, there are no value manipulations.  This case is quite
conveniently covered by Raymond's "Decorator for BindingConstants at
compile time", a sexy decorator available in the cookbook [1].  That
decorator can handle all of the non-modifying value assignments, and in
the case of Fredrik's previously described example:

    if value < const (math.pi / 2):
        ...

... a small modification to Raymond's recipe that allows keyword
arguments (on the decorator) would preserve the beauty of the function
definition, at the cost of an uglier decorator.

    @make_constants(math_pi_2=(math.pi / 2))
    def foo(value):
        if value < math_pi_2:
            ...

Now, it's not as slick as a static unary operator, but it handles the
simple references case quite well, and can be hacked to handle the
saving of expressions without significant difficulty, though doing so
uglifies the decorator call significantly.  If a variant of that
decorator were available in the standard library, maybe with simple
variants, I believe much of the general talk about 'static' will go away. 
I could certainly be wrong, but that's ok too.
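
For concreteness, here's a rough sketch of what the keyword-argument
variant's interface could look like.  Note that this toy version (the
name bind_constants is made up) just rebuilds the function on top of an
augmented copy of its globals; it does NOT do Raymond's bytecode rewrite,
so it freezes the name bindings but gives none of the LOAD_CONST speedup:

    import math
    import types

    def bind_constants(**consts):
        def decorate(func):
            # Copy the function's globals, overlay the frozen values,
            # and rebuild the function on top of that environment.
            env = dict(func.func_globals)
            env.update(consts)
            return types.FunctionType(func.func_code, env, func.func_name,
                                      func.func_defaults, func.func_closure)
        return decorate

    @bind_constants(math_pi_2=math.pi / 2)
    def foo(value):
        if value < math_pi_2:
            return "small"
        return "large"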


> Also, I haven't heard a lot of thumbs up or down on the idea of using
> 
>   case X:
> 
> to indicate a single value and
> 
>   case in S:
> 
> to indicate a sequence of values.

+1


> (I'm not counting all the hypergeneralizations that were proposed like
> 
>   case == X:
>   case < X:
>   case is X:
>   case isinstance X:
> 
> since I'm -1 on all those, no matter how nicely they align.

The 'case == X' was cute, though if it was an alternative spelling of
'case X', I doubt it would be used terribly often.  Regardless, I'm -1
on all other cases, and would not be concerned to lose the '== X'
version.

 - Josiah

[1] http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/277940


From aleaxit at gmail.com  Fri Jun 23 19:08:26 2006
From: aleaxit at gmail.com (Alex Martelli)
Date: Fri, 23 Jun 2006 10:08:26 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <ca471dc20606221712x7634c308m673bd8c6e186bb37@mail.gmail.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<5.1.1.6.0.20060621133734.03179858@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621163853.0392d168@sparrow.telecommunity.com>
	<449A7A48.5060404@egenix.com>
	<5.1.1.6.0.20060622092032.0338e2f0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622125053.02f3d308@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622135620.01ea9bf8@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622150243.03b0bd70@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622160223.03e35500@sparrow.telecommunity.com>
	<ca471dc20606221712x7634c308m673bd8c6e186bb37@mail.gmail.com>
Message-ID: <e8a0972d0606231008id802644v52feb81b762bfade@mail.gmail.com>

On 6/22/06, Guido van Rossum <guido at python.org> wrote:
   ...
> (1) An expression of the form 'static' <atom> has the semantics of
> evaluating the atom at the same time as the nearest surrounding
> function definition. If there is no surrounding function definition,
> 'static' is a no-op and the expression is evaluated every time.
> [Alternative 1: this is an error] [Alternative 2: it is evaluated
> before the module is entered; this would imply it can not involve any
> imported names but it can involve builtins] [Alternative 3:
> precomputed the first time the switch is entered]

+1, preferably with alternative 1.  I've occasionally (ab)used the
fact that default values are computed at def time to get similar
semantics, but that (whence the "ab") has all sorts of issues (such as
"exposing" arguments you really do NOT want to be passed).


> (2) All case expressions in a switch have an implied 'static'.
>
> (3) A switch is implemented using a dict which is precomputed at the
> same time its static expressions are precomputed. The switch
> expression must be hashable. Overlap between different cases will
> raise an exception at precomputation time.

+0, just because I care about switch only up to a point!-)


> Independent from this, I wonder if we also need static names of the form
>
>   static <name> = <expression>
>
> which would be similar to
>
>   <name> = static (<expression>)
>
> but also prevents <name> from being assigned to elsewhere in the same scope.

Lovely!!!  Definitely +1.  Could perhaps THIS use of static be allowed
even outside of a def?  I'd just love to have such static names in
modules and classes, too (with runtime checks of errant assignments,
if needed).


> Also, I haven't heard a lot of thumbs up or down on the idea of using
>
>   case X:
>
> to indicate a single value and
>
>   case in S:
>
> to indicate a sequence of values.
>
> (I'm not counting all the hypergeneralizations that were proposed like
>
>   case == X:
>   case < X:
>   case is X:
>   case isinstance X:
>
> since I'm -1 on all those, no matter how nicely they align.

Agreed on the generalizations, but allowing (just) "case == X" and
"case in S" looks more readable to me than "case X" and "case in S".
Since I'm not overly focused on switch/case anyway, _and_ this choice
is just about syntax sugar anyway, my preference's mild!-)


Alex

From guido at python.org  Fri Jun 23 19:17:11 2006
From: guido at python.org (Guido van Rossum)
Date: Fri, 23 Jun 2006 10:17:11 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <20060623085714.1DEE.JCARLSON@uci.edu>
References: <5.1.1.6.0.20060622160223.03e35500@sparrow.telecommunity.com>
	<ca471dc20606221712x7634c308m673bd8c6e186bb37@mail.gmail.com>
	<20060623085714.1DEE.JCARLSON@uci.edu>
Message-ID: <ca471dc20606231017p78b9f7cfvdd604a95c8d0a702@mail.gmail.com>

On 6/23/06, Josiah Carlson <jcarlson at uci.edu> wrote:
>
> "Guido van Rossum" <guido at python.org> wrote:
> > (1) An expression of the form 'static' <atom> has the semantics of
> > evaluating the atom at the same time as the nearest surrounding
> > function definition. If there is no surrounding function definition,
> > 'static' is a no-op and the expression is evaluated every time.
> > [Alternative 1: this is an error] [Alternative 2: it is evaluated
> > before the module is entered; this would imply it can not involve any
> > imported names but it can involve builtins] [Alternative 3:
> > precomputed the first time the switch is entered]
>
> I'm +1 on alternative 1, but -1 on all others, with a small caveat that
> I'll mark with * later on.

I'm beginning to lean in that direction myself. A clean rule for
switches would be that if it's not inside a function, the cases must
be compile-time constant expressions.

> > (2) All case expressions in a switch have an implied 'static'.
>
> +1
>
> > (3) A switch is implemented using a dict which is precomputed at the
> > same time its static expressions are precomputed. The switch
> > expression must be hashable. Overlap between different cases will
> > raise an exception at precomputation time.
>
> +1 (As I was catching up on the flurry of emails from yesterday this
> morning, I noticed to my surprise that you came around to precisely what
> I had hoped for in a switch/case statement; I'm going to have to try not
> posting on a subject more often <.5 wink>)

No kidding. There is such a thing as too much heat. I need to do this
more often myself, too!

> > Independent from this, I wonder if we also need static names of the form
> >
> >   static <name> = <expression>
> >
> > which would be similar to
> >
> >   <name> = static (<expression>)
> >
> > but also prevents <name> from being assigned to elsewhere in the same scope.
>
> * I don't particularly care for the literal syntax of static.

What do you mean by "literal syntax of static"? Do you mean 'static'
<atom> or 'static'  <name> '=' <expression>? Or something else?

> I like
> the /idea/, but I don't believe that it is as useful as people think it
> is.  Let me explain.  Given your previously proposed 'all cases have an
> implied static', that handles the switch/case statement, the purpose of
> 'static' is to explain what happens to the switch/case statement, not
> that people would actually use it as such in the switch/case example (as
> it is implied, and people are insane about trying to reduce how much
> they type).

I'm all for typing less if it makes things clearer for the reader. I'm
not for reductions in what you type at the expense of readability.
Often, reducing redundant boiler plate is of the first category, since
the reader must just skip the boiler plate if it's explicitly typed.

> For general variables, like say; math.pi, re.DOTALL, sys.maxint, len,
> int, Exception, etc., many of them are merely references to their values,
> that is, there are no value manipulations.  This case is quite
> conveniently covered by Raymond's "Decorator for BindingConstants at
> compile time", a sexy decorator available in the cookbook [1].  That
> decorator can handle all of the non-modifying value assignments, and in
> the case of Fredrik's previously described example:

But it is an absolutely yucky approach. It modifies byte code. That
makes it break in future Python versions (Python's byte code is not
standardized across versions), as well in Jython, IronPython, and
sandboxed Python (which will make a come-back, see Brett's post).

If it's as valuable as to let people devise the crap in that cookbook
entry (no offense, I'm sure it's a great intellectual accomplishment,
but it's fundamentally the wrong approach) then it's worth adding
something to the language to do it right. As the cookbook discussion
mentions, the decorator assumes that all globals are constant. That is
way too implicit for me.

(IOW, the existence of that cookbook entry proves to me that the
language needs to support something like this explicitly.)

>     if value < const (math.pi / 2):
>         ...
>
> ... a small modification to Raymond's recipe that allows keyword
> arguments (on the decorator) would preserve the beauty of the function
> definition, at the cost of an uglier decorator.
>
>     @make_constants(math_pi_2=(math.pi / 2))
>     def foo(value):
>         if value < math_pi_2:
>             ...

Realistically that would be way too verbose. Note how the string
approximating math...pi...2 now occurs three times in the code where
Fredrik's example had it only once!

> Now, it's not as slick as a static unary operator, but it handles the
> simple references case quite well, and can be hacked to handle the
> saving of expressions without significant difficulty, though such
> uglifies the decorator call significantly.  If a variant of that
> decorator were available in the standard library, maybe with simple
> variants, I believe much of the general talk about 'static' will go away.
> I could certainly be wrong, but that's ok too.

It's fundamentally the wrong approach.

> > Also, I haven't heard a lot of thumbs up or down on the idea of using
> >
> >   case X:
> >
> > to indicate a single value and
> >
> >   case in S:
> >
> > to indicate a sequence of values.
>
> +1
>
>
> > (I'm not counting all the hypergeneralizations that were proposed like
> >
> >   case == X:
> >   case < X:
> >   case is X:
> >   case isinstance X:
> >
> > since I'm -1 on all those, no matter how nicely they align.
>
> The 'case == X' was cute, though if it was an alternative spelling of
> 'case X', I doubt it would be used terribly often.  Regardless, I'm -1
> on all other cases, and would not be concerned to lose the '== X'
> version.
>
>  - Josiah
>
> [1] http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/277940

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Fri Jun 23 19:23:29 2006
From: guido at python.org (Guido van Rossum)
Date: Fri, 23 Jun 2006 10:23:29 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <e8a0972d0606231008id802644v52feb81b762bfade@mail.gmail.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<5.1.1.6.0.20060621163853.0392d168@sparrow.telecommunity.com>
	<449A7A48.5060404@egenix.com>
	<5.1.1.6.0.20060622092032.0338e2f0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622125053.02f3d308@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622135620.01ea9bf8@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622150243.03b0bd70@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622160223.03e35500@sparrow.telecommunity.com>
	<ca471dc20606221712x7634c308m673bd8c6e186bb37@mail.gmail.com>
	<e8a0972d0606231008id802644v52feb81b762bfade@mail.gmail.com>
Message-ID: <ca471dc20606231023g73c48535k2a7ee49afada3487@mail.gmail.com>

On 6/23/06, Alex Martelli <aleaxit at gmail.com> wrote:
> On 6/22/06, Guido van Rossum <guido at python.org> wrote:
> > Independent from this, I wonder if we also need static names of the form
> >
> >   static <name> = <expression>
> >
> > which would be similar to
> >
> >   <name> = static (<expression>)
> >
> > but also prevents <name> from being assigned to elsewhere in the same scope.
>
> Lovely!!!  Definitely +1.  Could perhaps THIS use of static be allowed
> even outside of a def?  I'd just love to have such static names in
> modules and classes, too (with runtime checks of errant assignments,
> if needed).

It would provide no speed advantage, and I don't see how the
staticness would be transferred upon import into another module.
Runtime checks of errant assignments would be relatively easy: trap
this in the module setattr operation, and henceforth let
module.__dict__ return a read-only dict wrapper. (Except that would
break "exec E in globals()". I guess it would have to be a dict
wrapper that only makes those specific keys read-only...)
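
A rough sketch of the setattr half, as it could be hacked up today by
swapping a ModuleType subclass into sys.modules (names invented; and note
it only catches attribute assignment from *outside* the module -- code
inside the module writes straight into the dict, which is exactly where
the read-only dict wrapper would have to come in):

    import sys
    import types

    class StaticModule(types.ModuleType):
        _statics = frozenset()
        def __setattr__(self, name, value):
            # Refuse to rebind a name declared static once it has a value.
            if name in self._statics and name in self.__dict__:
                raise TypeError("cannot rebind static name %r" % name)
            types.ModuleType.__setattr__(self, name, value)

    def declare_static(module_name, *names):
        old = sys.modules[module_name]
        new = StaticModule(module_name)
        new.__dict__.update(old.__dict__)
        new.__dict__['_statics'] = frozenset(names)
        sys.modules[module_name] = new
        return new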

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From jcarlson at uci.edu  Fri Jun 23 19:47:16 2006
From: jcarlson at uci.edu (Josiah Carlson)
Date: Fri, 23 Jun 2006 10:47:16 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <e8a0972d0606231008id802644v52feb81b762bfade@mail.gmail.com>
References: <ca471dc20606221712x7634c308m673bd8c6e186bb37@mail.gmail.com>
	<e8a0972d0606231008id802644v52feb81b762bfade@mail.gmail.com>
Message-ID: <20060623104111.1DF1.JCARLSON@uci.edu>


"Alex Martelli" <aleaxit at gmail.com> wrote:
> On 6/22/06, Guido van Rossum <guido at python.org> wrote:
> > Independent from this, I wonder if we also need static names of the form
> >
> >   static <name> = <expression>
> >
> > which would be similar to
> >
> >   <name> = static (<expression>)
> >
> > but also prevents <name> from being assigned to elsewhere in the same scope.
> 
> Lovely!!!  Definitely +1.  Could perhaps THIS use of static be allowed
> even outside of a def?  I'd just love to have such static names in
> modules and classes, too (with runtime checks of errant assignments,
> if needed).

The problem is that there would need to be a change to modules to be
more class-like (read-only properties), as "module.foo = x" would need
to raise an exception if foo was static in module.

 - Josiah


From mal at egenix.com  Fri Jun 23 19:51:03 2006
From: mal at egenix.com (M.-A. Lemburg)
Date: Fri, 23 Jun 2006 19:51:03 +0200
Subject: [Python-Dev] Switch statement
In-Reply-To: <e7eqgk$qsq$2@sea.gmane.org>
References: <17547.19802.361151.705599@montanaro.dyndns.org>	<44988C6E.4080806@canterbury.ac.nz>	<449920A4.7040008@gmail.com>	<5.1.1.6.0.20060621121821.03275ca8@sparrow.telecommunity.com>	<5.1.1.6.0.20060621130606.0373c788@sparrow.telecommunity.com>	<5.1.1.6.0.20060621133734.03179858@sparrow.telecommunity.com>	<5.1.1.6.0.20060621163853.0392d168@sparrow.telecommunity.com>	<449A7A48.5060404@egenix.com>	<5.1.1.6.0.20060622092032.0338e2f0@sparrow.telecommunity.com>	<5.1.1.6.0.20060622125053.02f3d308@sparrow.telecommunity.com>	<ca471dc20606221044g6701d2c9wd155ab003753249@mail.gmail.com>	<449AE75A.7050302@egenix.com>
	<e7eqgk$qsq$2@sea.gmane.org>
Message-ID: <449C2A07.1030506@egenix.com>

Georg Brandl wrote:
> M.-A. Lemburg wrote:
> 
>> A nice side-effect would be that one could easily use the
>> same approach to replace the often used default-argument-hack,
>> e.g.
>>
>> def fraction(x, int=int, float=float):
>>     return float(x) - int(x)
>>
>> This would then read:
>>
>> def fraction(x):
>>     const int, float
>>     return float(x) - int(x)
> 
> There's a certain risk that the premature-optimization fraction will
> plaster every function with const declarations, but they write
> unreadable code anyway ;)
>
> Aside from this, there's still another point: assume you have quite a
> number of module-level string "constants" which you want to use in a switch.
> You'd have to repeat all of their names in a "const" declaration in order
> to use them this way.

If you want to use the switch-dispatch table optimization, yes.

I'm sure we could find ways to make such declarations more
user-friendly. E.g. to declare all symbols imported from a
module constant:

# Declare the name "module" constant:
const module

# Declare all references "module.<something>" constant:
const module.*

This would allow you to e.g. declare all builtins constant,
avoiding cluttering up your code with const declarations,
as in the above example.

Note that such a declaration would go beyond just the use in
a switch statement. It allows you to declare names reserved
within the scope you are defining them in and gives them
a special meaning - much like "global" does.

Indeed, with this kind of declaration you wouldn't need to
add the switch statement to benefit from the dispatch
table optimization, since the compiler could easily identify
an if-elif-else chain as being optimizable even if it uses
symbols instead of literals for the values.

Furthermore, the compiler could do other optimizations on the
const declared names, such as optimizing away global lookups
and turning them into code object constants lookups.
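
To make the dispatch-table idea concrete, here is roughly what such an
optimization would build behind the scenes, written by hand today (all
names below are invented for illustration):

    CASE_A, CASE_B = "a", "b"          # stand-ins for const-declared names

    def handle_a(obj):
        return "A: %r" % (obj,)

    def handle_b(obj):
        return "B: %r" % (obj,)

    def handle_default(obj):
        return "?: %r" % (obj,)

    # The precomputed table that an if/elif chain over constant names
    # could be compiled into:
    _dispatch = {CASE_A: handle_a, CASE_B: handle_b}

    def process(tag, obj):
        return _dispatch.get(tag, handle_default)(obj)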

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 23 2006)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________

::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! ::::

From steven.bethard at gmail.com  Fri Jun 23 19:52:07 2006
From: steven.bethard at gmail.com (Steven Bethard)
Date: Fri, 23 Jun 2006 11:52:07 -0600
Subject: [Python-Dev] Switch statement
In-Reply-To: <ca471dc20606221712x7634c308m673bd8c6e186bb37@mail.gmail.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<5.1.1.6.0.20060621133734.03179858@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621163853.0392d168@sparrow.telecommunity.com>
	<449A7A48.5060404@egenix.com>
	<5.1.1.6.0.20060622092032.0338e2f0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622125053.02f3d308@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622135620.01ea9bf8@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622150243.03b0bd70@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622160223.03e35500@sparrow.telecommunity.com>
	<ca471dc20606221712x7634c308m673bd8c6e186bb37@mail.gmail.com>
Message-ID: <d11dcfba0606231052q7c3f129oa6fd7d722f8483dc@mail.gmail.com>

[delurking in response to the first really decisive message in the thread] ;-)

On 6/22/06, Guido van Rossum <guido at python.org> wrote:
> (1) An expression of the form 'static' <atom> has the semantics of
> evaluating the atom at the same time as the nearest surrounding
> function definition.

FWIW, +1.  This is clear and useful.  And code like the following
could be fixed by simply adding a "static" before the "i", instead of
adding default arguments::

    funcs = []
    for i in xrange(10):
        def f():
            return i
        funcs.append(f)
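
(For comparison, the workaround that works today spells the freeze as a
default argument -- a small sketch:)

    funcs = []
    for i in xrange(10):
        def f(i=i):                # bind the current value of i at def time
            return i
        funcs.append(f)

    print [f() for f in funcs]     # [0, 1, ..., 9] instead of ten 9s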

> If there is no surrounding function definition,
> 'static' is a no-op and the expression is evaluated every time.
> [Alternative 1: this is an error]

+1 on the error.  Function definition time doesn't really make sense
for modules.  Only +0 on allowing only compile-time constants, since
it would be a little harder to explain.  I guess you'd want to tell
people something like "think of a module as being a function that is
defined at compile time and called on import".

> (2) All case expressions in a switch have an implied 'static'.

+1.  You already have to understand that switch statements are
evaluated at function definition time, so the 'static' does seem a bit
redundant.

> (3) A switch is implemented using a dict which is precomputed at the
> same time its static expressions are precomputed. The switch
> expression must be hashable. Overlap between different cases will
> raise an exception at precomputation time.

+1.  What a wonderful, simple explanation. =)

> Independent from this, I wonder if we also need static names of the form
>
>   static <name> = <expression>
>
> which would be similar to
>
>   <name> = static (<expression>)
>
> but also prevents <name> from being assigned to elsewhere in the same scope.

-0.  I'm not sure how often this is really necessary.  I'd rather see
static expressions in Python 2.6, see how people use them, and then
decide whether or not static names are also needed.

> Also, I haven't heard a lot of thumbs up or down on the idea of using
>
>   case X:
>
> to indicate a single value and
>
>   case in S:
>
> to indicate a sequence of values.

+1.  This syntax seems pretty intuitive.

STeVe
-- 
I'm not *in*-sane. Indeed, I am so far *out* of sane that you appear a
tiny blip on the distant coast of sanity.
        --- Bucky Katt, Get Fuzzy

From mal at egenix.com  Fri Jun 23 20:01:47 2006
From: mal at egenix.com (M.-A. Lemburg)
Date: Fri, 23 Jun 2006 20:01:47 +0200
Subject: [Python-Dev] Switch statement
In-Reply-To: <449C2A07.1030506@egenix.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>	<44988C6E.4080806@canterbury.ac.nz>	<449920A4.7040008@gmail.com>	<5.1.1.6.0.20060621121821.03275ca8@sparrow.telecommunity.com>	<5.1.1.6.0.20060621130606.0373c788@sparrow.telecommunity.com>	<5.1.1.6.0.20060621133734.03179858@sparrow.telecommunity.com>	<5.1.1.6.0.20060621163853.0392d168@sparrow.telecommunity.com>	<449A7A48.5060404@egenix.com>	<5.1.1.6.0.20060622092032.0338e2f0@sparrow.telecommunity.com>	<5.1.1.6.0.20060622125053.02f3d308@sparrow.telecommunity.com>	<ca471dc20606221044g6701d2c9wd155ab003753249@mail.gmail.com>	<449AE75A.7050302@egenix.com>	<e7eqgk$qsq$2@sea.gmane.org>
	<449C2A07.1030506@egenix.com>
Message-ID: <449C2C8B.70903@egenix.com>

Reading on in the thread it seems that there's agreement
on using "static" instead of "const", so s/const/static
:-)

M.-A. Lemburg wrote:
> Georg Brandl wrote:
>> M.-A. Lemburg wrote:
>>
>>> A nice side-effect would be that could easily use the
>>> same approach to replace the often used default-argument-hack,
>>> e.g.
>>>
>>> def fraction(x, int=int, float=float):
>>>     return float(x) - int(x)
>>>
>>> This would then read:
>>>
>>> def fraction(x):
>>>     const int, float
>>>     return float(x) - int(x)
>> There's a certain risk that the premature-optimization faction will
>> plaster every function with const declarations, but they write
>> unreadable code anyway ;)
>>
>> Aside from this, there's still another point: assume you have quite a
>> number of module-level string "constants" which you want to use in a switch.
>> You'd have to repeat all of their names in a "const" declaration in order
>> to use them this way.
> 
> If you want to use the switch-dispatch table optimization, yes.
> 
> I'm sure we could find ways to make such declarations more
> user-friendly. E.g. to declare all symbols imported from a
> module constant:
> 
> # Declare the name "module" constant:
> const module
> 
> # Declare all references "module.<something>" constant:
> const module.*
> 
> This would allow you to e.g. declare all builtins constant,
> avoiding cluttering up your code with const declarations,
> as in the above example.
> 
> Note that such a declaration would go beyond just the use in
> a switch statement. It allows you to declare names reserved
> within the scope you are defining them in and gives them
> a special meaning - much like "global" does.
> 
> Indeed, with this kind of declaration you wouldn't need to
> add the switch statement to benefit from the dispatch
> table optimization, since the compiler could easily identify
> an if-elif-else chain as being optimizable even if it uses
> symbols instead of literals for the values.
> 
> Furthermore, the compiler could do other optimizations on the
> const-declared names, such as optimizing away global lookups
> and turning them into lookups of constants stored on the code object.
> 

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 23 2006)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________

::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! ::::

From kd5bjo at gmail.com  Fri Jun 23 20:02:11 2006
From: kd5bjo at gmail.com (Eric Sumner)
Date: Fri, 23 Jun 2006 13:02:11 -0500
Subject: [Python-Dev] Switch statement
In-Reply-To: <ca471dc20606221712x7634c308m673bd8c6e186bb37@mail.gmail.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<5.1.1.6.0.20060621133734.03179858@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621163853.0392d168@sparrow.telecommunity.com>
	<449A7A48.5060404@egenix.com>
	<5.1.1.6.0.20060622092032.0338e2f0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622125053.02f3d308@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622135620.01ea9bf8@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622150243.03b0bd70@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622160223.03e35500@sparrow.telecommunity.com>
	<ca471dc20606221712x7634c308m673bd8c6e186bb37@mail.gmail.com>
Message-ID: <eaaf21dc0606231102p25b016e2r97d6a5dd3d19f90e@mail.gmail.com>

On 6/22/06, Guido van Rossum <guido at python.org> wrote:
> (3) A switch is implemented using a dict which is precomputed at the
> same time its static expressions are precomputed. The switch
> expression must be hashable. Overlap between different cases will
> raise an exception at precomputation time.

How does this interact with __contains__, __len__, and __iter__ for
the 'case in S' statement?  Would it work with a class that only
implements __contains__, such as a continuous range class?

  -- Eric Sumner

From jcarlson at uci.edu  Fri Jun 23 20:15:52 2006
From: jcarlson at uci.edu (Josiah Carlson)
Date: Fri, 23 Jun 2006 11:15:52 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <ca471dc20606231017p78b9f7cfvdd604a95c8d0a702@mail.gmail.com>
References: <20060623085714.1DEE.JCARLSON@uci.edu>
	<ca471dc20606231017p78b9f7cfvdd604a95c8d0a702@mail.gmail.com>
Message-ID: <20060623105105.1DF4.JCARLSON@uci.edu>


"Guido van Rossum" <guido at python.org> wrote:
> On 6/23/06, Josiah Carlson <jcarlson at uci.edu> wrote:
> > "Guido van Rossum" <guido at python.org> wrote:
> > > Independent from this, I wonder if we also need static names of the form
> > >
> > >   static <name> = <expression>
> > >
> > > which would be similar to
> > >
> > >   <name> = static (<expression>)
> > >
> > > but also prevents <name> from being assigned to elsewhere in the same scope.
> >
> > * I don't particularly care for the literal syntax of static.
> 
> What do you mean by "literal syntax of static"? Do you mean 'static'
> <atom> or 'static'  <name> '=' <expression>? Or something else?

The existence of the static keyword and its use in general.

You later stated that decorators were the wrong way of handling it. I
believe that the...
    static <name> = <expression>
...would require too many changes to what some regular python users have
come to expect from at least module namespaces.  I have nothing
constructive to say about the function local case.

I believe that static is generally fine for the...
    static <expression>
... case, whether it is prefixed with a '<name> =', or some other
operation on the value.

Allowing things like 'value < static (math.pi / 2)' brings up the
question of where the calculated value of (math.pi / 2) will be stored. 
Presumably it would be stored in a function or module const table, and
that is fine.  But what does the operation:
    <name> = static <expression>
...do?  In a function namespace, do we calculate expression, assign it
to the <name> local on function definition and call it good?  Or do we
load the stored evaluated <expression> each pass through during runtime,
making it effectively equivalent to:
    <name> = <literal>
I hope it's the latter (assign it to the local from a const table at the
point of the '<name> = static ...' line).
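
A small runnable illustration of the difference in today's Python (the
function names are made up); the default argument plays the role the
const table would play:

    import dis, math

    def every_call():
        return math.pi / 2        # LOAD_GLOBAL math + LOAD_ATTR pi on every call

    def bound_at_def_time(value=math.pi / 2):
        return value              # computed once, when the def executes

    dis.dis(every_call)
    dis.dis(bound_at_def_time)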


> > For general variables, like say; math.pi, re.DOTALL, sys.maxint, len,
> > int, Exception, etc., many of them are merely references to their values,
> > that is, there are no value manipulations.  This case is quite
> > conveniently covered by Raymond's "Decorator for BindingConstants at
> > compile time", a sexy decorator available in the cookbook [1].  That
> > decorator can handle all of the non-modifying value assignments, and in
> > the case of Frederick's previously described:
> 
> But it is an absolutely yucky approach. It modifies byte code. That
> makes it break in future Python versions (Python's byte code is not
> standardized across versions), as well as in Jython, IronPython, and
> sandboxed Python (which will make a come-back, see Brett's post).

You make a good point.  It really is only usable in particular CPython
versions at any one time, though I am generally of a different opinion:
if for some desired operation X you can get identical functionality
and/or speed improvements during runtime without additional syntax, and
it is easy to use, then there is no reason to change syntax.

It seems that this particular operation can be cumbersome, is a
maintenance nightmare, and isn't applicable to non-CPython, so it
violates my reasons for 'shouldn't become syntax'.


If it makes others happy, static is fine with me.

 - Josiah


From jimjjewett at gmail.com  Fri Jun 23 20:21:18 2006
From: jimjjewett at gmail.com (Jim Jewett)
Date: Fri, 23 Jun 2006 14:21:18 -0400
Subject: [Python-Dev] Numerical robustness, IEEE etc.
In-Reply-To: <fb6fbf560606231054j4c4be1ajc152a6fb21032dd5@mail.gmail.com>
References: <fb6fbf560606231054j4c4be1ajc152a6fb21032dd5@mail.gmail.com>
Message-ID: <fb6fbf560606231121v1384256ew41ca19b782889293@mail.gmail.com>

Nick Maclaren wrote:

> The standard floating-point model is that it maps functions defined on
> the reals (sometimes complex) to approximations defined on floating-
> point.

OK.

> The conventional interpretation was that any operation that
> was not mathematically continuous in an open region including its
> argument values (in the relevant domain) was an error, and that all
> such errors should be flagged.  That is what I am talking about.

Not a bad goal, but not worth sweating over, since it isn't
sufficient.  It still allows functions whose continuity does not
extend to the next possible floating point approximation, or functions
whose value, while continuous, changes "too much" in that region.

> Unfortunately, that doesn't help, because it is not where the issues
> are.  What I don't know is how much you know about numerical models,
> IEEE 754 in particular, and C99.  You weren't active on the SC22WG14

For some uses, it is more important to be consistent with established
practice than to be as correct as possible.  If the issues are still
problems, and can't be solved in languages like java, then ... the
people who want "correct" behavior will be a tiny minority, and it
makes sense to have them use a 3rd-party extension.

> For example, consider conversion between float
> and long - which class should control the semantics?

The current python approach with binary fp is to inherit from C
(consistency with established practice).  The current python approach
for Decimal (or custom classes) is to refuse to guess in the first
place; people need to make an explicit conversion.  How is this a
problem?

> they are unspecified - IDEALLY, things like floating-point
> traps would be handled thread-locally ... but things like TLB
> miss traps, device interrupts and machine-check interrupts
> need to be CPU-global.  Unfortunately, modern architectures
> use a single mechanism for all of them

That sounds like a serious problem for some combination of threading
and trying to use hardware floating point at maximum efficiency.  It
does not sound like a problem for software implementations, like
python's Decimal package.  It *might* limit the gains that a portable
C re-implementation could get, but if you need the absolute fastest
and you need threads -- maybe that is again the domain of a 3rd-party
extension.

-jJ

From pje at telecommunity.com  Fri Jun 23 20:27:18 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Fri, 23 Jun 2006 14:27:18 -0400
Subject: [Python-Dev] Switch statement
In-Reply-To: <449C2A07.1030506@egenix.com>
References: <e7eqgk$qsq$2@sea.gmane.org>
	<17547.19802.361151.705599@montanaro.dyndns.org>
	<44988C6E.4080806@canterbury.ac.nz> <449920A4.7040008@gmail.com>
	<5.1.1.6.0.20060621121821.03275ca8@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621130606.0373c788@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621133734.03179858@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621163853.0392d168@sparrow.telecommunity.com>
	<449A7A48.5060404@egenix.com>
	<5.1.1.6.0.20060622092032.0338e2f0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622125053.02f3d308@sparrow.telecommunity.com>
	<ca471dc20606221044g6701d2c9wd155ab003753249@mail.gmail.com>
	<449AE75A.7050302@egenix.com> <e7eqgk$qsq$2@sea.gmane.org>
Message-ID: <5.1.1.6.0.20060623141300.027a71a0@sparrow.telecommunity.com>

At 07:51 PM 6/23/2006 +0200, M.-A. Lemburg wrote:
>Furthermore, the compiler could do other optimizations on the
>const-declared names, such as optimizing away global lookups
>and turning them into lookups of constants stored on the code object.

Technically, they'd have to become LOAD_DEREF on cells set up by the module 
level code and attached to function objects.  'marshal' won't be able to 
save function references or other such objects to a .pyc file.

It's interesting that this line of thinking does get us closer to the 
long-desired builtins optimization.  I'm envisioning:

     static __builtin__.*

or something like that.  Hm.  Maybe:

     from __builtin__ import static *

:)

In practice, however, this doesn't work for * imports unless it causes all 
global-scope names with no statically detectable assignments to become 
static.  That could be a problem for modules that generate symbols 
dynamically, like 'opcode' in the stdlib.

OTOH, maybe we could just have a LOAD_STATIC opcode that works like 
LOAD_DEREF but falls back to using globals if the cell is empty.
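
A rough pure-Python sketch of that lookup rule (hypothetical helper; a real
implementation would be an opcode, and a real global lookup would also fall
back to builtins):

    _empty = object()

    def load_static(cell_value, globals_dict, name):
        if cell_value is not _empty:
            return cell_value         # like LOAD_DEREF on a filled cell
        return globals_dict[name]     # empty cell: fall back to the globals dict

    print load_static(_empty, {'len': len}, 'len')    # falls back to the global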

Interestingly, a side effect of making names static is that they also 
become private and untouchable from outside the module.

Hm.  Did I miss something, or did we just solve builtin lookup 
optimization?  The only problem I see is that currently you can stick a new 
version of 'len()' into a module from outside it, shadowing the 
builtin.  Under this scheme (of making all read-only names in a module 
become closure variables), such an assignment would change the globals, but 
have no effect on the module's behavior, which would be tied to the static 
definitions created at import time.

>--
>Marc-Andre Lemburg
>eGenix.com
>
>Professional Python Services directly from the Source  (#1, Jun 23 2006)
> >>> Python/Zope Consulting and Support ...        http://www.egenix.com/
> >>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
> >>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
>________________________________________________________________________
>
>::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! ::::
>_______________________________________________
>Python-Dev mailing list
>Python-Dev at python.org
>http://mail.python.org/mailman/listinfo/python-dev
>Unsubscribe: 
>http://mail.python.org/mailman/options/python-dev/pje%40telecommunity.com


From guido at python.org  Fri Jun 23 20:37:33 2006
From: guido at python.org (Guido van Rossum)
Date: Fri, 23 Jun 2006 11:37:33 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <eaaf21dc0606231102p25b016e2r97d6a5dd3d19f90e@mail.gmail.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<5.1.1.6.0.20060621163853.0392d168@sparrow.telecommunity.com>
	<449A7A48.5060404@egenix.com>
	<5.1.1.6.0.20060622092032.0338e2f0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622125053.02f3d308@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622135620.01ea9bf8@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622150243.03b0bd70@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622160223.03e35500@sparrow.telecommunity.com>
	<ca471dc20606221712x7634c308m673bd8c6e186bb37@mail.gmail.com>
	<eaaf21dc0606231102p25b016e2r97d6a5dd3d19f90e@mail.gmail.com>
Message-ID: <ca471dc20606231137j440ae028p85ea4407363d1bc2@mail.gmail.com>

On 6/23/06, Eric Sumner <kd5bjo at gmail.com> wrote:
> On 6/22/06, Guido van Rossum <guido at python.org> wrote:
> > (3) A switch is implemented using a dict which is precomputed at the
> > same time its static expressions are precomputed. The switch
> > expression must be hashable. Overlap between different cases will
> > raise an exception at precomputation time.
>
> How does this interact with __contains__, __len__, and __iter__ for
> the 'case in S' statement?  Would it work with a class that only
> implements __contains__, such as a continuous range class?

No; in order to make it possible to use a single dict lookup for
dispatch, the set members are expanded into the dict key. If you have
a large contiguous range, you'll be better off (sometimes *much*
better) doing an explicit if/elif check before entering the switch.

Let me sketch a prototype for the code that builds the dict given the cases:

def build_switch(cases, globals):
  """
  Args:
    cases: [(op, expr, offset), ...]
      # op in ('==', 'in')
      # expr is a string giving an expression
      # offset is an integer offset where to jump for this case
    globals: dict used as a namespace
  """
  dispatch = {}
  for op, expr, offset in cases:
    value = eval(expr, globals)
    switch op:
      case '==':
        if value in dispatch:
          raise RuntimeError("duplicate switch case %r == %r" % (expr, value))
        dispatch[value] = offset
      case 'in':
        for val in value:
          if val in dispatch:
            raise RuntimeError("duplicate switch case %r contains %r"
                               % (expr, val))
          dispatch[val] = offset
  return dispatch

Of course, the real implementation would not use eval() or represent
the expressions as strings or represent all the cases as a list; the
compiler would probably generate byte code corresponding to the body
of either of the above cases for each case in the switch being
compiled. The dispatch dicts would be passed into the function object
constructor somehow. Lots of details for whoever wants to implement
this. :-)
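
(For readers without the proposed syntax: the same sketch runs today if the
inner switch is spelled as if/elif, nothing else changes. A runnable
translation, with made-up sample cases:)

  def build_switch(cases, globals):
    dispatch = {}
    for op, expr, offset in cases:
      value = eval(expr, globals)
      if op == '==':
        if value in dispatch:
          raise RuntimeError("duplicate switch case %r == %r" % (expr, value))
        dispatch[value] = offset
      elif op == 'in':
        for val in value:
          if val in dispatch:
            raise RuntimeError("duplicate switch case %r contains %r"
                               % (expr, val))
          dispatch[val] = offset
    return dispatch

  print build_switch([('==', '"red"', 0), ('in', '("green", "blue")', 1)], {})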

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Fri Jun 23 20:55:06 2006
From: guido at python.org (Guido van Rossum)
Date: Fri, 23 Jun 2006 11:55:06 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <e7g74s$sdk$1@sea.gmane.org>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<5.1.1.6.0.20060621163853.0392d168@sparrow.telecommunity.com>
	<449A7A48.5060404@egenix.com>
	<5.1.1.6.0.20060622092032.0338e2f0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622125053.02f3d308@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622135620.01ea9bf8@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622150243.03b0bd70@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622160223.03e35500@sparrow.telecommunity.com>
	<ca471dc20606221712x7634c308m673bd8c6e186bb37@mail.gmail.com>
	<e7g74s$sdk$1@sea.gmane.org>
Message-ID: <ca471dc20606231155n185e9f27vb4d10603bb39f2c@mail.gmail.com>

On 6/23/06, Fredrik Lundh <fredrik at pythonware.com> wrote:
> Guido van Rossum wrote:
> > (3) A switch is implemented using a dict which is precomputed at the
> > same time its static expressions are precomputed. The switch
> > expression must be hashable. Overlap between different cases will
> > raise an exception at precomputation time.
>
> +0 on switch/case, but -0.5 on a "in terms of implementation" rather
> than "in terms of existing language constructs" approach.
>
> as I mentioned before, I'd prefer if the switch/case/case-in/else was
> defined in terms of a corresponding if/elif/else construct (but where
> the controlling expression is only evaluated once).
>
> after all, Python's a dynamic language, and I'm not convinced that I
> would never want to use dynamically evaluated case values.  just map
>
>      switch EXPR:
>      case E1:
>          ...
>      case in E2:
>          ...
>      else:
>          ...
>
> to
>
>      VAR = EXPR
>      if VAR == E1:
>          ...
>      elif VAR in E2:
>          ...
>      else:
>          ...
>
> where VAR is a temporary variable, and case and case-in clauses can be
> freely mixed, and leave the rest to the code generator.  (we could even
> allow "switch EXPR [as VAR]" to match a certain other sugar construct).

This used to be my position. I switched after considering the
alternatives for what should happen if either the switch expression or
one or more of the case expressions is unhashable. (Not to mention if
one of them has a broken __hash__ method.) Unless all values are
literals (or at least compile-time constant expressions involving only
literals) the compiler can't know whether any particular switch could
involve non-hashable values, and it would have to write code that
catches exceptions from hashing and falls back to an if/elif chain
implementation (unless you don't care at all about speedups). I don't
think that static would help enough; static doesn't promise that the
value is hashable.
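
A tiny runnable illustration (made-up values) of why hashability matters for
the dict-based implementation but not for an if/elif chain:

    dispatch = {1: 'one', (2, 3): 'pair'}   # ints and tuples hash fine
    try:
        dispatch[[4, 5]] = 'list'           # a list as a case value...
    except TypeError, e:
        print "cannot precompute:", e       # ...cannot go into the dispatch dict

    x = [4, 5]
    if x == [4, 5]:                         # equality tests have no such restriction
        print "an if/elif chain still works"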

> I'm also a K&R guy, so switch/case/case-in/else should all have the same
> indent.  anything else is just sloppy design.

I'm on the fence about the exact syntax. I see four alternatives: we
can use case or not, and we can indent it or not. The only weird thing
about not indenting the cases is that python-mode.el and all other
IDEs would have to be taught that the line following switch ...: should
not be indented.

But all things considered I'm probably most in favor of unindented
cases using the case keyword; working at Google has taught me that
indentation levels are a precious resource, and I'm slightly
uncomfortable with not having an explicit case keyword in the light of
coding errors involving typos or misindents.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Fri Jun 23 21:05:42 2006
From: guido at python.org (Guido van Rossum)
Date: Fri, 23 Jun 2006 12:05:42 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <20060623105105.1DF4.JCARLSON@uci.edu>
References: <20060623085714.1DEE.JCARLSON@uci.edu>
	<ca471dc20606231017p78b9f7cfvdd604a95c8d0a702@mail.gmail.com>
	<20060623105105.1DF4.JCARLSON@uci.edu>
Message-ID: <ca471dc20606231205x781e313eo65d6a74c151481b@mail.gmail.com>

On 6/23/06, Josiah Carlson <jcarlson at uci.edu> wrote:
> You later stated that decorators were the wrong way of handling it. I
> believe that the...
>     static <name> = <expression>
> ...would require too many changes to what some regular python users have
> come to expect from at least module namespaces.  I have nothing
> constructive to say about the function local case.

It looks like "static NAME = EXPR" is still pretty controversial.
"NAME = static EXPR" seems to be getting universal +1s OTOH.

> Allowing things like 'value < static (math.pi / 2)' brings up the
> question of where the calculated value of (math.pi / 2) will be stored.
> Presumably it would be stored in a function or module const table, and
> that is fine.

A new category of data stored on the function object, computed at
function def time. It would be an array and there would be a new
opcode to load values in this array in O(1) time.

> But what does the operation:
>     <name> = static <expression>
> ...do?  In a function namespace, do we calculate expression, assign it
> to the <name> local on function definition and call it good?

That would be impossible; the local namespace doesn't exist when the
function object is created (except for the cells used to reference
variables in outer function scopes).

> Or do we
> load the stored evaluated <expression> each pass through during runtime,
> making it effectively equivalent to:
>     <name> = <literal>
> I hope it's the latter (assign it to the local from a const table at the
> point of the '<name> = static ...' line).

Yes, the latter  should be good enough.

[On Raymond's optimizing decorator]
> You make a good point.  It really is only usable in particular CPython
> versions at any one time, though I am generally of a different opinion:
> if for some desired operation X you can get identical functionality
> and/or speed improvements during runtime without additional syntax, and
> it is easy to use, then there is no reason to change syntax.

The problem with hacks like that decorator is that if it misbehaves
(e.g. you have a global that sometimes is reassigned) you end up
debugging really hairy code. The semantics aren't 100% clear.

I'm all for telling people "you can do that yourself" or even "here is
a standard library module that solves your problem". But the solution
needs to satisfy a certain cleanliness standard.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From edcjones at comcast.net  Fri Jun 23 22:16:44 2006
From: edcjones at comcast.net (Edward C. Jones)
Date: Fri, 23 Jun 2006 16:16:44 -0400
Subject: [Python-Dev] Switch statement
In-Reply-To: <mailman.27711.1151087287.27774.python-dev@python.org>
References: <mailman.27711.1151087287.27774.python-dev@python.org>
Message-ID: <449C4C2C.60006@comcast.net>

Python is a beautiful simple language with a rich standard library. 
Python has done fine without a switch statement up to now. Guido left it 
out of the original language for some reason (my guess is simplicity). 
Why is it needed now? What would be added next: do while or goto? The 
urge to add syntax should be resisted unless there is a high payoff 
(such as yield).

There are much better ways for the developers to spend their time and 
energy (refactoring os comes to mind).

Please keep Python simple.

-1 on the switch statement.

From martin at v.loewis.de  Fri Jun 23 22:20:15 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Fri, 23 Jun 2006 22:20:15 +0200
Subject: [Python-Dev] ImportWarning flood
In-Reply-To: <20060622082615.73007.qmail@web31509.mail.mud.yahoo.com>
References: <20060622082615.73007.qmail@web31509.mail.mud.yahoo.com>
Message-ID: <449C4CFF.4070808@v.loewis.de>

Ralf W. Grosse-Kunstleve wrote:
>>> Is there a way to set the warning options via an environment variable?
>> This is off-topic for python-dev,
> 
> What is the channel I should use? (I am testing a beta 1.)

The specific question was

"Is there a way to set the warning options via an environment variable?"

This has nothing to do with beta1; the warnings module was introduced
many releases ago, along with all the mechanics to disable warnings.

>> but: why don't you switch off the warnings
>> in the code?
> 
> We support installation from sources with the native Python if available. Any
> Python >= 2.2.1 works. It would be frustrating if we had to give up on this
> just because of a warning designed for newcomers.

I guess you misunderstood. I propose you put warnings.simplefilter()
into your code. The warnings module was introduced before 2.2.1 IIRC, so this
should work on all releases you want to support (but have no effect on
installations where the warning isn't generated).

Regards,
Martin

From guido at python.org  Fri Jun 23 22:21:16 2006
From: guido at python.org (Guido van Rossum)
Date: Fri, 23 Jun 2006 13:21:16 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <5.1.1.6.0.20060623141300.027a71a0@sparrow.telecommunity.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<5.1.1.6.0.20060621163853.0392d168@sparrow.telecommunity.com>
	<449A7A48.5060404@egenix.com>
	<5.1.1.6.0.20060622092032.0338e2f0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622125053.02f3d308@sparrow.telecommunity.com>
	<ca471dc20606221044g6701d2c9wd155ab003753249@mail.gmail.com>
	<449AE75A.7050302@egenix.com> <e7eqgk$qsq$2@sea.gmane.org>
	<449C2A07.1030506@egenix.com>
	<5.1.1.6.0.20060623141300.027a71a0@sparrow.telecommunity.com>
Message-ID: <ca471dc20606231321k2e9d4955u390046ace98c7fab@mail.gmail.com>

On 6/23/06, Phillip J. Eby <pje at telecommunity.com> wrote:
> Hm.  Did I miss something, or did we just solve builtin lookup
> optimization?  The only problem I see is that currently you can stick a new
> version of 'len()' into a module from outside it, shadowing the
> builtin.  Under this scheme (of making all read-only names in a module
> become closure variables), such an assignment would change the globals, but
> have no effect on the module's behavior, which would be tied to the static
> definitions created at import time.

Or we could arrange for such assignments to be dynamically illegal. We
could have some provision whereby any name that's known to the
compiler to be a built-in, and for which the compiler can't see an
explicit assignment, is implicitly made static. This would make it a
run-time error if "import *" were to redefine such a name. The module
object would have to know which names are static and disallow
assignments to these. It would also have to export __dict__ as a proxy
that disallows such assignments. I think it can be made to work.
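
A very rough sketch of the "disallow rebinding" idea, done with a wrapper
object rather than a change to the real module type (entirely hypothetical):

    class FrozenNames(object):
        def __init__(self, module, frozen):
            self.__dict__['_module'] = module
            self.__dict__['_frozen'] = frozen
        def __getattr__(self, name):
            return getattr(self._module, name)
        def __setattr__(self, name, value):
            if name in self._frozen:
                raise TypeError("cannot rebind static name %r" % name)
            setattr(self._module, name, value)

    import string
    s = FrozenNames(string, frozen=set(['ascii_letters']))
    s.other = 1                  # ordinary names remain assignable
    # s.ascii_letters = 'abc'    # would raise TypeError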

I do think this would require static names as well as static
expressions. This is definitely still in the brainstorm phase!

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From jimjjewett at gmail.com  Fri Jun 23 22:32:48 2006
From: jimjjewett at gmail.com (Jim Jewett)
Date: Fri, 23 Jun 2006 16:32:48 -0400
Subject: [Python-Dev] Switch statement
Message-ID: <fb6fbf560606231332o6439198csbddb3e74eeb46bb6@mail.gmail.com>

In http://mail.python.org/pipermail/python-dev/2006-June/066399.html, PJE wrote:
>> Python prefers to evaluate expressions in the order that they
>> appear in source code, ... "first-time use" preserves that
>> property; "function definition time" does not.

Guido wrote:
> But first-time has the very big disadvantage IMO that there's no
> safeguard to warn you that the value is different on a subsequent
> execution -- you just get the old value without warning.

That is true either way, and is already true with computed default
arguments.  The only difference is that your mental model has even
longer to become inconsistent.  (The time between definition and first
use.)

First time use also lets you use a constant (such as a dotted name
from another module) that may not yet be defined when the function is
defined, but will be defined before the function is used.

-jJ

From martin at v.loewis.de  Fri Jun 23 22:38:47 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Fri, 23 Jun 2006 22:38:47 +0200
Subject: [Python-Dev] Moving the ctypes repository to python.org
In-Reply-To: <e7h24t$t7c$1@sea.gmane.org>
References: <e7ge79$kg8$1@sea.gmane.org> <20060623145602.GB10250@niemeyer.net>
	<e7h24t$t7c$1@sea.gmane.org>
Message-ID: <449C5157.5050004@v.loewis.de>

Thomas Heller wrote:
> What I did was at a certain time develop in the 'branch_1_0' branch, leaving
> HEAD for experimental work.  Later I decided that this was wrong, cvs removed all
> files in HEAD, and added them back from a branch_1_0 checkout.  Maybe doing
> this was another bad idea, as the trunk in the converted SVN repository
> only lists _ctypes.c revisions corresponding to the CVS version numbers
> 1.307 up to the current CVS head 1.340.  All the older versions from 1.1 up to
> 1.226.2.55 show up in the branch_1_0 branch that cvs2svn has created - although
> in CVS only the versions 1.226.0.2 up to 1.226.2.55 were ever in the branch_1_0
> branch.  Is that a bug in cvs2svn?

I doubt it. I'm pretty sure the subversion repository *does* contain all
the old files, in the old revisions. What happens if you do the
following on your converted subversion repository:

1. find out the oldest version of the files from svn log. Say this is
   version 1000.
2. Explicitly check out the trunk at version 950 (i.e. plenty of
   revisions before you copied the files from the branch).

I expect that this will give you the files just before you deleted
them; doing "svn log" on this sandbox will then give you all the old
log messages and versions.

If that is what happens, here is why: "svn log" will trace a file
through all its revisions, and across "svn copy"s, back to when it
was added into the repository. At that point, "svn log" stops.
An earlier file with the same name which got removed is considered
as a different file, so "svn log" does not show its revisions.

If you don't want that to happen, you could try to "outdate" (cvs -o)
the deletion and readdition in CVS, purging that piece of history.
I'm not entirely certain whether this should work.

If that isn't what happens, I'd be curious to look at the CVS and
SVN tarballs.

Regards,
Martin

From jimjjewett at gmail.com  Fri Jun 23 22:40:12 2006
From: jimjjewett at gmail.com (Jim Jewett)
Date: Fri, 23 Jun 2006 16:40:12 -0400
Subject: [Python-Dev] Switch statement
Message-ID: <fb6fbf560606231340n19b4a19co88cb889d53844e20@mail.gmail.com>

In http://mail.python.org/pipermail/python-dev/2006-June/066409.html,
Guido wrote

> (1) An expression of the form 'static' <atom> has the semantics of
> evaluating the atom at the same time as the nearest surrounding
> function definition.

(A)  I prefer a term other than 'static'.

Looking at various contexts in just C, C++, and Java, I have seen
static used to imply (at least) each of private, global, singleton,
class-level, and constant.

This suggested usage sounds more like a cross between "auxiliary
variable in a Lisp lambda form" and "compile-time defined constant".

(B)  I would prefer semantics closer to Java's final variables:  by
declaring something final, you would be promising that the next
expression can be treated as though it were a literal constant.
Python will evaluate it at least once after the "final" keyword but
before it gets used; anything more than that should be up to the
implementation.

The only advantage I see of specifying the time more tightly is that
people could use things that really aren't constant, iff they get the
timing right.

In http://mail.python.org/pipermail/python-dev/2006-June/066432.html,
Steven Bethard posted a fix for the "I-didn't-want-a-closure" problem
that uses this -- but realistically, people would still be burned by
unexpected closures just as often as they are today; the only benefit
is that the correct workaround is cleaner.

First time use has more compelling use cases, but I'm not sure they're
compelling enough.

(C)  Yes, I realize that you prefer to freeze only objects (the
results of expressions), and weren't sure about the form which also
froze the name.  But realistically, if a "final" name is rebound, that
is probably an error worth flagging.  I do realize that this gets into
a wider issue about how to seal a namespace.

> If there is no surrounding function definition,
> 'static' is a no-op and the expression is evaluated every time.

uhm ... OK ... so why even allow it at all?  Just for consistency with
the implied static of a case statement, even though it won't mean the
same thing?

> [Alternative 1: this is an error]

OK, but ...

Things like re.DOTALL should also be final; things like
urlparse.uses_relative should not.  It seems a shame to spend a
keyword saying "treat this as constant" and still not be able to do so
with module-level globals.

> [Alternative 2: it is evaluated before the module is entered;
> this would imply it can not involve any imported names but it can
> involve builtins]

And parsing must take at least one more pass, and static still better
not appear inside an if statement, and ...

> [Alternative 3: precomputed the first time the switch is entered]

OK, though it will be anti-efficient compared to bailing out when you
hit a match.

Alternative 4:  computed at some point after discovering it is final,
but before using it.  For case expressions, this would be after
starting to compute the switch dictionary, but before executing
anything in the suite of this or a later alternative.

> (2) All case expressions in a switch have an implied 'static'.

Good.  But again, I would prefer that the names also be frozen, so
that people won't expect that they can change the clauses; using a
variable in a clause should be fine, but rebinding that name later
(within the same persistent scope) is at best misleading.

> (3) A switch is implemented using a dict which is precomputed at the
> same time its static expressions are precomputed. The switch
> expression must be hashable. Overlap between different cases will
> raise an exception at precomputation time.

Again, I'm not sure it makes sense to specify the time.  Any
specification will cause the following to be well-defined, but someone
will be surprised at any particular result.  My best guess is that
your proposal would catch ("A1", "B1", "C1", "D2")

    a="A1"
    b="B1"
    c="C1"
    d="D1"
    def f(v):
        if sys.version_info > (2, 5, 0, "", 0):
            a="A2"
        else:
            a="A3"
        b = static "B2"
        c = "C2"
        static d = "D2"
        switch v:
        case in (a, b, c, d): ...

I'm not convinced that we should forbid building the dictionary as
needed, so that it may not contain the last several cases until it
gets an input that doesn't match earlier cases.  (Though I do see the
argument for raising an Exception as early as possible if there are
conflicts.)

-jJ

From guido at python.org  Fri Jun 23 22:47:18 2006
From: guido at python.org (Guido van Rossum)
Date: Fri, 23 Jun 2006 13:47:18 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <fb6fbf560606231332o6439198csbddb3e74eeb46bb6@mail.gmail.com>
References: <fb6fbf560606231332o6439198csbddb3e74eeb46bb6@mail.gmail.com>
Message-ID: <ca471dc20606231347s777df21drfb2dc161c3f6ed81@mail.gmail.com>

On 6/23/06, Jim Jewett <jimjjewett at gmail.com> wrote:
> In http://mail.python.org/pipermail/python-dev/2006-June/066399.html, PJE wrote:
> >> Python prefers to evaluate expressions in the order that they
> >> appear in source code, ... "first-time use" preserves that
> >> property; "function definition time" does not.
>
> Guido wrote:
> > But first-time has the very big disadvantage IMO that there's no
> > safeguard to warn you that the value is different on a subsequent
> > execution -- you just get the old value without warning.
>
> That is true either way, and is already true with computed default
> arguments.  The only difference is that your mental model has even
> longer to become inconsistent.  (The time between definition and first
> use.)
>
> First time use also lets you use a constant (such as a dotted name
> from another module) that may not yet be defined when the function is
> defined, but will be defined before the function is used.

I should probably just pronounce on this; I'm not going to change my
mind, so def-time-freeze it is (if we do this at all). Ditto for
static inside a function (if we do that at all). Static and switch
outside a function are still somewhat open; I'm currently leaning
towards making static expressions outside a function illegal and limiting
switches outside a function to compile-time-constant expressions.

Here are a few examples showing my objections against first-use.

def foo(c):
  def bar(x):
    switch x:
    case c: print 42
    else: print 0
  return bar

p = foo(1)
p(1)  # prints 42
q = foo(2)
q(2)  # does this print 42 or 0?

I think q(2) should print 42; otherwise it's not clear what object
should be used to hold the frozen switch dict; it can't be the code
object since code objects need to be immutable and cannot have any
variable state.

But then the def time enters into it anyway...

Another example is this:

def foo(c, x):
  switch x:
  case c: print 42
  else: print 0

Should this be allowed? The first-use rule has no strong motivation to
forbid it, since *knowing* the first-use rule it's reasonable to expect
that *any* case expression will just be evaluated in the local scope
at the first use of the switch. But it's just begging for confusion if
the reader isn't clued in to the first-use rule. The def-time rule
simply forbids this; any switch you're likely to write with the
def-time rule is almost certain to use only global and imported
variables that are constants in the user's mind.

With the def-time rule, you'd have to work a lot harder to construct
an example that works differently than the casual reader would expect.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Fri Jun 23 22:48:47 2006
From: guido at python.org (Guido van Rossum)
Date: Fri, 23 Jun 2006 13:48:47 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <fb6fbf560606231340n19b4a19co88cb889d53844e20@mail.gmail.com>
References: <fb6fbf560606231340n19b4a19co88cb889d53844e20@mail.gmail.com>
Message-ID: <ca471dc20606231348k3899bf65w59c2adc848da2b44@mail.gmail.com>

This post is too long for me to respond right now. I'm inviting others
to respond. I've got a feeling you're coming late to this discussion
and we're going around in circles.

--Guido

On 6/23/06, Jim Jewett <jimjjewett at gmail.com> wrote:
> In http://mail.python.org/pipermail/python-dev/2006-June/066409.html,
> Guido wrote
>
> > (1) An expression of the form 'static' <atom> has the semantics of
> > evaluating the atom at the same time as the nearest surrounding
> > function definition.
>
> (A)  I prefer a term other than 'static'.
>
> Looking at various contexts in just C, C++, and Java, I have seen
> static used to imply (at least) each of private, global, singleton,
> class-level, and constant.
>
> This suggested usage sounds more like a cross between "auxiliary
> variable in a Lisp lambda form" and "compile-time defined constant".
>
> (B)  I would prefer semantics closer to Java's final variables:  by
> declaring something final, you would be promising that the next
> expression can be treated as though it were a literal constant.
> Python will evaluate it at least once after the "final" keyword but
> before it gets used; anything more than that should be up to the
> implementation.
>
> The only advantage I see of specifying the time more tightly is that
> people could use things that really aren't constant, iff they get the
> timing right.
>
> In http://mail.python.org/pipermail/python-dev/2006-June/066432.html,
> Steven Bethard posted a fix for the "I-didn't-want-a-closure" problem
> that uses this -- but realistically, people would still be burned by
> unexpected closures just as often as they are today; the only benefit
> is that the correct workaround is cleaner.
>
> First time use has more compelling use cases, but I'm not sure they're
> compelling enough.
>
> (C)  Yes, I realize that you prefer to freeze only objects (the
> results of expressions), and weren't sure about the form which also
> froze the name.  But realistically, if a "final" name is rebound, that
> is probably an error worth flagging.  I do realize that this gets into
> a wider issue about how to seal a namespace.
>
> > If there is no surrounding function definition,
> > 'static' is a no-op and the expression is evaluated every time.
>
> uhm ... OK ... so why even allow it at all?  Just for consistency with
> the implied static of a case statement, even though it won't mean the
> same thing?
>
> > [Alternative 1: this is an error]
>
> OK, but ...
>
> Things like re.DOTALL should also be final; things like
> urlparse.uses_relative should not.  It seems a shame to spend a
> keyword saying "treat this as constant" and still not be able to do so
> with module-level globals.
>
> > [Alternative 2: it is evaluated before the module is entered;
> > this would imply it can not involve any imported names but it can
> > involve builtins]
>
> And parsing must take at least one more pass, and static still better
> not appear inside an if statement, and ...
>
> > [Alternative 3: precomputed the first time the switch is entered]
>
> OK, though it will be anti-efficient compared to bailing out when you
> hit a match.
>
> Alternative 4:  computed at some point after discovering it is final,
> but before using it.  For case expressions, this would be after
> starting to compute the switch dictionary, but before executing
> anything in the suite of this or a later alternative.
>
> > (2) All case expressions in a switch have an implied 'static'.
>
> Good.  But again, I would prefer that the names also be frozen, so
> that people won't expect that they can change the clauses; using a
> variable in a clause should be fine, but rebinding that name later
> (within the same persistent scope) is at best misleading.
>
> > (3) A switch is implemented using a dict which is precomputed at the
> > same time its static expressions are precomputed. The switch
> > expression must be hashable. Overlap between different cases will
> > raise an exception at precomputation time.
>
> Again, I'm not sure it makes sense to specify the time.  Any
> specification will cause the following to be well-defined, but someone
> will be surprised at any particular result.  My best guess is that
> your proposal would catch ("A1", "B1", "C1", "D2")
>
>     a="A1"
>     b="B1"
>     c="C1"
>     d="D1"
>     def f(v):
>         if sys.version_info > (2, 5, 0, "", 0):
>             a="A2"
>         else:
>             a="A3"
>         b = static "B2"
>         c = "C2"
>         static d = "D2"
>         switch v:
>         case in (a, b, c, d): ...
>
> I'm not convinced that we should forbid building the dictionary as
> needed, so that it may not contain the last several cases until it
> gets an input that doesn't match earlier cases.  (Though I do see the
> argument for raising an Exception as early as possible if there are
> conflicts.)
>
> -jJ
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org
>


-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From bos at serpentine.com  Fri Jun 23 22:38:35 2006
From: bos at serpentine.com (Bryan O'Sullivan)
Date: Fri, 23 Jun 2006 13:38:35 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <449C4C2C.60006@comcast.net>
References: <mailman.27711.1151087287.27774.python-dev@python.org>
	<449C4C2C.60006@comcast.net>
Message-ID: <1151095115.2685.8.camel@localhost.localdomain>

On Fri, 2006-06-23 at 16:16 -0400, Edward C. Jones wrote:

> Please keep Python simple.

+1 on this sentiment.

I use switch statements all the time in C, but I'd rather not see them
in Python - even though I'd use them if they were there! - purely to
keep the cognitive overhead low.

	<b


From jimjjewett at gmail.com  Fri Jun 23 23:20:05 2006
From: jimjjewett at gmail.com (Jim Jewett)
Date: Fri, 23 Jun 2006 17:20:05 -0400
Subject: [Python-Dev] Switch statement
In-Reply-To: <ca471dc20606231347s777df21drfb2dc161c3f6ed81@mail.gmail.com>
References: <fb6fbf560606231332o6439198csbddb3e74eeb46bb6@mail.gmail.com>
	<ca471dc20606231347s777df21drfb2dc161c3f6ed81@mail.gmail.com>
Message-ID: <fb6fbf560606231420i1943d6dey7f6077637eac1d4@mail.gmail.com>

On 6/23/06, Guido van Rossum <guido at python.org> wrote:
> Here are a few examples showing my objections against first-use.

[Problem with nested scopes; today this usually shows up as (invalid)
bug reports about lambda, in which failure to bind a "default"
variable "to itself" causes it to take on the value at the end of the
loop, instead of the value of the index when defined.]

[Problem with using a parameter as a case selector -- at least these
aren't available at definition time.]

> With the def-time rule, you'd have to work a lot harder to construct
> an example that works differently than the casual reader would expect.

Anything which uses the same names in the local scope, particularly if
those names are themselves marked final (or static).

a=1
b=2
c=3
def f(v):
    a=4          # This gets ignored?
    final b=5    # But what about this?  It is local, but a
                 # constant known in advance
    switch v:
    case in (a, b, c): ...
    final c=6    # Also a constant, but textually after the case keyword.

-jJ

From kd5bjo at gmail.com  Fri Jun 23 23:36:15 2006
From: kd5bjo at gmail.com (Eric Sumner)
Date: Fri, 23 Jun 2006 16:36:15 -0500
Subject: [Python-Dev] Switch statement
In-Reply-To: <ca471dc20606231137j440ae028p85ea4407363d1bc2@mail.gmail.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<449A7A48.5060404@egenix.com>
	<5.1.1.6.0.20060622092032.0338e2f0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622125053.02f3d308@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622135620.01ea9bf8@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622150243.03b0bd70@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622160223.03e35500@sparrow.telecommunity.com>
	<ca471dc20606221712x7634c308m673bd8c6e186bb37@mail.gmail.com>
	<eaaf21dc0606231102p25b016e2r97d6a5dd3d19f90e@mail.gmail.com>
	<ca471dc20606231137j440ae028p85ea4407363d1bc2@mail.gmail.com>
Message-ID: <eaaf21dc0606231436n35b71a55y25a15014d428e8f4@mail.gmail.com>

On 6/23/06, Guido van Rossum <guido at python.org> wrote:
> No; in order to make it possible to use a single dict lookup for
> dispatch, the set members are expanded into the dict key. If you have
> a large contiguous range, you'll be better off (sometimes *much*
> better) doing an explicit if/elif check before entering the switch.

In that case, I would argue that the proposed syntax is misleading.
Regardless of how it is implemented, a switch statement is
conceptually a chain of if/elif statements.  As such, the 'in'
keyword, if it is allowed at all, should behave like it does in if
statements, rather than it does in loops.  If, for implementation
reasons, you want to ensure that all of the sets are enumerable, I
would recommend a syntax like this:

   "case" ["*"] expression ("," ["*"] expression)* ":" suite

This is consistent with parameter lists, which emphasizes that the
sequences are being enumerated instead of simply tested against.

  -- Eric Sumner

From exarkun at divmod.com  Fri Jun 23 23:46:58 2006
From: exarkun at divmod.com (Jean-Paul Calderone)
Date: Fri, 23 Jun 2006 17:46:58 -0400
Subject: [Python-Dev] Switch statement
In-Reply-To: <1151095115.2685.8.camel@localhost.localdomain>
Message-ID: <20060623214658.29014.535179457.divmod.quotient.8263@ohm>

On Fri, 23 Jun 2006 13:38:35 -0700, Bryan O'Sullivan <bos at serpentine.com> wrote:
>On Fri, 2006-06-23 at 16:16 -0400, Edward C. Jones wrote:
>
>> Please keep Python simple.
>
>+1 on this sentiment.
>

I agree.

Jean-Paul

From Scott.Daniels at Acm.Org  Sat Jun 24 02:14:25 2006
From: Scott.Daniels at Acm.Org (Scott David Daniels)
Date: Fri, 23 Jun 2006 17:14:25 -0700
Subject: [Python-Dev] PyRange_New() alternative?
In-Reply-To: <20060623053820.45516.qmail@web31504.mail.mud.yahoo.com>
References: <A966EC37-FD5D-4E81-AAB3-43284593DE4A@redivi.com>
	<20060623053820.45516.qmail@web31504.mail.mud.yahoo.com>
Message-ID: <e7i03m$sjs$1@sea.gmane.org>

Ralf W. Grosse-Kunstleve wrote:
> Thanks! This does the trick for me:
> 
> #if PY_VERSION_HEX >= 0x02030000
>         PyObject_CallFunction(
>           (PyObject*) &PyRange_Type, "lll", start, start+len*step, step)
> #else
>         PyRange_New(start, len, step, 1)
> #endif
> 
> I am compiling the code above with a C++ compiler (in the context of
> Boost.Python). Newer g++ versions unfortunately produce a warning if -Wall is
> specified:
> 
> warning: dereferencing type-punned pointer will break strict-aliasing rules

I am not sure about your compiler, but if I remember the standard
correctly, the following code shouldn't complain:

    PyObject_CallFunction((PyObject*) (void *) &PyRange_Type,
                          "lll", start, start+len*step, step)

-- Scott David Daniels
Scott.Daniels at Acm.Org


From jcarlson at uci.edu  Sat Jun 24 02:21:47 2006
From: jcarlson at uci.edu (Josiah Carlson)
Date: Fri, 23 Jun 2006 17:21:47 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <eaaf21dc0606231436n35b71a55y25a15014d428e8f4@mail.gmail.com>
References: <ca471dc20606231137j440ae028p85ea4407363d1bc2@mail.gmail.com>
	<eaaf21dc0606231436n35b71a55y25a15014d428e8f4@mail.gmail.com>
Message-ID: <20060623170807.1DFA.JCARLSON@uci.edu>


"Eric Sumner" <kd5bjo at gmail.com> wrote:
> 
> On 6/23/06, Guido van Rossum <guido at python.org> wrote:
> > No; in order to make it possible to use a single dict lookup for
> > dispatch, the set members are expanded into the dict key. If you have
> > a large contiguous range, you'll be better off (sometimes *much*
> > better) doing an explicit if/elif check before entering the switch.
> 
> In that case, I would argue that the proposed syntax is misleading.
> Regardless of how it is implemented, a switch statement is
> conceptually a chain of if/elif statements.  As such, the 'in'
> keyword, if it is allowed at all, should behave like it does in if
> statements, rather than it does in loops.  If, for implementation
> reasons, you want to ensure that all of the sets are enumerable, I
> would recommend a syntax like this:
> 
>    "case" ["*"] expression ("," ["*"] expression)* ":" suite
> 
> This is consistent with parameter lists, which emphasizes that the
> sequences are being enumerated instead of simply tested against.

You apparently missed the post where Guido expressed that he believes
that one of the valid motivators for the switch statement and the
dict-based dispatching was for that of speed improvements.  He also
already stated that cases could essentially only be examples for which
Python does pre-computation and the storing of constants (he didn't use
those words, and there are caveats with regards to module.attr and
global 'constants', but that was the gist I got from it).

As such, because any explicit range object is neither dict-accessible as
the values in the range would be, nor is it generally precomputed (or
precomputable) as constants (like (1,2,3) is and 1+1 should be), your
particular use-case (range objects that may implement __contains__ fast,
but whose __iter__ returns a huge number of values if it were
implemented as such) is not covered under switch/case, and we would
likely point you back off to if/elif/else.
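
A small runnable illustration (numbers made up): enumerable sets expand into
dict keys, while a huge contiguous range is better tested up front than
expanded:

    dispatch = {}
    for val in (1, 2, 3):              # small, enumerable: fine to expand
        dispatch[val] = 'small'

    def classify(x):
        if 1000 <= x < 10**9:          # contiguous range: test it, don't expand it
            return 'big'
        return dispatch.get(x, 'other')

    print classify(2), classify(12345), classify(0)    # small big other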

This is a good thing, because if switch/case ends up functionally
identical to if/elif/else, then it has no purpose as a construct.  On
the other hand, because it is different from if/elif/else, and it is
different in such a way to make certain blocks of code (arguably) easier
to read or understand, (likely provably) faster, then it actually has a
purpose and use.


 - Josiah


From guido at python.org  Sat Jun 24 02:52:35 2006
From: guido at python.org (Guido van Rossum)
Date: Fri, 23 Jun 2006 17:52:35 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <20060623170807.1DFA.JCARLSON@uci.edu>
References: <ca471dc20606231137j440ae028p85ea4407363d1bc2@mail.gmail.com>
	<eaaf21dc0606231436n35b71a55y25a15014d428e8f4@mail.gmail.com>
	<20060623170807.1DFA.JCARLSON@uci.edu>
Message-ID: <ca471dc20606231752l3afea656j7a90219dc9b741c4@mail.gmail.com>

On 6/23/06, Josiah Carlson <jcarlson at uci.edu> wrote:
> This is a good thing, because if switch/case ends up functionally
> identical to if/elif/else, then it has no purpose as a construct.  On
> the other hand, because it is different from if/elif/else, and it is
> different in such a way to make certain blocks of code (arguably) easier
> to read or understand, (likely provably) faster, then it actually has a
> purpose and use.

Excellent formulation!

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From foom at fuhm.net  Sat Jun 24 03:22:52 2006
From: foom at fuhm.net (James Y Knight)
Date: Fri, 23 Jun 2006 21:22:52 -0400
Subject: [Python-Dev] Switch statement
In-Reply-To: <5.1.1.6.0.20060622150243.03b0bd70@sparrow.telecommunity.com>
References: <5.1.1.6.0.20060622135620.01ea9bf8@sparrow.telecommunity.com>
	<17547.19802.361151.705599@montanaro.dyndns.org>
	<449920A4.7040008@gmail.com>
	<5.1.1.6.0.20060621121821.03275ca8@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621130606.0373c788@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621133734.03179858@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621163853.0392d168@sparrow.telecommunity.com>
	<449A7A48.5060404@egenix.com>
	<5.1.1.6.0.20060622092032.0338e2f0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622125053.02f3d308@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622135620.01ea9bf8@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622150243.03b0bd70@sparrow.telecommunity.com>
Message-ID: <B30BCBFF-94DF-435F-9691-82C1F0D4F502@fuhm.net>


On Jun 22, 2006, at 3:24 PM, Phillip J. Eby wrote:

> Well, you can't "def" a dotted name, but I realize this isn't a  
> binding.

I have actually wanted to do that before. It would be nice if you  
could. :)

James

From kd5bjo at gmail.com  Sat Jun 24 04:08:10 2006
From: kd5bjo at gmail.com (Eric Sumner)
Date: Fri, 23 Jun 2006 21:08:10 -0500
Subject: [Python-Dev] Switch statement
In-Reply-To: <20060623170807.1DFA.JCARLSON@uci.edu>
References: <ca471dc20606231137j440ae028p85ea4407363d1bc2@mail.gmail.com>
	<eaaf21dc0606231436n35b71a55y25a15014d428e8f4@mail.gmail.com>
	<20060623170807.1DFA.JCARLSON@uci.edu>
Message-ID: <eaaf21dc0606231908y18ec926ble6c0670180b3830a@mail.gmail.com>

> > In that case, I would argue that the proposed syntax is misleading.
> > Regardless of how it is implemented, a switch statement is
> > conceptually a chain of if/elif statements.  As such, the 'in'
> > keyword, if it is allowed at all, should behave like it does in if
> > statements, rather than it does in loops.  If, for implementation
> > reasons, you want to ensure that all of the sets are enumerable, I
> > would recommend a syntax like this:
> >
> >    "case" ["*"] expression ("," ["*"] expression)* ":" suite
> >
> > This is consistent with parameter lists, which emphasizes that the
> > sequences are being enumerated instead of simply tested against.
>
> You apparently missed the post where Guido expressed that he believes
> that one of the valid motivators for the switch statement and the
> dict-based dispatching was speed improvement.  He also
> already stated that cases could essentially only be examples for which
> Python does pre-computation and the storing of constants (he didn't use
> those words, and there are caveats with regards to module.attr and
> global 'constants', but that was the gist I got from it).

I admit that I came into this discussion in the middle, and my initial
post was for informational (to me) purposes only.  I did not mean to
imply by that post that the proposal was flawed in any way, just to
verify that I properly understood the proposal.  I am sorry if I was
unclear about this.

> As such, because any explicit range object is neither dict-accessible as
> the values in the range would be, nor are they generally precomputed (or
> precomputable) as constants (like (1,2,3) is and 1+1 should be), your
> particular use-case (range objects that may implement __contains__ fast,
> but whose __iter__ returns a huge number of values if it were
> implemented as such) is not covered under switch/case, and we would
> likely point you back off to if/elif/else.

I concur.  I actually suspected as much prior to my original message
on this topic, but I wanted to make sure I was understanding things
correctly before attempting to make a suggestion.

> This is a good thing, because if switch/case ends up functionally
> identical to if/elif/else, then it has no purpose as a construct.  On
> the other hand, because it is different from if/elif/else, and it is
> different in such a way to make certain blocks of code (arguably) easier
> to read or understand, (likely provably) faster, then it actually has a
> purpose and use.

Again, I concur.  My point was not that the mechanics of the construct
were incorrect, but that the proposed syntax misrepresented its
function.  Again, I am sorry if I was unclear about this.

  -- Eric Sumner

From rasky at develer.com  Fri Jun 23 17:08:28 2006
From: rasky at develer.com (Giovanni Bajo)
Date: Fri, 23 Jun 2006 17:08:28 +0200
Subject: [Python-Dev] Moving the ctypes repository to python.org
References: <e7ge79$kg8$1@sea.gmane.org>
Message-ID: <00d401c696d6$e2572ed0$bf03030a@trilan>

Thomas Heller wrote:

> Is it possible to take the CVS repository files (they can be accessed
> with rsync), and import that, preserving the whole history, into SVN?

Yes:
http://cvs2svn.tigris.org/

You just need a maintainer of the Python SVN repository to load the dump
files this tool generates.
-- 
Giovanni Bajo


From ncoghlan at gmail.com  Sat Jun 24 05:09:53 2006
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sat, 24 Jun 2006 13:09:53 +1000
Subject: [Python-Dev] Numerical robustness, IEEE etc.
In-Reply-To: <e7gpgc$snm$1@sea.gmane.org>
References: <2mr71hjzpp.fsf@starship.python.net>	<E1FtXRB-00041c-Hg@draco.cus.cam.ac.uk>
	<e7gpgc$snm$1@sea.gmane.org>
Message-ID: <449CAD01.1010008@gmail.com>

Fredrik Lundh wrote:
> Nick Maclaren wrote:
> 
>> Unfortunately, that doesn't help, because it is not where the issues
>> are.  What I don't know is how much you know about numerical models,
>> IEEE 754 in particular, and C99.  You weren't active on the SC22WG14
>> reflector, but there were some lurkers.
> 
> SC22WG14?  is that some marketing academy?  not a very good one, obviously.

http://www.open-std.org/jtc1/sc22/wg14/

Looks like python-dev for C, only with extra servings of international 
bureaucracy and vendor self-interest to make life more complicated ;)

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------
             http://www.boredomandlaziness.org

From tjreedy at udel.edu  Sat Jun 24 05:50:44 2006
From: tjreedy at udel.edu (Terry Reedy)
Date: Fri, 23 Jun 2006 23:50:44 -0400
Subject: [Python-Dev] Numerical robustness, IEEE etc.
References: <2mr71hjzpp.fsf@starship.python.net>	<E1FtXRB-00041c-Hg@draco.cus.cam.ac.uk><e7gpgc$snm$1@sea.gmane.org>
	<449CAD01.1010008@gmail.com>
Message-ID: <e7icql$rmr$1@sea.gmane.org>


"Nick Coghlan" <ncoghlan at gmail.com> wrote in message 
news:449CAD01.1010008 at gmail.com...
>> SC22WG14?  is that some marketing academy?  not a very good one, 
>> obviously.
>
> http://www.open-std.org/jtc1/sc22/wg14/
>
> Looks like python-dev for C, only with extra servings of international
> bureaucracy and vendor self-interest to make life more complicated ;)

Of interest among their C-EPs is one for adding the equivalent of our 
decimal module
http://www.open-std.org/jtc1/sc22/wg14/www/projects#24732 




From ncoghlan at gmail.com  Sat Jun 24 06:29:34 2006
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sat, 24 Jun 2006 14:29:34 +1000
Subject: [Python-Dev] Switch statement
In-Reply-To: <ca471dc20606220856l2a1ed62fl637723636b222a39@mail.gmail.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>	
	<20060611010410.GA5723@21degrees.com.au>	
	<5.1.1.6.0.20060618235252.03a71680@sparrow.telecommunity.com>	
	<44988C6E.4080806@canterbury.ac.nz> <449920A4.7040008@gmail.com>	
	<5.1.1.6.0.20060621121821.03275ca8@sparrow.telecommunity.com>	
	<5.1.1.6.0.20060621130606.0373c788@sparrow.telecommunity.com>	
	<4499F8E1.2020103@acm.org> <449A76F6.6030606@gmail.com>
	<ca471dc20606220856l2a1ed62fl637723636b222a39@mail.gmail.com>
Message-ID: <449CBFAE.4070205@gmail.com>

Guido van Rossum wrote:
> On 6/22/06, Nick Coghlan <ncoghlan at gmail.com> wrote:
>> Talin wrote:
>> > I don't get what the problem is here. A switch constant should have
>> > exactly the behavior of a default value of a function parameter. We
>> > don't seem to have too many problems defining functions at the module
>> > level, do we?
>>
>> Because in function definitions, if you put them inside another 
>> function, the
>> defaults of the inner function get reevaluated every time the outer 
>> function
>> is run. Doing that for the switch statement would kinda defeat the whole
>> point. . .
> 
> Really? Then where would you store the dict? You can't store it on the
> code object because that's immutable. You can't store it on the
> function object (if you don't want it to be re-evaluated when the
> function is redefined) because a new function object is created by
> each redefinition. There needs to be *some* kind of object with a
> well-defined life cycle where to store the dict.
> 
> I'd say that we should just add a warning against switches in nested
> functions that are called only once per definition.

I wasn't very clear. . .

Talin noted that there's no ambiguity with the timing of the evaluation of 
default function arguments, regardless of whether the function definition is 
at module scope or inside another function - the default arguments are simply 
evaluated every time the function definition is executed. So he wondered why 
that simplicity didn't translate to the evaluation of switch cases.

With a switch statement, we want the cases evaluated when the *containing* def 
statement is executed, not every time the switch statement itself is executed. 
Which makes things far more complex :)
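
For example, with functions as they are today (outer/inner are just
illustrative names):

 >>> def outer(n):
 ...     def inner(x=n*n):   # default evaluated each time outer() runs
 ...         return x
 ...     return inner
 ...
 >>> outer(2)()
 4
 >>> outer(3)()
 9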

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------
             http://www.boredomandlaziness.org

From talin at acm.org  Sat Jun 24 06:36:47 2006
From: talin at acm.org (Talin)
Date: Fri, 23 Jun 2006 21:36:47 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <ca471dc20606221712x7634c308m673bd8c6e186bb37@mail.gmail.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>	<5.1.1.6.0.20060621130606.0373c788@sparrow.telecommunity.com>	<5.1.1.6.0.20060621133734.03179858@sparrow.telecommunity.com>	<5.1.1.6.0.20060621163853.0392d168@sparrow.telecommunity.com>	<449A7A48.5060404@egenix.com>	<5.1.1.6.0.20060622092032.0338e2f0@sparrow.telecommunity.com>	<5.1.1.6.0.20060622125053.02f3d308@sparrow.telecommunity.com>	<5.1.1.6.0.20060622135620.01ea9bf8@sparrow.telecommunity.com>	<5.1.1.6.0.20060622150243.03b0bd70@sparrow.telecommunity.com>	<5.1.1.6.0.20060622160223.03e35500@sparrow.telecommunity.com>
	<ca471dc20606221712x7634c308m673bd8c6e186bb37@mail.gmail.com>
Message-ID: <449CC15F.9070107@acm.org>

Guido van Rossum wrote:

> That sounds like a good solution all around. I hope that others can
> also find themselves in this.
> 
> (1) An expression of the form 'static' <atom> has the semantics of
> evaluating the atom at the same time as the nearest surrounding
> function definition. If there is no surrounding function definition,
> 'static' is a no-op and the expression is evaluated every time.
> [Alternative 1: this is an error] [Alternative 2: it is evaluated
> before the module is entered; this would imply it can not involve any
> imported names but it can involve builtins] [Alternative 3:
> precomputed the first time the switch is entered]

I'm thinking that outside of a function, 'static' just means that the 
expression is evaluated at compile-time, with whatever symbols the 
compiler has access to (including any previously-defined statics in that 
module). The result of the expression is then inserted into the module 
code just like any literal.

So for example:

    a = static len( "1234" )

compiles as:

    a = 4

...assuming that you can call 'len' at compile time.

The rationale here is that I'm trying to create an analogy between 
functions and modules, where the 'static' declaration has an analogous 
relationship to a module as it does to a function. Since a module is 
'defined' when its code is compiled, that would be when the evaluation 
occurs.

I'm tempted to propose a way for the compiler to import static 
definitions from outside the module ('static import'?) however I 
recognize that this would greatly increase the fragility of Python, 
since now you have the possibility that a module could be compiled with 
a set of numeric constants that are out of date with respect to some 
other module.

-- Talin

From tim.peters at gmail.com  Sat Jun 24 07:11:05 2006
From: tim.peters at gmail.com (Tim Peters)
Date: Sat, 24 Jun 2006 01:11:05 -0400
Subject: [Python-Dev] Numerical robustness, IEEE etc.
In-Reply-To: <e7gpgc$snm$1@sea.gmane.org>
References: <2mr71hjzpp.fsf@starship.python.net>
	<E1FtXRB-00041c-Hg@draco.cus.cam.ac.uk> <e7gpgc$snm$1@sea.gmane.org>
Message-ID: <1f7befae0606232211x2271215dsbde703e0fa8bc3c5@mail.gmail.com>

[/F]
> SC22WG14?  is that some marketing academy?  not a very good one, obviously.

That's because it's European ;-)  The ISO standards process has highly
visible layers of bureaucracy, and, in full, JTC1/SC22/WG14 is just
the Joint ISO/IEC Technical Committee 1's SubCommittee 22's Working
Group 14 .  JTC1 is in charge of information technology standards;
JTC1/SC22 "programming languages, their environments and system
software interfaces"; and JTC1/SC22/WG14 the C programming language.
POSIX lives right next door in TC/SC/WG space, at JTC1/SC22/WG15.

IOW, these are the friendly folks who define what C is.  In America,
it's usually just called "the ANSI C committee" instead, probably for
the same reason Americans prefer intuitive names like "mile" and
"gallon" over puffed up technical gibberish like "metre" and "litre"
;-)

From rwgk at yahoo.com  Sat Jun 24 07:28:46 2006
From: rwgk at yahoo.com (Ralf W. Grosse-Kunstleve)
Date: Fri, 23 Jun 2006 22:28:46 -0700 (PDT)
Subject: [Python-Dev] PyRange_New() alternative?
In-Reply-To: <e7i03m$sjs$1@sea.gmane.org>
Message-ID: <20060624052846.15341.qmail@web31515.mail.mud.yahoo.com>

--- Scott David Daniels <Scott.Daniels at Acm.Org> wrote:

> Ralf W. Grosse-Kunstleve wrote:
> > Thanks! This does the trick for me:
> > 
> > #if PY_VERSION_HEX >= 0x02030000
> >         PyObject_CallFunction(
> >           (PyObject*) &PyRange_Type, "lll", start, start+len*step, step)
> > #else
> >         PyRange_New(start, len, step, 1)
> > #endif
> > 
> > I am compiling the code above with a C++ compiler (in the context of
> > Boost.Python). Newer g++ versions unfortunately produce a warning if -Wall
> is
> > specified:
> > 
> > warning: dereferencing type-punned pointer will break strict-aliasing rules
> 
> I am not sure about your compiler, but if I remember the standard
> correctly, the following code shouldn't complain:
> 
>     PyObject_CallFunction((PyObject*) (void *) &PyRange_Type,
>                           "lll", start, start+len*step, step)

Thanks for the suggestion!

I just tried:

g++ (GCC) 3.2.3 20030502 (Red Hat Linux 3.2.3-20)
g++ (GCC) 3.4.2 20041017 (Red Hat 3.4.2-6.fc3)
g++ (GCC) 4.0.0 20050519 (Red Hat 4.0.0-8)

with -Wall -Wno-sign-compare. These compilers don't issue the "will break
strict-aliasing rules" warning with or without the intermediate (void *).
However, I also tried:

g++ (GCC) 4.1.0 20060304 (Red Hat 4.1.0-3)

which issues the warning without the (void *), but not with the (void *).

I am not an expert of the C/C++ language details, but the intermediate cast
seems to be a great local alternative to the global -fno-strict-aliasing flag.

From rwgk at yahoo.com  Sat Jun 24 08:58:46 2006
From: rwgk at yahoo.com (Ralf W. Grosse-Kunstleve)
Date: Fri, 23 Jun 2006 23:58:46 -0700 (PDT)
Subject: [Python-Dev] ImportWarning flood
In-Reply-To: <449C4CFF.4070808@v.loewis.de>
Message-ID: <20060624065846.74585.qmail@web31503.mail.mud.yahoo.com>

--- "Martin v. L???wis" <martin at v.loewis.de> wrote:

> The specific question was
> 
> "Is there a way to set the warning options via an environment variable?"
> 
> This has nothing to do with beta1; the warnings module was introduced
> many releases ago, along with all the mechanics to disable warnings.

Due to the new ImportWarning first introduced in 2.5b1 the question of
disabling warnings is becoming much more pressing (I am assuming that I am not
again the only person on the planet to have this problem).

> I guess you misunderstood.

Yes.

> I propose you put warnings.simplefilter()
> into your code. The warnings was introduced before 2.2.1 IIRC, so this
> should work on all releases you want to support (but have no effect on
> installations where the warning isn't generated).

Where would I put the warnings.simplefilter()? I have hundreds of scripts and
__init__.py files.

I just came across this situation (simplified):

  % cd boost/boost
  % python2.5
  >>> import math
  __main__:1: ImportWarning: Not importing directory 'math': missing __init__.py

This is because there is a subdirectory math in boost/boost, something that I
cannot change.
The PYTHONPATH is not set at all in this case.
I.e. I get the ImportWarning just because my current working directory happens
to contain a subdirectory which matches one of the Python modules in the
standard library. Isn't this going to cause widespread problems?

From martin at v.loewis.de  Sat Jun 24 09:29:03 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Sat, 24 Jun 2006 09:29:03 +0200
Subject: [Python-Dev] PyRange_New() alternative?
In-Reply-To: <e7i03m$sjs$1@sea.gmane.org>
References: <A966EC37-FD5D-4E81-AAB3-43284593DE4A@redivi.com>	<20060623053820.45516.qmail@web31504.mail.mud.yahoo.com>
	<e7i03m$sjs$1@sea.gmane.org>
Message-ID: <449CE9BF.9030408@v.loewis.de>

Scott David Daniels wrote:
> I am not sure about your compiler, but if I remember the standard
> correctly, the following code shouldn't complain:
> 
>     PyObject_CallFunction((PyObject*) (void *) &PyRange_Type,
>                           "lll", start, start+len*step, step)

You remember the standard incorrectly. Python's usage of casts has
undefined behaviour, and adding casts only makes the warning go away,
but does not make the problem go away.

Regards,
Martin

From martin at v.loewis.de  Sat Jun 24 09:32:16 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Sat, 24 Jun 2006 09:32:16 +0200
Subject: [Python-Dev] PyRange_New() alternative?
In-Reply-To: <20060624052846.15341.qmail@web31515.mail.mud.yahoo.com>
References: <20060624052846.15341.qmail@web31515.mail.mud.yahoo.com>
Message-ID: <449CEA80.1000408@v.loewis.de>

Ralf W. Grosse-Kunstleve wrote:
> I am not an expert of the C/C++ language details, but the intermediate cast
> seems to be a great local alternative to the global -fno-strict-aliasing flag.

Depends on what you want to achieve. If you just want to make the
warning go away, the cast works fine. If you want to avoid bad
code being generated, you better use the flag (alternatively,
you could fix Python to not rely on undefined behaviour (and no,
it's not easy to fix in Python, or else we would have fixed it
long ago)).

Regards,
Martin

From martin at v.loewis.de  Sat Jun 24 09:36:29 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Sat, 24 Jun 2006 09:36:29 +0200
Subject: [Python-Dev] ImportWarning flood
In-Reply-To: <20060624065846.74585.qmail@web31503.mail.mud.yahoo.com>
References: <20060624065846.74585.qmail@web31503.mail.mud.yahoo.com>
Message-ID: <449CEB7D.8050502@v.loewis.de>

Ralf W. Grosse-Kunstleve wrote:
>> This has nothing to do with beta1; the warnings module was introduced
>> many releases ago, along with all the mechanics to disable warnings.
> 
> Due to the new ImportWarning first introduced in 2.5b1 the question of
> disabling warnings is becoming much more pressing (I am assuming that I am not
> again the only person on the planet to have this problem).

Sure. However, many people on comp.lang.python could have told you how
to silence warnings in Python.

> Where would I put the warnings.simplefilter()? I have hundreds of scripts and
> __init__.py files.

I would have to study your application to answer that question. Putting
it into sitecustomize.py should always work.
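
For instance, something along these lines in sitecustomize.py should do
(untested sketch):

    import warnings
    warnings.simplefilter('ignore', ImportWarning)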

> I.e. I get the ImportWarning just because my current working directory happens
> to contain a subdirectory which matches one of the Python modules in the
> standard library. Isn't this going to cause widespread problems?

I don't know. Whether a warning is a problem is a matter of attitude, also.

Regards,
Martin

From rwgk at yahoo.com  Sat Jun 24 09:56:54 2006
From: rwgk at yahoo.com (Ralf W. Grosse-Kunstleve)
Date: Sat, 24 Jun 2006 00:56:54 -0700 (PDT)
Subject: [Python-Dev] ImportWarning flood
In-Reply-To: <449CEB7D.8050502@v.loewis.de>
Message-ID: <20060624075654.99693.qmail@web31507.mail.mud.yahoo.com>

--- "Martin v. L?wis" <martin at v.loewis.de> wrote:

> I don't know. Whether a warning is a problem is a matter of attitude, also.

Our users will think our applications are broken if they see warnings like
that. It is not professional.

From rwgk at yahoo.com  Sat Jun 24 10:22:11 2006
From: rwgk at yahoo.com (Ralf W. Grosse-Kunstleve)
Date: Sat, 24 Jun 2006 01:22:11 -0700 (PDT)
Subject: [Python-Dev] PyRange_New() alternative?
In-Reply-To: <449CE9BF.9030408@v.loewis.de>
Message-ID: <20060624082211.80699.qmail@web31515.mail.mud.yahoo.com>

--- "Martin v. L?wis" <martin at v.loewis.de> wrote:

> Scott David Daniels wrote:
> > I am not sure about your compiler, but if I remember the standard
> > correctly, the following code shouldn't complain:
> > 
> >     PyObject_CallFunction((PyObject*) (void *) &PyRange_Type,
> >                           "lll", start, start+len*step, step)
> 
> You remember the standard incorrectly.

There are three standards to consider: C89/90, C99, C++97/98. Here you can find
the opinion of one of the authors of the C++ standard in this matter:

http://mail.python.org/pipermail/c++-sig/2005-December/009869.html

From martin at v.loewis.de  Sat Jun 24 10:58:43 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Sat, 24 Jun 2006 10:58:43 +0200
Subject: [Python-Dev] PyRange_New() alternative?
In-Reply-To: <20060624082211.80699.qmail@web31515.mail.mud.yahoo.com>
References: <20060624082211.80699.qmail@web31515.mail.mud.yahoo.com>
Message-ID: <449CFEC3.5080605@v.loewis.de>

Ralf W. Grosse-Kunstleve wrote:
>> You remember the standard incorrectly.
> 
> There are three standards to consider: C89/90, C99, C++97/98. Here you can find
> the opinion of one of the authors of the C++ standard in this matter:
> 
> http://mail.python.org/pipermail/c++-sig/2005-December/009869.html

This might be out of context, but Dave Abrahams comment
"C++ doesn't support the C99 restrict feature." seems irrelevant:
C++ certainly does not have the "restrict" keyword, but it has
the same aliasing rules as C89 and C99. The specific problem
exists in all three languages.

Regards,
Martin

From ncoghlan at gmail.com  Sat Jun 24 11:46:22 2006
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sat, 24 Jun 2006 19:46:22 +1000
Subject: [Python-Dev] doc for new restricted execution design for Python
In-Reply-To: <bbaeab100606220928p5fc74612id0d51a155261835e@mail.gmail.com>
References: <bbaeab100606211733x3dd58226g15e2919e8e2cdd7e@mail.gmail.com>	<449A51CC.3070108@ghaering.de>	<bbaeab100606220646u75444289wa7a7abdfbac18ece@mail.gmail.com>	<449AC151.4030500@ghaering.de>
	<bbaeab100606220928p5fc74612id0d51a155261835e@mail.gmail.com>
Message-ID: <449D09EE.9040903@gmail.com>

Brett Cannon wrote:
> Yep.  That API will be used directly in the changes to pymalloc and 
> PyMem_*() macros (or at least the basic idea).  It is not *only* for 
> extension modules but for the core as well.
> 
>     Existing extension modules and existing C code in the Python interpreter
>     have no idea of any PyXXX_ calls, so I don't understand how new API
>     functions help here.
> 
> 
> The calls get added to pymalloc and PyMem_*() under the hood, so that 
> existing extension modules use the memory check automatically without a 
> change.  The calls are just there in case someone has some random need 
> to do their own malloc but still want to participate in the cap.  Plus 
> it helped me think everything through by giving everything I would need 
> to change internally an API.

This confused me a bit, too. It might help if you annotated each of the new 
API's with who the expected callers were:

   - trusted interpreter
   - untrusted interpreter
   - embedding application
   - extension module

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------
             http://www.boredomandlaziness.org

From martin at v.loewis.de  Sat Jun 24 11:47:19 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Sat, 24 Jun 2006 11:47:19 +0200
Subject: [Python-Dev] ImportWarning flood
In-Reply-To: <20060624075654.99693.qmail@web31507.mail.mud.yahoo.com>
References: <20060624075654.99693.qmail@web31507.mail.mud.yahoo.com>
Message-ID: <449D0A27.5080506@v.loewis.de>

Ralf W. Grosse-Kunstleve wrote:
> --- "Martin v. L?wis" <martin at v.loewis.de> wrote:
> 
>> I don't know. Whether a warning is a problem is a matter of attitude, also.
> 
> Our users will think our applications are broken if they see warnings like
> that. It is not professional.

Actually, your application *was* pretty close to being broken a few
weeks ago, when Guido wanted to drop the requirement that a package
must contain an __init__ file. In that case, "import math" would have
imported the directory, and given you an empty package.

Regards,
Martin

From bob at redivi.com  Sat Jun 24 12:22:32 2006
From: bob at redivi.com (Bob Ippolito)
Date: Sat, 24 Jun 2006 03:22:32 -0700
Subject: [Python-Dev] doc for new restricted execution design for Python
In-Reply-To: <449D09EE.9040903@gmail.com>
References: <bbaeab100606211733x3dd58226g15e2919e8e2cdd7e@mail.gmail.com>	<449A51CC.3070108@ghaering.de>	<bbaeab100606220646u75444289wa7a7abdfbac18ece@mail.gmail.com>	<449AC151.4030500@ghaering.de>
	<bbaeab100606220928p5fc74612id0d51a155261835e@mail.gmail.com>
	<449D09EE.9040903@gmail.com>
Message-ID: <0C9D5BB1-142A-4D50-859E-C33E3570D1B1@redivi.com>


On Jun 24, 2006, at 2:46 AM, Nick Coghlan wrote:

> Brett Cannon wrote:
>> Yep.  That API will be used directly in the changes to pymalloc and
>> PyMem_*() macros (or at least the basic idea).  It is not *only* for
>> extension modules but for the core as well.
>>
>>     Existing extension modules and existing C code in the Python interpreter
>>     have no idea of any PyXXX_ calls, so I don't understand how new API
>>     functions help here.
>>
>>
>> The calls get added to pymalloc and PyMem_*() under the hood, so that
>> existing extension modules use the memory check automatically without a
>> change.  The calls are just there in case someone has some random need
>> to do their own malloc but still want to participate in the cap.  Plus
>> it helped me think everything through by giving everything I would need
>> to change internally an API.
>
> This confused me a bit, too. It might help if you annotated each of the
> new API's with who the expected callers were:
>
>    - trusted interpreter
>    - untrusted interpreter
>    - embedding application
>    - extension module

Threading is definitely going to be an issue with multiple  
interpreters (restricted or otherwise)... for example, the PyGILState  
API probably wouldn't work anymore.

-bob


From ncoghlan at gmail.com  Sat Jun 24 12:31:45 2006
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sat, 24 Jun 2006 20:31:45 +1000
Subject: [Python-Dev] Switch statement
In-Reply-To: <17547.19802.361151.705599@montanaro.dyndns.org>
References: <20060610142736.GA19094@21degrees.com.au>
	<17547.19802.361151.705599@montanaro.dyndns.org>
Message-ID: <449D1491.5080901@gmail.com>

The current train of thought seems to be to handle a switch statement as follows:

   1. Define switch explicitly as a hash table lookup, with the hash table 
built at function definition time

   2. Allow expressions to be flagged as 'static' to request evaluation at 
def-time

   3. Have the expressions in a case clause be implicitly flagged as static

   4. Allow 'case in' to be used to indicate that a case argument is to be 
iterated and all its values added to the current case

   5. Static names are not needed - static expressions must refer solely to 
literals and non-local names

An issue with Point 4 is a syntactic nit that Eric Sumner pointed out. Since 
it involves iteration over x to populate the jump table rather than doing a 
containment test on x, using 'case in x' is misleading. It would be better 
written as 'case *x'.

Then:
   'case 1:'     ==> a switch value of 1 will jump to this case
   'case 1, 2:'  ==> a switch value of 1 or 2 will jump to this case
   'case *x'     ==> any switch value in x will jump to this case
   'case *x, *y' ==> any switch value in x or y will jump to this case
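
In current Python terms, the distinction Eric pointed out looks roughly
like this (a sketch; handle_x, x and value are made-up names):

    def handle_x():
        return 'matched x'

    x = range(3)
    value = 2

    # 'case *x' enumerates x once, up front, into the jump table:
    table = dict((item, handle_x) for item in x)
    print table[value]()        # dispatch by hashing 'value'

    # an if/elif containment test, by contrast, never enumerates x:
    if value in x:
        print handle_x()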

For the remaining points, I share Jim Jewett's concern that 'function 
definition time' is well defined for function scopes only - a better 
definition of the evaluation time is needed so that it works for other code as 
well. (Unlike Jim, I have no problems with restricting switch statements to 
hashable objects and building the entire jump table at once - if what you want 
is an arbitrary if-elif chain, then write one!)

I'd also like to avoid confusing the code execution order too much. People 
objected to the out-of-order evaluation in statement local namespaces - what's 
being proposed for static expressions is significantly worse.

So here's a fleshed out proposal for 'once expressions' that are evaluated the 
first time they are encountered and cached thereafter.

Once expressions
----------------
   An expression of the form 'once EXPR' is evaluated exactly once for a given 
scope. Precedence rules are as for yield expressions.
   Evaluation occurs the first time the expression is executed. On all 
subsequent executions, the expression will return the same result as was 
returned the first time.
   Referencing a function local variable name from a once expression is a 
syntax error. References to module globals, to closure variables and to names 
not bound in the module at all are fine.
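
A rough illustration of the intended caching in terms of current Python
(a hand-written module-level cache standing in for the hidden cell;
'expensive' and 'f' are made-up names):

    _missing = object()
    _cell = _missing                 # stands in for the compiler-created cell

    def expensive():
        print 'evaluating'
        return 42

    def f():
        # behaves roughly like: return once expensive()
        global _cell
        if _cell is _missing:
            _cell = expensive()      # evaluated the first time f() executes
        return _cell                 # later executions reuse the cached value

(The real construct would attach the cell to the function object, so that
redefining the function starts with a fresh cell; a module global is just
the simplest way to show the first-execution caching.)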

Justifying evaluation at first execution time
---------------------------------------------
   With evaluation at first execution time, the semantics are essentially the 
same in all kinds of scope (module, function, class, exec). When the 
evaluation time is defined in terms of function definition time, it is very 
unclear what happens when there is no function definition involved.
   With the once-per-scope definition above, the potentially confusing cases 
that concerned Guido would have the behaviour he desired.

 >>> def foo(c):
...   print once c
...
SyntaxError: Cannot use local variable 'c' in once expression

   The rationale for disallowing function local variables in a once expression 
is that next time the function is executed, the local variables are expected 
to contain different values, so it is unlikely that any expression depending 
on them would give the same answer. Builtins, module globals and closure 
variables, on the other hand, will typically remain the same across 
invocations of a once expression. So the rationale for the syntactic 
restriction against using local variables is still there, even though the 
local variables may actually contain valid data at the time the once 
expression is executed. This syntactic restriction only applies to function 
locals so that a module level once expression is still useful.

 >>> def foo(c):
...   def bar():
...     print once c
...   return bar
...
 >>> b1 = foo(1)
 >>> b2 = foo(2)
 >>> b1()
1
 >>> b2()
2

   For this case, the important point is that execution of the once expression 
is once per scope, not once per program. Since running the function definition 
again creates a different function object, the once expression gets executed 
again the first time that function is called.

   An advantage of first time execution for functions is that it can be used 
to defer calculation of expensive default values to the first time they're needed.

 >>> def foo(c=None):
...   if c is None:
...     c = once calculate_expensive_default()
...   # etc
...

   With function definition time evaluation, the expensive default would 
always be calculated even if the specific application always provided an 
argument to the function and hence never actually needed the default.

   The one downside to this first time execution approach is that it means 
'once' is NOT a solution to the early-binding vs late-binding problem for 
closure variables. Forcing early binding would still require abuse of function 
defaults, or a compiler directive along the lines of the current 'global'. I 
consider that a reasonable price to pay for the more consistent expression 
semantics.

CPython implementation strategy
-------------------------------
   A once expression results in the compiler creating a cell object as a 
hidden variable in the current namespace. When the once expression is 
executed, it checks if the relevant cell object is empty. If it is, then the 
expression code is evaluated in the current namespace and the result stored in 
the cell object. If the cell object is not empty, then the stored value is 
used directly as the result of the expression.
   Code objects will acquire a new attribute, co_oncevars. This is a tuple 
containing the hidden variable names assigned by the compiler. It is similar 
to the existing co_freevars used to identify the names of closure variables.
   For any code executed using exec (including module level and class level 
code), the cell objects needed to satisfy co_oncevars are created in the 
relevant namespace before the code is executed, and deleted at the end of 
execution. That way we don't have junk attributes showing up on the module and 
class objects.
   For function code (including generator functions), the cells are stored in 
a new attribute (e.g. 'func_once') on the function object so that they persist 
across calls to the function. On each call to the function, the cell objects 
are inserted into the local namespace before the function code is executed. 
This is similar to the existing func_closure attribute (just as co_oncevars is 
similar to co_freevars).
   As an alternative to using new attributes, the hidden variable names could 
be appended to co_freevars, and the necessary cells appended to func_closure. 
The problem with that approach is that it may confuse existing introspection 
tools, whereas such tools would simply ignore the new attributes.
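
For comparison, the existing closure machinery that this parallels is
already visible today (addresses elided):

 >>> def make_adder(n):
 ...     def add(x):
 ...         return x + n
 ...     return add
 ...
 >>> add3 = make_adder(3)
 >>> add3.func_code.co_freevars
 ('n',)
 >>> add3.func_closure
 (<cell at 0x...: int object at 0x...>,)

co_oncevars and func_once would simply sit alongside these, holding the
hidden cells for once expressions rather than closure cells.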

Definition of the switch statement using once
---------------------------------------------
(I deliberately omitted the trailing colon on the 'switch' to avoid the empty 
suite problem, similar to the fact that there is no colon at the end of a 
@-decorator line.)

   switch value
   case 1:
       CASE_EQUALS_1
   case *x:
       CASE_IN_X
   else:
       CASE_ELSE

would be semantically equivalent to

    _jump_dict = once dict([(1, goto_CASE_EQUALS_1)] +
                           [(item, goto_CASE_IN_X) for item in x])
    try:
        _offset = _jump_dict[value]
    except KeyError:
        _offset = goto_CASE_ELSE
    _goto _offset

(Where _goto is a compiler internal operation to jump to a different point 
within the current code object)

This would entail updating the lnotab format to permit bytecode order that 
doesn't match source code order (since the case expressions would all be up 
with the evaluation of the jump dict).


Why 'once'?
-----------
   I picked once for the keyword because I consider the most important 
semantic point about the affected expression to be the fact that it is 
evaluated at most once per scope.
   static, const, final, etc are only contenders because of the other 
languages that use them as keywords. The words, in and of themselves, don't 
really have the right meaning.
   The once keyword is used by Eiffel to indicate a 0-argument function that 
is executed the first time it is called, and thereafter returns the result of 
that first call. That is pretty close to what I'm proposing it for here (only 
I'm proposing once-per-scope for expressions rather than Eiffel's 
once-per-program for functions).
   Additionally, a quick search for "once =" in the standard lib and its tests 
didn't find any occurrences (aside from a 'nonce =' in urllib2 :). Java's 
final (which is the only other option I really considered for a keyword), 
turned up 3 genuine hits (two in the compiler module, one in test_generators).

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------
             http://www.boredomandlaziness.org

From Scott.Daniels at Acm.Org  Sat Jun 24 14:43:49 2006
From: Scott.Daniels at Acm.Org (Scott David Daniels)
Date: Sat, 24 Jun 2006 05:43:49 -0700
Subject: [Python-Dev] PyRange_New() alternative?
In-Reply-To: <449CE9BF.9030408@v.loewis.de>
References: <A966EC37-FD5D-4E81-AAB3-43284593DE4A@redivi.com>	<20060623053820.45516.qmail@web31504.mail.mud.yahoo.com>	<e7i03m$sjs$1@sea.gmane.org>
	<449CE9BF.9030408@v.loewis.de>
Message-ID: <e7jc0p$tnd$1@sea.gmane.org>

Martin v. Löwis wrote:
> Scott David Daniels wrote:
>> ... if I remember the standard
>> correctly, the following code shouldn't complain:
>>
>>     PyObject_CallFunction((PyObject*) (void *) &PyRange_Type,
>>                           "lll", start, start+len*step, step)
> 
> You remember the standard incorrectly. Python's usage of casts has
> undefined behaviour, and adding casts only makes the warning go away,
> but does not make the problem go away.

    ... (PyObject*) &PyRange_Type, ...
should only work in the presence of subtypes (which C89 and C99 don't
define).  If there were a way to declare PyTypeObject as a subtype of
PyObject then this cast should work.

    ... (PyObject*) (void *) &PyRange_Type, ...
Says a pointer to PyRange_Type should have the structure of a pointer to
PyObject.  Since the prelude to PyTypeObject matches that of PyObject,
this should be an effective cast.  In addition, casting pointers to and
from "void *" should be silent -- _that_ is what I thought I was
remembering of the standard.

Do not mistake this for advocacy of changing Python's macros; I was
telling the OP how he could shut up the complaint he was getting.  In
C extensions I'd be likely to do the "convert through void *" trick
myself.

-- Scott David Daniels
Scott.Daniels at Acm.Org


From martin at v.loewis.de  Sat Jun 24 15:30:13 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Sat, 24 Jun 2006 15:30:13 +0200
Subject: [Python-Dev] PyObject* aliasing (Was: PyRange_New()
	alternative?)
In-Reply-To: <e7jc0p$tnd$1@sea.gmane.org>
References: <A966EC37-FD5D-4E81-AAB3-43284593DE4A@redivi.com>	<20060623053820.45516.qmail@web31504.mail.mud.yahoo.com>	<e7i03m$sjs$1@sea.gmane.org>	<449CE9BF.9030408@v.loewis.de>
	<e7jc0p$tnd$1@sea.gmane.org>
Message-ID: <449D3E65.2020405@v.loewis.de>

Scott David Daniels wrote:
>     ... (PyObject*) (void *) &PyRange_Type, ...
> Says a pointer to PyRange_Type should have the structure of a pointer
> PyObject.  Since the prelude to PyTypeObject matches that of PyObject,
> this should be an effective cast.

The C standard says it's undefined behaviour. The compiler is free
to layout PyObject entirely different from PyTypeObject, and
dereferencing a PyTypeObject through a PyObject* is undefined
behaviour. Python does this all the time, and it did not cause much
problems in the past, but that all still does not make it
defined behaviour.

In particular, gcc now starts to assume (rightfully) that a
PyTypeObject* and a PyObject* cannot possibly refer to the same
memory when being dereferenced (hence the warning about aliasing).
That means that the compiler does not need to re-read contents of
one of them (e.g. the reference count) even if the memory gets
changed through the other pointer. That may cause bad code to be
generated (if the pointers actually do alias).

The only well-defined way to alias between types in this context
is that a pointer to a struct may alias with a pointer to its
first member. So if PyTypeObject was defined as

struct PyTypeObject {
  PyObject _ob;
  Py_ssize_t ob_size;
  const char *tp_name;
  ...
};

then Python's behaviour would be well-defined (i.e. one could
dereference the _ob member through a PyObject*).

> Do not mistake this for advocacy of changing Python's macros; I was
> telling the OP how he could shut up the complaint he was getting.  In
> C extensions I'd be likely to do the "convert through void *" trick
> myself.

And risk that the compiler generates bad code. In most cases, the
compiler cannot detect that a program breaks the standard C aliasing
rules. However, in some cases, it can, and in these cases, it issues
a warning to make the programmer aware that the program might be
full of errors (such as Python). It's unfortunate that people silence
the warnings before understanding them.

Regards,
Martin

P.S. As for an example where the compiler really does generate bad
code: consider

#include <stdio.h>

long f(int *a, long *d){
        (*d)++;
        *a = 5;
        return  *d;
}

int main()
{
        long d = 0;
        printf("%ld\n", f((int*)&d, &d));
        return 0;
}

Here, d starts out as 0, then gets incremented (to 1) in
f. Then, a value of 5 is assigned through a (which also
points to d), and then the value of d (through the pointer)
is printed.

Without optimization, gcc 4.1 generates code that prints 5.
With optimization, the compiler recognizes that d and a
cannot alias, so that it does not need to refetch *d at
the end of f (it still has the value in a register). So
it does not reread the value, and instead returns the old
value (1), and prints that.

In calling f, the compiler notices that undefined behavior
is invoked, and generates the warning. Casting through
void* silences the warning; the generated code is still
"incorrect" (of course, it's undefined, so anything
is "correct").


From exarkun at divmod.com  Sat Jun 24 16:00:05 2006
From: exarkun at divmod.com (Jean-Paul Calderone)
Date: Sat, 24 Jun 2006 10:00:05 -0400
Subject: [Python-Dev] ImportWarning flood
In-Reply-To: <449D0A27.5080506@v.loewis.de>
Message-ID: <20060624140005.29014.1992302539.divmod.quotient.8985@ohm>

On Sat, 24 Jun 2006 11:47:19 +0200, "\"Martin v. Löwis\"" <martin at v.loewis.de> wrote:
>Ralf W. Grosse-Kunstleve wrote:
>> --- "Martin v. L?wis" <martin at v.loewis.de> wrote:
>>
>>> I don't know. Whether a warning is a problem is a matter of attitude, also.
>>
>> Our users will think our applications are broken if they see warnings like
>> that. It is not professional.
>
>Actually, your application *was* pretty close to being broken a few
>weeks ago, when Guido wanted to drop the requirement that a package
>must contain an __init__ file. In that case, "import math" would have
>imported the directory, and given you an empty package.

But this change was *not* made, and afaict it is not going to be made.
So the application is not broken, and the warning is entirely spurious.

I am very unhappy that the burden of understanding Python's package
structure is being pushed onto end users in this way.  Several of my
projects now emit three or four warnings on import now.

The Twisted plugin system relies on the fact that directories without
__init__ are not Python packages (since they _aren't_, have never been,
and it has always been extremely clear that Python will ignore them).

Of course, Twisted is a pretty marginal Python user so I'm sure no one
cares.

From martin at v.loewis.de  Sat Jun 24 16:24:11 2006
From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=)
Date: Sat, 24 Jun 2006 16:24:11 +0200
Subject: [Python-Dev] ImportWarning flood
In-Reply-To: <20060624140005.29014.1992302539.divmod.quotient.8985@ohm>
References: <20060624140005.29014.1992302539.divmod.quotient.8985@ohm>
Message-ID: <449D4B0B.7030605@v.loewis.de>

Jean-Paul Calderone wrote:
> I am very unhappy that the burden of understanding Python's package
> structure is being pushed onto end users in this way.  Several of my
> projects now emit three or four warnings on import now.

So are you requesting that the change is reverted?

Regards,
Martin

From aahz at pythoncraft.com  Sat Jun 24 16:27:15 2006
From: aahz at pythoncraft.com (Aahz)
Date: Sat, 24 Jun 2006 07:27:15 -0700
Subject: [Python-Dev] ImportWarning flood
In-Reply-To: <20060624140005.29014.1992302539.divmod.quotient.8985@ohm>
References: <449D0A27.5080506@v.loewis.de>
	<20060624140005.29014.1992302539.divmod.quotient.8985@ohm>
Message-ID: <20060624142715.GA12206@panix.com>

On Sat, Jun 24, 2006, Jean-Paul Calderone wrote:
>
> I am very unhappy that the burden of understanding Python's package
> structure is being pushed onto end users in this way.  Several of my
> projects now emit three or four warnings on import now.
>
> The Twisted plugin system relies on the fact that directories without
> __init__ are not Python packages (since they _aren't_, have never
> been, and it has always been extremely clear that Python will ignore
> them).
>
> Of course, Twisted is a pretty marginal Python user so I'm sure no one
> cares.

Then again, bringing this back to the original source of this change,
Google is a pretty marginal Python user, too.  ;-)

I was a pretty strong -1 on the original proposed change of allowing
import on empty directories, but my take is that if a project
deliberately includes empty directories, they can add a new warning
filter on program startup.  Your users will have to upgrade to a new
version of the application or do a similar fix in their own
sitecustomize.  I don't consider that a huge burden.
-- 
Aahz (aahz at pythoncraft.com)           <*>         http://www.pythoncraft.com/

"I saw `cout' being shifted "Hello world" times to the left and stopped
right there."  --Steve Gonedes

From exarkun at divmod.com  Sat Jun 24 17:03:11 2006
From: exarkun at divmod.com (Jean-Paul Calderone)
Date: Sat, 24 Jun 2006 11:03:11 -0400
Subject: [Python-Dev] ImportWarning flood
In-Reply-To: <20060624142715.GA12206@panix.com>
Message-ID: <20060624150311.29014.711788606.divmod.quotient.9044@ohm>

On Sat, 24 Jun 2006 07:27:15 -0700, Aahz <aahz at pythoncraft.com> wrote:
>On Sat, Jun 24, 2006, Jean-Paul Calderone wrote:
>>
>> I am very unhappy that the burden of understanding Python's package
>> structure is being pushed onto end users in this way.  Several of my
>> projects now emit three or four warnings on import now.
>>
>> The Twisted plugin system relies on the fact that directories without
>> __init__ are not Python packages (since they _aren't_, have never
>> been, and it has always been extremely clear that Python will ignore
>> them).
>>
>> Of course, Twisted is a pretty marginal Python user so I'm sure no one
>> cares.
>
>Then again, bringing this back to the original source of this change,
>Google is a pretty marginal Python user, too.  ;-)

I think it is safe to say that Twisted is more widely used than anything
Google has yet released.  Twisted also has a reasonably plausible
technical reason to dislike this change.  Google has a bunch of engineers
who, apparently, cannot remember to create an empty __init__.py file in
some directories sometimes.

>
>I was a pretty strong -1 on the original proposed change of allowing
>import on empty directories, but my take is that if a project
>deliberately includes empty directories, they can add a new warning
>filter on program startup.  Your users will have to upgrade to a new
>version of the application or do a similar fix in their own
>sitecustomize.  I don't consider that a huge burden.

The usage here precludes fixing it in Twisted.  Importing twisted itself
prints a warning: there's no way to get code run soon enough to suppress
this.

I do think requiring each user to modify sitecustomize is overly
burdensome.  Of course this is highly subjective and I don't expect
anyone to come to an agreement over it, but it seems clear that it is
at least a burden of some sort.

Jean-Paul

From fredrik at pythonware.com  Sat Jun 24 17:03:17 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Sat, 24 Jun 2006 17:03:17 +0200
Subject: [Python-Dev] Switch statement
In-Reply-To: <20060623170807.1DFA.JCARLSON@uci.edu>
References: <ca471dc20606231137j440ae028p85ea4407363d1bc2@mail.gmail.com>	<eaaf21dc0606231436n35b71a55y25a15014d428e8f4@mail.gmail.com>
	<20060623170807.1DFA.JCARLSON@uci.edu>
Message-ID: <e7jk7i$jue$1@sea.gmane.org>

Josiah Carlson wrote:

> This is a good thing, because if switch/case ends up functionally
> identical to if/elif/else, then it has no purpose as a construct.

there's no shortage of Python constructs that are functionally identical 
to existing constructs.  as with all syntactic "sugar", the emphasis 
should be on what the programmer wants to express, not how you can 
artificially constrain the implementation to make the new thing slightly 
different from what's already in there.

and the point of switch/case is to be able to say "I'm going to dispatch 
on a single value" in a concise way; the rest is optimizations.

</F>


From exarkun at divmod.com  Sat Jun 24 17:05:28 2006
From: exarkun at divmod.com (Jean-Paul Calderone)
Date: Sat, 24 Jun 2006 11:05:28 -0400
Subject: [Python-Dev] ImportWarning flood
In-Reply-To: <449D4B0B.7030605@v.loewis.de>
Message-ID: <20060624150528.29014.589129989.divmod.quotient.9047@ohm>

On Sat, 24 Jun 2006 16:24:11 +0200, "\"Martin v. Löwis\"" <martin at v.loewis.de> wrote:
>Jean-Paul Calderone wrote:
>> I am very unhappy that the burden of understanding Python's package
>> structure is being pushed onto end users in this way.  Several of my
>> projects now emit three or four warnings on import now.
>
>So are you requesting that the change is reverted?

Yes, please.

>
>Regards,
>Martin
>

From nmm1 at cus.cam.ac.uk  Fri Jun 23 10:13:17 2006
From: nmm1 at cus.cam.ac.uk (Nick Maclaren)
Date: Fri, 23 Jun 2006 09:13:17 +0100
Subject: [Python-Dev] Numerical robustness, IEEE etc.
Message-ID: <E1Ftgmz-0006xA-6I@draco.cus.cam.ac.uk>

"Neal Norwitz" <nnorwitz at gmail.com> wrote:
>
> Seriously, there seems to be a fair amount of miscommunication in this
> thread.  ...

Actually, this isn't really a reply to you, but you have described
the issue pretty well.

> The best design doc that I know of is code. :-)
>
> It would be much easier to communicate using code snippets.
> I'd suggest pointing out places in the Python code that are lacking
> and how you would correct them.  That will make it easier for everyone
> to understand each other.

Yes.  That is easy.  What, however, I have part of (already) and was
proposing to do BEFORE going into details was to generate a testing
version that shows how I think that it should be done.  Then people
could experiment with both the existing code and mine, to see the
differences.

But, in order to do that, I needed to find out the best way of going
about it ....

It wouldn't help with the red herrings, such as the reasons why it
is no longer possible to rely on hardware interrupts as a mechanism.
But they are only very indirectly relevant.

The REASON that I wanted to do that was precisely because I knew that
very few people would be deeply into arithmetic models, the details
of C89 and C99 (ESPECIALLY as the standard is incomplete :-( ), and
so having a sandbox before starting the debate would be a GREAT help.
It's much easier to believe things when you can try them yourself ....



"Facundo Batista" <facundobatista at gmail.com> wrote:
> 
> Well, so I'm completely lost... because, if all you want is to be able
> to chose a returned value or an exception raised, you actually can
> control that in Decimal.

Yes, but I have so far failed to get hold of a copy of the Decimal code!
I will have another go at subverting Subversion.  I should VERY much
like to be get hold of those documents AND build a testing version of
the code - then I can go away, experiment, and come back with some more
directed comments (not mere generalities).



Aahz <aahz at pythoncraft.com> wrote:
> 
> You can't expect us to do your legwork for you, and you can't expect
> that Tim Peters is the only person on the dev team who understands what
> you're getting at.

Well, see above for the former - I did post my intents in my first
message.  And, as for the latter, I have tried asking what I can
assume that people know - it is offensive and time-consuming and hence
counter-productive to start off assuming that your audience does not
have a basic background.

To repeat, it is precisely to address THAT issue that I wanted to build
a sandbox BEFORE going into details.  If people don't know the theory
in depth and but are interested, they could experiment with the sandbox
and see what happens in practice.

> Incidentally, your posts will go directly to python-dev without
> moderation if you subscribe to the list, which is a Good Idea if you want
> to participate in discussion.

Er, you don't receive a mailing list at all if you don't subscribe!

If that is the intent, I will see if I can find how to subscribe in
the unmoderated fashion.  I didn't spot two methods on the Web pages
when I subscribed.


Regards,
Nick Maclaren,
University of Cambridge Computing Service,
New Museums Site, Pembroke Street, Cambridge CB2 3QH, England.
Email:  nmm1 at cam.ac.uk
Tel.:  +44 1223 334761    Fax:  +44 1223 334679

From nmm1 at cus.cam.ac.uk  Fri Jun 23 14:38:14 2006
From: nmm1 at cus.cam.ac.uk (Nick Maclaren)
Date: Fri, 23 Jun 2006 13:38:14 +0100
Subject: [Python-Dev] Numerical robustness, IEEE etc.
In-Reply-To: Your message of "Fri, 23 Jun 2006 07:57:05 EDT."
	<2e1434c10606230457h7af2f39j78a749b5984af7c1@mail.gmail.com> 
Message-ID: <E1FtkvO-0005iB-JM@libra.cus.cam.ac.uk>

To the moderator:  this is getting ridiculous.


jacobs at bioinformed.com wrote:
> 
> > >Unfortunately, that doesn't help, because it is not where the issues
> > >are.  What I don't know is how much you know about numerical models,
> > >IEEE 754 in particular, and C99.  You weren't active on the SC22WG14
> > >reflector, but there were some lurkers.
>
> Hand wave, hand wave, hand wave.  Many of us here aren't stupid and have
> more than passing experience with numerical issues, even if we haven't been
> "active on SC22WG14".  Let's stop with the high-level pissing contest and
> lay out a clear technical description of exactly what has your knickers in a
> twist, how it hurts Python, and how we can all work together to make the
> pain go away.

SC22WG14 is the ISO committee that handles C standardisation.  One
of the reasons that the UK voted "no" was because the C99 standard
was seriously incomprehensible in many areas to anyone who had not
been active on the reflector.  If you think that I can summarise a
blazing row that went on for over 5 years and produced over a million
lines of technical argument alone in a "clear technical description",
you have an exaggerated idea of my abilities.

I have a good many documents that I could post, but they would not
help.  Some of them could be said to be "clear technical descriptions"
but most of them were written for other audiences, and assume those
audiences' backgrounds.  I recommend starting by reading the comments
in floatobject.c and mathmodule.c and then looking up the sections of
the C89 and C99 standards that are referenced by them.

> A good place to start: You mentioned earlier that there where some
> nonsensical things in floatobject.c.  Can you list some of the most serious
> of these?

Well, try the following for a start:

Python 2.4.2 (#1, May  2 2006, 08:28:01)
[GCC 4.1.0 (SUSE Linux)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> a = "NaN"
>>> b = float(a)
>>> c = int(b)
>>> d = (b == b)
>>> print a, b, c, d
NaN nan 0 False

Python 2.3.3 (#1, Feb 18 2004, 11:58:04) 
[GCC 2.8.1] on sunos5
Type "help", "copyright", "credits" or "license" for more information.
>>> a = "NaN"
>>> b = float(a)
>>> c = int(b)
>>> d = (b == b)
>>> print a, b, c, d
NaN NaN 0 True

That demonstrates that the error state is lost by converting to int,
and that NaN testing isn't reliable.


Regards,
Nick Maclaren,
University of Cambridge Computing Service,
New Museums Site, Pembroke Street, Cambridge CB2 3QH, England.
Email:  nmm1 at cam.ac.uk
Tel.:  +44 1223 334761    Fax:  +44 1223 334679

From nmm1 at cus.cam.ac.uk  Fri Jun 23 14:50:36 2006
From: nmm1 at cus.cam.ac.uk (Nick Maclaren)
Date: Fri, 23 Jun 2006 13:50:36 +0100
Subject: [Python-Dev] Numerical robustness, IEEE etc.
In-Reply-To: Your message of "Fri, 23 Jun 2006 13:32:37 BST."
	<2mmzc4jbmi.fsf@starship.python.net> 
Message-ID: <E1Ftl7M-0005oK-N3@libra.cus.cam.ac.uk>

Michael Hudson <mwh at python.net> wrote:
>
> But, a floating point exception isn't a machine check interrupt, it's
> a program interrupt...

For reasons that I could go into, but are irrelevant, almost all
modern CPU architectures have only ONE interrupt mechanism, and use
it for both of those.  It is the job of the interrupt handler (i.e.
FLIH, first-level interrupt handler, usually in Assembler) to
classify those, get into the appropriate state and call the interrupt
handling code.

Now, this is a Bad Idea, but separating floating-point exceptions
from machine checks at the hardware level died with mainframes, as
far as I know.  The problem with the current approach is that it
makes it very hard for the operating system to allow the application
to handle the former.  And the problem with most modern operating
systems is that they don't even do what they could do at all well, because
THAT died with the mainframes, too :-(

The impact of all this mess on things like Python is that exception
handling is a nightmare area, especially when you bring in threading
(i.e. hardware threading with multiple cores, or genuinely parallel
threading on a single core).  Yes, I have brought a system down by
causing too many floating-point exceptions in all threads of a
highly parallel program on a large SMP ....

> See, that wasn't so hard!  We'd have saved a lot of heat and light if
> you'd said that at the start (and if you think you'd made it clear
> already: you hadn't).

I thought I had.  I accept your statement that I hadn't.  Sorry.


Regards,
Nick Maclaren,
University of Cambridge Computing Service,
New Museums Site, Pembroke Street, Cambridge CB2 3QH, England.
Email:  nmm1 at cam.ac.uk
Tel.:  +44 1223 334761    Fax:  +44 1223 334679

From nmm1 at cus.cam.ac.uk  Fri Jun 23 16:35:36 2006
From: nmm1 at cus.cam.ac.uk (Nick Maclaren)
Date: Fri, 23 Jun 2006 15:35:36 +0100
Subject: [Python-Dev] Numerical robustness, IEEE etc.
In-Reply-To: Your message of "Fri, 23 Jun 2006 09:24:23 EDT."
	<1f7befae0606230624p545518f6h41513e326fea5665@mail.gmail.com> 
Message-ID: <E1Ftmky-0002mR-Ec@draco.cus.cam.ac.uk>

"Tim Peters" <tim.peters at gmail.com> wrote:
>
> I suspect Nick spends way too much time reading standards ;-)

God help me, YES!  And in trying to get them improved.  Both of which
are very bad for my blood pressure :-(

My real interest is in portable, robust programming - I DON'T abuse
the terms to mean bitwise identical, but that is by the way - and I
delved in here trying to write a jiffy little bit of just such code
as part of a course example.  BANG!!!  It failed in both respects on
the first two systems I tried on, and it wasn't my code that was wrong.

The killer is that standards are the nearest to a roadmap for portability,
especially portability and robustness.  If you have non-conforming code,
and it goes bananas, the compiler vendor will refuse to do anything, no
matter how clearly it is a bug in the compiler or library.  What is
worse is that there is an incentive for the leading vendors (see below)
to implement down to the standard, even when it is easier to do better.
And this is happening in this area.

> What he said is:
> 
>     If you look at floatobject.c, you will find it solid with constructions
>     that make limited sense in C99 but next to no sense in C89.
> 
> And, in fact, C89 truly defines almost nothing about floating-point
> semantics or pragmatics.  Nevertheless, if a thing "works" under gcc
> and under MS C, then "it works" for something like 99.9% of Python's
> users, and competitive pressures are huge for other compiler vendors
> to play along with those two.

Yup, though you mean gcc on an x86/AMD64/EM64T system, and 99.9% is a
rhetorical exaggeration - but one of the failures WAS on one of those! 

> I don't know what specifically Nick had in mind, and join the chorus
> asking for specifics.

That is why I wanted to:

   a) Read the decimal stuff and play around with the module
and:
   b) Write a sandbox and sort out my obvious errors
and:
   c) Write a PEP describing the issue and proposals

BEFORE going into details.  The devil is in the details, and I wanted
to leave him sleeping until I had lined up my howitzers ....

> I _expect_ he's got a keen eye for genuine
> coding bugs here, but also expect I'd consider many technically
> dubious bits of FP code to be fine under the "de facto standard"
> dodge.

Actually, I tried to explain that I don't have many objections to the
coding of the relevant files - whoever wrote them and I have a LOT of
common attitudes :-) And I have been strongly into de facto standards
for over 30 years, so am happy with them.  Yes, I have found a couple of
bugs, but not ones worth fixing (e.g. there is a use of x != x where
PyISNAN should be used, and a redundant test for an already excluded
case, but what the hell?)  My main objection is that they invoke C
behaviour in many places, and that is (a) mostly unspecified in C, (b)
numerically insane in C99 and (c) broken in practice.

> So, sure, everything we do is undefined, but, no, we don't really care
> :-)  If a non-trivial 100%-guaranteed-by-the-standard-to-work C
> program exists, I don't think I've seen it.

I can prove that none exists, though I would have to trawl over
SC22WG14 messages to prove it.  I spent a LONG time trying to get
"undefined" defined and used consistently (let alone sanely) in C, and
failed dismally.

> BTW, Nick, are you aware of Python's fpectl module?  That's
> user-contributed code that attempts to catch overflow, div-by-0, and
> invalid operation on 754 boxes and transform them  into raising a
> Python-level FloatingPointError exception.  Changes were made all over
> the place to try to support this at the time.  Every time you see a
> PyFPE_START_PROTECT or PyFPE_END_PROTECT macro in Python's C code,
> that's the system it's trying to play nicely with.  "Normally", those
> macros have empty expansions.

Aware of, yes.  Have looked at, no.  I have already beaten my head
against that area and already knew the issues.  I have even implemented
run-time systems that got it right, and that is NOT pretty.

> fpectl is no longer built by default, because repeated attempts failed
> to locate any but "ya, I played with it once, I think" users, and the
> masses of platform-specific #ifdef'ery in fpectlmodule.c were
> suffering fatal bit-rot.  No users + no maintainers means I expect
> it's likely that module will go away in the foreseeable future.  You'd
> probably hate its _approach_ to this anyway ;-)

Oh, yes, I know that problem.  You would be AMAZED at how many 'working'
programs blow up when I turn it on on systems that I manage - not
excluding Python itself (integer overflow) :-)  And, no, I don't hate
that approach, because it is one of the plausible ones; not good, but
what can you do?



Regards,
Nick Maclaren,
University of Cambridge Computing Service,
New Museums Site, Pembroke Street, Cambridge CB2 3QH, England.
Email:  nmm1 at cam.ac.uk
Tel.:  +44 1223 334761    Fax:  +44 1223 334679

From nmm1 at cus.cam.ac.uk  Sat Jun 24 11:48:09 2006
From: nmm1 at cus.cam.ac.uk (Nick Maclaren)
Date: Sat, 24 Jun 2006 10:48:09 +0100
Subject: [Python-Dev] Numerical robustness, IEEE etc.
In-Reply-To: Your message of "Sat, 24 Jun 2006 01:11:05 EDT."
	<1f7befae0606232211x2271215dsbde703e0fa8bc3c5@mail.gmail.com> 
Message-ID: <E1Fu4kL-0002QW-J4@draco.cus.cam.ac.uk>

"Tim Peters" <tim.peters at gmail.com> wrote:
> 
> > SC22WG14?  is that some marketing academy?  not a very good one, obviously.
> 
> That's because it's European ;-)

Er, please don't post ironic satire of that nature - many people will
believe it!

ISO is NOT European.  It is the International Standards Organisation,
of which ANSI is a member.  And, for reasons and with consequences that
are only peripherally relevant, SC22WG14 has always been dominated by
ANSI.  In fact, C89 was standardised by ANSI (sic), acting as an agent
for ISO.  C99 was standardised by ISO directly, but for various reasons
only some of which I know, was even more ANSI-dominated than C89.

Note that I am NOT saying "big bad ANSI", as a large proportion of that
was and is the ghastly support provided by many countries to their
national standards bodies.  The UK not excepted.


Regards,
Nick Maclaren,
University of Cambridge Computing Service,
New Museums Site, Pembroke Street, Cambridge CB2 3QH, England.
Email:  nmm1 at cam.ac.uk
Tel.:  +44 1223 334761    Fax:  +44 1223 334679

From nmm1 at cus.cam.ac.uk  Sat Jun 24 13:15:37 2006
From: nmm1 at cus.cam.ac.uk (Nick Maclaren)
Date: Sat, 24 Jun 2006 12:15:37 +0100
Subject: [Python-Dev] Numerical robustness, IEEE etc.
In-Reply-To: Your message of "Fri, 23 Jun 2006 23:50:44 EDT."
	<e7icql$rmr$1@sea.gmane.org> 
Message-ID: <E1Fu66z-0002l8-Fk@draco.cus.cam.ac.uk>

"Terry Reedy" <tjreedy at udel.edu> wrote:
>
> Of interest among their C-EPs is one for adding the equivalent of our 
> decimal module
> http://www.open-std.org/jtc1/sc22/wg14/www/projects#24732 

IBM is mounting a major campaign to get its general decimal arithmetic
standardised as THE standard form of arithmetic.  There is a similar
(more advanced) move in C++, and they are working on Fortran.  I assume
that Cobol is already on board, and there may be others.

There is nothing underhand about this - IBM is quite open about it,
and I believe that they are making all critical technologies freely
available.  The design has been thought out and is at least half-sane
- which makes it among the best 1-2% of IT technologies :-(

Personally, I think that it is overkill, because it is a MASSIVELY
complex solution, and will make sense only where at least two of
implementation cost, performance, power usage and CPU/memory size are
not constraints.  E.g. mainframes, heavyweight commercial codes etc.
but definitely NOT massive parallelism, very low power computing,
micro-miniaturisation and so on.  IEEE 754 was bad (which is why it is
so often implemented only in part), but this is MUCH worse.  God alone
knows whether IBM will manage to move the whole of IT design - they
have done it before, and have failed before (often after having got
further than this).

Now, whether that makes it a good match for Python is something that
is clearly fruitful grounds for debate :-)


Regards,
Nick Maclaren,
University of Cambridge Computing Service,
New Museums Site, Pembroke Street, Cambridge CB2 3QH, England.
Email:  nmm1 at cam.ac.uk
Tel.:  +44 1223 334761    Fax:  +44 1223 334679

From nmm1 at cus.cam.ac.uk  Sat Jun 24 15:44:42 2006
From: nmm1 at cus.cam.ac.uk (Nick Maclaren)
Date: Sat, 24 Jun 2006 14:44:42 +0100
Subject: [Python-Dev] Numerical robustness, IEEE etc.
In-Reply-To: Your message of "Fri, 23 Jun 2006 14:21:18 EDT."
	<fb6fbf560606231121v1384256ew41ca19b782889293@mail.gmail.com> 
Message-ID: <E1Fu8RG-0003g1-Nm@draco.cus.cam.ac.uk>

"Jim Jewett" <jimjjewett at gmail.com> wrote:
> 
> > The conventional interpretation was that any operation that
> > was not mathematically continuous in an open region including its
> > argument values (in the relevant domain) was an error, and that all
> > such errors should be flagged.  That is what I am talking about.
> 
> Not a bad goal, but not worth sweating over, since it isn't
> sufficient.  It still allows functions whose continuity does not
> extend to the next possible floating point approximation, or functions
> whose value, while continuous, changes "too much" in that region.

Oh, yes, quite.  But I wasn't describing something that needed effort;
I was merely describing the criterion that was traditionally used (and
still is, see below).  There is also the Principle of Least Surprise:
the behaviour of a language should be at least explicable to mere
mortals (a.k.a. ordinary programmers) - one that says "whatever the
designing committee thought good at the time" is a software engineering
disaster.

> For some uses, it is more important to be consistent with established
> practice than to be as correct as possible.  If the issues are still
> problems, and can't be solved in languages like java, then ... the
> people who want "correct" behavior will be a tiny minority, and it
> makes sense to have them use a 3rd-party extension.

I don't think that you understand the situation.

I was and am describing established practice, as used by the numeric
programmers who care about getting reliable answers - most of those
still use Fortran, for good and sufficient reasons.  There are two
other established practices:

    Floating-point is a figment of your imagination - don't support it.

    Yeah.  Right.  Whatever.  It's only approximate, so who gives a
damn what it does?

Mine is the approach taken by the Fortran, C and C++ standards
and many Fortran implementations, but the established practice in
highly optimised Fortran and most C is the last.  Now, Java (to
some extent) and C99 introduced something that attempts to eliminate
errors by defining what they do (more-or-less arbitrarily); much as
if Python said that, if a list or dictionary entry wasn't found, it
would create one and return None.  But that is definitely NOT
established practice, despite the fact that its proponents claim it
is.  Even IEEE 754 (as specified) has never reached established
practice at the language level.

The very first C99 Annex F implementation that I came across appeared
in 2005 (Sun One Studio 9 under Solaris 10 - BOTH are needed); I have
heard rumours that HP-UX may have one, but neither AIX nor Linux does
(even now).  I have heard rumours that the latest Intel compiler may be
C99 Annex F, but don't use it myself, and I haven't heard anything
reliable either way for Microsoft.  What is more, many of the tender
documents for systems bought for numeric programming in 2005 said
explicitly that they wanted C89, not C99 - none asked for C99 Annex F
that I saw.  No, C99 Annex F is NOT established practice and, God
willing, never will be.

> > For example, consider conversion between float
> > and long - which class should control the semantics?
> 
> The current python approach with binary fp is to inherit from C
> (consistency with established practice).  The current python approach
> for Decimal (or custom classes) is to refuse to guess in the first
> place; people need to make an explicit conversion.  How is this a
> problem?

See above re C established practice.

The above is not my point.  I am talking about the generic problem
where class A says that overflow should raise an exception, class B
says that it should return infinity and class C says nothing.  What
should C = A*B do on overflow?
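
For comparison, the decimal module settles that question for its own type by
putting the policy in a per-thread context rather than on either class;
roughly:

    from decimal import Decimal, getcontext, Overflow

    ctx = getcontext()
    huge = Decimal('9.9e+999999999')   # just under the default Emax
    ctx.traps[Overflow] = False
    print huge * 10                    # quietly prints Infinity
    ctx.traps[Overflow] = True
    try:
        huge * 10                      # now the same operation raises
    except Overflow:
        print 'overflow trapped'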

> [ Threading and interrupts ]

No, that is a functionality issue, but the details are too horrible
to go into here.  Python can do next to nothing about them, except
to distrust them - just as it already does.


Regards,
Nick Maclaren,
University of Cambridge Computing Service,
New Museums Site, Pembroke Street, Cambridge CB2 3QH, England.
Email:  nmm1 at cam.ac.uk
Tel.:  +44 1223 334761    Fax:  +44 1223 334679

From jcarlson at uci.edu  Sat Jun 24 17:45:15 2006
From: jcarlson at uci.edu (Josiah Carlson)
Date: Sat, 24 Jun 2006 08:45:15 -0700
Subject: [Python-Dev] Numerical robustness, IEEE etc.
In-Reply-To: <E1Ftgmz-0006xA-6I@draco.cus.cam.ac.uk>
References: <E1Ftgmz-0006xA-6I@draco.cus.cam.ac.uk>
Message-ID: <20060624084037.1DFD.JCARLSON@uci.edu>


Nick Maclaren <nmm1 at cus.cam.ac.uk> wrote:
> "Facundo Batista" <facundobatista at gmail.com> wrote:
> > 
> > Well, so I'm completely lost... because, if all you want is to be able
> > to chose a returned value or an exception raised, you actually can
> > control that in Decimal.
> 
> Yes, but I have so far failed to get hold of a copy of the Decimal code!
> I will have another go at subverting Subversion.  I should VERY much
> like to get hold of those documents AND build a testing version of
> the code - then I can go away, experiment, and come back with some more
> directed comments (not mere generalities).

Download any Python 2.4 or 2.5 distribution.  It will include the
decimal.py module.  Heck, you may even have it already, if you are
running Python 2.4 or later.  To see the latest version:
http://svn.python.org/view/python/trunk/Lib/decimal.py

If you want to see the C version of the decimal module, it is available:
http://svn.python.org/view/sandbox/trunk/decimal-c/

The general magic incantation to see the SVN repository is:
http://svn.python.org/view/


 - Josiah


From bioinformed at gmail.com  Sat Jun 24 17:50:36 2006
From: bioinformed at gmail.com (Kevin Jacobs <jacobs@bioinformed.com>)
Date: Sat, 24 Jun 2006 11:50:36 -0400
Subject: [Python-Dev] Numerical robustness, IEEE etc.
In-Reply-To: <E1FtkvO-0005iB-JM@libra.cus.cam.ac.uk>
References: <2e1434c10606230457h7af2f39j78a749b5984af7c1@mail.gmail.com>
	<E1FtkvO-0005iB-JM@libra.cus.cam.ac.uk>
Message-ID: <2e1434c10606240850i3c49f193nac2cacc8db72059f@mail.gmail.com>

On 6/23/06, Nick Maclaren <nmm1 at cus.cam.ac.uk> wrote:
>
> jacobs at bioinformed.com wrote:
> >
> > > >Unfortunately, that doesn't help, because it is not where the issues
> > > >are.  What I don't know is how much you know about numerical models,
> > > >IEEE 754 in particular, and C99.  You weren't active on the SC22WG14
> > > >reflector, but there were some lurkers.
> >
> > Hand wave, hand wave, hand wave.  [...]
>
> SC22WG14 is the ISO committee that handles C standardisation.  [...]


I'm not asking you to describe SC22WG14 or post detailed technical summaries
of the long and painful road.  I'd like you to post things directly relevant
to Python with footnotes to necessary references.  It is then incumbent on
those that wish to respond to your post to familiarize themselves with the
relevant background material.  However, it is really darn hard to do that
when we don't know what you're trying to fix in Python.  The examples you
show below are a good start in that direction.

> A good place to start: You mentioned earlier that there were some
> > nonsensical things in floatobject.c.  Can you list some of the most
> serious
> > of these?
>
> Well, try the following for a start:
>
> Python 2.4.2 (#1, May  2 2006, 08:28:01)
> [GCC 4.1.0 (SUSE Linux)] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
> >>> a = "NaN"
> >>> b = float(a)
> >>> c = int(b)
> >>> d = (b == b)
> >>> print a, b, c, d
> NaN nan 0 False


Python 2.3.3 (#1, Feb 18 2004, 11:58:04)
> [GCC 2.8.1] on sunos5
> Type "help", "copyright", "credits" or "license" for more information.
> >>> a = "NaN"
> >>> b = float(a)
> >>> c = int(b)
> >>> d = (b == b)
> >>> print a, b, c, d
> NaN NaN 0 True
>
> That demonstrates that the error state is lost by converting to int,
> and that NaN testing isn't reliable.
>


Now we're getting down to business.  There are actually at least three issues
that I see:

1) The string representation of NaN is not standardized across platforms
2) on a sane platform, int(float('NaN')) should raise a ValueError
exception for the int() portion.
3) float('NaN') == float('NaN') should be false, assuming NaN is not a
signaling NaN, by default

If we include Windows:

Python 2.5b1 (r25b1:47027, Jun 20 2006, 09:31:33) [MSC v.1310 32 bit
(Intel)] on win32
Type "copyright", "credits" or "license()" for more information.
>>> a = "NaN"
>>> b = float(a)

Traceback (most recent call last):
  File "<pyshell#1>", line 1, in <module>
    b = float(a)
ValueError: invalid literal for float(): NaN
>>>

So:
  4) in addition to #1, the platform atof sometimes doesn't accept any
conventional spelling of NaN
  5) All of the above likely applies to infinities and +-0

So the open question is how to both define the semantics of Python floating
point operations and to implement them in a way that verifiably works on the
vast majority of platforms without turning the code into a maze of
platform-specific defines, kludges, or maintenance problems waiting to
happen.
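
Just to sketch the shape of a user-level workaround for #1/#4 (parse_float
and the constants below are invented for the example and rely on IEEE 754
hardware; this is not a proposal for floatobject.c):

    _INF = 1e308 * 1e308            # overflows to +Infinity on IEEE 754 boxes
    _NAN = _INF - _INF              # Infinity - Infinity gives a quiet NaN

    def parse_float(s):
        t = s.strip().lower().lstrip('+')
        if t in ('nan', '-nan'):
            return _NAN
        if t in ('inf', 'infinity'):
            return _INF
        if t in ('-inf', '-infinity'):
            return -_INF
        return float(s)             # fall back to the platform's atof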

Thanks,
-Kevin

From fredrik at pythonware.com  Sat Jun 24 19:04:50 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Sat, 24 Jun 2006 19:04:50 +0200
Subject: [Python-Dev] Switch statement
In-Reply-To: <ca471dc20606231155n185e9f27vb4d10603bb39f2c@mail.gmail.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>	<5.1.1.6.0.20060621163853.0392d168@sparrow.telecommunity.com>	<449A7A48.5060404@egenix.com>	<5.1.1.6.0.20060622092032.0338e2f0@sparrow.telecommunity.com>	<5.1.1.6.0.20060622125053.02f3d308@sparrow.telecommunity.com>	<5.1.1.6.0.20060622135620.01ea9bf8@sparrow.telecommunity.com>	<5.1.1.6.0.20060622150243.03b0bd70@sparrow.telecommunity.com>	<5.1.1.6.0.20060622160223.03e35500@sparrow.telecommunity.com>	<ca471dc20606221712x7634c308m673bd8c6e186bb37@mail.gmail.com>	<e7g74s$sdk$1@sea.gmane.org>
	<ca471dc20606231155n185e9f27vb4d10603bb39f2c@mail.gmail.com>
Message-ID: <e7jrbg$749$1@sea.gmane.org>

Guido van Rossum wrote:

 >> just map
>>
>>      switch EXPR:
>>      case E1:
>>          ...
>>      case in E2:
>>          ...
>>      else:
>>          ...
>>
>> to
>>
>>      VAR = EXPR
>>      if VAR == E1:
>>          ...
>>      elif VAR in E2:
>>          ...
>>      else:
>>          ...
>>
>> where VAR is a temporary variable, and case and case-in clauses can be
>> freely mixed, and leave the rest to the code generator.  (we could even
>> allow "switch EXPR [as VAR]" to match a certain other sugar construct).
> 
> This used to be my position. I switched after considering the
> alternatives for what should happen if either the switch expression or
> one or more of the case expressions is unhashable.

I don't see this as much of a problem, really: we can simply restrict 
the optimization to well-known data types ("homogenous" switches using 
integers or strings should cover 99.9% of all practical cases), and then 
add an opcode that uses a separate dispatch object to check if
fast dispatch is possible, and place that before an ordinary if/elif 
sequence.

the dispatch object is created when the function object is created, 
along with default values and statics.  if fast dispatch cannot be used 
for a function instance, the dispatch object is set to None, and the 
dispatch opcode turns into a NOP.

(each switch statement should of course have its own dispatch object).
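
roughly, in plain Python terms, with the dispatch object played by an
ordinary dict (values and branch bodies invented for the example):

    _dispatch = {1: 1, 2: 1, 'spam': 2}    # value -> case number, built once

    def handle(x):
        try:
            case = _dispatch.get(x, 0)     # 0 plays the role of the NOP
        except TypeError:                  # unhashable switch value
            case = 0
        if case == 1:
            return 'small int branch'
        elif case == 2:
            return 'string branch'
        # ordinary if/elif tests for the slow path would go here
        return 'else branch'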

</F>


From pje at telecommunity.com  Sat Jun 24 19:18:23 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Sat, 24 Jun 2006 13:18:23 -0400
Subject: [Python-Dev] Switch statement
In-Reply-To: <e7jrbg$749$1@sea.gmane.org>
References: <ca471dc20606231155n185e9f27vb4d10603bb39f2c@mail.gmail.com>
	<17547.19802.361151.705599@montanaro.dyndns.org>
	<5.1.1.6.0.20060621163853.0392d168@sparrow.telecommunity.com>
	<449A7A48.5060404@egenix.com>
	<5.1.1.6.0.20060622092032.0338e2f0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622125053.02f3d308@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622135620.01ea9bf8@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622150243.03b0bd70@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622160223.03e35500@sparrow.telecommunity.com>
	<ca471dc20606221712x7634c308m673bd8c6e186bb37@mail.gmail.com>
	<e7g74s$sdk$1@sea.gmane.org>
	<ca471dc20606231155n185e9f27vb4d10603bb39f2c@mail.gmail.com>
Message-ID: <5.1.1.6.0.20060624131149.01eb3ea8@sparrow.telecommunity.com>

At 07:04 PM 6/24/2006 +0200, Fredrik Lundh wrote:
>I don't see this as much of a problem, really: we can simply restrict
>the optimization to well-known data types ("homogenous" switches using
>integers or strings should cover 99.9% of all practical cases), and then
>add an opcode that checks uses a separate dispatch object to check if
>fast dispatch is possible, and place that before an ordinary if/elif
>sequence.

What about switches on types?  Things like XML-RPC and JSON want to be able 
to have a fast switch on an object's type and fall back to slower tests 
only for non-common cases.  For that matter, you can build an effective 
multiway isinstance() check using something like:

     for t in obtype.__mro__:
         switch t:
         case int: ...; break
         case str: ...; break
         else:
             continue
     else:
         # not a recognized type

This is essentially what RuleDispatch does in generic functions' dispatch 
trees now, albeit without the benefit of a "switch" statement or opcode.
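
For reference, the closest equivalent today is an explicit dict keyed by
type, walked over __mro__ by hand (the handler names are invented):

     def handle_int(obj): return 'int-like'
     def handle_str(obj): return 'string-like'
     def handle_default(obj): return 'something else'

     handlers = {int: handle_int, str: handle_str}

     def dispatch(obj):
         for t in type(obj).__mro__:
             if t in handlers:
                 return handlers[t](obj)
         return handle_default(obj)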


From rwgk at yahoo.com  Sat Jun 24 19:29:20 2006
From: rwgk at yahoo.com (Ralf W. Grosse-Kunstleve)
Date: Sat, 24 Jun 2006 10:29:20 -0700 (PDT)
Subject: [Python-Dev] ImportWarning flood
In-Reply-To: <20060624150311.29014.711788606.divmod.quotient.9044@ohm>
Message-ID: <20060624172920.6639.qmail@web31512.mail.mud.yahoo.com>

--- Jean-Paul Calderone <exarkun at divmod.com> wrote:
> I think it is safe to say that Twisted is more widely used than anything
> Google has yet released.  Twisted also has a reasonably plausible
> technical reason to dislike this change.  Google has a bunch of engineers
> who, apparently, cannot remember to create an empty __init__.py file in
> some directories sometimes.

Simply adding a note to the ImportError message would solve this problem "just
in time":

>>> import mypackage.foo
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
ImportError: No module named mypackage.foo
    Note that subdirectories are searched for imports only if they contain an
    __init__.py file: http://www.python.org/doc/essays/packages.html



From python-dev at zesty.ca  Sat Jun 24 20:29:18 2006
From: python-dev at zesty.ca (Ka-Ping Yee)
Date: Sat, 24 Jun 2006 13:29:18 -0500 (CDT)
Subject: [Python-Dev] Switch statement
In-Reply-To: <20060623170807.1DFA.JCARLSON@uci.edu>
References: <ca471dc20606231137j440ae028p85ea4407363d1bc2@mail.gmail.com>
	<eaaf21dc0606231436n35b71a55y25a15014d428e8f4@mail.gmail.com>
	<20060623170807.1DFA.JCARLSON@uci.edu>
Message-ID: <Pine.LNX.4.58.0606241315450.17937@server1.LFW.org>

On Fri, 23 Jun 2006, Josiah Carlson wrote:
> This is a good thing, because if switch/case ends up functionally
> identical to if/elif/else, then it has no purpose as a construct.

This doesn't make sense as a rule.

Consider:

    "If x.y ends up functionally identical to getattr(x, 'y'),
     then it has no purpose as a construct."

    "If print x ends up functionally identical to import sys;
     sys.stdout.write(str(x) + '\n'), then it has no purpose as
     a construct."

What matters is not whether it's *functionally* identical.  What
matters is whether it makes more sense to the reader and has a
meaning that is likely to be what the writer wanted.

"Evaluate the switch expression just once" is a semantic win.

"Evaluate the switch expression just once, but throw an exception
if the result is not hashable" is a weaker semantic win.  (How
often is that what the writer is thinking about?)

"Throw an exception at compile time if the cases overlap" is also
a weaker semantic win.  (How often is this an actual mistake that
the writer wants to be caught at compile time?)

"Use the case values computed at compile time, not at runtime" doesn't
seem like much of a win.  (How often will this be what the writer
intended, as opposed to a surprise hiding in the bushes?)


-- ?!ng

From jcarlson at uci.edu  Sat Jun 24 21:57:19 2006
From: jcarlson at uci.edu (Josiah Carlson)
Date: Sat, 24 Jun 2006 12:57:19 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <Pine.LNX.4.58.0606241315450.17937@server1.LFW.org>
References: <20060623170807.1DFA.JCARLSON@uci.edu>
	<Pine.LNX.4.58.0606241315450.17937@server1.LFW.org>
Message-ID: <20060624123919.1E06.JCARLSON@uci.edu>


Ka-Ping Yee <python-dev at zesty.ca> wrote:
> On Fri, 23 Jun 2006, Josiah Carlson wrote:
> > This is a good thing, because if switch/case ends up functionally
> > identical to if/elif/else, then it has no purpose as a construct.
> 
> This doesn't make sense as a rule.
> 
> Consider:
> 
>     "If x.y ends up functionally identical to getattr(x, 'y'),
>      then it has no purpose as a construct."
> 
>     "If print x ends up functionally identical to import sys;
>      sys.stdout.write(str(x) + '\n'), then it has no purpose as
>      a construct."

I agree with you completely, it doesn't make sense as a rule, but that
was not its intent.  Note that I chose specific values of X and Y in "if
X is functionally identical to Y, then it has no purpose as a construct"
such that it did make sense.


> What matters is not whether it's *functionally* identical.  What
> matters is whether it makes more sense to the reader and has a
> meaning that is likely to be what the writer wanted.
> 
> "Evaluate the switch expression just once" is a semantic win.
> 
> "Evaluate the switch expression just once, but throw an exception
> if the result is not hashable" is a weaker semantic win.  (How
> often is that what the writer is thinking about?)
> 
> "Throw an exception at compile time if the cases overlap" is also
> a weaker semantic win.  (How often is this an actual mistake that
> the writer wants to be caught at compile time?)
> 
> "Use the case values computed at compile time, not at runtime" doesn't
> seem like much of a win.  (How often will this be what the writer
> intended, as opposed to a surprise hiding in the bushes?)

The reasons by themselves don't seem to make sense, until you look at
them in the scope from which the decisions were made.  Just like
the word "Excelsior" makes no sense until you hear "Minnesota".

Those final three rules, when seen in the context of the rest of the
conversation, and with the understanding that one of the motivating
purposes is to improve execution time, do offer methods and mechanisms
to answer those motivations.


 - Josiah


From rrr at ronadam.com  Sat Jun 24 22:08:21 2006
From: rrr at ronadam.com (Ron Adam)
Date: Sat, 24 Jun 2006 15:08:21 -0500
Subject: [Python-Dev] Switch statement
In-Reply-To: <e7jrbg$749$1@sea.gmane.org>
References: <17547.19802.361151.705599@montanaro.dyndns.org>	<5.1.1.6.0.20060621163853.0392d168@sparrow.telecommunity.com>	<449A7A48.5060404@egenix.com>	<5.1.1.6.0.20060622092032.0338e2f0@sparrow.telecommunity.com>	<5.1.1.6.0.20060622125053.02f3d308@sparrow.telecommunity.com>	<5.1.1.6.0.20060622135620.01ea9bf8@sparrow.telecommunity.com>	<5.1.1.6.0.20060622150243.03b0bd70@sparrow.telecommunity.com>	<5.1.1.6.0.20060622160223.03e35500@sparrow.telecommunity.com>	<ca471dc20606221712x7634c308m673bd8c6e186bb37@mail.gmail.com>	<e7g74s$sdk$1@sea.gmane.org>	<ca471dc20606231155n185e9f27vb4d10603bb39f2c@mail.gmail.com>
	<e7jrbg$749$1@sea.gmane.org>
Message-ID: <449D9BB5.5090504@ronadam.com>

Fredrik Lundh wrote:
> Guido van Rossum wrote:
> 
>  >> just map
>>>      switch EXPR:
>>>      case E1:
>>>          ...
>>>      case in E2:
>>>          ...
>>>      else:
>>>          ...
>>>
>>> to
>>>
>>>      VAR = EXPR
>>>      if VAR == E1:
>>>          ...
>>>      elif VAR in E2:
>>>          ...
>>>      else:
>>>          ...
>>>
>>> where VAR is a temporary variable, and case and case-in clauses can be
>>> freely mixed, and leave the rest to the code generator.  (we could even
>>> allow "switch EXPR [as VAR]" to match a certain other sugar construct).

>> This used to be my position. I switched after considering the
>> alternatives for what should happen if either the switch expression or
>> one or more of the case expressions is unhashable.

> I don't see this as much of a problem, really: we can simply restrict 
> the optimization to well-known data types ("homogenous" switches using 
> integers or strings should cover 99.9% of all practical cases)

+1  This would keep it simple to use.


A possibility that hasn't been mentioned yet is to supply a precomputed 
jump table to a switch explicitly.

     table = {expr1:1, expr2:2, ... }

     for value in data:
         switch table[value]:
             case 1: ...
             case 2: ...
             ...
             else: ...

(I prefer indented cases, but that's not the point here.  I can get used
to them not being indented.)

It is an easy matter to lift evaluation of the switch table expressions 
out of inner loops or even out of functions. (if it's needed of course)
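
For example, with an ordinary dict lookup standing in for the proposed
switch (names invented):

     # Table built once at module level, so neither the loop body nor
     # repeated calls to process() re-evaluate the case expressions.
     TABLE = {'alpha': 1, 'beta': 2, 'gamma': 3}

     def process(data):
         for value in data:
             branch = TABLE.get(value, 0)
             # ... dispatch on 'branch' here, e.g. with the switch shown above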


Or an alternate form may allow a pre-evaluated jump table to be 
explicitly substituted directly at the time of use. (would this be 
possible?)

     def switcher(value, table):
        switch value, table:
           case 1: ...
           case 2: ...
           ...
           else: ...


Cheers,
    Ron

From raymond.hettinger at verizon.net  Sun Jun 25 00:49:20 2006
From: raymond.hettinger at verizon.net (Raymond Hettinger)
Date: Sat, 24 Jun 2006 15:49:20 -0700
Subject: [Python-Dev] Simple Switch statement
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<5.1.1.6.0.20060621163853.0392d168@sparrow.telecommunity.com>
	<449A7A48.5060404@egenix.com>
	<5.1.1.6.0.20060622092032.0338e2f0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622125053.02f3d308@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622135620.01ea9bf8@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622150243.03b0bd70@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622160223.03e35500@sparrow.telecommunity.com>
	<ca471dc20606221712x7634c308m673bd8c6e186bb37@mail.gmail.com>
	<e7g74s$sdk$1@sea.gmane.org>
	<ca471dc20606231155n185e9f27vb4d10603bb39f2c@mail.gmail.com><e7jrbg$749$1@sea.gmane.org>
	<449D9BB5.5090504@ronadam.com>
Message-ID: <024c01c697e0$6fa2b340$dc00000a@RaymondLaptop1>

From what I can see, almost everyone wants a switch statement, though perhaps
for different reasons.

The main points of contention are 1) a non-ambiguous syntax for assigning 
multiple cases to a single block of code, 2) how to compile variables as 
constants in a case statement, and 3) handling overlapping cases.

Here's a simple approach that will provide most of the benefit without trying to 
overdo it:


    switch f(x):          # any expression is allowable here, but raises an
                          # exception if the result is not hashable
    case 1: g()           # matches when f(x)==1
    case 2,3 : h()        # matches when f(x) in (2,3)
    case 1: i()           # won't ever match because the first case 1 wins
    case (4,5), 6: j()    # matches when f(x) in ((4,5), 6)
    case "bingo": k()     # matches when f(x) in ("bingo",)
    default:   l()        # matches if nothing else does

Though implemented as a hash table, this would execute as if written:

    fx = f(x)
    hash(fx)
    if fx in (1,):
        g()
    elif fx in (2,3):
        h()
    elif fx in (1,):
        i()
    elif fx in ((4,5), 6):
        j()
    elif fx in ("bingo",):
        k()
    else:
        l()

The result of f(x) should be hashable or an exception is raised.
Case values must be ints, strings, or tuples of ints or strings.
No expressions are allowed in cases.
Since a hash table is used, the fx value must support __hash__ and __eq__,
but not expect multiple __eq__ tests as in the elif version.
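
A rough model of that implementation in present-day Python, with small ints
standing in for the compiler's jump targets and reusing the
g()/h()/j()/k()/l() calls from the example above:

    _table = {}
    for target, keys in enumerate([(1,), (2, 3), ((4, 5), 6), ('bingo',)]):
        for key in keys:
            _table.setdefault(key, target)   # the first "case 1" wins

    def run(fx):
        target = _table.get(fx, -1)          # TypeError here if fx is unhashable
        if target == 0:
            g()
        elif target == 1:
            h()
        elif target == 2:
            j()
        elif target == 3:
            k()
        else:
            l()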

I've bypassed the constantification issue.  This comes up throughout Python
and is not unique to the switch statement.  If someone wants a "static" or
"const" declaration, it should be evaluated separately on its own merits.

At first, I was bothered by not supporting sre style use cases with imported
codes; however, I noticed that sre's imported constants already have values that
correspond to their variable names and that that commonplace approach
makes it easy to write fast switch-case suites:

    def _compile(code, pattern, flags):
        # internal: compile a (sub)pattern
        for op, av in pattern:
            switch op:
                case 'literal', 'not_literal':
                    if flags & SRE_FLAG_IGNORECASE:
                        emit(OPCODES[OP_IGNORE[op]])
                        emit(_sre.getlower(av, flags))
                    else:
                        emit(OPCODES[op])
                        emit(av)
                case 'in':
                    if flags & SRE_FLAG_IGNORECASE:
                        emit(OPCODES[OP_IGNORE[op]])
                        def fixup(literal, flags=flags):
                            return _sre.getlower(literal, flags)
                    else:
                        emit(OPCODES[op])
                        fixup = _identityfunction
                    skip = _len(code); emit(0)
                    _compile_charset(av, flags, code, fixup)
                    code[skip] = _len(code) - skip
                case 'any':
                    if flags & SRE_FLAG_DOTALL:
                        emit(OPCODES[ANY_ALL])
                    else:
                        emit(OPCODES[ANY])
                case 'repeat', 'min_repeat', 'max_repeat':
                    . . .

When the constants are mapped to integers instead of strings, it is no
burden to supply a reverse mapping like we already do in opcode.py.
This commonplace setup also makes it easy to write fast switch-case suites:

    from opcode import opname

    def calc_jump_statistics(f):
        reljumps = absjumps = 0
        for opcode, oparg in gencodes(f.func_code.co_code):
            switch opname[opcode]:
                case 'JUMP_FORWARD', 'JUMP_IF_FALSE', 'JUMP_IF_TRUE':
                    reljumps +=1
                case 'JUMP_ABSOLUTE', 'CONTINUE_LOOP':
                    absjumps += 1
                  . . .

So, that is it, my proposal for simple switch statements with a straight-forward
implementation, fast execution, simply explained behavior, and applicability
to the most important use cases.



Raymond


P.S. For the sre case, we get a great benefit from using strings.  Since they
are all interned at compile time and have their hash values computed no more
than once, the dispatch table will never have to actually calculate a hash and
the full string comparison will be bypassed because "identity implies equality".
That's nice.  The code will execute clean and fast.  AND we get readability
improvements too.  Not bad.


From pje at telecommunity.com  Sun Jun 25 00:54:57 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Sat, 24 Jun 2006 18:54:57 -0400
Subject: [Python-Dev] Simple Switch statement
In-Reply-To: <024c01c697e0$6fa2b340$dc00000a@RaymondLaptop1>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<5.1.1.6.0.20060621163853.0392d168@sparrow.telecommunity.com>
	<449A7A48.5060404@egenix.com>
	<5.1.1.6.0.20060622092032.0338e2f0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622125053.02f3d308@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622135620.01ea9bf8@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622150243.03b0bd70@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622160223.03e35500@sparrow.telecommunity.com>
	<ca471dc20606221712x7634c308m673bd8c6e186bb37@mail.gmail.com>
	<e7g74s$sdk$1@sea.gmane.org>
	<ca471dc20606231155n185e9f27vb4d10603bb39f2c@mail.gmail.com>
	<e7jrbg$749$1@sea.gmane.org> <449D9BB5.5090504@ronadam.com>
Message-ID: <5.1.1.6.0.20060624185125.01eaac70@sparrow.telecommunity.com>

At 03:49 PM 6/24/2006 -0700, Raymond Hettinger wrote:
>Cases values must be ints, strings, or tuples of ints or strings.

-1.  There is no reason to restrict the types in this fashion.  Even if you 
were trying to ensure marshallability, you could still include unicode and 
longs.  However, there isn't any need for marshallability here, and I would 
like to be able to use switches on types, enumerations, and the like.


From gjcarneiro at gmail.com  Sun Jun 25 02:09:09 2006
From: gjcarneiro at gmail.com (Gustavo Carneiro)
Date: Sun, 25 Jun 2006 01:09:09 +0100
Subject: [Python-Dev] PyObject_CallFunction and 'N' format char
Message-ID: <a467ca4f0606241709h348229d5neda0e873649bdaa7@mail.gmail.com>

  Sorry this is slightly offtopic, but I think it's important...

  According to recent experiments tracking down a memory leak, it
seems that PyObject_CallFunction(func, "N", object) and
PyObject_CallFunction(func, "O", object) are doing exactly the same
thing.  However, documentation says "The C arguments are described
using a   Py_BuildValue() style format string.".  And of course
Py_BuildValue consumes one object reference, according to the
documentation and practice.  However, PyObject_CallFunction does _not_
consume such an object reference, contrary to what I believed for
years.  God knows how many leaks I may have introduced in my
bindings... :|

  Any comments?

From raymond.hettinger at verizon.net  Sun Jun 25 02:30:00 2006
From: raymond.hettinger at verizon.net (Raymond Hettinger)
Date: Sat, 24 Jun 2006 17:30:00 -0700
Subject: [Python-Dev] Simple Switch statement
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<5.1.1.6.0.20060621163853.0392d168@sparrow.telecommunity.com>
	<449A7A48.5060404@egenix.com>
	<5.1.1.6.0.20060622092032.0338e2f0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622125053.02f3d308@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622135620.01ea9bf8@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622150243.03b0bd70@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622160223.03e35500@sparrow.telecommunity.com>
	<ca471dc20606221712x7634c308m673bd8c6e186bb37@mail.gmail.com>
	<e7g74s$sdk$1@sea.gmane.org>
	<ca471dc20606231155n185e9f27vb4d10603bb39f2c@mail.gmail.com>
	<e7jrbg$749$1@sea.gmane.org> <449D9BB5.5090504@ronadam.com>
	<5.1.1.6.0.20060624185125.01eaac70@sparrow.telecommunity.com>
Message-ID: <006501c697ee$7f433780$dc00000a@RaymondLaptop1>

[Phillip Eby]
> I would like to be able to use switches on types, enumerations, and the like.

Be careful about wanting everything and getting nothing.
My proposal is the simplest thing that gets the job done for key use cases
found in real code.
Also, it is defined tightly enough to allow room for growth and elaboration
over time.
Good luck proposing some alternative that is explainable, has no hidden
surprises, has an easy implementation, and allows fast hash-table style
dispatch.
Besides, if you want to switch on other types, it is trivial to include a
reverse mapping (like that in the opcode.py example).  Reverse mappings are
easy to build and easy to read:


# enumeration example
colormap = {}
for code, name in enumerate('RED ORANGE YELLOW GREEN BLUE INDIGO MAGENTA'.split()):
    globals()[name] = code
    colormap[code] = name

def colormixer(color):
    switch colormap[color]:
        case 'RED', 'YELLOW', 'BLUE':
            handle_primary()
        case 'MAGENTA':
            get_another_color()
        default:
            handle_rest()


colormixer(RED)
colormixer(ORANGE)



Raymond






From aahz at pythoncraft.com  Sun Jun 25 03:34:41 2006
From: aahz at pythoncraft.com (Aahz)
Date: Sat, 24 Jun 2006 18:34:41 -0700
Subject: [Python-Dev] Simple Switch statement
In-Reply-To: <024c01c697e0$6fa2b340$dc00000a@RaymondLaptop1>
References: <449A7A48.5060404@egenix.com>
	<5.1.1.6.0.20060622092032.0338e2f0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622125053.02f3d308@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622135620.01ea9bf8@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622150243.03b0bd70@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622160223.03e35500@sparrow.telecommunity.com>
	<ca471dc20606221712x7634c308m673bd8c6e186bb37@mail.gmail.com>
	<e7g74s$sdk$1@sea.gmane.org> <449D9BB5.5090504@ronadam.com>
	<024c01c697e0$6fa2b340$dc00000a@RaymondLaptop1>
Message-ID: <20060625013440.GA23411@panix.com>

On Sat, Jun 24, 2006, Raymond Hettinger wrote:
>
> So, that is it, my proposal for simple switch statements with a
> straight-forward implementation, fast execution, simply explained
> behavior, and applicability to to the most important use cases.

+1

I've been trying to write a response to these threads.  I don't
particularly like what looks like an attempt to shove together too many
different features into a single package.  Raymond's proposal gives
Python the switch statement people have been demanding while leaving room
for the improvements that have been suggested over a plain switch.

Phillip's point about longs and Unicode is valid, but easily addressed by
limiting cases to "hashable literal" (though we might want to explicitly
exclude floats).
-- 
Aahz (aahz at pythoncraft.com)           <*>         http://www.pythoncraft.com/

"I saw `cout' being shifted "Hello world" times to the left and stopped
right there."  --Steve Gonedes

From pje at telecommunity.com  Sun Jun 25 03:45:14 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Sat, 24 Jun 2006 21:45:14 -0400
Subject: [Python-Dev] Simple Switch statement
In-Reply-To: <006501c697ee$7f433780$dc00000a@RaymondLaptop1>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<5.1.1.6.0.20060621163853.0392d168@sparrow.telecommunity.com>
	<449A7A48.5060404@egenix.com>
	<5.1.1.6.0.20060622092032.0338e2f0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622125053.02f3d308@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622135620.01ea9bf8@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622150243.03b0bd70@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622160223.03e35500@sparrow.telecommunity.com>
	<ca471dc20606221712x7634c308m673bd8c6e186bb37@mail.gmail.com>
	<e7g74s$sdk$1@sea.gmane.org>
	<ca471dc20606231155n185e9f27vb4d10603bb39f2c@mail.gmail.com>
	<e7jrbg$749$1@sea.gmane.org> <449D9BB5.5090504@ronadam.com>
	<5.1.1.6.0.20060624185125.01eaac70@sparrow.telecommunity.com>
Message-ID: <5.1.1.6.0.20060624213840.037a5c58@sparrow.telecommunity.com>

At 05:30 PM 6/24/2006 -0700, Raymond Hettinger wrote:
>[Phillip Eby]
> > I would like to be able to use switches on types, enumerations, and the 
> like.
>
>Be careful about wanting everything and getting nothing.
>My proposal is the simplest thing that gets the job done for key use cases 
>found
>in real code.

It's ignoring at least symbolic constants and types -- which are certainly 
"key use cases found in real code".

Besides which, this is Python.  We don't select a bunch of built-in types 
and say "these are the only types that work".  Instead, we have protocols 
(like __hash__ and __eq__) that any object may implement.

If you don't want expressions to be implicitly lifted to function 
definition time, you'd probably be better off arguing to require the use of 
explicit 'static' for non-literal case expressions.

(Your reverse mapping, by the way, is a non-starter -- it makes the code 
considerably more verbose and less obvious than a switch statement, even if 
every 'case' has to be decorated with 'static'.)


From ncoghlan at gmail.com  Sun Jun 25 03:56:20 2006
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 25 Jun 2006 11:56:20 +1000
Subject: [Python-Dev] Simple Switch statement
In-Reply-To: <006501c697ee$7f433780$dc00000a@RaymondLaptop1>
References: <17547.19802.361151.705599@montanaro.dyndns.org>	<5.1.1.6.0.20060621163853.0392d168@sparrow.telecommunity.com>	<449A7A48.5060404@egenix.com>	<5.1.1.6.0.20060622092032.0338e2f0@sparrow.telecommunity.com>	<5.1.1.6.0.20060622125053.02f3d308@sparrow.telecommunity.com>	<5.1.1.6.0.20060622135620.01ea9bf8@sparrow.telecommunity.com>	<5.1.1.6.0.20060622150243.03b0bd70@sparrow.telecommunity.com>	<5.1.1.6.0.20060622160223.03e35500@sparrow.telecommunity.com>	<ca471dc20606221712x7634c308m673bd8c6e186bb37@mail.gmail.com>	<e7g74s$sdk$1@sea.gmane.org>	<ca471dc20606231155n185e9f27vb4d10603bb39f2c@mail.gmail.com>	<e7jrbg$749$1@sea.gmane.org>
	<449D9BB5.5090504@ronadam.com>	<5.1.1.6.0.20060624185125.01eaac70@sparrow.telecommunity.com>
	<006501c697ee$7f433780$dc00000a@RaymondLaptop1>
Message-ID: <449DED44.7030202@gmail.com>

Raymond Hettinger wrote:
> [Phillip Eby]
>> I would like to be able to use switches on types, enumerations, and the like.
> 
> Be careful about wanting everything and getting nothing.
> My proposal is the simplest thing that gets the job done for key use cases found 
> in real code.
> Also, it is defined tightly enough to allow room for growth and elaboration over 
> time.
> Good luck proposing some alternative that is explainable, has no hidden 
> surprises,
> has an easy implementation, and allows fast hash-table style dispatch.

I like it!

You could actually make it even simpler by having the initial implementation 
only permit strings for the cases.

Then the concept is:
   1. Each case in the switch is given one or more string names
   2. The same name cannot appear more than once in a single switch statement
   3. A case is executed when the switch value matches one of its names
   4. The else clause is executed if the switch value does not match any case
   5. Case names use string-literal syntax to permit later expansion
   6. Switching on non-strings requires an auxiliary lookup

The advantage over the status quo is that instead of having to identify code 
directly (as in a function dispatch table), the auxiliary lookup only has to 
identify the name of the appropriate case.

And it still leaves the door open for all the other features being considered:
   - literals other than strings in the cases (integers, tuples)
   - arbitrary expressions in the cases (needs 'static' expressions first)
   - sequence unpacking using 'in' or '*'

> Besides, if you want to switch on other types, it is trivial to include a 
> reverse mapping
> (like that in the opcode.py example).  Reverse mappings are to build and easy to 
> read:

You can even build the jump table after the fact if everything you want to 
switch on is an existing global or builtin variable:

def switch_table(*args):
       # Build a string switch table for a set of arguments
       # All arguments must exist in the current global namespace
       # All arguments must be hashable
       all_items = globals().items()
       all_items.extend(__builtins__.__dict__.items())
       table = {}
       for obj in args:
           for name, value in all_items:
               if obj is value:
                   table[obj] = name
       return table

 >>> typemap = switch_table(float, complex, int, long, str, unicode, Decimal)
 >>> pprint(typemap)
{<class 'decimal.Decimal'>: 'Decimal',
  <type 'complex'>: 'complex',
  <type 'float'>: 'float',
  <type 'int'>: 'int',
  <type 'long'>: 'long',
  <type 'str'>: 'str',
  <type 'unicode'>: 'unicode'}

Armed with that switch table, you can then do:

   def fast_dispatch(self, other):
       switch typemap[other.__class__]:
       case 'Decimal':
           self.handle_decimal(other)
       case 'int', 'long':
           self.handle_integer(other)
       case 'float':
           self.handle_float(other)
       case 'complex':
           self.handle_complex(other)
       case 'str', 'unicode':
           self.handle_string(other)
       else:
           self.handle_any(other)



-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------
             http://www.boredomandlaziness.org

From rrr at ronadam.com  Sun Jun 25 04:06:33 2006
From: rrr at ronadam.com (Ron Adam)
Date: Sat, 24 Jun 2006 21:06:33 -0500
Subject: [Python-Dev] Simple Switch statement
In-Reply-To: <024c01c697e0$6fa2b340$dc00000a@RaymondLaptop1>
References: <17547.19802.361151.705599@montanaro.dyndns.org>	<5.1.1.6.0.20060621163853.0392d168@sparrow.telecommunity.com>	<449A7A48.5060404@egenix.com>	<5.1.1.6.0.20060622092032.0338e2f0@sparrow.telecommunity.com>	<5.1.1.6.0.20060622125053.02f3d308@sparrow.telecommunity.com>	<5.1.1.6.0.20060622135620.01ea9bf8@sparrow.telecommunity.com>	<5.1.1.6.0.20060622150243.03b0bd70@sparrow.telecommunity.com>	<5.1.1.6.0.20060622160223.03e35500@sparrow.telecommunity.com>	<ca471dc20606221712x7634c308m673bd8c6e186bb37@mail.gmail.com>	<e7g74s$sdk$1@sea.gmane.org>	<ca471dc20606231155n185e9f27vb4d10603bb39f2c@mail.gmail.com><e7jrbg$749$1@sea.gmane.org>	<449D9BB5.5090504@ronadam.com>
	<024c01c697e0$6fa2b340$dc00000a@RaymondLaptop1>
Message-ID: <449DEFA9.6090702@ronadam.com>

Raymond Hettinger wrote:
> From what I can see, almost everyone wants a switch statement, though perhaps
> for different reasons.
> 
> The main points of contention are 1) a non-ambiguous syntax for assigning 
> multiple cases to a single block of code, 2) how to compile variables as 
> constants in a case statement, and 3) handling overlapping cases.
> 
> Here's a simple approach that will provide most of the benefit without trying to 
> overdo it:

Looks good to me.


>     switch f(x):          # any expression is allowable here but raises an 
> exception if the result is not hashable
>     case 1: g()           # matches when f(x)==1
>     case 2,3 : h()        # matches when f(x) in (2,3)
>     case 1: i()           # won't ever match because the first case 1 wins
>     case (4,5), 6: j()    # matches when f(x) in ((4,5), 6)
>     case "bingo": k()     # matches when f(x) in ("bingo",)
>     default:   l()        # matches if nothing else does
> 
> Though implemented as a hash table, this would execute as if written:
> 
>     fx = f(x)
>     hash(fx)
>     if fx in (1,):
>         g()
>     elif fx in (2,3):
>         h()
>     elif fx in (1,):
>         i()
>     elif fx in ((4,5), 6):
>         j()
>     elif fx in ("bingo",):
>         k()
>     else:
>         l()
> 
> The result of f(x) should be hashable or an exception is raised.
> Cases values must be ints, strings, or tuples of ints or strings.
> No expressions are allowed in cases.
> Since a hash table is used, the fx value must support __hash__ and __eq__,
> but not expect multiple __eq__ tests as in the elif version.
> 
> I've bypassed the constantification issue.  The comes-up throughout Python
> and is not unique to the switch statement.  If someone wants a "static" or
> "const" declaration, it should be evaluated separately on its own merits.

Yes, I agree.


> When the constants are mapped to integers instead of strings, it is no
> burden to supply a reverse mapping like we already do in opcode.py.
> This commonplace setup also makes it easy to write fast switch-case suites:
> 
>     from opcode import opmap
> 
>     def calc_jump_statistics(f):
>         reljumps = absjumps = 0
>         for opcode, oparg in gencodes(f.func_code.co_code):
>             switch opmap[opcode]:
>                 case 'JUMP_FORWARD', 'JUMP_IF_FALSE', 'JUMP_IF_TRUE':
>                     reljumps +=1
>                 case 'JUMP_ABSOLUTE', 'CONTINUE_LOOP':
>                     absjumps += 1
>                   . . .
> 
> So, that is it, my proposal for simple switch statements with a straight-forward
> implementation, fast execution, simply explained behavior, and applicability to
> to the most important use cases.

Just what I was looking for! +1

I happen to like simple modular pieces that, when combined, add up to more
than either does alone, which I believe is the case here when using mappings
with switches.  This type of synergy is common in Python, and I have no
problem using a separate lookup map to do early and/or more complex
evaluations for cases.

Cheers,
    Ron



> Raymond
> 
> 
> P.S. For the sre case, we get a great benefit from using strings.  Since they 
> are
> all interned at compile time and have their hash values computed no more than
> once, the dispatch table will never have to actually calculate a hash and the
> full string comparison will be bypassed because "identity implies equality".
> That's nice.  The code will execute clean and fast.  AND we get readability
> improvements too.  Not bad.



From ncoghlan at gmail.com  Sun Jun 25 04:12:45 2006
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 25 Jun 2006 12:12:45 +1000
Subject: [Python-Dev] Simple Switch statement
In-Reply-To: <5.1.1.6.0.20060624213840.037a5c58@sparrow.telecommunity.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>	<5.1.1.6.0.20060621163853.0392d168@sparrow.telecommunity.com>	<449A7A48.5060404@egenix.com>	<5.1.1.6.0.20060622092032.0338e2f0@sparrow.telecommunity.com>	<5.1.1.6.0.20060622125053.02f3d308@sparrow.telecommunity.com>	<5.1.1.6.0.20060622135620.01ea9bf8@sparrow.telecommunity.com>	<5.1.1.6.0.20060622150243.03b0bd70@sparrow.telecommunity.com>	<5.1.1.6.0.20060622160223.03e35500@sparrow.telecommunity.com>	<ca471dc20606221712x7634c308m673bd8c6e186bb37@mail.gmail.com>	<e7g74s$sdk$1@sea.gmane.org>	<ca471dc20606231155n185e9f27vb4d10603bb39f2c@mail.gmail.com>	<e7jrbg$749$1@sea.gmane.org>
	<449D9BB5.5090504@ronadam.com>	<5.1.1.6.0.20060624185125.01eaac70@sparrow.telecommunity.com>
	<5.1.1.6.0.20060624213840.037a5c58@sparrow.telecommunity.com>
Message-ID: <449DF11D.5060901@gmail.com>

Phillip J. Eby wrote:
> At 05:30 PM 6/24/2006 -0700, Raymond Hettinger wrote:
>> [Phillip Eby]
>>> I would like to be able to use switches on types, enumerations, and the 
>> like.
>>
>> Be careful about wanting everything and getting nothing.
>> My proposal is the simplest thing that gets the job done for key use cases 
>> found
>> in real code.
> 
> It's ignoring at least symbolic constants and types -- which are certainly 
> "key use cases found in real code".

Raymond's idea is a step on the road, not necessarily the endpoint.  Its
cleverness lies in the fact that it removes the dependency between getting a 
switch statement that will help with the standard library's current use cases 
and getting static expressions.

Being able to build a dispatch table as "hashable object -> case name" instead 
of having to build it as "hashable object -> callable object" is a significant 
improvement over the status quo, even if it doesn't solve everything. It 
doesn't get rid of the need for the separate dispatch table, but it does 
eliminate the need to turn every case into a separate callable, and it also 
allows the cases to modify the function's local namespace.
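
For comparison, a rough sketch of the status quo being described (the names
are invented): each case has to be factored out into its own callable, and
those callables cannot see or rebind the locals of the function doing the
dispatching.

    def handle_red():
        return 'stop'

    def handle_green():
        return 'go'

    HANDLERS = {'red': handle_red, 'green': handle_green}

    def react(colour):
        # O(1) dispatch, but every case lives in a separate function
        # with its own namespace.
        handler = HANDLERS.get(colour)
        if handler is None:
            return 'ignore'
        return handler()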

Note that, *if* static expressions or an equivalent are added later, there is 
nothing preventing them being integrated (implicitly or otherwise) into 
Raymond's simplified switch statement.

The simplified proposal breaks the current discussion into two separately 
PEP-able concepts:
   a. add a literals-only switch statement for fast local dispatch (PEP 275)
   b. add the ability to designate code for once-only evaluation

Removing the limitations on the initial version of the switch statement would 
then be one of the motivating use cases for the second PEP (which would be a 
new PEP to thrash out whether such evaluation should be at first execution 
time so it works everywhere, or at function definition time, so it only works 
at all in functions and behaves very surprisingly if inside a loop or 
conditional statement).

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------
             http://www.boredomandlaziness.org

From talin at acm.org  Sun Jun 25 06:31:57 2006
From: talin at acm.org (Talin)
Date: Sat, 24 Jun 2006 21:31:57 -0700
Subject: [Python-Dev] Alternatives to switch?
Message-ID: <449E11BD.7010603@acm.org>

At first I was pretty excited about the switch proposals, but having 
read the various discussions I have to say that my enthusiasm has cooled 
quite a bit. The proposals currently being put forward carry enough caveats 
and restrictions to make the statement far less useful than I had originally 
hoped.

In fact, I'd like to point out something that hasn't been brought up, 
which is that in many cases having a closure rebind the switch cases 
defeats the purpose of the thing. For example:

    def outer():
       def inner(x):
          switch(x):
          case 1: ...
          case 2: ...
          case 3: ...

       return inner

If the switch cases are bound at the time that 'inner' is defined, it 
means that the hash table will be rebuilt each time 'outer' is called. 
But what if 'inner' is only intended to be used once? It means that the 
performance advantage of switch is completely negated. On the other 
hand, if 'inner' is intended to be used many times, then 'switch' is a 
win. But the compiler has no idea which of these two cases is true.
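
A rough sketch of that trade-off with plain dicts (the values are invented):
building the table inside 'outer' pays the construction cost on every call,
while hoisting it to module level pays it only once but gives up any chance
of binding the cases per call.

    def outer_rebuilds():
        # table is rebuilt every time outer_rebuilds() is called
        table = {1: 'one', 2: 'two', 3: 'three'}
        def inner(x):
            return table.get(x, 'other')
        return inner

    # built once, at import time
    _TABLE = {1: 'one', 2: 'two', 3: 'three'}

    def outer_shared():
        def inner(x):
            return _TABLE.get(x, 'other')
        return inner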

I want to try and think out of the box here, and ask the question what 
exactly we wanted a switch statement for, and if there are any other 
ways to approach the problem.

Switch statements are used to efficiently dispatch based on a single 
value to a large number of alternative code paths. The primary uses that 
have been put forward are in interpreting parse trees (the 'sre' example 
is a special case of this) and pickling. In fact, I would say that these 
two use cases alone justify the need for some improvement to the 
language, since both are popular design patterns, and both are somewhat 
ill-served by the current limits of the language.

Parse trees and pickling can, in fact, be considered as two examples of 
a broader category of "external interpretation of an object graph". By 
"external interpretation" I mean that the inspection and transformation 
of the data is not done via method calls on the individual objects, but 
rather by code outside of the object graph that recognizes individual 
object types or values and takes action based on that.

This type of architectural pattern manifests in nearly every sub-branch 
of software engineering, and usually appears when you have a complex 
graph of objects where it is inadvisable, for one reason or another, to 
put the behavior in the objects themselves. There are several possible 
reasons why this might be so. Perhaps the operations involve the 
relationships between the objects rather than the objects themselves 
(this is the parse tree case.) Or perhaps for reasons of modularity, it 
is desired that the objects not have built-in knowledge of the type of 
operation being performed - so that, for example, you can write several 
different kinds of serializers for an object graph without having to 
have the individual objects have special understanding of each different 
serializer type. This is the pickle case.

I'd like to propose that we consider this class of problems (external 
interpretation of an object graph) as the 'reference' use case for 
discussions of the merits of the switch statement, and that evaluation of 
the merits of language changes be compared against this reference. Here 
are my reasons for suggesting this:

   -- The class is broad and encompasses a large set of practical, 
real-world applications.
   -- The class is not well-served by 'if-elif-else' dispatching styles.
   -- There have been few, if any, use cases in support of a switch 
statement that don't fall within this class.

So how does a switch statement help with this problem? Currently in 
Python, there are a limited number of ways to do N-way dispatching:

   -- if-elif-else chains
   -- method overloading
   -- dicts/arrays of function or method pointers
   -- exotic and weird solutions such as using try/except as a 
dispatching mechanism.

(I can't think of any others, but I am sure I missed something.)

We've already discussed the issues with if-elif-else chains, in 
particular the fact that they have O(N) performance instead of O(1).
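
As a reminder of what that difference looks like in today's Python (the
values are made up): the chain evaluates each test in turn, while the dict
does a single hash lookup regardless of the number of cases.

    def classify_chain(code):
        # O(N): tests are evaluated one after another until one matches
        if code == 1:
            return 'add'
        elif code == 2:
            return 'sub'
        elif code == 3:
            return 'mul'
        else:
            return 'unknown'

    _CLASSIFY = {1: 'add', 2: 'sub', 3: 'mul'}

    def classify_dict(code):
        # O(1): one hash lookup, however many cases there are
        return _CLASSIFY.get(code, 'unknown')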

The next two options both have in common the fact that they require the 
dispatch to go through a function call. This means that you are paying 
for the (allegedly expensive) Python function dispatch overhead, plus 
you no longer have access to any local variables which happened to be in 
scope when the dispatch occurred.

It seems to me that the desire for a switch statement is a desire to get 
around those two limitations - in other words, if function calls were 
cheap, and there was an easy way to do dynamic scoping so that called 
functions could access their caller's variables, then there wouldn't be 
nearly as much of a desire for a switch statement.

For example, one might do a pickling function along these lines:

    dispatch_table = None
    def marshall(data):
       type_code = None
       object_data = None

       # Note: plain assignment in these nested functions only rebinds
       # their own locals; getting the results back out to marshall()
       # is exactly the kind of awkwardness being discussed here.
       def int_case():
          type_code = TYPE_INT
          object_data = str(data)

       def str_case():
          type_code = TYPE_STR
          object_data = str(data)

       # (and so on)

       # Build the dispatch table once only
       global dispatch_table
       if dispatch_table is None:
          dispatch_table = {
             int: int_case,
             str: str_case,
             # ... and so on for the other types
          }

       dispatch_table[type(data)]()

However, you probably wouldn't want to write the code like this in 
current-day Python -- even a fairly long if-elif-else chain would be 
more efficient, and the code isn't as neatly expressive of what you are 
trying to do. But the advantage is that the construction of the dispatch 
table is explicit rather than implicit, which avoids all of the 
arguments about when the dispatch should occur.

Another way to deal with the explicit construction of the switch table 
is to construct it outside of the function body. So for example, if the 
values to be switched on are meant to be evaluated at module load time, 
then the user can define the dispatch table outside of any function. The 
problem is, however, that the language requires any code that accesses 
the local variables of a function to be textually embedded within that 
function, and you can't build a dispatch table outside of a function 
that refers to code sections within a function.

In the interest of brevity, I'm going to cut it off here before I ramble 
on too much longer. I don't have an "answer", so much as I am trying to 
raise the right questions.

-- Talin

From greg.ewing at canterbury.ac.nz  Sun Jun 25 06:48:14 2006
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Sun, 25 Jun 2006 16:48:14 +1200
Subject: [Python-Dev] Switch statement
In-Reply-To: <5.1.1.6.0.20060622135620.01ea9bf8@sparrow.telecommunity.com>
References: <5.1.1.6.0.20060622125053.02f3d308@sparrow.telecommunity.com>
	<17547.19802.361151.705599@montanaro.dyndns.org>
	<44988C6E.4080806@canterbury.ac.nz> <449920A4.7040008@gmail.com>
	<5.1.1.6.0.20060621121821.03275ca8@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621130606.0373c788@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621133734.03179858@sparrow.telecommunity.com>
	<5.1.1.6.0.20060621163853.0392d168@sparrow.telecommunity.com>
	<449A7A48.5060404@egenix.com>
	<5.1.1.6.0.20060622092032.0338e2f0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622125053.02f3d308@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622135620.01ea9bf8@sparrow.telecommunity.com>
Message-ID: <449E158E.7030303@canterbury.ac.nz>

Phillip J. Eby wrote:

> 1. "case (literal|NAME)" is the syntax for equality testing -- you can't 
> use an arbitrary expression, not even a dotted name.

That's too restrictive. I want to be able to write
things like

   class Foods:
     Spam = 1
     Eggs = 2
     Ham = 3

  ...

     switch f:
       case Foods.Spam:
         ...
       case Foods.Eggs:
         ...

--
Greg

From martin at v.loewis.de  Sun Jun 25 07:58:35 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Sun, 25 Jun 2006 07:58:35 +0200
Subject: [Python-Dev] PyObject_CallFunction and 'N' format char
In-Reply-To: <a467ca4f0606241709h348229d5neda0e873649bdaa7@mail.gmail.com>
References: <a467ca4f0606241709h348229d5neda0e873649bdaa7@mail.gmail.com>
Message-ID: <449E260B.3050103@v.loewis.de>

Gustavo Carneiro wrote:
> However, PyObject_CallFunction does _not_
> consume such an object reference, contrary to what I believed for
> years.

Why do you say that? It certainly does.

Regards,
Martin

From gjcarneiro at gmail.com  Sun Jun 25 12:36:43 2006
From: gjcarneiro at gmail.com (Gustavo Carneiro)
Date: Sun, 25 Jun 2006 11:36:43 +0100
Subject: [Python-Dev] PyObject_CallFunction and 'N' format char
In-Reply-To: <449E260B.3050103@v.loewis.de>
References: <a467ca4f0606241709h348229d5neda0e873649bdaa7@mail.gmail.com>
	<449E260B.3050103@v.loewis.de>
Message-ID: <a467ca4f0606250336y587fa094h18fef7c4889d72e3@mail.gmail.com>

On 6/25/06, "Martin v. L?wis" <martin at v.loewis.de> wrote:
> Gustavo Carneiro wrote:
> > However, PyObject_CallFunction does _not_
> > consume such an object reference, contrary to what I believed for
> > years.
>
> Why do you say that? It certainly does.

  Yes it does.  I could almost swear it didn't last night when I was
debugging this, but today it works as expected.  I guess I was just
sleepy... :P

  Sorry.

From mwh at python.net  Sun Jun 25 14:07:01 2006
From: mwh at python.net (Michael Hudson)
Date: Sun, 25 Jun 2006 13:07:01 +0100
Subject: [Python-Dev] pypy-0.9.0: stackless, new extension compiler
Message-ID: <2mslltigm2.fsf@starship.python.net>

The PyPy development team has been busy working and we've now packaged 
our latest improvements, completed work and new experiments as 
version 0.9.0, our fourth public release.

The highlights of this fourth release of PyPy are:

**implementation of "stackless" features**
    We now support the larger part of the interface of the original
    Stackless Python -- see http://www.stackless.com for more.  A
    significant part of this is the pickling and unpickling of a running
    tasklet.

    These features, especially the pickling, can be considered to be a
    "technology preview" -- they work, but for example the error handling
    is a little patchy in places.

**ext-compiler**
    The "extension compiler" is a new way of writing a C extension for
    CPython and PyPy at the same time. For more information, see its
    documentation: http://codespeak.net/pypy/dist/pypy/doc/extcompiler.html

**rctypes**
    Most useful in combination with the ext-compiler is the fact that our
    translation framework can translate code that uses the
    standard-in-Python-2.5 ctypes module.  See its documentation for more:
    http://codespeak.net/pypy/dist/pypy/doc/rctypes.html

**framework GCs** 
    PyPy's interpreter can now be compiled to use a garbage collector
    written in RPython.  This added control over PyPy's execution makes the
    implementation of new and interesting features possible, apart from
    being a significant achievement in its own right.

**__del__/weakref/__subclasses__**
    The PyPy interpreter's compatibility with CPython continues to improve:
    now we support __del__ methods, the __subclasses__ method on types and
    weak references.  We now pass around 95% of CPython's core tests.

**logic space preview**
    This release contains the first version of the logic object space,
    which will add logical variables to Python.  See its docs for more:
    http://codespeak.net/pypy/dist/pypy/doc/howto-logicobjspace-0.9.html

**high level backends preview**
    This release contains the first versions of new backends targeting high
    level languages such as Squeak and .NET/CLI and updated versions of the
    JavaScript and Common Lisp backends.  They can't compile the PyPy
    interpreter yet, but they're getting there...

**bugfixes, better performance**
    As you would expect, performance continues to improve and bugs continue
    to be fixed.  The performance of the translated PyPy interpreter is
    2.5-3x faster than 0.8 (on richards and pystone), and is now
    stable enough to be able to run CPython's test suite to the end.

**testing refinements**
    py.test, our testing tool, now has preliminary support for doctests.
    We now run all our tests every night, and you can see the summary at:
    http://snake.cs.uni-duesseldorf.de/pypytest/summary.html

What is PyPy (about)? 
------------------------------------------------

PyPy is an MIT-licensed, research-oriented reimplementation of Python
written in Python itself, flexible and easy to experiment with.  It
translates itself to lower level languages.  Our goals are to target a
large variety of platforms, small and large, by providing a
compilation toolsuite that can produce custom Python versions.
Platform, memory and threading models are to become aspects of the
translation process - as opposed to encoding low level details into
the language implementation itself.  Eventually, dynamic optimization
techniques - implemented as another translation aspect - should become
robust against language changes.

Note that PyPy is mainly a research and development project and does
not by itself focus on getting a production-ready Python
implementation although we do hope and expect it to become a viable
contender in that area sometime next year.

PyPy is partially funded as a research project under the European
Union's IST programme.

Where to start? 
-----------------------------

Getting started:    http://codespeak.net/pypy/dist/pypy/doc/getting-started.html

PyPy Documentation: http://codespeak.net/pypy/dist/pypy/doc/ 

PyPy Homepage:      http://codespeak.net/pypy/

The interpreter and object model implementations shipped with the 0.9
version can run on their own and implement the core language features
of Python as of CPython 2.4.  However, we still do not recommend using
PyPy for anything other than education, playing or research
purposes.

Ongoing work and near term goals
---------------------------------

The Just-in-Time compiler and other performance improvements will be among
the main topics of the next few months' work, along with finishing the
logic object space.

Project Details
---------------

PyPy has been developed during approximately 20 coding sprints across
Europe and the US.  It continues to be a very dynamically and
incrementally evolving project with many of these one-week workshops
to follow.

PyPy has been a community effort from the start and it would
not have got that far without the coding and feedback support
from numerous people.   Please feel free to give feedback and 
raise questions. 

    contact points: http://codespeak.net/pypy/dist/pypy/doc/contact.html

have fun, 
    
    the pypy team, (Armin Rigo, Samuele Pedroni, 
    Holger Krekel, Christian Tismer, 
    Carl Friedrich Bolz, Michael Hudson, 
    and many others: http://codespeak.net/pypy/dist/pypy/doc/contributor.html)

PyPy development and activities happen as an open source project  
and with the support of a consortium partially funded by a two 
year European Union IST research grant. The full partners of that 
consortium are: 
        
    Heinrich-Heine University (Germany), AB Strakt (Sweden)
    merlinux GmbH (Germany), tismerysoft GmbH (Germany) 
    Logilab Paris (France), DFKI GmbH (Germany)
    ChangeMaker (Sweden), Impara (Germany)

-- 
  And not only in the sense that they imagine heretics where these
  do not exist, but also that inquistors repress the heretical
  putrefaction so vehemently that many are driven to share in it,
  in their hatred of the judges.  -- The Name Of The Rose, Umberto Eco

From guido at python.org  Sun Jun 25 18:16:19 2006
From: guido at python.org (Guido van Rossum)
Date: Sun, 25 Jun 2006 09:16:19 -0700
Subject: [Python-Dev] Simple Switch statement
In-Reply-To: <024c01c697e0$6fa2b340$dc00000a@RaymondLaptop1>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<5.1.1.6.0.20060622135620.01ea9bf8@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622150243.03b0bd70@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622160223.03e35500@sparrow.telecommunity.com>
	<ca471dc20606221712x7634c308m673bd8c6e186bb37@mail.gmail.com>
	<e7g74s$sdk$1@sea.gmane.org>
	<ca471dc20606231155n185e9f27vb4d10603bb39f2c@mail.gmail.com>
	<e7jrbg$749$1@sea.gmane.org> <449D9BB5.5090504@ronadam.com>
	<024c01c697e0$6fa2b340$dc00000a@RaymondLaptop1>
Message-ID: <ca471dc20606250916i4469c060q253ae2c7cf29f966@mail.gmail.com>

Sorry, no go. You can say "supports key use cases found in real code"
as often as you like, but limiting it only to literals drops many use
cases on the floor, and is unacceptable for me. There are no other
places in Python where only literals are allowed. In your eagerness to
rule out surprises, you're creating the biggest surprise of all: the
restriction to literals is certainly a surprise!

If you want to provide a solution for the constantification issue,
let's discuss that first and then come back here.

--Guido

On 6/24/06, Raymond Hettinger <raymond.hettinger at verizon.net> wrote:
> From what I can see, almost everyone wants a switch statement, though perhaps
> for different reasons.
>
> The main points of contention are 1) a non-ambiguous syntax for assigning
> multiple cases to a single block of code, 2) how to compile variables as
> constants in a case statement, and 3) handling overlapping cases.
>
> Here's a simple approach that will provide most of the benefit without trying to
> overdo it:
>
>     switch f(x):          # any expression is allowable here but raises an exception if the result is not hashable
>     case 1: g()           # matches when f(x)==1
>     case 2,3 : h()        # matches when f(x) in (2,3)
>     case 1: i()           # won't ever match because the first case 1 wins
>     case (4,5), 6: j()    # matches when f(x) in ((4,5), 6)
>     case "bingo": k()     # matches when f(x) in ("bingo",)
>     default:   l()        # matches if nothing else does
>
> Though implemented as a hash table, this would execute as if written:
>
>     fx = f(x)
>     hash(fx)
>     if fx in (1,):
>         g()
>     elif fx in (2,3):
>         h()
>     elif fx in (1,):
>         i()
>     elif fx in ((4,5), 6):
>         j()
>     elif fx in ("bingo",):
>         k()
>     else:
>         l()
[...]

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Sun Jun 25 18:28:48 2006
From: guido at python.org (Guido van Rossum)
Date: Sun, 25 Jun 2006 09:28:48 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <449D1491.5080901@gmail.com>
References: <20060610142736.GA19094@21degrees.com.au>
	<17547.19802.361151.705599@montanaro.dyndns.org>
	<449D1491.5080901@gmail.com>
Message-ID: <ca471dc20606250928i638e9034la4521b0a37c1d978@mail.gmail.com>

On 6/24/06, Nick Coghlan <ncoghlan at gmail.com> wrote:
> [...] a syntactic nit that Eric Sumner pointed out. Since
> it involves iteration over x to populate the jump table rather than doing a
> containment test on x, using 'case in x' is misleading. It would be better
> written as 'case *x'.
>
> Then:
>    'case 1:'     ==> a switch value of 1 will jump to this case
>    'case 1, 2:'  ==> a switch value of 1 or 2 will jump to this case
>    'case *x'     ==> any switch value in x will jump to this case
>    'case *x, *y' ==> any switch value in x or y will jump to this case

I'm +0 on this idea, or something similar (maybe my original 'case in'
syntax with 'in' replaced by '*').

I'm going to have to sleep on Nick's 'once' proposal (which deserves a
separate thread).

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From martin at v.loewis.de  Sun Jun 25 19:14:25 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Sun, 25 Jun 2006 19:14:25 +0200
Subject: [Python-Dev] [Python-checkins] Things to remember when adding
 *packages* to stdlib
In-Reply-To: <ee2a432c0606212335n735716eelae07f6e1d51ceeb@mail.gmail.com>
References: <ee2a432c0606212335n735716eelae07f6e1d51ceeb@mail.gmail.com>
Message-ID: <449EC471.2000809@v.loewis.de>

Neal Norwitz wrote:
> I believe this change is all that's necessary on the Unix side to
> install wsgiref.  Can someone please update the Windows build files to
> ensure wsgiref is installed in b2?  Don't forget to update the NEWS
> entry too.

It's installed in b1 already. The msi generator picks up all .py files
in Lib automatically, except for those that have been explicitly
excluded (the plat-* ones).

> Maybe someone could come up with a heuristic to add to Misc/build.sh
> which we could test in there.

I think "make install INSTALL=true|grep true" should print the names
of all .py files in Lib, except for the ones in plat-*.

Regards,
Martin

From karol.langner at kn.pl  Sun Jun 25 19:38:40 2006
From: karol.langner at kn.pl (Karol Langner)
Date: Sun, 25 Jun 2006 19:38:40 +0200
Subject: [Python-Dev] basearray
Message-ID: <200606251938.40507.karol.langner@kn.pl>

Dear all,

 Some of you might be aware that a project has been granted to me for this 
year's Google Summer of Code, which aims at preparing a base 
multidimensional array type for Python. While I had a late start on it, I 
would like to go through with the project.

 The focus is on preparing a minimal type that basically only defines how 
memory is allocated for the array, and which can be used by other, more 
sophisticated types. Later during the project, the type may be enhanced, 
depending on how using it in practice (also part of the project) works out.

 Wiki page about the project: http://scipy.org/BaseArray
 SVN repository: http://svn.scipy.org/svn/PEP/

 In order to make this a potential success, I definitely need feedback from 
all of you out there interested in pushing such a base type towards the 
Python core. So any comments and opinions are welcome! I will keep you 
informed on my progress and ask about things that may need consensus 
(although I'm not sure which lists will be the most interested in this). 
Please note that I am still in the phase of completing the minimal type, so 
the svn repository does not contain a working example yet.

Regards,
Karol Langner

-- 
written by Karol Langner
Sun Jun 25 19:18:45 CEST 2006

From raymond.hettinger at verizon.net  Sun Jun 25 20:13:37 2006
From: raymond.hettinger at verizon.net (Raymond Hettinger)
Date: Sun, 25 Jun 2006 11:13:37 -0700
Subject: [Python-Dev] Simple Switch statement
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<5.1.1.6.0.20060622135620.01ea9bf8@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622150243.03b0bd70@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622160223.03e35500@sparrow.telecommunity.com>
	<ca471dc20606221712x7634c308m673bd8c6e186bb37@mail.gmail.com>
	<e7g74s$sdk$1@sea.gmane.org>
	<ca471dc20606231155n185e9f27vb4d10603bb39f2c@mail.gmail.com>
	<e7jrbg$749$1@sea.gmane.org> <449D9BB5.5090504@ronadam.com>
	<024c01c697e0$6fa2b340$dc00000a@RaymondLaptop1>
	<ca471dc20606250916i4469c060q253ae2c7cf29f966@mail.gmail.com>
Message-ID: <001901c69883$18282660$dc00000a@RaymondLaptop1>

> Sorry, no go. You can say "supports key use cases found in real code"
> as often as you like,

Those were not empty words.  I provided two non-trivial worked-out examples 
taken from sre_constants.py and opcode.py.  Nick provided a third example from 
decimal.py.  In all three cases, the proposal was applied effortlessly resulting 
in improved readability and speed.  I hope you hold other proposals to the same 
standard.


> If you want to provide a solution for the constantification issue,
> let's discuss that first and then come back here.

No thanks.  That is its own can of worms.  The obvious solutions (like const 
declarations, macros, or a syntax to force compile-time expression evaluation) 
are unlikely to sit well because they run afoul of Python's deeply ingrained 
dynamism.

The switch-case construct in C uses constant cases but depends on macros to make 
the constants symbolic.  Is that where we want to go with Python?  If so, that 
is most likely a Py3k discussion.

In contrast, the proposed simple switch statement is something we could have 
right away.  I will likely write up a PEP and get a sample implementation so we 
can discuss something concrete at EuroPython.


Raymond





From fredrik at pythonware.com  Sun Jun 25 21:14:57 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Sun, 25 Jun 2006 21:14:57 +0200
Subject: [Python-Dev] Alternatives to switch?
In-Reply-To: <449E11BD.7010603@acm.org>
References: <449E11BD.7010603@acm.org>
Message-ID: <e7mnbf$7f2$1@sea.gmane.org>

Talin wrote:

> In fact, I'd like to point out something that hasn't been brought up, 
> which is that in many cases having a closure rebind the switch cases 
> defeats the purpose of the thing. For example:
> 
>     def outer():
>        def inner(x):
>           switch(x):
>           case 1: ...
>           case 2: ...
>           case 3: ...
> 
>        return inner
> 
> If the switch cases are bound at the time that 'inner' is defined, it 
> means that the hash table will be rebuilt each time 'outer' is called. 

not if all cases are literals, as in your example.

and how common is this, really?  are you sure you're not arguing against 
a construct that is more efficient in many existing use cases, based on 
a single, mostly hypothetical use case?

(after all, if you don't want Python to waste time optimizing things for 
you, you don't have to use a construct that does that...)

</F>


From fredrik at pythonware.com  Sun Jun 25 21:20:34 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Sun, 25 Jun 2006 21:20:34 +0200
Subject: [Python-Dev] Switch statement
In-Reply-To: <5.1.1.6.0.20060624131149.01eb3ea8@sparrow.telecommunity.com>
References: <ca471dc20606231155n185e9f27vb4d10603bb39f2c@mail.gmail.com>	<17547.19802.361151.705599@montanaro.dyndns.org>	<5.1.1.6.0.20060621163853.0392d168@sparrow.telecommunity.com>	<449A7A48.5060404@egenix.com>	<5.1.1.6.0.20060622092032.0338e2f0@sparrow.telecommunity.com>	<5.1.1.6.0.20060622125053.02f3d308@sparrow.telecommunity.com>	<5.1.1.6.0.20060622135620.01ea9bf8@sparrow.telecommunity.com>	<5.1.1.6.0.20060622150243.03b0bd70@sparrow.telecommunity.com>	<5.1.1.6.0.20060622160223.03e35500@sparrow.telecommunity.com>	<ca471dc20606221712x7634c308m673bd8c6e186bb37@mail.gmail.com>	<e7g74s$sdk$1@sea.gmane.org>	<ca471dc20606231155n185e9f27vb4d10603bb39f2c@mail.gmail.com>
	<e7jrbg$749$1@sea.gmane.org>
	<5.1.1.6.0.20060624131149.01eb3ea8@sparrow.telecommunity.com>
Message-ID: <e7mnlv$8d1$1@sea.gmane.org>

Phillip J. Eby wrote:

>> I don't see this as much of a problem, really: we can simply restrict
>> the optimization to well-known data types ("homogenous" switches using
>> integers or strings should cover 99.9% of all practical cases), and then
>> add an opcode that uses a separate dispatch object to check if
>> fast dispatch is possible, and place that before an ordinary if/elif
>> sequence.
> 
> What about switches on types?  Things like XML-RPC and JSON want to be able 
> to have a fast switch on an object's type and fall back to slower tests 
> only for non-common cases.

good point (and nice example).

>     for t in obtype.__mro__:
>          switch t:
>          case int: ...; break
>          case str: ...; break
>          else:
>              continue
>      else:
>          # not a recognized type

but I wonder how confusing the "break inside switch terminates the outer 
loop" pattern would be to a C programmer...

</F>


From pje at telecommunity.com  Sun Jun 25 21:21:53 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Sun, 25 Jun 2006 15:21:53 -0400
Subject: [Python-Dev] Simple Switch statement
In-Reply-To: <001901c69883$18282660$dc00000a@RaymondLaptop1>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<5.1.1.6.0.20060622135620.01ea9bf8@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622150243.03b0bd70@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622160223.03e35500@sparrow.telecommunity.com>
	<ca471dc20606221712x7634c308m673bd8c6e186bb37@mail.gmail.com>
	<e7g74s$sdk$1@sea.gmane.org>
	<ca471dc20606231155n185e9f27vb4d10603bb39f2c@mail.gmail.com>
	<e7jrbg$749$1@sea.gmane.org> <449D9BB5.5090504@ronadam.com>
	<024c01c697e0$6fa2b340$dc00000a@RaymondLaptop1>
	<ca471dc20606250916i4469c060q253ae2c7cf29f966@mail.gmail.com>
Message-ID: <5.1.1.6.0.20060625151015.01efc6b0@sparrow.telecommunity.com>

At 11:13 AM 6/25/2006 -0700, Raymond Hettinger wrote:
>No thanks.  That is its own can of worms.  The obvious solutions (like const
>declarations, macros, or a syntax to force compile-time expression 
>evaluation)
>are unlikely to sit well because they run afoul Python's deeply ingrained
>dynamism.

I think perhaps you haven't been paying close attention to Fredrik's 
proposal.  The "static" operator simply lifts expression evaluation to 
function definition time, so that this:

     def x(y):
         print y * static(123+456)

becomes equivalent to this:

     foo = 123+456
     def x(y):
         print y * foo

This simple transformation doesn't "run afoul" of anything that I'm aware 
of, any more than the "with:" statement does.

Meanwhile, you seem to be arguing that forcing the use of literals at 
compilation time is somehow more dynamic than allowing them to be computed 
at runtime!  I don't get it.  Are you perhaps thinking that it's necessary 
to know the values at compilation time in order to compile the switch 
statement?  If so, note that the dictionary entries can be loaded by the 
code where the function is defined, and accessed as a free variable within 
the function body.  This is the same way that other "static" expressions 
would be implemented.
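
A small sketch of what that looks like if written out by hand today (the
names and values are invented): the dispatch dict is built where the
function is defined and reaches the body as a free variable, so nothing has
to be a compile-time constant.

    def make_dispatcher():
        # built once, when dispatch() is defined
        table = {'spam': 1, 'eggs': 2, 'ham': 3}
        def dispatch(name):
            # 'table' is a free variable of dispatch()
            return table.get(name, 0)
        return dispatch

    dispatch = make_dispatcher()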

(On an unrelated note, rather than making "static" look like a function 
call, maybe we should use a form that reads more like a generator expression, 
conditional, or yield expression, i.e. (static 123+456) instead of 
static(123+456), as this emphasizes its nature as an element of language 
syntax.)


From brett at python.org  Sun Jun 25 22:06:53 2006
From: brett at python.org (Brett Cannon)
Date: Sun, 25 Jun 2006 13:06:53 -0700
Subject: [Python-Dev] doc for new restricted execution design for Python
In-Reply-To: <449D09EE.9040903@gmail.com>
References: <bbaeab100606211733x3dd58226g15e2919e8e2cdd7e@mail.gmail.com>
	<449A51CC.3070108@ghaering.de>
	<bbaeab100606220646u75444289wa7a7abdfbac18ece@mail.gmail.com>
	<449AC151.4030500@ghaering.de>
	<bbaeab100606220928p5fc74612id0d51a155261835e@mail.gmail.com>
	<449D09EE.9040903@gmail.com>
Message-ID: <bbaeab100606251306j7c29a3b1pb158631a2406472c@mail.gmail.com>

On 6/24/06, Nick Coghlan <ncoghlan at gmail.com> wrote:
>
> Brett Cannon wrote:
> > Yep.  That API will be used directly in the changes to pymalloc and
> > PyMem_*() macros (or at least the basic idea).  It is not *only* for
> > extension modules but for the core as well.
> >
> >     Existing extension modules and existing C code in the Python
> interpreter
> >     have no idea of any PyXXX_ calls, so I don't understand how new API
> >     functions help here.
> >
> >
> > The calls get added to pymalloc and PyMem_*() under the hood, so that
> > existing extension modules use the memory check automatically without a
> > change.  The calls are just there in case some one has some random need
> > to do their own malloc but still want to participate in the cap.  Plus
> > it helped me think everything through by giving everything I would need
> > to change internally an API.
>
> This confused me a bit, too. It might help if you annotated each of the
> new
> API's with who the expected callers were:
>
>    - trusted interpreter
>    - untrusted interpreter
>    - embedding application
>    - extension module


There are really only two kinds of callers for the whole API: a trusted
interpreter or embedding application that launches an untrusted interpreter,
and then there is *everyone else* (this removes the distinction between
trusted and untrusted interpreters and just views them as an interpreter).
The former uses the setting API to set the limits of the untrusted
interpreter being created, while everyone else uses the API to make sure an
untrusted interpreter does not overstep its bounds.  The checks are done
regardless of the type of interpreter; what varies is whether the checks are
NOOPs or not.

Since the memory cap seems to be causing the confusion, let me try it
again.  You are in a trusted interpreter or embedded app and you want an
untrusted interpreter with a memory cap.  You create the interpreter and
call the PyXXX_SetMemoryCap() function on the untrusted interpreter to set
that cap.  Now, within the core interpreter code (untrusted or not) for
where the interpreter allocates and deallocates memory there are calls to
PyXXX_MemoryAllow() and PyXXX_MemoryFree().  If this is the trusted
interpreter running, they are basically NOOPs.  If it is an untrusted
interpreter, then the actual checks are done to make sure the restriction is
not broken.

And if an extension module has some reason to use the memory cap checks as
well, it can.  Same with an embedded application.

There  is no grand distinction in terms of "this is for use in the core only
while these are only for extension modules".  It is just whether a function
is used to set a restriction before an untrusted interpreter is used, or to
check to make sure that a restriction is not violated.

-Brett

From brett at python.org  Sun Jun 25 22:08:32 2006
From: brett at python.org (Brett Cannon)
Date: Sun, 25 Jun 2006 13:08:32 -0700
Subject: [Python-Dev] doc for new restricted execution design for Python
In-Reply-To: <0C9D5BB1-142A-4D50-859E-C33E3570D1B1@redivi.com>
References: <bbaeab100606211733x3dd58226g15e2919e8e2cdd7e@mail.gmail.com>
	<449A51CC.3070108@ghaering.de>
	<bbaeab100606220646u75444289wa7a7abdfbac18ece@mail.gmail.com>
	<449AC151.4030500@ghaering.de>
	<bbaeab100606220928p5fc74612id0d51a155261835e@mail.gmail.com>
	<449D09EE.9040903@gmail.com>
	<0C9D5BB1-142A-4D50-859E-C33E3570D1B1@redivi.com>
Message-ID: <bbaeab100606251308q1e9bb3e1h3643033d54508b64@mail.gmail.com>

On 6/24/06, Bob Ippolito <bob at redivi.com> wrote:
>
>
> On Jun 24, 2006, at 2:46 AM, Nick Coghlan wrote:
>
> > Brett Cannon wrote:
> >> Yep.  That API will be used directly in the changes to pymalloc and
> >> PyMem_*() macros (or at least the basic idea).  It is not *only* for
> >> extension modules but for the core as well.
> >>
> >>     Existing extension modules and existing C code in the Python
> >> interpreter
> >>     have no idea of any PyXXX_ calls, so I don't understand how
> >> new API
> >>     functions help here.
> >>
> >>
> >> The calls get added to pymalloc and PyMem_*() under the hood, so that
> >> existing extension modules use the memory check automatically
> >> without a
> >> change.  The calls are just there in case some one has some random
> >> need
> >> to do their own malloc but still want to participate in the cap.
> >> Plus
> >> it helped me think everything through by giving everything I would
> >> need
> >> to change internally an API.
> >
> > This confused me a bit, too. It might help if you annotated each of
> > the new
> > API's with who the expected callers were:
> >
> >    - trusted interpreter
> >    - untrusted interpreter
> >    - embedding application
> >    - extension module
>
> Threading is definitely going to be an issue with multiple
> interpreters (restricted or otherwise)... for example, the PyGILState
> API probably wouldn't work anymore.



PyGILState won't work because there are multiple interpreters period, or
because of the introduced distinction of untrusted and trusted
interpreters?  In other words, is this some new possible breakage, or is
this an issue with threads that has always existed with multiple
interpreters?

-Brett

From fwierzbicki at gmail.com  Sun Jun 25 22:34:58 2006
From: fwierzbicki at gmail.com (Frank Wierzbicki)
Date: Sun, 25 Jun 2006 16:34:58 -0400
Subject: [Python-Dev] Import semantics
In-Reply-To: <ca471dc20606120948h119f1c1fw2725e7d434e287df@mail.gmail.com>
References: <cfb578b20606111531t6806d5c9kd35fd8ba29638174@mail.gmail.com>
	<448D1F0D.7000405@strakt.com>
	<ca471dc20606120948h119f1c1fw2725e7d434e287df@mail.gmail.com>
Message-ID: <4dab5f760606251334o718323adt409294e2952e954a@mail.gmail.com>

Sorry for the untrimmed conversation, but I've cc'ed jython-dev, my
comments are at the bottom.

On 6/12/06, Guido van Rossum <guido at python.org> wrote:
> On 6/12/06, Samuele Pedroni <pedronis at strakt.com> wrote:
> > Fabio Zadrozny wrote:
> > > Python and Jython import semantics differ on how sub-packages should be
> > > accessed after importing some module:
> > >
> > > Jython 2.1 on java1.5.0 (JIT: null)
> > > Type "copyright", "credits" or "license" for more information.
> > >  >>> import xml
> > >  >>> xml.dom
> > > <module xml.dom at 10340434>
> > >
> > > Python 2.4.2 (#67, Sep 28 2005, 12:41:11) [MSC v.1310 32 bit (Intel)] on
> > > win32
> > > Type "help", "copyright", "credits" or "license" for more information.
> > >  >>> import xml
> > >  >>> xml.dom
> > > Traceback (most recent call last):
> > >   File "<stdin>", line 1, in ?
> > > AttributeError: 'module' object has no attribute 'dom'
> > >  >>> from xml.dom import pulldom
> > >  >>> xml.dom
> > > <module 'xml.dom' from 'C:\bin\Python24\lib\xml\dom\__init__.pyc'>
> > >
> > > Note that in Jython importing a module makes all subpackages beneath it
> > > available, whereas in python, only the tokens available in __init__.py
> > > are accessible, but if you do load the module later even if not getting
> > > it directly into the namespace, it gets accessible too -- this seems
> > > more like something unexpected to me -- I would expect it to be
> > > available only if I did some "import xml.dom" at some point.
> > >
> > > My problem is that in Pydev, in static analysis, I would only get the
> > > tokens available for actually imported modules, but that's not true for
> > > Jython, and I'm not sure if the current behaviour in Python was expected.
> > >
> > > So... which would be the right semantics for this?
> >
> > the difference in Jython is deliberate. I think the reason was to mimic
> > more the Java style for this, in java fully qualified names always work.
> > In jython importing the top level packages is enough to get a similar
> > effect.
> >
> > This is unlikely to change for backward compatibility reasons, at least
> > from my POV.
>
> IMO it should do this only if the imported module is really a Java
> package. If it's a Python package it should stick to python semantics
> if possible.
>
> --
> --Guido van Rossum (home page: http://www.python.org/~guido/)

This is a tough one since the BDFL and Samuele disagree here.  Perhaps
we should document the Java import behavior as permanent, but document
the Python imports in Jython as being deprecated but available until
some future release?  I believe we would keep it at least through
Jython 2.3.

-Frank

From rhettinger at ewtllc.com  Sun Jun 25 22:37:22 2006
From: rhettinger at ewtllc.com (Raymond Hettinger)
Date: Sun, 25 Jun 2006 13:37:22 -0700
Subject: [Python-Dev] Simple Switch statement
In-Reply-To: <5.1.1.6.0.20060625151015.01efc6b0@sparrow.telecommunity.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>	<5.1.1.6.0.20060622135620.01ea9bf8@sparrow.telecommunity.com>	<5.1.1.6.0.20060622150243.03b0bd70@sparrow.telecommunity.com>	<5.1.1.6.0.20060622160223.03e35500@sparrow.telecommunity.com>	<ca471dc20606221712x7634c308m673bd8c6e186bb37@mail.gmail.com>	<e7g74s$sdk$1@sea.gmane.org>	<ca471dc20606231155n185e9f27vb4d10603bb39f2c@mail.gmail.com>	<e7jrbg$749$1@sea.gmane.org>
	<449D9BB5.5090504@ronadam.com>	<024c01c697e0$6fa2b340$dc00000a@RaymondLaptop1>	<ca471dc20606250916i4469c060q253ae2c7cf29f966@mail.gmail.com>
	<5.1.1.6.0.20060625151015.01efc6b0@sparrow.telecommunity.com>
Message-ID: <449EF402.3030408@ewtllc.com>


>>No thanks.  That is its own can of worms.  The obvious solutions (like const
>>declarations, macros, or a syntax to force compile-time expression 
>>evaluation)
>>are unlikely to sit well because they run afoul Python's deeply ingrained
>>dynamism.
>>    
>>
>
>I think perhaps you haven't been paying close attention to Fredrik's 
>proposal.
>
Yes, I have been.  That is one of the three options I listed above.  
Each has its own issues.

The static() keyword works like Forth's brackets for forcing 
compile-time evaluation.  The issue for us is that unlike other Python 
expressions, there are inconvenient limitations on what can be 
expressed inside:

   five = 5
   eight = [8]
   def f(x, six=6):
          seven =  7
          a = static(five + 4)    # this is legal
          b = static(six + 4)      # this is illegal
          c = static(seven + 4) # this is illegal
          d = static(eight + [4]) # this is illegal
          

That will be a perpetual maintenance trap and conundrum for newbies.

Besides, the issue of constantification is orthogonal to the discussion 
at hand.  If compile-time expression evaluation ever gets approved, it 
is no problem to extend my switch statement proposal to accommodate it.  
IOW, my simplified version does not preclude future buildouts for your 
constantification magic.



>  The "static" operator simply lifts expression evaluation to 
>function definition time, so that this:
>
>     def x(y):
>         print y * static(123+456)
>
>becomes equivalent to this:
>
>     foo = 123+456
>     def x(y):
>         print y * foo
>  
>

FWIW, this is a crummy example.  Py2.5 already makes a better version of 
this transformation via compile-time constant folding:

>>> def f(y):
...     print y * (123+456)

>>> from dis import dis
>>> dis(f)

  2           0 LOAD_FAST                0 (y)
              3 LOAD_CONST               3 (579)
              6 BINARY_MULTIPLY    
              7 PRINT_ITEM         
              8 PRINT_NEWLINE      
              9 LOAD_CONST               0 (None)
             12 RETURN_VALUE      


>Meanwhile, you seem to be arguing that forcing the use of literals at 
>compilation time is somehow more dynamic than allowing them to be computed 
>at runtime!  I don't get it. 
>
You may be reading too much into it.  The essential idea is that 
limiting case values to constants is a simplifying assumption that makes 
it easy to explain and easy to compile.  I wasn't truly happy with it 
until I started trying it on real code examples and found that it fit 
effortlessly and that the code was much improved.  Also, there was some 
appeal in knowing that if some constantification proposal gets accepted, 
then those would become allowable values also.



Raymond

From nmm1 at cus.cam.ac.uk  Sun Jun 25 22:49:49 2006
From: nmm1 at cus.cam.ac.uk (Nick Maclaren)
Date: Sun, 25 Jun 2006 21:49:49 +0100
Subject: [Python-Dev] Numerical robustness, IEEE etc.
In-Reply-To: Your message of "Sat, 24 Jun 2006 11:50:36 EDT."
	<2e1434c10606240850i3c49f193nac2cacc8db72059f@mail.gmail.com> 
Message-ID: <E1FubYD-0004Dw-JK@draco.cus.cam.ac.uk>

jacobs at bioinformed.com wrote:
> 
> I'm not asking you to describe SC22WG14 or post detailed technical summaries
> of the long and painful road.  I'd like you to post things directly relevant
> to Python with footnotes to necessary references.  It is then incumbent on
> those that wish to respond to your post to familiarize themselves with the
> relevant background material.  However, it is really darn hard to do that
> when we don't know what you're trying to fix in Python.  The examples you
> show below are a good start in that direction.

Er, no.  Given your response, it has merely started off a hare.  The
issues you raise are merely ones of DETAIL, and I was and am trying
to tackle the PRINCIPLE (a.k.a. design).

I originally stated my objective, and asked for information so that I
could investigate in depth and produce (in some order) a sandbox and
a PEP.  That is still my plan.

This example was NOT of problems with the existing implementation,
but was to show how even the most basic numeric code that attempts to
handle errors cannot avoid tripping over the issues.  I shall respond
to your points, but shall try to refrain from following up.

> 1) The string representation of NaN is not standardized across platforms

Try what I actually used:

    x = 1.0e300
    x = (x*x)/(x*x)

I converted that to float('NaN') to avoid confusing people.  There
are actually many issues around the representation of NaNs, including
whether signalling NaNs should be separated from quiet NaNs and whether
they should be allowed to have values.  See IEEE 754, IEEE 754R and
C99 for more details (but not clarification).

> 2) on a sane platform, int(float('NaN')) should raise an ValueError
> exception for the int() portion.

Well, I agree with you, but Java and many of the C99 people don't.

> 3) float('NaN') == float('NaN') should be false, assuming NaN is not a
> signaling NaN, by default

Why?  Why should it not raise ValueError?  See table 4 in IEEE 754.
I could go into this one in much more depth, but let's not, at least
not now.

> So the open question is how to both define the semantics of Python floating
> point operations and to implement them in a way that verifiably works on the
> vast majority of platforms without turning the code into a maze of
> platform-specific defines, kludges, or maintenance problems waiting to
> happen.

Well, in a sense, but the second is really a non-question - i.e. it
answers itself almost trivially once the first is settled.  ALL of your
above points fall into that category.  The first question to answer is
what the fundamental model should be, and I need to investigate in
more depth before commenting on that - which should tell you roughly
what I know and what I don't about the decimal model.

The best way to get a really ghastly specification is to decide on
the details before agreeing on the intent.  Committees being what they
are, that is a recipe for something that nobody else will ever get
their heads around.


Regards,
Nick Maclaren,
University of Cambridge Computing Service,
New Museums Site, Pembroke Street, Cambridge CB2 3QH, England.
Email:  nmm1 at cam.ac.uk
Tel.:  +44 1223 334761    Fax:  +44 1223 334679

From python-dev at zesty.ca  Sun Jun 25 22:57:44 2006
From: python-dev at zesty.ca (Ka-Ping Yee)
Date: Sun, 25 Jun 2006 15:57:44 -0500 (CDT)
Subject: [Python-Dev] Simple Switch statement
In-Reply-To: <024c01c697e0$6fa2b340$dc00000a@RaymondLaptop1>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<5.1.1.6.0.20060621163853.0392d168@sparrow.telecommunity.com>
	<449A7A48.5060404@egenix.com>
	<5.1.1.6.0.20060622092032.0338e2f0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622125053.02f3d308@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622135620.01ea9bf8@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622150243.03b0bd70@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622160223.03e35500@sparrow.telecommunity.com>
	<ca471dc20606221712x7634c308m673bd8c6e186bb37@mail.gmail.com>
	<e7g74s$sdk$1@sea.gmane.org>
	<ca471dc20606231155n185e9f27vb4d10603bb39f2c@mail.gmail.com><e7jrbg$749$1@sea.gmane.org>
	<449D9BB5.5090504@ronadam.com>
	<024c01c697e0$6fa2b340$dc00000a@RaymondLaptop1>
Message-ID: <Pine.LNX.4.58.0606251552270.17937@server1.LFW.org>

On Sat, 24 Jun 2006, Raymond Hettinger wrote:
> The main points of contention are 1) a non-ambiguous syntax for assigning
> multiple cases to a single block of code, 2) how to compile variables as
> constants in a case statement, and 3) handling overlapping cases.
>
> Here's a simple approach that will provide most of the benefit without
> trying to overdo it:
[...]
> The result of f(x) should be hashable or an exception is raised.
> Cases values must be ints, strings, or tuples of ints or strings.
> No expressions are allowed in cases.

I like this proposal.  It eliminates all of the surprises that have
been hiding in the other switch proposals so far, except for throwing
an exception when the switch expression isn't hashable.  But i can
easily live with that restriction, since the cases are all literals
that must be hashable, so if the switch expression comes out to some
other type then it probably really is an error that should be caught.

I agree with the general feeling that it would be nice to have a bit
more flexibility, but so far i haven't thought of any way to get more
flexibility without more surprises, so until a better idea comes along
i'm happy with this one.


-- ?!ng

From bob at redivi.com  Sun Jun 25 22:57:55 2006
From: bob at redivi.com (Bob Ippolito)
Date: Sun, 25 Jun 2006 13:57:55 -0700
Subject: [Python-Dev] doc for new restricted execution design for Python
In-Reply-To: <bbaeab100606251308q1e9bb3e1h3643033d54508b64@mail.gmail.com>
References: <bbaeab100606211733x3dd58226g15e2919e8e2cdd7e@mail.gmail.com>
	<449A51CC.3070108@ghaering.de>
	<bbaeab100606220646u75444289wa7a7abdfbac18ece@mail.gmail.com>
	<449AC151.4030500@ghaering.de>
	<bbaeab100606220928p5fc74612id0d51a155261835e@mail.gmail.com>
	<449D09EE.9040903@gmail.com>
	<0C9D5BB1-142A-4D50-859E-C33E3570D1B1@redivi.com>
	<bbaeab100606251308q1e9bb3e1h3643033d54508b64@mail.gmail.com>
Message-ID: <A809EE0F-9C17-44D4-AF80-122AD62B1C8F@redivi.com>


On Jun 25, 2006, at 1:08 PM, Brett Cannon wrote:

> On 6/24/06, Bob Ippolito <bob at redivi.com> wrote:
>
> On Jun 24, 2006, at 2:46 AM, Nick Coghlan wrote:
>
> > Brett Cannon wrote:
> >> Yep.  That API will be used directly in the changes to pymalloc and
> >> PyMem_*() macros (or at least the basic idea).  It is not *only*  
> for
> >> extension modules but for the core as well.
> >>
> >>     Existing extension modules and existing C code in the Python
> >> interpreter
> >>     have no idea of any PyXXX_ calls, so I don't understand how
> >> new API
> >>     functions help here.
> >>
> >>
> >> The calls get added to pymalloc and PyMem_*() under the hood, so  
> that
> >> existing extension modules use the memory check automatically
> >> without a
> >> change.  The calls are just there in case some one has some random
> >> need
> >> to do their own malloc but still want to participate in the cap.
> >> Plus
> >> it helped me think everything through by giving everything I would
> >> need
> >> to change internally an API.
> >
> > This confused me a bit, too. It might help if you annotated each of
> > the new
> > API's with who the expected callers were:
> >
> >    - trusted interpreter
> >    - untrusted interpreter
> >    - embedding application
> >    - extension module
>
> Threading is definitely going to be an issue with multiple
> interpreters (restricted or otherwise)... for example, the PyGILState
> API probably wouldn't work anymore.
>
>
> PyGILState won't work because there are multiple interpreters  
> period, or because of the introduced distinction of untrusted and  
> trusted interpreters?  In other words, is this some new possible  
> breakage, or is this an issue with threads that has always existed  
> with multiple interpreters?

It's an issue that's always existed with multiple interpreters, but  
multiple interpreters aren't really commonly used or tested at the  
moment so it's not very surprising.

It would be kinda nice to have an interpreter-per-thread with no GIL  
like some of the other languages have, but the C API depends on too  
much global state for that...

-bob


From python-dev at zesty.ca  Sun Jun 25 23:01:51 2006
From: python-dev at zesty.ca (Ka-Ping Yee)
Date: Sun, 25 Jun 2006 16:01:51 -0500 (CDT)
Subject: [Python-Dev] Simple Switch statement
In-Reply-To: <5.1.1.6.0.20060624185125.01eaac70@sparrow.telecommunity.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<5.1.1.6.0.20060621163853.0392d168@sparrow.telecommunity.com>
	<449A7A48.5060404@egenix.com>
	<5.1.1.6.0.20060622092032.0338e2f0@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622125053.02f3d308@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622135620.01ea9bf8@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622150243.03b0bd70@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622160223.03e35500@sparrow.telecommunity.com>
	<ca471dc20606221712x7634c308m673bd8c6e186bb37@mail.gmail.com>
	<e7g74s$sdk$1@sea.gmane.org>
	<ca471dc20606231155n185e9f27vb4d10603bb39f2c@mail.gmail.com>
	<e7jrbg$749$1@sea.gmane.org> <449D9BB5.5090504@ronadam.com>
	<5.1.1.6.0.20060624185125.01eaac70@sparrow.telecommunity.com>
Message-ID: <Pine.LNX.4.58.0606251600280.17937@server1.LFW.org>

On Sat, 24 Jun 2006, Phillip J. Eby wrote:
> At 03:49 PM 6/24/2006 -0700, Raymond Hettinger wrote:
> >Cases values must be ints, strings, or tuples of ints or strings.
>
> -1.  There is no reason to restrict the types in this fashion.  Even if you
> were trying to ensure marshallability, you could still include unicode and
> longs.

When he said "ints" i assumed that included longs.  The distinction
is so nearly gone by now (from the Python programmer's point of view).

I don't see any particular problem with allowing all basestrings
rather than just 8-bit strings.


-- ?!ng

From g.brandl at gmx.net  Mon Jun 26 01:12:30 2006
From: g.brandl at gmx.net (Georg Brandl)
Date: Mon, 26 Jun 2006 01:12:30 +0200
Subject: [Python-Dev] Simple Switch statement
In-Reply-To: <449EF402.3030408@ewtllc.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>	<5.1.1.6.0.20060622135620.01ea9bf8@sparrow.telecommunity.com>	<5.1.1.6.0.20060622150243.03b0bd70@sparrow.telecommunity.com>	<5.1.1.6.0.20060622160223.03e35500@sparrow.telecommunity.com>	<ca471dc20606221712x7634c308m673bd8c6e186bb37@mail.gmail.com>	<e7g74s$sdk$1@sea.gmane.org>	<ca471dc20606231155n185e9f27vb4d10603bb39f2c@mail.gmail.com>	<e7jrbg$749$1@sea.gmane.org>	<449D9BB5.5090504@ronadam.com>	<024c01c697e0$6fa2b340$dc00000a@RaymondLaptop1>	<ca471dc20606250916i4469c060q253ae2c7cf29f966@mail.gmail.com>	<5.1.1.6.0.20060625151015.01efc6b0@sparrow.telecommunity.com>
	<449EF402.3030408@ewtllc.com>
Message-ID: <e7muoc$tjl$1@sea.gmane.org>

Raymond Hettinger wrote:
>>>No thanks.  That is its own can of worms.  The obvious solutions (like const
>>>declarations, macros, or a syntax to force compile-time expression 
>>>evaluation)
>>>are unlikely to sit well because they run afoul Python's deeply ingrained
>>>dynamism.
>>>    
>>>
>>
>>I think perhaps you haven't been paying close attention to Fredrik's 
>>proposal.
>>
> Yes, I have been.  That is one of the three options I listed above.  
> Each has its own issues.
> 
> The static() keyword works like Forth's brackets for forcing 
> compile-time evaluation.  The issue for us is that unlike other Python 
> expressions, there are inconvenient limitiations on what can be 
> expressed inside:
> 
>    five = 5
>    eight = [8]
>    def f(x, six=6):
>           seven =  7
>           a = static(five + 4)    # this is legal
>           b = static(six + 4)      # this is illegal
>           c = static(seven + 4) # this is illegal
>           d = static(eight + [4]) # this is illegal

Why would the last line be illegal?

> 
> That will be a perpetual maintenance trap and conundrum for newbies.

Unlike other "newbie traps" such as mutable default arguments, if
this gave a clear exception message like
"function-local name cannot be used in a static expression", I can't imagine
it would be a bigger problem than e.g. "why is my floating point result
incorrect".

Georg


From python-dev at zesty.ca  Sun Jun 25 23:35:34 2006
From: python-dev at zesty.ca (Ka-Ping Yee)
Date: Sun, 25 Jun 2006 16:35:34 -0500 (CDT)
Subject: [Python-Dev] Simple Switch statement
In-Reply-To: <ca471dc20606250916i4469c060q253ae2c7cf29f966@mail.gmail.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<5.1.1.6.0.20060622135620.01ea9bf8@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622150243.03b0bd70@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622160223.03e35500@sparrow.telecommunity.com>
	<ca471dc20606221712x7634c308m673bd8c6e186bb37@mail.gmail.com>
	<e7g74s$sdk$1@sea.gmane.org>
	<ca471dc20606231155n185e9f27vb4d10603bb39f2c@mail.gmail.com>
	<e7jrbg$749$1@sea.gmane.org> <449D9BB5.5090504@ronadam.com>
	<024c01c697e0$6fa2b340$dc00000a@RaymondLaptop1>
	<ca471dc20606250916i4469c060q253ae2c7cf29f966@mail.gmail.com>
Message-ID: <Pine.LNX.4.58.0606251602110.17937@server1.LFW.org>

On Sun, 25 Jun 2006, Guido van Rossum wrote:
> In your eagerness to
> rule out surprises, you're creating the biggest surprise of all: the
> restriction to literals is certainly a surprise!

I disagree.  Perhaps what we mean by "surprise" is different.  In
Raymond's design, there is a simple rule for what's allowed in a case.
The whole statement can be described in very few words:

    Each case is a literal integer, string, or tuple of integers
    or strings.  Execution jumps to the first case that matches the
    value of the switch expression (or to 'default' if no match).

That's it.  The simpler the rule, the less surprising it is.  It would
take a lot more words to explain the behaviour of something like Nick's
'once'-based proposal.  Here's an attempt:

    Each case consists of an arbitrary expression, but the expression
    may not refer to any local variables or arguments of the immediately
    surrounding function definition.  The value of each case is computed
    when the surrounding function definition is compiled.  If any two
    cases have the same value, an exception is thrown at compile time.
    At runtime, execution jumps to the case whose previously fixed value
    matches the value of the switch expression (or to 'default' if no
    match).

Not only is that longer to describe, it's also more difficult for a
beginning programmer to understand, since it requires knowing when
different parts of a program are compiled (and what happens if the
same part is compiled more than once).


-- ?!ng

From kd5bjo at gmail.com  Mon Jun 26 00:12:52 2006
From: kd5bjo at gmail.com (Eric Sumner)
Date: Sun, 25 Jun 2006 17:12:52 -0500
Subject: [Python-Dev] Temporary Constantification
Message-ID: <eaaf21dc0606251512v6717ee42q7a3fa714a583b8e1@mail.gmail.com>

It seems to me that the main reason to provide constantification (in a
switch statement or anywhere else) is to be able to optimize execution
by caching the result locally for later use.  The problem comes from
trying to determine exactly when and how a value gets calculated, as
most values in Python are subject to change.

If, however, there was a mechanism by which a cache could be
invalidated when its value changes, the only remaining difference
between cached and non-cached values becomes their execution speed and
memory usage (and possibly impact on the execution speed of other
code).  Thus, I propose a 'cached' keyword much like the static and
const proposed keywords.

In general, a cached value can be used (rather than re-evaluating the
expression) if:
  - The expression has no side effects,
  - The result of all operations is deterministic, and
  - None of the expression parameters have changed since the cached
value was generated

The first two can be determined at compile-time without too much
difficulty (except possibly function calls, I'll get to those in a
minute).  The hard issue here is knowing when parameters have changed.
 These fall into two different categories: literals and name lookups.
Immutable literals don't cause a problem, and mutable literals always
have a side-effect of generating a new object.  There are two ways to
determine if name lookups have changed:
  1) Cache all the parameters, and check them against the current values, or
  2) Be notified whenever one of the parameters changes.

The first option requires a bunch of name lookups whenever the cached
value is considered, which is exactly the problem that caching is
supposed to fix.  To implement the second, each name in each namespace
needs a list of caches that depend on the name, and all name binding
operations need to check the list and mark all dependent caches
invalid.  This is a small performance penalty whenever any name is
rebound, and a large penalty whenever a watched name is rebound.

Function calls can safely be considered volatile, but this would
invalidate many of the uses for caching.  Instead, each function has
an immutable property of being either volatile or deterministic.  Each
deterministic function call maintains its own cache which is
invalidated if the name to which the associated function is bound (or
any of its parameters) is rebound.  Thus, if a function is rebound to
something volatile, it does not force continual re-evaluation of other
sub-expressions.  Functions should be assumed to be volatile unless
specified otherwise (probably via a decorator).

I'm not particularly familiar with the internals of Python, so I'm not
able to actually assess the feasibility or performance implications of
this proposal, but I think it basically covers the problem.
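
A toy sketch of the notification scheme (the class names are invented
for illustration, and real namespaces are not Python-level dict
subclasses, so treat this as a model only):

    class WatchedNamespace(dict):
        """Namespace that tells dependent caches when a watched name is rebound."""
        def __init__(self, *args, **kwds):
            self._watchers = {}          # name -> caches to invalidate
            dict.__init__(self, *args, **kwds)

        def watch(self, name, cache):
            self._watchers.setdefault(name, []).append(cache)

        def __setitem__(self, name, value):
            dict.__setitem__(self, name, value)
            for cache in self._watchers.get(name, ()):
                cache.invalidate()       # slow path only for watched names

    class CachedExpr(object):
        """Caches the result of a zero-argument callable until invalidated."""
        def __init__(self, func):
            self.func, self.valid, self.value = func, False, None

        def invalidate(self):
            self.valid = False

        def __call__(self):
            if not self.valid:
                self.value, self.valid = self.func(), True
            return self.value

    ns = WatchedNamespace(x=10)
    doubled = CachedExpr(lambda: ns['x'] * 2)
    ns.watch('x', doubled)
    print(doubled())    # computed once: 20
    print(doubled())    # served from the cache
    ns['x'] = 50        # rebinding notifies the dependent cache
    print(doubled())    # recomputed: 100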

  -- Eric Sumner

From aahz at pythoncraft.com  Mon Jun 26 00:22:04 2006
From: aahz at pythoncraft.com (Aahz)
Date: Sun, 25 Jun 2006 15:22:04 -0700
Subject: [Python-Dev] Temporary Constantification
In-Reply-To: <eaaf21dc0606251512v6717ee42q7a3fa714a583b8e1@mail.gmail.com>
References: <eaaf21dc0606251512v6717ee42q7a3fa714a583b8e1@mail.gmail.com>
Message-ID: <20060625222204.GA16648@panix.com>

On Sun, Jun 25, 2006, Eric Sumner wrote:
>
> In general, a cached value can be used (rather than re-evaluating the
> expression) if:
>   - The expression has no side effects,
>   - The result of all operations is deterministic, and
>   - None of the expression parameters have changed since the cached
> value was generated
> 
> The first two can be determined at compile-time without too much
> difficulty (except possibly function calls, I'll get to those in a
> minute).  

Except for properties.  So you'd have to allow only bare names, no
attributes.  You'd also have to restrict values to immutable ones.
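For example (a throwaway class, just to make the point):

    import random

    class Noisy(object):
        @property
        def value(self):
            # arbitrary code runs on every attribute access
            return random.random()

    obj = Noisy()
    print(obj.value == obj.value)   # almost certainly False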
-- 
Aahz (aahz at pythoncraft.com)           <*>         http://www.pythoncraft.com/

"I saw `cout' being shifted "Hello world" times to the left and stopped
right there."  --Steve Gonedes

From guido at python.org  Mon Jun 26 00:36:11 2006
From: guido at python.org (Guido van Rossum)
Date: Sun, 25 Jun 2006 15:36:11 -0700
Subject: [Python-Dev] Temporary Constantification
In-Reply-To: <eaaf21dc0606251512v6717ee42q7a3fa714a583b8e1@mail.gmail.com>
References: <eaaf21dc0606251512v6717ee42q7a3fa714a583b8e1@mail.gmail.com>
Message-ID: <ca471dc20606251536n6ed00d65n4aa3f68b25937a59@mail.gmail.com>

On 6/25/06, Eric Sumner <kd5bjo at gmail.com> wrote:
> It seems to me that the main reason to provide constantification (in a
> switch statement or anywhere else) is to be able to optimize execution
> by caching the result locally for later use.  The problem comes from
> trying to determine exactly when and how a value gets calculated, as
> most values in Python are subject to change.

Actually, most values tend *not* to change -- it's just hard for the
compiler to prove this, so that it can use that fact.

For example, in practice, builtins don't change. Imported objects
(modules, and things you import from modules like constants, functions
and classes) don't change. Defined functions and classes don't change.
Manifest constants don't change. (*In practice*, you should add in all
cases.)

> If, however, there was a mechanism by which a cache could be
> invalidated when its value changes, the only remaining difference
> between cached and non-cached values becomes their execution speed and
> memory usage (and possibly impact on the execution speed of other
> code).  Thus, I propose a 'cached' keyword much like the static and
> const proposed keywords.

In all (or nearly all) the use cases that were considered so far, the
problem is more that the programmer knows that a certain expression
isn't going to change, but the compiler doesn't.

Rather, we'd like to be told (preferably at compile
time -- but runtime is better than not at all) if we assume that
something is a constant when in fact it is subject to change.

> In general, a cached value can be used (rather than re-evaluating the
> expression) if:
>   - The expression has no side effects,
>   - The result of all operations is deterministic, and
>   - None of the expression parameters have changed since the cached
> value was generated
>
> The first two can be determined at compile-time without too much
> difficulty (except possibly function calls, I'll get to those in a
> minute).  The hard issue here is knowing when parameters have changed.
>  These fall into two different categories: literals and name lookups.
> Immutable literals don't cause a problem, and mutable literals always
> have a side-effect of generating a new object.  There are two ways to
> determine if name lookups have changed:
>   1) Cache all the parameters, and check them against the current values, or
>   2) Be notified whenever one of the parameters changes.
>
> The first option requires a bunch of name lookups whenever the cached
> value is considered, which is exactly the problem that caching is
> supposed to fix.  To implement the second, each name in each namespace
> needs a list of caches that depend on the name, and all name binding
> operations need to check the list and mark all dependent caches
> invalid.  This is a small performance penalty whenever any name is
> rebound, and a large penalty whenever a watched name is rebound.
>
> Function calls can safely be considered volatile, but this would
> invalidate many of the uses for caching.  Instead, each function has
> an immutable property of being either volatile or deterministic.  Each
> deterministic function call maintains its own cache which is
> invalidated if the name to which the associated function is bound (or
> any of its parameters) is rebound.  Thus, if a function is rebound to
> something volatile, it does not force continual re-evaluation of other
> sub-expressions.  Functions should be assumed to be volatile unless
> specified otherwise (probably via a decorator).
>
> I'm not particularly familiar with the internals of Python, so I'm not
> able to actually assess the feasibility or performance implications of
> this proposal, but I think it basically covers the problem.

Unfortunately, a mechanism that would let you register a callback for
when a particular variable or attribute used in a cached expression is
changed is pretty hard to implement without affecting the performance of
code that doesn't use it. I'm afraid this is not a very likely path
towards a solution.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From greg.ewing at canterbury.ac.nz  Mon Jun 26 00:58:27 2006
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Mon, 26 Jun 2006 10:58:27 +1200
Subject: [Python-Dev] Switch statement
In-Reply-To: <ca471dc20606231347s777df21drfb2dc161c3f6ed81@mail.gmail.com>
References: <fb6fbf560606231332o6439198csbddb3e74eeb46bb6@mail.gmail.com>
	<ca471dc20606231347s777df21drfb2dc161c3f6ed81@mail.gmail.com>
Message-ID: <449F1513.7080306@canterbury.ac.nz>

Guido van Rossum wrote:
> I'm currently leaning
> towards making static expressions outside a function illegal and limit
> switches outside a function to compile-time-constant expressions.

I'm not sure I like the idea of having things that
are illegal outside a function, because it can be a
nuisance for code refactoring.

I'd be happy if case worked at the top level, but
wasn't any faster than if-elses. That wouldn't be
so bad -- top-level code is already slower due to
global variable accesses.

Also I don't care what happens if you change the
case values of a top-level case. It's undefined
behaviour anyway.

--
Greg

From guido at python.org  Mon Jun 26 01:08:19 2006
From: guido at python.org (Guido van Rossum)
Date: Sun, 25 Jun 2006 16:08:19 -0700
Subject: [Python-Dev] Simple Switch statement
In-Reply-To: <Pine.LNX.4.58.0606251602110.17937@server1.LFW.org>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<5.1.1.6.0.20060622160223.03e35500@sparrow.telecommunity.com>
	<ca471dc20606221712x7634c308m673bd8c6e186bb37@mail.gmail.com>
	<e7g74s$sdk$1@sea.gmane.org>
	<ca471dc20606231155n185e9f27vb4d10603bb39f2c@mail.gmail.com>
	<e7jrbg$749$1@sea.gmane.org> <449D9BB5.5090504@ronadam.com>
	<024c01c697e0$6fa2b340$dc00000a@RaymondLaptop1>
	<ca471dc20606250916i4469c060q253ae2c7cf29f966@mail.gmail.com>
	<Pine.LNX.4.58.0606251602110.17937@server1.LFW.org>
Message-ID: <ca471dc20606251608w381b1942u4da4496b25a6bdc3@mail.gmail.com>

On 6/25/06, Ka-Ping Yee <python-dev at zesty.ca> wrote:
> On Sun, 25 Jun 2006, Guido van Rossum wrote:
> > In your eagerness to
> > rule out surprises, you're creating the biggest surprise of all: the
> > restriction to literals is certainly a surprise!
>
> I disagree.  Perhaps what we mean by "surprise" is different.

Apparently. I would find it very surprising if a language as dynamic
as Python didn't allow expressions for cases. Users learning a language
generalize from examples. They see that expressions can be used
whenever constants can be used. When they find that one particular
context doesn't allow an expression, they will surely be surprised.

I'm utterly unconvinced by Raymond's arguments for his proposal.
Disallowing names goes against half a century of programming wisdom.

Here's an argument for allowing names (this argument has been used
successfully for using names instead of string literals in many APIs):
if there is a spelling error in the string literal, the case will
silently be ignored, and who knows when the bug is detected. If there
is a spelling error in a NAME, however, the error will be caught as
soon as it is evaluated.
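
To make that concrete, with a plain dict standing in for the proposed
switch (the names here are invented for the example):

    GET, POST = "GET", "POST"

    dispatch = {"GTE": 1, POST: 2}   # typo in a string literal: the GET
    print(dispatch.get(GET))         # case silently never matches -> None

    try:
        dispatch = {GTE: 1, POST: 2} # typo in a NAME: caught as soon as
    except NameError:                # the expression is evaluated
        print("NameError raised immediately")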

> In Raymond's design, there is a simple rule for what's allowed in a case.
> The whole statement can be described in very few words:
>
>     Each case is a literal integer, string, or tuple of integers
>     or strings.  Execution jumps to the first case that matches the
>     value of the switch expression (or to 'default' if no match).
>
> That's it.  The simpler the rule, the less surprising it is.  It would
> take a lot more words to explain the behaviour of something like Nick's
> 'once'-based proposal.  Here's an attempt:
>
>     Each case consists of an arbitrary expression, but the expression
>     may not refer to any local variables or arguments of the immediately
>     surrounding function definition.  The value of each case is computed
>     when the surrounding function definition is compiled.  If any two
>     cases have the same value, an exception is thrown at compile time.
>     At runtime, execution jumps to the case whose previously fixed value
>     matches the value of the switch expression (or to 'default' if no
>     match).
>
> Not only is that longer to describe, it's also more difficult for a
> beginning programmer to understand, since it requires knowing when
> different parts of a program are compiled (and what happens if the
> same part is compiled more than once).

But beginning programmers don't read descriptions like that. They
generalize from examples. It's the experts who need to have the
complete unambiguous rules so they can know where the boundaries of
safe code are. Beginners tend not to get near the boundaries at all.
The experts can handle the unvarnished truth. (Hey, they can handle
metaclasses. :-)

Here's how I envision beginners learning about the switch statement
(after they're utterly familiar with if/elif). We show them a few
examples, some involving literals, some involving manifest constants
(either defined locally at the module level, or imported). We give
them *one* simple rule: "the cases must be run-time constants" (an
intentionally vague term). Then let them loose. I expect that the
results will be total satisfaction. I expect that the same approach
should work for users who are beginning Python programmers but who are
experienced in other languages (say, Java).

The most likely problem a newbie will run into if they forget about
the "cases must be constants" rule is trying to use a locally computed
value as a case expression. This will most likely involve a local
variable, and the proposed switch semantics specifically disallow
these. So it'll be a compile time error -- better than most rookie
mistakes!

Yes, it's possible that someone has a global variable that really
varies between function invocations, and they might use it in a
switch. This will fail (give the wrong answer) silently (without an
immediate error). But I think it's pretty unlikely that someone will
try this -- they must not have been paying attention in two different
classes: first when it was explained that variable globals are usually
a bad idea; second when it was explained that switch cases must be
(run-time) constants.

Note that the entire class of well-known problems (which surprise
almost every new Python programmer at least once) that are due to
one-time evaluation of mutable initializers (both parameter defaults
and class variables) is ruled out here, by the requirement that switch
cases be hashable, which in practice means immutable.
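
(The constraint is the same one dict keys already impose, and a dict is
roughly what a switch's jump table would be:)

    jump_table = {1: "int", "spam": "string", (2, 3): "tuple"}  # all hashable

    try:
        jump_table[[4, 5]] = "list"   # mutable, hence unhashable
    except TypeError:
        print("rejected up front: lists are unhashable")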

Now, I'm not convinced that we need a switch statement. There are lots
of considerations, and I sympathize with those folks who argue that
Python doesn't need it. But I'd rather have no switch statement than
Raymond's castrated proposal.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Mon Jun 26 01:09:46 2006
From: guido at python.org (Guido van Rossum)
Date: Sun, 25 Jun 2006 16:09:46 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <449F1513.7080306@canterbury.ac.nz>
References: <fb6fbf560606231332o6439198csbddb3e74eeb46bb6@mail.gmail.com>
	<ca471dc20606231347s777df21drfb2dc161c3f6ed81@mail.gmail.com>
	<449F1513.7080306@canterbury.ac.nz>
Message-ID: <ca471dc20606251609i7970886fy508579c6babb2a2@mail.gmail.com>

On 6/25/06, Greg Ewing <greg.ewing at canterbury.ac.nz> wrote:
> Guido van Rossum wrote:
> > I'm currently leaning
> > towards making static expressions outside a function illegal and limit
> > switches outside a function to compile-time-constant expressions.
>
> I'm not sure I like the idea of having things that
> are illegal outside a function, because it can be a
> nuisance for code refactoring.
>
> I'd be happy if case worked at the top level, but
> wasn't any faster than if-elses. That wouldn't be
> so bad -- top-level code is already slower due to
> global variable accesses.
>
> Also I don't care what happens if you change the
> case values of a top-level case. It's undefined
> behaviour anyway.

Fair enough. I wasn't leaning very strongly anyway. :-)

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From python-dev at zesty.ca  Mon Jun 26 01:12:07 2006
From: python-dev at zesty.ca (Ka-Ping Yee)
Date: Sun, 25 Jun 2006 18:12:07 -0500 (CDT)
Subject: [Python-Dev] Simple Switch statement
In-Reply-To: <e7muoc$tjl$1@sea.gmane.org>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<5.1.1.6.0.20060622135620.01ea9bf8@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622150243.03b0bd70@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622160223.03e35500@sparrow.telecommunity.com>
	<ca471dc20606221712x7634c308m673bd8c6e186bb37@mail.gmail.com>
	<e7g74s$sdk$1@sea.gmane.org>
	<ca471dc20606231155n185e9f27vb4d10603bb39f2c@mail.gmail.com>
	<e7jrbg$749$1@sea.gmane.org> <449D9BB5.5090504@ronadam.com>
	<024c01c697e0$6fa2b340$dc00000a@RaymondLaptop1>
	<ca471dc20606250916i4469c060q253ae2c7cf29f966@mail.gmail.com>
	<5.1.1.6.0.20060625151015.01efc6b0@sparrow.telecommunity.com>
	<449EF402.3030408@ewtllc.com> <e7muoc$tjl$1@sea.gmane.org>
Message-ID: <Pine.LNX.4.58.0606251806290.17937@server1.LFW.org>

On Mon, 26 Jun 2006, Georg Brandl wrote:
> Raymond Hettinger wrote:
> >    five = 5
> >    eight = [8]
> >    def f(x, six=6):
> >           seven =  7
> >           a = static(five + 4)    # this is legal
> >           b = static(six + 4)      # this is illegal
> >           c = static(seven + 4) # this is illegal
> >           d = static(eight + [4]) # this is illegal
>
> Why would the last line be illegal?

I believe Raymond is assuming it would be illegal because it's mutable.
I don't think much has been said about whether static(<EXPR>) should be
allowed to yield a mutable value, but if we did allow that, it might
open up an avenue to much confusion.  (I join the chorus of voices that
dislike the name 'static' for this feature.)

Whether or not 'eight + [4]' is allowed in 'static', it certainly
wouldn't be allowed after 'switch' or 'case' since it's unhashable.


-- ?!ng

From kd5bjo at gmail.com  Mon Jun 26 01:17:47 2006
From: kd5bjo at gmail.com (Eric Sumner)
Date: Sun, 25 Jun 2006 18:17:47 -0500
Subject: [Python-Dev] Temporary Constantification
In-Reply-To: <ca471dc20606251536n6ed00d65n4aa3f68b25937a59@mail.gmail.com>
References: <eaaf21dc0606251512v6717ee42q7a3fa714a583b8e1@mail.gmail.com>
	<ca471dc20606251536n6ed00d65n4aa3f68b25937a59@mail.gmail.com>
Message-ID: <eaaf21dc0606251617q68366733u22350c72201b13b8@mail.gmail.com>

On 6/25/06, Guido van Rossum <guido at python.org> wrote:
> Unfortunately, a mechanism that would let you register a callback for
> when a particular variable or attribute used in a cached expression is
> changed is pretty hard to implement without affecting the performance of
> code that doesn't use it. I'm afraid this is not a very likely path
> towards a solution.

I could make a strong argument that it is actually impossible to
implement without affecting the performance of other code; the only
issue is whether or not the impact is acceptable.  I may be wrong, but
I think that this particular scheme minimizes the impact:
  - There is a bit more data to store in every namespace
  - There is no change to dereferencing names; no test is required, no
callback is generated
  - Binding to a name that currently has no binding simply requires
allocating the extra memory and clearing it.
  - Binding to a name that is bound and does have callbacks is slow,
but those are supposed to be constant *in practice* anyway.
  - Binding to a name that is already bound, but has no callbacks
requires a test on a single variable against a constant.

Without knowing more about the internals of Python (such as how long a
check of a single variable takes relative to binding a new value to a
name), I can't properly evaluate how much of a problem this would be.

  -- Eric Sumner

From guido at python.org  Mon Jun 26 01:23:24 2006
From: guido at python.org (Guido van Rossum)
Date: Sun, 25 Jun 2006 16:23:24 -0700
Subject: [Python-Dev] Simple Switch statement
In-Reply-To: <Pine.LNX.4.58.0606251806290.17937@server1.LFW.org>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<ca471dc20606231155n185e9f27vb4d10603bb39f2c@mail.gmail.com>
	<e7jrbg$749$1@sea.gmane.org> <449D9BB5.5090504@ronadam.com>
	<024c01c697e0$6fa2b340$dc00000a@RaymondLaptop1>
	<ca471dc20606250916i4469c060q253ae2c7cf29f966@mail.gmail.com>
	<5.1.1.6.0.20060625151015.01efc6b0@sparrow.telecommunity.com>
	<449EF402.3030408@ewtllc.com> <e7muoc$tjl$1@sea.gmane.org>
	<Pine.LNX.4.58.0606251806290.17937@server1.LFW.org>
Message-ID: <ca471dc20606251623y693e6fa0ta0fc85ee2044032e@mail.gmail.com>

On 6/25/06, Ka-Ping Yee <python-dev at zesty.ca> wrote:
> On Mon, 26 Jun 2006, Georg Brandl wrote:
> > Raymond Hettinger wrote:
> > >    five = 5
> > >    eight = [8]
> > >    def f(x, six=6):
> > >           seven =  7
> > >           a = static(five + 4)    # this is legal
> > >           b = static(six + 4)      # this is illegal
> > >           c = static(seven + 4) # this is illegal
> > >           d = static(eight + [4]) # this is illegal
> >
> > Why would the last line be illegal?
>
> I believe Raymond is assuming it would be illegal because it's mutable.
> I don't think much has been said about whether static(<EXPR>) should be
> allowed to yield a mutable value, but if we did allow that, it might
> open up an avenue to much confusion.  (I join the chorus of voices that
> dislike the name 'static' for this feature.)

What do you think of Nick C's 'once'?

> Whether or not 'eight + [4]' is allowed in 'static', it certainly
> wouldn't be allowed after 'switch' or 'case' since it's unhashable.

Right. But there are all sorts of objects that are compared by object
identity (e.g. classes, modules, even functions) which may contain
mutable components but are nevertheless "constant" for the purpose of
switch or optimization. Let's not confuse this concept of constness
with immutability.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Mon Jun 26 01:29:41 2006
From: guido at python.org (Guido van Rossum)
Date: Sun, 25 Jun 2006 16:29:41 -0700
Subject: [Python-Dev] Temporary Constantification
In-Reply-To: <eaaf21dc0606251617q68366733u22350c72201b13b8@mail.gmail.com>
References: <eaaf21dc0606251512v6717ee42q7a3fa714a583b8e1@mail.gmail.com>
	<ca471dc20606251536n6ed00d65n4aa3f68b25937a59@mail.gmail.com>
	<eaaf21dc0606251617q68366733u22350c72201b13b8@mail.gmail.com>
Message-ID: <ca471dc20606251629x2f9dd9h353854bae288b49@mail.gmail.com>

On 6/25/06, Eric Sumner <kd5bjo at gmail.com> wrote:
> On 6/25/06, Guido van Rossum <guido at python.org> wrote:
> > Unfortunately, a mechanism that would let you register a callback for
> > when a particular variable or attribute used in a cached expression is
> > changed is pretty hard to implement without affecting the performance of
> > code that doesn't use it. I'm afraid this is not a very likely path
> > towards a solution.
>
> I could make a strong argument that it is actually impossible to
> implement without affecting the performance of other code; the only
> issue is whether or not the impact is acceptable.  I may be wrong, but
> I think that this particular scheme minimizes the impact:
>   - There is a bit more data to store in every namespace
>   - There is no change to dereferencing names; no test is required, no
> callback is generated
>   - Binding to a name that currently has no binding simply requires
> allocating the extra memory and clearing it.
>   - Binding to a name that is bound and does have callbacks is slow,
> but those are supposed to be constant *in practice* anyway.
>   - Binding to a name that is already bound, but has no callbacks
> requires a test on a single variable against a constant.
>
> Without knowing more about the internals of Python (such as how long a
> check of a single variable takes relative to binding a new value to a
> name), I can't properly evaluate how much of a problem this would be.

Your proposal would require a change to the dict type to set a
callback to be called when a particular key is modified (not a generic
callback when any key is modified).

That seems pretty tricky to do with no impact, given how highly dicts
are optimized.

Also, allowing attribute references is a whole new can of worms, since
attributes aren't necessarily implemented as standard dictionary-based
namespaces. Aahz already pointed this out.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From python-dev at zesty.ca  Mon Jun 26 02:43:27 2006
From: python-dev at zesty.ca (Ka-Ping Yee)
Date: Sun, 25 Jun 2006 19:43:27 -0500 (CDT)
Subject: [Python-Dev] Simple Switch statement
In-Reply-To: <ca471dc20606251623y693e6fa0ta0fc85ee2044032e@mail.gmail.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org> 
	<ca471dc20606231155n185e9f27vb4d10603bb39f2c@mail.gmail.com> 
	<e7jrbg$749$1@sea.gmane.org> <449D9BB5.5090504@ronadam.com> 
	<024c01c697e0$6fa2b340$dc00000a@RaymondLaptop1> 
	<ca471dc20606250916i4469c060q253ae2c7cf29f966@mail.gmail.com> 
	<5.1.1.6.0.20060625151015.01efc6b0@sparrow.telecommunity.com> 
	<449EF402.3030408@ewtllc.com> <e7muoc$tjl$1@sea.gmane.org> 
	<Pine.LNX.4.58.0606251806290.17937@server1.LFW.org>
	<ca471dc20606251623y693e6fa0ta0fc85ee2044032e@mail.gmail.com>
Message-ID: <Pine.LNX.4.58.0606251936500.17937@server1.LFW.org>

On Sun, 25 Jun 2006, Guido van Rossum wrote:
> What do you think of Nick C's 'once'?

It's a bit closer to the right meaning... but what about:

    def f(x):
        def g(y):
            return y + once x
        return g

Does "once" mean not really once here, but "once for each new function
object that's created for g"?

> Right. But there are all sorts of objects that are compared by object
> identity (e.g. classes, modules, even functions) which may contain
> mutable components but are nevertheless "constant" for the purpose of
> switch or optimization. Let's not confuse this concept of constness
> with immutability.

That's a good point.  We need a concept like "stable for equality"
separate from "constant", since "constant" and "immutable" will mislead
those who are used to the meanings of these words in other languages.


-- ?!ng

From aahz at pythoncraft.com  Mon Jun 26 02:47:26 2006
From: aahz at pythoncraft.com (Aahz)
Date: Sun, 25 Jun 2006 17:47:26 -0700
Subject: [Python-Dev] 2.5b1 Windows install
Message-ID: <20060626004726.GA24988@panix.com>

Has anyone else tried doing an admin install with "compile .py files"
checked?  It's causing my install to blow up, but I'd prefer to assume
it's some weird Windows config/bug unless other people also have it, in
which case I'll file an SF report.
-- 
Aahz (aahz at pythoncraft.com)           <*>         http://www.pythoncraft.com/

"I saw `cout' being shifted "Hello world" times to the left and stopped
right there."  --Steve Gonedes

From pje at telecommunity.com  Mon Jun 26 02:46:46 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Sun, 25 Jun 2006 20:46:46 -0400
Subject: [Python-Dev] Simple Switch statement
In-Reply-To: <449EF402.3030408@ewtllc.com>
References: <5.1.1.6.0.20060625151015.01efc6b0@sparrow.telecommunity.com>
	<17547.19802.361151.705599@montanaro.dyndns.org>
	<5.1.1.6.0.20060622135620.01ea9bf8@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622150243.03b0bd70@sparrow.telecommunity.com>
	<5.1.1.6.0.20060622160223.03e35500@sparrow.telecommunity.com>
	<ca471dc20606221712x7634c308m673bd8c6e186bb37@mail.gmail.com>
	<e7g74s$sdk$1@sea.gmane.org>
	<ca471dc20606231155n185e9f27vb4d10603bb39f2c@mail.gmail.com>
	<e7jrbg$749$1@sea.gmane.org> <449D9BB5.5090504@ronadam.com>
	<024c01c697e0$6fa2b340$dc00000a@RaymondLaptop1>
	<ca471dc20606250916i4469c060q253ae2c7cf29f966@mail.gmail.com>
	<5.1.1.6.0.20060625151015.01efc6b0@sparrow.telecommunity.com>
Message-ID: <5.1.1.6.0.20060625204418.01e9da78@sparrow.telecommunity.com>

At 01:37 PM 6/25/2006 -0700, Raymond Hettinger wrote:

>>>No thanks.  That is its own can of worms.  The obvious solutions (like const
>>>declarations, macros, or a syntax to force compile-time expression 
>>>evaluation)
>>>are unlikely to sit well because they run afoul of Python's deeply ingrained
>>>dynamism.
>>>
>>
>>I think perhaps you haven't been paying close attention to Fredrik's 
>>proposal.
>Yes, I have been.  That is one of the three options I listed above.
>Each has its own issues.
>
>The static() keyword works like Forth's brackets for forcing compile-time 
>evaluation.

No, it doesn't; this is why I suggested that you haven't been paying close 
attention.  The evaluation is at function definition time, not compile time.


>The issue for us is that unlike other Python expressions, there are 
>inconvenient limitations on what can be expressed inside:
>
>   five = 5
>   eight = [8]
>   def f(x, six=6):
>          seven =  7
>          a = static(five + 4)    # this is legal
>          b = static(six + 4)      # this is illegal
>          c = static(seven + 4) # this is illegal
>          d = static(eight + [4]) # this is illegal

The last one is perfectly legal, and the middle two make no sense.


From greg.ewing at canterbury.ac.nz  Mon Jun 26 02:48:26 2006
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Mon, 26 Jun 2006 12:48:26 +1200
Subject: [Python-Dev] ImportWarning flood
In-Reply-To: <449D0A27.5080506@v.loewis.de>
References: <20060624075654.99693.qmail@web31507.mail.mud.yahoo.com>
	<449D0A27.5080506@v.loewis.de>
Message-ID: <449F2EDA.5070104@canterbury.ac.nz>

Martin v. Löwis wrote:

> Actually, your application *was* pretty close to being broken a few
> weeks ago, when Guido wanted to drop the requirement that a package
> must contain an __init__ file.

BTW, when that was being discussed, did anyone consider
allowing a directory to be given a .py suffix as an
alternative way to mark it as a package?

-- 
Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | Carpe post meridiem!          	  |
Christchurch, New Zealand	   | (I'm not a morning person.)          |
greg.ewing at canterbury.ac.nz	   +--------------------------------------+

From guido at python.org  Mon Jun 26 02:49:49 2006
From: guido at python.org (Guido van Rossum)
Date: Sun, 25 Jun 2006 17:49:49 -0700
Subject: [Python-Dev] Simple Switch statement
In-Reply-To: <Pine.LNX.4.58.0606251936500.17937@server1.LFW.org>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<449D9BB5.5090504@ronadam.com>
	<024c01c697e0$6fa2b340$dc00000a@RaymondLaptop1>
	<ca471dc20606250916i4469c060q253ae2c7cf29f966@mail.gmail.com>
	<5.1.1.6.0.20060625151015.01efc6b0@sparrow.telecommunity.com>
	<449EF402.3030408@ewtllc.com> <e7muoc$tjl$1@sea.gmane.org>
	<Pine.LNX.4.58.0606251806290.17937@server1.LFW.org>
	<ca471dc20606251623y693e6fa0ta0fc85ee2044032e@mail.gmail.com>
	<Pine.LNX.4.58.0606251936500.17937@server1.LFW.org>
Message-ID: <ca471dc20606251749o75bbdc43w65be88d8aaa6c1dd@mail.gmail.com>

On 6/25/06, Ka-Ping Yee <python-dev at zesty.ca> wrote:
> On Sun, 25 Jun 2006, Guido van Rossum wrote:
> > What do you think of Nick C's 'once'?
>
> It's a bit closer to the right meaning... but what about:
>
>     def f(x):
>         def g(y):
>             return y + once x
>         return g
>
> Does "once" mean not really once here, but "once for each new function
> object that's created for g"?

He specifically wants the latter semantics because it solves the
problem of binding the value of a loop control variable in an outer
scope:

  def f(n):
    return [(lambda: once i) for i in range(n)]

should return n functions returning the values 0 through n-1. Without
the once it returns n identical functions all returning n-1; this is
due to outer-scope references referencing variables, not values. (In
Scheme this is solved by making the for loop create a new variable for
each iteration, but that's not Pythonic.)
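
For reference, the workaround people use today is the default-argument
trick, which gives roughly the per-function-object binding described
above:

    def f(n):
        # i=i is evaluated once per lambda, when each lambda is defined
        return [(lambda i=i: i) for i in range(n)]

    print([g() for g in f(3)])        # [0, 1, 2]

    def broken(n):
        return [(lambda: i) for i in range(n)]

    print([g() for g in broken(3)])   # [2, 2, 2]: every closure sees the last i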

> > Right. But there are all sorts of objects that are compared by object
> > identity (e.g. classes, modules, even functions) which may contain
> > mutable components but are nevertheless "constant" for the purpose of
> > switch or optimization. Let's not confuse this concept of constness
> > with immutability.
>
> That's a good point.  We need a concept like "stable for equality"
> separate from "constant", since "constant" and "immutable" will mislead
> those who are used to the meanings of these words in other languages.

Anyone familiar with const in C++ will have a good grasp of the
infinite shades of gray it can express. :-)

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Mon Jun 26 02:51:17 2006
From: guido at python.org (Guido van Rossum)
Date: Sun, 25 Jun 2006 17:51:17 -0700
Subject: [Python-Dev] ImportWarning flood
In-Reply-To: <20060624140005.29014.1992302539.divmod.quotient.8985@ohm>
References: <449D0A27.5080506@v.loewis.de>
	<20060624140005.29014.1992302539.divmod.quotient.8985@ohm>
Message-ID: <ca471dc20606251751q36f2accbr3ff3fe8fbd24b20c@mail.gmail.com>

On 6/24/06, Jean-Paul Calderone <exarkun at divmod.com> wrote:
> >Actually, your application *was* pretty close to being broken a few
> >weeks ago, when Guido wanted to drop the requirement that a package
> >must contain an __init__ file. In that case, "import math" would have
> >imported the directory, and given you an empty package.
>
> But this change was *not* made, and afaict it is not going to be made.

Correct. We'll stick with the warning. (At least until Py3k but most
likely also in Py3k.)

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From bjourne at gmail.com  Mon Jun 26 03:10:22 2006
From: bjourne at gmail.com (=?ISO-8859-1?Q?BJ=F6rn_Lindqvist?=)
Date: Mon, 26 Jun 2006 03:10:22 +0200
Subject: [Python-Dev] Switch statement
In-Reply-To: <449C4C2C.60006@comcast.net>
References: <mailman.27711.1151087287.27774.python-dev@python.org>
	<449C4C2C.60006@comcast.net>
Message-ID: <740c3aec0606251810n37eeb4f9xab885d1e8fdd824c@mail.gmail.com>

On 6/23/06, Edward C. Jones <edcjones at comcast.net> wrote:
> Python is a beautiful simple language with a rich standard library.
> Python has done fine without a switch statement up to now. Guido left it
> out of the original language for some reason (my guess is simplicity).
> Why is it needed now? What would be added next: do while or goto? The
> urge to add syntax should be resisted unless there is a high payoff
> (such as yield).
>
> There are much better ways for the developers to spend their time and
> energy (refactoring os comes to mind).
>
> Please keep Python simple.
>
> -1 on the switch statement.

I agree. IMHO switch is a useless statement which can cause many
problems in any language. It misleads programmers into dispatching the
wrong way. If you have a switch with < 5 cases, an if-elif chain fits
just fine. If the switch is larger, use a dictionary that maps values
to functions. In C, a switch block often starts small (40 lines)
but grows as the number of values to dispatch on increases. Soon it
becomes a 500-line monstrosity that is impossible to refactor because
variables from the enclosing scope are used frivolously.

I don't get the speed argument either. Who cares that if-elif chains
are O(n) and switch O(1)? If n > 10 you are doing something wrong
anyway. I don't think I have ever seen, in any language, a switch
construct that, barring speed concerns, wouldn't have been better
written using some other dispatch mechanism.
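
For example (handler names invented for illustration):

    def handle_get(path):
        return "GET %s" % path

    def handle_put(path):
        return "PUT %s" % path

    dispatch = {
        "GET": handle_get,
        "PUT": handle_put,
    }

    def handle(method, path):
        try:
            handler = dispatch[method]
        except KeyError:
            raise ValueError("unsupported method: %r" % method)
        return handler(path)

    print(handle("GET", "/index.html"))   # GET /index.html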

-- 
mvh Björn

From fdrake at acm.org  Mon Jun 26 03:16:40 2006
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Sun, 25 Jun 2006 21:16:40 -0400
Subject: [Python-Dev] ImportWarning flood
In-Reply-To: <449F2EDA.5070104@canterbury.ac.nz>
References: <20060624075654.99693.qmail@web31507.mail.mud.yahoo.com>
	<449D0A27.5080506@v.loewis.de> <449F2EDA.5070104@canterbury.ac.nz>
Message-ID: <200606252116.41300.fdrake@acm.org>

On Sunday 25 June 2006 20:48, Greg Ewing wrote:
 > BTW, when that was being discussed, did anyone consider
 > allowing a directory to be given a .py suffix as an
 > alternative way to mark it as a package?

I'd certainly be a lot happier with that than with the current behavior.  
Silly little warnings about perfectly good data-only directories are just 
silly.


  -Fred

-- 
Fred L. Drake, Jr.   <fdrake at acm.org>

From guido at python.org  Mon Jun 26 03:42:11 2006
From: guido at python.org (Guido van Rossum)
Date: Sun, 25 Jun 2006 18:42:11 -0700
Subject: [Python-Dev] ImportWarning flood
In-Reply-To: <200606252116.41300.fdrake@acm.org>
References: <20060624075654.99693.qmail@web31507.mail.mud.yahoo.com>
	<449D0A27.5080506@v.loewis.de> <449F2EDA.5070104@canterbury.ac.nz>
	<200606252116.41300.fdrake@acm.org>
Message-ID: <ca471dc20606251842k2de8b1feo632494fbf4815616@mail.gmail.com>

On 6/25/06, Fred L. Drake, Jr. <fdrake at acm.org> wrote:
> On Sunday 25 June 2006 20:48, Greg Ewing wrote:
>  > BTW, when that was being discussed, did anyone consider
>  > allowing a directory to be given a .py suffix as an
>  > alternative way to mark it as a package?
>  :-)
> I'd certainly be a lot happier with that than with the current behavior.
> Silly little warnings about perfectly good data-only directories are just
> silly.

And silly whining about warnings for silly name conflicts is just as silly. :-)

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From foom at fuhm.net  Mon Jun 26 03:47:12 2006
From: foom at fuhm.net (James Y Knight)
Date: Sun, 25 Jun 2006 21:47:12 -0400
Subject: [Python-Dev] ImportWarning flood
In-Reply-To: <20060624172920.6639.qmail@web31512.mail.mud.yahoo.com>
References: <20060624172920.6639.qmail@web31512.mail.mud.yahoo.com>
Message-ID: <FEDC1056-DB9A-449C-824F-C94CC08ECCAB@fuhm.net>


On Jun 24, 2006, at 1:29 PM, Ralf W. Grosse-Kunstleve wrote:

> --- Jean-Paul Calderone <exarkun at divmod.com> wrote:
>> I think it is safe to say that Twisted is more widely used than  
>> anything
>> Google has yet released.  Twisted also has a reasonably plausible
>> technical reason to dislike this change.  Google has a bunch of  
>> engineers
>> who, apparently, cannot remember to create an empty __init__.py  
>> file in
>> some directories sometimes.
>
> Simply adding a note to the ImportError message would solve this  
> problem "just
> in time":
>
>>>> import mypackage.foo
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
> ImportError: No module named mypackage.foo
>     Note that subdirectories are searched for imports only if they  
> contain an
>     __init__.py file: http://www.python.org/doc/essays/packages.html
>

I also dislike the warning solution. Making the ImportError message  
more verbose seems like a much nicer solution.

James

From greg.ewing at canterbury.ac.nz  Mon Jun 26 04:44:17 2006
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Mon, 26 Jun 2006 14:44:17 +1200
Subject: [Python-Dev] Alternatives to switch?
In-Reply-To: <449E11BD.7010603@acm.org>
References: <449E11BD.7010603@acm.org>
Message-ID: <449F4A01.7030900@canterbury.ac.nz>

Talin wrote:

>     def outer():
>        def inner(x):
>           switch(x):
>           case 1: ...
>           case 2: ...
>           case 3: ...
> 
>        return inner
> 
> If the switch cases are bound at the time that 'inner' is defined, it 
> means that the hash table will be rebuilt each time 'outer' is called. 

I was just thinking the same thing...

> But the compiler has no idea which of these two cases is true.

Actually I think it does, by looking at the scopes of
names referred to in the case expressions. Suppose
there were a rule that case expressions are evaluated
as early as possible, and in the outermost possible
scope that contains all the names that they reference.
Then, in

   fred = 1
   mary = 2

   def f():
      ...
      def g():
         ...
         switch x:
            case fred:
               ...
            case mary:

the case expressions would be evaluated at the module
scope, somewhere between the binding of mary and the
definition of f().

The distinction between module and function scopes
could then go away. A case expression could reference
a local if it wanted, at the cost of the case expression
being evaluated for every call to the function. A
case at the module level would just be an instance
of that.

-- 
Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | Carpe post meridiem!          	  |
Christchurch, New Zealand	   | (I'm not a morning person.)          |
greg.ewing at canterbury.ac.nz	   +--------------------------------------+

From rwgk at yahoo.com  Mon Jun 26 05:16:36 2006
From: rwgk at yahoo.com (Ralf W. Grosse-Kunstleve)
Date: Sun, 25 Jun 2006 20:16:36 -0700 (PDT)
Subject: [Python-Dev] ImportWarning flood
In-Reply-To: <ca471dc20606251842k2de8b1feo632494fbf4815616@mail.gmail.com>
Message-ID: <20060626031636.64034.qmail@web31514.mail.mud.yahoo.com>

--- Guido van Rossum <guido at python.org> wrote:

> On 6/25/06, Fred L. Drake, Jr. <fdrake at acm.org> wrote:
> > On Sunday 25 June 2006 20:48, Greg Ewing wrote:
> >  > BTW, when that was being discussed, did anyone consider
> >  > allowing a directory to be given a .py suffix as an
> >  > alternative way to mark it as a package?
> >  :-)
> > I'd certainly be a lot happier with that than with the current behavior.
> > Silly little warnings about perfectly good data-only directories are just
> > silly.
> 
> And silly whining about warnings for silly name conflicts are just as silly.
> :-)

I cannot smile here. I anticipate real damage in terms of $$$. To see what led
me to the subject of this thread, please take a quick look here, which is the
result of running (most) of our unit tests:

    http://cci.lbl.gov/~rwgk/tmp/py25b1_ImortWarning_flood

I can work around it, sure. Everybody can work around it, of course. But
consider that one hour of a professional person is at least $100 with benefits
etc. included. (If that sounds high, I know people charging much more than
that; also consider that the going rate for a car mechanic in the bay area is
$90, as you probably know.) Now say you have 1000 groups of developers having
to work around the warning (I bet you have more). There will be discussions,
alternatives will be tried and discarded, etc. Say that eats about 10 man-hours
per group before the issue is settled, which again is a very conservative
estimate, I believe. That makes a total of $1,000,000 in damages, at least. Is
that warning really worth a million dollars?


__________________________________________________
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com 

From fredrik at pythonware.com  Mon Jun 26 07:55:51 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Mon, 26 Jun 2006 07:55:51 +0200
Subject: [Python-Dev] Simple Switch statement
In-Reply-To: <449EF402.3030408@ewtllc.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>	<5.1.1.6.0.20060622135620.01ea9bf8@sparrow.telecommunity.com>	<5.1.1.6.0.20060622150243.03b0bd70@sparrow.telecommunity.com>	<5.1.1.6.0.20060622160223.03e35500@sparrow.telecommunity.com>	<ca471dc20606221712x7634c308m673bd8c6e186bb37@mail.gmail.com>	<e7g74s$sdk$1@sea.gmane.org>	<ca471dc20606231155n185e9f27vb4d10603bb39f2c@mail.gmail.com>	<e7jrbg$749$1@sea.gmane.org>	<449D9BB5.5090504@ronadam.com>	<024c01c697e0$6fa2b340$dc00000a@RaymondLaptop1>	<ca471dc20606250916i4469c060q253ae2c7cf29f966@mail.gmail.com>	<5.1.1.6.0.20060625151015.01efc6b0@sparrow.telecommunity.com>
	<449EF402.3030408@ewtllc.com>
Message-ID: <e7nst4$uqu$1@sea.gmane.org>

Raymond Hettinger wrote:

> The static() keyword works like Forth's brackets for forcing 
> compile-time evaluation.  The issue for us is that unlike other Python 
> expressions, there are inconvenient limitations on what can be 
> expressed inside:
> 
>    five = 5
>    eight = [8]
>    def f(x, six=6):
>           seven =  7
>           a = static(five + 4)    # this is legal
>           b = static(six + 4)      # this is illegal
>           c = static(seven + 4) # this is illegal
>           d = static(eight + [4]) # this is illegal

bzzt.  try again.

</F>


From fredrik at pythonware.com  Mon Jun 26 08:07:54 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Mon, 26 Jun 2006 08:07:54 +0200
Subject: [Python-Dev] Simple Switch statement
In-Reply-To: <ca471dc20606251608w381b1942u4da4496b25a6bdc3@mail.gmail.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>	<5.1.1.6.0.20060622160223.03e35500@sparrow.telecommunity.com>	<ca471dc20606221712x7634c308m673bd8c6e186bb37@mail.gmail.com>	<e7g74s$sdk$1@sea.gmane.org>	<ca471dc20606231155n185e9f27vb4d10603bb39f2c@mail.gmail.com>	<e7jrbg$749$1@sea.gmane.org>
	<449D9BB5.5090504@ronadam.com>	<024c01c697e0$6fa2b340$dc00000a@RaymondLaptop1>	<ca471dc20606250916i4469c060q253ae2c7cf29f966@mail.gmail.com>	<Pine.LNX.4.58.0606251602110.17937@server1.LFW.org>
	<ca471dc20606251608w381b1942u4da4496b25a6bdc3@mail.gmail.com>
Message-ID: <e7ntjn$1o8$1@sea.gmane.org>

Guido van Rossum wrote:

> Here's an argument for allowing names (this argument has been used
> successfully for using names instead of string literals in many APIs):
> if there is a spelling error in the string literal, the case will
> silently be ignored, and who knows when the bug is detected. If there
> is a spelling error in a NAME, however, the error will be caught as
> soon as it is evaluated.

which is the rationale for using names in SRE, of course.

and adding a proper switch statement will make this approach a bit more 
robust; if all cases are marked as static (or made static by default), 
all case expressions will be evaluated up front.

</F>


From martin at v.loewis.de  Mon Jun 26 08:20:25 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Mon, 26 Jun 2006 08:20:25 +0200
Subject: [Python-Dev] 2.5b1 Windows install
In-Reply-To: <20060626004726.GA24988@panix.com>
References: <20060626004726.GA24988@panix.com>
Message-ID: <449F7CA9.9010105@v.loewis.de>

Aahz wrote:
> Has anyone else tried doing an admin install with "compile .py files"
> checked?  It's causing my install to blow up, but I'd prefer to assume
> it's some weird Windows config/bug unless other people also have it, in
> which case I'll file an SF report.

It works fine for me. One way for it to fail is if you have uncompilable
modules in the target directory. Currently, it invokes

  [TARGETDIR]python.exe -Wi [TARGETDIR]Lib\compileall.py -f -x
      bad_coding|badsyntax|site-packages [TARGETDIR]Lib

where TARGETDIR is, well, the target directory of the installation.
You could try to run this after you installed Python without pyc
compilation, to see whether it succeeds.
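
Roughly the same check can also be run from a Python prompt (the
install path below is only an example):

    import compileall, re
    compileall.compile_dir(r"C:\Python25\Lib", force=True,
                           rx=re.compile(r"bad_coding|badsyntax|site-packages"))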

Regards,
martin

From martin at v.loewis.de  Mon Jun 26 08:29:49 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Mon, 26 Jun 2006 08:29:49 +0200
Subject: [Python-Dev] ImportWarning flood
In-Reply-To: <20060626031636.64034.qmail@web31514.mail.mud.yahoo.com>
References: <20060626031636.64034.qmail@web31514.mail.mud.yahoo.com>
Message-ID: <449F7EDD.80901@v.loewis.de>

Ralf W. Grosse-Kunstleve wrote:
> I can work around it, sure. Everybody can work around it, of course. But
> consider that one hour of a professional person is at least $100 with benefits
> etc. included. (If that sounds high, I know people charging much more than
> that; also consider that the going rate for a car mechanic in the bay area is
> $90, as you probably know.) Now say you have 1000 groups of developers having
> to work around the warning (I bet you have more). There will be discussions,
> alternatives will be tried and discarded, etc. Say that eats about 10 man-hours
> per group before the issue is settled, which again is a very conservative
> estimate, I believe. That makes a total of $1,000,000 in damages, at least. Is
> that warning really worth a million dollars?

So spend some of the money to come up with an alternate solution for
2.5b2. With a potential damage of a million dollars, it shouldn't be
too difficult to provide a patch by tomorrow, right?

This is open source, folks. Business arguments don't matter much to
many of us. I don't get any money for my contributions to Python,
and I'm not whining about all the lost consultant fees I could have
collected while contributing to Python instead.

What matters are actual contributions: bug reports, patches, PEPs,
etc. In the specific case, make sure that your alternative solution
not only makes you happy, but also solves the original problem as
well as or better than the current solution (read some email archives
to find out what the original problem was).

Regards,
Martin

From ncoghlan at gmail.com  Mon Jun 26 12:27:03 2006
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 26 Jun 2006 20:27:03 +1000
Subject: [Python-Dev] ImportWarning flood
In-Reply-To: <ca471dc20606251751q36f2accbr3ff3fe8fbd24b20c@mail.gmail.com>
References: <449D0A27.5080506@v.loewis.de>	<20060624140005.29014.1992302539.divmod.quotient.8985@ohm>
	<ca471dc20606251751q36f2accbr3ff3fe8fbd24b20c@mail.gmail.com>
Message-ID: <449FB677.9040505@gmail.com>

Guido van Rossum wrote:
> On 6/24/06, Jean-Paul Calderone <exarkun at divmod.com> wrote:
>>> Actually, your application *was* pretty close to being broken a few
>>> weeks ago, when Guido wanted to drop the requirement that a package
>>> must contain an __init__ file. In that case, "import math" would have
>>> imported the directory, and given you an empty package.
>> But this change was *not* made, and afaict it is not going to be made.
> 
> Correct. We'll stick with the warning. (At least until Py3k but most
> likely also in Py3k.)

Perhaps ImportWarning should default to being ignored, the same way 
PendingDeprecationWarning does?

Then -Wd would become 'the one obvious way' to debug import problems, since it 
would switch ImportWarning on without drowning you in a flood of import 
diagnostics the way -v can do.

Import Errors could even point you in the right direction:

 >>> import mypackage.foo
Traceback (most recent call last):
   File "<stdin>", line 1, in ?
ImportError: No module named mypackage.foo
     Diagnostic import warnings can be enabled with -Wd
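
In the meantime, a project that wants the 2.5 behaviour silenced can
already do that itself, either with -W ignore::ImportWarning on the
command line or with:

    import warnings
    warnings.filterwarnings("ignore", category=ImportWarning)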

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------
             http://www.boredomandlaziness.org

From rwgk at yahoo.com  Mon Jun 26 12:41:07 2006
From: rwgk at yahoo.com (Ralf W. Grosse-Kunstleve)
Date: Mon, 26 Jun 2006 03:41:07 -0700 (PDT)
Subject: [Python-Dev] ImportWarning flood
In-Reply-To: <449F7EDD.80901@v.loewis.de>
Message-ID: <20060626104108.89960.qmail@web31510.mail.mud.yahoo.com>

--- "Martin v. L???wis" <martin at v.loewis.de> wrote:
> So spend some of the money to come up with an alternate solution for
> 2.5b2. With a potential damage of a million dollars, it shouldn't be
> too difficult to provide a patch by tomorrow, right?

My share is only 10 man-hours, paid for by the US government at a scientist's
salary. :-)

A simple patch with a start is attached. Example:

% ./python 
Python 2.5b1 (r25b1:47027, Jun 26 2006, 03:15:33) 
[GCC 4.1.0 20060304 (Red Hat 4.1.0-3)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import foo
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named foo
  Note that subdirectories are searched for imports only if they contain an
  __init__.py file. See the section on "Packages" in the Python tutorial for
  details (http://www.python.org/doc/tut/).
>>> 


The "No module named" message is repeated in these files (2.5b1 tree):

./Demo/imputil/knee.py
./Lib/ihooks.py
./Lib/modulefinder.py
./Lib/xmlcore/etree/ElementTree.py
./Lib/runpy.py
./Lib/imputil.py

If there is a consensus, I'd create a new exception ImportErrorNoModule(name)
that is used consistently from all places. This would ensure uniformity of the
message in the future.
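
A rough pure-Python sketch of that exception (the real change would of
course live in import.c and the files listed above):

    class ImportErrorNoModule(ImportError):
        def __init__(self, name):
            ImportError.__init__(self,
                "No module named %s\n"
                "  Note that subdirectories are searched for imports only if they contain an\n"
                "  __init__.py file. See the section on \"Packages\" in the Python tutorial for\n"
                "  details (http://www.python.org/doc/tut/)." % name)
            self.name = name

    print(ImportErrorNoModule("foo"))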

__________________________________________________
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: import_patch
Type: application/octet-stream
Size: 1090 bytes
Desc: 467797280-import_patch
Url : http://mail.python.org/pipermail/python-dev/attachments/20060626/ce3bbfec/attachment.obj 

From ncoghlan at gmail.com  Mon Jun 26 12:46:57 2006
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 26 Jun 2006 20:46:57 +1000
Subject: [Python-Dev] 2.5b1 Windows install
In-Reply-To: <20060626004726.GA24988@panix.com>
References: <20060626004726.GA24988@panix.com>
Message-ID: <449FBB21.7050508@gmail.com>

Aahz wrote:
> Has anyone else tried doing an admin install with "compile .py files"
> checked?  It's causing my install to blow up, but I'd prefer to assume
> it's some weird Windows config/bug unless other people also have it, in
> which case I'll file an SF report.

I tried this deliberately with b1 because it was broken in one of the alphas. 
It worked fine for me this time (installing over the top of alpha 2).

I think there were some bad .py files around that caused the breakage in the 
earlier alpha - could those have been lying around in your install directory?

Cheers,
Nick.


-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------
             http://www.boredomandlaziness.org

From mwh at python.net  Mon Jun 26 12:51:51 2006
From: mwh at python.net (Michael Hudson)
Date: Mon, 26 Jun 2006 11:51:51 +0100
Subject: [Python-Dev] ImportWarning flood
In-Reply-To: <FEDC1056-DB9A-449C-824F-C94CC08ECCAB@fuhm.net> (James Y.
	Knight's message of "Sun, 25 Jun 2006 21:47:12 -0400")
References: <20060624172920.6639.qmail@web31512.mail.mud.yahoo.com>
	<FEDC1056-DB9A-449C-824F-C94CC08ECCAB@fuhm.net>
Message-ID: <2m64ioi3zs.fsf@starship.python.net>

James Y Knight <foom at fuhm.net> writes:

> On Jun 24, 2006, at 1:29 PM, Ralf W. Grosse-Kunstleve wrote:
>
>> --- Jean-Paul Calderone <exarkun at divmod.com> wrote:
>>> I think it is safe to say that Twisted is more widely used than  
>>> anything
>>> Google has yet released.  Twisted also has a reasonably plausible
>>> technical reason to dislike this change.  Google has a bunch of  
>>> engineers
>>> who, apparently, cannot remember to create an empty __init__.py  
>>> file in
>>> some directories sometimes.
>>
>> Simply adding a note to the ImportError message would solve this  
>> problem "just
>> in time":
>>
>>>>> import mypackage.foo
>> Traceback (most recent call last):
>>   File "<stdin>", line 1, in ?
>> ImportError: No module named mypackage.foo
>>     Note that subdirectories are searched for imports only if they  
>> contain an
>>     __init__.py file: http://www.python.org/doc/essays/packages.html
>>
>
> I also dislike the warning solution. Making the ImportError message  
> more verbose seems like a much nicer solution.

Me too. 

ImportError: no module named foo 
   Note: directory foo/ with no __init__.py not imported

would be nice, but I don't know how hard it would be to achieve.  I'm
scared of import.c.

Cheers,
mwh

-- 
  While preceding your entrance with a grenade is a good tactic in
  Quake, it can lead to problems if attempted at work.    -- C Hacking
               -- http://home.xnet.com/~raven/Sysadmin/ASR.Quotes.html

From amk at amk.ca  Mon Jun 26 14:31:27 2006
From: amk at amk.ca (A.M. Kuchling)
Date: Mon, 26 Jun 2006 08:31:27 -0400
Subject: [Python-Dev] ImportWarning flood
In-Reply-To: <449F7EDD.80901@v.loewis.de>
References: <20060626031636.64034.qmail@web31514.mail.mud.yahoo.com>
	<449F7EDD.80901@v.loewis.de>
Message-ID: <20060626123127.GA4867@localhost.localdomain>

On Mon, Jun 26, 2006 at 08:29:49AM +0200, "Martin v. Löwis" wrote:
> (read some email archives
> to find out what the original problem was).

People at Google don't read manuals?

--amk

From benji at benjiyork.com  Mon Jun 26 15:04:52 2006
From: benji at benjiyork.com (Benji York)
Date: Mon, 26 Jun 2006 09:04:52 -0400
Subject: [Python-Dev] ImportWarning flood
In-Reply-To: <449FB677.9040505@gmail.com>
References: <449D0A27.5080506@v.loewis.de>	<20060624140005.29014.1992302539.divmod.quotient.8985@ohm>	<ca471dc20606251751q36f2accbr3ff3fe8fbd24b20c@mail.gmail.com>
	<449FB677.9040505@gmail.com>
Message-ID: <449FDB74.2050000@benjiyork.com>

Nick Coghlan wrote:
> Perhaps ImportWarning should default to being ignored, the same way 
> PendingDeprecationWarning does?
> 
> Then -Wd would become 'the one obvious way' to debug import problems

+1
--
Benji York

From murman at gmail.com  Mon Jun 26 15:15:08 2006
From: murman at gmail.com (Michael Urman)
Date: Mon, 26 Jun 2006 08:15:08 -0500
Subject: [Python-Dev] Simple Switch statement
In-Reply-To: <001901c69883$18282660$dc00000a@RaymondLaptop1>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<5.1.1.6.0.20060622160223.03e35500@sparrow.telecommunity.com>
	<ca471dc20606221712x7634c308m673bd8c6e186bb37@mail.gmail.com>
	<e7g74s$sdk$1@sea.gmane.org>
	<ca471dc20606231155n185e9f27vb4d10603bb39f2c@mail.gmail.com>
	<e7jrbg$749$1@sea.gmane.org> <449D9BB5.5090504@ronadam.com>
	<024c01c697e0$6fa2b340$dc00000a@RaymondLaptop1>
	<ca471dc20606250916i4469c060q253ae2c7cf29f966@mail.gmail.com>
	<001901c69883$18282660$dc00000a@RaymondLaptop1>
Message-ID: <dcbbbb410606260615u6d30889bo548736d08fa4f43e@mail.gmail.com>

On 6/25/06, Raymond Hettinger <raymond.hettinger at verizon.net> wrote:
> Those were not empty words.  I provided two non-trivial worked-out examples
> taken from sre_constants.py and opcode.py.  Nick provided a third example from
> decimal.py.  In all three cases, the proposal was applied effortlessly resulting
> in improved readability and speed.  I hope you hold other proposals to the same
> standard.

I appreciate your attempts to help us avoid overengineering this, so
I'm trying to find some real world examples of a pygame event loop
that really show the benefit of supporting named constants and
expressions. I may mess up irrelevant details, but the primary case
looks something like the following (perhaps Pete Shinners could point
us to a good example loop online somewhere):

    for event in pygame.event.get():
        if event.type == pygame.KEYDOWN: ...
        elif event.type == pygame.KEYUP: ...
        elif event.type == pygame.QUIT: ...

Here all the event types are integers, but are clearly meaningless as
integers instead of an enumeration. I'd be sorely disappointed with
the addition of a switch statement that couldn't support this as
something like the following:

    for event in pygame.event.get():
        switch event.type:
        case pygame.KEYDOWN: ...
        case pygame.KEYUP: ...
        case pygame.QUIT: ...

I'd also generally like these to be captured like default values to
function arguments are. The only argument against this that stuck with
me is the fact that locals cannot be used. If literals-only has a
chance, then I would hope that every hashable non-local capturable
expression should be at least as welcome. In summary I'm +0 on switch,
but -1 on literal-only cases.
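
(A toy sketch of that default-argument style of capture, nothing
pygame-specific:)

    KEYDOWN = 2
    def handle(event_type, KEYDOWN=KEYDOWN):   # value captured when the def runs
        return event_type == KEYDOWN
    KEYDOWN = 99                               # later rebinding has no effect
    print handle(2)                            # -> True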

I also would like to see a way to use 'is' instead of (or in addition
to) '==' for the comparison, but I don't have any use cases behind
this.

Michael
-- 
Michael Urman  http://www.tortall.net/mu/blog

From ncoghlan at gmail.com  Mon Jun 26 15:31:24 2006
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 26 Jun 2006 23:31:24 +1000
Subject: [Python-Dev] ImportWarning flood
In-Reply-To: <20060626123127.GA4867@localhost.localdomain>
References: <20060626031636.64034.qmail@web31514.mail.mud.yahoo.com>	<449F7EDD.80901@v.loewis.de>
	<20060626123127.GA4867@localhost.localdomain>
Message-ID: <449FE1AC.8030806@gmail.com>

A.M. Kuchling wrote:
> On Mon, Jun 26, 2006 at 08:29:49AM +0200, "Martin v. Löwis" wrote:
>> (read some email archives
>> to find out what the original problem was).
> 
> People at Google don't read manuals?

The documentation of how imports actually work isn't that easy to find?

Guido's package essay on python.org and PEP 302 seem to cover the topic pretty 
well between them, but neither of them is part of the normal documentation. 
The situation shares some unfortunate similarities with that of the new-style 
class documentation - it's documented, just not in the places you might 
initially expect :(

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------
             http://www.boredomandlaziness.org

From guido at python.org  Mon Jun 26 16:29:04 2006
From: guido at python.org (Guido van Rossum)
Date: Mon, 26 Jun 2006 07:29:04 -0700
Subject: [Python-Dev] Simple Switch statement
In-Reply-To: <dcbbbb410606260615u6d30889bo548736d08fa4f43e@mail.gmail.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<ca471dc20606221712x7634c308m673bd8c6e186bb37@mail.gmail.com>
	<e7g74s$sdk$1@sea.gmane.org>
	<ca471dc20606231155n185e9f27vb4d10603bb39f2c@mail.gmail.com>
	<e7jrbg$749$1@sea.gmane.org> <449D9BB5.5090504@ronadam.com>
	<024c01c697e0$6fa2b340$dc00000a@RaymondLaptop1>
	<ca471dc20606250916i4469c060q253ae2c7cf29f966@mail.gmail.com>
	<001901c69883$18282660$dc00000a@RaymondLaptop1>
	<dcbbbb410606260615u6d30889bo548736d08fa4f43e@mail.gmail.com>
Message-ID: <ca471dc20606260729j6e0c466cpb5193873305a6002@mail.gmail.com>

On 6/26/06, Michael Urman <murman at gmail.com> wrote:
> I also would like to see a way to use 'is' instead of (or in addition
> to) '==' for the comparison, but I don't have any use cases behind
> this.

I've thought about this a bit, and I think it's a red herring. I've
seen some places where 'is' is used to compare constants (e.g.
sre_compile.py). But I'm pretty sure that this is a speed hack that
can safely be forgotten once (if ever) we have a switch statement.
-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From cfbolz at gmx.de  Mon Jun 26 17:54:45 2006
From: cfbolz at gmx.de (Carl Friedrich Bolz)
Date: Mon, 26 Jun 2006 17:54:45 +0200
Subject: [Python-Dev] pypy-0.9.0: stackless,   new extension compiler
In-Reply-To: <OF946B1611.4AC16C45-ON80257199.00347A71-80257199.00349087@risk.sungard.com>
References: <2mslltigm2.fsf@starship.python.net>
	<OF946B1611.4AC16C45-ON80257199.00347A71-80257199.00349087@risk.sungard.com>
Message-ID: <e7osfk$95r$1@sea.gmane.org>

Hi all!

Michael Hudson wrote:
 > The PyPy development team has been busy working and we've now packaged
 > our latest improvements, completed work and new experiments as
 > version 0.9.0, our fourth public release.

Unfortunately the download links for the release tarballs did not work
until very recently. They are now working though. You can download the
0.9 release of PyPy under:

http://codespeak.net/download/pypy/pypy-0.9.0.tar.bz2
http://codespeak.net/download/pypy/pypy-0.9.0.tar.gz
http://codespeak.net/download/pypy/pypy-0.9.0.zip

For detailed notes about how to get started in the world of PyPy, see
here:

http://codespeak.net/pypy/dist/pypy/doc/getting-started.html

Sorry for the fuss and cheers,

Carl Friedrich Bolz



From rhettinger at ewtllc.com  Mon Jun 26 17:46:09 2006
From: rhettinger at ewtllc.com (Raymond Hettinger)
Date: Mon, 26 Jun 2006 08:46:09 -0700
Subject: [Python-Dev] Simple Switch statement
In-Reply-To: <dcbbbb410606260615u6d30889bo548736d08fa4f43e@mail.gmail.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>	<5.1.1.6.0.20060622160223.03e35500@sparrow.telecommunity.com>	<ca471dc20606221712x7634c308m673bd8c6e186bb37@mail.gmail.com>	<e7g74s$sdk$1@sea.gmane.org>	<ca471dc20606231155n185e9f27vb4d10603bb39f2c@mail.gmail.com>	<e7jrbg$749$1@sea.gmane.org>
	<449D9BB5.5090504@ronadam.com>	<024c01c697e0$6fa2b340$dc00000a@RaymondLaptop1>	<ca471dc20606250916i4469c060q253ae2c7cf29f966@mail.gmail.com>	<001901c69883$18282660$dc00000a@RaymondLaptop1>
	<dcbbbb410606260615u6d30889bo548736d08fa4f43e@mail.gmail.com>
Message-ID: <44A00141.1050801@ewtllc.com>

Michael Urman wrote:

> I'm trying to find some real world examples of a pygame event loop
>
>that really show the benefit of supporting named constants and
>expressions. I may mess up irrelevant details, but the primary case
>looks something like the following (perhaps Pete Shinners could point
>us to a good example loop online somewhere):
>
>    for event in pygame.event.get():
>        if event.type == pygame.KEYDOWN: ...
>        elif event.type == pygame.KEYUP: ...
>        elif event.type == pygame.QUIT: ...
>
>Here all the event types are integers, but are clearly meaningless as
>integers instead of an enumeration. I'd be sorely disappointed with
>the addition of a switch statement that couldn't support this as
>something like the following:
>
>    for event in pygame.event.get():
>        switch event.type:
>        case pygame.KEYDOWN: ...
>        case pygame.KEYUP: ...
>        case pygame.QUIT: ...
>  
>
With the simplified proposal, this would be coded with an inverse mapping:

    for event in pygame.event.get():
        switch eventmap[event.type]:
        case 'KEYDOWN': ...
        case 'KEYUP': ...
        case 'QUIT': ...
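
(Building the inverse mapping is cheap -- a sketch, reusing the hypothetical
pygame names from above:)

    # Map the integer event constants back to the names used in the cases.
    eventmap = dict((getattr(pygame, name), name)
                    for name in ('KEYDOWN', 'KEYUP', 'QUIT'))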


Hopefully, the static() proposal will work out and the mapping won't be
necessary.  If it does work out, you'll also get more error-checking
than you get with either the if-elif version or the simplified switch-case.



>
>I also would like to see a way to use 'is' instead of (or in addition
>to) '==' for the comparison, but I don't have any use cases behind
>this.
>  
>

If speed is the goal, this isn't necessary.  The internal equality check 
takes a shortcut in the event of an identity match.

OTOH, if the goal is having several distinct cases that are equal but 
not identical, then that's another story.  I suggest leaving the initial
switch syntax as simple as possible and just switching on id(object).



Raymond


From mwh at python.net  Mon Jun 26 19:34:09 2006
From: mwh at python.net (Michael Hudson)
Date: Mon, 26 Jun 2006 18:34:09 +0100
Subject: [Python-Dev] ImportWarning flood
In-Reply-To: <449FDB74.2050000@benjiyork.com> (Benji York's message of "Mon,
	26 Jun 2006 09:04:52 -0400")
References: <449D0A27.5080506@v.loewis.de>
	<20060624140005.29014.1992302539.divmod.quotient.8985@ohm>
	<ca471dc20606251751q36f2accbr3ff3fe8fbd24b20c@mail.gmail.com>
	<449FB677.9040505@gmail.com> <449FDB74.2050000@benjiyork.com>
Message-ID: <2mzmfzhlda.fsf@starship.python.net>

Benji York <benji at benjiyork.com> writes:

> Nick Coghlan wrote:
>> Perhaps ImportWarning should default to being ignored, the same way 
>> PendingDeprecationWarning does?
>> 
>> Then -Wd would become 'the one obvious way' to debug import problems
>
> +1

I'm not sure what this would achieve -- people who don't know enough
about Python to add __init__.py files aren't going to know enough to
make suppressed-by-default warnings not suppressed.

The more I think about it, the more I like the idea of saying
something when an import fails only because of a missing __init__.py
file.  I guess I should try to implement it...
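
(A rough pure-Python sketch of the check -- just the idea, not a patch
against import.c, and it only handles a top-level name:)

    import os, sys

    def dirs_missing_init(name):
        # Directories on sys.path that match 'name' but lack an __init__.py,
        # i.e. the likely cause of the surprising ImportError.
        hits = []
        for entry in sys.path:
            candidate = os.path.join(entry or '.', name)
            if os.path.isdir(candidate) and \
               not os.path.isfile(os.path.join(candidate, '__init__.py')):
                hits.append(candidate)
        return hits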

Cheers,
mwh

-- 
  I also feel it essential to note, [...], that Description Logics,
  non-Monotonic Logics, Default Logics and Circumscription Logics 
  can all collectively go suck a cow. Thank you.
              -- http://advogato.org/person/Johnath/diary.html?start=4

From benji at benjiyork.com  Mon Jun 26 20:41:04 2006
From: benji at benjiyork.com (Benji York)
Date: Mon, 26 Jun 2006 14:41:04 -0400
Subject: [Python-Dev] ImportWarning flood
In-Reply-To: <2mzmfzhlda.fsf@starship.python.net>
References: <449D0A27.5080506@v.loewis.de>	<20060624140005.29014.1992302539.divmod.quotient.8985@ohm>	<ca471dc20606251751q36f2accbr3ff3fe8fbd24b20c@mail.gmail.com>	<449FB677.9040505@gmail.com>
	<449FDB74.2050000@benjiyork.com>
	<2mzmfzhlda.fsf@starship.python.net>
Message-ID: <44A02A40.2050703@benjiyork.com>

Michael Hudson wrote:
> Benji York <benji at benjiyork.com> writes:
> 
>>Nick Coghlan wrote:
>>
>>>Perhaps ImportWarning should default to being ignored, the same way 
>>>PendingDeprecationWarning does?
>>>
>>>Then -Wd would become 'the one obvious way' to debug import problems
>>
>>+1
> 
> I'm not sure what this would achieve

I'm more concerned with what it shouldn't achieve (changing the rules in 
an annoying and disruptive way).

> The more I think about it, the more I like the idea of saying
> something when an import fails only because of a missing __init__.py
> file.

I totally agree!  I just doubted that approach would be appealing [to 
Guido] when the choice of adding warnings had already been made.
--
Benji York

From guido at python.org  Mon Jun 26 21:23:00 2006
From: guido at python.org (Guido van Rossum)
Date: Mon, 26 Jun 2006 12:23:00 -0700
Subject: [Python-Dev] PEP 3103: A Switch/Case Statement
Message-ID: <ca471dc20606261223g55126e1fg457c30049dbbf34b@mail.gmail.com>

I've written a new PEP, summarizing (my reaction to) the recent
discussion on adding a switch statement. While I have my preferences,
I'm trying to do various alternatives justice in the descriptions. The
PEP also introduces some standard terminology that may be helpful in
future discussions. I'm putting this in the Py3k series to give us
extra time to decide; it's too important to rush it.

  http://www.python.org/dev/peps/pep-3103/

Feedback (also about misrepresentation of alternatives I don't favor)
is most welcome, either to me directly or as a followup to this post.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From python-dev at zesty.ca  Mon Jun 26 21:48:14 2006
From: python-dev at zesty.ca (Ka-Ping Yee)
Date: Mon, 26 Jun 2006 14:48:14 -0500 (CDT)
Subject: [Python-Dev] PEP 3103: A Switch/Case Statement
In-Reply-To: <ca471dc20606261223g55126e1fg457c30049dbbf34b@mail.gmail.com>
References: <ca471dc20606261223g55126e1fg457c30049dbbf34b@mail.gmail.com>
Message-ID: <Pine.LNX.4.58.0606261438120.17937@server1.LFW.org>

On Mon, 26 Jun 2006, Guido van Rossum wrote:
> I've written a new PEP, summarizing (my reaction to) the recent
> discussion on adding a switch statement. While I have my preferences,
> I'm trying to do various alternatives justice in the descriptions.

Thanks for writing this up!

The section that most draws my attention is "Semantics", and i guess
it isn't a surprise to either of us that you had the most to say
from the perspective you currently support (School II).  :)  Let me
suggest a couple of points to add:

  - School I sees trouble in the approach of pre-freezing a dispatch
    dictionary because it places a new and unusual burden on the
    programmer to understand exactly what kinds of case values are
    allowed to be frozen and when the case values will be frozen.

  - In the School II paragraph you say "Worse, the hash function
    might have a bug or a side effect; if we generate code that
    believes the hash, a buggy hash might generate an incorrect
    match" -- but that is primarily a criticism of the School II
    approach, not of the School I approach as you have framed it.
    It's School II that mandates that the hash be the truth.

    (It looks to me like what you're actually criticizing here is
    based on some assumptions about how you think School I might
    be implemented, and having taken School I a number of steps
    down that (unexplained) road you then see problems with it.)

Also, why is the discussion of School II mostly an argument against
School I?  What about describing the advantages of each school?


-- ?!ng

From alexander.belopolsky at gmail.com  Mon Jun 26 21:54:49 2006
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Mon, 26 Jun 2006 15:54:49 -0400
Subject: [Python-Dev] Misleading error message from
	PyObject_GenericSetAttr
In-Reply-To: <ca471dc20606190818t2c4d84d0lc16cb7ac025436ec@mail.gmail.com>
References: <d38f5330606141423r3a03478hdc1729a6aac44735@mail.gmail.com>
	<ca471dc20606190818t2c4d84d0lc16cb7ac025436ec@mail.gmail.com>
Message-ID: <d38f5330606261254y51dc09fey4c3e52c34c42d040@mail.gmail.com>

On 6/19/06, Guido van Rossum <guido at python.org> wrote:
> On 6/14/06, Alexander Belopolsky <alexander.belopolsky at gmail.com> wrote:
> > ... It would be better to change the message
> > to "'Foo' object has only read-only attributes (assign to .bar)" as in
> > the case tp_setattro == tp_setattr == NULL in  PyObject_SetAttr .
>
> I agree. Can you submit a patch to SF please?
>
Please see:

https://sourceforge.net/tracker/index.php?func=detail&aid=1512942&group_id=5470&atid=305470

I've tested the patch by setting tp_setattr to 0 in Xxo_Type.  With the patch:

>>> import xx
>>> x = xx.new()
>>> x.a = 2
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'xxmodule.Xxo' object has only read-only attributes
(assign to .a)
>>> del x.a
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'xxmodule.Xxo' object has only read-only attributes (del .a)

Note that this log reveals a small inaccuracy in xxmodule.c : the
module name is "xx," but Xxo type name is "xxmodule.Xxo."  Should I
submit a patch fixing that?

From guido at python.org  Mon Jun 26 22:06:24 2006
From: guido at python.org (Guido van Rossum)
Date: Mon, 26 Jun 2006 13:06:24 -0700
Subject: [Python-Dev] PEP 3103: A Switch/Case Statement
In-Reply-To: <Pine.LNX.4.58.0606261438120.17937@server1.LFW.org>
References: <ca471dc20606261223g55126e1fg457c30049dbbf34b@mail.gmail.com>
	<Pine.LNX.4.58.0606261438120.17937@server1.LFW.org>
Message-ID: <ca471dc20606261306l68be4aa9m867efab4403362dc@mail.gmail.com>

On 6/26/06, Ka-Ping Yee <python-dev at zesty.ca> wrote:
> On Mon, 26 Jun 2006, Guido van Rossum wrote:
> > I've written a new PEP, summarizing (my reaction to) the recent
> > discussion on adding a switch statement. While I have my preferences,
> > I'm trying to do various alternatives justice in the descriptions.
>
> Thanks for writing this up!
>
> The section that most draws my attention is "Semantics", and i guess
> it isn't a surprise to either of us that you had the most to say
> from the perspective you currently support (School II).  :)  Let me
> suggest a couple of points to add:
>
>   - School I sees trouble in the approach of pre-freezing a dispatch
>     dictionary because it places a new and unusual burden on the
>     programmer to understand exactly what kinds of case values are
>     allowed to be frozen and when the case values will be frozen.

Can you please edit the PEP yourself to add this? That will be most efficient.

>   - In the School II paragraph you say "Worse, the hash function
>     might have a bug or a side effect; if we generate code that
>     believes the hash, a buggy hash might generate an incorrect
>     match" -- but that is primarily a criticism of the School II
>     approach, not of the School I approach as you have framed it.
>     It's School II that mandates that the hash be the truth.

You seem to misunderstand what I'm saying or proposing here;
admittedly I think I left something out. With school I, if you want to
optimize using a hash table (as in PEP 275 Solution 1) you have to
catch and discard exceptions in hash(), and a bug in hash() can still
lead this optimization astray: if A == B but hash(A) != hash(B),
"switch A: // case B: ... // else: ..." may falsely take the else
branch, thereby causing a hard-to-debug difference between optimized
and unoptimized code. With school II, exceptions in hash() aren't
caught or discarded; a bug in hash() leads to the same behavior as
optimized school I, but the bug is not dependent on the optimization
level.
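
(A tiny illustration of that failure mode, using a deliberately broken hash --
sketch only:)

    import random

    class Buggy(object):
        # Violates the invariant that equal objects must hash equal.
        def __init__(self, value):
            self.value = value
        def __eq__(self, other):
            return isinstance(other, Buggy) and self.value == other.value
        def __hash__(self):
            return random.randrange(1000)

    a, b = Buggy(1), Buggy(1)
    print a == b                  # True: an if/elif chain would match
    print {b: 'matched'}.get(a)   # usually None: the hash-based lookup misses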

>     (It looks to me like what you're actually criticizing here is
>     based on some assumptions about how you think School I might
>     be implemented, and having taken School I a number of steps
>     down that (unexplained) road you then see problems with it.)

Right. School I appears just as keen as school II to use hashing to
optimize things, but isn't prepared to pay the price in semantics; but
I believe it is impossible for the optimized code to behave completely
identically to the unoptimized code (not even counting side effects in
hash() or __eq__()), so I believe the position that the optimized
version is equivalent to the unoptimized "official semantics"
according to school I is untenable.

> Also, why is the discussion of School II mostly an argument against
> School I?  What about describing the advantages of each school?

School II has the advantage of not incurring the problems I see with
school I, in particular catching and discarding exceptions in hash()
and differences between optimized and unoptimized code.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From jimjjewett at gmail.com  Mon Jun 26 22:30:03 2006
From: jimjjewett at gmail.com (Jim Jewett)
Date: Mon, 26 Jun 2006 16:30:03 -0400
Subject: [Python-Dev] Switch statement
Message-ID: <fb6fbf560606261330ld0f6603ga622e3b5e4026a18@mail.gmail.com>

In http://mail.python.org/pipermail/python-dev/2006-June/066475.html
Nick Coghlan wrote:

> (Unlike Jim, I have no problems with restricting switch statements to
> hashable objects and building the entire jump table at once - if what you want
> is an arbitrary if-elif chain, then write one!)

I haven't been clear.  I don't object to building the entire table at
once.  I object to promising that this will happen, which forbids
other implementations from using certain optimizations.

I would prefer something like:

There is no guarantee on how often or when the case statements will be
evaluated, except that it will be after the enclosing scope exists and
before the relevant test is needed.  If a case expression has side
effects, the behavior with respect to these side effects is explicitly
undefined.

-jJ

From guido at python.org  Mon Jun 26 22:42:39 2006
From: guido at python.org (Guido van Rossum)
Date: Mon, 26 Jun 2006 13:42:39 -0700
Subject: [Python-Dev] Switch statement
In-Reply-To: <fb6fbf560606261330ld0f6603ga622e3b5e4026a18@mail.gmail.com>
References: <fb6fbf560606261330ld0f6603ga622e3b5e4026a18@mail.gmail.com>
Message-ID: <ca471dc20606261342ne5c4c1cm489e5068ceeb44ec@mail.gmail.com>

On 6/26/06, Jim Jewett <jimjjewett at gmail.com> wrote:
> In http://mail.python.org/pipermail/python-dev/2006-June/066475.html
> Nick Coghlan wrote:
>
> > (Unlike Jim, I have no problems with restricting switch statements to
> > hashable objects and building the entire jump table at once - if what you want
> > is an arbitrary if-elif chain, then write one!)
>
> I haven't been clear.  I don't object to building the entire table at
> once.  I object to promising that this will happen, which forbids
> other implementations from using certain optimizations.
>
> I would prefer something like:
>
> There is no guarantee on how often or when the case statements will be
> evaluated, except that it will be after the enclosing scope exists and
> before the relevant test is needed.  If a case expression has side
> effects, the behavior with respect to these side effects is explicitly
> undefined.

This is the kind of language that makes C++ and Fortran standards so
difficult to interpret. I like Python's rules to be simple, and I
prefer to occasionally close off a potential optimization path in the
sake of simplicity. For example, I like left-to-right evaluation of
operands and function arguments. The C++/Fortran style weasel words to
allow optimizers to do sneaky stuff aren't really worth the trouble
they can create in Python, where even the best optimization produces
code that's much slower than C. In practice, most users observe the
behavior of the one compiler they use, and code to that "standard"; so
allowing different compilers or optimization levels to do different
things when functions have side effects is just asking for surprises.

In Python, the big speedups come from a change in algorithm. That's
why it's important to me that switch be dict-based (O(1)), as an
alternative to an if/elif chain (O(N)). (The implementation doesn't
have to use a regular dict, though; the important part is that it's
based on hash() and __eq__().)
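
(For comparison, the dict-based dispatch idiom that such a switch would
effectively formalize -- a rough sketch with placeholder handlers and the
hypothetical pygame names from earlier in the thread:)

    def on_keydown(event): pass   # placeholder handlers
    def on_keyup(event): pass
    def on_quit(event): pass

    # Built once; each lookup is O(1) via hash() and __eq__(), not an O(N) scan.
    dispatch = {
        pygame.KEYDOWN: on_keydown,
        pygame.KEYUP: on_keyup,
        pygame.QUIT: on_quit,
    }

    for event in pygame.event.get():
        handler = dispatch.get(event.type)
        if handler is not None:
            handler(event)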

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From python-dev at zesty.ca  Mon Jun 26 22:47:16 2006
From: python-dev at zesty.ca (Ka-Ping Yee)
Date: Mon, 26 Jun 2006 15:47:16 -0500 (CDT)
Subject: [Python-Dev] PEP 3103: A Switch/Case Statement
In-Reply-To: <ca471dc20606261306l68be4aa9m867efab4403362dc@mail.gmail.com>
References: <ca471dc20606261223g55126e1fg457c30049dbbf34b@mail.gmail.com> 
	<Pine.LNX.4.58.0606261438120.17937@server1.LFW.org>
	<ca471dc20606261306l68be4aa9m867efab4403362dc@mail.gmail.com>
Message-ID: <Pine.LNX.4.58.0606261521270.17937@server1.LFW.org>

On Mon, 26 Jun 2006, Guido van Rossum wrote:
> Can you please edit the PEP yourself to add this? That will be most efficient.

I've done so, and tried to clarify the next line to match (see below).

> With school I, if you want to
> optimize using a hash table (as in PEP 275 Solution 1) you have to
> catch and discard exceptions in hash(), and a bug in hash() can still
> lead this optimization astray

Right.  As written, the problem "a buggy hash might generate an
incorrect match" is not specific to School I; it's a problem with
any approach that is implemented by a hash lookup.  School II is
necessarily implemented this way; School I might or might not be.
So i think the part that says:

    the hash function might have a bug or a side effect; if we
    generate code that believes the hash, a buggy hash might
    generate an incorrect match

doesn't belong there, and i'd like your consent to remove it.
On the other hand, this criticism:

    if we generate code that catches errors in the hash to
    fall back on an if/elif chain, we might hide genuine bugs

is indeed specific to School I + hashing.

> Right. School I appears just as keen as school II to use hashing to
> optimize things, but isn't prepared to pay the price in semantics;

Ok.  Then there's an inconsistency with the definition of School I:

    School I wants to define the switch statement in term of
    an equivalent if/elif chain

To clear this up, i've edited the first line of the School II
paragraph, which previously said:

    School II sees nothing but trouble in that approach

It seems clear that by "that approach" you meant "trying to achieve
if/elif semantics while using hash optimization" rather than the
more general definition of School I that was given.  I believe
there are a few voices here (and i count myself among them) that
consider the semantics more important than the speed and are in
School I but aren't treating hash optimization as the quintessence
of 'switch', and we shouldn't leave them out.


-- ?!ng

From jjeffreyclose at yahoo.com  Mon Jun 26 22:40:53 2006
From: jjeffreyclose at yahoo.com (J. Jeffrey Close)
Date: Mon, 26 Jun 2006 13:40:53 -0700 (PDT)
Subject: [Python-Dev] Python-Dev Digest, Vol 35, Issue 143
In-Reply-To: <mailman.28074.1151333745.27774.python-dev@python.org>
Message-ID: <20060626204053.52101.qmail@web52307.mail.yahoo.com>


Hi all,

I have been trying for some time to build Python 2.4.x
from source on OS X 10.4.6.  I've found *numerous*
postings on various mailing lists and web pages
documenting the apparently well-known problems of
doing so.  Various problems arise either in the
./configure step, with configure arguments that don't
work, or in the compile, or in my case in the link
step with libtool.

The configure options I'm using are the following:
--enable-framework --with-pydebug --with-debug=yes
--prefix=/usr --with-dyld --program-suffix=.exe
--enable-universalsdk

I've managed to get past configure and can compile
everything, but in the link I get the error "Undefined
symbols:  ___eprintf" .  This appears to have
something to do with dynamic library loading not
properly pulling in libgcc.  I've tried with -lgcc in
the LD options, but that produces a configure error
"cannot compute sizeof...".

If I remove "--enable-framework" the complete build
works, but unfortunately that is the one critical
element that I need.

The web pages I've found referring to this range from
2001 to present -- still apparently everybody is
having problems with this.  Does *anybody* here have
Python built from source on this OS?

Jeff

--- python-dev-request at python.org wrote:

=== message truncated ===


From brett at python.org  Mon Jun 26 22:51:11 2006
From: brett at python.org (Brett Cannon)
Date: Mon, 26 Jun 2006 13:51:11 -0700
Subject: [Python-Dev] Python-Dev Digest, Vol 35, Issue 143
In-Reply-To: <20060626204053.52101.qmail@web52307.mail.yahoo.com>
References: <mailman.28074.1151333745.27774.python-dev@python.org>
	<20060626204053.52101.qmail@web52307.mail.yahoo.com>
Message-ID: <bbaeab100606261351r380df4cbvdf78a141bd40be4e@mail.gmail.com>

Python-Dev is about Python the language and its development.  Questions on
its use (and build) should be posted elsewhere (I would try
comp.lang.python).

-Brett

On 6/26/06, J. Jeffrey Close <jjeffreyclose at yahoo.com> wrote:
>
>
> Hi all,
>
> I have been trying for some time to build Python 2.4.x
> from source on OS X 10.4.6.  I've found *numerous*
> postings on various mailing lists and web pages
> documenting the apparently well-known problems of
> doing so.  Various problems arise either in the
> ./configure step, with configure arguments that don't
> work, or in the compile, or in my case in the link
> step with libtool.
>
> The configure options I'm using are the following:
> --enable-framework --with-pydebug --with-debug=yes
> --prefix=/usr --with-dyld --program-suffix=.exe
> --enable-universalsdk
>
> I've managed to get past configure and can compile
> everything, but in the link I get the error "Undefined
> symbols:  ___eprintf" .  This appears to have
> something to do with dynamic library loading not
> properly pulling in libgcc.  I've tried with -lgcc in
> the LD options, but that produces a configure error
> "cannot compute sizeof...".
>
> If I remove "--enable-framework" the complete build
> works, but unfortunately that is the one critical
> element that I need.
>
> The web pages I've found referring to this range from
> 2001 to present -- still apparently everybody is
> having problems with this.  Does *anybody* here have
> Python built from source on this OS?
>
> Jeff
>
> --- python-dev-request at python.org wrote:
>
> === message truncated ===
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/brett%40python.org
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.python.org/pipermail/python-dev/attachments/20060626/ecccfc6a/attachment.html 

From jjeffreyclose at yahoo.com  Mon Jun 26 23:03:10 2006
From: jjeffreyclose at yahoo.com (J. Jeffrey Close)
Date: Mon, 26 Jun 2006 14:03:10 -0700 (PDT)
Subject: [Python-Dev] Problems building Python on OSX 10.4.6?
In-Reply-To: <mailman.28154.1151355073.27774.python-dev@python.org>
Message-ID: <20060626210310.7604.qmail@web52313.mail.yahoo.com>

[Bleh, sorry about the subject line on my first post. 
Forgot to edit it before I sent.]


Hi all,

I have been trying for some time to build Python 2.4.x
from source on OS X 10.4.6.  I've found *numerous*
postings on various mailing lists and web pages
documenting the apparently well-known problems of
doing so.  Various problems arise either in the
./configure step, with configure arguments that don't
work, or in the compile, or in my case in the link
step with libtool.

The configure options I'm using are the following:
--enable-framework --with-pydebug --with-debug=yes
--prefix=/usr --with-dyld --program-suffix=.exe
--enable-universalsdk

I've managed to get past configure and can compile
everything, but in the link I get the error "Undefined
symbols:  ___eprintf" .  This appears to have
something to do with dynamic library loading not
properly pulling in libgcc.  I've tried with -lgcc in
the LD options, but that produces a configure error
"cannot compute sizeof...".

If I remove "--enable-framework" the complete build
works, but unfortunately that is the one critical
element that I need.

The web pages I've found referring to this range from
2001 to present -- still apparently everybody is
having problems with this.  Does *anybody* here have
Python built from source on this OS?

Jeff

From tdelaney at avaya.com  Mon Jun 26 23:53:49 2006
From: tdelaney at avaya.com (Delaney, Timothy (Tim))
Date: Tue, 27 Jun 2006 07:53:49 +1000
Subject: [Python-Dev] ImportWarning flood
Message-ID: <2773CAC687FD5F4689F526998C7E4E5FF1E799@au3010avexu1.global.avaya.com>

Michael Hudson wrote:

> Benji York <benji at benjiyork.com> writes:
> 
>> Nick Coghlan wrote:
>>> Perhaps ImportWarning should default to being ignored, the same way
>>> PendingDeprecationWarning does?
>>> 
>>> Then -Wd would become 'the one obvious way' to debug import problems
>> 
>> +1
> 
> I'm not sure what this would achieve -- people who don't know enough
> about Python to add __init__.py files aren't going to know enough to
> make suppressed-by-default warnings not suppressed.

The change was prompted by developers (specifically, Google developers).
Developers should be able to put -Wd in their automated build scripts.

> The more I think about it, the more I like the idea of saying
> something when an import fails only because of a missing __init__.py
> file.  I guess I should try to implement it...

This is by far and away my preference as well (stating which directories
may have been importable if they had __init__.py in the exception) but
it was shot down in the original discussion.

Tim Delaney

From guido at python.org  Mon Jun 26 23:57:28 2006
From: guido at python.org (Guido van Rossum)
Date: Mon, 26 Jun 2006 14:57:28 -0700
Subject: [Python-Dev] PEP 3103: A Switch/Case Statement
In-Reply-To: <Pine.LNX.4.58.0606261521270.17937@server1.LFW.org>
References: <ca471dc20606261223g55126e1fg457c30049dbbf34b@mail.gmail.com>
	<Pine.LNX.4.58.0606261438120.17937@server1.LFW.org>
	<ca471dc20606261306l68be4aa9m867efab4403362dc@mail.gmail.com>
	<Pine.LNX.4.58.0606261521270.17937@server1.LFW.org>
Message-ID: <ca471dc20606261457x423bfa23y120d1c2c61388110@mail.gmail.com>

On 6/26/06, Ka-Ping Yee <python-dev at zesty.ca> wrote:
> On Mon, 26 Jun 2006, Guido van Rossum wrote:
> > Can you please edit the PEP yourself to add this? That will be most efficient.
>
> I've done so, and tried to clarify the next line to match (see below).
>
> > With school I, if you want to
> > optimize using a hash table (as in PEP 275 Solution 1) you have to
> > catch and discard exceptions in hash(), and a bug in hash() can still
> > lead this optimization astray
>
> Right.  As written, the problem "a buggy hash might generate an
> incorrect match" is not specific to School I; it's a problem with
> any approach that is implemented by a hash lookup.  School II is
> necessarily implemented this way; School I might or might not be.
> So i think the part that says:
>
>     the hash function might have a bug or a side effect; if we
>     generate code that believes the hash, a buggy hash might
>     generate an incorrect match
>
> doesn't belong there, and i'd like your consent to remove it.

I'd rather keep it, but clarify that school II considers the outcome
of the hash() the official semantics, while school I + dict-based
optimization would create optimized code that doesn't match the
language specs. My point being that not following your own specs is a
much more severe sin than trusting a buggy hash(). If hash() is buggy,
school II's official spec means that the bug affects the outcome, and
that's that; but because school I specifies the semantics as based on
an if/elif chain, using a buggy hash() means not following the spec.
If we choose school I, a user may assume that a buggy hash() doesn't
affect the outcome because it's defined in terms of == tests only.

> On the other hand, this criticism:
>
>     if we generate code that catches errors in the hash to
>     fall back on an if/elif chain, we might hide genuine bugs
>
> is indeed specific to School I + hashing.

Right.

> > Right. School I appears just as keen as school II to use hashing to
> > optimize things, but isn't prepared to pay the price in semantics;
>
> Ok.  Then there's an inconsistency with the definition of School I:
>
>     School I wants to define the switch statement in term of
>     an equivalent if/elif chain
>
> To clear this up, i've edited the first line of the School II
> paragraph, which previously said:
>
>     School II sees nothing but trouble in that approach
>
> It seems clear that by "that approach" you meant "trying to achieve
> if/elif semantics while using hash optimization" rather than the
> more general definition of School I that was given.

Right. Thanks for the clarification; indeed, the only problem I have
with the "clean" school I approach (no hash-based optimization) is
that there's no optimization, and we end up once more having to tweak
the ordering of the cases based on our expectation of their frequency
(which may not match reality).

Most school I proponents (perhaps you're the only exception) have
claimed that optimization is desirable, but added that it would be
easy to add hash-based optimization. IMO it's not so easy in the light
of various failure modes of hash(). (A possible solution would be to
only use hashing if the expression's type is one of a small set of
trusted builtins, and not a subclass; we can trust int.__hash__,
str.__hash__ and a few others.)
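
(That test is easy to state in Python terms -- a sketch of the idea only, not
proposed compiler code:)

    # Only trust hashing when the exact type is a builtin whose __hash__
    # is known to be consistent with __eq__; subclasses don't qualify.
    TRUSTED_TYPES = (bool, int, long, str, unicode)

    def hash_is_trustworthy(value):
        return type(value) in TRUSTED_TYPES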

> I believe
> there are a few voices here (and i count myself among them) that
> consider the semantics more important than the speed and are in
> School I but aren't treating hash optimization as the quintessence
> of 'switch', and we shouldn't leave them out.

This is an important distinction; thanks for pointing it out. Perhaps
we can introduce school Ia and Ib, where Ia is "clean but unoptimized"
and Ib is "if/elif with hash-based optimization desirable when
possible".

Another distinction between the two schools is that school Ib will
have a hard time optimizing switches based on named constants. I don't
believe that the "callback if variable affecting expression value is
ever assigned to" approach will work.

School II is definitely more pragmatic; I really don't see much wrong
with defining that it works a certain way, which is not exactly what
you would expect but has the same effect in *most* common cases, and
then explaining away the odd behavior in various corner cases, as long
as I don't care about those corner cases.

This is pretty much how I defend the semantics of default parameter
values -- it doesn't matter that they are computed at function
definition time instead of call time as long as you only use immutable
constants (which could be named constants as long as they're
immutable), which is the only use case I care about.
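
(The standard illustration of that rule:)

    t = 1
    def f(x=t):    # the default is evaluated once, at 'def' time
        return x
    t = 2
    print f()      # -> 1: rebinding t afterwards changes nothing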

There are many other examples of odd Python semantics that favor a
relatively simple implementation and glorify that implementation as
the official semantics. I find this much preferable over weasel words
a la traditional language standards which give optimizers a lot of
leeway at the cost of difference in behavior between different
compilers or optimization levels.

For a real example from C++, read "Double Checked Locking is Broken":
http://www.cs.umd.edu/~pugh/java/memoryModel/DoubleCheckedLocking.html
(I first heard about this from Scott Meyers at the '06 ACCU conference
in Oxford; an earlier version of his talk is also online:
http://www.aristeia.com/Papers/DDJ_Jul_Aug_2004_revised.pdf).

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From Ben.Young at risk.sungard.com  Mon Jun 26 11:34:08 2006
From: Ben.Young at risk.sungard.com (Ben.Young at risk.sungard.com)
Date: Mon, 26 Jun 2006 10:34:08 +0100
Subject: [Python-Dev] pypy-0.9.0: stackless, new extension compiler
In-Reply-To: <2mslltigm2.fsf@starship.python.net>
Message-ID: <OF946B1611.4AC16C45-ON80257199.00347A71-80257199.00349087@risk.sungard.com>

Congratulations!

python-dev-bounces+python=theyoungfamily.co.uk at python.org wrote on 
25/06/2006 13:07:01:

> The PyPy development team has been busy working and we've now packaged 
> our latest improvements, completed work and new experiments as 
> version 0.9.0, our fourth public release.
> 
> The highlights of this fourth release of PyPy are:
> 
> **implementation of "stackless" features**
>     We now support the larger part of the interface of the original
>     Stackless Python -- see http://www.stackless.com for more.  A
>     significant part of this is the pickling and unpickling of a running
>     tasklet.
> 
>     These features, especially the pickling, can be considered to be a
>     "technology preview" -- they work, but for example the error 
handling
>     is a little patchy in places.
> 
> **ext-compiler**
>     The "extension compiler" is a new way of writing a C extension for
>     CPython and PyPy at the same time. For more information, see its
>     documentation: http://codespeak.net/pypy/dist/pypy/doc/extcompiler.html
> 
> **rctypes**
>     Most useful in combination with the ext-compiler is the fact that our
>     translation framework can translate code that uses the
>     standard-in-Python-2.5 ctypes module.  See its documentation for more:
>     http://codespeak.net/pypy/dist/pypy/doc/rctypes.html
> 
> **framework GCs** 
>     PyPy's interpreter can now be compiled to use a garbage collector
>     written in RPython.  This added control over PyPy's execution makes the
>     implementation of new and interesting features possible, apart from
>     being a significant achievement in its own right.
> 
> **__del__/weakref/__subclasses__**
>     The PyPy interpreter's compatibility with CPython continues to improve:
>     now we support __del__ methods, the __subclasses__ method on types and
>     weak references.  We now pass around 95% of CPython's core tests.
> 
> **logic space preview**
>     This release contains the first version of the logic object space,
>     which will add logical variables to Python.  See its docs for more:
>     http://codespeak.net/pypy/dist/pypy/doc/howto-logicobjspace-0.9.html
> 
> **high level backends preview**
>     This release contains the first versions of new backends targeting high
>     level languages such as Squeak and .NET/CLI and updated versions of the
>     JavaScript and Common Lisp backends.  They can't compile the PyPy
>     interpreter yet, but they're getting there...
> 
> **bugfixes, better performance**
>     As you would expect, performance continues to improve and bugs continue
>     to be fixed.  The performance of the translated PyPy interpreter is
>     2.5-3x faster than 0.8 (on richards and pystone), and is now
>     stable enough to be able to run CPython's test suite to the end.
> 
> **testing refinements**
>     py.test, our testing tool, now has preliminary support for doctests.
>     We now run all our tests every night, and you can see the summary at:
>     http://snake.cs.uni-duesseldorf.de/pypytest/summary.html
> 
> What is PyPy (about)? 
> ------------------------------------------------
> 
> PyPy is an MIT-licensed research-oriented reimplementation of Python
> written in Python itself, flexible and easy to experiment with.  It
> translates itself to lower level languages.  Our goals are to target a
> large variety of platforms, small and large, by providing a
> compilation toolsuite that can produce custom Python versions.
> Platform, memory and threading models are to become aspects of the
> translation process - as opposed to encoding low level details into
> the language implementation itself.  Eventually, dynamic optimization
> techniques - implemented as another translation aspect - should become
> robust against language changes.
> 
> Note that PyPy is mainly a research and development project and does
> not by itself focus on getting a production-ready Python
> implementation although we do hope and expect it to become a viable
> contender in that area sometime next year.
> 
> PyPy is partially funded as a research project under the European
> Union's IST programme.
> 
> Where to start? 
> -----------------------------
> 
> Getting started:    http://codespeak.net/pypy/dist/pypy/doc/getting-
> started.html
> 
> PyPy Documentation: http://codespeak.net/pypy/dist/pypy/doc/ 
> 
> PyPy Homepage:      http://codespeak.net/pypy/
> 
> The interpreter and object model implementations shipped with the 0.9
> version can run on their own and implement the core language features
> of Python as of CPython 2.4.  However, we still do not recommend using
> PyPy for anything else than for education, playing or research
> purposes.
> 
> Ongoing work and near term goals
> ---------------------------------
> 
> The Just-in-Time compiler and other performance improvements will be one of
> the main topics of the next few months' work, along with finishing the
> logic object space.
> 
> Project Details
> ---------------
> 
> PyPy has been developed during approximately 20 coding sprints across
> Europe and the US.  It continues to be a very dynamically and
> incrementally evolving project with many of these one-week workshops
> to follow.
> 
> PyPy has been a community effort from the start and it would
> not have got that far without the coding and feedback support
> from numerous people.   Please feel free to give feedback and 
> raise questions. 
> 
>     contact points: http://codespeak.net/pypy/dist/pypy/doc/contact.html
> 
> have fun, 
> 
>     the pypy team, (Armin Rigo, Samuele Pedroni, 
>     Holger Krekel, Christian Tismer, 
>     Carl Friedrich Bolz, Michael Hudson, 
>     and many others: http://codespeak.net/pypy/dist/pypy/doc/contributor.html)
> 
> PyPy development and activities happen as an open source project 
> and with the support of a consortium partially funded by a two 
> year European Union IST research grant. The full partners of that 
> consortium are: 
> 
>     Heinrich-Heine University (Germany), AB Strakt (Sweden)
>     merlinux GmbH (Germany), tismerysoft GmbH (Germany) 
>     Logilab Paris (France), DFKI GmbH (Germany)
>     ChangeMaker (Sweden), Impara (Germany)
> 
> -- 
>   And not only in the sense that they imagine heretics where these
>   do not exist, but also that inquistors repress the heretical
>   putrefaction so vehemently that many are driven to share in it,
>   in their hatred of the judges.  -- The Name Of The Rose, Umberto Eco


From mamrhein at users.sourceforge.net  Mon Jun 26 15:57:32 2006
From: mamrhein at users.sourceforge.net (Michael Amrhein)
Date: Mon, 26 Jun 2006 15:57:32 +0200
Subject: [Python-Dev] enhancements for uuid module
Message-ID: <449FE7CC.1040905@users.sourceforge.net>

Hi Ka-Ping,
I would like to propose two enhancements for your uuid module in Python 2.5:

1) I've written functions to retrieve the MAC address that do not depend 
on running external programs. Please see the attached file.

2) In order to reduce the pickle footprint of UUIDs I would add a 
__reduce__ method to class UUID like

     def __reduce__(self):
         return (uuid, (self.int,))

together with a helper function (at module level) like

def uuid(i):
     return UUID(int=i)
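
A quick sketch of the intended effect, assuming the __reduce__ method and
the module-level uuid() helper above are added to the module:

import pickle
import uuid

u = uuid.uuid4()
s = pickle.dumps(u)              # with __reduce__, only (uuid, (u.int,)) is stored
v = pickle.loads(s)
assert v == u and v.int == u.int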

Please feel free to use the supplied code.
Cheers
Michael
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: hostid.py
Url: http://mail.python.org/pipermail/python-dev/attachments/20060626/1e85b75e/attachment.pot 

From python-dev at zesty.ca  Tue Jun 27 00:38:33 2006
From: python-dev at zesty.ca (Ka-Ping Yee)
Date: Mon, 26 Jun 2006 17:38:33 -0500 (CDT)
Subject: [Python-Dev] PEP 3103: A Switch/Case Statement
In-Reply-To: <ca471dc20606261457x423bfa23y120d1c2c61388110@mail.gmail.com>
References: <ca471dc20606261223g55126e1fg457c30049dbbf34b@mail.gmail.com> 
	<Pine.LNX.4.58.0606261438120.17937@server1.LFW.org> 
	<ca471dc20606261306l68be4aa9m867efab4403362dc@mail.gmail.com> 
	<Pine.LNX.4.58.0606261521270.17937@server1.LFW.org>
	<ca471dc20606261457x423bfa23y120d1c2c61388110@mail.gmail.com>
Message-ID: <Pine.LNX.4.58.0606261708530.17937@server1.LFW.org>

On Mon, 26 Jun 2006, Guido van Rossum wrote:
> Most school I proponents (perhaps you're the only exception) have
> claimed that optimization is desirable, but added that it would be
> easy to add hash-based optimization. IMO it's not so easy in the light
> of various failure modes of hash(). (A possible solution would be to
> only use hashing if the expression's type is one of a small set of
> trusted builtins, and not a subclass; we can trust int.__hash__,
> str.__hash__ and a few others.)

That's a good idea!  At first glance, it seems like that could lead to
a plausible compromise.


-- ?!ng

From python-dev at zesty.ca  Tue Jun 27 00:41:13 2006
From: python-dev at zesty.ca (Ka-Ping Yee)
Date: Mon, 26 Jun 2006 17:41:13 -0500 (CDT)
Subject: [Python-Dev] School IIb?
In-Reply-To: <ca471dc20606261457x423bfa23y120d1c2c61388110@mail.gmail.com>
References: <ca471dc20606261223g55126e1fg457c30049dbbf34b@mail.gmail.com> 
	<Pine.LNX.4.58.0606261438120.17937@server1.LFW.org> 
	<ca471dc20606261306l68be4aa9m867efab4403362dc@mail.gmail.com> 
	<Pine.LNX.4.58.0606261521270.17937@server1.LFW.org>
	<ca471dc20606261457x423bfa23y120d1c2c61388110@mail.gmail.com>
Message-ID: <Pine.LNX.4.58.0606261738370.17937@server1.LFW.org>

Here's a possible adjustment to the School-II approach that i think
avoids the issues i've been raising, while giving the desired
O(n)-to-O(1) speedup in common situations.  It's basically School-II
dispatch, plus a check:

On compilation, freeze any cases that meet the School-II conditions
and have a trustworthy __hash__ method into a dictionary.  At runtime,
when the dictionary yields a hit, check if the case expression yields
a different value.  If the value has changed, use if/elif processing.

In most cases the case-equality check will be cheap (e.g. an attribute
lookup), but it would allow us to establish for sure that the switch
value really matches the case value when we branch to a particular
case, so we'd not be so vulnerable to __hash__ misbehaving, which
seems to be your main source of discomfort with if/elif semantics.
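
One way to read that rule as plain Python (the names and the no-match
fallback here are only illustrative, not part of the proposal):

def run_switch(value, cases, frozen):
    # 'cases' is the ordered list of (case_value, handler) pairs from the source;
    # 'frozen' maps case values with trustworthy __hash__ to their index,
    # built once at compile time.
    try:
        i = frozen.get(value)
    except TypeError:            # the switch value itself is unhashable
        i = None
    if i is not None and cases[i][0] == value:
        return cases[i][1]()     # hash hit, confirmed by an explicit equality check
    for case_value, handler in cases:
        if case_value == value:  # fall back to plain if/elif-style processing
            return handler()
    return None                  # no case matched (stand-in for an 'else' clause)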


-- ?!ng

From guido at python.org  Tue Jun 27 00:52:49 2006
From: guido at python.org (Guido van Rossum)
Date: Mon, 26 Jun 2006 15:52:49 -0700
Subject: [Python-Dev] School IIb?
In-Reply-To: <Pine.LNX.4.58.0606261738370.17937@server1.LFW.org>
References: <ca471dc20606261223g55126e1fg457c30049dbbf34b@mail.gmail.com>
	<Pine.LNX.4.58.0606261438120.17937@server1.LFW.org>
	<ca471dc20606261306l68be4aa9m867efab4403362dc@mail.gmail.com>
	<Pine.LNX.4.58.0606261521270.17937@server1.LFW.org>
	<ca471dc20606261457x423bfa23y120d1c2c61388110@mail.gmail.com>
	<Pine.LNX.4.58.0606261738370.17937@server1.LFW.org>
Message-ID: <ca471dc20606261552o3d5fe88u6674770bae99bc31@mail.gmail.com>

On 6/26/06, Ka-Ping Yee <python-dev at zesty.ca> wrote:
> Here's a possible adjustment to the School-II approach that i think
> avoids the issues i've been raising, while giving the desired
> O(n)-to-O(1) speedup in common situations.  It's basically School-II
> dispatch, plus a check:
>
> On compilation, freeze any cases that meet the School-II conditions
> and have a trustworthy __hash__ method into a dictionary.  At runtime,
> when the dictionary yields a hit, check if the case expression yields
> a different value.  If the value has changed, use if/elif processing.
>
> In most cases the case-equality check will be cheap (e.g. an attribute
> lookup), but it would allow us to establish for sure that the switch
> value really matches the case value when we branch to a particular
> case, so we'd not be so vulnerable to __hash__ misbehaving, which
> seems to be your main source of discomfort with if/elif semantics.

I don't see how this can work for named constants, since their value
is unknown at compilation time.

I also don't like that you apparently are fine with all sorts of
reorderings of the evaluation of the cases (the matching case is
always evaluated redundantly; other cases may be evaluated if the
optimization fails).

hash() misbehaving is not my main source of discomfort. It's the
messiness of trying to define rules that are as flexible as needed for
optimization and yet claiming to maintain the strict if/elif-chain
semantics. There's no messiness needed in the def-time-freeze rule;
the optimizer can't screw it up, because the rule is so much simpler. The
code generator for optimized school I would be a beast.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From pje at telecommunity.com  Tue Jun 27 01:05:30 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Mon, 26 Jun 2006 19:05:30 -0400
Subject: [Python-Dev] School IIb?
In-Reply-To: <ca471dc20606261552o3d5fe88u6674770bae99bc31@mail.gmail.com
 >
References: <Pine.LNX.4.58.0606261738370.17937@server1.LFW.org>
	<ca471dc20606261223g55126e1fg457c30049dbbf34b@mail.gmail.com>
	<Pine.LNX.4.58.0606261438120.17937@server1.LFW.org>
	<ca471dc20606261306l68be4aa9m867efab4403362dc@mail.gmail.com>
	<Pine.LNX.4.58.0606261521270.17937@server1.LFW.org>
	<ca471dc20606261457x423bfa23y120d1c2c61388110@mail.gmail.com>
	<Pine.LNX.4.58.0606261738370.17937@server1.LFW.org>
Message-ID: <5.1.1.6.0.20060626185852.03aacf18@sparrow.telecommunity.com>

At 03:52 PM 6/26/2006 -0700, Guido van Rossum wrote:
>It's the
>messiness of trying to define rules that are as flexible as needed for
>optimization and yet claiming to maintain the strict if/elif-chain
>semantics.

Hear, hear!  We already have if/elif, we don't need another way to spell 
it.  The whole point of switch is that it asserts that exactly *one* case 
is supposed to match -- which means by definition that the *order* of the 
cases must not matter.  It is an unprioritized selection, rather than 
sequential selection.

I think that probably the biggest misunderstanding of switch that has been 
put forth is that it's shorthand for a particular pattern of if-elif use, 
when in actuality it's the other way around: if-elif is sometimes used as a 
crude workaround for the absence of a switch feature.


From jjeffreyclose at yahoo.com  Tue Jun 27 01:18:25 2006
From: jjeffreyclose at yahoo.com (J. Jeffrey Close)
Date: Mon, 26 Jun 2006 16:18:25 -0700 (PDT)
Subject: [Python-Dev] Python-Dev Digest, Vol 35, Issue 143
In-Reply-To: <bbaeab100606261351r380df4cbvdf78a141bd40be4e@mail.gmail.com>
Message-ID: <20060626231825.72263.qmail@web52303.mail.yahoo.com>


Hi all,

Sorry for my inappropriate posting.   I just joined
the list and didn't realize the complete scope.  I
will stay on the list, I'm very interested in it from
a semantics & implementation perspective as well. 
Thanks to Brett for the heads-up.

Jeff




--- Brett Cannon <brett at python.org> wrote:

> Python-Dev is about Python the language and its development.  Questions on
> its use (and build) should be posted elsewhere (I would try
> comp.lang.python).
> 
> -Brett
> 
> On 6/26/06, J. Jeffrey Close
> <jjeffreyclose at yahoo.com> wrote:
> >
> >
> > Hi all,
> >
> > I have been trying for some time to build Python 2.4.x
> > from source on OS X 10.4.6.  I've found *numerous*
> > postings on various mailing lists and web pages
> > documenting the apparently well-known problems of
> > doing so.  Various problems arise either in the
> > ./configure step, with configure arguments that don't
> > work, or in the compile, or in my case in the link
> > step with libtool.
> >
> > The configure options I'm using are the following:
> > --enable-framework --with-pydebug --with-debug=yes
> > --prefix=/usr --with-dyld --program-suffix=.exe
> > --enable-universalsdk
> >
> > I've managed to get past configure and can compile
> > everything, but in the link I get the error "Undefined
> > symbols:  ___eprintf" .  This appears to have
> > something to do with dynamic library loading not
> > properly pulling in libgcc.  I've tried with -lgcc in
> > the LD options, but that produces a configure error
> > "cannot compute sizeof...".
> >
> > If I remove "--enable-framework" the complete build
> > works, but unfortunately that is the one critical
> > element that I need.
> >
> > The web pages I've found referring to this range from
> > 2001 to present -- still apparently everybody is
> > having problems with this.  Does *anybody* here have
> > Python built from source on this OS?
> >
> > Jeff
> >
> > --- python-dev-request at python.org wrote:
> >
> > >
> > > Message: 1
> > > Date: Mon, 26 Jun 2006 20:27:03 +1000
> > > From: Nick Coghlan <ncoghlan at gmail.com>
> > > Subject: Re: [Python-Dev] ImportWarning flood
> > > To: Guido van Rossum <guido at python.org>
> > > Cc: python-dev at python.org
> > > Message-ID: <449FB677.9040505 at gmail.com>
> > > Content-Type: text/plain; charset=ISO-8859-1;
> > > format=flowed
> > >
> > > Guido van Rossum wrote:
> > > > On 6/24/06, Jean-Paul Calderone <exarkun at divmod.com> wrote:
> > > >>> Actually, your application *was* pretty close to being broken a few
> > > >>> weeks ago, when Guido wanted to drop the requirement that a package
> > > >>> must contain an __init__ file. In that case, "import math" would have
> > > >>> imported the directory, and given you an empty package.
> > > >> But this change was *not* made, and afaict it is not going to be made.
> > > >
> > > > Correct. We'll stick with the warning. (At least until Py3k but most
> > > > likely also in Py3k.)
> > >
> > > Perhaps ImportWarning should default to being ignored, the same way
> > > PendingDeprecationWarning does?
> > >
> > > Then -Wd would become 'the one obvious way' to debug import problems,
> > > since it would switch ImportWarning on without drowning you in a flood
> > > of import diagnostics the way -v can do.
> > >
> > > Import Errors could even point you in the right direction:
> > >
> > >  >>> import mypackage.foo
> > > Traceback (most recent call last):
> > >    File "<stdin>", line 1, in ?
> > > ImportError: No module named mypackage.foo
> > >      Diagnostic import warnings can be enabled with -Wd
> > >
> > > Cheers,
> > > Nick.
> > >
> > > --
> > > Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
> > > ---------------------------------------------------------------
> > >              http://www.boredomandlaziness.org
> > >
> > >
> > > ------------------------------
> > >
> > > Message: 2
> > > Date: Mon, 26 Jun 2006 03:41:07 -0700 (PDT)
> > > From: "Ralf W. Grosse-Kunstleve" <rwgk at yahoo.com>
> > > Subject: Re: [Python-Dev] ImportWarning flood
> > > To: python-dev at python.org
> > > Message-ID: <20060626104108.89960.qmail at web31510.mail.mud.yahoo.com>
> > > Content-Type: text/plain; charset="iso-8859-1"
> > >
> > > --- "Martin v. Löwis" <martin at v.loewis.de> wrote:
> > > > So spend some of the money to come up with an alternate solution for
> > > > 2.5b2. With a potential damage of a million dollars, it shouldn't be
> > > > too difficult to provide a patch by tomorrow,
> 
=== message truncated ===


From rwgk at yahoo.com  Tue Jun 27 01:23:38 2006
From: rwgk at yahoo.com (Ralf W. Grosse-Kunstleve)
Date: Mon, 26 Jun 2006 16:23:38 -0700 (PDT)
Subject: [Python-Dev] ImportWarning flood
In-Reply-To: <2773CAC687FD5F4689F526998C7E4E5FF1E799@au3010avexu1.global.avaya.com>
Message-ID: <20060626232338.81461.qmail@web31504.mail.mud.yahoo.com>

--- "Delaney, Timothy (Tim)" <tdelaney at avaya.com> wrote:

> Michael Hudson wrote:
> 
> > Benji York <benji at benjiyork.com> writes:
> > 
> >> Nick Coghlan wrote:
> >>> Perhaps ImportWarning should default to being ignored, the same way
> >>> PendingDeprecationWarning does?
> >>> 
> >>> Then -Wd would become 'the one obvious way' to debug import problems
> >> 
> >> +1
> > 
> > I'm not sure what this would achieve -- people who don't know enough
> > about Python to add __init__.py files aren't going to know enough to
> > make suppressed-by-default warnings not suppressed.
> 
> The change was prompted by developers (specifically, Google developers).
> Developers should be able to put -Wd in their automated build scripts.
> 
> > The more I think about it, the more I like the idea of saying
> > something when an import fails only because of a missing __init__.py
> > file.  I guess I should try to implement it...
> 
> This is by far and away my preference as well (stating which directories
> may have been importable if they had __init__.py in the exception) but
> it was shot down in the original discussion.

I guess it is probably quite tricky to implement. Note the list of files with
the "No module named" message I posted earlier. Somehow you'd have to keep
track of all potential directories in all these different contexts.

I think a combination of pointing to the documentation and mentioning -Wd would
cover all situations. Most people just need a reminder. That's easy to achieve
with a new ImportErrorNoModule(name) exception.
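
Assuming ImportWarning does end up ignored by default, the reminder is
cheap to turn back on, either on the command line or programmatically:

# Command line: run the script with warnings switched to the "default" action:
#     python -Wd myscript.py
#
# Or in code, before the imports you want to diagnose:
import warnings
warnings.simplefilter("default", ImportWarning)

import mypackage.foo    # a directory missing __init__.py now produces a visible warning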



From murman at gmail.com  Tue Jun 27 01:26:45 2006
From: murman at gmail.com (Michael Urman)
Date: Mon, 26 Jun 2006 18:26:45 -0500
Subject: [Python-Dev] Simple Switch statement
In-Reply-To: <44A00141.1050801@ewtllc.com>
References: <17547.19802.361151.705599@montanaro.dyndns.org>
	<e7g74s$sdk$1@sea.gmane.org>
	<ca471dc20606231155n185e9f27vb4d10603bb39f2c@mail.gmail.com>
	<e7jrbg$749$1@sea.gmane.org> <449D9BB5.5090504@ronadam.com>
	<024c01c697e0$6fa2b340$dc00000a@RaymondLaptop1>
	<ca471dc20606250916i4469c060q253ae2c7cf29f966@mail.gmail.com>
	<001901c69883$18282660$dc00000a@RaymondLaptop1>
	<dcbbbb410606260615u6d30889bo548736d08fa4f43e@mail.gmail.com>
	<44A00141.1050801@ewtllc.com>
Message-ID: <dcbbbb410606261626s2b908db0nf13d6cf7587755ed@mail.gmail.com>

On 6/26/06, Raymond Hettinger <rhettinger at ewtllc.com> wrote:
> With the simplified proposal, this would be coded with an inverse mapping:
>
>     for event in pygame.event.get():
>         switch eventmap[event.type]:
>         case 'KEYDOWN': ...
>         case 'KEYUP': ...
>         case 'QUIT': ...

Ah, here's where it gets interesting. While that definitely works on
the surface, it can run into some difficulties of scope. SDL (on which
pygame is based) allows user-defined events, also integers, but
without a predefined meaning. If pygame provides the inverse mapping,
it won't contain the user-defined events. If you construct it, you
have to choose between a local lookup and a global one, and then we
start seeing a global store for an essentially local construct, or we
risk differences when there's more than one locality for using it.
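
A small sketch of the locality problem (USER_PING is a made-up user event,
purely for illustration): the inverse mapping has to be built wherever the
user-defined events are known, so pygame itself can't just ship it:

import pygame

pygame.init()
USER_PING = pygame.USEREVENT + 1          # hypothetical user-defined event

# A locally built inverse mapping; pygame cannot know about USER_PING.
eventmap = {
    pygame.KEYDOWN: 'KEYDOWN',
    pygame.KEYUP:   'KEYUP',
    pygame.QUIT:    'QUIT',
    USER_PING:      'USER_PING',
}

for event in pygame.event.get():
    name = eventmap.get(event.type, 'OTHER')
    # ... dispatch on the string name, as in the switch example above ...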

While you're right that it should be simple to ensure that the inverse
map handles at least the set the switch handles, and that it keeps
evaluation simpler, I still find the limitation ugly. As mentioned, the
early error checking is poor, and it just doesn't feel like the rest of
Python.

> >I also would like to see a way to use 'is' [...]for the comparison
>
> [If] the goal is having several distinct cases that are equal but
> not identical, then that's another story.  I suggest leave the initial
> switch syntax as simple as possible and just switch on id(object).

Switching on id(object) only sounds palatable if we're not bound by
the simple switch's limitations. Having a second (if inverted) mapping
merely of identifier to object smells particularly rancid.

What if I want some cases done as identity, but some as equality?
Since I'm having no luck finding a real use case for this, perhaps I
should assume a nested switch would be adequate. Assuming static or
capture it doesn't look too bad, so I think I'll go with Guido's
hypothesis that it's a red herring.

    switch id(value):
    case id(object): ...
    case id(None): ...
    else:
        switch value:
        case 1: ...
        case 'orange': ...
        else: raise ValueError

Michael
-- 
Michael Urman  http://www.tortall.net/mu/blog

From Martin.Maly at microsoft.com  Tue Jun 27 02:16:45 2006
From: Martin.Maly at microsoft.com (Martin Maly)
Date: Mon, 26 Jun 2006 17:16:45 -0700
Subject: [Python-Dev] Semantic of isinstance
Message-ID: <5C0A6F919D675745BB1DBA7412DB68F505364E8FB3@df-foxhound-msg.exchange.corp.microsoft.com>

Hello Python Dev,

I am trying to understand the correct semantics of the isinstance built-in function, and while doing so I came across a few cases that raise some questions.

1) First example - a class instance pretends to have different class via __class__.

>>> class D(object):
...     def getclass(self):
...         print "D.getclass"
...         return C
...     __class__ = property(getclass)
...
>>> isinstance(D(), D)
True
>>> isinstance(D(), C)
D.getclass
True

isinstance in this case returns True for both the C and D tests. I would expect to see the __class__ property being called in both cases and get:

>>> isinstance(D(), D)
D.getclass
False

but that's not the case for some reason. It seems that the __class__ is only accessed in some cases, but not always, leading to what I think is a semantic inconsistency.

2) Second, slightly more complicated example, uses an instance with __bases__ on it as the 2nd parameter to isinstance:

class E(object):
    def getbases(self):
        print "E.getbases"
        return ()
    __bases__ = property(getbases)

class C(object):
    def getbases(self):
        print "C.getbases"
        return (E,)                     # C() claims: "E is my base class"
    __bases__ = property(getbases)

class D(object):
    def getclass(self):
        print "D.getclass"
        return C()                      # D() claims: "C() is my __class__"
    __class__ = property(getclass)


class F(object): pass


print "Test 1"
print isinstance(D(), E())              # testing against E() instance
print "Test 2"
print isinstance(D(), E)                # testing against E class

The output here is:

Test 1
E.getbases
D.getclass
C.getbases
False

Test 2
D.getclass
False

In the 2nd test, D.getclass is called to get the __class__ of D(), which returns a C() instance. At this point I would expect that C.getbases gets called as __bases__ are retrieved, which would return a tuple consisting of E and ultimately produce a True result. However, in this case the __bases__ are never accessed on C() (the class of D()). The test_isinstance.py actually tests for a similar case.

My question is: based on what assumptions does the standard Python implementation bypass the access to __bases__, __class__, etc. when evaluating isinstance? Did I happen to come across a bug or an inconsistency in the Python implementation, or am I hitting intentional behavior?

Thanks very much for reading and some insights!
Martin

From python-dev at zesty.ca  Tue Jun 27 02:41:07 2006
From: python-dev at zesty.ca (Ka-Ping Yee)
Date: Mon, 26 Jun 2006 19:41:07 -0500 (CDT)
Subject: [Python-Dev] doc for new restricted execution design for Python
In-Reply-To: <bbaeab100606211733x3dd58226g15e2919e8e2cdd7e@mail.gmail.com>
References: <bbaeab100606211733x3dd58226g15e2919e8e2cdd7e@mail.gmail.com>
Message-ID: <Pine.LNX.4.58.0606261937050.17937@server1.LFW.org>

Hi, Brett.

> I have been working on a design doc for restricted execution of Python

I'm excited to see that you're working on this.

> as part of my dissertation for getting Python into Firefox to replace
> JavaScript on the web.

Wow.  What's your game plan?  Do you have a story for convincing the
Mozilla folks to include Python in the standard Firefox distribution,
even though the whole browser UI is already written in Javascript?
Do you want Python to be used for scripts in web pages, Java-style
embedded objects, or both?  I'm curious to know what you have in mind...

I'll post again with more detailed feedback on your document, but here's
a general comment.  I'd really like to see some worked examples of how
you want to see restricted execution mode used, in order to motivate
and evaluate the design and implementation.

In particular, how do you envision restricted execution interacting
with the standard library?  ("Not at all" is a possible answer.)
Are there existing modules or existing Python programs you expect
to just work using restricted execution mode, or are you willing to
ask programmers who use restricted execution to adopt a new style?


-- ?!ng

From brett at python.org  Tue Jun 27 03:00:58 2006
From: brett at python.org (Brett Cannon)
Date: Mon, 26 Jun 2006 18:00:58 -0700
Subject: [Python-Dev] doc for new restricted execution design for Python
In-Reply-To: <Pine.LNX.4.58.0606261937050.17937@server1.LFW.org>
References: <bbaeab100606211733x3dd58226g15e2919e8e2cdd7e@mail.gmail.com>
	<Pine.LNX.4.58.0606261937050.17937@server1.LFW.org>
Message-ID: <bbaeab100606261800x5949cb89h97424fc052e33534@mail.gmail.com>

On 6/26/06, Ka-Ping Yee <python-dev at zesty.ca> wrote:
>
> Hi, Brett.
>
> > I have been working on a design doc for restricted execution of Python
>
> I'm excited to see that you're working on this.


Yeah, I just hope I have a design that works.  =)

> > as part of my dissertation for getting Python into Firefox to replace
> > JavaScript on the web.
>
> Wow.  What's your game plan?  Do you have a story for convincing the
> Mozilla folks to include Python in the standard Firefox distribution,
> even though the whole browser UI is already written in Javascript?
> Do you want Python to be used for scripts in web pages, Java-style
> embedded objects, or both?  I'm curious to know what you have in mind...


The plan is to allow pure Python code to be embedded into web pages like
JavaScript.  I am not going for the applet approach like Java.

As for convincing the Mozilla folks, I want working code first before I try
to do a big push.  But the idea is that JavaScript does not scale well for
large applications, which more and more people want.  Python provides a more
structured approach to programming that can help facilitate designing large
web applications that have a complicated client-side component.  There is
also a large userbase already that I hope will like this and speak up to
say, "I want this!"  But otherwise, no, no plan to get Mozilla to go along
with it.  =)  If they don't pick it up I can probably go with an extension
or something.  It is my dissertation; can't expect too much real-world usage
out of it.  =)

I am not expecting Mozilla to rip out JavaScript.  I just want the ability
to do client-side scripting in a web page to be doable in Python instead.
If they want to rewrite parts of Mozilla's UI in Python, then wonderful!
But I would not be hurt or whatever if they didn't bother since that would
be a huge undertaking.  JavaScript can live side-by-side with Python until
people realize Python is better and slowly begin to migrate over.  =)

> I'll post again with more detailed feedback on your document, but here's
> a general comment.  I'd really like to see some worked examples of how
> you want to see restricted execution mode used, in order to motivate
> and evaluate the design and implementation.


The idea is that there be a separate Python interpreter per web browser page
instance.  So each tab in Mozilla would be running a Python interpreter.  I
don't think JavaScript's security model is really that bad so I am trying to
follow it within reason.  So the main goal is for people who embed the
interpreter, and do not need any form of trusted interpreter to run, to be
able to easily have one or more interpreters running in various states of
restricted execution.

So, launch an interpreter, set the restrictions, pass in the DOM, and then
execute the Python code in the HTML in this untrusted interpreter.

> In particular, how do you envision restricted execution interacting
> with the standard library?  ("Not at all" is a possible answer.)
> Are there existing modules or existing Python programs you expect
> to just work using restricted execution mode, or are you willing to
> ask programmers who use restricted execution to adopt a new style?
>
>
I expect everything to work within the restricted bounds of security
restrictions turned on for an interpreter.  This means that if you allow
file reading you can unpickle a file from disk.  But if you don't, then you
can still import 'pickle', but it will fail when you try to use pickle.load()
with a restricted execution exception.  That is why I am placing the
security restriction at the import level and then at key points in key
objects (file, socket, sys.stdin, etc.), to minimize possible slip-ups in
security by catching them at the OS/code boundary at the C level in the
interpreter/stdlib.
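
Purely as a hypothetical sketch of that boundary -- none of these names
(UntrustedInterpreter, allow_file_reading, RestrictedExecutionError) exist
anywhere; they only illustrate the intent of failing at the C-level object
rather than at import time:

# Hypothetical API, illustrative only -- not part of any existing interpreter.
interp = UntrustedInterpreter()
interp.allow_file_reading = False

interp.run("""
import pickle                          # importing the module still succeeds

try:
    obj = pickle.load(open('data.pkl', 'rb'))
except RestrictedExecutionError:       # hypothetical exception raised when the
    obj = None                         # file object is created inside the sandbox
""")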

I really don't want to ask programmers to adopt a new style.  They might have
to change very slightly ("where is the __file__ attribute on modules?"), but
overall I want to minimize as much as possible a shift in Python programming
style.

-Brett
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.python.org/pipermail/python-dev/attachments/20060626/d6343b6b/attachment.html 

From greg.ewing at canterbury.ac.nz  Tue Jun 27 04:59:46 2006
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Tue, 27 Jun 2006 14:59:46 +1200
Subject: [Python-Dev] Semantic of isinstance
In-Reply-To: <5C0A6F919D675745BB1DBA7412DB68F505364E8FB3@df-foxhound-msg.exchange.corp.microsoft.com>
References: <5C0A6F919D675745BB1DBA7412DB68F505364E8FB3@df-foxhound-msg.exchange.corp.microsoft.com>
Message-ID: <44A09F22.3030300@canterbury.ac.nz>

Martin Maly wrote:

> >>> isinstance(D(), D)
> True
> >>> isinstance(D(), C)
> D.getclass
> True

This looks like a new/old class thing. Presumably once
everything is changed over to new-style classes, this
inconsistency will go away. And from the current behaviour,
it looks like __class__ and __bases__ will be bypassed
by isinstance() (unless Guido decides to change that).

-- 
Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | Carpe post meridiem!          	  |
Christchurch, New Zealand	   | (I'm not a morning person.)          |
greg.ewing at canterbury.ac.nz	   +--------------------------------------+

From Martin.Maly at microsoft.com  Tue Jun 27 05:04:41 2006
From: Martin.Maly at microsoft.com (Martin Maly)
Date: Mon, 26 Jun 2006 20:04:41 -0700
Subject: [Python-Dev] Semantic of isinstance
In-Reply-To: <44A09F22.3030300@canterbury.ac.nz>
Message-ID: <5C0A6F919D675745BB1DBA7412DB68F5053847B370@df-foxhound-msg.exchange.corp.microsoft.com>

Thanks for the response. The code snippet I sent deals with new-style classes only, so I assume that in some cases isinstance falls back to old-style-like handling, which then asks for __bases__ and __class__, etc., possibly incorrectly so on new-style classes.

Thanks again!
Martin

-----Original Message-----
From: Greg Ewing [mailto:greg.ewing at canterbury.ac.nz]
Sent: Monday, June 26, 2006 8:00 PM
To: Martin Maly
Cc: python-dev at python.org
Subject: Re: [Python-Dev] Semantic of isinstance

Martin Maly wrote:

> >>> isinstance(D(), D)
> True
> >>> isinstance(D(), C)
> D.getclass
> True

This looks like a new/old class thing. Presumably once
everything is changed over to new-style classes, this
inconsistency will go away. And from the current behaviour,
it looks like __class__ and __bases__ will be bypassed
by isinstance() (unless Guido decides to change that).

--
Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,          | Carpe post meridiem!                 |
Christchurch, New Zealand          | (I'm not a morning person.)          |
greg.ewing at canterbury.ac.nz        +--------------------------------------+

From pje at telecommunity.com  Tue Jun 27 05:38:06 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Mon, 26 Jun 2006 23:38:06 -0400
Subject: [Python-Dev] Semantic of isinstance
In-Reply-To: <5C0A6F919D675745BB1DBA7412DB68F505364E8FB3@df-foxhound-msg
	.exchange.corp.microsoft.com>
Message-ID: <5.1.1.6.0.20060626232454.01ee2370@sparrow.telecommunity.com>

At 05:16 PM 6/26/2006 -0700, Martin Maly wrote:
> >>> class D(object):
>...     def getclass(self):
>...         print "D.getclass"
>...         return C
>...     __class__ = property(getclass)
>...
> >>> isinstance(D(), D)
>True
> >>> isinstance(D(), C)
>D.getclass
>True
>
>isinstance in this case returns True to both C and D test. I would expect 
>to see the __class__ property being called in both cases and get:
>
> >>> isinstance(D(), D)
>D.getclass
>False
>
>but that's not the case for some reason.

That's because isinstance checks type(D()) and finds it equal to D -- this 
shortcuts the process.



>  It seems that the __class__ is only accessed in some cases, but not 
> always, leading to what I think is a semantic inconsistency.

It's not inconsistent - isinstance() checks __class__ in *addition* to 
type() in order to allow proxying tricks like lying about your 
__class__.  It therefore returns true if either your real type *or* your 
__class__ matches, and as you can see, the real type is checked first.
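
Roughly, for the branch where the second argument is an actual type (the
Test 2 path below), the behavior can be modelled by a sketch like this --
a simplification, not the real C code:

def isinstance_sketch(obj, cls):
    # Simplified model: new-style classes only, no tuple-of-classes support.
    if issubclass(type(obj), cls):        # the real type is checked first
        return True
    claimed = getattr(obj, '__class__', None)
    # A proxy may claim an *additional* class, but only if __class__ is really
    # a type and differs from the real type; otherwise the claim is ignored
    # (which is why Test 2 below never reaches C().__bases__).
    if isinstance(claimed, type) and claimed is not type(obj):
        return issubclass(claimed, cls)
    return False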


>class E(object):
>     def getbases(self):
>         print "E.getbases"
>         return ()
>     __bases__ = property(getbases)
>
>class C(object):
>     def getbases(self):
>         print "C.getbases"
>         return (E,)                     # C() claims: "E is my base class"
>     __bases__ = property(getbases)
>
>class D(object):
>     def getclass(self):
>         print "D.getclass"
>         return C()                      # D() claims: "C() is my __class__"
>     __class__ = property(getclass)
>
>
>class F(object): pass
>
>
>print "Test 1"
>print isinstance(D(), E())              # testing against E() instance
>print "Test 2"
>print isinstance(D(), E)                # testing against E class
>
>The output here is:
>
>Test 1
>E.getbases
>D.getclass
>C.getbases
>False
>
>Test 2
>D.getclass
>False
>
>In the 2nd test, D.getclass is called to get the __class__ of D(), which 
>returns C() instance. At this point I would expect that C.getbases gets 
>called as __bases__ are retrieved, which would return tuple consisting of 
>E and ultimately produce True result.

As it happens, this is due to the fact that E is a type, while E() is 
not.  There's an optimization in the isinstance() machinery that simply 
checks to see if D().__class__ is a subtype of E.  That's where your 
experiment fails.

I'm not sure whether this behavior should be considered correct or not.


From pje at telecommunity.com  Tue Jun 27 05:41:17 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Mon, 26 Jun 2006 23:41:17 -0400
Subject: [Python-Dev] Semantic of isinstance
In-Reply-To: <5C0A6F919D675745BB1DBA7412DB68F5053847B370@df-foxhound-msg
	.exchange.corp.microsoft.com>
References: <44A09F22.3030300@canterbury.ac.nz>
Message-ID: <5.1.1.6.0.20060626233930.02c71a68@sparrow.telecommunity.com>

At 08:04 PM 6/26/2006 -0700, Martin Maly wrote:
>Thanks for the response. The code snippet I sent deals with new style 
>classes only so I assume that in some cases isinstance falls back to 
>old-style-like handling which then asks for __bases__ and __class__ etc, 
>possibly incorrectly so on new style classes.

Actually, it's correct behavior for isinstance() to inspect __class__, even 
on new-style classes.  I suspect that the question of checking for 
__bases__ in one of the shortcut branches just didn't come up when the code 
was being written.  This does appear to be a language design question, 
regarding how much of this machinery one is allowed to emulate.


From nnorwitz at gmail.com  Tue Jun 27 06:37:50 2006
From: nnorwitz at gmail.com (Neal Norwitz)
Date: Mon, 26 Jun 2006 21:37:50 -0700
Subject: [Python-Dev] [Python-checkins] Things to remember when adding
	*packages* to stdlib
In-Reply-To: <449EC471.2000809@v.loewis.de>
References: <ee2a432c0606212335n735716eelae07f6e1d51ceeb@mail.gmail.com>
	<449EC471.2000809@v.loewis.de>
Message-ID: <ee2a432c0606262137p59c8cb3atcedbe25c42109e03@mail.gmail.com>

On 6/25/06, "Martin v. Löwis" <martin at v.loewis.de> wrote:
> Neal Norwitz wrote:
> > I believe this change is all that's necessary on the Unix side to
> > install wsgiref.  Can someone please update the Windows build files to
> > ensure wsgiref is installed in b2?  Don't forget to update the NEWS
> > entry too.
>
> It's installed in b1 already. The msi generator picks up all .py files
> in Lib automatically, except for those that have been explicitly
> excluded (the plat-* ones).

Ah cool.  I was confusing it with extensions and mis-remembering.

> > Maybe someone could come up with a heuristic to add to Misc/build.sh
> > which we could test in there.
>
> I think "make install INSTALL=true|grep true" should print the names
> of all .py files in Lib, except for the ones in plat-*.

I modified the build.sh script to run the installed version.  As long
as some test references the new package, this should catch the
problem.  I sure hope we don't allow a new package without tests. :-)

I also modified the buildbot config to ignore changes if all the files
in a revision are under Demo, Doc, or Misc.  This change (if I got it
right and it works) should help reduce some builds that have little
benefit.

n

From greg.ewing at canterbury.ac.nz  Tue Jun 27 07:08:06 2006
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Tue, 27 Jun 2006 17:08:06 +1200
Subject: [Python-Dev] Semantic of isinstance
In-Reply-To: <5.1.1.6.0.20060626232454.01ee2370@sparrow.telecommunity.com>
References: <5.1.1.6.0.20060626232454.01ee2370@sparrow.telecommunity.com>
Message-ID: <44A0BD36.2070402@canterbury.ac.nz>

Phillip J. Eby wrote:

> It's not inconsistent - isinstance() checks __class__ in *addition* to 
> type() in order to allow proxying tricks like lying about your 
> __class__.

If this is a deliberate feature, it's a bit patchy, because
it means the proxy can't lie about *not* being an instance
of its real type.

Perhaps Guido could clarify how much lying a proxy is
supposed to be able to get away with?

--
Greg

From guido at python.org  Tue Jun 27 07:11:31 2006
From: guido at python.org (Guido van Rossum)
Date: Mon, 26 Jun 2006 22:11:31 -0700
Subject: [Python-Dev] PEP 3103: A Switch/Case Statement
In-Reply-To: <Pine.LNX.4.58.0606261708530.17937@server1.LFW.org>
References: <ca471dc20606261223g55126e1fg457c30049dbbf34b@mail.gmail.com>
	<Pine.LNX.4.58.0606261438120.17937@server1.LFW.org>
	<ca471dc20606261306l68be4aa9m867efab4403362dc@mail.gmail.com>
	<Pine.LNX.4.58.0606261521270.17937@server1.LFW.org>
	<ca471dc20606261457x423bfa23y120d1c2c61388110@mail.gmail.com>
	<Pine.LNX.4.58.0606261708530.17937@server1.LFW.org>
Message-ID: <ca471dc20606262211l79d27481rd9a9cc3882c48879@mail.gmail.com>

On 6/26/06, Ka-Ping Yee <python-dev at zesty.ca> wrote:
> On Mon, 26 Jun 2006, Guido van Rossum wrote:
> > Most school I proponents (perhaps you're the only exception) have
> > claimed that optimization is desirable, but added that it would be
> > easy to add hash-based optimization. IMO it's not so easy in the light
> > of various failure modes of hash(). (A possible solution would be to
> > only use hashing if the expression's type is one of a small set of
> > trusted builtins, and not a subclass; we can trust int.__hash__,
> > str.__hash__ and a few others.)
>
> That's a good idea!  At first glance, it seems like that could lead to
> a plausible compromise.

I'm not so sure. I realized that school I really doesn't have a good
story for optimizing cases involving named constants.

Anyway, after this afternoon's discussion I rewrote the section of the
PEP that discusses the semantic schools of though, hopefully
representing and distinguishing the different schools more accurately.
Look for revision 47120 (it's in svn, but it seems to take a while to
propagate to python.org).

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Tue Jun 27 07:13:51 2006
From: guido at python.org (Guido van Rossum)
Date: Mon, 26 Jun 2006 22:13:51 -0700
Subject: [Python-Dev] Semantic of isinstance
In-Reply-To: <44A0BD36.2070402@canterbury.ac.nz>
References: <5.1.1.6.0.20060626232454.01ee2370@sparrow.telecommunity.com>
	<44A0BD36.2070402@canterbury.ac.nz>
Message-ID: <ca471dc20606262213l7427f926l37b10583452f00e5@mail.gmail.com>

On 6/26/06, Greg Ewing <greg.ewing at canterbury.ac.nz> wrote:
> Phillip J. Eby wrote:
>
> > It's not inconsistent - isinstance() checks __class__ in *addition* to
> > type() in order to allow proxying tricks like lying about your
> > __class__.
>
> If this is a deliberate feature, it's a bit patchy, because
> it means the proxy can't lie about *not* being an instance
> of its real type.
>
> Perhaps Guido could clarify how much lying a proxy is
> supposed to be able to get away with?

Sorry, I don't remember all the constraints. Read the code and weep.
This should be revisited for Py3k. The code became convoluted out of
some needs in Zope; I can't remember if it was Zope 2 or Zope 3 that
needed this (probably both) and I can't remember the specific
situation where it was needed.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From fredrik at pythonware.com  Tue Jun 27 07:28:31 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Tue, 27 Jun 2006 07:28:31 +0200
Subject: [Python-Dev] PEP 3103: A Switch/Case Statement
In-Reply-To: <ca471dc20606261457x423bfa23y120d1c2c61388110@mail.gmail.com>
References: <ca471dc20606261223g55126e1fg457c30049dbbf34b@mail.gmail.com>	<Pine.LNX.4.58.0606261438120.17937@server1.LFW.org>	<ca471dc20606261306l68be4aa9m867efab4403362dc@mail.gmail.com>	<Pine.LNX.4.58.0606261521270.17937@server1.LFW.org>
	<ca471dc20606261457x423bfa23y120d1c2c61388110@mail.gmail.com>
Message-ID: <e7qfls$fr9$1@sea.gmane.org>

Guido van Rossum wrote:

> Most school I proponents (perhaps you're the only exception) have
> claimed that optimization is desirable, but added that it would be
> easy to add hash-based optimization. IMO it's not so easy in the light
> of various failure modes of hash(). (A possible solution would be to
> only use hashing if the expression's type is one of a small set of
> trusted builtins, and not a subclass; we can trust int.__hash__,
> str.__hash__ and a few others.)

that's the obvious approach for the optimize-under-the-hood school -- 
only optimize if you can do that cleanly (to enable optimization, all 
case values must be either literals or statics, have the same type, and 
belong to a set of trusted types).  this gives a speedup in all main use 
cases, and clear semantics in all cases.

another approach would be to treat switch/case and switch/case-static as 
slightly different constructs; if all cases are static, put the case 
values in a dictionary, and do the lookup as usual (propagating any errors).

</F>


From guido at python.org  Tue Jun 27 07:37:33 2006
From: guido at python.org (Guido van Rossum)
Date: Mon, 26 Jun 2006 22:37:33 -0700
Subject: [Python-Dev] PEP 3103: A Switch/Case Statement
In-Reply-To: <e7qfls$fr9$1@sea.gmane.org>
References: <ca471dc20606261223g55126e1fg457c30049dbbf34b@mail.gmail.com>
	<Pine.LNX.4.58.0606261438120.17937@server1.LFW.org>
	<ca471dc20606261306l68be4aa9m867efab4403362dc@mail.gmail.com>
	<Pine.LNX.4.58.0606261521270.17937@server1.LFW.org>
	<ca471dc20606261457x423bfa23y120d1c2c61388110@mail.gmail.com>
	<e7qfls$fr9$1@sea.gmane.org>
Message-ID: <ca471dc20606262237y18854bfbvbbd0793d0c8745f5@mail.gmail.com>

On 6/26/06, Fredrik Lundh <fredrik at pythonware.com> wrote:
> Guido van Rossum wrote:
>
> > Most school I proponents (perhaps you're the only exception) have
> > claimed that optimization is desirable, but added that it would be
> > easy to add hash-based optimization. IMO it's not so easy in the light
> > of various failure modes of hash(). (A possible solution would be to
> > only use hashing if the expression's type is one of a small set of
> > trusted builtins, and not a subclass; we can trust int.__hash__,
> > str.__hash__ and a few others.)
>
> that's the obvious approach for the optimize-under-the-hood school --
> only optimize if you can do that cleanly (to enable optimization, all
> case values must be either literals or statics, have the same type, and
> belong to a set of trusted types).  this gives a speedup in all main use
> cases, and clear semantics in all cases.
>
> another approach would be to treat switch/case and switch/case-static as
> slightly different constructs; if all cases are static, put the case
> values in a dictionary, and do the lookup as usual (propagating any errors).

I think we need a PEP for const/static/only/cached/precomputed or
whatever people like to call it.

Once we have (say) static, I think making the case expressions static
by default would still cover all useful cases, and would allow us to
diagnose duplicate cases reliably (which the if/elif chain semantics
don't allow IIUC).

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From ronaldoussoren at mac.com  Tue Jun 27 08:40:58 2006
From: ronaldoussoren at mac.com (Ronald Oussoren)
Date: Tue, 27 Jun 2006 08:40:58 +0200
Subject: [Python-Dev] Problems building Python on OSX 10.4.6?
In-Reply-To: <20060626210310.7604.qmail@web52313.mail.yahoo.com>
References: <20060626210310.7604.qmail@web52313.mail.yahoo.com>
Message-ID: <50184A19-9D5D-4BFE-9775-DB87A8FCC00D@mac.com>


On 26-jun-2006, at 23:03, J. Jeffrey Close wrote:

> [Bleh, sorry about the subject line on my first post.
> Forgot to edit it before I sent.]
>
>
> Hi all,
>
> I have been trying for some time to build Python 2.4.x
> from source on OS X 10.4.6.  I've found *numerous*
> postings on various mailing lists and web pages
> documenting the apparently well-known problems of
> doing so.  Various problems arise either in the
> ./configure step, with configure arguments that don't
> work, or in the compile, or in my case in the link
> step with libtool.
>
> The configure options I'm using are the following:
> --enable-framework --with-pydebug --with-debug=yes
> --prefix=/usr --with-dyld --program-suffix=.exe
> --enable-universalsdk

Which sources are you using? The 2.4.x tarballs don't support
--enable-universalsdk (and yes, configure won't complain about unknown
or misspelled options, but that's an entirely different subject). Those
sources also don't support mixing --prefix and --enable-framework; the
prefix argument will be ignored completely. Python 2.5 does support
--enable-universalsdk and specifying other prefixes when using
--enable-framework. I have a patch that backports this to 2.4.x, but
haven't applied it yet because I haven't had time to fully test it;
hopefully I'll manage that this week.

BTW. --enable-framework --with-pydebug is all you need, the others  
are either default or deduced from the platform information by  
configure.

And completely off-topic: please don't install your own build of  
python as /usr/bin/python. IMHO this is conceptually bad, but more  
importantly Apple uses python in some parts of the OS (PDF workflows  
for example) and this change could well break that.

>
> I've managed to get past configure and can compile
> everything, but in the link I get the error "Undefined
> symbols:  ___eprintf" .  This appears to have
> something to do with dynamic library loading not
> properly pulling in libgcc.  I've tried with -lgcc in
> the LD options, but that produces a configure error
> "cannot compute sizeof...".

What version of the developer tools are you using? And are you  
building on PPC or Intel?

One way to work around the problems you're having is to install the  
binary install of 2.4.3.

>
> If I remove "--enable-framework" the complete build
> works, but unfortunately that is the one critical
> element that I need.
>
> The web pages I've found referring to this range from
> 2001 to present -- still apparently everybody is
> having problems with this.  Does *anybody* here have
> Python built from source on this OS?

Obviously someone must have managed, there's a binary installer for
the OS :-).  And before you, nobody that ran into this problem found
it important enough to actually tell us about it.

Ronald

>
> Jeff


From steve at holdenweb.com  Tue Jun 27 09:15:40 2006
From: steve at holdenweb.com (Steve Holden)
Date: Tue, 27 Jun 2006 08:15:40 +0100
Subject: [Python-Dev] ImportWarning flood
In-Reply-To: <20060624172920.6639.qmail@web31512.mail.mud.yahoo.com>
References: <20060624150311.29014.711788606.divmod.quotient.9044@ohm>
	<20060624172920.6639.qmail@web31512.mail.mud.yahoo.com>
Message-ID: <e7qlus$pss$3@sea.gmane.org>

Ralf W. Grosse-Kunstleve wrote:
> --- Jean-Paul Calderone <exarkun at divmod.com> wrote:
> 
>>I think it is safe to say that Twisted is more widely used than anything
>>Google has yet released.  Twisted also has a reasonably plausible
>>technical reason to dislike this change.  Google has a bunch of engineers
>>who, apparently, cannot remember to create an empty __init__.py file in
>>some directories sometimes.
> 
> 
> Simply adding a note to the ImportError message would solve this problem "just
> in time":
> 
> 
>>>>import mypackage.foo
> 
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
> ImportError: No module named mypackage.foo
>     Note that subdirectories are searched for imports only if they contain an
>     __init__.py file: http://www.python.org/doc/essays/packages.html
> 
Yeah, that'll really help the end-user whose sys admin has just upgraded 
to 2.5, won't it?

regards
  Steve
-- 
Steve Holden       +44 150 684 7255  +1 800 494 3119
Holden Web LLC/Ltd          http://www.holdenweb.com
Love me, love my blog  http://holdenweb.blogspot.com
Recent Ramblings     http://del.icio.us/steve.holden


From sreeram at tachyontech.net  Tue Jun 27 08:33:39 2006
From: sreeram at tachyontech.net (K.S.Sreeram)
Date: Tue, 27 Jun 2006 12:03:39 +0530
Subject: [Python-Dev] PEP 3103: A Switch/Case Statement
In-Reply-To: <ca471dc20606262237y18854bfbvbbd0793d0c8745f5@mail.gmail.com>
References: <ca471dc20606261223g55126e1fg457c30049dbbf34b@mail.gmail.com>	<Pine.LNX.4.58.0606261438120.17937@server1.LFW.org>	<ca471dc20606261306l68be4aa9m867efab4403362dc@mail.gmail.com>	<Pine.LNX.4.58.0606261521270.17937@server1.LFW.org>	<ca471dc20606261457x423bfa23y120d1c2c61388110@mail.gmail.com>	<e7qfls$fr9$1@sea.gmane.org>
	<ca471dc20606262237y18854bfbvbbd0793d0c8745f5@mail.gmail.com>
Message-ID: <44A0D143.2000907@tachyontech.net>

Guido van Rossum wrote:
> I think we need a PEP for const/static/only/cached/precomputed or
> whatever people like to call it.

fredrik's got a micro pep at http://online.effbot.org

> Once we have (say) static, I think making the case expressions static
> by default would still cover all useful cases, and would allow us to
> diagnose duplicate cases reliably (which the if/elif chain semantics
> don't allow IIUC).

Making case expressions static by default would be very surprising to users
because of the restrictions placed by static. For instance, 'case in a'
will not support containers which have a custom __contains__ method. It
will also not support containers like lists and sets, because they are
mutable. IMHO this doesn't feel very Pythonic.
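
As a concrete example of the __contains__ point, here is a minimal sketch
of a container that works fine with the ordinary 'in' operator but could
never be frozen into a hash lookup:

class Range(object):
    """Matches any value in [lo, hi); membership is computed, not enumerable."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __contains__(self, value):
        return self.lo <= value < self.hi

teens = Range(13, 20)
print 14 in teens      # True -- the ordinary 'in' operator works fine
print 21 in teens      # False
# There is no finite key set to precompute, so 'case in teens' could never
# be turned into a dictionary lookup.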

Instead, if we redefine the goal of the switch statement to be 'ease of
expression' rather than 'optimization', then it can just be used as a
concise alternative to if-elif chains, and we can make 'case in a' work
with all containers where a regular 'in' check works *AND* still
give the possibility of fast lookup when the programmer wants it, using
an explicit static.

I feel programmer expressivity is more important, and making case
expressions static by default looks like premature optimization.

Regards
Sreeram


From maric at aristote.info  Tue Jun 27 09:29:12 2006
From: maric at aristote.info (Maric Michaud)
Date: Tue, 27 Jun 2006 09:29:12 +0200
Subject: [Python-Dev] Semantic of isinstance
In-Reply-To: <5.1.1.6.0.20060626232454.01ee2370@sparrow.telecommunity.com>
References: <5.1.1.6.0.20060626232454.01ee2370@sparrow.telecommunity.com>
Message-ID: <200606270929.12912.maric@aristote.info>

On Tuesday 27 June 2006 at 05:38, Phillip J. Eby wrote:
> At 05:16 PM 6/26/2006 -0700, Martin Maly wrote:
> > >>> class D(object):
> >
> >...     def getclass(self):
> >...         print "D.getclass"
> >...         return C
> >...     __class__ = property(getclass)
> >...
> >
> > >>> isinstance(D(), D)
> >
> >True
> >
> > >>> isinstance(D(), C)
> >
> >D.getclass
> >True
> >
> >isinstance in this case returns True to both C and D test. I would expect
> >
> >to see the __class__ property being called in both cases and get:
> > >>> isinstance(D(), D)
> >
> >D.getclass
> >False
> >
> >but that's not the case for some reason.
>
> That's because isinstance checks type(D()) and finds it equal to D -- this
> shortcuts the process.
>
> >  It seems that the __class__ is only accessed in some cases, but not
> > always, leading to what I think is a semantic inconsistency.
>
> It's not inconsistent - isinstance() checks __class__ in *addition* to
> type() in order to allow proxying tricks like lying about your
> __class__.  It therefore returns true if either your real type *or* your
> __class__ matches, and as you can see, the real type is checked first.
>
> >class E(object):
> >     def getbases(self):
> >         print "E.getbases"
> >         return ()
> >     __bases__ = property(getbases)
> >
> >class C(object):
> >     def getbases(self):
> >         print "C.getbases"
> >         return (E,)                     # C() claims: "E is my base class"
> >     __bases__ = property(getbases)
> >
> >class D(object):
> >     def getclass(self):
> >         print "D.getclass"
> >         return C()                      # D() claims: "C() is my __class__"
> >     __class__ = property(getclass)
> >
> >
> >class F(object): pass
> >
> >
> >print "Test 1"
> >print isinstance(D(), E())              # testing against E() instance
> >print "Test 2"
> >print isinstance(D(), E)                # testing against E class
> >
> >The output here is:
> >
> >Test 1
> >E.getbases
> >D.getclass
> >C.getbases
> >False
> >
> >Test 2
> >D.getclass
> >False
> >
> >In the 2nd test, D.getclass is called to get the __class__ of D(), which
> >returns C() instance. At this point I would expect that C.getbases gets
> >called as __bases__ are retrieved, which would return tuple consisting of
> >E and ultimately produce True result.
>
> As it happens, this is due to the fact that E is a type, while E() is
> not.  There's an optimization in the isinstance() machinery that simply
> checks to see if D().__class__ is a subtype of E.  That's where your
> experiment fails.
>
> I'm not sure whether this behavior should be considered correct or not.
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/maric%40aristote.info

This doesn't seem to be related just to the isinstance implementation;
furthermore, it is very surprising that old-style and new-style classes
behave in exactly the opposite way.

In [2]: class a(object) :
   ...:     __class__ = 0
   ...:
   ...:

In [3]: a.__class__
Out[3]: <type 'type'>

In [4]: a().__class__
Out[4]: 0

In [7]: class a :
   ...:     __class__ = 0
   ...:
   ...:

In [8]: a.__class__
Out[8]: 0

In [9]: a().__class__
Out[9]: <class __main__.a at 0xa78cb4ac>

-- 
_____________

Maric Michaud
_____________

Aristote - www.aristote.info
3 place des tapis
69004 Lyon
Tel: +33 426 880 097

From robinbryce at gmail.com  Tue Jun 27 10:42:24 2006
From: robinbryce at gmail.com (Robin Bryce)
Date: Tue, 27 Jun 2006 09:42:24 +0100
Subject: [Python-Dev] PEP 3103: A Switch/Case Statement
In-Reply-To: <ca471dc20606262237y18854bfbvbbd0793d0c8745f5@mail.gmail.com>
References: <ca471dc20606261223g55126e1fg457c30049dbbf34b@mail.gmail.com>
	<Pine.LNX.4.58.0606261438120.17937@server1.LFW.org>
	<ca471dc20606261306l68be4aa9m867efab4403362dc@mail.gmail.com>
	<Pine.LNX.4.58.0606261521270.17937@server1.LFW.org>
	<ca471dc20606261457x423bfa23y120d1c2c61388110@mail.gmail.com>
	<e7qfls$fr9$1@sea.gmane.org>
	<ca471dc20606262237y18854bfbvbbd0793d0c8745f5@mail.gmail.com>
Message-ID: <bcf87d920606270142x7a3ef0ddx74959931260c80f8@mail.gmail.com>

> PEP 3103, When to Freeze the Dispatch Dict/Option 1

Two things resonated with me about Raymond's proposal and the follow-up:

- It seemed agnostic to almost all of the independently contentious issues.
- "is defined tightly enough to allow room for growth and elaboration over
time" [Raymond]. In particular it left room for
const/static/only/cached/etc to come along later.

I think it's worth acknowledging this in the PEP.

Is nothing better than something in this case? I don't know.

> I think we need a PEP for const/static/only/cached/precomputed or
> whatever people like to call it.
>
> Once we have (say) static, I think making the case expressions static
> by default would still cover all useful cases, and would allow us to
> diagnose duplicate cases reliably (which the if/elif chain semantics
> don't allow IIUC).

If the expectation is that static/const will evolve as a sibling PEP,
does this not make Raymond's suggestion at least a little more
appealing?

Is it unacceptable - or impractical - to break the addition of switch
to Python into two steps, separated by a minor version?

Robin

From ncoghlan at iinet.net.au  Tue Jun 27 14:03:45 2006
From: ncoghlan at iinet.net.au (Nick Coghlan)
Date: Tue, 27 Jun 2006 22:03:45 +1000
Subject: [Python-Dev] PEP 328 and PEP 338, redux
Message-ID: <44A11EA1.1000605@iinet.net.au>

Mitch Chapman [1] tripped over the fact that relative imports don't like main 
modules because __name__ doesn't contain any package hierarchy information.

It occurred to me that a slight modification to PEP 338 might solve the 
problem fairly cleanly: instead of simply setting __name__ to '__main__' for a 
module in a package, the -m switch could prepend the package name so that 
relative imports can work correctly.

Inside the module, the test for "am I the main module" would need to be 
"__name__.endswith('__main__')" instead of "__name__ == '__main__'", but other 
than that, there should be very little impact.
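
For illustration, the check inside a package submodule would look
something like this (a minimal sketch, assuming -m sets __name__ to
something like 'pkg.__main__' as described above):

    def main():
        print "running as the main module"

    if __name__.endswith('__main__'):
        main()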

By sticking the main module into sys.modules under two different names (once 
with the package prefix and once without), runpy could even take care of 
ensuring that the following invariant held:

   import __main__
   by_name = __import__(__main__.__name__)
   assert by_name is __main__

The behaviour of top level modules would be unaffected, since they won't have 
a package prefix.

Cheers,
Nick.

[1] http://www.python.org/sf/1510172

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------
             http://www.boredomandlaziness.org

From martin at v.loewis.de  Tue Jun 27 14:49:59 2006
From: martin at v.loewis.de ("Martin v. Löwis")
Date: Tue, 27 Jun 2006 14:49:59 +0200
Subject: [Python-Dev] Semantic of isinstance
In-Reply-To: <5.1.1.6.0.20060626232454.01ee2370@sparrow.telecommunity.com>
References: <5.1.1.6.0.20060626232454.01ee2370@sparrow.telecommunity.com>
Message-ID: <44A12977.4050202@v.loewis.de>

Phillip J. Eby wrote:
>>  It seems that the __class__ is only accessed in some cases, but not 
>> always, leading to what I think is a semantic inconsistency.
> 
> It's not inconsistent - isinstance() checks __class__ in *addition* to 
> type() in order to allow proxying tricks like lying about your 
> __class__.  It therefore returns true if either your real type *or* your 
> __class__ matches, and as you can see, the real type is checked first.

This is not the original rationale, though: the check for a __class__
attribute on non-instance objects was introduced in r13520, to support
ExtensionClasses. I never fully understood ExtensionClasses, but I
believe they were not based on proxying tricks. Instead, they were
an early version of new-style classes.

Regards,
Martin



From martin at v.loewis.de  Tue Jun 27 14:52:54 2006
From: martin at v.loewis.de ("Martin v. Löwis")
Date: Tue, 27 Jun 2006 14:52:54 +0200
Subject: [Python-Dev] Semantic of isinstance
In-Reply-To: <5C0A6F919D675745BB1DBA7412DB68F5053847B370@df-foxhound-msg.exchange.corp.microsoft.com>
References: <5C0A6F919D675745BB1DBA7412DB68F5053847B370@df-foxhound-msg.exchange.corp.microsoft.com>
Message-ID: <44A12A26.9070009@v.loewis.de>

Martin Maly wrote:
> Thanks for the response. The code snippet I sent deals with new style
> classes only so I assume that in some cases isinstance falls back to
> old-style-like handling which then asks for __bases__ and __class__
> etc, possibly incorrectly so on new style classes.

Again, I believe this is all included for ExtensionClasses: it looks
for __class__ on the object if the type check fails, so that an
ExtensionClass could be actually a class derived from the C type.

Regards,
Martin

From ncoghlan at gmail.com  Tue Jun 27 15:36:33 2006
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 27 Jun 2006 23:36:33 +1000
Subject: [Python-Dev] PEP 3103: A Switch/Case Statement
In-Reply-To: <ca471dc20606261223g55126e1fg457c30049dbbf34b@mail.gmail.com>
References: <ca471dc20606261223g55126e1fg457c30049dbbf34b@mail.gmail.com>
Message-ID: <44A13461.508@gmail.com>

Guido van Rossum wrote:
> I've written a new PEP, summarizing (my reaction to) the recent
> discussion on adding a switch statement. While I have my preferences,
> I'm trying to do various alternatives justice in the descriptions. The
> PEP also introduces some standard terminology that may be helpful in
> future discussions. I'm putting this in the Py3k series to give us
> extra time to decide; it's too important to rush it.
> 
>   http://www.python.org/dev/peps/pep-3103/

A generally nice summary, but as one of the advocates of Option 2 when it 
comes to freezing the jump table, I'd like to see it given some better press :)

> Feedback (also about misrepresentation of alternatives I don't favor)
> is most welcome, either to me directly or as a followup to this post.

My preferred variant of Option 2 (calculation of the jump table on first use) 
disallows function locals in the switch cases just like Option 3. The 
rationale is that the locals can't be expected to remain the same across 
different invocations of the function, so caching an expression that depends 
on them is just as nonsensical for Option 2 as it is for Option 3 (and hence 
should trigger a Syntax Error either way).

Given that variant, my reasons for preferring Option 2 over Option 3 are:
  - the semantics are the same at module, class and function level
  - the order of execution roughly matches the order of the source code
  - it does not cause any surprises when switches are inside conditional logic

As an example of the latter kind of surprise, consider this:

   def surprise(x):
      do_switch = False
      if do_switch:
          switch x:
              case sys.stderr.write("Not reachable!\n"):
                  pass

Option 2 won't print anything, since the switch statement is never executed, 
so the jump table is never built. Option 3 (def-time calculation of the jump 
table), however, will print "Not reachable!" to stderr when the function is 
defined.

Now consider this small change, where the behaviour of Option 3 is not only 
surprising but outright undefined:

   def surprise(x):
      if 0:
          switch x:
              case sys.stderr.write("Not reachable!\n"):
                  pass

The optimiser is allowed to throw away the contents of an if 0: block. This 
makes no difference for Option 2 (since it never executed the case expression 
in the first place), but what happens under Option 3? Is "Not reachable!" 
written to stderr or not?

When it comes to the question of "where do we store the result?" for the 
first-execution calculation of the jump table, my proposal is "a hidden cell 
in the current namespace".

The first time the switch statement is executed, the cell object is empty, so 
the jump table creation code is executed and the result stored in the cell. On 
subsequent executions of the switch statement, the jump table is retrieved 
directly from the cell.
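
A rough emulation of that behaviour in current Python, with an ordinary
module-level name standing in for the hidden cell (all names and case
values below are illustrative only):

    _switch_cell = None     # stands in for the initially empty cell

    def switch_like(x):
        global _switch_cell
        if _switch_cell is None:
            # first execution of the switch: build the jump table once
            _switch_cell = {'red': 1, 'green': 2, 'blue': 3}
        # later executions fetch the table straight from the "cell"
        return _switch_cell.get(x, 0)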

For functions, the cell objects for any switch tables would be created 
internally by the function object constructor based on the attributes of the 
code object. So the cells would be created anew each time the function 
definition is executed. These would be saved on the function object and 
inserted into the local namespace under the appropriate names before the code 
is executed (this is roughly the same thing that is done for closure 
variables). Deleting from the namespace afterwards isn't necessary, since the 
function local namespace gets thrown away anyway.

For module and class code, code execution (i.e. the exec statement) is 
modified so that when a code object is flagged as requiring these hidden 
cells, they are created and inserted into the namespace before the code is 
executed and removed from the namespace when execution of the code is 
complete. Doing it this way prevents the hidden cells from leaking into the 
attribute namespace of the class or module without requiring implicit 
insertion of a try-finally into the generated bytecode. This means that switch 
statements will work correctly in all code executed via an exec statement.

The hidden variables would simply use the normal format for temp names 
assigned by the compiler: "_[%d]". Such temporary names are already used by 
the with statement and by list comprehensions.

To deal with the threading problem mentioned in the PEP, I believe it would 
indeed be necessary to use double-checked locking. Fortunately Python's 
execution order is well enough defined that this works as intended, and the 
optimiser won't screw it up the way it can in C++. Each of the hidden cell 
objects created by a function would have to contain a synchronisation lock 
that was acquired before the jump table was calculated (the module level cell 
objects created by exec wouldn't need the synchronisation lock). Pseudo-code 
for the cell initialisation process:

   if the cell is empty:
       acquire the cell's lock
       try:
           if the cell is still empty:
               build the jump table and store it in the cell
       finally:
           release the cell's lock
   retrieve the jump table from the cell
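
Translated into runnable (if purely illustrative) Python, using a list
as the cell and a threading.Lock for the synchronisation:

    import threading

    _cell = []                        # empty list == empty cell
    _cell_lock = threading.Lock()

    def _get_jump_table(build_table):
        if not _cell:                 # first check, without the lock
            _cell_lock.acquire()
            try:
                if not _cell:         # second check, with the lock held
                    _cell.append(build_table())
            finally:
                _cell_lock.release()
        return _cell[0]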

No, it's not a coincidence that my proposal for 'once' expressions is simply a 
matter of taking the above semantics for evaluating the jump table and 
allowing them to be applied to an arbitrary expression. I actually had the 
idea for the jump table semantics before I thought of generalising it :)

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------
             http://www.boredomandlaziness.org

From ncoghlan at gmail.com  Tue Jun 27 15:41:54 2006
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 27 Jun 2006 23:41:54 +1000
Subject: [Python-Dev] PEP 3103: A Switch/Case Statement
In-Reply-To: <ca471dc20606261457x423bfa23y120d1c2c61388110@mail.gmail.com>
References: <ca471dc20606261223g55126e1fg457c30049dbbf34b@mail.gmail.com>	<Pine.LNX.4.58.0606261438120.17937@server1.LFW.org>	<ca471dc20606261306l68be4aa9m867efab4403362dc@mail.gmail.com>	<Pine.LNX.4.58.0606261521270.17937@server1.LFW.org>
	<ca471dc20606261457x423bfa23y120d1c2c61388110@mail.gmail.com>
Message-ID: <44A135A2.5090303@gmail.com>

Guido van Rossum wrote:
> For a real example from C++, read "Double Checked Locking is Broken":
> http://www.cs.umd.edu/~pugh/java/memoryModel/DoubleCheckedLocking.html
> (I first heard about this from Scott Meyers at the '06 ACCU conference
> in Oxford; an earlier version of his talk is also online:
> http://www.aristeia.com/Papers/DDJ_Jul_Aug_2004_revised.pdf).

I swear I wrote what I did about double-checked locking and Option 2 *before* 
reading this particular post!

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------
             http://www.boredomandlaziness.org

From pje at telecommunity.com  Tue Jun 27 15:48:58 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Tue, 27 Jun 2006 09:48:58 -0400
Subject: [Python-Dev] Semantic of isinstance
In-Reply-To: <ca471dc20606262213l7427f926l37b10583452f00e5@mail.gmail.com>
References: <44A0BD36.2070402@canterbury.ac.nz>
	<5.1.1.6.0.20060626232454.01ee2370@sparrow.telecommunity.com>
	<44A0BD36.2070402@canterbury.ac.nz>
Message-ID: <5.1.1.6.0.20060627094356.01ee8a40@sparrow.telecommunity.com>

At 10:13 PM 6/26/2006 -0700, Guido van Rossum wrote:
>On 6/26/06, Greg Ewing <greg.ewing at canterbury.ac.nz> wrote:
>>Phillip J. Eby wrote:
>>
>> > It's not inconsistent - isinstance() checks __class__ in *addition* to
>> > type() in order to allow proxying tricks like lying about your
>> > __class__.
>>
>>If this is a deliberate feature, it's a bit patchy, because
>>it means the proxy can't lie about *not* being an instance
>>of its real type.
>>
>>Perhaps Guido could clarify how much lying a proxy is
>>supposed to be able to get away with?
>
>Sorry, I don't remember all the constraints. Read the code and weep.
>This should be revisited for Py3k. The code became convoluted out of
>some needs in Zope; I can't remember if it was Zope 2 or Zope 3 that
>needed this (probably both) and I can't remember the specific
>situation where it was needed.

It was Zope 3 security proxies, although *any* proxy type benefits.  The 
idea was to make proxy objects be able to lie about their __class__ and be 
believed by isinstance().  However, there was no requirement that 
isinstance(ob, Proxy) return False, so that's not implemented.  And lying 
about __bases__ appears to only be allowed for things that aren't types.
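
A toy illustration of the trick (nothing Zope-specific here, just a
proxy whose __class__ property lies):

    class Target(object):
        pass

    class Proxy(object):
        def __init__(self, obj):
            self._obj = obj
        @property
        def __class__(self):
            return type(self._obj)    # lie about our class

    p = Proxy(Target())
    print isinstance(p, Target)       # True: the __class__ lie is believed
    print isinstance(p, Proxy)        # also True: the real type matches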


From pje at telecommunity.com  Tue Jun 27 15:53:07 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Tue, 27 Jun 2006 09:53:07 -0400
Subject: [Python-Dev] Semantic of isinstance
In-Reply-To: <200606270929.12912.maric@aristote.info>
References: <5.1.1.6.0.20060626232454.01ee2370@sparrow.telecommunity.com>
	<5.1.1.6.0.20060626232454.01ee2370@sparrow.telecommunity.com>
Message-ID: <5.1.1.6.0.20060627095038.03342008@sparrow.telecommunity.com>

At 09:29 AM 6/27/2006 +0200, Maric Michaud wrote:
>On Tuesday 27 June 2006 at 05:38, Phillip J. Eby wrote:
> > As it happens, this is due to the fact that E is a type, while E() is
> > not.  There's an optimization in the isinstance() machinery that simply
> > checks to see if D().__class__ is a subtype of E.  That's where your
> > experiment fails.
> >
> > I'm not sure whether this behavior should be considered correct or not.
> >
>
>Doesn't seems to be just related to isinstance implementation,

That may be, but it's due to code that's entirely 
independent.  isinstance() does what it does independently of the behavior 
you describe below, which is a product of how descriptors behave in new and 
old-style classes.  The behavior you describe below is a natural 
consequence of the documented behavior of descriptors, while the above 
behavior is a performance optimization in isinstance().  They aren't related.


>furthermore, it
>is very surprising that old-style and new style classes behave exactly the
>opposite way.
>
>In [2]: class a(object) :
>    ...:     __class__ = 0
>    ...:
>    ...:
>
>In [3]: a.__class__
>Out[3]: <type 'type'>
>
>In [4]: a().__class__
>Out[4]: 0
>
>In [7]: class a :
>    ...:     __class__ = 0
>    ...:
>    ...:
>
>In [8]: a.__class__
>Out[8]: 0
>
>In [9]: a().__class__
>Out[9]: <class __main__.a at 0xa78cb4ac>


From pje at telecommunity.com  Tue Jun 27 15:59:46 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Tue, 27 Jun 2006 09:59:46 -0400
Subject: [Python-Dev] Semantic of isinstance
In-Reply-To: <44A12977.4050202@v.loewis.de>
References: <5.1.1.6.0.20060626232454.01ee2370@sparrow.telecommunity.com>
	<5.1.1.6.0.20060626232454.01ee2370@sparrow.telecommunity.com>
Message-ID: <5.1.1.6.0.20060627095541.0328a008@sparrow.telecommunity.com>

At 02:49 PM 6/27/2006 +0200, Martin v. Löwis wrote:
>Phillip J. Eby wrote:
> >>  It seems that the __class__ is only accessed in some cases, but not
> >> always, leading to what I think is a semantic inconsistency.
> >
> > It's not inconsistent - isinstance() checks __class__ in *addition* to
> > type() in order to allow proxying tricks like lying about your
> > __class__.  It therefore returns true if either your real type *or* your
> > __class__ matches, and as you can see, the real type is checked first.
>
>This is not the original rationale, though: the check for a __class__
>attribute on non-instance objects was introduced in r13520, to support
>ExtensionClasses. I never fully understood ExtensionClasses, but I
>believe they were not based on proxying tricks. Instead, they were
>an early version of new-style classes.

Okay, well I recall discussion on zope-dev regarding making sure that 
isinstance() would support __class__ for security proxies as 
well.  However, I do not recall from the discussion whether isinstance() 
already did this as an effect of the above, so it's possible that the 
discussion was regarding an existing behavior.

In any event, ExtensionClasses are obsolete or becoming so, but security 
proxies are an ongoing need, and lying about __class__ is used by other 
projects besides Zope.

I'm not aware of anything besides ExtensionClass that lies about __bases__, 
however, which might explain why that aspect of the behavior isn't as well 
fleshed out.


From jimjjewett at gmail.com  Tue Jun 27 16:50:40 2006
From: jimjjewett at gmail.com (Jim Jewett)
Date: Tue, 27 Jun 2006 10:50:40 -0400
Subject: [Python-Dev] doc for new restricted execution design for Python
Message-ID: <fb6fbf560606270750l29c0c5a8s521f121518cbbc67@mail.gmail.com>

(1)  Is it impossible for an interpreter to switch between trusted and
untrusted modes?  This is probably a reasonable restriction, but worth
calling out loudly in the docs.

(2)  For the APIs returning an int, it wasn't clear what that int
would be, other than NULL => interpreter is trusted.

I'm not sure that NULL is even always the best answer when the
interpreter is trusted.  For example, if I called PyXXX_AllowFile, I
want to know whether the file is now allowed; I don't really care that
it is allowed because the interpreter is trusted anyhow.

(3)  Should PyXXX_Trusted have a variant that takes group/type/string,
meaning "Am I allowed to do *this*?", rather than having to
special-case the "You can do anything" case?

(4)  For capped resources, there needs to be a way to tell what that
cap is, and how much is left.  (Logically, this provides "how much is
already used", which is already a frequently requested feature for
memory.)

One use of untrusted interpreters is to stop runaway processes.  For
example, it might always be OK to add 4M memory, so long as it has
been at least 10 seconds since the last request.  This requires the
controller to know what the current setting is.

Caps and current usage should also be available (though read-only)
from python; it is quite sensible to spill some cache when getting too
close to your memory limit.

(5)  I think file creation/writing should be capped rather than
binary; it is reasonable to say "You can create a single temp file up
to 4K" or "You can create files, but not more than 20Meg total".

(6)  Given your expectation of one interpreter per web page,
interpreters will have to be very lightweight.  This might be the real
answer to the CPU limiting -- just run each restricted interpreter as
its own "thread" (possibly not an OS-level thread), and let the
scheduler switch it out.

-jJ

From guido at python.org  Tue Jun 27 17:08:11 2006
From: guido at python.org (Guido van Rossum)
Date: Tue, 27 Jun 2006 08:08:11 -0700
Subject: [Python-Dev] PEP 328 and PEP 338, redux
In-Reply-To: <44A11EA1.1000605@iinet.net.au>
References: <44A11EA1.1000605@iinet.net.au>
Message-ID: <ca471dc20606270808p4fe32945lf6019005bc3b054f@mail.gmail.com>

On 6/27/06, Nick Coghlan <ncoghlan at iinet.net.au> wrote:
> Mitch Chapman [1] tripped over the fact that relative imports don't like main
> modules because __name__ doesn't contain any package hierarchy information.
>
> It occurred to me that a slight modification to PEP 338 might solve the
> problem fairly cleanly: instead of simply setting __name__ to '__main__' for a
> module in a package, the -m switch could prepend the package name so that
> relative imports can work correctly.
>
> Inside the module, the test for "am I the main module" would need to be
> "__name__.endswith('__main__')" instead of "__name__ == '__main__'", but other
> than that, there should be very little impact.
>
> By sticking the main module into sys.modules under two different names (once
> with the package prefix and once without), runpy could even take care of
> ensuring that the following invariant held:
>
>    import __main__
>    by_name = __import__(__main__.__name__)
>    assert by_name is __main__
>
> The behaviour of top level modules would be unaffected, since they won't have
> a package prefix.
>
> Cheers,
> Nick.
>
> [1] http://www.python.org/sf/1510172

Bad idea IMO. The __name__ == "__main__" rule is so ingrained, you
don't want to mess with it.

Such a main module ought to use an *absolute* import to reach into the
rest of the package.

However, I'm fine with setting *another* variable to the full package
name so someone who *really* wants to do relative imports here knows
the package name.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Tue Jun 27 17:11:39 2006
From: guido at python.org (Guido van Rossum)
Date: Tue, 27 Jun 2006 08:11:39 -0700
Subject: [Python-Dev] PEP 3103: A Switch/Case Statement
In-Reply-To: <44A0D143.2000907@tachyontech.net>
References: <ca471dc20606261223g55126e1fg457c30049dbbf34b@mail.gmail.com>
	<Pine.LNX.4.58.0606261438120.17937@server1.LFW.org>
	<ca471dc20606261306l68be4aa9m867efab4403362dc@mail.gmail.com>
	<Pine.LNX.4.58.0606261521270.17937@server1.LFW.org>
	<ca471dc20606261457x423bfa23y120d1c2c61388110@mail.gmail.com>
	<e7qfls$fr9$1@sea.gmane.org>
	<ca471dc20606262237y18854bfbvbbd0793d0c8745f5@mail.gmail.com>
	<44A0D143.2000907@tachyontech.net>
Message-ID: <ca471dc20606270811v4d4f2f00y52426fed1ebdaba4@mail.gmail.com>

On 6/26/06, K.S.Sreeram <sreeram at tachyontech.net> wrote:
> Guido van Rossum wrote:
> > I think we need a PEP for const/static/only/cached/precomputed or
> > whatever people like to call it.
>
> fredrik's got a micro pep at http://online.effbot.org
>
> > Once we have (say) static, I think making the case expressions static
> > by default would still cover all useful cases, and would allow us to
> > diagnose duplicate cases reliably (which the if/elif chain semantics
> > don't allow IIUC).
>
> Making case expressions default static would be very surprising to users
> because of the restrictions placed by static. For instance 'case in a',
> will not support containers which have a custom __contains__ method. It
> will also not support containers like lists, and sets because they are
> mutable. IMHO this doesn't feel very pythonic.
>
> Instead if we redefine the goal of the switch statement to be 'ease of
> expression' rather than 'optimization', then it can just be used as a
> concise alternative to if-elif chains, and we can make 'case in a' work
> with all containers where a regular 'in' statement works *AND* still
> give the possibility of fast lookup when the programmer wants, using
> explicit static.
>
> I feel programmer expressivity is more important, and default static
> case expressions looks like premature optimization.

You've just placed yourself in School Ia (see the updated PEP). I
respectfully disagree.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Tue Jun 27 17:13:46 2006
From: guido at python.org (Guido van Rossum)
Date: Tue, 27 Jun 2006 08:13:46 -0700
Subject: [Python-Dev] PEP 3103: A Switch/Case Statement
In-Reply-To: <bcf87d920606270142x7a3ef0ddx74959931260c80f8@mail.gmail.com>
References: <ca471dc20606261223g55126e1fg457c30049dbbf34b@mail.gmail.com>
	<Pine.LNX.4.58.0606261438120.17937@server1.LFW.org>
	<ca471dc20606261306l68be4aa9m867efab4403362dc@mail.gmail.com>
	<Pine.LNX.4.58.0606261521270.17937@server1.LFW.org>
	<ca471dc20606261457x423bfa23y120d1c2c61388110@mail.gmail.com>
	<e7qfls$fr9$1@sea.gmane.org>
	<ca471dc20606262237y18854bfbvbbd0793d0c8745f5@mail.gmail.com>
	<bcf87d920606270142x7a3ef0ddx74959931260c80f8@mail.gmail.com>
Message-ID: <ca471dc20606270813m60619473u63b5c20ad2a60492@mail.gmail.com>

On 6/27/06, Robin Bryce <robinbryce at gmail.com> wrote:
> > PEP 3103, When to Freeze the Dispatch Dict/Option 1
>
> 2 things resonated with me for Raymond's proposal and the follow up:
>
> - It seemed agnostic to almost all of the independently contentious issues.

Except for the need to use named constants.

> - "is defined tightly enough to allow room for growth and elaboration over
> time" [Raymond]. In particular it left room for
> const/static/only/cached/etc to come along later.
>
> I think its worth acknowledging this in the PEP.

Search for Raymond's name. It's there.

> Is nothing better than something in this case ? I don't know.
>
> > I think we need a PEP for const/static/only/cached/precomputed or
> > whatever people like to call it.
> >
> > Once we have (say) static, I think making the case expressions static
> > by default would still cover all useful cases, and would allow us to
> > diagnose duplicate cases reliably (which the if/elif chain semantics
> > don't allow IIUC).
>
> If the expectation is that static/const will evolve as a sibling pep,
> does this not make Raymond's suggestion any more appealing, even a
> little ?

No, then School Ia becomes more appealing. Raymond's proposal is
unpythonic by not allowing expressions.

> Is it unacceptable - or impractical - to break the addition of switch
> to python in two (minor version separated) steps ?

But what's the point? We have until Python 3000 anyway.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From nicko at nicko.org  Tue Jun 27 17:23:16 2006
From: nicko at nicko.org (Nicko van Someren)
Date: Tue, 27 Jun 2006 16:23:16 +0100
Subject: [Python-Dev] PEP 328 and PEP 338, redux
In-Reply-To: <44A11EA1.1000605@iinet.net.au>
References: <44A11EA1.1000605@iinet.net.au>
Message-ID: <6E847CD4-1153-4E18-8F84-C413C12DEF6F@nicko.org>

On 27 Jun 2006, at 13:03, Nick Coghlan wrote:
> ...
> It occurred to me that a slight modification to PEP 338 might solve  
> the
> problem fairly cleanly: instead of simply setting __name__ to  
> '__main__' for a
> module in a package, the -m switch could prepend the package name  
> so that
> relative imports can work correctly.
>
> Inside the module, the test for "am I the main module" would need  
> to be
> "__name__.endswith('__main__')" instead of "__name__ ==  
> '__main__'", but other
> than that, there should be very little impact.

Hum... other than affecting more or less every runnable Python module
around, it should be very little impact.  That sounds like quite a bit
of impact to me!

	Nicko


From guido at python.org  Tue Jun 27 17:35:42 2006
From: guido at python.org (Guido van Rossum)
Date: Tue, 27 Jun 2006 08:35:42 -0700
Subject: [Python-Dev] PEP 3103: A Switch/Case Statement
In-Reply-To: <44A13461.508@gmail.com>
References: <ca471dc20606261223g55126e1fg457c30049dbbf34b@mail.gmail.com>
	<44A13461.508@gmail.com>
Message-ID: <ca471dc20606270835g65d11893m1c069f9c0d003ba7@mail.gmail.com>

On 6/27/06, Nick Coghlan <ncoghlan at gmail.com> wrote:
> Guido van Rossum wrote:
> > I've written a new PEP, summarizing (my reaction to) the recent
> > discussion on adding a switch statement. While I have my preferences,
> > I'm trying to do various alternatives justice in the descriptions. The
> > PEP also introduces some standard terminology that may be helpful in
> > future discussions. I'm putting this in the Py3k series to give us
> > extra time to decide; it's too important to rush it.
> >
> >   http://www.python.org/dev/peps/pep-3103/
>
> A generally nice summary, but as one of the advocates of Option 2 when it
> comes to freezing the jump table, I'd like to see it given some better press :)

Sure. Feel free to edit the PEP directly if you want.

> > Feedback (also about misrepresentation of alternatives I don't favor)
> > is most welcome, either to me directly or as a followup to this post.
>
> My preferred variant of Option 2 (calculation of the jump table on first use)
> disallows function locals in the switch cases just like Option 3. The
> rationale is that the locals can't be expected to remain the same across
> different invocations of the function, so caching an expression that depends
> on them is just as nonsensical for Option 2 as it is for Option 3 (and hence
> should trigger a Syntax Error either way).

OK, but the explanation of Option 2 becomes more cumbersome then:
instead of "first time executed" it now is "first time executed and
you cannot use any locals (but you can use locals if you're executing
globally, and you can use locals of outer functions) (oh, and whether
locals in a class are okay is anybody's guess)."

> Given that variant, my reasons for preferring Option 2 over Option 3 are:
>   - the semantics are the same at module, class and function level

No they're not. At the global level, this is okay, but at the function
level it's not:

  C = 1
  switch x:
  case C: print 42

Unless I misunderstand you and you want to disallow locals at the
global level too, in which case I see this okay at the function level
but not at the global level:

  switch x:
  case re.IGNORECASE: print 42

So I don't see how this is really true.

>   - the order of execution roughly matches the order of the source code

Only roughly though. One can still create obfuscated examples.

>   - it does not cause any surprises when switches are inside conditional logic
>
> As an example of the latter kind of surprise, consider this:
>
>    def surprise(x):
>       do_switch = False
>       if do_switch:
>           switch x:
>               case sys.stderr.write("Not reachable!\n"):
>                   pass
>
> Option 2 won't print anything, since the switch statement is never executed,
> so the jump table is never built. Option 3 (def-time calculation of the jump
> table), however, will print "Not reachable!" to stderr when the function is
> defined.

That's a pretty crooked example if you ask me. I think we all agree
that side effects of case expressions are one way we can deduce the
compiler's behind-the-scenes tricks (even School Ib is okay with
this). So I don't accept this as proof that Option 2 is better.

> Now consider this small change, where the behaviour of Option 3 is not only
> surprising but outright undefined:
>
>    def surprise(x):
>       if 0:
>           switch x:
>               case sys.stderr.write("Not reachable!\n"):
>                   pass
>
> The optimiser is allowed to throw away the contents of an if 0: block. This
> makes no difference for Option 2 (since it never executed the case expression
> in the first place), but what happens under Option 3? Is "Not reachable!"
> written to stderr or not?

This is a good question. I think both behaviors are acceptable. Again,
the problem is with the side-effect-full case expression, not with
Option 3.

> When it comes to the question of "where do we store the result?" for the
> first-execution calculation of the jump table, my proposal is "a hidden cell
> in the current namespace".

Um, what do you mean by the current namespace? You can't mean the
locals of the function containing the switch. There aren't always
outer functions so I must conclude you mean the module globals. But
I've never seen those referred to as "the current namespace".

> The first time the switch statement is executed, the cell object is empty, so
> the jump table creation code is executed and the result stored in the cell. On
> subsequent executions of the switch statement, the jump table is retrieved
> directly from the cell.

OK.

> For functions, the cell objects for any switch tables would be created
> internally by the function object constructor based on the attributes of the
> code object. So the cells would be created anew each time the function
> definition is executed. These would be saved on the function object and
> inserted into the local namespace under the appropriate names before the code
> is executed (this is roughly the same thing that is done for closure
> variables). Deleting from the namespace afterwards isn't necessary, since the
> function local namespace gets thrown away anyway.

So do I understand that the switch gets re-initialized whenever a new
function object is created? That seems a violation of the "first time
executed" rule, or at least a modification ("first time executed per
defined function"). Or am I misunderstanding?

> For module and class code, code execution (i.e. the exec statement) is
> modified so that when a code object is flagged as requiring these hidden
> cells, they are created and inserted into the namespace before the code is
> executed and removed from the namespace when execution of the code is
> complete. Doing it this way prevents the hidden cells from leaking into the
> attribute namespace of the class or module without requiring implicit
> insertion of a try-finally into the generated bytecode. This means that switch
> statements will work correctly in all code executed via an exec statement.

But if I have a code object c containing a switch statement (not
inside a def) with a side effect in one of its cases, the side effect
is activated each time through the following loop, IIUC:

  d = {}
  for i in range(10):
    exec c in d

> The hidden variables would simply use the normal format for temp names
> assigned by the compiler: "_[%d]". Such temporary names are already used by
> the with statement and by list comprehensions.

Fine.

> To deal with the threading problem mentioned in the PEP, I believe it would
> indeed be necessary to use double-checked locking. Fortunately Python's
> execution order is well enough defined that this works as intended, and the
> optimiser won't screw it up the way it can in C++. Each of the hidden cell
> objects created by a function would have to contain a synchronisation lock
> that was acquired before the jump table was calculated (the module level cell
> objects created by exec wouldn't need the synchronisation lock). Pseudo-code
> for the cell initialisation process:
>
>    if the cell is empty:
>        acquire the cell's lock
>        try:
>            if the cell is still empty:
>                build the jump table and store it in the cell
>        finally:
>            release the cell's lock
>     retrieve the jump table from the cell
>
> No, it's not a coincidence that my proposal for 'once' expressions is simply a
> matter of taking the above semantics for evaluating the jump table and
> allowing them to be applied to an arbitrary expression. I actually had the
> idea for the jump table semantics before I thought of generalising it :)

I'm confused how you can first argue that tying things to the function
definition is one of the main drawbacks of Option 3, and then proceed
to tie Option 2 to the function definition as well. This sounds like
by far the most convoluted specification I have seen so far. I hope
I'm misunderstanding what you mean by namespace.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From brett at python.org  Tue Jun 27 17:47:28 2006
From: brett at python.org (Brett Cannon)
Date: Tue, 27 Jun 2006 08:47:28 -0700
Subject: [Python-Dev] doc for new restricted execution design for Python
In-Reply-To: <fb6fbf560606270750l29c0c5a8s521f121518cbbc67@mail.gmail.com>
References: <fb6fbf560606270750l29c0c5a8s521f121518cbbc67@mail.gmail.com>
Message-ID: <bbaeab100606270847p6b210a86t88c091835e4ff501@mail.gmail.com>

On 6/27/06, Jim Jewett <jimjjewett at gmail.com> wrote:
>
> (1)  Is it impossible for an interpreter to switch between trusted and
> untrusted modes?  This is probably a reasonable restriction, but worth
> calling out loudly in the docs.


Yes, you should not change the state once the interpreter is used for
execution.

> (2)  For the APIs returning an int, it wasn't clear what that int
> would be, other than NULL => interpreter is trusted.


Doesn't matter.  I should probably change it to say "a false value"
instead of NULL.

> I'm not sure that NULL is even always the best answer when the
> interpreter is trusted.  For example, if I called PyXXX_AllowFile, I
> want to know whether the file is now allowed; I don't really care that
> it is allowed because the interpreter is trusted anyhow.


It's a question of whether you want that interpretation or want to make sure
you never call the restriction setters on trusted interpreters.  Anyone else
have a preference like Jim?

> (3)  Should PyXXX_Trusted have a variant that takes group/type/string,
> meaning "Am I allowed to do *this*?", rather than having to
> special-case the "You can do anything" case?


The PyXXX_Trusted() case is meant as a blanket trusted/untrusted test.  If
you want something more fine-grained, use the other checking functions (e.g.,
PyXXX_ExtendedCheckValue(), etc.).  As I mention in the docs, if you want an
"am I allowed to do this; if not, I want to do something else" check, wrap the
checking functions in another function and check that function's return
value::

  int
  check_for_value(group, type, string)
  {
    PyXXX_ExtendedCheckValue(group, type, string, 0);
    return 1;
  }

> (4)  For capped resources, there needs to be a way to tell what that
> cap is, and how much is left.  (Logically, this provides "how much is
> already used", which is already a frequently requested feature for
> memory.)


Fair enough.

> One use of untrusted interpreters is to stop runaway processes.  For
> example, it might always be OK to add 4M memory, so long as it has
> been at least 10 seconds since the last request.  This requires the
> controller to know what the current setting is.
>
> Caps and current usage should also be available (though read-only)
> from python; it is quite sensible to spill some cache when getting too
> close to your memory limit.


Yeah, being able to read your restrictions seems reasonable to do from an
untrusted interpreter.

> (5)  I think file creation/writing should be capped rather than
> binary; it is reasonable to say "You can create a single temp file up
> to 4K" or "You can create files, but not more than 20Meg total".


That has been suggested before.  Anyone else like this idea?

> (6)  Given your expectation of one interpreter per web page,
> interpreters will have to be very lightweight.  This might be the real
> answer to the CPU limiting -- just run each restricted interpreter as
> its own "thread" (possibly not an OS-level thread), and let the
> scheduler switch it out.


That's another possibility: have the OS's threading capabilities run
individual instances of the interpreter, each in its own thread, instead of
having Python manage all of the interpreters itself.  I just don't know how
feasible that is, given how Python is designed to be embedded and how much
global state it has.

Thanks for the feedback, Jim!

-Brett

From jimjjewett at gmail.com  Tue Jun 27 18:09:20 2006
From: jimjjewett at gmail.com (Jim Jewett)
Date: Tue, 27 Jun 2006 12:09:20 -0400
Subject: [Python-Dev] doc for new restricted execution design for Python
In-Reply-To: <bbaeab100606270847p6b210a86t88c091835e4ff501@mail.gmail.com>
References: <fb6fbf560606270750l29c0c5a8s521f121518cbbc67@mail.gmail.com>
	<bbaeab100606270847p6b210a86t88c091835e4ff501@mail.gmail.com>
Message-ID: <fb6fbf560606270909m66762515idbce785361dc4d4e@mail.gmail.com>

On 6/27/06, Brett Cannon <brett at python.org> wrote:
> On 6/27/06, Jim Jewett <jimjjewett at gmail.com> wrote:

> > (2)  For the APIs returning an int, it wasn't clear what that int
> > would be, other than NULL => interpreter is trusted.

> Doesn't matter.  I should probably change it to a say "a false value"
> instead of NULL.

But what if they succeed?  Do they return -1, 1, the amount allocated, ...

> > (3)  Should PyXXX_Trusted have a variant that takes group/type/string,
> > meaning "Am I allowed to do *this*?", rather than having to
> > special-case the "You can do anything" case?

> The PyXXX_Trusted() case is meant as a blanket trusted/untrusted test.  If
> you want more fine-grained, use the other checking functions (e.g.,
> PyXXX_ExtendedCheckValue(), etc.).

You gave an example of a library that was generally useful even in
restricted mode, but had one convenience function that shouldn't
always be permitted.

I imagine a function that is dangerous only because it takes a
filename rather than an open stream; I want to wrap it in some sort of
guard, but I would rather make a single "Can I do this?" query.

Under the current API, I would need separate logic for "The
interpreter is completely trusted" and "The interpreter is not
trusted, but can do _this_".  In practice, I'm betting that many
extension modules will skip at least one of these steps.

-jJ

From pje at telecommunity.com  Tue Jun 27 18:12:56 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Tue, 27 Jun 2006 12:12:56 -0400
Subject: [Python-Dev] PEP 328 and PEP 338, redux
In-Reply-To: <ca471dc20606270808p4fe32945lf6019005bc3b054f@mail.gmail.com>
References: <44A11EA1.1000605@iinet.net.au>
 <44A11EA1.1000605@iinet.net.au>
Message-ID: <5.1.1.6.0.20060627120926.02021fe0@sparrow.telecommunity.com>

At 08:08 AM 6/27/2006 -0700, Guido van Rossum wrote:
>Bad idea IMO. The __name__ == "__main__" rule is so ingrained, you
>don't want to mess with it.

Actually, maybe we *do* want to, for this usage.

Note that until Python 2.5, it was not possible to do "python -m 
nested.module", so this change merely prevents *existing* modules from 
being run this way -- when they could not have been before!

So, such modules would require a minor change to run under -m.  Is this 
actually a problem, or is it a new feature?


From brett at python.org  Tue Jun 27 18:26:16 2006
From: brett at python.org (Brett Cannon)
Date: Tue, 27 Jun 2006 09:26:16 -0700
Subject: [Python-Dev] doc for new restricted execution design for Python
In-Reply-To: <fb6fbf560606270909m66762515idbce785361dc4d4e@mail.gmail.com>
References: <fb6fbf560606270750l29c0c5a8s521f121518cbbc67@mail.gmail.com>
	<bbaeab100606270847p6b210a86t88c091835e4ff501@mail.gmail.com>
	<fb6fbf560606270909m66762515idbce785361dc4d4e@mail.gmail.com>
Message-ID: <bbaeab100606270926v6cc9e7w28401dd54b4b6622@mail.gmail.com>

On 6/27/06, Jim Jewett <jimjjewett at gmail.com> wrote:
>
> On 6/27/06, Brett Cannon <brett at python.org> wrote:
> > On 6/27/06, Jim Jewett <jimjjewett at gmail.com> wrote:
>
> > > (2)  For the APIs returning an int, it wasn't clear what that int
> > > would be, other than NULL => interpreter is trusted.
>
> > Doesn't matter.  I should probably change it to a say "a false value"
> > instead of NULL.
>
> But what if they succeed?  Do they return -1, 1, the amount allocated, ...


It can be specified as 1 or whatever.  I just planned on a true value.

> > > (3)  Should PyXXX_Trusted have a variant that takes group/type/string,
> > > meaning "Am I allowed to do *this*?", rather than having to
> > > special-case the "You can do anything" case?
>
> > The PyXXX_Trusted() case is meant as a blanket trusted/untrusted test.  If
> > you want more fine-grained, use the other checking functions (e.g.,
> > PyXXX_ExtendedCheckValue(), etc.).
>
> You gave an example of a library that was generally useful even in
> restricted mode, but had one convenience function that shouldn't
> always be permitted.
>
> I imagine a function that is dangerous only because it takes a
> filename rather than an open stream; I want to wrap it in some sort of
> guard, but I would rather make a single "Can I do this?" query.


Well, for the filename operation it should be protected by the file object
protection anyway, but ignoring this fact, I get your point.

> Under the current API, I would need separate logic for "The
> interpreter is completely trusted" and "The interpreter is not
> trusted, but can do _this_".  In practice, I'm betting that many
> extension modules will skip at least one of these steps.


My worry with this is that by providing checking functions that just return
true or false, people will rely on them too much, make logic errors in their
checks, and let security holes develop.  That is why the checking functions
as they stand now are macros that do the error return for you.

If people *really* need this, I can add check functions and rename the
current check functions to be more like "require" functions.  I just want to
hear other people say they will need this often enough to warrant the risk of
introducing possible security leaks from coding mistakes.

-Brett

From brett at python.org  Tue Jun 27 18:31:35 2006
From: brett at python.org (Brett Cannon)
Date: Tue, 27 Jun 2006 09:31:35 -0700
Subject: [Python-Dev] Is Lib/test/crashers/recursive_call.py really a
	crasher?
Message-ID: <bbaeab100606270931v376c6fa3v653f2cfcd92c0880@mail.gmail.com>

If you look at that crasher, you will notice that the recursion limit is set
to 1 << 30 before any code is run.  If you remove that high setting and go
with the default, the test doesn't crash and raises the appropriate
RuntimeError.
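
For reference, the crasher boils down to roughly the following
(illustrative reconstruction, not a verbatim copy of the test; with the
huge limit it segfaults instead of raising RuntimeError, so don't run
it anywhere you care about):

    import sys
    sys.setrecursionlimit(1 << 30)    # the insane limit in question

    def recurse():
        recurse()

    recurse()                         # exhausts the C stack and crashes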

Setting the recursion limit to such a high number will crash the interpreter
even when the proper recursion checks are in place.  This doesn't seem like
a legit crasher to me if it requires an insane recursion depth that would
crash almost any C program that had recursion in it.

Anyone have any objections if I call foul on the test and remove it without
any changes to Python?

-Brett

From rasky at develer.com  Tue Jun 27 18:32:22 2006
From: rasky at develer.com (Giovanni Bajo)
Date: Tue, 27 Jun 2006 18:32:22 +0200
Subject: [Python-Dev] PEP 328 and PEP 338, redux
References: <44A11EA1.1000605@iinet.net.au> <44A11EA1.1000605@iinet.net.au>
	<ca471dc20606270808p4fe32945lf6019005bc3b054f@mail.gmail.com>
	<5.1.1.6.0.20060627120926.02021fe0@sparrow.telecommunity.com>
Message-ID: <033001c69a07$44140890$d503030a@trilan>

Phillip J. Eby wrote:

> Actually, maybe we *do* want to, for this usage.
>
> Note that until Python 2.5, it was not possible to do "python -m
> nested.module", so this change merely prevents *existing* modules from
> being run this way -- when they could not have been before!
>
> So, such modules would require a minor change to run under -m.  Is
> this
> actually a problem, or is it a new feature?

This is where I wonder why the "def __main__()" PEP was rejected in the
first place. It would have solved this problem as well.
-- 
Giovanni Bajo


From collinw at gmail.com  Tue Jun 27 18:34:32 2006
From: collinw at gmail.com (Collin Winter)
Date: Tue, 27 Jun 2006 17:34:32 +0100
Subject: [Python-Dev] PEP 328 and PEP 338, redux
In-Reply-To: <6E847CD4-1153-4E18-8F84-C413C12DEF6F@nicko.org>
References: <44A11EA1.1000605@iinet.net.au>
	<6E847CD4-1153-4E18-8F84-C413C12DEF6F@nicko.org>
Message-ID: <43aa6ff70606270934w6ae08c41p89fddb48bb9ddf4e@mail.gmail.com>

On 6/27/06, Nicko van Someren <nicko at nicko.org> wrote:
> Hum... other than effecting more or less every runnable python module
> around it should be very little impact.  That sounds like quite a bit
> of impact to me!

Going from "__name__ == '__main__'" to "__name__.endswith('__main__')"
can be handled by a search-and-replace function, so, yes, the impact
is indeed minimal. It's not like you have to manually recode every
usage.

Collin Winter

From guido at python.org  Tue Jun 27 18:40:21 2006
From: guido at python.org (Guido van Rossum)
Date: Tue, 27 Jun 2006 09:40:21 -0700
Subject: [Python-Dev] PEP 328 and PEP 338, redux
In-Reply-To: <5.1.1.6.0.20060627120926.02021fe0@sparrow.telecommunity.com>
References: <44A11EA1.1000605@iinet.net.au>
	<5.1.1.6.0.20060627120926.02021fe0@sparrow.telecommunity.com>
Message-ID: <ca471dc20606270940g5fdd8f02kd728033601f989b2@mail.gmail.com>

On 6/27/06, Phillip J. Eby <pje at telecommunity.com> wrote:
> At 08:08 AM 6/27/2006 -0700, Guido van Rossum wrote:
> >Bad idea IMO. The __name__ == "__main__" rule is so ingrained, you
> >don't want to mess with it.
>
> Actually, maybe we *do* want to, for this usage.
>
> Note that until Python 2.5, it was not possible to do "python -m
> nested.module", so this change merely prevents *existing* modules from
> being run this way -- when they could not have been before!
>
> So, such modules would require a minor change to run under -m.  Is this
> actually a problem, or is it a new feature?

It's a feature with a problem, that's what it is. :-)

My main concern is that people will feel the requirement to change all
existing main programs to use the endswith() idiom whether they need
it or not; there's a powerful meme that says you should be
future-compatible and who knows when your script will end up as part
of a package. So we'd see proliferation of the new idiom way beyond
necessity, which would be a shame.

I'd rather turn the argument around: if you had a "main" script that
used your package before 2.5, the script would be required to use
absolute import to access the package anyway. Presumably the script
would be copied to somewhere on $PATH and the package would be copied
somewhere on $PYTHONPATH (site-packages most likely) and the script
would invoke the package via its full name.

The new -m feature adds the possibility that exactly the same main
script may now also be copied (with the rest of the package) onto
$PYTHONPATH, without also copying it to $PATH, and it can be invoked
using -m.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From aahz at pythoncraft.com  Tue Jun 27 18:40:52 2006
From: aahz at pythoncraft.com (Aahz)
Date: Tue, 27 Jun 2006 09:40:52 -0700
Subject: [Python-Dev] PEP 328 and PEP 338, redux
In-Reply-To: <5.1.1.6.0.20060627120926.02021fe0@sparrow.telecommunity.com>
References: <44A11EA1.1000605@iinet.net.au> <44A11EA1.1000605@iinet.net.au>
	<5.1.1.6.0.20060627120926.02021fe0@sparrow.telecommunity.com>
Message-ID: <20060627164052.GA24094@panix.com>

On Tue, Jun 27, 2006, Phillip J. Eby wrote:
> At 08:08 AM 6/27/2006 -0700, Guido van Rossum wrote:
>>
>>Bad idea IMO. The __name__ == "__main__" rule is so ingrained, you
>>don't want to mess with it.
> 
> Actually, maybe we *do* want to, for this usage.
> 
> Note that until Python 2.5, it was not possible to do "python -m 
> nested.module", so this change merely prevents *existing* modules from 
> being run this way -- when they could not have been before!
> 
> So, such modules would require a minor change to run under -m.  Is this 
> actually a problem, or is it a new feature?

Well, yes, considering that cd'ing to the module's dir and doing "python
module.py" will now fail.
-- 
Aahz (aahz at pythoncraft.com)           <*>         http://www.pythoncraft.com/

"I saw `cout' being shifted "Hello world" times to the left and stopped
right there."  --Steve Gonedes

From thomas at python.org  Tue Jun 27 18:46:15 2006
From: thomas at python.org (Thomas Wouters)
Date: Tue, 27 Jun 2006 18:46:15 +0200
Subject: [Python-Dev] Is Lib/test/crashers/recursive_call.py really a
	crasher?
In-Reply-To: <bbaeab100606270931v376c6fa3v653f2cfcd92c0880@mail.gmail.com>
References: <bbaeab100606270931v376c6fa3v653f2cfcd92c0880@mail.gmail.com>
Message-ID: <9e804ac0606270946t7d5cb810jf353f5ab4efb88ff@mail.gmail.com>

On 6/27/06, Brett Cannon <brett at python.org> wrote:
>
> If you look at that crasher, you will notice that recursion depth is set
> to 1 << 30 before any code is run.  If you remove that setting high setting
> and go with the default then the test doesn't crash and raises the
> appropriate RuntimeError.
>
> Setting the recursion depth to such a high number will crash the
> interpreter even when the proper recursion checks are in place.  This
> doesn't seem like a legit crasher to me if it requires an insane recursion
> depth that would crash almost any C program that had recursion in it.
>
> Anyone have any objections if I call foul on the test and remove it
> without any changes to Python?
>

Well, it's a valid crasher. It crashes Python to recurse too much. The
recursion limit was added to CPython to prevent the crash from happening too
easily, but that limit is just an implementation detail (and furthermore,
the actual limit is just guessed.) It's not like a real solution is
impossible, it's just very complex. Much like, say, restricted execution :-)
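
For reference, a minimal sketch of the kind of test being discussed
(this is the shape of the crasher, not the actual file):

    import sys

    sys.setrecursionlimit(1 << 30)   # far more frames than the C stack can hold

    def recurse():
        recurse()

    # With the default limit this raises RuntimeError ("maximum recursion
    # depth exceeded"); with the huge limit above, the C stack overflows
    # first and the process segfaults.
    recurse()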

-- 
Thomas Wouters <thomas at python.org>

Hi! I'm a .signature virus! copy me into your .signature file to help me
spread!

From jimjjewett at gmail.com  Tue Jun 27 19:06:27 2006
From: jimjjewett at gmail.com (Jim Jewett)
Date: Tue, 27 Jun 2006 13:06:27 -0400
Subject: [Python-Dev] doc for new restricted execution design for Python
In-Reply-To: <bbaeab100606270926v6cc9e7w28401dd54b4b6622@mail.gmail.com>
References: <fb6fbf560606270750l29c0c5a8s521f121518cbbc67@mail.gmail.com>
	<bbaeab100606270847p6b210a86t88c091835e4ff501@mail.gmail.com>
	<fb6fbf560606270909m66762515idbce785361dc4d4e@mail.gmail.com>
	<bbaeab100606270926v6cc9e7w28401dd54b4b6622@mail.gmail.com>
Message-ID: <fb6fbf560606271006t3c542afld6064d738b7eaca@mail.gmail.com>

On 6/27/06, Brett Cannon <brett at python.org> wrote:
> My worry with this is that by providing checking functions that just return
> true or false that people will rely on those too much and have logic errors
> in their check and let security holes develop.  That is why the checking
> functions as they stand now are macros that do the error return for you.

Using a macro that returns an Error is OK.  (Well, from this
perspective; it might be a problem for reference leaks.)

I just want a single call that does my erroring out, instead of two
separate calls depending on whether the interpreter is trusted.

-jJ

From brett at python.org  Tue Jun 27 19:06:32 2006
From: brett at python.org (Brett Cannon)
Date: Tue, 27 Jun 2006 10:06:32 -0700
Subject: [Python-Dev] Is Lib/test/crashers/recursive_call.py really a
	crasher?
In-Reply-To: <9e804ac0606270946t7d5cb810jf353f5ab4efb88ff@mail.gmail.com>
References: <bbaeab100606270931v376c6fa3v653f2cfcd92c0880@mail.gmail.com>
	<9e804ac0606270946t7d5cb810jf353f5ab4efb88ff@mail.gmail.com>
Message-ID: <bbaeab100606271006h6ad95398uddcac5a5a1f0b46e@mail.gmail.com>

On 6/27/06, Thomas Wouters <thomas at python.org> wrote:
>
>
> On 6/27/06, Brett Cannon <brett at python.org> wrote:
> >
> > If you look at that crasher, you will notice that recursion depth is set
> > to 1 << 30 before any code is run.  If you remove that setting high setting
> > and go with the default then the test doesn't crash and raises the
> > appropriate RuntimeError.
> >
> > Setting the recursion depth to such a high number will crash the
> > interpreter even when the proper recursion checks are in place.  This
> > doesn't seem like a legit crasher to me if it requires an insane recursion
> > depth that would crash almost any C program that had recursion in it.
> >
> > Anyone have any objections if I call foul on the test and remove it
> > without any changes to Python?
> >
>
> Well, it's a valid crasher. It crashes Python to recurse too much. The
> recursion limit was added to CPython to prevent the crash from happening too
> easily, but that limit is just an implementation detail (and furthermore,
> the actual limit is just guessed.) It's not like a real solution is
> impossible, it's just very complex. Much like, say, restricted execution :-)
>

OK, let me rephrase: I don't feel like fixing this if the proper thing
happens when the default recursion depth is in place.  There are a ton of
other recursion issues if you set the recursion depth to 1,073,741,824.  One
could try to make the interpreter non-recursive or stackless, but I leave
that to people who are smarter than me.  =)

And so, with that view, I don't see the test as something that needs the
special attention that being in crashers brings, since I suspect that one
will sit there forever.  =)

-Brett


From rrr at ronadam.com  Tue Jun 27 19:08:01 2006
From: rrr at ronadam.com (Ron Adam)
Date: Tue, 27 Jun 2006 12:08:01 -0500
Subject: [Python-Dev] PEP 3103: A Switch/Case Statement
In-Reply-To: <44A13461.508@gmail.com>
References: <ca471dc20606261223g55126e1fg457c30049dbbf34b@mail.gmail.com>
	<44A13461.508@gmail.com>
Message-ID: <44A165F1.6080807@ronadam.com>

> Given that variant, my reasons for preferring Option 2 over Option 3 are:
>   - the semantics are the same at module, class and function level
>   - the order of execution roughly matches the order of the source code
>   - it does not cause any surprises when switches are inside conditional logic
> 
> As an example of the latter kind of surprise, consider this:
> 
>    def surprise(x):
>       do_switch = False
>       if do_switch:
>           switch x:
>               case sys.stderr.write("Not reachable!\n"):
>                   pass
> 
> Option 2 won't print anything, since the switch statement is never executed, 
> so the jump table is never built. Option 3 (def-time calculation of the jump 
> table), however, will print "Not reachable!" to stderr when the function is 
> defined.

Good points on order of define vs order of execution surprises.



WARNING: probable over-generalization below, or a really useful idea,
depending on your point of view.  ;)

I use dict-based dispatching in a number of my programs and like it,
with the exception that I need to first define all the code in
functions (or use lambda) even if they are only one line.  So it
results in a three-step process: define the functions, define the
dict, and then call it.  And I need to make sure all the function
calls use the same calling signature; in some cases I'm passing
variables that one function doesn't need because another case needs
them.
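
A small sketch of that pattern, with made-up handler names, showing the
three steps and the shared calling signature:

    # step 1: define the handler functions (even the one-liners)
    def handle_add(x, y):
        return x + y

    def handle_neg(x, y):       # y is unused here, but the shared
        return -x               # signature forces it to be accepted

    # step 2: define the dispatch dict
    dispatch = {'add': handle_add,
                'neg': handle_neg}

    # step 3: call through the dict
    op, x, y = 'add', 2, 3
    print dispatch[op](x, y)    # -> 5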

So modeling the switch more directly after dictionary dispatching,
where the switch is explicitly defined first and then used later,
might be good: it offers reuse in the current scope, and it can easily
be adopted by code that currently uses dict-style dispatching.

    switch name:
       1:
          ...
       TWO:
          ...
       'a', 'b', 'c':
          ...
       in range(5,10):
          ...
       else:
          ...

    for choice in data:
       do choice in name:    # best calling form I can think of.


I think this avoids most of the define-time/order and optimization
issues, as well as the issues I have with dict-based dispatching, so I
thought it might be worth a mention.  There may still be some
advantage to evaluating the case expressions early, but I think that
is needed less in this form, so they could be evaluated at switch
definition time, which is the order in which the code is written.

The main argument against this form might be that it approaches macro
and named-block capabilities a bit too closely, but that may also be
an argument for it, as this may more directly fulfill the reasons some
people want named blocks and/or macros.

To use a named switch in such a way just call the desired case
explicitly ...

     switch responses:
        'make_red':
            ...
        'make_blue':
            ...

     do 'make_red' in responses:
     ...
     do 'make_red' in responses:     # again
     ...
     do 'make_blue' in responses:
     ...

So it offers local reuse of code in a more direct way than a switch
statement does and more closely matches that which is current practice
with dictionary dispatching.

Cheers,
    Ron

From mwh at python.net  Tue Jun 27 19:19:45 2006
From: mwh at python.net (Michael Hudson)
Date: Tue, 27 Jun 2006 18:19:45 +0100
Subject: [Python-Dev] Is Lib/test/crashers/recursive_call.py really a
 crasher?
In-Reply-To: <bbaeab100606270931v376c6fa3v653f2cfcd92c0880@mail.gmail.com>
	(Brett Cannon's message of "Tue, 27 Jun 2006 09:31:35 -0700")
References: <bbaeab100606270931v376c6fa3v653f2cfcd92c0880@mail.gmail.com>
Message-ID: <2mirmmh5xq.fsf@starship.python.net>

"Brett Cannon" <brett at python.org> writes:

> If you look at that crasher, you will notice that recursion depth is set
> to 1 << 30 before any code is run.  If you remove that setting high
> setting and go with the default then the test doesn't crash and raises the
> appropriate RuntimeError.
>
> Setting the recursion depth to such a high number will crash the
> interpreter even when the proper recursion checks are in place.  This
> doesn't seem like a legit crasher to me if it requires an insane recursion
> depth that would crash almost any C program that had recursion in it.
>
> Anyone have any objections if I call foul on the test and remove it
> without any changes to Python?

Yes, it's still a way to crash Python :-) (in fact, a problem vaguely
like this that made a complete test run segfault on 64-bit platforms
was fixed in PyPy recently).

More seriously, the recursion limit approach is IMHO something of a
hack, as the number of bytes of C stack between increments is rather
variable (try seeing how high you have to set the recursion limit to
when the recursion involves list.sort() compared to when it
doesn't).  I don't have a fantastic idea for fixing this, but I quite
like having some kind of reminder of it.
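
A rough illustration of that variability (a sketch; the exact numbers
depend on the platform and build):

    import sys

    def direct(n):
        # roughly one interpreter C frame per Python-level call
        if n:
            direct(n - 1)

    def via_sort(n):
        # each level also re-enters the interpreter from inside the
        # sort machinery's C code, so more C stack is consumed per
        # increment of the recursion counter
        if n:
            sorted([0], key=lambda _: via_sort(n - 1))

    # Raising the limit with sys.setrecursionlimit() and trying each
    # function gives a feel for how different the per-level C stack
    # cost can be.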

Cheers,
mwh

-- 
  ZAPHOD:  Who are you?
  ROOSTA:  A friend.
  ZAPHOD:  Oh yeah? Anyone's friend in particular, or just generally 
           well-disposed to people?               -- HHGttG, Episode 7

From guido at python.org  Tue Jun 27 19:20:25 2006
From: guido at python.org (Guido van Rossum)
Date: Tue, 27 Jun 2006 10:20:25 -0700
Subject: [Python-Dev] PEP 3103: A Switch/Case Statement
In-Reply-To: <44A165F1.6080807@ronadam.com>
References: <ca471dc20606261223g55126e1fg457c30049dbbf34b@mail.gmail.com>
	<44A13461.508@gmail.com> <44A165F1.6080807@ronadam.com>
Message-ID: <ca471dc20606271020s1ee6d48q324a0d0ef84e096e@mail.gmail.com>

On 6/27/06, Ron Adam <rrr at ronadam.com> wrote:
> I use dict base dispatching in a number of my programs and like it with
> the exception I need to first define all the code in functions (or use
> lambda) even if they are only one line.  So it results in a three step
> process, define functions,  define dict,  and then call it.  And I need
> to make sure all the function calls use the same calling signature. In
> some cases I'm passing variables that one function doesn't need because
> it is needed in one of the other cases.
>
> So modeling the switch after dictionary dispatching more directly where
> the switch is explicitly defined first and then used later might be good
> both because it offers reuse in the current scope and it can easily be
> used in code that currently uses dict style dispatching.
>
>     switch name:
>        1:
>           ...
>        TWO:
>           ...
>        'a', 'b', 'c':
>           ...
>        in range(5,10):
>           ...
>        else:
>           ...
>
>     for choice in data:
>        do choice in name:    # best calling form I can think of.

It looks like your proposal is to change switch into a command that
defines a function of one parameter. Instead of the "do <expression>
in <switch>" call you could just call the switch -- no new syntax
needed. Your example above would be

  for choice in data:
    name(choice)          # 'name' is the switch's name
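
A sketch of what that reading amounts to in today's Python (with
invented case bodies; this is not anyone's actual proposal):

    def name(choice):
        # each case body becomes a small function; note that the bodies
        # run in name()'s scope, not in the caller's scope
        def case_one():
            print 'one'
        def case_other():
            print 'other'
        {1: case_one}.get(choice, case_other)()

    data = [1, 2, 3]
    for choice in data:
        name(choice)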

However, early on in the switch discussion it was agreed that switch,
like if/elif, should  not create a new scope; it should just be a
control flow statement sharing the surrounding scope. The switch as
function definition would require the use of globals.

Also, it would make sense if a switch could be a method instead of a function.

I realize that by proposing a new invocation syntax (do ... in ...)
you might have intended some other kind of interaction between the
switch and the surrounding scope. but exactly what you're proposing
isn't very clear from your examples, since you don't have any example
code in the case suites, just "...".

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Tue Jun 27 19:21:38 2006
From: guido at python.org (Guido van Rossum)
Date: Tue, 27 Jun 2006 10:21:38 -0700
Subject: [Python-Dev] Misleading error message from
	PyObject_GenericSetAttr
In-Reply-To: <d38f5330606261254y51dc09fey4c3e52c34c42d040@mail.gmail.com>
References: <d38f5330606141423r3a03478hdc1729a6aac44735@mail.gmail.com>
	<ca471dc20606190818t2c4d84d0lc16cb7ac025436ec@mail.gmail.com>
	<d38f5330606261254y51dc09fey4c3e52c34c42d040@mail.gmail.com>
Message-ID: <ca471dc20606271021s262eb572vc2df92eba84887f1@mail.gmail.com>

Does anyone here have time to look at this?

On 6/26/06, Alexander Belopolsky <alexander.belopolsky at gmail.com> wrote:
> On 6/19/06, Guido van Rossum <guido at python.org> wrote:
> > On 6/14/06, Alexander Belopolsky <alexander.belopolsky at gmail.com> wrote:
> > > ... It would be better to change the message
> > > to "'Foo' object has only read-only attributes (assign to .bar)" as in
> > > the case tp_setattro == tp_setattr == NULL in  PyObject_SetAttr .
> >
> > I agree. Can you submit a patch to SF please?
> >
> Please see:
>
> https://sourceforge.net/tracker/index.php?func=detail&aid=1512942&group_id=5470&atid=305470
>
> I've tested the patch by setting tp_setattr to 0 in Xxo_Type.  With the patch:
>
> >>> import xx
> >>> x = xx.new()
> >>> x.a = 2
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
> AttributeError: 'xxmodule.Xxo' object has only read-only attributes
> (assign to .a)
> >>> del x.a
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
> AttributeError: 'xxmodule.Xxo' object has only read-only attributes (del .a)
>
> Note that this log reveals a small inaccuracy in xxmodule.c : the
> module name is "xx," but Xxo type name is "xxmodule.Xxo."  Should I
> submit a patch fixing that?
>


-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From brett at python.org  Tue Jun 27 19:29:20 2006
From: brett at python.org (Brett Cannon)
Date: Tue, 27 Jun 2006 10:29:20 -0700
Subject: [Python-Dev] doc for new restricted execution design for Python
In-Reply-To: <fb6fbf560606271006t3c542afld6064d738b7eaca@mail.gmail.com>
References: <fb6fbf560606270750l29c0c5a8s521f121518cbbc67@mail.gmail.com>
	<bbaeab100606270847p6b210a86t88c091835e4ff501@mail.gmail.com>
	<fb6fbf560606270909m66762515idbce785361dc4d4e@mail.gmail.com>
	<bbaeab100606270926v6cc9e7w28401dd54b4b6622@mail.gmail.com>
	<fb6fbf560606271006t3c542afld6064d738b7eaca@mail.gmail.com>
Message-ID: <bbaeab100606271029m8539221m9207414dcef3f5fa@mail.gmail.com>

On 6/27/06, Jim Jewett <jimjjewett at gmail.com> wrote:
>
> On 6/27/06, Brett Cannon <brett at python.org> wrote:
> > My worry with this is that by providing checking functions that just
> > return true or false that people will rely on those too much and have
> > logic errors in their check and let security holes develop.  That is
> > why the checking functions as they stand now are macros that do the
> > error return for you.
>
> Using a macro that returns an Error is OK.  (Well, from this
> perspective; it might be a problem for reference leaks.)


It shouldn't be, as long as you put the call right after the variable
declarations and you don't do any PyObject creation at variable
declaration time.

> I just want a single call that does my erroring out, instead of two
> separate calls depending on whether the interpreter is trusted.


Oh, you won't!  You have the set call before you even start using the
interpreter to define your restrictions; that has a return value to flag
that you are trying to set restrictions on a trusted interpreter, and thus
are trying to do something that just won't work.  Then you have the check
functions that run in *any* interpreter.  If you happen to be running in a
trusted interpreter, then they do nothing (basically a no-op) and allow
execution to continue.  But if you are running in an untrusted interpreter,
the check is performed.

Does that make sense?  When running code within an interpreter there is
no trusted/untrusted distinction when it comes to using the checking
functions; the distinction only exists outside the interpreter, before
you begin using it.


-Brett

From brett at python.org  Tue Jun 27 19:32:08 2006
From: brett at python.org (Brett Cannon)
Date: Tue, 27 Jun 2006 10:32:08 -0700
Subject: [Python-Dev] Is Lib/test/crashers/recursive_call.py really a
	crasher?
In-Reply-To: <2mirmmh5xq.fsf@starship.python.net>
References: <bbaeab100606270931v376c6fa3v653f2cfcd92c0880@mail.gmail.com>
	<2mirmmh5xq.fsf@starship.python.net>
Message-ID: <bbaeab100606271032n5de4a33fja8fbba88da33a3e2@mail.gmail.com>

On 6/27/06, Michael Hudson <mwh at python.net> wrote:
>
> "Brett Cannon" <brett at python.org> writes:
>
> > If you look at that crasher, you will notice that recursion depth is set
> > to 1 << 30 before any code is run.  If you remove that setting high
> > setting and go with the default then the test doesn't crash and
> > raises the appropriate RuntimeError.
> >
> > Setting the recursion depth to such a high number will crash the
> > interpreter even when the proper recursion checks are in place.  This
> > doesn't seem like a legit crasher to me if it requires an insane
> > recursion depth that would crash almost any C program that had
> > recursion in it.
> >
> > Anyone have any objections if I call foul on the test and remove it
> > without any changes to Python?
>
> Yes, it's still a way to crash Python :-) (in fact, a problem vaguely
> like this that made a complete test run segfault on 64-bit platforms
> was fixed in PyPy recently).
>
> More seriously, the recursion limit approach is IMHO something of a
> hack, as the amount of bytes of C stack in between increments is
> rather variable (try seeing how high you have to set the recursion
> limit to when the recursion invovles list.sort() compared to when it
> doesn't).  I don't have a fantastic idea for fixing this, but I quite
> like having some kind of reminder of it.



OK, with you and Thomas both wanting to keep it I will let it be.  I just
won't worry about fixing it myself during my interpreter hardening crusade.

-Brett

From jimjjewett at gmail.com  Tue Jun 27 19:42:20 2006
From: jimjjewett at gmail.com (Jim Jewett)
Date: Tue, 27 Jun 2006 13:42:20 -0400
Subject: [Python-Dev] School IIb?
Message-ID: <fb6fbf560606271042l5f8dce32s50dde33738ba034b@mail.gmail.com>

> On compilation, freeze any cases that meet the School-II conditions
> and have a trustworthy __hash__ method into a dictionary.

As long as the semantics are based on if-elif, you have to support

    if    (optimizable)
    elif (has a side effect)
    elif (optimizable)
    elif (not optimizable)
    elif (optimizable)
    elif (has a side effect)
    elif (optimizable)

where the four "optimizable" cases are actually in four separate dictionaries.

-jJ

From Scott.Daniels at Acm.Org  Tue Jun 27 19:53:45 2006
From: Scott.Daniels at Acm.Org (Scott David Daniels)
Date: Tue, 27 Jun 2006 10:53:45 -0700
Subject: [Python-Dev] doc for new restricted execution design for Python
In-Reply-To: <bbaeab100606270847p6b210a86t88c091835e4ff501@mail.gmail.com>
References: <fb6fbf560606270750l29c0c5a8s521f121518cbbc67@mail.gmail.com>
	<bbaeab100606270847p6b210a86t88c091835e4ff501@mail.gmail.com>
Message-ID: <e7rr9v$871$1@sea.gmane.org>

Brett Cannon wrote:
> On 6/27/06, Jim Jewett <jimjjewett at gmail.com> wrote:
>>  ...
>> Caps and current usage should also be available (though read-only)
>> from python; it is quite sensible to spill some cache when getting too
>> close to your memory limit.
> 
> Yeah, being able to read your restrictions seems reasonable to do from an
> untrusted interpreter.

Certainly in some cases I'd like to run a Python program that claims it
"plays nice" without its being able to see that it is in jail. Otherwise
I can't escalate my trust of the code based on old behavior (it might be
nice only when the jailer is around).  So, reading your restrictions is
a capability I'd like to be able to control.

-- Scott David Daniels
Scott.Daniels at Acm.Org


From brett at python.org  Tue Jun 27 20:06:30 2006
From: brett at python.org (Brett Cannon)
Date: Tue, 27 Jun 2006 11:06:30 -0700
Subject: [Python-Dev] doc for new restricted execution design for Python
In-Reply-To: <e7rr9v$871$1@sea.gmane.org>
References: <fb6fbf560606270750l29c0c5a8s521f121518cbbc67@mail.gmail.com>
	<bbaeab100606270847p6b210a86t88c091835e4ff501@mail.gmail.com>
	<e7rr9v$871$1@sea.gmane.org>
Message-ID: <bbaeab100606271106r5cc10b3en12141cf002a3b11e@mail.gmail.com>

On 6/27/06, Scott David Daniels <Scott.Daniels at acm.org> wrote:
>
> Brett Cannon wrote:
> > On 6/27/06, Jim Jewett <jimjjewett at gmail.com> wrote:
> >>  ...
> >> Caps and current usage should also be available (though read-only)
> >> from python; it is quite sensible to spill some cache when getting too
> >> close to your memory limit.
> >
> > Yeah, being able to read your restrictions seems reasonable to do
> > from an untrusted interpreter.
>
> Certainly in some cases I'd like to run a Python program that claims it
> "plays nice" without its being able to see that it is in jail. Otherwise
> I can't escalate my trust of the code based on old behavior (it might be
> nice only when the jailer is around).  So, reading your restrictions is
> a capability I'd like to be able to control.


Sounds reasonable.

-Brett

From jimjjewett at gmail.com  Tue Jun 27 20:07:40 2006
From: jimjjewett at gmail.com (Jim Jewett)
Date: Tue, 27 Jun 2006 14:07:40 -0400
Subject: [Python-Dev] doc for new restricted execution design for Python
In-Reply-To: <bbaeab100606271029m8539221m9207414dcef3f5fa@mail.gmail.com>
References: <fb6fbf560606270750l29c0c5a8s521f121518cbbc67@mail.gmail.com>
	<bbaeab100606270847p6b210a86t88c091835e4ff501@mail.gmail.com>
	<fb6fbf560606270909m66762515idbce785361dc4d4e@mail.gmail.com>
	<bbaeab100606270926v6cc9e7w28401dd54b4b6622@mail.gmail.com>
	<fb6fbf560606271006t3c542afld6064d738b7eaca@mail.gmail.com>
	<bbaeab100606271029m8539221m9207414dcef3f5fa@mail.gmail.com>
Message-ID: <fb6fbf560606271107t2dca99dq4817792952e6df9c@mail.gmail.com>

On 6/27/06, Brett Cannon <brett at python.org> wrote:
> On 6/27/06, Jim Jewett <jimjjewett at gmail.com> wrote:
>
> > On 6/27/06, Brett Cannon <brett at python.org> wrote:

> Shouldn't be as long as you put the call right after variable declarations
> and you don't do an PyObject creation at variable declaration time.

When PEPping this, please add that restriction to the Extension Module
Crippling section.

> > I just want a single call that does my erroring out, instead of two
> > separate calls depending on whether the interpreter is trusted.

> Oh, you won't!  You have the set call before you even start using the
> interpreter to define your restrictions; that has a return value to flag
> that you are trying to set restrictions on a trusted interpreter, and thus
> are trying to do somethign that just won't work.  Then you have the check
> functions that run in *any* interpreter.

This is what I was missing -- the bit about who uses which part of the API.

Is the following correct:


Py_XXXCheck* and Py_XXXExtendedCheck* are called by C extension
modules.  They error out of the current function if the action would
not be allowed.  (In the special case of a fully trusted function,
they happen to compile themselves out.)

There may be some Py_XXXInfo functions added to find out what the
limits are, particularly for python code.

Py_XXXTrusted() should really be renamed Py_XXXCheckTrusted().
Crippled extension modules should really use Py_XXXCheck*, but
PyXXXCheckTrusted is a quick way to get all-or-nothing.

No other PyXXX functions should ever be (directly) called by any
loadable module, not even by C extension modules; they are called only
by an embedding program.

-jJ

From alexander.belopolsky at gmail.com  Tue Jun 27 20:09:19 2006
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Tue, 27 Jun 2006 14:09:19 -0400
Subject: [Python-Dev] Proposal to eliminate PySet_Fini
Message-ID: <d38f5330606271109s39f64022w53261832cd17c6b@mail.gmail.com>

Setobject code allocates several internal objects on the heap that are
cleaned up by the PySet_Fini function.  This is a fine design choice,
but it often makes debugging applications with embedded python more
difficult.

I propose to eliminate the need for PySet_Fini as follows:

1. Make dummy and emptyfrozenset static objects similar to Py_None
2. Eliminate the free sets reuse scheme.

The second proposal is probably more controversial, but is there any
real benefit from that scheme when pymalloc is enabled?

From brett at python.org  Tue Jun 27 20:11:50 2006
From: brett at python.org (Brett Cannon)
Date: Tue, 27 Jun 2006 11:11:50 -0700
Subject: [Python-Dev] doc for new restricted execution design for Python
In-Reply-To: <fb6fbf560606271107t2dca99dq4817792952e6df9c@mail.gmail.com>
References: <fb6fbf560606270750l29c0c5a8s521f121518cbbc67@mail.gmail.com>
	<bbaeab100606270847p6b210a86t88c091835e4ff501@mail.gmail.com>
	<fb6fbf560606270909m66762515idbce785361dc4d4e@mail.gmail.com>
	<bbaeab100606270926v6cc9e7w28401dd54b4b6622@mail.gmail.com>
	<fb6fbf560606271006t3c542afld6064d738b7eaca@mail.gmail.com>
	<bbaeab100606271029m8539221m9207414dcef3f5fa@mail.gmail.com>
	<fb6fbf560606271107t2dca99dq4817792952e6df9c@mail.gmail.com>
Message-ID: <bbaeab100606271111s3e8cce12i3f061f8846ea7aae@mail.gmail.com>

On 6/27/06, Jim Jewett <jimjjewett at gmail.com> wrote:
>
> On 6/27/06, Brett Cannon <brett at python.org> wrote:
> > On 6/27/06, Jim Jewett <jimjjewett at gmail.com> wrote:
> >
> > > On 6/27/06, Brett Cannon <brett at python.org> wrote:
>
> > Shouldn't be as long as you put the call right after variable
> > declarations and you don't do an PyObject creation at variable
> > declaration time.
>
> When PEPping this, please add that restriction to the Extension Module
> Crippling section.


Sure.

> > > I just want a single call that does my erroring out, instead of two
> > > separate calls depending on whether the interpreter is trusted.
>
> > Oh, you won't!  You have the set call before you even start using the
> > interpreter to define your restrictions; that has a return value to flag
> > that you are trying to set restrictions on a trusted interpreter, and
> > thus are trying to do somethign that just won't work.  Then you have
> > the check functions that run in *any* interpreter.
>
> This is what I was missing -- the bit about who uses which part of the
> API.
>
> Is the following correct:
>
>
> Py_XXXCheck* and Py_XXXExtendedCheck* are called by C extension
> modules.  They error out of the current function if the action would
> not be allowed.  (In the special case of of a fully trusted function,
> the happen to compile themselves out.)


They don't compile themselves out unless you didn't compile the
functionality in at all, but yes, that's right.

> There may be some Py_XXXInfo functions added to find out what the
> limits are, particularly for python code.


Yep.  Once the C API is settled equivalents at the Python level will be
dealt with.

> Py_XXXTrusted() should really be renamed Py_XXXCheckTrusted().
> Crippled extension modules should really use Py_XXXCheck*, but
> PyXXXCheckTrusted is a quick way to get all-or-nothing.


Rename seems reasonable.  And yes, that is the right idea of usage.

> No other PyXXX functions should ever be (directly) called by any
> loadable module, not even by C extension modules; they are called only
> by an embedding program.


Yep.

I think I will try to add a paragraph at the top using pseudocode, showing
typical usage.

-Brett

From amk at amk.ca  Tue Jun 27 20:47:51 2006
From: amk at amk.ca (A.M. Kuchling)
Date: Tue, 27 Jun 2006 14:47:51 -0400
Subject: [Python-Dev] Do we need a bug triage day?
Message-ID: <20060627184751.GA6679@localhost.localdomain>

Do we need a sort of mini bug-day to look at the outstanding bugs and
note ones that absolutely need to be fixed before 2.5final?  Or has
someone already done this?

--amk


From rrr at ronadam.com  Tue Jun 27 20:54:09 2006
From: rrr at ronadam.com (Ron Adam)
Date: Tue, 27 Jun 2006 13:54:09 -0500
Subject: [Python-Dev] PEP 3103: A Switch/Case Statement
In-Reply-To: <ca471dc20606271020s1ee6d48q324a0d0ef84e096e@mail.gmail.com>
References: <ca471dc20606261223g55126e1fg457c30049dbbf34b@mail.gmail.com>	
	<44A13461.508@gmail.com> <44A165F1.6080807@ronadam.com>
	<ca471dc20606271020s1ee6d48q324a0d0ef84e096e@mail.gmail.com>
Message-ID: <44A17ED1.8020404@ronadam.com>

Guido van Rossum wrote:
> On 6/27/06, Ron Adam <rrr at ronadam.com> wrote:
>> I use dict base dispatching in a number of my programs and like it with
>> the exception I need to first define all the code in functions (or use
>> lambda) even if they are only one line.  So it results in a three step
>> process, define functions,  define dict,  and then call it.  And I need
>> to make sure all the function calls use the same calling signature. In
>> some cases I'm passing variables that one function doesn't need because
>> it is needed in one of the other cases.
>>
>> So modeling the switch after dictionary dispatching more directly where
>> the switch is explicitly defined first and then used later might be good
>> both because it offers reuse in the current scope and it can easily be
>> used in code that currently uses dict style dispatching.
>>
>>     switch name:
>>        1:
>>           ...
>>        TWO:
>>           ...
>>        'a', 'b', 'c':
>>           ...
>>        in range(5,10):
>>           ...
>>        else:
>>           ...
>>
>>     for choice in data:
>>        do choice in name:    # best calling form I can think of.
> 
> It looks like your proposal is to change switch into a command that
> defines a function of one parameter. Instead of the "do <expression>
> in <switch>" call you could just call the switch -- no new syntax
> needed. Your example above would be
> 
>  for choice in data:
>    name(choice)          # 'name' is the switch's name

I thought of using a function call so it would be more like using a 
generator, but I ruled it out because it creates a new scope, and I 
think closures may complicate it, or it would require also passing in 
all the names needed for each case, which would get old quickly if it 
were required every time.  That is one of the things I want to be able 
to avoid in dict-based dispatching for cases with only one or two 
lines of code.

So my intent was that it use the local scope and not use a function 
call signature, which implies a new scope to the reader and a returned 
value; thus the 'do choice in name' calling form.  No returned value 
is needed because it has full access to the local name space.

for example you wouldn't write...

    return if x: 42 else: 84

but would instead...

    if x:
      y = 42
    else:
      y = 84
    return y


The 'do' is used in the same context as an 'if' is used.

    switch a:
      True: y=42
      else: y=84

    do x in a:
    return y



> However, early on in the switch discussion it was agreed that switch,
> like if/elif, should  not create a new scope; it should just be a
> control flow statement sharing the surrounding scope. The switch as
> function definition would require the use of globals.
> 
> Also, it would make sense if a switch could be a method instead of a 
> function.

There's no reason why it couldn't be put "in" a method.  If the switch 
uses the surrounding name space you have that flexibility.  I'm not 
sure whether the switch definition could be put in the body of a class 
with the do's in a method.  That would be like having an if in the 
body of the class and its else in a method, so I would think it 
wouldn't be allowed.  So they both would need to be in the same name 
space, and the switch will always need to be defined before the 'do' 
is executed.

> I realize that by proposing a new invocation syntax (do ... in ...)
> you might have intended some other kind of interaction between the
> switch and the surrounding scope. but exactly what you're proposing
> isn't very clear from your examples, since you don't have any example
> code in the case suites, just "...".

What I intended would probably be more closely related to constructing 
a switch with BASIC's gosub command.


    one:              # in basic these do not have their own scope
      print 'one'
      return          # return from subroutine not function here

    two:
      print 'two'
      return

    three:
      print 'three'
      return

    data = ('one', 'two', 'three')
    for choice in data:
        if choice == 'one': gosub one
        elif choice == 'two': gosub two
        elif choice == 'three': gosub three


Which would be better expressed as..


    switch choices:
        'one':  print 'one'
        'two':  print 'two'
        'three':  print 'three'

    for choice in ('one', 'two', 'three'):
        do choice in choices

Each case label expression would be evaluated when the switch block is 
executed, i.e. in the order it appears in the program, but the code 
for each case would be skipped until a (do choice in choices) line.  
Each switch case block would not fall through, but would return to the 
next line after the 'do' line by default.

The whole thing could be put in a separate function or method if it's 
desired to get the single function call form you suggested along with a 
separate name space.

    def switcher(choice):
        switcher roo:
           1: a = 42
           42: a = 1
           else: raise ValueError

        do choice in switcher:
        return a

    switcher(1)   ->   42
    switcher(42)  ->   1
    switcher(100) ->   raises exception
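
For comparison, a plain-dict sketch of the same behaviour as it would be 
written today (the case values are evaluated when the dict literal runs):

    def switcher(choice):
        roo = {1: 42,
               42: 1}
        try:
            return roo[choice]
        except KeyError:
            raise ValueError

    print switcher(1)      # -> 42
    print switcher(42)     # -> 1
    switcher(100)          # -> raises ValueError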


Cheers,
    Ron

From guido at python.org  Tue Jun 27 21:11:50 2006
From: guido at python.org (Guido van Rossum)
Date: Tue, 27 Jun 2006 12:11:50 -0700
Subject: [Python-Dev] PEP 3103: A Switch/Case Statement
In-Reply-To: <44A17ED1.8020404@ronadam.com>
References: <ca471dc20606261223g55126e1fg457c30049dbbf34b@mail.gmail.com>
	<44A13461.508@gmail.com> <44A165F1.6080807@ronadam.com>
	<ca471dc20606271020s1ee6d48q324a0d0ef84e096e@mail.gmail.com>
	<44A17ED1.8020404@ronadam.com>
Message-ID: <ca471dc20606271211m7a4f48e6rb28a0e1068d7106@mail.gmail.com>

On 6/27/06, Ron Adam <rrr at ronadam.com> wrote:
> Guido van Rossum wrote:
> > It looks like your proposal is to change switch into a command that
> > defines a function of one parameter. Instead of the "do <expression>
> > in <switch>" call you could just call the switch -- no new syntax
> > needed. Your example above would be
> >
> >  for choice in data:
> >    name(choice)          # 'name' is the switch's name
>
> I thought of using a function call so it would be more like using a
> generator, but also ruled it out because it does create a new scope and
> I think closures may complicate it or it would require also passing all
> the names needed for each case which would get old quick if it is
> required every time.  One of the things I want to be able to avoid in
> dict based dispatching for cases with only one or two lines of code.
>
> So my intent was that it use the local scope and not use the function
> call signature which implies a new scope to the reader and a returned
> value, thus the 'do choice in name' calling form.  No returned value is
> needed because it has full access to local name space.
>
> for example you wouldn't write...
>
>     return if x: 42 else: 84
>
> but would instead...
>
>     if x:
>       y = 42
>     else:
>       y = 84
>     return y
>
>
> The 'do' is used in the same context an 'if' is used.
>
>     switch a:
>       True: y=42
>       else: y=84
>
>     do x in a:
>     return y

Ah, I see.

> > However, early on in the switch discussion it was agreed that switch,
> > like if/elif, should  not create a new scope; it should just be a
> > control flow statement sharing the surrounding scope. The switch as
> > function definition would require the use of globals.
> >
> > Also, it would make sense if a switch could be a method instead of a
> > function.
>
> There's no reason why it couldn't be put "in" a method.  If the switch
> uses the surrounding name space you have that flexibility.  I'm not sure
> if the select definition could be put in the body of a class and have
> the do's in a method. That would be like having an if in the body of the
> class and the else to it in a method, so I would think it wouldn't be
> allowed.  So they both would need to be in the same name space and the
> select will always need to be defined before the 'do' is executed.
>
> > I realize that by proposing a new invocation syntax (do ... in ...)
> > you might have intended some other kind of interaction between the
> > switch and the surrounding scope. but exactly what you're proposing
> > isn't very clear from your examples, since you don't have any example
> > code in the case suites, just "...".
>
> What was intended probably would be more closely related to constructing
> a switch with BASICS gosub command.

I understand now.

But I have a question: if I write

  for i in range(10):
    switch S:
      case i: print 42

(i.e. the switch is *inside* the for loop) does the switch get defined
10 times (with 10 different case values!) or not?

>     one:              # in basic these do not have their own scope
>       print 'one'
>       return          # return from subroutine not function here
>
>     two:
>       print 'two'
>       return
>
>     three:
>       print 'three'
>       return
>
>     data = ('one', 'two', 'three')
>     for choice in data:
>         if choice == 'one': gosub one
>         elif choice == 'two': gosub two
>         elif choice == 'three': gosub three
>
>
> Which would be better expressed as..
>
>
>     switch choices:
>         'one':  print 'one'
>         'two':  print 'two'
>         'three':  print 'three'
>
>     for choice in ('one', 'two', 'three'):
>         do choice in choices

I'm not sure I like the idea of using BASIC as a way to explain Python
functionality... :-)

> Each case label expression would be evaluated when the switch block is
> executed, ie... in order it appears in the program, but the code for
> each case would be skipped until a (do choice in choices) line. Each
> switch case block would not fall through but return to the next line
> after the 'do' line by default.
>
> The whole thing could be put in a separate function or method if it's
> desired to get the single function call form you suggested along with a
> separate name space.
>
>     def switcher(choice):
>         switcher roo:
>            1: a = 42
>            42: a = 1
>            else: raise ValueError
>
>         do choice in switcher:
>         return a
>
>     switcher(1)   ->   42
>     switcher(42)  ->   1
>     switcher(100) ->   raises exception

I'm still unclear on when you propose the case expressions to be
evaluated. Each time the "switch" statement is encountered? That would
be the most natural given the rest of your explanation. But then a
switch inside a function that references globally defined constants
would be re-evaluated each time the function is called; much of the
discussion here is focused on trying to reduce the number of times the
switch cases are evaluated to once per program invocation or once per
function *definition*.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From jimjjewett at gmail.com  Tue Jun 27 21:21:46 2006
From: jimjjewett at gmail.com (Jim Jewett)
Date: Tue, 27 Jun 2006 15:21:46 -0400
Subject: [Python-Dev] Switch statement - handling errors
Message-ID: <fb6fbf560606271221n43c80b3bx98c125a4e77305d7@mail.gmail.com>

On 6/26/06, Guido van Rossum <guido at python.org> wrote:
> I like Python's rules to be simple, and I
> prefer to occasionally close off a potential optimization path in the
> sake of simplicity.

(Almost) everyone agrees that the case expressions SHOULD be run-time
constants.  The disagreements are largely over what to do when this
gets violated.


Bad Case Option (1) -- Accept everything
----------------------------------------

Readability is everything.  Switch/case tells you that every branch is
using similar predicates on the same variable.  If that variable or
predicate can't be easily optimized, then so what -- it is still
better to read.  (Largely School Ia)


Bad Case Option (2) -- Accept very little
-----------------------------------------

Enforce good case expressions.  (Raymond's proposal)  This does the
right thing when it works, but it is initially very restricted -- and
a crippled case statement may be worse than none at all.


Bad Case Option (3) -- Raise Exceptions
---------------------------------------

Flag bugs early.  The semantics require non-overlapping hashable
constants, so raise an exception if this gets violated.  This does the
right thing, but catching all the violations in a timely manner is
hard.

Freezing at first use is too late for a good exception, but any
earlier has surprising restrictions.

There is no good way to realize that a "constant" has changed after the freeze.


Bad Case Option (4) -- Ignore problems
--------------------------------------

This is for optimization; go ahead and ignore any problems you can.
Maybe that branch will never be taken...  Ironically, this is also
largely school I, since it matches the if semantics.


Bad Case Option (5) -- ad hoc mixture
-------------------------------------

Pick an arbitrary set of rules, and follow it.

Guido is currently leaning towards this, with the rules being "freeze
at definition", raise for unhashable, ignore later changes, undecided
on overlapping ranges.

The disadvantage is that people can "cheat" with non-constant
expressions.  Sometimes, this will work out great.  Sometimes it will
lead to nasty non-localized bugs.  We have to explain exactly which
cheats are allowed, and that explanation could get byzantine.


Bad Case Option (6) -- Undefined
--------------------------------

Undefined behavior.  We don't yet know which strategy (or mix of
strategies) is best.  So don't lock ourselves (and Jython, and PyPy,
and IronPython, and ShedSkin, and ...) into the wrong strategy.

The down side is that people may start to count on the actual behavior
anyhow; then (in practice) we might just have Bad Case Option (5)
without documentation.

-jJ

From robinbryce at gmail.com  Tue Jun 27 21:30:16 2006
From: robinbryce at gmail.com (Robin Bryce)
Date: Tue, 27 Jun 2006 20:30:16 +0100
Subject: [Python-Dev] PEP 3103: A Switch/Case Statement
In-Reply-To: <ca471dc20606270813m60619473u63b5c20ad2a60492@mail.gmail.com>
References: <ca471dc20606261223g55126e1fg457c30049dbbf34b@mail.gmail.com>
	<Pine.LNX.4.58.0606261438120.17937@server1.LFW.org>
	<ca471dc20606261306l68be4aa9m867efab4403362dc@mail.gmail.com>
	<Pine.LNX.4.58.0606261521270.17937@server1.LFW.org>
	<ca471dc20606261457x423bfa23y120d1c2c61388110@mail.gmail.com>
	<e7qfls$fr9$1@sea.gmane.org>
	<ca471dc20606262237y18854bfbvbbd0793d0c8745f5@mail.gmail.com>
	<bcf87d920606270142x7a3ef0ddx74959931260c80f8@mail.gmail.com>
	<ca471dc20606270813m60619473u63b5c20ad2a60492@mail.gmail.com>
Message-ID: <bcf87d920606271230y182d34e5h76d4e3781da41761@mail.gmail.com>

> But what's the point? We have until Python 3000 anyway.
Ah, my mistake. In my enthusiasm, I foolishly got the time frames of
peps 3103 & 275 mixed up.

From kd5bjo at gmail.com  Tue Jun 27 22:36:43 2006
From: kd5bjo at gmail.com (Eric Sumner)
Date: Tue, 27 Jun 2006 15:36:43 -0500
Subject: [Python-Dev] Split switch statement
Message-ID: <eaaf21dc0606271336r1599a9c2x6be3f6a242b5fcd5@mail.gmail.com>

One of the big problems here seems to be that an optimized switch
statement requires some code to be evaluated fewer times than the rest
of the switch statement, and there isn't any good way to make that
happen with a single statement.  Thus, I propose two statements: a
'dispatcher' statement and a 'switch' statement.  The dispatcher
statement defines the available cases, and generates a dispatcher
object, and the switch statement specifies code to be run for each
case.

---------

#Sample 1: Primary dispatcher syntax
dispatcher myEvents on e:
    on e:
         case None: Idle
     on getattr(e, 'type', None):
         from globalEvents: *
         case 42: myEvent      # or whatever the user event code is
#EOF

A dispatcher statement contains a sequence of 'on' blocks.  Each 'on'
block specifies an expression and a set of cases.  The expression is
stored as a lambda inside the dispatcher and is applied whenever the
switch is run.  Inside an 'on' block there are two kinds of
statements: 'case' evaluates its expression immediately and
associates it with a label; 'from' imports tests and labels from
another dispatcher.  If the result of any case expression is
unhashable, an exception is raised.

----------

#Sample 2: Shorthand dispatcher syntax
dispatcher globalEvents:
    case pygame.KEYDOWN: KEYDOWN
    case pygame.KEYUP:   KEYUP
    ...
#EOF

Because dispatching on the switched value directly is so common, any
'from' or 'case' statements outside an 'on' block are treated as if
they were inside an "on <switched_value>" block.  The name for the
switched value can be omitted if it's not needed.

----------
#Sample 3: Using a dispatcher
while True:
    ...
    switch events on pygame.event.poll():
    case KEYUP, KEYDOWN: ...
    case myEvent: ...
    case Idle: ...
    else: ...
#EOF

Internally, each switch statement has some unique identifier.  Each
dispatcher object maintains a list of the switch statements it has
previously serviced.  If this switch statement is new to this
dispatcher, the dispatcher verifies that it can generate all of the
cases that are specified in the switch; if it cannot, an exception is
raised.

If the test passes (or was skipped due to previous experience), each
of the 'on' expressions in the dispatcher is executed (in order) and
their results are checked against the stored values.  If no case (from
the switch, not the dispatcher) matches, the switch's 'else' block is
executed, if present.  If more than one case (from the switch)
matches, an exception is raised.  Otherwise, the code from the
associated case block is executed.
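
A rough emulation of that split in current Python, just to make the
intended division of labour concrete (all names here are invented and
no attempt is made to match the proposal exactly):

    class Dispatcher(object):
        def __init__(self):
            self.tests = []    # list of (selector, {case_value: label})

        def on(self, selector, cases):
            self.tests.append((selector, dict(cases)))

        def dispatch(self, value, handlers, default=None):
            # 'handlers' plays the role of the switch statement: it maps
            # labels to the per-case code
            matches = []
            for selector, cases in self.tests:
                label = cases.get(selector(value))
                if label is not None and label in handlers:
                    matches.append(handlers[label])
            if len(matches) > 1:
                raise ValueError("more than one case matched")
            if matches:
                return matches[0]()
            if default is not None:
                return default()

    events = Dispatcher()
    events.on(lambda e: e, {None: 'Idle'})
    events.on(lambda e: getattr(e, 'type', None), {42: 'myEvent'})

    print events.dispatch(None, {'Idle': lambda: 'idle',
                                 'myEvent': lambda: 'user event'})   # -> idle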

  -- Eric Sumner

PS. Yes, I know that's not how pygame handles idle events; it makes a
better sample this way.

From rrr at ronadam.com  Wed Jun 28 00:06:52 2006
From: rrr at ronadam.com (Ron Adam)
Date: Tue, 27 Jun 2006 17:06:52 -0500
Subject: [Python-Dev] PEP 3103: A Switch/Case Statement
In-Reply-To: <ca471dc20606271211m7a4f48e6rb28a0e1068d7106@mail.gmail.com>
References: <ca471dc20606261223g55126e1fg457c30049dbbf34b@mail.gmail.com>	
	<44A13461.508@gmail.com> <44A165F1.6080807@ronadam.com>	
	<ca471dc20606271020s1ee6d48q324a0d0ef84e096e@mail.gmail.com>	
	<44A17ED1.8020404@ronadam.com>
	<ca471dc20606271211m7a4f48e6rb28a0e1068d7106@mail.gmail.com>
Message-ID: <44A1ABFC.7070108@ronadam.com>

Guido van Rossum wrote:

>> What was intended probably would be more closely related to constructing
>> a switch with BASICS gosub command.
> 
> I understand now.
> 
> But I have a question: if I write
> 
>  for i in range(10):
>    switch S:
>      case i: print 42
> 
> (i.e. the switch is *inside* the for loop) does the switch get defined
> 10 times (with 10 different case values!) or not?

In this instance the switch would be redefined 10 times.  The ending 
switch would be:

    switch S:
       case 10: print 42


The static keyword could be used with this form as well to force 
define-time evaluation (see the last example).



> I'm not sure I like the idea of using BASIC as a way to explain Python
> functionality... :-)

Yes, I agree! :-)

Fortunately, once (and if) it's defined (whatever it turns out to be), 
Python examples can be used to explain Python.  ;-)


>> Each case label expression would be evaluated when the switch block is
>> executed, ie... in order it appears in the program, but the code for
>> each case would be skipped until a (do choice in choices) line. Each
>> switch case block would not fall through but return to the next line
>> after the 'do' line by default.
>>
>> The whole thing could be put in a separate function or method if it's
>> desired to get the single function call form you suggested along with a
>> separate name space.
>>
>>     def switcher(choice):
>>         switcher roo:
>>            1: a = 42
>>            42: a = 1
>>            else: raise ValueError
>>
>>         do choice in switcher:
>>         return a
>>
>>     switcher(1)   ->   42
>>     switcher(42)  ->   1
>>     switcher(100) ->   raises exception
> 
> I'm still unclear on when you propose the case expressions to be
> evaluated. Each time the "switch" statement is encountered? That would
> be the most natural given the rest of your explanation. But then a
> switch inside a function that references globally defined constants
> would be re-evalulated each time the function is called; much of the
> discussion here is focused on trying to reduce the number of times the
> switch cases are evaluated to once per program invocation or once per
> function *definition*.

Each time the 'switch' statement is encountered for the above.


Allowing static to be used with this form could be an option to force 
function-definition-time evaluation of the cases.

     ONE = 1
     FOURTYTWO = 42

     def switcher(choice):
        static switcher roo:   # evaluate cases at function def time.
           ONE: a = 42
           FOURTYTWO: a = 1
           else: raise ValueError

        do choice in switcher:
           return a

     switcher(1)   ->   42
     switcher(42)  ->   1
     switcher(100) ->   raises exception


That would give you both call-time and def-time evaluation of cases, 
with clear behavior for both (I think).
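
That def-time flavour can be approximated today by capturing the case
table in a default argument (a sketch, not the proposed syntax):

    ONE = 1
    FOURTYTWO = 42

    def switcher(choice, _table={ONE: 42, FOURTYTWO: 1}):
        # the dict literal (and hence ONE and FOURTYTWO) is evaluated
        # once, when the def statement runs, much like the 'static'
        # form above
        try:
            return _table[choice]
        except KeyError:
            raise ValueError

    print switcher(1)      # -> 42
    print switcher(42)     # -> 1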

Also, since the switch has a name, it might be possible to examine its 
case values with dir(switch_suite_name).  That might help in debugging 
and in explaining the behavior in different situations.


The 'case' keyword could be left off in this form because the switch 
body is always a suite.  I find it more readable without it in the case 
of simple literals or named values, and more readable with it for more 
complex expressions.


I don't think I can clarify this further without getting in over my 
head; I probably am a bit already.  I'm in the "know enough to get 
myself in trouble (at times)" group. ;-)

Ron



From rrr at ronadam.com  Wed Jun 28 00:10:49 2006
From: rrr at ronadam.com (Ron Adam)
Date: Tue, 27 Jun 2006 17:10:49 -0500
Subject: [Python-Dev] PEP 3103: A Switch/Case Statement
In-Reply-To: <44A1ABFC.7070108@ronadam.com>
References: <ca471dc20606261223g55126e1fg457c30049dbbf34b@mail.gmail.com>		<44A13461.508@gmail.com>
	<44A165F1.6080807@ronadam.com>		<ca471dc20606271020s1ee6d48q324a0d0ef84e096e@mail.gmail.com>		<44A17ED1.8020404@ronadam.com>	<ca471dc20606271211m7a4f48e6rb28a0e1068d7106@mail.gmail.com>
	<44A1ABFC.7070108@ronadam.com>
Message-ID: <44A1ACE9.1000007@ronadam.com>

Ron Adam wrote:

> In this instance the switch would be redefined 10 times.  The ending 
> switch would be:
> 
>     switch S:
>        case 10: print 42

Silly mistake correction...  :)

       switch S:
          case 9: print 42


From guido at python.org  Wed Jun 28 01:05:19 2006
From: guido at python.org (Guido van Rossum)
Date: Tue, 27 Jun 2006 16:05:19 -0700
Subject: [Python-Dev] Switch statement - handling errors
In-Reply-To: <fb6fbf560606271221n43c80b3bx98c125a4e77305d7@mail.gmail.com>
References: <fb6fbf560606271221n43c80b3bx98c125a4e77305d7@mail.gmail.com>
Message-ID: <ca471dc20606271605v24d86385ma8f87b4a02e7c400@mail.gmail.com>

On 6/27/06, Jim Jewett <jimjjewett at gmail.com> wrote:
> On 6/26/06, Guido van Rossum <guido at python.org> wrote:
> > I like Python's rules to be simple, and I
> > prefer to occasionally close off a potential optimization path in the
> > sake of simplicity.
>
> (Almost) everyone agrees that the case expressions SHOULD be run-time
> constants.  The disagreements are largely over what to do when this
> gets violated.

Thanks for the elaboration. I'm not sure how to respond except to
correct your representation of my position:

> Bad Case Option (1) -- Accept everything
> ----------------------------------------
>
> Readability is everything.  Switch/case tells you that every branch is
> using similar predicates on the same variable.  If that variable or
> predicate can't be easily optimized, then so what -- it is still
> better to read.  (Largely School Ia)
>
>
> Bad Case Option (2) -- Accept very little
> -----------------------------------------
>
> Enforce good case expressions.  (Raymond's proposal)  This does the
> right thing when it works, but it is initially very restricted -- and
> a crippled case statement may be worse than none at all.
>
>
> Bad Case Option (3) -- Raise Exceptions
> ---------------------------------------
>
> Flag bugs early.  The semantics require non-overlapping hashable
> constants, so raise an exception if this gets violated.  This does the
> right thing, but catching all the violations in a timely manner is
> hard.
>
> Freezing at first use is too late for a good exception, but any
> earlier has surprising restrictions.
>
> There is no good way to realize that a "constant" has changed after the freeze.
>
>
> Bad Case Option (4) -- Ignore problems
> --------------------------------------
>
> This is for optimization; go ahead and ignore any problems you can.
> Maybe that branch will never be taken...  Ironically, this is also
> largely school I, since it matches the if semantics.
>
>
> Bad Case Option (5) -- ad hoc mixture
> -------------------------------------
>
> Pick an arbitrary set of rules, and follow it.
>
> Guido is currently leaning towards this, with the rules being "freeze
> at definition", raise for unhashable, ignore later changes, undecided
> on overlapping ranges.

Actually I'm all for flagging overlapping changes as errors when the
dict is frozen.

> The disadvantage is that people can "cheat" with non-constant
> expressions.  Sometimes, this will work out great.  Sometimes it will
> lead to nasty non-localized bugs.  We have to explain exactly which
> cheats are allowed, and that explanation could get byzantine.

Actually I would simply explain that all cheats are frowned upon, just
like all side effects in case expressions.

>
>
> Bad Case Option (6) -- Undefined
> --------------------------------
>
> Undefined behavior.  We don't yet know which strategy (or mix of
> strategies) is best.  So don't lock ourselves (and Jython, and PyPy,
> and IronPython, and ShedSkin, and ...) into the wrong strategy.
>
> The down side is that people may start to count on the actual behavior
> anyhow; then (in practice) we might just have Bad Case Option (5)
> without documentation.
>
> -jJ
>


-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From greg.ewing at canterbury.ac.nz  Wed Jun 28 02:02:54 2006
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Wed, 28 Jun 2006 12:02:54 +1200
Subject: [Python-Dev] Semantic of isinstance
In-Reply-To: <44A12A26.9070009@v.loewis.de>
References: <5C0A6F919D675745BB1DBA7412DB68F5053847B370@df-foxhound-msg.exchange.corp.microsoft.com>
	<44A12A26.9070009@v.loewis.de>
Message-ID: <44A1C72E.4060901@canterbury.ac.nz>

Martin v. Löwis wrote:

> Again, I believe this is all included for ExtensionClasses: it looks
> for __class__ on the object if the type check fails, so that an
> ExtensionClass could be actually a class derived from the C type.

Now that we have had new-style classes for quite a
while, is there still a need to support ExtensionClasses?

--
Greg

From greg.ewing at canterbury.ac.nz  Wed Jun 28 02:23:03 2006
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Wed, 28 Jun 2006 12:23:03 +1200
Subject: [Python-Dev] PEP 328 and PEP 338, redux
In-Reply-To: <ca471dc20606270808p4fe32945lf6019005bc3b054f@mail.gmail.com>
References: <44A11EA1.1000605@iinet.net.au>
	<ca471dc20606270808p4fe32945lf6019005bc3b054f@mail.gmail.com>
Message-ID: <44A1CBE7.10802@canterbury.ac.nz>

Guido van Rossum wrote:

> Bad idea IMO. The __name__ == "__main__" rule is so ingrained, you
> don't want to mess with it.

It would only make a difference for main modules inside
packages. Wouldn't that be fairly rare? The vast majority
of existing __name__ == "__main__" uses ought to be
unaffected.

--
Greg

From greg.ewing at canterbury.ac.nz  Wed Jun 28 02:33:14 2006
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Wed, 28 Jun 2006 12:33:14 +1200
Subject: [Python-Dev] PEP 328 and PEP 338, redux
In-Reply-To: <033001c69a07$44140890$d503030a@trilan>
References: <44A11EA1.1000605@iinet.net.au> <44A11EA1.1000605@iinet.net.au>
	<5.1.1.6.0.20060627120926.02021fe0@sparrow.telecommunity.com>
	<033001c69a07$44140890$d503030a@trilan>
Message-ID: <44A1CE4A.2000900@canterbury.ac.nz>

Giovanni Bajo wrote:

> This is where I wonder why the "def __main__()" PEP was rejected in the
> first place. It would have solved this problem as well.

Could this be reconsidered for Py3k?

--
Greg

From pje at telecommunity.com  Wed Jun 28 02:41:08 2006
From: pje at telecommunity.com (Phillip J. Eby)
Date: Tue, 27 Jun 2006 20:41:08 -0400
Subject: [Python-Dev] Semantic of isinstance
In-Reply-To: <44A1C72E.4060901@canterbury.ac.nz>
References: <44A12A26.9070009@v.loewis.de>
	<5C0A6F919D675745BB1DBA7412DB68F5053847B370@df-foxhound-msg.exchange.corp.microsoft.com>
	<44A12A26.9070009@v.loewis.de>
Message-ID: <5.1.1.6.0.20060627203833.01ed6950@sparrow.telecommunity.com>

At 12:02 PM 6/28/2006 +1200, Greg Ewing wrote:
>Martin v. Löwis wrote:
>
> > Again, I believe this is all included for ExtensionClasses: it looks
> > for __class__ on the object if the type check fails, so that an
> > ExtensionClass could be actually a class derived from the C type.
>
>Now that we have had new-style classes for quite a
>while, is there still a need to support ExtensionClasses?

That's the wrong question.  The right question is, "is there a need to 
support isinstance() for proxy objects?" and the answer is yes.

As far as I know, nobody has proposed to change this behavior of 
isinstance(), nor even suggested a reason for doing so.


From guido at python.org  Wed Jun 28 03:46:45 2006
From: guido at python.org (Guido van Rossum)
Date: Tue, 27 Jun 2006 18:46:45 -0700
Subject: [Python-Dev] PEP 328 and PEP 338, redux
In-Reply-To: <44A1CE4A.2000900@canterbury.ac.nz>
References: <44A11EA1.1000605@iinet.net.au>
	<5.1.1.6.0.20060627120926.02021fe0@sparrow.telecommunity.com>
	<033001c69a07$44140890$d503030a@trilan>
	<44A1CE4A.2000900@canterbury.ac.nz>
Message-ID: <ca471dc20606271846l2a0de4fevf667c60dec004039@mail.gmail.com>

On 6/27/06, Greg Ewing <greg.ewing at canterbury.ac.nz> wrote:
> Giovanni Bajo wrote:
>
> > This is where I wonder why the "def __main__()" PEP was rejected in the
> > first place. It would have solved this problem as well.
>
> Could this be reconsidered for Py3k?

You have a point.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From janssen at parc.com  Wed Jun 28 04:01:24 2006
From: janssen at parc.com (Bill Janssen)
Date: Tue, 27 Jun 2006 19:01:24 PDT
Subject: [Python-Dev] doc for new restricted execution design for Python
In-Reply-To: Your message of "Mon, 26 Jun 2006 18:00:58 PDT."
	<bbaeab100606261800x5949cb89h97424fc052e33534@mail.gmail.com> 
Message-ID: <06Jun27.190124pdt."58641"@synergy1.parc.xerox.com>

> The plan is to allow pure Python code to be embedded into web pages like
> JavaScript.  I am not going for the applet approach like Java.

Java support is now just a plug-in.  Should be easy to make a Python
plug-in system that works the same way.  If only we had a GUI... :-)

Bill

From fredrik at pythonware.com  Wed Jun 28 04:14:21 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Wed, 28 Jun 2006 04:14:21 +0200
Subject: [Python-Dev] Proposal to eliminate PySet_Fini
In-Reply-To: <d38f5330606271109s39f64022w53261832cd17c6b@mail.gmail.com>
References: <d38f5330606271109s39f64022w53261832cd17c6b@mail.gmail.com>
Message-ID: <e7solq$o6$1@sea.gmane.org>

Alexander Belopolsky wrote:

> Setobject code allocates several internal objects on the heap that are
> cleaned up by the PySet_Fini function.  This is a fine design choice,
> but it often makes debugging applications with embedded python more
> difficult.

given that CPython has about a dozen Fini functions, what exactly is it 
that makes PySet_Fini so problematic ?

$ more Python/pythonrun.c

	...
	PyMethod_Fini();
	PyFrame_Fini();
	PyCFunction_Fini();
	PyTuple_Fini();
	PyList_Fini();
	PySet_Fini();
	PyString_Fini();
	PyInt_Fini();
	PyFloat_Fini();
	_PyUnicode_Fini();
	...

</F>


From python-dev at zesty.ca  Wed Jun 28 05:21:20 2006
From: python-dev at zesty.ca (Ka-Ping Yee)
Date: Tue, 27 Jun 2006 22:21:20 -0500 (CDT)
Subject: [Python-Dev] Switch statement - handling errors
In-Reply-To: <fb6fbf560606271221n43c80b3bx98c125a4e77305d7@mail.gmail.com>
References: <fb6fbf560606271221n43c80b3bx98c125a4e77305d7@mail.gmail.com>
Message-ID: <Pine.LNX.4.58.0606272219000.17937@server1.LFW.org>

On Tue, 27 Jun 2006, Jim Jewett wrote:
> (Almost) everyone agrees that the case expressions SHOULD be run-time
> constants.  The disagreements are largely over what to do when this
> gets violated.

I like your summary and understood most of it (options 1, 2, 3, 5, 6).
The only part i didn't understand was this:

> Bad Case Option (4) -- Ignore problems
> --------------------------------------
>
> This is for optimization; go ahead and ignore any problems you can.
> Maybe that branch will never be taken...  Ironically, this is also
> largely school I, since it matches the if semantics.

Could you elaborate on what this means?  Does "ignore any problems"
mean "even if a case value changes, pretend it didn't change"?  But
that wouldn't match the 'if' semantics, so i'm not sure what you
had in mind.

> Bad Case Option (6) -- Undefined
> --------------------------------
[...]
> The down side is that people may start to count on the actual behavior
> anyhow; then (in practice) we might just have Bad Case Option (5)
> without documentation.

I agree with this last paragraph.  Option 6 seems the most risky of all.


-- ?!ng

From nnorwitz at gmail.com  Wed Jun 28 05:44:23 2006
From: nnorwitz at gmail.com (Neal Norwitz)
Date: Tue, 27 Jun 2006 20:44:23 -0700
Subject: [Python-Dev] doc for new restricted execution design for Python
In-Reply-To: <bbaeab100606270847p6b210a86t88c091835e4ff501@mail.gmail.com>
References: <fb6fbf560606270750l29c0c5a8s521f121518cbbc67@mail.gmail.com>
	<bbaeab100606270847p6b210a86t88c091835e4ff501@mail.gmail.com>
Message-ID: <ee2a432c0606272044x38efccaexe4c32b97d6005ff6@mail.gmail.com>

On 6/27/06, Brett Cannon <brett at python.org> wrote:
>
> > (5)  I think file creation/writing should be capped rather than
> > binary; it is reasonable to say "You can create a single temp file up
> > to 4K" or "You can create files, but not more than 20Meg total".
>
> That has been suggested before.  Anyone else like this idea?

What would this code do:

    import os

    MAX = 4
    for i in xrange(10):
      fp = open(str(i), 'w+')
      fp.write(' ' * (MAX // 4))
      fp.close()
      if i % 2:
          os.unlink(str(i))

How many times should this execute, 4 or 8?  What about if there is no
if i % 2 and the file is unlinked at the end of each loop?  Should
that loop 10 times without error?  What would happen if we used the
same file name?  What would happen if we did something like:

    fp = open(str(i), 'w+')
    MAX = 4
    for i in xrange(10000):
      fp.seek(0)
      fp.write(' ' * (MAX // 4))

Should this succeed?
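
For concreteness, here is a rough sketch of the "cap total bytes
written" reading -- illustrative only, the class and policy are
invented, and it deliberately doesn't answer the unlink questions above:

    class CappedFile(object):
        # Illustrative only: counts bytes written; deleting a file
        # does not give the budget back.
        def __init__(self, path, budget):
            self.fp = open(path, 'w+')
            self.budget = budget          # shared, mutable: [bytes_left]

        def write(self, data):
            if len(data) > self.budget[0]:
                raise IOError("write cap exceeded")
            self.budget[0] -= len(data)
            self.fp.write(data)

        def close(self):
            self.fp.close()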

n

From guido at python.org  Wed Jun 28 06:40:35 2006
From: guido at python.org (Guido van Rossum)
Date: Tue, 27 Jun 2006 21:40:35 -0700
Subject: [Python-Dev] Switch statement - handling errors
In-Reply-To: <fb6fbf560606271221n43c80b3bx98c125a4e77305d7@mail.gmail.com>
References: <fb6fbf560606271221n43c80b3bx98c125a4e77305d7@mail.gmail.com>
Message-ID: <ca471dc20606272140k2de2e10dkbaa3572a0188a11b@mail.gmail.com>

On 6/27/06, Jim Jewett <jimjjewett at gmail.com> wrote:
> Bad Case Option (5) -- ad hoc mixture
> -------------------------------------
>
> Pick an arbitrary set of rules, and follow it.
>
> Guido is currently leaning towards this, with the rules being "freeze
> at definition", raise for unhashable, ignore later changes, undecided
> on overlapping ranges.
>
> The disadvantage is that people can "cheat" with non-constant
> expressions.  Sometimes, this will work out great.  Sometimes it will
> lead to nasty non-localized bugs.  We have to explain exactly which
> cheats are allowed, and that explanation could get byzantine.

A solution that is often offered in situations like this: Pychecker
(or something like it) can do a much more thorough check. It should be
easy for Pychecker to keep track of the constancy of variables.

IMO my proposal has no cheats. It has well-defined semantics. Maybe
not the semantics you'd like to see, but without ESP built into the
compiler that's impossible. If you stick to the simple rule "constants
only" then you won't see any semantics surprises. Just stay away from
anything where you're not sure whether it's really a constant, and
you'll be fine. If on the other hand you want to explore the
boundaries of the semantics, we'll give you one simple rule, and you
can verify that that rule is indeed all there is.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From talin at acm.org  Wed Jun 28 07:52:52 2006
From: talin at acm.org (Talin)
Date: Tue, 27 Jun 2006 22:52:52 -0700
Subject: [Python-Dev] PEP 3103: A Switch/Case Statement
In-Reply-To: <ca471dc20606271020s1ee6d48q324a0d0ef84e096e@mail.gmail.com>
References: <ca471dc20606261223g55126e1fg457c30049dbbf34b@mail.gmail.com>	<44A13461.508@gmail.com>
	<44A165F1.6080807@ronadam.com>
	<ca471dc20606271020s1ee6d48q324a0d0ef84e096e@mail.gmail.com>
Message-ID: <44A21934.40801@acm.org>

Guido van Rossum wrote:
> On 6/27/06, Ron Adam <rrr at ronadam.com> wrote:
> 
>>So modeling the switch after dictionary dispatching more directly where
>>the switch is explicitly defined first and then used later might be good
>>both because it offers reuse in the current scope and it can easily be
>>used in code that currently uses dict style dispatching.
>>
>>    switch name:
>>       1:
>>          ...
>>       TWO:
>>          ...
>>       'a', 'b', 'c':
>>          ...
>>       in range(5,10):
>>          ...
>>       else:
>>          ...
>>
>>    for choice in data:
>>       do choice in name:    # best calling form I can think of.
> 
> 
> It looks like your proposal is to change switch into a command that
> defines a function of one parameter. Instead of the "do <expression>
> in <switch>" call you could just call the switch -- no new syntax
> needed. Your example above would be
> 
>   for choice in data:
>     name(choice)          # 'name' is the switch's name

This parallels some of my thinking -- that we ought to somehow make the 
dict-building aspect of the switch statement explicit (which is better 
than implicit, as we all have been taught.)

My version of this is to add to Python the notion of a simple 
old-fashioned subroutine - that is, a function with no arguments and no 
additional scope, which can be referred to by name. For example:

def MyFunc( x ):
    sub case_1:
       ...

    sub case_2:
       ...

    sub case_3:
       ...

    # A direct call to the subroutine:
    do case_1

    # An indirect call
    y = case_2
    do y

    # A dispatch through a dict
    d = dict( a=case_1, b=case_2, c=case_3 )
    do d[ 'a' ]

The 'sub' keyword defines a subroutine. A subroutine is simply a block 
of bytecode with a return op at the end. When a subroutine is invoked, 
control passes to the indented code within the 'sub' clause, and 
continues to the end of the block - there is no 'fall through' to the 
next block. When the subroutine is complete, a return instruction is 
executed, and control transfers back to the original location.

Because subroutines do not define a new scope, they can freely modify 
the variables of the scope in which they are defined, just like the code 
in an 'if' or 'else' block.

One ambiguity here is what happens if you attempt to call a subroutine 
from outside of the code block in which it is defined. The easiest 
solution is to declare that this is an error - in other words, if the 
current execution scope is different than the scope in which the 
subroutine is defined, an exception is thrown.

A second possibility is to store a reference to the defining scope as 
part of the subroutine definition. So when you take a reference to 
'case_1', you are actually referring to a closure of the enclosing scope 
and the subroutine address.

This approach has a number of advantages that I can see:

   -- Completely eliminates the problems of when to freeze the dict, 
because the dict is 'frozen' explicitly (or not at all, if desired.)

   -- Completely eliminates the question of whether to support ranges in 
the switch cases. The programmer is free to invent whatever type of 
dispatch mechanism they wish. For example, instead of using a dict, they 
could use an array of subroutines, or a spanning tree / BSP tree to 
represent contiguous ranges of options.

   -- Allows for development of dispatch methods beyond the switch model 
- for example, the dictionary could be computed, transformed and 
manipulated by user code before used for dispatch.

   -- Allows for experimentation with other flow of control forms.

The primary disadvantage of this form is that the case values and the 
associated code blocks are no longer co-located, which reduces some of 
the expressive power of the switch.

Note that if you don't want to define a new keyword, an alternate syntax 
would be 'def name:' with no argument parentheses, indicating that this is 
not a function but a procedure.
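
(For comparison, a rough approximation of this pattern in today's Python 
-- purely illustrative -- uses ordinary nested functions plus a dict; the 
nested functions get their own scope, which is exactly the difference the 
'sub' proposal is meant to remove:)

    def MyFunc(x):
        result = {}                  # shared mutable state for the "subs"

        def case_1():
            result['value'] = 'one'

        def case_2():
            result['value'] = 'two'

        d = dict(a=case_1, b=case_2)
        d[x]()                       # dispatch
        return result['value']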

-- Talin

From ncoghlan at gmail.com  Wed Jun 28 09:15:13 2006
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 28 Jun 2006 17:15:13 +1000
Subject: [Python-Dev] PEP 328 and PEP 338, redux
In-Reply-To: <ca471dc20606270808p4fe32945lf6019005bc3b054f@mail.gmail.com>
References: <44A11EA1.1000605@iinet.net.au>
	<ca471dc20606270808p4fe32945lf6019005bc3b054f@mail.gmail.com>
Message-ID: <44A22C81.5070701@gmail.com>

Guido van Rossum wrote:
> However, I'm fine with setting *another* variable to the full package
> name so someone who *really* wants to do relative imports here knows
> the package name.

OK, I'll do that. Any objections to __module_name__ as the name of the
variable? (to keep things simple, run_module() will always define the
variable, even if __name__ and __module_name__ say the same thing).

I'll then put a note in PEP 338 about the necessary hackery to make relative 
imports work correctly from a main module - I don't see any reason to include 
something like that in the normal docs, since the recommended approach is for 
main modules to use absolute imports.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------
             http://www.boredomandlaziness.org

From jcarlson at uci.edu  Wed Jun 28 09:23:45 2006
From: jcarlson at uci.edu (Josiah Carlson)
Date: Wed, 28 Jun 2006 00:23:45 -0700
Subject: [Python-Dev] PEP 3103: A Switch/Case Statement
In-Reply-To: <44A21934.40801@acm.org>
References: <ca471dc20606271020s1ee6d48q324a0d0ef84e096e@mail.gmail.com>
	<44A21934.40801@acm.org>
Message-ID: <20060627233313.1090.JCARLSON@uci.edu>


Talin <talin at acm.org> wrote:
> My version of this is to add to Python the notion of a simple 
> old-fashioned subroutine - that is, a function with no arguments and no 
> additional scope, which can be referred to by name. For example:

I don't like the idea of an embedded subroutine for a few reasons.  One
of them is because you need to define the case -> sub mapping
dictionaries in each pass, you are getting no improvement in speed
(which is a motivating factor in this discussion).  Even worse, the
disconnect between case definition and dispatch makes it feel quite a
bit like a modified label/goto proposal.  The ultimate killer is that
your proposed syntax (even using def) makes this construct less readable
than pretty much any if/elif/else chain I have ever seen.

 - Josiah


From ncoghlan at gmail.com  Wed Jun 28 09:56:45 2006
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 28 Jun 2006 17:56:45 +1000
Subject: [Python-Dev] PEP 3103: A Switch/Case Statement
In-Reply-To: <ca471dc20606270835g65d11893m1c069f9c0d003ba7@mail.gmail.com>
References: <ca471dc20606261223g55126e1fg457c30049dbbf34b@mail.gmail.com>	
	<44A13461.508@gmail.com>
	<ca471dc20606270835g65d11893m1c069f9c0d003ba7@mail.gmail.com>
Message-ID: <44A2363D.8090300@gmail.com>

Guido van Rossum wrote:
> I think we all agree
> that side effects of case expressions is one way how we can deduce the
> compiler's behind-the-scenes tricks (even School Ib is okay with
> this). So I don't accept this as proof that Option 2 is better.

OK, I worked out a side effect free example of why I don't like option 3:

   def outer(cases=None):
       def inner(option, force_default=False):
           if cases is not None and not force_default:
               switch option:
                   case in cases[0]:
                       # case 0 handling
                   case in cases[1]:
                       # case 1 handling
                   case in cases[2]:
                       # case 2 handling
           # Default handling
       return inner

I believe it's reasonable to expect this to work fine - the case expressions 
don't refer to any local variables, and the subscript operations on the 
closure variable are protected by a sanity check to ensure that variable isn't 
None.

There certainly isn't anything in the code above to suggest to a reader that 
the condition attempting to guard evaluation of the switch statement might not 
do its job.

With first-time-execution jump table evaluation, there's no problem - when the 
closure variable is None, there's no way to enter the body of the if
statement, so the switch statement is never executed and the case expressions
are never evaluated. Such functions will still be storing a cell object for
the switch's jump table, but it will always be empty because the code to
populate it never gets a chance to run.

With the out of order execution involved in def-time evaluation, however, the
case expressions would always be executed, even though the inner function is 
trying to protect them with a sanity check on the value of the closure variable.

Using Option 3 semantics would mean that calling "outer()" given the above 
function definition will give you the rather surprising result "TypeError: 
'NoneType' object is unsubscriptable", with a traceback pointing to the line 
"case cases[0]:" in the body of a function that hasn't been called, and that 
includes an if statement preventing that line from being reached when 'cases' 
is None.
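
(For comparison -- a sketch only, with strings standing in for the case 
bodies and each cases[i] treated as a single hashable key -- the same 
guarded dispatch written with an explicitly lazily-built dict has no such 
surprise, because nothing is evaluated until inner() actually runs with a 
non-None 'cases':)

    def outer(cases=None):
        jump = [None]                       # filled in on first use
        def inner(option, force_default=False):
            if cases is not None and not force_default:
                if jump[0] is None:
                    jump[0] = {cases[0]: "case 0 handling",
                               cases[1]: "case 1 handling",
                               cases[2]: "case 2 handling"}
                if option in jump[0]:
                    return jump[0][option]
            return "Default handling"
        return inner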

>> When it comes to the question of "where do we store the result?" for the
>> first-execution calculation of the jump table, my proposal is "a 
>> hidden cell
>> in the current namespace".
> 
> Um, what do you mean by the current namespace? You can't mean the
> locals of the function containing the switch. There aren't always
> outer functions so I must conclude you mean the module globals. But
> I've never seen those referred to as "the current namespace".

By 'current namespace' I really do mean locals() - the cell objects themselves
would be local variables from the point of view of the currently executing code.

For functions, the cell objects would be created at function definition time,
for code handled via exec-style execution, they'd be created just before 
execution of the first statement begins. In either case, the cell objects 
would already be in locals() before any bytecode gets executed.

It's only the calculation of the cell *contents* that gets deferred until
first execution of the switch statement.

> So do I understand that the switch gets re-initialized whenever a new
> function object is created? That seems a violation of the "first time
> executed" rule, or at least a modification ("first time executed per
> defined function"). Or am I misunderstanding?

I took it as a given that 'first time execution' had to be per function
and/or invocation of exec - tying caching of expressions that rely on module
globals or closure variables to code objects doesn't make any sense, because
the code object may have different globals and/or closure variables next time
it gets executed.

I may not have explained my opinion about that very well though, because the 
alternative didn't even seem to be an option.

> But if I have a code object c containing a switch statement (not
> inside a def) with a side effect in one of its cases, the side effect
> is activated each time through the following loop, IIUC:
> 
>  d = {}
>  for i in range(10):
>    exec c in d

Yep. For module and class level code, the caching really only has any
speed benefit if the switch statement is inside a loop.

The rationale for doing it that way becomes clearer if you consider what would 
happen if you created a new dictionary each time through the loop:

   for i in range(10):
       d = {}
       exec c in d
       print d["result"]

With a fresh namespace for every pass, a jump table cached on the code object
would have been built against the first dictionary and then silently reused,
even though each later 'd' (and anything the case expressions refer to in it)
is brand new - so the caching has to be tied to each execution.

> I'm confused how you can first argue that tying things to the function
> definition is one of the main drawbacks of Option 3, and then proceed
> to tie Option 2 to the function definition as well. This sounds like
> by far the most convoluted specification I have seen so far. I hope
> I'm misunderstanding what you mean by namespace.

It's not the link to function definitions that I object to in Option 3, it's
the idea of evaluating the cases at function definition *time*. I believe the
out-of-order execution involved will result in too many surprises when you
start considering surrounding control flow statements that lead to the switch 
statement not being executed at all.

If a switch statement is inside a class statement, a function definition
statement, or an exec statement then I still expect the jump table to be
recalculated every time the containing statement is executed, regardless of
whether Option 2 or Option 3 is used for when the cases expressions get
evaluated (similarly, reloading a module would recalculate any module level 
jump tables)

And I agree my suggestions are the most involved so far, but I think that's 
because the current description of option 3 is hand-waving away a couple of 
important issues:
   - how does it deal with module and class level code?
   - how does it deal with switch statements that are inside conditional logic
where that conditional logic determines whether or not the case
expressions can be safely evaluated?

(I guess the fact that I'm refining the idea while writing about it doesn't 
really help, either. . .)

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------
             http://www.boredomandlaziness.org

From talin at acm.org  Wed Jun 28 10:04:31 2006
From: talin at acm.org (Talin)
Date: Wed, 28 Jun 2006 01:04:31 -0700
Subject: [Python-Dev] PEP 3103: A Switch/Case Statement
In-Reply-To: <20060627233313.1090.JCARLSON@uci.edu>
References: <ca471dc20606271020s1ee6d48q324a0d0ef84e096e@mail.gmail.com>
	<44A21934.40801@acm.org> <20060627233313.1090.JCARLSON@uci.edu>
Message-ID: <44A2380F.6000503@acm.org>

Josiah Carlson wrote:
> Talin <talin at acm.org> wrote:
> 
>>My version of this is to add to Python the notion of a simple 
>>old-fashioned subroutine - that is, a function with no arguments and no 
>>additional scope, which can be referred to by name. For example:
> 
> 
> I don't like the idea of an embedded subrutine for a few reasons.  One
> of them is because you need to define the case -> sub mapping
> dictionaries in each pass, you are getting no improvement in speed
> (which is a motivating factor in this discussion).  Even worse, the
> disconnect between case definition and dispatch makes it feel quite a
> bit like a modified label/goto proposal.  The ultimate killer is that
> your proposed syntax (even using def) make this construct less readable
> than pretty much any if/elif/else chain I have ever seen.
> 
>  - Josiah

The case -> sub mapping doesn't need to be defined every time - that's 
the point, you as the programmer decide when and how to construct the 
dictionary, rather than the language trying to guess what it is you 
want. EIBTI.

For example, if I wanted to emulate the "dict on first use" semantics, 
all I would have to do is something along the lines of:

    d = None
    def MyFunc( x ):
       global d

       sub ... etc...

       if d is None:
          d = dict( ... )

       do d[ x ]

You could also define the switch in an outer function that contains an 
inner function that is called multiple times:

    def Outer():
       sub S1:
          ...

       sub S2:
          ...

       sub S3:
          ...

       dispatch = {
          parser.IDENT: S1,
          parser.NUMBER: S2,
          parser.COMMENT: S3
       }

       def Inner( x ):
          do dispatch[ x ]

       return Inner

There is also the possibility of building the dict before the function 
is run, however this requires a method of peeking into the function body 
and extracting the definitions there. For example, suppose the 
subroutine names were also attributes of the function object:

    def MyFunc( x ):
       sub upper:
          ...
       sub lower:
          ...
       sub control:
          ...
       sub digit:
          ...

       do dispatch[ x ]


    # Lets use an array this time, for variety
    dispatch = [
       MyFunc.upper,
       MyFunc.lower,
       MyFunc.upper, # Yes, 2 and 3 are the same as 0 and 1
       MyFunc.lower,
       MyFunc.control,
       MyFunc.digit,
    ]

(Note that we still enforce the rule that the 'do' and the 'sub' 
statements have to be in the same scope - but the construction of the 
dispatch table doesn't have to be.)

With regards to your second and third points: sure, I freely admit that 
this proposal is less readable than a switch statement. The question is, 
however, is it more readable than what we have *now*? As I have 
explained, comparing it to if/elif/else chains is unfair, because they 
don't have equivalent performance. The real question is, is it more 
readable than, say, a dictionary of references to individual functions; 
and I think that there are a number of possible use cases where the 
answer would be 'yes'.

I also admit that what I propose offers less in the way of syntactical 
sugar than a switch statement - but in return what you gain is complete 
absence of the various 'surprise' behaviors that people have been 
arguing about.

Note, for example, that in the above example you are free to use 
constants, variables, attributes, or any other kind of value in the 
dictionary, as long as it's a valid dictionary key. There's no fussing 
about with 'const' or 'static' or whether or not you can use local 
variables or compiler literals or whatever. You don't have to worry 
about whether it works in module scope (it does), or in class scope 
(well...it works as well as any other executable code does.)

(Not that 'const' and 'static' et al. aren't valid ideas, but I want to 
avoid creating a syntactical construct in Python that requires going 
against Python's inherent dynamism.)

I think that language features should "just work" in all cases, or at 
least all cases that are reasonable. I don't like the idea of a switch 
statement that is hedged around with unintuitive exceptions and strange 
corner cases.

-- Talin

From fredrik at pythonware.com  Wed Jun 28 10:06:38 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Wed, 28 Jun 2006 10:06:38 +0200
Subject: [Python-Dev] PEP 3103: A Switch/Case Statement
References: <ca471dc20606261223g55126e1fg457c30049dbbf34b@mail.gmail.com>	<44A13461.508@gmail.com><ca471dc20606270835g65d11893m1c069f9c0d003ba7@mail.gmail.com>
	<44A2363D.8090300@gmail.com>
Message-ID: <e7tdae$nip$1@sea.gmane.org>

Nick Coghlan wrote:

> There certainly isn't anything in the code above to suggest to a reader that
> the condition attempting to guard evaluation of the switch statement might not
> do its job.

that's why the evaluation model used in the case statement needs to be explicit.

that applies to the "once but not really" approach, as well as the "static = in global
scope" approach (http://online.effbot.org/2006_06_01_archive.htm#pep-static).
there are no shortcuts here, if we want things to be easy to explain and easy to
internalize.

</F> 




From fredrik at pythonware.com  Wed Jun 28 10:38:27 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Wed, 28 Jun 2006 10:38:27 +0200
Subject: [Python-Dev] School IIb?
References: <Pine.LNX.4.58.0606261738370.17937@server1.LFW.org><ca471dc20606261223g55126e1fg457c30049dbbf34b@mail.gmail.com><Pine.LNX.4.58.0606261438120.17937@server1.LFW.org><ca471dc20606261306l68be4aa9m867efab4403362dc@mail.gmail.com><Pine.LNX.4.58.0606261521270.17937@server1.LFW.org><ca471dc20606261457x423bfa23y120d1c2c61388110@mail.gmail.com><Pine.LNX.4.58.0606261738370.17937@server1.LFW.org>
	<ca471dc20606261552o3d5fe88u6674770bae99bc31@mail.gmail.com >
	<5.1.1.6.0.20060626185852.03aacf18@sparrow.telecommunity.com>
Message-ID: <e7tf63$tv6$1@sea.gmane.org>

Phillip J. Eby wrote:

> Hear, hear!  We already have if/elif, we don't need another way to spell
> it.  The whole point of switch is that it asserts that exactly *one* case
> is supposed to match

that's not true for all programming languages that have a switch construct, though;
the common trait is that you're dispatching on a single value, not necessarily that
there cannot be potentially overlapping case conditions.

</F> 




From fredrik at pythonware.com  Wed Jun 28 10:39:27 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Wed, 28 Jun 2006 10:39:27 +0200
Subject: [Python-Dev] PEP 3103: A Switch/Case Statement
References: <ca471dc20606261223g55126e1fg457c30049dbbf34b@mail.gmail.com><Pine.LNX.4.58.0606261438120.17937@server1.LFW.org><ca471dc20606261306l68be4aa9m867efab4403362dc@mail.gmail.com><Pine.LNX.4.58.0606261521270.17937@server1.LFW.org><ca471dc20606261457x423bfa23y120d1c2c61388110@mail.gmail.com><e7qfls$fr9$1@sea.gmane.org><ca471dc20606262237y18854bfbvbbd0793d0c8745f5@mail.gmail.com><bcf87d920606270142x7a3ef0ddx74959931260c80f8@mail.gmail.com>
	<ca471dc20606270813m60619473u63b5c20ad2a60492@mail.gmail.com>
Message-ID: <e7tf80$u4h$1@sea.gmane.org>

Guido van Rossum wrote:

>> Is it unacceptable - or impractical - to break the addition of switch
>> to python in two (minor version separated) steps ?
>
> But what's the point? We have until Python 3000 anyway.

except that we may want to "reserve" the necessary keywords in 2.6...

</F> 




From arigo at tunes.org  Wed Jun 28 12:44:31 2006
From: arigo at tunes.org (Armin Rigo)
Date: Wed, 28 Jun 2006 12:44:31 +0200
Subject: [Python-Dev] Is Lib/test/crashers/recursive_call.py really a
	crasher?
In-Reply-To: <bbaeab100606271032n5de4a33fja8fbba88da33a3e2@mail.gmail.com>
References: <bbaeab100606270931v376c6fa3v653f2cfcd92c0880@mail.gmail.com>
	<2mirmmh5xq.fsf@starship.python.net>
	<bbaeab100606271032n5de4a33fja8fbba88da33a3e2@mail.gmail.com>
Message-ID: <20060628104430.GA21825@code0.codespeak.net>

Hi Brett,

On Tue, Jun 27, 2006 at 10:32:08AM -0700, Brett Cannon wrote:
> OK, with you and Thomas both wanting to keep it I will let it be.  I just
> won't worry about fixing it myself during my interpreter hardening crusade.

I agree with this too.  If I remember correctly, you even mentioned in
your rexec docs that sys.setrecursionlimit() should be disallowed from
being run by untrusted code, which means that an untrusted interpreter
would be safe.

I guess we could add an example of a bogus 'new.code()' call in the
Lib/test/crashers directory too, without you having to worry about it in
untrusted mode if new.code() is forbidden.  I could also add my
'gc.get_referrers()' attack, which should similarly not be callable from
untrusted code anyway.


A bientot,

Armin

From martin at v.loewis.de  Wed Jun 28 12:52:45 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Wed, 28 Jun 2006 12:52:45 +0200
Subject: [Python-Dev] ImportWarning flood
In-Reply-To: <20060626104108.89960.qmail@web31510.mail.mud.yahoo.com>
References: <20060626104108.89960.qmail@web31510.mail.mud.yahoo.com>
Message-ID: <44A25F7D.8000505@v.loewis.de>

Ralf W. Grosse-Kunstleve wrote:
> If there is a consenus, I'd create a new exception ImportErrorNoModule(name)
> that is used consistently from all places. This would ensure uniformity of the
> message in the future.

A correction proposal should only be given if it is likely correct.
There can be many reasons why an import could fail: there might be
no read permission for the file, or the PYTHONPATH might be setup
incorrectly.

IOW, a hint about a missing __init__.py should only be given if
a directory with the name of the module was found, but lacked an
__init__.py (i.e. in the cases where currently a warning is
produced).

Regards,
Martin

From glingl at aon.at  Wed Jun 28 12:57:23 2006
From: glingl at aon.at (Gregor Lingl)
Date: Wed, 28 Jun 2006 12:57:23 +0200
Subject: [Python-Dev] xturtle.py a replacement for turtle.py(!?)
Message-ID: <44A26093.8070503@aon.at>

xturtle.py, extended turtle graphics
a new Tkinter based turtle graphics module for Python

I just have released xturtle.py (v.0.91).  It can be found at:

http://sourceforge.net/tracker/?group_id=5470&atid=305470

with RequestID 1513695 (and 1513699 for the docs)
and also here

http://ada.rg16.asn-wien.ac.at/~python/xturtle

with some supplementary information.

xturtle was first announced at the edu-sig and is reported 
to work properly on all major platforms (Mac, Linux and 
Windows)

Now it was suggested that I discuss it on this list. So I'll try.

For now I'll give only two introductory statements and wait 
for a response, hoping that a fruitful discussion will evolve.

(I) The module xturtle.py is an extended reimplementation 
of turtle.py; it retains the merits of turtle.py and is backward 
compatible with it. Enhancements over turtle.py are:

# Better animation of the turtle movements, especially of 
turning the turtle. So the turtles can more easily be used 
as a visual feedback instrument by the (beginning) programmer.
# Different turtle shapes, gif-images as turtle shapes, user 
defined and user controllable turtle shapes, among them 
compound (multicolored) shapes.
# Fine control over turtle movement and screen updates via 
delay(), and enhanced tracer() and speed(), update() methods.
# Aliases for the most commonly used commands, like fd for 
forward etc., following the early Logo traditions. This 
reduces the boring work of typing long sequences of 
commands, which often occur in a natural way when kids try 
to program fancy pictures on their first encounter with 
turtle graphics (still knowing nothing about loops).
# Some simple commands/methods for creating event driven 
programs (mouse-, key-, timer-events). Especially useful for 
programming simple games.
# A scrollable Canvas class. The scrollable Canvas can be 
extended interactively as needed while playing around with 
the turtle(s) (e. g. to follow some escaped turtle). The 
Window containing the default canvas when using the Pen 
class can also be resized and repositioned programmatically.
# Commands for controlling background color or background 
image.


(II) Motives: I designed this module to provide the easiest 
possible access to a sufficiently rich graphics toolkit. I consider 
this crucial for students as well as for teachers, who e. g.
have to decide which programming language to use for 
teaching programming. Considering turtle graphics a very 
appropriate tool for introductory programming courses, I used 
the current turtle.py as a central tool in my book "Python 
for kids" despite being well aware of its deficiencies.
Now I propose a better one.

You may understand my intentions best by having a look at 
the 25+ example scripts (with the included demoViewer) 
provided in the xturtle.zip file, which can be downloaded
from the above mentioned website. (I think these demos 
should not be included in the standard distribution, they 
could go into a special edu-distro as Kirby Urner suggested 
lately.)

I would very much appreciate if xturtle.py could go into 
Python 2.5

Let's see

Gregor


From glingl at aon.at  Wed Jun 28 13:42:39 2006
From: glingl at aon.at (Gregor Lingl)
Date: Wed, 28 Jun 2006 13:42:39 +0200
Subject: [Python-Dev] xturtle.py - a replacement for turtle.py
Message-ID: <44A26B2F.3090700@aon.at>

xturtle.py, extended turtle graphics
is a new Tkinter based turtle graphics module for Python

xturtle.py (Version 0.91) can be found at:

http://sourceforge.net/tracker/?group_id=5470&atid=305470
(Request ID 1513695, and 1513699 for the docs)

and at

http://ada.rg16.asn-wien.ac.at/~python/xturtle
together with a set of example scripts and a demoViewer

xturtle was first announced at edu-sig and is reported to 
work properly on all major platforms (Mac, Linux and
Windows). I propose to use it as a replacement for turtle.py.

It was suggested to me to discuss it on this list.  So I'll try that.

For now I'll give only two introductory statements and then wait for a response, hoping that a fruitful 
discussion will evolve.

(I) xturtle.py is a reimplementation of turtle.py that retains its merits and is backward compatible with turtle.py. Enhancements over turtle.py are:


# Better animation of the turtle movements, especially of turning the 
turtle. So the turtles can more easily be used as a visual feedback 
instrument by the (beginning) programmer.
# Different turtle shapes, gif-images as turtle shapes, user defined and 
user controllable turtle shapes, among them compound (multicolored) shapes.
# Fine control over turtle movement and screen updates via |delay()|, 
and enhanced |tracer()| and |speed()|, |update()| methods.
# Aliases for the most commonly used commands, like |fd| for |forward| 
etc., following the early Logo traditions. This reduces the boring work 
of typing long sequences of commands, which often occur in a natural way 
when kids try to program fancy pictures on their first encounter with 
turtle graphics (still not knowing about loops).
# Some simple commands/methods for creating event driven programs 
(mouse-, key-, timer-events). Especially useful for programming simple 
games.
# A scrollable Canvas class. The scrollable Canvas can be extended 
interactively as needed while playing around with the turtle(s) (e. g. 
to follow some escaped turtle).
# Commands for controlling background color or background image.

(II) Motives: My goal was to provide the easiest possible access to a 
sufficiently rich graphics toolkit. I consider this of crucial 
importance for students and teachers who, e. g., have to decide which 
language to use for introductory programming courses. Moreover I 
consider turtle graphics an excellent tool for visualizing programming 
concepts. So I already used the current turtle.py as a central tool in 
the first edition of my book "Python für Kids", despite its apparent 
deficiencies.

Now I propose an alternative: xturtle.py. You will best understand my 
intentions by having a look at the 25+ demo scripts using the 
accompanying demoViewer, which are provided with xturtle.zip at the 
above-mentioned website. (I do not propose to include these in the 
standard distribution. Perhaps they could go into some special 
edu-distro, as Kirby Urner suggested lately.)

I would appreciate it very much if xturtle.py could go into Python 2.5. 
I'm ready to do the amendments which may emerge as necessary from the 
discussion here.

Regards,
Gregor Lingl





From glingl at aon.at  Wed Jun 28 13:45:57 2006
From: glingl at aon.at (Gregor Lingl)
Date: Wed, 28 Jun 2006 13:45:57 +0200
Subject: [Python-Dev] Oh-why that??  Please ignore one of the two
Message-ID: <44A26BF5.8070607@aon.at>

Sorry (dunno why)
Gregor

From rwgk at yahoo.com  Wed Jun 28 14:25:58 2006
From: rwgk at yahoo.com (Ralf W. Grosse-Kunstleve)
Date: Wed, 28 Jun 2006 05:25:58 -0700 (PDT)
Subject: [Python-Dev] ImportWarning flood
In-Reply-To: <44A25F7D.8000505@v.loewis.de>
Message-ID: <20060628122558.19551.qmail@web31505.mail.mud.yahoo.com>

--- "Martin v. L???wis" <martin at v.loewis.de> wrote:

> Ralf W. Grosse-Kunstleve wrote:
> > If there is a consenus, I'd create a new exception
> ImportErrorNoModule(name)
> > that is used consistently from all places. This would ensure uniformity of
> the
> > message in the future.
> 
> A correction proposal should only be given if it is likely correct.

It is not a proposal, just a "note". Maybe a better alternative would be

ImportError: No module named foo
    Reminder: To resolve import problems consult the section on "Packages"
    at http://www.python.org/doc/tut/

> There can be many reasons why an import could fail: there might be
> no read permission for the file,

The warning in 2.5b1 doesn't fire in this case:

  % ls -l junk.py
  ---------- 1 rwgk cci 16 Jun 28 05:01 junk.py
  % python
  Python 2.5b1 (r25b1:47027, Jun 26 2006, 02:59:25) 
  [GCC 4.1.0 20060304 (Red Hat 4.1.0-3)] on linux2
  Type "help", "copyright", "credits" or "license" for more information.
  >>> import junk
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
  ImportError: No module named junk
  >>> 

> or the PYTHONPATH might be setup
> incorrectly.

That's impossible to detect.

> IOW, a hint about a missing __init__.py should only be given if
> a directory with the name of module was found, but lacked an
> __init__.py (i.e. in the cases where currently a warning is
> produced).

I am thinking you'd need to build up a buffer of potential warnings while
trying to resolve an import. If the import succeeds, the buffer is discarded;
if it fails, it is added to the exception message, or the warnings are
"flushed" right before the ImportError is raised. Does that sound right? How
would this interact with threading (it seems you'd need a separate buffer for
each thread)?


__________________________________________________
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com 

From ncoghlan at gmail.com  Wed Jun 28 15:18:42 2006
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 28 Jun 2006 23:18:42 +1000
Subject: [Python-Dev] PEP 328 and PEP 338, redux
In-Reply-To: <44A22C81.5070701@gmail.com>
References: <44A11EA1.1000605@iinet.net.au>	<ca471dc20606270808p4fe32945lf6019005bc3b054f@mail.gmail.com>
	<44A22C81.5070701@gmail.com>
Message-ID: <44A281B2.2080309@gmail.com>

Nick Coghlan wrote:
> Guido van Rossum wrote:
>> However, I'm fine with setting *another* variable to the full package
>> name so someone who *really* wants to do relative imports here knows
>> the package name.
> 
> OK, I'll do that. Any objections to __module_name__ as the name of the
> variable? (to keep things simple, run_module() will always define the
> variable, even if __name__ and __module_name__ say the same thing).
> 
> I'll then put a note in PEP 338 about the necessary hackery to make relative 
> imports work correctly from a main module - I don't see any reason to include 
> something like that in the normal docs, since the recommended approach is for 
> main modules to use absolute imports.

These two bits have been done.

The workaround to replace __name__ with __module_name__ in order to enable 
relative imports turned out to be pretty ugly, so I also worked up a patch to 
import.c to get it to treat __module_name__ as an override for __name__ when 
__name__ == '__main__'.

With the patch in place, relative imports from a main module executed using 
'-m' would work out of the box.

So given a test_foo.py that started like this:

   import unittest
   from .. import foo
   # Define the tests
   # Run the tests if __name__ == '__main__'

A file layout like this:

   /home
     /ncoghlan
        /devel
          /package
             /__init__.py
             /foo.py
             /test
                /__init__.py
                /test_foo.py

And a current working directory of /home/ncoghlan/devel, then the tests could 
be run simply by doing:

python -m package.test.test_foo

With beta 1 or current SVN, that would blow up with a ValueError.

We can't do anything to help directly executed files, though - the interpreter 
simply doesn't have access to the info it needs in order to determine the 
location of such files in the module namespace.

I'll post the patch to SF tomorrow (assuming the site decides to come back by 
then). In addition to the import.c changes, the patch includes some additional 
tests for runpy that exercise this and make sure it works, since runpy is the 
intended client.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------
             http://www.boredomandlaziness.org

From martin at v.loewis.de  Wed Jun 28 15:23:47 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Wed, 28 Jun 2006 15:23:47 +0200
Subject: [Python-Dev] ImportWarning flood
In-Reply-To: <20060628122558.19551.qmail@web31505.mail.mud.yahoo.com>
References: <20060628122558.19551.qmail@web31505.mail.mud.yahoo.com>
Message-ID: <44A282E3.1000009@v.loewis.de>

Ralf W. Grosse-Kunstleve wrote:
>> There can be many reasons why an import could fail: there might be
>> no read permission for the file,
> 
> The warning in 2.5b1 doesn't fire in this case:

Sure, but it would produce your "note", right? And the note would be
essentially wrong. Instead, the ImportError should read

ImportError: No module named junk; could not open junk.py (permission
denied)

>> or the PYTHONPATH might be setup
>> incorrectly.
> 
> That's impossible to detect.

Right. So the ImportError should not guess that there is a problem
with packages if there could be dozens of other reasons why the
import failed.

> I am thinking you'd need to build up a buffer of potential warnings while
> trying to resolve an import. If the import succeeds the buffer is discarded, if
> it fails it is added to the exception message, or the warnings are "flushed"
> right before the ImportError is raised. Does that sound right? 

That might work, yes.

> How would this interact with threading (it seems you'd need a separate 
> buffer for each thread)?

There are several solutions. I think you are holding the import lock
all the time, so there can be only one import running (one would have
to check whether the import lock is really held all the time); in
that case, a global variable would work just fine.

Another option is to pass-through all import-related data across all
function calls as a parameter; that may actually cause a reduction
in the number of parameters to the current functions, and simplify
the code. Define a struct to hold all the relevant data, allocate
it when entering the import code, pass it to every function,
fill it out as needed, and deallocate it when leaving the
import code. Allocation of the struct itself could likely be done
on the stack.

Yet another option is to put the data into thread storage (although
care is needed wrt. recursive imports within one thread).
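
A rough Python-level sketch of the thread-storage idea (the names here
are invented; the real change would of course live in import.c):

    import threading

    _notes = threading.local()

    def add_note(msg):
        # collect a potential hint while resolving the current import
        if not hasattr(_notes, 'buf'):
            _notes.buf = []
        _notes.buf.append(msg)

    def drain_notes():
        # called if the import fails; a successful import just resets it
        buf = getattr(_notes, 'buf', [])
        _notes.buf = []
        return buf

On failure the importer would append '\n'.join(drain_notes()) to the
ImportError message; on success it would simply reset the buffer.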

Regards,
Martin



From martin at v.loewis.de  Wed Jun 28 15:24:32 2006
From: martin at v.loewis.de (=?ISO-8859-15?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Wed, 28 Jun 2006 15:24:32 +0200
Subject: [Python-Dev] xturtle.py - a replacement for turtle.py
In-Reply-To: <44A26B2F.3090700@aon.at>
References: <44A26B2F.3090700@aon.at>
Message-ID: <44A28310.9000904@v.loewis.de>

Gregor Lingl wrote:
> I would appreciate it very much if xturtle.py could go into Python 2.5. 
> I'm ready to do the amendments which may emerge as necessary from the 
> discussion here.

I see little chance for that. Python 2.5 is feature-frozen.

Regards,
Martin

From jimjjewett at gmail.com  Wed Jun 28 15:42:05 2006
From: jimjjewett at gmail.com (Jim Jewett)
Date: Wed, 28 Jun 2006 09:42:05 -0400
Subject: [Python-Dev] doc for new restricted execution design for Python
In-Reply-To: <ee2a432c0606272044x38efccaexe4c32b97d6005ff6@mail.gmail.com>
References: <fb6fbf560606270750l29c0c5a8s521f121518cbbc67@mail.gmail.com>
	<bbaeab100606270847p6b210a86t88c091835e4ff501@mail.gmail.com>
	<ee2a432c0606272044x38efccaexe4c32b97d6005ff6@mail.gmail.com>
Message-ID: <fb6fbf560606280642r166514d8uba8876b97b76a0e1@mail.gmail.com>

On 6/27/06, Neal Norwitz <nnorwitz at gmail.com> wrote:
> On 6/27/06, Brett Cannon <brett at python.org> wrote:
> >
> > > (5)  I think file creation/writing should be capped rather than
> > > binary; it is reasonable to say "You can create a single temp file up
> > > to 4K" or "You can create files, but not more than 20Meg total".

> > That has been suggested before.  Anyone else like this idea?

> [ What exactly does the limit mean?  bytes written?  bytes currently stored?  bytes stored after exit?]

IMHO, I would prefer that it limit disk consumption; a deleted or
overwritten file would not count against the process, but even a
temporary spike would need to be less than the cap.

That said, I would consider any of the mentioned implementations an
acceptable proxy; the point is just that I might want to let a program
save data without letting it have my entire hard disk.

-jJ

From amk at amk.ca  Wed Jun 28 16:08:41 2006
From: amk at amk.ca (A.M. Kuchling)
Date: Wed, 28 Jun 2006 10:08:41 -0400
Subject: [Python-Dev] xturtle.py a replacement for turtle.py(!?)
In-Reply-To: <44A26093.8070503@aon.at>
References: <44A26093.8070503@aon.at>
Message-ID: <20060628140841.GA13947@rogue.amk.ca>

On Wed, Jun 28, 2006 at 12:57:23PM +0200, Gregor Lingl wrote:
> I would very much appreciate if xturtle.py could go into 
> Python 2.5

That decision is up to Anthony Baxter, the release manager.
Unfortunately 2.5beta1 is already out, and the developers try to avoid
large changes during the beta series, so I wouldn't be optimistic
about 2.5.  

Enhancing the turtle module would be an excellent candidate for 2.6,
though.  Please file a patch on SourceForge for this so the improved
module doesn't get forgotten.

Great demos, BTW!  I especially like the gravity ones.

--amk

From guido at python.org  Wed Jun 28 16:33:12 2006
From: guido at python.org (Guido van Rossum)
Date: Wed, 28 Jun 2006 07:33:12 -0700
Subject: [Python-Dev] School IIb?
In-Reply-To: <e7tf63$tv6$1@sea.gmane.org>
References: <ca471dc20606261223g55126e1fg457c30049dbbf34b@mail.gmail.com>
	<Pine.LNX.4.58.0606261438120.17937@server1.LFW.org>
	<ca471dc20606261306l68be4aa9m867efab4403362dc@mail.gmail.com>
	<Pine.LNX.4.58.0606261521270.17937@server1.LFW.org>
	<ca471dc20606261457x423bfa23y120d1c2c61388110@mail.gmail.com>
	<Pine.LNX.4.58.0606261738370.17937@server1.LFW.org>
	<5.1.1.6.0.20060626185852.03aacf18@sparrow.telecommunity.com>
	<e7tf63$tv6$1@sea.gmane.org>
Message-ID: <ca471dc20606280733m75d8a374ica8ddbff4f40224b@mail.gmail.com>

On 6/28/06, Fredrik Lundh <fredrik at pythonware.com> wrote:
> Phillip J. Eby wrote:
>
> > Hear, hear!  We already have if/elif, we don't need another way to spell
> > it.  The whole point of switch is that it asserts that exactly *one* case
> > is supposed to match
>
> that's not true for all programming languages that have a switch construct, though;
> the common trait is that you're dispatching on a single value, not necessarily that
> there cannot be potentially overlapping case conditions.

You have a point.

Suppose you're switching on some os-specific constants (e.g. exported
by the os module or some module like that). You have a case for each.
But on some os, two different constants have the same value (since on
that os they are implemented the same way -- like O_TEXT and O_BINARY
on Unix). Now your switch wouldn't work at all on that os; it would be
much better if you could arrange the cases so that one case has
preference over another.

There's also the (more likely) use case where you have a set of cases
to be treated the same, but one member of the set must be treated
differently. It would be convenient to put the exception in an earlier
case and be done with it.

Yet, it seems a shame not to be able to diagnose dead code due to
accidental case duplication. Maybe that's less important, and
pychecker can deal with it? After all we don't diagnose duplicate
method definitions either, and that must have bitten many of us
(usually due to a copy-and-paste error)...

This doesn't move me to school I. But I do want to introduce school
IIb which resolves redundant cases by saying the first match wins.
This is trivial to implement when building the dispatch dict (skip
keys already present).
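
In sketch form (plain Python, just to illustrate the rule; the helper
name is made up):

    def build_dispatch(pairs):
        # pairs: (case_value, handler) in source order; first match wins
        table = {}
        for value, handler in pairs:
            table.setdefault(value, handler)   # skip keys already present
        return table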

(An alternative would be to introduce new syntax to indicate "okay to
have overlapping cases" or "ok if this case is dead code" but I find
that overkill.)

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Wed Jun 28 16:45:30 2006
From: guido at python.org (Guido van Rossum)
Date: Wed, 28 Jun 2006 07:45:30 -0700
Subject: [Python-Dev] PEP 3103: A Switch/Case Statement
In-Reply-To: <44A21934.40801@acm.org>
References: <ca471dc20606261223g55126e1fg457c30049dbbf34b@mail.gmail.com>
	<44A13461.508@gmail.com> <44A165F1.6080807@ronadam.com>
	<ca471dc20606271020s1ee6d48q324a0d0ef84e096e@mail.gmail.com>
	<44A21934.40801@acm.org>
Message-ID: <ca471dc20606280745p120ec4a0mf5c0d29bb5933dec@mail.gmail.com>

Looks like this doesn't help at all when pre-computing the dispatch
dict based on named constants. So this is a no-go.

I should add that ABC had such named subroutines (but not for
switching); I dropped them to simplify things. They're not an
intrinsically undesirable or even unnecessary thing IMO. But it
doesn't solve my use case for switching. The syntax is also seriously
cumbersome compared to a PEP-3103-style switch.

--Guido

On 6/27/06, Talin <talin at acm.org> wrote:
> This parallels some of my thinking -- that we ought to somehow make the
> dict-building aspect of the switch statement explicit (which is better
> than implicit, as we all have been taught.)
>
> My version of this is to add to Python the notion of a simple
> old-fashioned subroutine - that is, a function with no arguments and no
> additional scope, which can be referred to by name. For example:
>
> def MyFunc( x ):
>     sub case_1:
>        ...
>
>     sub case_2:
>        ...
>
>     sub case_3:
>        ...
>
>     # A direct call to the subroutine:
>     do case_1
>
>     # An indirect call
>     y = case_2
>     do y
>
>     # A dispatch through a dict
>     d = dict( a=case_1, b=case_2, c=case_3 )
>     do d[ 'a' ]

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From fredrik at pythonware.com  Wed Jun 28 16:46:32 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Wed, 28 Jun 2006 16:46:32 +0200
Subject: [Python-Dev] School IIb?
References: <ca471dc20606261223g55126e1fg457c30049dbbf34b@mail.gmail.com><Pine.LNX.4.58.0606261438120.17937@server1.LFW.org><ca471dc20606261306l68be4aa9m867efab4403362dc@mail.gmail.com><Pine.LNX.4.58.0606261521270.17937@server1.LFW.org><ca471dc20606261457x423bfa23y120d1c2c61388110@mail.gmail.com><Pine.LNX.4.58.0606261738370.17937@server1.LFW.org><5.1.1.6.0.20060626185852.03aacf18@sparrow.telecommunity.com><e7tf63$tv6$1@sea.gmane.org>
	<ca471dc20606280733m75d8a374ica8ddbff4f40224b@mail.gmail.com>
Message-ID: <e7u4o8$937$1@sea.gmane.org>

Guido van Rossum wrote:

>> that's not true for all programming languages that have a switch construct, though;
>> the common trait is that you're dispatching on a single value, not necessarily that
>> there cannot be potentially overlapping case conditions.
>
> You have a point.

that can happen to the best of us ;-)

> Suppose you're switching on some os-specific constants (e.g. exported
> by the os module or some module like that). You have a case for each.
> But on some os, two different constants have the same value (since on
> that os they are implemented the same way -- like O_TEXT and O_BINARY
> on Unix). Now your switch wouldn't work at all on that os; it would be
> much better if you could arrange the cases so that one case has
> preference over another.
>
> There's also the (more likely) use case where you have a set of cases
> to be treated the same, but one member of the set must be treated
> differently. It would be convenient to put the exception in an earlier
> case and be done with it.

same approach as for try/except, in other words.

> Yet, it seems a shame not to be able to diagnose dead code due to
> accidental case duplication. Maybe that's less important, and
> pychecker can deal with it? After all we don't diagnose duplicate
> method definitions either, and that must have bitten many of us
> (usually due to a copy-and-paste error)...

we could use a warning for this...

> This doesn't move me to school I. But I do want to introduce school
> IIb which resolves redundant cases by saying the first match wins.
> This is trivial to implement when building the dispatch dict (skip
> keys already present).

I just wish I could figure out what school my original micro-PEP belongs
to (but as long as my implementation note is still just a draft, I guess
nobody else can figure that out either... ;-)

</F> 




From guido at python.org  Wed Jun 28 17:00:25 2006
From: guido at python.org (Guido van Rossum)
Date: Wed, 28 Jun 2006 08:00:25 -0700
Subject: [Python-Dev] PEP 3103: A Switch/Case Statement
In-Reply-To: <44A2363D.8090300@gmail.com>
References: <ca471dc20606261223g55126e1fg457c30049dbbf34b@mail.gmail.com>
	<44A13461.508@gmail.com>
	<ca471dc20606270835g65d11893m1c069f9c0d003ba7@mail.gmail.com>
	<44A2363D.8090300@gmail.com>
Message-ID: <ca471dc20606280800n7b06ca38u7af4069af612000@mail.gmail.com>

On 6/28/06, Nick Coghlan <ncoghlan at gmail.com> wrote:
> Guido van Rossum wrote:
> > I think we all agree
> > that side effects of case expressions is one way how we can deduce the
> > compiler's behind-the-scenes tricks (even School Ib is okay with
> > this). So I don't accept this as proof that Option 2 is better.
>
> OK, I worked out a side effect free example of why I don't like option 3:
>
>    def outer(cases=None):
>        def inner(option, force_default=False):
>            if cases is not None and not force_default:
>                switch option:
>                    case in cases[0]:
>                        # case 0 handling
>                    case in cases[1]:
>                        # case 1 handling
>                    case in cases[2]:
>                        # case 2 handling
>            # Default handling
>        return inner
>
> I believe it's reasonable to expect this to work fine - the case expressions
> don't refer to any local variables, and the subscript operations on the
> closure variable are protected by a sanity check to ensure that variable isn't
> None.

It's only reasonable if you're in school I.

As I have repeatedly said, the only use cases I care about are those
where the case expressions are constants for the lifetime of the
process. (The compiler doesn't need to know this but the programmer
does.)

> There certainly isn't anything in the code above to suggest to a reader that
> the condition attempting to guard evaluation of the switch statement might not
> do its job.
>
> With first-time-execution jump table evaluation, there's no problem - when the
> closure variable is None, there's no way to enter the body of the if
> statement, so the switch statement is never executed and the case expressions
> are never evaluated. Such functions will still be storing a cell object for
> the switch's jump table, but it will always be empty because the code to
> populate it never gets a chance to run.
>
> With the out of order execution involved in def-time evaluation, however, the
> case expressions would always be executed, even though the inner function is
> trying to protect them with a sanity check on the value of the closure variable.
>
> Using Option 3 semantics would mean that calling "outer()" given the above
> function definition will give you the rather surprising result "TypeError:
> 'NoneType' object is unsubscriptable", with a traceback pointing to the line
> "case cases[0]:" in the body of a function that hasn't been called, and that
> includes an if statement preventing that line from being reached when 'cases'
> is None.

That's a perfectly reasonable outcome to me.

> >> When it comes to the question of "where do we store the result?" for the
> >> first-execution calculation of the jump table, my proposal is "a
> >> hidden cell
> >> in the current namespace".
> >
> > Um, what do you mean by the current namespace? You can't mean the
> > locals of the function containing the switch. There aren't always
> > outer functions so I must conclude you mean the module globals. But
> > I've never seen those referred to as "the current namespace".
>
> By 'current namespace' I really do mean locals() - the cell objects themselves
> would be local variables from the point of view of the currently executing code.
>
> For functions, the cell objects would be created at function definition time,
> for code handled via exec-style execution, they'd be created just before
> execution of the first statement begins. In either case, the cell objects
> would already be in locals() before any bytecode gets executed.
>
> It's only the calculation of the cell *contents* that gets deferred until
> first execution of the switch statement.
>
> > So do I understand that the switch gets re-initialized whenever a new
> > function object is created? That seems a violation of the "first time
> > executed" rule, or at least a modification ("first time executed per
> > defined function"). Or am I misunderstanding?
>
> I took it as a given that 'first time execution' had to be per function
> and/or invocation of exec - tying caching of expressions that rely on module
> globals or closure variables to code objects doesn't make any sense, because
> the code object may have different globals and/or closure variables next time
> it gets executed.
>
> I may not have explained my opinion about that very well though, because the
> alternative didn't even seem to be an option.

PEP 3103 discusses several ways to implement first-time-really.

I suggest that you edit the PEP to add option 2a which is
first-time-per-function-definition.

> > But if I have a code object c containing a switch statement (not
> > inside a def) with a side effect in one of its cases, the side effect
> > is activated each time through the following loop, IIUC:
> >
> >  d = {}
> >  for i in range(10):
> >    exec c in d
>
> Yep. For module and class level code, the caching really only has any
> speed benefit if the switch statement is inside a loop.
>
> The rationale for doing it that way becomes clearer if you consider what would
> happen if you created a new dictionary each time through the loop:
>
>    for i in range(10):
>        d = {}
>        exec c in d
>        print d["result"]
>
> > I'm confused how you can first argue that tying things to the function
> > definition is one of the main drawbacks of Option 3, and then proceed
> > to tie Option 2 to the function definition as well. This sounds like
> > by far the most convoluted specification I have seen so far. I hope
> > I'm misunderstanding what you mean by namespace.
>
> It's not the link to function definitions that I object to in Option 3, it's
> the idea of evaluating the cases at function definition *time*. I believe the
> out-of-order execution involved will result in too many surprises when you
> start considering surrounding control flow statements that lead to the switch
> statement not being executed at all.
>
> If a switch statement is inside a class statement, a function definition
> statement, or an exec statement then I still expect the jump table to be
> recalculated every time the containing statement is executed, regardless of
> whether Option 2 or Option 3 is used for when the cases expressions get
> evaluated (similarly, reloading a module would recalculate any module level
> jump tables)
>
> And I agree my suggestions are the most involved so far, but I think that's
> because the current description of option 3 is hand-waving away a couple of
> important issues:
>    - how does it deal with module and class level code?

Not so much hand-waving as several possibilities, each of which is
clearly defined and has some (dis)advantages.

>    - how does it deal with switch statements that are inside conditional logic

No handwaving here -- these are still frozen.

> where that conditional logic determines whether or not the case
> expressions can be safely evaluated?

That would only matter for non-constant cases, a use case that I reject.

> (I guess the fact that I'm refining the idea while writing about it doesn't
> really help, either. . .)

We're all doing that, so no problem.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From anthony at interlink.com.au  Wed Jun 28 17:16:35 2006
From: anthony at interlink.com.au (Anthony Baxter)
Date: Thu, 29 Jun 2006 01:16:35 +1000
Subject: [Python-Dev] xturtle.py a replacement for turtle.py(!?)
In-Reply-To: <44A26093.8070503@aon.at>
References: <44A26093.8070503@aon.at>
Message-ID: <200606290116.38007.anthony@interlink.com.au>

On Wednesday 28 June 2006 20:57, Gregor Lingl wrote:
> I would very much appreciate if xturtle.py could go into
> Python 2.5

Unfortunately Python 2.5b1 came out last week. Now that we're in beta, 
we're feature frozen (unless some horrible issue comes up that means 
we really need to do a feature change). This looks very nice, but 
it's going to have to wait until 2.6 :-(

Sorry. Timing is everything.

For others reading along at home - I kinda think that the 
post-beta feature freeze is a similar sort of freeze to the one we have for 
bugfix releases (maybe _slightly_ lower hurdles for a new feature, 
but only just). Does this seem reasonable? If so, I'll add a note to 
the [still unfinished :-(] PEP 101 rewrite. 

Anthony 
-- 
Anthony Baxter     <anthony at interlink.com.au>
It's never too late to have a happy childhood.

From anthony at interlink.com.au  Wed Jun 28 17:20:04 2006
From: anthony at interlink.com.au (Anthony Baxter)
Date: Thu, 29 Jun 2006 01:20:04 +1000
Subject: [Python-Dev] [Python-checkins] r47142 - in python/trunk:
	Doc/lib/librunpy.tex Lib/runpy.py Lib/test/test_runpy.py
In-Reply-To: <20060628104148.362A01E4004@bag.python.org>
References: <20060628104148.362A01E4004@bag.python.org>
Message-ID: <200606290120.06338.anthony@interlink.com.au>

On Wednesday 28 June 2006 20:41, nick.coghlan wrote:
> Author: nick.coghlan
> Date: Wed Jun 28 12:41:47 2006
> New Revision: 47142
>
> Modified:
>    python/trunk/Doc/lib/librunpy.tex
>    python/trunk/Lib/runpy.py
>    python/trunk/Lib/test/test_runpy.py
> Log:
> Make full module name available as __module_name__ even when
> __name__ is set to something else (like '__main__')

Er. Um. Feature freeze? 

Anthony
-- 
Anthony Baxter     <anthony at interlink.com.au>
It's never too late to have a happy childhood.

From glingl at aon.at  Wed Jun 28 18:10:19 2006
From: glingl at aon.at (Gregor Lingl)
Date: Wed, 28 Jun 2006 18:10:19 +0200
Subject: [Python-Dev] xturtle.py a replacement for turtle.py(!?)
In-Reply-To: <200606290116.38007.anthony@interlink.com.au>
References: <44A26093.8070503@aon.at>
	<200606290116.38007.anthony@interlink.com.au>
Message-ID: <44A2A9EB.6050802@aon.at>

Anthony Baxter schrieb:
> On Wednesday 28 June 2006 20:57, Gregor Lingl wrote:
>   
>> I would very much appreciate if xturtle.py could go into
>> Python 2.5
>>     
>
> Unfortunately Python 2.5b1 came out last week. Now that we're in beta, 
> we're feature frozen (unless some horrible issue comes up that means 
> we really need to do a feature change). This looks very nice, but 
> it's going to have to wait until 2.6 :-(
>
> Sorry. Timing is everything.
>   

So you mean that will take at least one more year? Not fine.

I wonder whether this xturtle.py really counts as a Python feature.
When Vern Ceder did his patch of turtle.py some weeks ago, a discussion
arose about whether a PEP was necessary to change turtle.py, and the
general opinion in that discussion was no. So I thought that turtle
graphics is a marginal element in the Python standard distribution.
(I thought something like features get defined in PEPs.)

Already now, only one week after publishing it, I have had some very
positive feedback and people are starting to use it. So I think there is
some demand for it.
Moreover, I think it could also help convince more teachers to use
Python as a first language.

Could you imagine - downgrading its 'featureness' - putting it into 2.5.1
or something like that?

I'll be giving a talk at EuroPython 2006 on July 5th about xturtle - an
opportunity if somebody feels it's worth discussing it.

Regards,
Gregor
 


From jimjjewett at gmail.com  Wed Jun 28 18:12:31 2006
From: jimjjewett at gmail.com (Jim Jewett)
Date: Wed, 28 Jun 2006 12:12:31 -0400
Subject: [Python-Dev] once [was: Simple Switch statementZ]
Message-ID: <fb6fbf560606280912v127b8ccal6f65c7fa4e2b2ab5@mail.gmail.com>

On 6/25/06, Ka-Ping Yee <python-dev at zesty.ca> wrote:

>     def f(x):
>         def g(y):
>             return y + once x
>         return g

> Does "once" mean not really once here, but "once for each new function
> object that's created for g"?

Until today, it hadn't really occurred to me that once could mean once
per module load rather than once per defining scope.  I suppose that
is reasonable if the values really are constant, but most of the
concerns are about what to do when this assumption is violated.  It
does add a bit of funny flow-control, though.

    def f():
        def g():
            def h():
                once x # or switch

Normally, there wouldn't be any need to even look inside g (let alone
h) at module load time, because the definition of f was run, but the
definitions of g and h were not.  With module-level once, x is
implicitly a module-level variable despite the nesting.

Guido:

> He specifically wants the latter semantics because it solves the
> problem of binding the value of a loop control variable in an outer
> scope:

Not really.  To solve the loop control problem (where the "constant"
is certainly not a run-time constant), a once variable also has to be
eagerly evaluated.  (function definition time?)

Nick suggested using once to delay computation of expensive defaults.
This means that even if every generated function has its own once
variable, none of those variables would be bound to any specific value
until they are called -- by which time the loop variable may well be
rebound.

-jJ

From fredrik at pythonware.com  Wed Jun 28 18:19:17 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Wed, 28 Jun 2006 18:19:17 +0200
Subject: [Python-Dev] xturtle.py a replacement for turtle.py(!?)
References: <44A26093.8070503@aon.at><200606290116.38007.anthony@interlink.com.au>
	<44A2A9EB.6050802@aon.at>
Message-ID: <e7ua65$v2j$1@sea.gmane.org>

Gregor Lingl wrote:

> Already now, only one week after publishing it I have some very positive
> feedback and people start to use it. So I think there is some demand for
> it.

some demand != should be added to the core distribution a few days after
its first release. (and if everything that someone somewhere is using should
be added to the core, a stock Python distro wouldn't fit on a DVD...)

> Moreover I think it could also help to convince more teachers to use
> Python as a first language.

and teachers won't be able to install a second package?  Sorry, but I don't
believe that for a second.

</F> 




From rhettinger at ewtllc.com  Wed Jun 28 18:26:41 2006
From: rhettinger at ewtllc.com (Raymond Hettinger)
Date: Wed, 28 Jun 2006 09:26:41 -0700
Subject: [Python-Dev] xturtle.py - a replacement for turtle.py
In-Reply-To: <44A26B2F.3090700@aon.at>
References: <44A26B2F.3090700@aon.at>
Message-ID: <44A2ADC1.6060704@ewtllc.com>

Gregor Lingl wrote:

>I would appreciate it very much if xturtle.py could go into Python2.5. 
>  
>

+1  The need for turtle.py improvements was discussed at the last 
PyCon.  It would be a nice plus for people teaching programming to kids.


In theory, it is a little late to be adding new modules.  In practice, 
it's probably not a problem.  Next time, type faster ;-)


Raymond


From jcarlson at uci.edu  Wed Jun 28 18:41:29 2006
From: jcarlson at uci.edu (Josiah Carlson)
Date: Wed, 28 Jun 2006 09:41:29 -0700
Subject: [Python-Dev] PEP 3103: A Switch/Case Statement
In-Reply-To: <44A2380F.6000503@acm.org>
References: <20060627233313.1090.JCARLSON@uci.edu> <44A2380F.6000503@acm.org>
Message-ID: <20060628090725.1099.JCARLSON@uci.edu>


Talin <talin at acm.org> wrote:
> Josiah Carlson wrote:
> > Talin <talin at acm.org> wrote:
> > 
> >>My version of this is to add to Python the notion of a simple 
> >>old-fashioned subroutine - that is, a function with no arguments and no 
> >>additional scope, which can be referred to by name. For example:
> > 
> > 
> > I don't like the idea of an embedded subroutine for a few reasons.  One
> > of them is that, because you need to define the case -> sub mapping
> > dictionaries in each pass, you get no improvement in speed
> > (which is a motivating factor in this discussion).  Even worse, the
> > disconnect between case definition and dispatch makes it feel quite a
> > bit like a modified label/goto proposal.  The ultimate killer is that
> > your proposed syntax (even using def) makes this construct less readable
> > than pretty much any if/elif/else chain I have ever seen.
> > 
> >  - Josiah
> 
> The case -> sub mapping doesn't need to be defined every time - that's 
> the point, you as the programmer decide when and how to construct the 
> dictionary, rather than the language trying to guess what it is you 
> want. EIBTI.

Beautiful is better than ugly.

> You could also define the switch in an outer function that contains an 
> inner function that is called multiple times:
> 
>     def Outer():
>        sub S1:
>           ...
> 
>        sub S2:
>           ...
> 
>        sub S3:
>           ...
> 
>        dispatch = {
>           parser.IDENT: S1,
>           parser.NUMBER: S2,
>           parser.COMMENT: S3
>        }
> 
>        def Inner( x ):
>           do dispatch[ x ]
> 
>        return Inner

This allows direct access to a namespace that was previously read-only
from other namespaces (right now closure namespaces themselves are
read-only, though the objects they contain need not be). ...
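
A small runnable illustration of that point (rebinding a closed-over
name fails, while mutating an object held in the closure still works):

    def counter_broken():
        count = 0
        def bump():
            count = count + 1   # rebinding makes 'count' local; the closure value is unreachable
            return count
        return bump

    try:
        counter_broken()()
    except UnboundLocalError:
        pass                    # rebinding a closed-over name is not allowed

    def counter_works():
        box = [0]               # a mutable object held in the closure
        def bump():
            box[0] += 1         # mutating the object is fine
            return box[0]
        return bump

    bump = counter_works()
    assert bump() == 1 and bump() == 2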


> There is also the possibility of building the dict before the function 
> is run, however this requires a method of peeking into the function body 
> and extracting the definitions there. For example, suppose the 
> subroutine names were also attributes of the function object:
> 
>     def MyFunc( x ):
>        sub upper:
>           ...
>        sub lower:
>           ...
>        sub control:
>           ...
>        sub digit:
>           ...
> 
>        do dispatch[ x ]
> 
> 
>     # Lets use an array this time, for variety
>     dispatch = [
>        MyFunc.upper,
>        MyFunc.lower,
>        MyFunc.upper, # Yes, 2 and 3 are the same as 0 and 1
>        MyFunc.lower,
>        MyFunc.control,
>        MyFunc.digit,
>     ]

... One of my other desires for switch/case or its equivalent is that of
encapsulation.  Offering such access from outside or inside the function
violates what Python has currently defined as its mode of operations for
encapsulation.


> With regards to your second and third points: sure, I freely admit that 
> this proposal is less readable than a switch statement. The question is, 
> however, is it more readable than what we have *now*? As I have 
> explained, comparing it to if/elif/else chains is unfair, because they 
> don't have equivalent performance. The real question is, is it more 
> readable than, say, a dictionary of references to individual functions; 
> and I think that there are a number of possible use cases where the 
> answer would be 'yes'.

Why is the comparison against if/elif/else unfair, regardless of speed? 
We've been comparing switch/case to if/elif/else from a speed
perspective certainly, stating that it must be faster (hopefully O(1)
rather than O(n)), but that hasn't been the only discussion.  In fact,
one of the reasons we are considering switch/case is because readability
still counts, and people coming from C, etc., are familiar with it. Some
find switch/case significantly easier to read, I don't, but I also don't
find it significantly harder to read.

On the other hand, if I found someone using sub in a bit of Python code,
I'd probably cry, then rewrite the thing using if/elif/else. If I was
feisty, I'd probably do some branch counting and reorder the tests, but
I would never use subs.


> I think that language features should "just work" in all cases, or at 
> least all cases that are reasonable. I don't like the idea of a switch 
> statement that is hedged around with unintuitive exceptions and strange 
> corner cases.

And I don't like the idea of making my code ugly.  I would honestly
rather have no change than to have sub/def+do.

 - Josiah


From guido at python.org  Wed Jun 28 18:40:15 2006
From: guido at python.org (Guido van Rossum)
Date: Wed, 28 Jun 2006 09:40:15 -0700
Subject: [Python-Dev] once [was: Simple Switch statementZ]
In-Reply-To: <fb6fbf560606280912v127b8ccal6f65c7fa4e2b2ab5@mail.gmail.com>
References: <fb6fbf560606280912v127b8ccal6f65c7fa4e2b2ab5@mail.gmail.com>
Message-ID: <ca471dc20606280940x3871a0a5i23c33e315a3d520a@mail.gmail.com>

On 6/28/06, Jim Jewett <jimjjewett at gmail.com> wrote:
> On 6/25/06, Ka-Ping Yee <python-dev at zesty.ca> wrote:
>
> >     def f(x):
> >         def g(y):
> >             return y + once x
> >         return g
>
> > Does "once" mean not really once here, but "once for each new function
> > object that's created for g"?
>
> Until today, it hadn't really occurred to me that once could mean once
> per module load rather than once per defining scope.

Funny. Until today (in a different post) it hadn't occurred to me that
the proponents of "first-use" switch evaluation were talking about
first-use within a function object. I guess the words are ambiguous.

I'm not really a proponent of "once-per-module-scope" semantics in
either case, so I'll gladly drop this issue.

> I suppose that
> is reasonable if the values really are constant, but most of the
> concerns are about what to do when this assumption is violated.  It
> does add a bit of funny flow-control, though.
>
>     def f():
>         def g():
>             def h():
>                 once x # or switch
>
> Normally, there wouldn't be any need to even look inside g (let alone
> h) at module load time, because the definition of f was run, but the
> definitions of g and h were not.  With module-level once, x is
> implicitly a module-level variable despite the nesting.
>
> Guido:
>
> > He specifically wants the latter semantics because it solves the
> > problem of binding the value of a loop control variable in an outer
> > scope:
>
> Not really.  To solve the loop control problem (where the "constant"
> is certainly not a run-time constant), a once variable also has to be
> eagerly evaluated.  (function definition time?)
>
> Nick suggested using once to delay computation of expensive defaults.
> This means that even if every generated function has its own once
> variable, none of those variables would be bound to any specific value
> until they are called -- by which time the loop variable may well be
> rebound.

Hm. We couldn't use this interpretation of 'once' to capture the value
of a loop variable in a nested function. Recall the typical example;
the goal is to return a list of argument-less functions that return 0,
1, 2, corresponding to their position in the list. The naive approach
is

  def index_functions(n):
    return [(lambda: i) for i in range(n)]

This returns a list of n functions that each return the final value
of 'i', i.e. n-1 (so all 9s when n is 10).

The current fix is

  def index_functions(n):
    return [(lambda i=i: i) for i in range(n)]

which works but has the disadvantage of returning a list of functions
of 0 or 1 argument.
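
A quick runnable illustration of the difference, for n == 3:

    def naive_index_functions(n):
        # late binding: every lambda looks up 'i' when called, after the loop is done
        return [(lambda: i) for i in range(n)]

    def fixed_index_functions(n):
        # the default argument captures the value of 'i' at definition time
        return [(lambda i=i: i) for i in range(n)]

    assert [f() for f in naive_index_functions(3)] == [2, 2, 2]
    assert [f() for f in fixed_index_functions(3)] == [0, 1, 2]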

I believe at least one poster has pointed out that 'once' (if defined
suitably) could be used as a better way to do this:

  def index_functions(n):
    return [(lambda: once i) for i in range(n)]

But delaying the evaluation of the once argument until the function is
called would break this, since none of these functions are called
until after the loop is over, so the original bug would be back.

Perhaps 'once' is too misleading a name, given the confusion you
alluded to earlier. Maybe we could use 'capture' instead? A capture
expression would be captured at every function definition time,
period. Capture expressions outside functions would be illegal or
limited to compile-time constant expressions (unless someone has a
better idea). A capture expression inside "if 0:" would still be
captured to simplify the semantics (unless the compiler can prove that
it has absolutely no side effects).

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Wed Jun 28 18:41:33 2006
From: guido at python.org (Guido van Rossum)
Date: Wed, 28 Jun 2006 09:41:33 -0700
Subject: [Python-Dev] School IIb?
In-Reply-To: <e7u4o8$937$1@sea.gmane.org>
References: <ca471dc20606261223g55126e1fg457c30049dbbf34b@mail.gmail.com>
	<Pine.LNX.4.58.0606261438120.17937@server1.LFW.org>
	<ca471dc20606261306l68be4aa9m867efab4403362dc@mail.gmail.com>
	<Pine.LNX.4.58.0606261521270.17937@server1.LFW.org>
	<ca471dc20606261457x423bfa23y120d1c2c61388110@mail.gmail.com>
	<Pine.LNX.4.58.0606261738370.17937@server1.LFW.org>
	<5.1.1.6.0.20060626185852.03aacf18@sparrow.telecommunity.com>
	<e7tf63$tv6$1@sea.gmane.org>
	<ca471dc20606280733m75d8a374ica8ddbff4f40224b@mail.gmail.com>
	<e7u4o8$937$1@sea.gmane.org>
Message-ID: <ca471dc20606280941l65839663v6316eb34997aced4@mail.gmail.com>

On 6/28/06, Fredrik Lundh <fredrik at pythonware.com> wrote:
> I just wish I could figure out what school my original micro-PEP belongs
> to (but as long as my implementation note is still just a draft, I guess no-
> body else can figure that out either... ;-)

There aren't just schools; there are alternatives (1-4 and A-D) and
options (1-4). :-)

Please publish your implementation! (Again if I just missed it.)

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Wed Jun 28 18:45:24 2006
From: guido at python.org (Guido van Rossum)
Date: Wed, 28 Jun 2006 09:45:24 -0700
Subject: [Python-Dev] PEP 328 and PEP 338, redux
In-Reply-To: <44A281B2.2080309@gmail.com>
References: <44A11EA1.1000605@iinet.net.au>
	<ca471dc20606270808p4fe32945lf6019005bc3b054f@mail.gmail.com>
	<44A22C81.5070701@gmail.com> <44A281B2.2080309@gmail.com>
Message-ID: <ca471dc20606280945j5891a132g6aad2d1c49b6c986@mail.gmail.com>

On 6/28/06, Nick Coghlan <ncoghlan at gmail.com> wrote:

> The workaround to replace __name__ with __module_name__ in order to enable
> relative imports turned out to be pretty ugly, so I also worked up a patch to
> import.c to get it to treat __module_name__ as an override for __name__ when
> __name__ == '__main__'.

Ah, clever. +1.

> So given a test_foo.py that started like this:
>
>    import unittest
>    import ..foo

Um, that's not legal syntax last I looked. Leading dots can only be
used in "from ... import". Did you change that too? I really hope you
didn't!

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From jcarlson at uci.edu  Wed Jun 28 18:48:41 2006
From: jcarlson at uci.edu (Josiah Carlson)
Date: Wed, 28 Jun 2006 09:48:41 -0700
Subject: [Python-Dev] xturtle.py a replacement for turtle.py(!?)
In-Reply-To: <44A2A9EB.6050802@aon.at>
References: <200606290116.38007.anthony@interlink.com.au>
	<44A2A9EB.6050802@aon.at>
Message-ID: <20060628094550.109C.JCARLSON@uci.edu>


Gregor Lingl <glingl at aon.at> wrote:
> Could you imagine - downgrading it's 'featureness' - to put it into 2.5.1
> or something like this?

Changing features/abilities of Python in micro releases (2.5 -> 2.5.1),
aside from bugfixes, is a no-no.  See the Python 2.2 -> 2.2.1
availability of True/False for an example of where someone made a
mistake in doing this.

 - Josiah


From guido at python.org  Wed Jun 28 18:49:23 2006
From: guido at python.org (Guido van Rossum)
Date: Wed, 28 Jun 2006 09:49:23 -0700
Subject: [Python-Dev] doc for new restricted execution design for Python
In-Reply-To: <fb6fbf560606280642r166514d8uba8876b97b76a0e1@mail.gmail.com>
References: <fb6fbf560606270750l29c0c5a8s521f121518cbbc67@mail.gmail.com>
	<bbaeab100606270847p6b210a86t88c091835e4ff501@mail.gmail.com>
	<ee2a432c0606272044x38efccaexe4c32b97d6005ff6@mail.gmail.com>
	<fb6fbf560606280642r166514d8uba8876b97b76a0e1@mail.gmail.com>
Message-ID: <ca471dc20606280949x7cb5a096q3b235bd0fc96a905@mail.gmail.com>

On 6/28/06, Jim Jewett <jimjjewett at gmail.com> wrote:
> On 6/27/06, Neal Norwitz <nnorwitz at gmail.com> wrote:
> > On 6/27/06, Brett Cannon <brett at python.org> wrote:
> > >
> > > > (5)  I think file creation/writing should be capped rather than
> > > > binary; it is reasonable to say "You can create a single temp file up
> > > > to 4K" or "You can create files, but not more than 20Meg total".
>
> > > That has been suggested before.  Anyone else like this idea?
>
> > [ What exactly does the limit mean?  bytes written?  bytes currently stored?  bytes stored after exit?]
>
> IMHO, I would prefer that it limit disk consumption; a deleted or
> overwritten file would not count against the process, but even a
> temporary spike would need to be less than the cap.

Some additional notes:

- File size should be rounded up to some block size (512 if you don't
have filesystem specific information) before adding to the total.

- Total number of files (i.e. inodes) in existence should be capped, too.

- If sandboxed code is allowed to create directories, the total depth
and the total path length should also be capped.
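
To make the accounting concrete, a rough sketch of what such bookkeeping
could look like (the class name and the numbers are purely illustrative,
not a proposed API):

    BLOCK_SIZE = 512                  # fallback when no filesystem-specific size is known

    class DiskQuota(object):
        def __init__(self, max_bytes, max_files):
            self.max_bytes = max_bytes
            self.max_files = max_files
            self.used_bytes = 0
            self.num_files = 0

        def charge_new_file(self, size):
            # round the size up to a whole block before adding to the total
            charged = ((size + BLOCK_SIZE - 1) // BLOCK_SIZE) * BLOCK_SIZE
            if self.num_files + 1 > self.max_files:
                raise IOError("too many files in the sandbox")
            if self.used_bytes + charged > self.max_bytes:
                raise IOError("sandbox disk quota exceeded")
            self.num_files += 1
            self.used_bytes += charged

    quota = DiskQuota(max_bytes=20 * 1024 * 1024, max_files=100)
    quota.charge_new_file(1)          # a 1-byte file still costs one full block
    assert quota.used_bytes == 512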

(I find reading about trusted and untrusted code confusing; a few
times I've had to read a sentence three times before realizing I had
swapped those two words. Perhaps we can distinguish between trusted
and sandboxed? Or even native and sandboxed?)

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From brett at python.org  Wed Jun 28 18:50:42 2006
From: brett at python.org (Brett Cannon)
Date: Wed, 28 Jun 2006 09:50:42 -0700
Subject: [Python-Dev] doc for new restricted execution design for Python
In-Reply-To: <ee2a432c0606272044x38efccaexe4c32b97d6005ff6@mail.gmail.com>
References: <fb6fbf560606270750l29c0c5a8s521f121518cbbc67@mail.gmail.com>
	<bbaeab100606270847p6b210a86t88c091835e4ff501@mail.gmail.com>
	<ee2a432c0606272044x38efccaexe4c32b97d6005ff6@mail.gmail.com>
Message-ID: <bbaeab100606280950u7cbea49bk7890d70388af44c1@mail.gmail.com>

On 6/27/06, Neal Norwitz <nnorwitz at gmail.com> wrote:
>
> On 6/27/06, Brett Cannon <brett at python.org> wrote:
> >
> > > (5)  I think file creation/writing should be capped rather than
> > > binary; it is reasonable to say "You can create a single temp file up
> > > to 4K" or "You can create files, but not more than 20Meg total".
> >
> > That has been suggested before.  Anyone else like this idea?
>
> What would this code do:
>
>     MAX = 4
>     for i in xrange(10):
>       fp = open(str(i), 'w+')
>       fp.write(' ' * (MAX // 4))
>       fp.close()
>       if i % 2:
>           os.unlink(str(i))


First of all, it would require that the file names have been cleared for
writing.  Otherwise an exception will be thrown the first time open() is
called.  Second, the os.unlink() will fail unless you whitelist your
platform's OS-specific module that is used by 'os' (e.g., posix).

> How many times should this execute, 4 or 8?  What about if there is no
> if i % 2 and the file is unlinked at the end of each loop?  Should
> that loop 10 times without error?  What would happen if we used the
> same file name?  What would happen if we did something like:
>
>     fp = open(str(i), 'w+')
>     MAX = 4
>     for i in xrange(10000):
>       fp.seek(0)
>       fp.write(' ' * (MAX // 4))
>
> Should this succeed?


If I put in any cap, I would make it universal for *all* disk writes (and
probably do the same for network sends).
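
As a rough sketch of what a universal write cap could look like (the
wrapper below is purely illustrative, not part of the actual design):

    import os

    class CappedWriter(object):
        total_written = 0                    # shared across every wrapped file
        max_total = 20 * 1024 * 1024         # e.g. a 20 MB cap on all writes

        def __init__(self, fileobj):
            self._file = fileobj

        def write(self, data):
            cls = CappedWriter
            if cls.total_written + len(data) > cls.max_total:
                raise IOError("sandbox write cap exceeded")
            cls.total_written += len(data)
            return self._file.write(data)

        def __getattr__(self, name):         # delegate everything else to the real file
            return getattr(self._file, name)

    f = CappedWriter(open(os.devnull, "w"))
    f.write("x" * 1024)                      # counted against the shared cap
    f.close()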

-Brett

From guido at python.org  Wed Jun 28 18:50:57 2006
From: guido at python.org (Guido van Rossum)
Date: Wed, 28 Jun 2006 09:50:57 -0700
Subject: [Python-Dev] PEP 3103: A Switch/Case Statement
In-Reply-To: <20060628090725.1099.JCARLSON@uci.edu>
References: <20060627233313.1090.JCARLSON@uci.edu> <44A2380F.6000503@acm.org>
	<20060628090725.1099.JCARLSON@uci.edu>
Message-ID: <ca471dc20606280950i4bcc522leda3325e052d516d@mail.gmail.com>

Let's just drop the switchable subroutine proposal. It's not viable.

On 6/28/06, Josiah Carlson <jcarlson at uci.edu> wrote:
>
> Talin <talin at acm.org> wrote:
> > Josiah Carlson wrote:
> > > Talin <talin at acm.org> wrote:
> > >
> > >>My version of this is to add to Python the notion of a simple
> > >>old-fashioned subroutine - that is, a function with no arguments and no
> > >>additional scope, which can be referred to by name. For example:
> > >
> > >
> > > I don't like the idea of an embedded subroutine for a few reasons.  One
> > > of them is that, because you need to define the case -> sub mapping
> > > dictionaries in each pass, you get no improvement in speed
> > > (which is a motivating factor in this discussion).  Even worse, the
> > > disconnect between case definition and dispatch makes it feel quite a
> > > bit like a modified label/goto proposal.  The ultimate killer is that
> > > your proposed syntax (even using def) makes this construct less readable
> > > than pretty much any if/elif/else chain I have ever seen.
> > >
> > >  - Josiah
> >
> > The case -> sub mapping doesn't need to be defined every time - that's
> > the point, you as the programmer decide when and how to construct the
> > dictionary, rather than the language trying to guess what it is you
> > want. EIBTI.
>
> Beautiful is better than ugly.
>
> > You could also define the switch in an outer function that contains an
> > inner function that is called multiple times:
> >
> >     def Outer():
> >        sub S1:
> >           ...
> >
> >        sub S2:
> >           ...
> >
> >        sub S3:
> >           ...
> >
> >        dispatch = {
> >           parser.IDENT: S1,
> >           parser.NUMBER: S2,
> >           parser.COMMENT: S3
> >        }
> >
> >        def Inner( x ):
> >           do dispatch[ x ]
> >
> >        return Inner
>
> This allows direct access to a namespace that was previously read-only
> from other namespaces (right now closure namespaces themselves are
> read-only, though the objects they contain need not be). ...
>
>
> > There is also the possibility of building the dict before the function
> > is run, however this requires a method of peeking into the function body
> > and extracting the definitions there. For example, suppose the
> > subroutine names were also attributes of the function object:
> >
> >     def MyFunc( x ):
> >        sub upper:
> >           ...
> >        sub lower:
> >           ...
> >        sub control:
> >           ...
> >        sub digit:
> >           ...
> >
> >        do dispatch[ x ]
> >
> >
> >     # Lets use an array this time, for variety
> >     dispatch = [
> >        MyFunc.upper,
> >        MyFunc.lower,
> >        MyFunc.upper, # Yes, 2 and 3 are the same as 0 and 1
> >        MyFunc.lower,
> >        MyFunc.control,
> >        MyFunc.digit,
> >     ]
>
> ... One of my other desires for switch/case or its equivalent is that of
> encapsulation.  Offering such access from outside or inside the function
> violates what Python has currently defined as its mode of operations for
> encapsulation.
>
>
> > With regards to your second and third points: sure, I freely admit that
> > this proposal is less readable than a switch statement. The question is,
> > however, is it more readable than what we have *now*? As I have
> > explained, comparing it to if/elif/else chains is unfair, because they
> > don't have equivalent performance. The real question is, is it more
> > readable than, say, a dictionary of references to individual functions;
> > and I think that there are a number of possible use cases where the
> > answer would be 'yes'.
>
> Why is the comparison against if/elif/else unfair, regardless of speed?
> We've been comparing switch/case to if/elif/else from a speed
> perspective certainly, stating that it must be faster (hopefully O(1)
> rather than O(n)), but that hasn't been the only discussion.  In fact,
> one of the reasons we are considering switch/case is because readability
> still counts, and people coming from C, etc., are familiar with it. Some
> find switch/case significantly easier to read, I don't, but I also don't
> find it significantly harder to read.
>
> On the other hand, if I found someone using sub in a bit of Python code,
> I'd probably cry, then rewrite the thing using if/elif/else. If I was
> feisty, I'd probably do some branch counting and reorder the tests, but
> I would never use subs.
>
>
> > I think that language features should "just work" in all cases, or at
> > least all cases that are reasonable. I don't like the idea of a switch
> > statement that is hedged around with unintuitive exceptions and strange
> > corner cases.
>
> And I don't like the idea of making my code ugly.  I would honestly
> rather have no change than to have sub/def+do.
>
>  - Josiah
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org
>


-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From brett at python.org  Wed Jun 28 18:51:11 2006
From: brett at python.org (Brett Cannon)
Date: Wed, 28 Jun 2006 09:51:11 -0700
Subject: [Python-Dev] doc for new restricted execution design for Python
In-Reply-To: <fb6fbf560606280642r166514d8uba8876b97b76a0e1@mail.gmail.com>
References: <fb6fbf560606270750l29c0c5a8s521f121518cbbc67@mail.gmail.com>
	<bbaeab100606270847p6b210a86t88c091835e4ff501@mail.gmail.com>
	<ee2a432c0606272044x38efccaexe4c32b97d6005ff6@mail.gmail.com>
	<fb6fbf560606280642r166514d8uba8876b97b76a0e1@mail.gmail.com>
Message-ID: <bbaeab100606280951v7d171b97jc288b3ecaae0b4b9@mail.gmail.com>

On 6/28/06, Jim Jewett <jimjjewett at gmail.com> wrote:
>
> On 6/27/06, Neal Norwitz <nnorwitz at gmail.com> wrote:
> > On 6/27/06, Brett Cannon <brett at python.org> wrote:
> > >
> > > > (5)  I think file creation/writing should be capped rather than
> > > > binary; it is reasonable to say "You can create a single temp file
> up
> > > > to 4K" or "You can create files, but not more than 20Meg total".
>
> > > That has been suggested before.  Anyone else like this idea?
>
> > [ What exactly does the limit mean?  bytes written?  bytes currently
> stored?  bytes stored after exit?]
>
> IMHO, I would prefer that it limit disk consumption; a deleted or
> overwritten file would not count against the process, but even a
> temporary spike would need to be less than the cap.
>
> That said, I would consider any of the mentioned implementations an
> acceptable proxy; the point is just that I might want to let a program
> save data without letting it have my entire hard disk.
>
>
Well, that's easy to solve; don't allow any files to be open for writing.
=)

-Brett

From brett at python.org  Wed Jun 28 18:57:48 2006
From: brett at python.org (Brett Cannon)
Date: Wed, 28 Jun 2006 09:57:48 -0700
Subject: [Python-Dev] doc for new restricted execution design for Python
In-Reply-To: <ca471dc20606280949x7cb5a096q3b235bd0fc96a905@mail.gmail.com>
References: <fb6fbf560606270750l29c0c5a8s521f121518cbbc67@mail.gmail.com>
	<bbaeab100606270847p6b210a86t88c091835e4ff501@mail.gmail.com>
	<ee2a432c0606272044x38efccaexe4c32b97d6005ff6@mail.gmail.com>
	<fb6fbf560606280642r166514d8uba8876b97b76a0e1@mail.gmail.com>
	<ca471dc20606280949x7cb5a096q3b235bd0fc96a905@mail.gmail.com>
Message-ID: <bbaeab100606280957p2ec3d39aj1a847ae62c3962a6@mail.gmail.com>

On 6/28/06, Guido van Rossum <guido at python.org> wrote:
>
> On 6/28/06, Jim Jewett <jimjjewett at gmail.com> wrote:
> > On 6/27/06, Neal Norwitz <nnorwitz at gmail.com> wrote:
> > > On 6/27/06, Brett Cannon <brett at python.org> wrote:
> > > >
> > > > > (5)  I think file creation/writing should be capped rather than
> > > > > binary; it is reasonable to say "You can create a single temp file
> up
> > > > > to 4K" or "You can create files, but not more than 20Meg total".
> >
> > > > That has been suggested before.  Anyone else like this idea?
> >
> > > [ What exactly does the limit mean?  bytes written?  bytes currently
> stored?  bytes stored after exit?]
> >
> > IMHO, I would prefer that it limit disk consumption; a deleted or
> > overwritten file would not count against the process, but even a
> > temporary spike would need to be less than the cap.
>
> Some additional notes:
>
> - File size should be rounded up to some block size (512 if you don't
> have filesystem specific information) before adding to the total.


Why?

> - Total number of files (i.e. inodes) in existence should be capped, too.


If you want that kind of cap, just specify individual files you are willing
to let people open for reading; that is your cap.  You only have to worry
about this if you open an entire directory for writing.

> - If sandboxed code is allowed to create directories, the total depth
> and the total path length should also be capped.


Once again, this is one of those balance issues of where we draw the line
between keeping the settings simple and controlling every possible
setting people will want (especially, it seems, when it comes to writing to
disk).  And if you want to allow directory writing, you need to allow use of
the platform's OS-specific module (e.g., posix) to do it, since open() won't
let you create a directory.

I really want to keep the settings and setup simple.  I don't want to
overburden people with a ton of security settings.

> (I find reading about trusted and untrusted code confusing; a few
> times I've had to read a sentence three times before realizing I had
> swapped those two words. Perhaps we can distinguish between trusted
> and sandboxed? Or even native and sandboxed?)
>
>
>

Fair enough.  When I do the next draft I will make them more distinctive
(probably "trusted" and "sandboxed").

-Brett

From brett at python.org  Wed Jun 28 18:59:51 2006
From: brett at python.org (Brett Cannon)
Date: Wed, 28 Jun 2006 09:59:51 -0700
Subject: [Python-Dev] doc for new restricted execution design for Python
In-Reply-To: <5461075764560597004@unknownmsgid>
References: <bbaeab100606261800x5949cb89h97424fc052e33534@mail.gmail.com>
	<5461075764560597004@unknownmsgid>
Message-ID: <bbaeab100606280959l1eed1563q429ba39cb73f9615@mail.gmail.com>

On 6/27/06, Bill Janssen <janssen at parc.com> wrote:
>
> > The plan is to allow pure Python code to be embedded into web pages like
> > JavaScript.  I am not going for the applet approach like Java.
>
> Java support is now just a plug-in.  Should be easy to make a Python
> plug-in system that works the same way.  If only we had a GUI... :-)


Yep, it would be.  Then again, Mark Hammond has already done a bunch of work
for pyXPCOM, so getting Python compiled right into Firefox itself shouldn't
be too bad.

If this really takes off, we will probably want both: get Python into
Firefox itself, but also have an extension for pre-existing installations.

-Brett

From glingl at aon.at  Wed Jun 28 19:01:53 2006
From: glingl at aon.at (Gregor Lingl)
Date: Wed, 28 Jun 2006 19:01:53 +0200
Subject: [Python-Dev] xturtle.py a replacement for turtle.py(!?)
In-Reply-To: <20060628094550.109C.JCARLSON@uci.edu>
References: <200606290116.38007.anthony@interlink.com.au>
	<44A2A9EB.6050802@aon.at> <20060628094550.109C.JCARLSON@uci.edu>
Message-ID: <44A2B601.6070907@aon.at>

Josiah Carlson schrieb:
> Gregor Lingl <glingl at aon.at> wrote:
>   
>> Could you imagine - downgrading it's 'featureness' - to put it into 2.5.1
>> or something like this?
>>     
>
> Changing features/abilities of Python in micro releases (2.5 -> 2.5.1),
> aside from bugfixes, is a no-no. 
I understand. Nevertheless one should see that there is
_not a single module_ in the whole of the standard Python distro
which _depends_ on turtle.py.
This certainly makes a difference to the True/False-change problem.

Gregor
>  See the Python 2.2 -> 2.2.1
> availability of True/False for an example of where someone made a
> mistake in doing this.
>
>  - Josiah
>
>
>
>   


From collinw at gmail.com  Wed Jun 28 19:03:03 2006
From: collinw at gmail.com (Collin Winter)
Date: Wed, 28 Jun 2006 18:03:03 +0100
Subject: [Python-Dev] xturtle.py a replacement for turtle.py(!?)
In-Reply-To: <20060628094550.109C.JCARLSON@uci.edu>
References: <200606290116.38007.anthony@interlink.com.au>
	<44A2A9EB.6050802@aon.at> <20060628094550.109C.JCARLSON@uci.edu>
Message-ID: <43aa6ff70606281003t591aa254t3b0250154785773e@mail.gmail.com>

On 6/28/06, Josiah Carlson <jcarlson at uci.edu> wrote:
>
> Gregor Lingl <glingl at aon.at> wrote:
> > Could you imagine - downgrading it's 'featureness' - to put it into 2.5.1
> > or something like this?
>
> Changing features/abilities of Python in micro releases (2.5 -> 2.5.1),
> aside from bugfixes, is a no-no.  See the Python 2.2 -> 2.2.1
> availability of True/False for an example of where someone made a
> mistake in doing this.

This may be a stupid question, but we're talking about replacing the
turtle.py in Lib/lib-tk/, right? The one that's basically just a GUI
demo / introduction to programming tool?

If so, can someone explain to me how improving something like this is
akin to introducing new keywords that everyone will take advantage of
(to use Josiah's True/False example)?

While I have no opinion on Gregor's app, and while I fully agree that
new language features and stdlib modules should generally stay out of
bug-fix point releases, xturtle doesn't seem to rise to that level
(and hence, those restrictions).

Thanks,
Collin Winter

From aahz at pythoncraft.com  Wed Jun 28 19:11:20 2006
From: aahz at pythoncraft.com (Aahz)
Date: Wed, 28 Jun 2006 10:11:20 -0700
Subject: [Python-Dev] xturtle.py a replacement for turtle.py(!?)
In-Reply-To: <43aa6ff70606281003t591aa254t3b0250154785773e@mail.gmail.com>
References: <200606290116.38007.anthony@interlink.com.au>
	<44A2A9EB.6050802@aon.at> <20060628094550.109C.JCARLSON@uci.edu>
	<43aa6ff70606281003t591aa254t3b0250154785773e@mail.gmail.com>
Message-ID: <20060628171120.GA11975@panix.com>

On Wed, Jun 28, 2006, Collin Winter wrote:
>
> This may be a stupid question, but we're talking about replacing the
> turtle.py in Lib/lib-tk/, right? The one that's basically just a GUI
> demo / introduction to programming tool?
> 
> If so, can someone explain to me how improving something like this is
> akin to introducing new keywords that everyone will take advantage of
> (to use Josiah's True/False example)?
> 
> While I have no opinion on Gregor's app, and while I fully agree that
> new language features and stdlib modules should generally stay out of
> bug-fix point releases, xturtle doesn't seem to rise to that level
> (and hence, those restrictions).

The problem is that it's a slippery slope.  There is a *LOT* of political
value coming from "no features in bug releases, period".  People feel
that they can rely on Python to stay stable and not have to check what
the actual changes were; it increases the confidence level in bugfix
releases.

Either the new turtle module goes in for beta2 (after suitably convincing
the release manager and Guido) or it waits for 2.6.  I don't have a
strong feeling about this issue, though I'm a mild -0 on allowing it.
Nobody can claim there wasn't notice about the beta release date and the
restrictions after it.
-- 
Aahz (aahz at pythoncraft.com)           <*>         http://www.pythoncraft.com/

"I saw `cout' being shifted "Hello world" times to the left and stopped
right there."  --Steve Gonedes

From jimjjewett at gmail.com  Wed Jun 28 19:17:09 2006
From: jimjjewett at gmail.com (Jim Jewett)
Date: Wed, 28 Jun 2006 13:17:09 -0400
Subject: [Python-Dev] once [was: Simple Switch statementZ]
In-Reply-To: <ca471dc20606280940x3871a0a5i23c33e315a3d520a@mail.gmail.com>
References: <fb6fbf560606280912v127b8ccal6f65c7fa4e2b2ab5@mail.gmail.com>
	<ca471dc20606280940x3871a0a5i23c33e315a3d520a@mail.gmail.com>
Message-ID: <fb6fbf560606281017u22b9ec29nacdee07acbf5dfb9@mail.gmail.com>

On 6/28/06, Guido van Rossum <guido at python.org> wrote:

>   def index_functions(n):
>     return [(lambda i=i: i) for i in range(n)]

> which works but has the disadvantage of returning a list of functions
> of 0 or 1 argument

> I believe at least one poster has pointed out that 'once' (if defined
> suitably) could be used as a better way to do this:

Cleaner, yes.  But you would still have to remember the once, just as
you have to remember the i=i, so I don't think it would actually save
any confusion in practice.

Another alternative might be letting functions get at themselves,
rather than just their names.  (Methods can save attributes on self,
but functions are out of luck if someone else reused their name.)

> Perhaps 'once' is too misleading a name, given the confusion you
> alluded to earlier. Maybe we could use 'capture' instead? A capture
> expression would be captured at every function definition time,
> period.

I think it would have the same problem; I would still want to read
that as "The first time you run this, capture the result.", rather
than "Capture the binding current at funcdef time, even though you're
skipping all the other statements at this indent level."

> Capture expressions outside functions would be illegal or
> limited to compile-time constant expressions (unless someone has a
> better idea).

At a minimum, it should be able to capture the expression's current
value at load-time, which might well involve names imported from
another module.

> A capture expression inside "if 0:" would still be
> captured to simplify the semantics (unless the compiler can prove that
> it has absolutely no side effects).

Running code that was guarded by "if 0:" sounds like a really bad idea.

-jJ

From anthony at interlink.com.au  Wed Jun 28 19:21:09 2006
From: anthony at interlink.com.au (Anthony Baxter)
Date: Thu, 29 Jun 2006 03:21:09 +1000
Subject: [Python-Dev] xturtle.py a replacement for turtle.py(!?)
In-Reply-To: <43aa6ff70606281003t591aa254t3b0250154785773e@mail.gmail.com>
References: <200606290116.38007.anthony@interlink.com.au>
	<20060628094550.109C.JCARLSON@uci.edu>
	<43aa6ff70606281003t591aa254t3b0250154785773e@mail.gmail.com>
Message-ID: <200606290321.14617.anthony@interlink.com.au>

On Thursday 29 June 2006 03:03, Collin Winter wrote:
> This may be a stupid question, but we're talking about replacing
> the turtle.py in Lib/lib-tk/, right? The one that's basically just
> a GUI demo / introduction to programming tool?
>
> If so, can someone explain to me how improving something like this
> is akin to introducing new keywords that everyone will take
> advantage of (to use Josiah's True/False example)?


2.x.y releases should be compatible for all values of y (including 
the empty value <wink>). PEP 6 has the details and rationale. 
People shouldn't have to worry that things break with a minor 
release. It's important that packagers of Python for distributions 
can feel confident that a "bugfix" release of Python is _actually_ 
just a bugfix release, and that they can push it out to their users.
This means everyone wins.

I'm unconvinced that a new turtle module is worth ramming in on short 
notice to make it into 2.5 final. It can easily be made available via 
the cheeseshop and with setuptools for extremely easy installation 
between 2.5 and 2.6. With all the work that's been done to make 2.5 
what will hopefully be the most solid Python release ever, I don't 
want to slip up now. :-)

And needless to say, there is no way it is suitable for a bugfix 
release. 

Had it been pushed through a couple of weeks earlier (while we were in 
alpha) - sure, it looks like it could have been a good addition to 
the stdlib. But the release timeline's been out there for a while 
now - heck, b1 was actually a few days later than originally planned. 

Anthony
-- 
Anthony Baxter     <anthony at interlink.com.au>
It's never too late to have a happy childhood.

From rasky at develer.com  Wed Jun 28 19:21:33 2006
From: rasky at develer.com (Giovanni Bajo)
Date: Wed, 28 Jun 2006 19:21:33 +0200
Subject: [Python-Dev] PEP 328 and PEP 338, redux
References: <44A11EA1.1000605@iinet.net.au>
	<5.1.1.6.0.20060627120926.02021fe0@sparrow.telecommunity.com>
	<033001c69a07$44140890$d503030a@trilan>
	<44A1CE4A.2000900@canterbury.ac.nz>
	<ca471dc20606271846l2a0de4fevf667c60dec004039@mail.gmail.com>
Message-ID: <039201c69ad7$4dc91df0$d503030a@trilan>

Guido van Rossum wrote:

>>> This is where I wonder why the "def __main__()" PEP was rejected in
>>> the first place. It would have solved this problem as well.
>>
>> Could this be reconsidered for Py3k?
>
> You have a point.

AFAICT, there's nothing preventing it from being added in 2.6. It won't
break existing code using the "if __name__ == '__main__'" paradigm.
-- 
Giovanni Bajo


From jimjjewett at gmail.com  Wed Jun 28 19:25:33 2006
From: jimjjewett at gmail.com (Jim Jewett)
Date: Wed, 28 Jun 2006 13:25:33 -0400
Subject: [Python-Dev] doc for new restricted execution design for Python
In-Reply-To: <bbaeab100606280957p2ec3d39aj1a847ae62c3962a6@mail.gmail.com>
References: <fb6fbf560606270750l29c0c5a8s521f121518cbbc67@mail.gmail.com>
	<bbaeab100606270847p6b210a86t88c091835e4ff501@mail.gmail.com>
	<ee2a432c0606272044x38efccaexe4c32b97d6005ff6@mail.gmail.com>
	<fb6fbf560606280642r166514d8uba8876b97b76a0e1@mail.gmail.com>
	<ca471dc20606280949x7cb5a096q3b235bd0fc96a905@mail.gmail.com>
	<bbaeab100606280957p2ec3d39aj1a847ae62c3962a6@mail.gmail.com>
Message-ID: <fb6fbf560606281025w5aa9d386qb831c7e430ca88d@mail.gmail.com>

On 6/28/06, Brett Cannon <brett at python.org> wrote:
> On 6/28/06, Guido van Rossum <guido at python.org> wrote:

> > - File size should be rounded up to some block size (512 if you don't
> > have filesystem specific information) before adding to the total.

> Why?

That reflects the amount of disk I no longer have available for other
purposes.

> > - Total number of files (i.e. inodes) in existence should be capped, too.

> If you want that kind of cap, just specify individual files you are willing
> to let people open for reading; that is your cap.  Only have to worry about
> this if you open an entire directory open for writing.

Right, but on some systems, inodes are themselves a limited resource.
I'm not sure how common that is.

> > - If sandboxed code is allowed to create directories, the total depth
> > and the total path length should also be capped.

> Once again, another one of those balance issues of where do we draw the line
> in terms of simplicity ... you need to allow use of
> the platform's OS-specific module ( e.g., posix) to do it since open() won't
> let you create a directory.

I *think* this is to avoid security holes, and your solution of
letting the open filter out bad names should be OK, but ... what if it
isn't?  What if cd then mkdir will let them create something too long?
 Do we have to wait for an OS patch?

> > (I find reading about trusted and untrusted code confusing; a few
> > times I've had to read a sentence three times before realizing I had
> > swapped those two words. Perhaps we can distinguish between trusted
> > and sandboxed? Or even native and sandboxed?)

unlimited vs sandboxed?

-jJ

From talin at acm.org  Wed Jun 28 19:25:27 2006
From: talin at acm.org (Talin)
Date: Wed, 28 Jun 2006 10:25:27 -0700
Subject: [Python-Dev] PEP 3103: A Switch/Case Statement
In-Reply-To: <ca471dc20606280950i4bcc522leda3325e052d516d@mail.gmail.com>
References: <20060627233313.1090.JCARLSON@uci.edu> <44A2380F.6000503@acm.org>	
	<20060628090725.1099.JCARLSON@uci.edu>
	<ca471dc20606280950i4bcc522leda3325e052d516d@mail.gmail.com>
Message-ID: <44A2BB87.50406@acm.org>

Guido van Rossum wrote:
> Let's just drop the switchable subroutine proposal. It's not viable.
> 

Perhaps not - but at the same time, when discussing new language 
features, let's not just limit ourselves to what other languages have 
done already.

Forget subroutines for a moment - the main point of the thread was the 
idea that the dispatch table was built explicitly rather than 
automatically - that instead of arguing over first-use vs. 
function-definition, we let the user decide. I'm sure that my specific 
proposal isn't the only way that this could be done.

-- Talin

From guido at python.org  Wed Jun 28 19:33:19 2006
From: guido at python.org (Guido van Rossum)
Date: Wed, 28 Jun 2006 10:33:19 -0700
Subject: [Python-Dev] once [was: Simple Switch statementZ]
In-Reply-To: <fb6fbf560606281017u22b9ec29nacdee07acbf5dfb9@mail.gmail.com>
References: <fb6fbf560606280912v127b8ccal6f65c7fa4e2b2ab5@mail.gmail.com>
	<ca471dc20606280940x3871a0a5i23c33e315a3d520a@mail.gmail.com>
	<fb6fbf560606281017u22b9ec29nacdee07acbf5dfb9@mail.gmail.com>
Message-ID: <ca471dc20606281033w667d0f76la8b0624c72d149aa@mail.gmail.com>

On 6/28/06, Jim Jewett <jimjjewett at gmail.com> wrote:
> On 6/28/06, Guido van Rossum <guido at python.org> wrote:
>
> >   def index_functions(n):
> >     return [(lambda i=i: i) for i in range(n)]
>
> > which works but has the disadvantage of returning a list of functions
> > of 0 or 1 argument
>
> > I believe at least one poster has pointed out that 'once' (if defined
> > suitably) could be used as a better way to do this:
>
> Cleaner, yes.  But you would still have to remember the once, just as
> you have to remember the i=i, so I don't think it would actually save
> any confusion in practice.

Yes it would, to the reader; if you see a function with a default
argument now, you have to wonder if the default is there just to
capture a value.
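
(Spelled out as a small sketch, just to make the capture issue concrete:)

    def index_functions_broken(n):
        # Every lambda closes over the same variable i, so after the loop
        # each of them returns the final value n-1.
        return [(lambda: i) for i in range(n)]

    def index_functions(n):
        # The i=i default freezes the current value at definition time.
        return [(lambda i=i: i) for i in range(n)]

    assert [f() for f in index_functions_broken(3)] == [2, 2, 2]
    assert [f() for f in index_functions(3)] == [0, 1, 2]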

> Another alternative might be letting functions get at themselves,
> rather than just their names.  (Methods can save attributes on self,
> but functions are out of luck if someone else reused their name.)

This has been proposed before. Because (as you say) the function name
is not generally safe to use, there's no good API; all proposals I've
seen are ugly. And anyway this would be way too complex for the little
lambda I was using as an example.

> > Perhaps 'once' is too misleading a name, given the confusion you
> > alluded to earlier. Maybe we could use 'capture' instead? A capture
> > expression would be captured at every function definition time,
> > period.
>
> I think it would have the same problem; I would still want to read
> that as "The first time you run this, capture the result.", rather
> than "Capture the binding current at funcdef time, even though you're
> skipping all the other statements at this indent level."

"Capture" can have many meanings. Of course some people will still
misunderstand it. But it's more likely that they would look it up the
first time they saw it rather than assuming they knew what it meant.

> > Capture expressions outside functions would be illegal or
> > limited to compile-time constant expressions (unless someone has a
> > better idea).
>
> At a minimum, it should be able to capture the expression's current
> value at load-time, which might well involve names imported from
> another module.

I'm not sure what you mean by "load time". If you mean to do this
before the execution of the module body starts, then none of the
imported names are known (import is an executable statement, too).

> > A capture expression inside "if 0:" would still be
> > captured to simplify the semantics (unless the compiler can prove that
> > it has absolutely no side effects).
>
> Running code that was guarded by "if 0:" sounds like a really bad idea.

Assuming that the compiler will treat code guarded by "if 0:"
differently from code guarded by "if x:" where you happen to know that x
is always false sounds like a really bad idea too. I'm happy to treat
elimination of "if 0:" blocks as an optimization. I'm not happy to write
into the language spec that the compiler should treat such code
differently. Next, you're going to claim that a local variable only
assigned within such a block is not really a local (and references to
it outside the block should be treated as globals)...

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at python.org  Wed Jun 28 19:36:28 2006
From: guido at python.org (Guido van Rossum)
Date: Wed, 28 Jun 2006 10:36:28 -0700
Subject: [Python-Dev] PEP 3103: A Switch/Case Statement
In-Reply-To: <44A2BB87.50406@acm.org>
References: <20060627233313.1090.JCARLSON@uci.edu> <44A2380F.6000503@acm.org>
	<20060628090725.1099.JCARLSON@uci.edu>
	<ca471dc20606280950i4bcc522leda3325e052d516d@mail.gmail.com>
	<44A2BB87.50406@acm.org>
Message-ID: <ca471dc20606281036w8111c4bgd41339de2916f07b@mail.gmail.com>

On 6/28/06, Talin <talin at acm.org> wrote:
> Guido van Rossum wrote:
> > Let's just drop the switchable subroutine proposal. It's not viable.
>
> Perhaps not - but at the same time, when discussing new language
> features, let's not just limit ourselves to what other languages have
> done already.

Well, Python 3000 is explicitly not intended as a platform for
arbitrary experimentation with feature invention (read PEP 3000).

I've gotten quite a bit of mileage out of borrowing from other
languages instead of inventing my own stuff, so I don't want to go off
inventing things as a replacement for researching options that have
already been tried elsewhere.

> Forget subroutines for a moment - the main point of the thread was the
> idea that the dispatch table was built explicitly rather than
> automatically - that instead of arguing over first-use vs.
> function-definition, we let the user decide. I'm sure that my specific
> proposal isn't the only way that this could be done.

But anything that makes the build explicit is going to be so much more
ugly. And I still think you're trying to solve the wrong problem.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From trentm at activestate.com  Wed Jun 28 19:41:02 2006
From: trentm at activestate.com (Trent Mick)
Date: Wed, 28 Jun 2006 10:41:02 -0700
Subject: [Python-Dev] doc for new restricted execution design for Python
In-Reply-To: <bbaeab100606261800x5949cb89h97424fc052e33534@mail.gmail.com>
References: <bbaeab100606211733x3dd58226g15e2919e8e2cdd7e@mail.gmail.com>	<Pine.LNX.4.58.0606261937050.17937@server1.LFW.org>
	<bbaeab100606261800x5949cb89h97424fc052e33534@mail.gmail.com>
Message-ID: <44A2BF2E.4020706@activestate.com>

Brett Cannon wrote:
> The plan is to allow pure Python code to be embedded into web pages like 
> JavaScript. ...

> ...Then again, Mark Hammond has already done a bunch of work for pyXPCOM, so getting Python compiled right into Firefox itself shouldn't be too bad.
> 
> If this really takes off, will probably want both: get into Firefox, but have an extension for pre-existing installations.

You should really speak with Mark, if you haven't recently. He's gotten 
a lot further than just PyXPCOM. My understanding (I might be a bit off 
on the branches and timing) is that his "DOM_AGNOSTIC" work on the 
Mozilla code has mostly been checked into the trunk. This work is to do 
mostly what you are describing: Python as a client-side scripting 
language along-side JavaScript.

I can't recall what Mozilla's distribution plans are for this. Certainly 
it wouldn't be before Firefox 3. Then again, default Firefox builds
would likely not include Python by default.

It sounds to me like a restricted-execution/security-model story for 
Python would be important here.

Mark (and me a little bit) has been sketching out creating a "Python for 
Mozilla/Firefox" extension for installing an embedded Python into an 
existing Firefox installation on the pyxpcom list:

http://aspn.activestate.com/ASPN/Mail/Message/pyxpcom/3167613

> The idea is that there be a separate Python interpreter per web browser page instance. 

I think there may be scaling issues there. JavaScript isn't doing that, 
is it, do you know? As well, that doesn't seem like it would translate 
well to sharing execution between separate chrome windows in a 
non-browser XUL/Mozilla-based app.

Trent

-- 
Trent Mick
trentm at activestate.com

From guido at python.org  Wed Jun 28 19:41:37 2006
From: guido at python.org (Guido van Rossum)
Date: Wed, 28 Jun 2006 10:41:37 -0700
Subject: [Python-Dev] doc for new restricted execution design for Python
In-Reply-To: <bbaeab100606280957p2ec3d39aj1a847ae62c3962a6@mail.gmail.com>
References: <fb6fbf560606270750l29c0c5a8s521f121518cbbc67@mail.gmail.com>
	<bbaeab100606270847p6b210a86t88c091835e4ff501@mail.gmail.com>
	<ee2a432c0606272044x38efccaexe4c32b97d6005ff6@mail.gmail.com>
	<fb6fbf560606280642r166514d8uba8876b97b76a0e1@mail.gmail.com>
	<ca471dc20606280949x7cb5a096q3b235bd0fc96a905@mail.gmail.com>
	<bbaeab100606280957p2ec3d39aj1a847ae62c3962a6@mail.gmail.com>
Message-ID: <ca471dc20606281041j3998e105i4e875af77be8ea66@mail.gmail.com>

On 6/28/06, Brett Cannon <brett at python.org> wrote:
> On 6/28/06, Guido van Rossum <guido at python.org> wrote:
> > - File size should be rounded up to some block size (512 if you don't
> > have filesystem specific information) before adding to the total.
>
> Why?

Because that's how filesystems work. Allocations are in terms of block
sizes. 1000 files of 1 byte take up the same space as 1000 files of
512 bytes (in most common filesystems anyway -- I think Reiserfs may
be different).
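
(The accounting might look roughly like this; 512 is only the fallback
block size mentioned above, not something any particular filesystem
guarantees:)

    FALLBACK_BLOCK_SIZE = 512

    def charged_size(n_bytes, block_size=FALLBACK_BLOCK_SIZE):
        # Round the file size up to a whole number of blocks before adding
        # it to the sandbox's disk-usage total.
        return ((n_bytes + block_size - 1) // block_size) * block_size

    assert charged_size(1) == 512
    assert charged_size(512) == 512
    assert charged_size(513) == 1024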

> > - Total number of files (i.e. inodes) in existence should be capped, too.
>
> If you want that kind of cap, just specify individual files you are willing
> to let people open for reading; that is your cap.  You only have to worry about
> this if you open an entire directory for writing.

I'm not talking about filedescriptors (although that's another
cappable resource); I'm talking about number of files in the
filesystem. Most unix filesystems have a limit; I've run into it
occasionally when I had a really large partition with not enough
inodes configured and I was creating lots of tiny files. See df(1).
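
(On a Unix box os.statvfs() shows the same numbers df -i reports; this is
just an illustration of the limit, not part of any proposed sandbox API:)

    import os

    st = os.statvfs("/")              # Unix-only
    print "total inodes:", st.f_files
    print "free inodes: ", st.f_ffree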

> > - If sandboxed code is allowed to create directories, the total depth
> > and the total path length should also be capped.
>
> Once again, another one of those balance issues of where do we draw the line
> in terms of simplicity in the setting compared to controlling every possible
> setting people will want (especially, it seems, when it comes to writing to
> disk).  And if you want to allow directory writing, you need to allow use of
> the platform's OS-specific module ( e.g., posix) to do it since open() won't
> let you create a directory.
>
> I really want to keep the settings and setup simple.  I don't want to
> overburden people with a ton of security settings.

Well, I prefixed it with "if you want to allow directory creation". If
you don't allow that, fine. But if you do allow that (and it's an
easily controlled operation just like file creation) I've given you
some things to watch out for. I once ran into a situation where a
script had gone off into deep recursion and created a near-infinite
hierarchy of directories that rm -r couldn't remove (because it
constructs absolute paths that exceeded MAXPATH).
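
(One illustrative way a sandbox could enforce such caps; the helper name
and the specific numbers are made up:)

    import os

    MAX_DEPTH = 32        # assumed cap on directory nesting
    MAX_PATH_LEN = 1024   # assumed cap on absolute path length

    def guarded_mkdir(path):
        # Check the absolute path against both caps before touching the
        # filesystem at all.
        full = os.path.abspath(path)
        if len(full) > MAX_PATH_LEN:
            raise OSError("path length cap exceeded")
        if full.count(os.sep) > MAX_DEPTH:
            raise OSError("directory depth cap exceeded")
        os.mkdir(full)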

> > (I find reading about trusted and untrusted code confusing; a few
> > times I've had to read a sentence three times before realizing I had
> > swapped those two words. Perhaps we can distinguish between trusted
> > and sandboxed? Or even native and sandboxed?)

> Fair enough.  When I do the next draft I will make them more distinctive
> (probably "trusted" and "sandboxed").

Great!

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From anthony at interlink.com.au  Wed Jun 28 19:42:55 2006
From: anthony at interlink.com.au (Anthony Baxter)
Date: Thu, 29 Jun 2006 03:42:55 +1000
Subject: [Python-Dev] RFC: trunk checkins between now and 2.5 final
Message-ID: <200606290342.58447.anthony@interlink.com.au>

This is a request for comments - this is my current thinking on a 
policy for checkins to the trunk between now and the release of 2.5 
final. 

----

Now that we're in beta:

If you don't add an entry to Misc/NEWS, a test (if relevant or 
possible) and docs (if relevant), the checkin is probably going to 
get reverted. This continues through the release candidate stages.
I mean it about Misc/NEWS, too. Every change to the code gets a NEWS 
entry.

If it adds a feature in beta, and you didn't get signoff first, it's 
going to get treated as a revert candidate. People like myself or 
Neal shouldn't have to run after you to review the patch after it's 
in SVN. 

If a checkin breaks the buildbots, unless the bug is very shallow and 
it's easier for one of us to fix than revert, it's going to get a 
little set of three red dots on its forehead, a la Predator.

Once we hit release candidate 1, the trunk gets branched to 
release25-maint. 

On release25-maint, between rc1 and 2.5 final:

If you check in to that branch, get signoff first. This is regardless 
of whether it's a bugfix or a feature. Otherwise, the checkin is going 
to get the big revert cannon targeting it.

If you need to go round one of these things, get signoff (in public!) 
first, or else if not in public, mention the signoff in the commit 
message. Preferably in public, though.

Once 2.5 final is out, the normal maintenance branch guidelines come 
into effect for release25-maint. That is, bugfixes only. This is all 
documented in PEP 6 (Bug Fix Releases).

A few notes on rationale for my being such a pain in the backside 
about this:

Now that we're in beta, we should be spending the time nailing down 
bugs. Every feature added has the potential to add bugs - in 
addition, other people are going to have to review that change to 
make sure it's sane. There's only so many mental cycles to go around, 
and they should be spent on fixing existing bugs, not creating new 
ones <wink>.

Once we're in RC, we're going to really, really want to ratchet up the 
quality meter. 

Between Neal and myself we have a fair amount of timezone coverage, so 
you should be able to get hold of one of us easily enough. My contact 
details (including all manner of instant messenger type things) are 
also pretty easy to find, they've been posted here a number of times 
before. 
----

Anyway, this is the current thinking. Am I being too dogmatic here? 
Comments solicited.

As far as people to sign off on things go, Neal, myself or Guido should 
be the ones to do it. Course, Guido will probably decide he doesn't 
want this dubious honour <wink>.

Anthony

From guido at python.org  Wed Jun 28 19:45:40 2006
From: guido at python.org (Guido van Rossum)
Date: Wed, 28 Jun 2006 10:45:40 -0700
Subject: [Python-Dev] RFC: trunk checkins between now and 2.5 final
In-Reply-To: <200606290342.58447.anthony@interlink.com.au>
References: <200606290342.58447.anthony@interlink.com.au>
Message-ID: <ca471dc20606281045q721358c5l376be21f32084cba@mail.gmail.com>

On 6/28/06, Anthony Baxter <anthony at interlink.com.au> wrote:
> As far as people to sign off on things, Neal, myself or Guido should
> be the ones to do it. Course, Guido will probably decide he doesn't
> want this dubious honour <wink>.

Right. But I agree with the policy.

FWIW, I think Nick's change for -m is okay given that it's a pretty
minor feature and -m is new anyway, but I'd like you and/or Neal to decide
on that. Leaving it broken is also pretty minor IMO (and was my first
preference when he brought it up -- see my posts in response to his
proposal).

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From brett at python.org  Wed Jun 28 19:45:55 2006
From: brett at python.org (Brett Cannon)
Date: Wed, 28 Jun 2006 10:45:55 -0700
Subject: [Python-Dev] doc for new restricted execution design for Python
In-Reply-To: <fb6fbf560606281025w5aa9d386qb831c7e430ca88d@mail.gmail.com>
References: <fb6fbf560606270750l29c0c5a8s521f121518cbbc67@mail.gmail.com>
	<bbaeab100606270847p6b210a86t88c091835e4ff501@mail.gmail.com>
	<ee2a432c0606272044x38efccaexe4c32b97d6005ff6@mail.gmail.com>
	<fb6fbf560606280642r166514d8uba8876b97b76a0e1@mail.gmail.com>
	<ca471dc20606280949x7cb5a096q3b235bd0fc96a905@mail.gmail.com>
	<bbaeab100606280957p2ec3d39aj1a847ae62c3962a6@mail.gmail.com>
	<fb6fbf560606281025w5aa9d386qb831c7e430ca88d@mail.gmail.com>
Message-ID: <bbaeab100606281045t4c7edfddg17d65b1354a7eb87@mail.gmail.com>

On 6/28/06, Jim Jewett <jimjjewett at gmail.com> wrote:
>
> On 6/28/06, Brett Cannon <brett at python.org> wrote:
> > On 6/28/06, Guido van Rossum <guido at python.org> wrote:
>
> > > - File size should be rounded up to some block size (512 if you don't
> > > have filesystem specific information) before adding to the total.
>
> > Why?
>
> That reflects the amount of disk I no longer have available for other
> purposes.


Ah, OK.

> > > - Total number of files (i.e. inodes) in existence should be capped, too.
> >
> > If you want that kind of cap, just specify individual files you are willing
> > to let people open for reading; that is your cap.  You only have to worry about
> > this if you open an entire directory for writing.
>
> Right, but on some systems, inodes are themselves a limited resource.
> I'm not sure how common that is.

OK.


> > > - If sandboxed code is allowed to create directories, the total depth
> > > and the total path length should also be capped.
>
> > Once again, another one of those balance issues of where do we draw the line
> > in terms of simplicity ... you need to allow use of
> > the platform's OS-specific module ( e.g., posix) to do it since open() won't
> > let you create a directory.
>
> I *think* this is to avoid security holes, and your solution of
> letting the open filter out bad names should be OK, but ... what if it
> isn't?  What if cd then mkdir will let them create something too long?
> Do we have to wait for an OS patch?


Then isn't that a problem with cwd() and mkdir()?

And we can play the "what if" scenario forever with security.  There is
always the possibility that my code or some originally trusted code is going
to turn out to be unsafe.  This is why I prefer an approach of just not
allowing the importation of possibly unsafe code unless you *really* trust
it yourself.

-Brett

> > > (I find reading about trusted and untrusted code confusing; a few
> > > times I've had to read a sentence three times before realizing I had
> > > swapped those two words. Perhaps we can distinguish between trusted
> > > and sandboxed? Or even native and sandboxed?)
>
> unlimited vs sandboxed?
>
> -jJ
>

From brett at python.org  Wed Jun 28 19:49:03 2006
From: brett at python.org (Brett Cannon)
Date: Wed, 28 Jun 2006 10:49:03 -0700
Subject: [Python-Dev] doc for new restricted execution design for Python
In-Reply-To: <ca471dc20606281041j3998e105i4e875af77be8ea66@mail.gmail.com>
References: <fb6fbf560606270750l29c0c5a8s521f121518cbbc67@mail.gmail.com>
	<bbaeab100606270847p6b210a86t88c091835e4ff501@mail.gmail.com>
	<ee2a432c0606272044x38efccaexe4c32b97d6005ff6@mail.gmail.com>
	<fb6fbf560606280642r166514d8uba8876b97b76a0e1@mail.gmail.com>
	<ca471dc20606280949x7cb5a096q3b235bd0fc96a905@mail.gmail.com>
	<bbaeab100606280957p2ec3d39aj1a847ae62c3962a6@mail.gmail.com>
	<ca471dc20606281041j3998e105i4e875af77be8ea66@mail.gmail.com>
Message-ID: <bbaeab100606281049s5df1185nae93cd0cba4eac89@mail.gmail.com>

On 6/28/06, Guido van Rossum <guido at python.org> wrote:
>
> On 6/28/06, Brett Cannon <brett at python.org> wrote:
> > On 6/28/06, Guido van Rossum <guido at python.org> wrote:
> > > - File size should be rounded up to some block size (512 if you don't
> > > have filesystem specific information) before adding to the total.
> >
> > Why?
>
> Because that's how filesystems work. Allocations are in terms of block
> sizes. 1000 files of 1 byte take up the same space as 1000 files of
> 512 bytes (in most common filesystems anyway -- I think Reiserfs may
> be different).
>
> > > - Total number of files (i.e. inodes) in existence should be capped, too.
> >
> > If you want that kind of cap, just specify individual files you are willing
> > to let people open for reading; that is your cap.  You only have to worry about
> > this if you open an entire directory for writing.
>
> I'm not talking about filedescriptors (although that's another
> cappable resource); I'm talking about number of files in the
> filesystem. Most unix filesystems have a limit; I've run into it
> occasionally when I had a really large partition with not enough
> inodes configured and I was creating lots of tiny files. See df(1).


I understand that.  What I am saying is that by specifying only specific
files to open, you cap the number of inodes you use.  If you only
have room for five more inodes for the program to open, only specify five
specific files that the sandboxed interpreter (see?  I learn fast  =) can
open.

> > > - If sandboxed code is allowed to create directories, the total depth
> > > and the total path length should also be capped.
> >
> > Once again, another one of those balance issues of where do we draw the line
> > in terms of simplicity in the setting compared to controlling every possible
> > setting people will want (especially, it seems, when it comes to writing to
> > disk).  And if you want to allow directory writing, you need to allow use of
> > the platform's OS-specific module ( e.g., posix) to do it since open() won't
> > let you create a directory.
> >
> > I really want to keep the settings and setup simple.  I don't want to
> > overburden people with a ton of security settings.
>
> Well, I prefixed it with "if you want to allow directory creation". If
> you don't allow that, fine. But if you do allow that (and it's an
> easily controlled operation just like file creation) I've given you
> some things to watch out for. I once ran into a situation where a
> script had gone off into deep recursion and created a near-infinite
> hierarchy of directories that rm -r couldn't remove (because it
> constructs absolute paths that exceeded MAXPATH).


I would rather not handle that and just warn people that if they allow full
use of 'os' they had better know what they are doing.

-Brett

> > > (I find reading about trusted and untrusted code confusing; a few
> > > times I've had to read a sentence three times before realizing I had
> > > swapped those two words. Perhaps we can distinguish between trusted
> > > and sandboxed? Or even native and sandboxed?)
>
> > Fair enough.  When I do the next draft I will make them more distinctive
> > (probably "trusted" and "sandboxed").
>
> Great!
>
> --
> --Guido van Rossum (home page: http://www.python.org/~guido/)
>

From tim.peters at gmail.com  Wed Jun 28 19:53:47 2006
From: tim.peters at gmail.com (Tim Peters)
Date: Wed, 28 Jun 2006 13:53:47 -0400
Subject: [Python-Dev] RFC: trunk checkins between now and 2.5 final
In-Reply-To: <200606290342.58447.anthony@interlink.com.au>
References: <200606290342.58447.anthony@interlink.com.au>
Message-ID: <1f7befae0606281053m5d4d4f50q566fd38ad9f91aa5@mail.gmail.com>

Only one gripe:

[Anthony Baxter]
> ...
> Once we hit release candidate 1, the trunk gets branched to
> release25-maint.

Save the branch for 2.5 final (i.e., the 2.5final tag and the
release25-maint branch start life exactly the same).  Adding a new
step before it's possible to fix rc1 critical bugs is the worst
possible time to add one.  PEP 356 shows only one week between rc1 and
final, and nobody is gonna die from frustration waiting a week to merge
their (currently non-existent, AFAICT) 2.6 branches into the trunk.

As to the rest, I'll be happy to help revert things ;-)

From brett at python.org  Wed Jun 28 19:54:24 2006
From: brett at python.org (Brett Cannon)
Date: Wed, 28 Jun 2006 10:54:24 -0700
Subject: [Python-Dev] doc for new restricted execution design for Python
In-Reply-To: <44A2BF2E.4020706@activestate.com>
References: <bbaeab100606211733x3dd58226g15e2919e8e2cdd7e@mail.gmail.com>
	<Pine.LNX.4.58.0606261937050.17937@server1.LFW.org>
	<bbaeab100606261800x5949cb89h97424fc052e33534@mail.gmail.com>
	<44A2BF2E.4020706@activestate.com>
Message-ID: <bbaeab100606281054l49bc7442sd4d14ccb005f35e1@mail.gmail.com>

On 6/28/06, Trent Mick <trentm at activestate.com> wrote:
>
> Brett Cannon wrote:
> > The plan is to allow pure Python code to be embedded into web pages like
> > JavaScript. ...
>
> > ...Then again, Mark Hammond has already done a bunch of work for
> > pyXPCOM, so getting Python compiled right into Firefox itself shouldn't be
> > too bad.
> >
> > If this really takes off, will probably want both: get into Firefox, but
> > have an extension for pre-existing installations.
>
> You should really speak with Mark, if you haven't recently. He's gotten
> a lot further than just PyXPCOM. My understanding (I might be a bit off
> on the branches and timing) is that his "DOM_AGNOSTIC" work on the
> Mozilla code has mostly been checked into the trunk. This work is to do
> mostly what you are describing: Python as a client-side scripting
> language along-side JavaScript.


Handling the Firefox integration is next month, so I will be talking to him.


> I can't recall what Mozilla's distribution plans are for this. Certainly
> it wouldn't be before Firefox 3. Then again, default Firefox builds
> would likely not include Python by default.
>
> It sounds to me that a restricted-execution/security-model story for
> Python would be important here.


Yep.  One of the reasons I am dealing with it.

> Mark (and me a little bit) has been sketching out creating a "Python for
> Mozilla/Firefox" extension for installing an embedded Python into an
> existing Firefox installation on the pyxpcom list:
>
> http://aspn.activestate.com/ASPN/Mail/Message/pyxpcom/3167613
>
> > The idea is that there be a separate Python interpreter per web browser
> > page instance.
>
> I think there may be scaling issues there. JavaScript isn't doing that
> is it, do you know? As well, that doesn't seem like it would translate
> well to sharing execution between separate chrome windows in a
> non-browser XUL/Mozilla-based app.


I don't know how JavaScript is doing it yet.  The critical thing for me for
this month was trying to come up with a security model.

And if you don't think it is going to scale, how do you think it should be
done?

-Brett

From g.brandl at gmx.net  Wed Jun 28 20:02:30 2006
From: g.brandl at gmx.net (Georg Brandl)
Date: Wed, 28 Jun 2006 20:02:30 +0200
Subject: [Python-Dev] PEP 328 and PEP 338, redux
In-Reply-To: <ca471dc20606271846l2a0de4fevf667c60dec004039@mail.gmail.com>
References: <44A11EA1.1000605@iinet.net.au>	<5.1.1.6.0.20060627120926.02021fe0@sparrow.telecommunity.com>	<033001c69a07$44140890$d503030a@trilan>	<44A1CE4A.2000900@canterbury.ac.nz>
	<ca471dc20606271846l2a0de4fevf667c60dec004039@mail.gmail.com>
Message-ID: <e7ug7n$msk$1@sea.gmane.org>

Guido van Rossum wrote:
> On 6/27/06, Greg Ewing <greg.ewing at canterbury.ac.nz> wrote:
>> Giovanni Bajo wrote:
>>
>> > This is where I wonder why the "def __main__()" PEP was rejected in the
>> > first place. It would have solved this problem as well.
>>
>> Could this be reconsidered for Py3k?
> 
> You have a point.

Added to PEP 3100.

Georg


From kd5bjo at gmail.com  Wed Jun 28 20:25:47 2006
From: kd5bjo at gmail.com (Eric Sumner)
Date: Wed, 28 Jun 2006 13:25:47 -0500
Subject: [Python-Dev] PEP 3103: A Switch/Case Statement
In-Reply-To: <ca471dc20606281036w8111c4bgd41339de2916f07b@mail.gmail.com>
References: <20060627233313.1090.JCARLSON@uci.edu> <44A2380F.6000503@acm.org>
	<20060628090725.1099.JCARLSON@uci.edu>
	<ca471dc20606280950i4bcc522leda3325e052d516d@mail.gmail.com>
	<44A2BB87.50406@acm.org>
	<ca471dc20606281036w8111c4bgd41339de2916f07b@mail.gmail.com>
Message-ID: <eaaf21dc0606281125y731096f0u4bce8834f0943c91@mail.gmail.com>

> > Forget subroutines for a moment - the main point of the thread was the
> > idea that the dispatch table was built explicitly rather than
> > automatically - that instead of arguing over first-use vs.
> > function-definition, we let the user decide. I'm sure that my specific
> > proposal isn't the only way that this could be done.
>
> But anything that makes the build explicit is going to be so much more
> ugly. And I still think you're trying to solve the wrong problem.

Only if the programmer has to see it.  The dispatch table need not
include the behaviors of each of the cases; it only needs to define
what the cases are.  In most of the use cases I've seen, switch is
used to define behavior for different values of an enumeration.  The
dispatch table for an enumeration can be built wherever the values for
the enumeration are defined (such as in a module).  Programmers don't
need to bother with making a dispatch table unless they are defining
enumeration values themselves.
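
(In today's Python the "explicit table" version is just a dict kept next to
the enumeration; a minimal sketch with made-up names:)

    # The module that defines the enumeration also owns the dispatch table;
    # handlers are registered explicitly rather than collected by the compiler.
    RED, GREEN, BLUE = range(3)

    def on_red():   return "stop"
    def on_green(): return "go"
    def on_blue():  return "mellow"

    DISPATCH = {RED: on_red, GREEN: on_green, BLUE: on_blue}

    def switch(value):
        try:
            handler = DISPATCH[value]
        except KeyError:
            raise ValueError("unhandled case: %r" % (value,))
        return handler()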

  -- Eric Sumner

Note: I sent an email yesterday with a proposal to this effect, but it
seems to have been lost.  If anybody wants, I can resend it.

From jimjjewett at gmail.com  Wed Jun 28 20:38:04 2006
From: jimjjewett at gmail.com (Jim Jewett)
Date: Wed, 28 Jun 2006 14:38:04 -0400
Subject: [Python-Dev] once [was: Simple Switch statementZ]
In-Reply-To: <ca471dc20606281033w667d0f76la8b0624c72d149aa@mail.gmail.com>
References: <fb6fbf560606280912v127b8ccal6f65c7fa4e2b2ab5@mail.gmail.com>
	<ca471dc20606280940x3871a0a5i23c33e315a3d520a@mail.gmail.com>
	<fb6fbf560606281017u22b9ec29nacdee07acbf5dfb9@mail.gmail.com>
	<ca471dc20606281033w667d0f76la8b0624c72d149aa@mail.gmail.com>
Message-ID: <fb6fbf560606281138l6d46cceidf20cc96624ef7fd@mail.gmail.com>

On 6/28/06, Guido van Rossum <guido at python.org> wrote:
> On 6/28/06, Jim Jewett <jimjjewett at gmail.com> wrote:
> > On 6/28/06, Guido van Rossum <guido at python.org> wrote:

> > >   def index_functions(n):
> > >     return [(lambda i=i: i) for i in range(n)]

> > > which works but has the disadvantage of returning a list of functions
> > > of 0 or 1 argument

> > Another alternative might be letting functions get at themselves,
> > rather than just their names.  (Methods can save attributes on self,
> > but functions are out of luck if someone else reused their name.)

> This has been proposed before. Because (as you say) the function name
> is not generally safe to use, there's no good API; all proposals I've
> seen are ugly.

It basically requires a reserved word.

    def f(a, b="key",  __func__.extra=i):
        if __func__.extra < 43: ...

> And anyway this would be way too complex for the little
> lambda I was using as an example.

    def index_functions(n):
        return [(lambda __func__.i=i: i) for i in range(n)]

> > > Capture expressions outside functions would be illegal or
> > > limited to compile-time constant expressions (unless someone has a
> > > better idea).

> > At a minimum, it should be able to capture the expression's current
> > value at load-time, which might well involve names imported from
> > another module.

> I'm not sure what you mean by "load time".

The first time a module is imported.  When running from a .py file,
this is the same as compile time.

I didn't say compile-time because I don't want it frozen permanently
for the entire installation when the .pyc file is first written.

> > > A capture expression inside "if 0:" would still be
> > > captured to simplify the semantics (unless the compiler can prove that
> > > it has absolutely no side effects).

> > Running code that was guarded by "if 0:" sounds like a really bad idea.

> Assuming that the compiler will treat code guarded by "if 0:"
> different from code guarded by "if x:" where you happen to know that x
> is always false sounds like a really bad idea too.

The indent rules mean it will never be reached, so it can't have side
effects.  I expect that "if 0:  push_the_red_button" will not risk
pushing the red button.

Freezing a once-variable at funcdef time means that I have to look
inside the indent after all.  "If 0:" is just an especially bad
special case.

-jJ

From bob at redivi.com  Wed Jun 28 20:39:58 2006
From: bob at redivi.com (Bob Ippolito)
Date: Wed, 28 Jun 2006 11:39:58 -0700
Subject: [Python-Dev] doc for new restricted execution design for Python
In-Reply-To: <bbaeab100606281054l49bc7442sd4d14ccb005f35e1@mail.gmail.com>
References: <bbaeab100606211733x3dd58226g15e2919e8e2cdd7e@mail.gmail.com>
	<Pine.LNX.4.58.0606261937050.17937@server1.LFW.org>
	<bbaeab100606261800x5949cb89h97424fc052e33534@mail.gmail.com>
	<44A2BF2E.4020706@activestate.com>
	<bbaeab100606281054l49bc7442sd4d14ccb005f35e1@mail.gmail.com>
Message-ID: <B8D7CBAD-6C47-40F7-B64F-0D99763912F5@redivi.com>


On Jun 28, 2006, at 10:54 AM, Brett Cannon wrote:

> On 6/28/06, Trent Mick <trentm at activestate.com> wrote:
> Brett Cannon wrote:
>
> Mark (and me a little bit) has been sketching out creating a  
> "Python for
> Mozilla/Firefox" extension for installing an embedded Python into an
> existing Firefox installation on the pyxpcom list:
>
> http://aspn.activestate.com/ASPN/Mail/Message/pyxpcom/3167613
>
> > The idea is that there be a separate Python interpreter per web  
> browser page instance.
>
> I think there may be scaling issues there. JavaScript isn't doing that
> is it, do you know? As well, that doesn't seem like it would translate
> well to sharing execution between separate chrome windows in a
> non-browser XUL/Mozilla-based app.
>
> I don't know how JavaScript is doing it yet.  The critical thing  
> for me for this month was trying to come up with a security model.
>
> And if you don't think it is going to scale, how do you think it  
> should be done?

Why wouldn't it scale? How much interpreter state is there really  
anyway?

-bob


From guido at python.org  Wed Jun 28 20:47:18 2006
From: guido at python.org (Guido van Rossum)
Date: Wed, 28 Jun 2006 11:47:18 -0700
Subject: [Python-Dev] once [was: Simple Switch statementZ]
In-Reply-To: <fb6fbf560606281138l6d46cceidf20cc96624ef7fd@mail.gmail.com>
References: <fb6fbf560606280912v127b8ccal6f65c7fa4e2b2ab5@mail.gmail.com>
	<ca471dc20606280940x3871a0a5i23c33e315a3d520a@mail.gmail.com>
	<fb6fbf560606281017u22b9ec29nacdee07acbf5dfb9@mail.gmail.com>
	<ca471dc20606281033w667d0f76la8b0624c72d149aa@mail.gmail.com>
	<fb6fbf560606281138l6d46cceidf20cc96624ef7fd@mail.gmail.com>
Message-ID: <ca471dc20606281147l5efc3feevd1e902b4e1daf202@mail.gmail.com>

On 6/28/06, Jim Jewett <jimjjewett at gmail.com> wrote:
> On 6/28/06, Guido van Rossum <guido at python.org> wrote:
> > On 6/28/06, Jim Jewett <jimjjewett at gmail.com> wrote:
> It basically requires a reserved word.
>
>     def f(a, b="key",  __func__.extra=i):
>         if __func__.extra < 43: ...
>
> > And anyway this would be way too complex for the little
> > lambda I was using as an example.
>
>     def index_functions(n):
>         return [(lambda __func__.i=i: i) for i in range(n)]

I told you it was ugly. :-)

> > I'm not sure what you mean by "load time".
>
> The first time a module is imported.  When running from a .py file,
> this is the same as compile time.
>
> I didn't say compile-time because I don't want it frozen permanently
> for the entire installation when the .pyc file is first written.

So it won't have access to imported modules, contradicting your
earlier statement " which might well involve names imported from
another module".

> > > > A capture expression inside "if 0:" would still be
> > > > captured to simplify the semantics (unless the compiler can prove that
> > > > it has absolutely no side effects).
>
> > > Running code that was guarded by "if 0:" sounds like a really bad idea.
>
> > Assuming that the compiler will treat code guarded by "if 0:"
> > different from code guarded by "if x:" where you happen to know that x
> > is always false sounds like a really bad idea too.
>
> The indent rules mean it will never be reached, so it can't have side
> effects.  I expect that "if 0:  push_the_red_button" will not risk
> pushing the red button.
>
> Freezing a once-variable at funcdef time means that I have to look
> inside the indent after all.  "If 0:" is just an especially bad
> special case.

So we have what seems to be an impasse. Some people would really like
once-expressions to be captured at def-time rather than at the first
execution per def; this is the only way to use it so solve the "outer
loop variable reference" problem. Others would really hate it if a
once could be hidden in unreachable code but still execute, possibly
with a side effect.

I'm not sure that the possibility of writing obfuscated code should
kill a useful feature. What do others think? It's basically impossible
to prevent obfuscated code and we've had this argument before:
preventing bad code is not the goal of the language; encouraging good
code is.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From janssen at parc.com  Wed Jun 28 20:55:12 2006
From: janssen at parc.com (Bill Janssen)
Date: Wed, 28 Jun 2006 11:55:12 PDT
Subject: [Python-Dev] doc for new restricted execution design for Python
In-Reply-To: Your message of "Wed, 28 Jun 2006 09:59:51 PDT."
	<bbaeab100606280959l1eed1563q429ba39cb73f9615@mail.gmail.com> 
Message-ID: <06Jun28.115520pdt."58641"@synergy1.parc.xerox.com>

> Yep, it would be.  Then again, Mark Hammond has already done a bunch of work
> for pyXPCOM, so getting Python compiled right into Firefox itself shouldn't
> be too bad.

Of course, that's the road Sun first went down with Java, and that
turned out not-so-well for them.  I think the plug-in approach may be
stronger (but admittedly more limited), as lots of plug-ins work with
many different browsers, thus encouraging page designers to actually
use them.

Bill

From martin at v.loewis.de  Wed Jun 28 20:59:24 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Wed, 28 Jun 2006 20:59:24 +0200
Subject: [Python-Dev] xturtle.py a replacement for turtle.py(!?)
In-Reply-To: <43aa6ff70606281003t591aa254t3b0250154785773e@mail.gmail.com>
References: <200606290116.38007.anthony@interlink.com.au>	<44A2A9EB.6050802@aon.at>
	<20060628094550.109C.JCARLSON@uci.edu>
	<43aa6ff70606281003t591aa254t3b0250154785773e@mail.gmail.com>
Message-ID: <44A2D18C.3000705@v.loewis.de>

Collin Winter wrote:
> While I have no opinion on Gregor's app, and while I fully agree that
> new language features and stdlib modules should generally stay out of
> bug-fix point releases, xturtle doesn't seem to rise to that level
> (and hence, those restrictions).

It's a stdlib module, even if no other stdlib modules depend on it;
try "import turtle".

In the specific case, the problem with adding it to 2.5 is that xturtle
is a huge rewrite, so ideally, the code should be reviewed before being
added. Given that this is a lot of code, nobody will have the time to
perform a serious review. It will be hard enough to find somebody to
review it for 2.6 - often, changes of this size take several years to
review (primarily because it is so specialized that only few people
even consider reviewing it).

Regards,
Martin

From andreas.raab at gmx.de  Wed Jun 28 21:02:05 2006
From: andreas.raab at gmx.de (Andreas Raab)
Date: Wed, 28 Jun 2006 12:02:05 -0700
Subject: [Python-Dev] doc for new restricted execution design for Python
In-Reply-To: <ca471dc20606281041j3998e105i4e875af77be8ea66@mail.gmail.com>
References: <fb6fbf560606270750l29c0c5a8s521f121518cbbc67@mail.gmail.com>	<bbaeab100606270847p6b210a86t88c091835e4ff501@mail.gmail.com>	<ee2a432c0606272044x38efccaexe4c32b97d6005ff6@mail.gmail.com>	<fb6fbf560606280642r166514d8uba8876b97b76a0e1@mail.gmail.com>	<ca471dc20606280949x7cb5a096q3b235bd0fc96a905@mail.gmail.com>	<bbaeab100606280957p2ec3d39aj1a847ae62c3962a6@mail.gmail.com>
	<ca471dc20606281041j3998e105i4e875af77be8ea66@mail.gmail.com>
Message-ID: <44A2D22D.4040101@gmx.de>

Guido van Rossum wrote:
> On 6/28/06, Brett Cannon <brett at python.org> wrote:
>> On 6/28/06, Guido van Rossum <guido at python.org> wrote:
>>> - File size should be rounded up to some block size (512 if you don't
>>> have filesystem specific information) before adding to the total.
>> Why?
> 
> Because that's how filesystems work. Allocations are in terms of block
> sizes. 1000 files of 1 byte take up the same space as 1000 files of
> 512 bytes (in most common filesystems anyway -- I think Reiserfs may
> be different).

Forgive me if I'm missing the obvious but shouldn't block size be taken 
into consideration when setting the cap rather than for testing the file 
size? E.g., what happens if you specify a cap of 100 bytes, your block 
size is 512 and the user tries to write a single byte? Fail, because 
it's logically allocation 512 and the cap is at 100? That seems 
backwards to me since it would require that the app first determine the 
block size of the OS it's running on before it can even set a "working" 
cap.

And if the interpreter implicitly rounds the cap up to block size, then 
there isn't much of a point in specifying the number of bytes to begin 
with - perhaps use number of blocks instead?

In any case, I'd argue that if you allow the cap to be set at any 
specific number of bytes, the interpreter should honor *exactly* that 
number of bytes, blocks or not.

Cheers,
   - Andreas

From foom at fuhm.net  Wed Jun 28 21:29:11 2006
From: foom at fuhm.net (James Y Knight)
Date: Wed, 28 Jun 2006 15:29:11 -0400
Subject: [Python-Dev] ImportWarning flood
In-Reply-To: <FEDC1056-DB9A-449C-824F-C94CC08ECCAB@fuhm.net>
References: <20060624172920.6639.qmail@web31512.mail.mud.yahoo.com>
	<FEDC1056-DB9A-449C-824F-C94CC08ECCAB@fuhm.net>
Message-ID: <A0292BC1-BD87-473E-ACE1-65C0B563F32B@fuhm.net>


On Jun 25, 2006, at 9:47 PM, James Y Knight wrote:

>
> On Jun 24, 2006, at 1:29 PM, Ralf W. Grosse-Kunstleve wrote:
>
>> --- Jean-Paul Calderone <exarkun at divmod.com> wrote:
>>> I think it is safe to say that Twisted is more widely used than
>>> anything
>>> Google has yet released.  Twisted also has a reasonably plausible
>>> technical reason to dislike this change.  Google has a bunch of
>>> engineers
>>> who, apparently, cannot remember to create an empty __init__.py
>>> file in
>>> some directories sometimes.
>>
>> Simply adding a note to the ImportError message would solve this
>> problem "just
>> in time":
>>
>>>>> import mypackage.foo
>> Traceback (most recent call last):
>>   File "<stdin>", line 1, in ?
>> ImportError: No module named mypackage.foo
>>     Note that subdirectories are searched for imports only if they contain
>>     an __init__.py file: http://www.python.org/doc/essays/packages.html
>>
>
> I also dislike the warning solution. Making the ImportError message
> more verbose seems like a much nicer solution.

I just found another reason to dislike the warnings: my homedir on  
one machine has a lot of random directories in it. One of them is  
named "readline". Every time I run python 2.5, it now helpfully notes:
   sys:1: ImportWarning: Not importing directory 'readline': missing  
__init__.py

It used to be the case that it was very unlikely that running python  
in your homedir would cause issues. Even though the current directory  
is on the default pythonpath, you needed to have either a file ending  
in .py or a directory with an __init__.py with the same name as a  
python module to cause problems. And that is generally unlikely to  
happen. Now, however, you get warnings just by having _any_ directory  
in your CWD with the same name as a python module. That's much more  
likely to happen; I can't be the only one who will have this issue.

I'd like to suggest the simple solution quoted above with a constant  
string added to the ImportError message would be good enough, and  
better than the current warning situation. Clearly it would be even  
better if someone did the complicated thing of keeping track of which  
directories would have been used had they had __init__.py files in  
them, and appending _that_ to the eventual ImportError message, but I  
don't think removing the warning should be held up on doing that.
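
(In the meantime, anyone bitten by this can silence just that category; a
sketch assuming 2.5's ImportWarning class:)

    import warnings

    # Ignore only ImportWarning, leaving every other warning alone.  The
    # command-line equivalent would be: python -W "ignore::ImportWarning"
    warnings.filterwarnings("ignore", category=ImportWarning)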


James


From brett at python.org  Wed Jun 28 21:45:39 2006
From: brett at python.org (Brett Cannon)
Date: Wed, 28 Jun 2006 12:45:39 -0700
Subject: [Python-Dev] doc for new restricted execution design for Python
In-Reply-To: <5094038299185356955@unknownmsgid>
References: <bbaeab100606280959l1eed1563q429ba39cb73f9615@mail.gmail.com>
	<5094038299185356955@unknownmsgid>
Message-ID: <bbaeab100606281245w1a19f66bub36f732b9ed7991e@mail.gmail.com>

On 6/28/06, Bill Janssen <janssen at parc.com> wrote:
>
> > Yep, it would be.  Then again, Mark Hammond has already done a bunch of work
> > for pyXPCOM, so getting Python compiled right into Firefox itself shouldn't
> > be too bad.
>
> Of course, that's the road Sun first went down with Java, and that
> turned out not-so-well for them.  I think the plug-in approach may be
> stronger (but admittedly more limited), as lots of plug-ins work with
> many different browsers, thus encouraging page designers to actually
> use them.


Right.  As I have said, for widespread use an extension will be needed.

-Brett

From glingl at aon.at  Wed Jun 28 22:05:19 2006
From: glingl at aon.at (Gregor Lingl)
Date: Wed, 28 Jun 2006 22:05:19 +0200
Subject: [Python-Dev] xturtle.py a replacement for turtle.py(!?)
 ATTENTION PLEASE!
In-Reply-To: <44A2D18C.3000705@v.loewis.de>
References: <200606290116.38007.anthony@interlink.com.au>	<44A2A9EB.6050802@aon.at>	<20060628094550.109C.JCARLSON@uci.edu>	<43aa6ff70606281003t591aa254t3b0250154785773e@mail.gmail.com>
	<44A2D18C.3000705@v.loewis.de>
Message-ID: <44A2E0FF.2010808@aon.at>

Martin v. L?wis schrieb:
> Collin Winter wrote:
>   
>> While I have no opinion on Gregor's app, and while I fully agree that
>> new language features and stdlib modules should generally stay out of
>> bug-fix point releases, xturtle doesn't seem to rise to that level
>> (and hence, those restrictions).
>>     
>
> It's a stdlib module, even if no other stdlib modules depend on it;
> try "import turtle".
>
> In the specific case, the problem with adding it to 2.5 is that xturtle
> is a huge rewrite, so ideally, the code should be reviewed before being
> added. Given that this is a lot of code, nobody will have the time to
> perform a serious review. It will be hard enough to find somebody to
> review it for 2.6 - often, changes of this size take several years to
> review (primarily because it is so specialized that only few people
> even consider reviewing it).
>   
Sorry Martin, but to me this does not seem to be the right way to manage things.
We have a revised turtle.py in Python 2.5b1.

Please try this example (as I just did):

IDLE 1.2b1      ==== No Subprocess ====
 >>> from turtle import *
 >>> begin_fill()
 >>> circle(100,90)  # observe the turtle
 >>> backward(200)
 >>> circle(100,90)
 >>> color("red")
 >>> end_fill()
IDLE internal error in runcode()
Traceback (most recent call last):
  File "<pyshell#6>", line 1, in <module>
    end_fill()
  File "C:\Python25\lib\lib-tk\turtle.py", line 724, in end_fill
    def end_fill(): _getpen.end_fill()
AttributeError: 'function' object has no attribute 'end_fill'
 >>>

An error occurs, because in line 724 it should read
def end_fill(): _getpen().end_fill()

(Who reviewed it? This is a _newly_added_ function -
did nobody try it out yet? Incredible!!)

I corrected it and did:

IDLE 1.2b1      ==== No Subprocess ====
 >>> from turtle import *
 >>> begin_fill()
 >>> circle(100,90)
 >>> backward(200)
 >>> circle(100,90)
 >>> color("red")
 >>> end_fill()
 >>>

What a shame!! An immanent bug, persistent
for years now!

Is this what Anthony Baxter calls
"the most solid Python release ever"?

In contrast to this:

IDLE 1.2b1      ==== No Subprocess ====
 >>> from xturtle import *
 >>> begin_fill()
 >>> circle(100,90)
 >>> backward(200)
 >>> circle(100,90)
 >>> color("red")
 >>> end_fill()
 >>>

works correctly and the turtle travels along the arcs
with the same speed as along the straight lines.
Bugs like the one I detected above (by chance) cannot occur in the code of
my xturtle, because I don't have to type the definitions of those functions
into the script by hand. Instead they are generated automatically from the
corresponding methods of RawPen and Pen respectively.

And aren't 25+ bug-free sample scripts of great variety
and broad range in complexity to be considered a more
reliable proof of at least usability than the procedure
you applied?

My xturtle is certainly not bugfree. But it's (also
certainly) not worse than turtle.py and way more versatile.

A more courageous and less bureaucratic approach to the problem
would be appropriate, perhaps combined with some imagination.
For example: put both turtle.py and xturtle.py into beta2 and
see which one better stands the (beta) test of time. Or perhaps you have
an even better idea!

Regards,
Gregor

P.S.: If this posting doesn't move points of view, at least
it reveals one fixable bug in turtle.py (albeit also one unfixable!)







> Regards,
> Martin
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/glingl%40aon.at
>
>
>   


From aahz at pythoncraft.com  Wed Jun 28 22:14:49 2006
From: aahz at pythoncraft.com (Aahz)
Date: Wed, 28 Jun 2006 13:14:49 -0700
Subject: [Python-Dev] ImportWarning flood
In-Reply-To: <A0292BC1-BD87-473E-ACE1-65C0B563F32B@fuhm.net>
References: <20060624172920.6639.qmail@web31512.mail.mud.yahoo.com>
	<FEDC1056-DB9A-449C-824F-C94CC08ECCAB@fuhm.net>
	<A0292BC1-BD87-473E-ACE1-65C0B563F32B@fuhm.net>
Message-ID: <20060628201449.GA3794@panix.com>

On Wed, Jun 28, 2006, James Y Knight wrote:
>
> I just found another reason to dislike the warnings: my homedir on  
> one machine has a lot of random directories in it. One of them is  
> named "readline". Every time I run python 2.5, it now helpfully notes:
>    sys:1: ImportWarning: Not importing directory 'readline': missing  
> __init__.py
> 
> It used to be the case that it was very unlikely that running python  
> in your homedir would cause issues. Even though the current directory  
> is on the default pythonpath, you needed to have either a file ending  
> in .py or a directory with an __init__.py with the same name as a  
> python module to cause problems. And that is generally unlikely to  
> happen. Now, however, you get warnings just by having _any_ directory  
> in your CWD with the same name as a python module. That's much more  
> likely to happen; I can't be the only one who will have this issue.

Oooooo!  Yuck!  I am now +1 for reverting the warning.
-- 
Aahz (aahz at pythoncraft.com)           <*>         http://www.pythoncraft.com/

"I saw `cout' being shifted "Hello world" times to the left and stopped
right there."  --Steve Gonedes

From trentm at activestate.com  Wed Jun 28 22:18:05 2006
From: trentm at activestate.com (Trent Mick)
Date: Wed, 28 Jun 2006 13:18:05 -0700
Subject: [Python-Dev] doc for new restricted execution design for Python
In-Reply-To: <bbaeab100606281054l49bc7442sd4d14ccb005f35e1@mail.gmail.com>
References: <bbaeab100606211733x3dd58226g15e2919e8e2cdd7e@mail.gmail.com>	<Pine.LNX.4.58.0606261937050.17937@server1.LFW.org>	<bbaeab100606261800x5949cb89h97424fc052e33534@mail.gmail.com>	<44A2BF2E.4020706@activestate.com>
	<bbaeab100606281054l49bc7442sd4d14ccb005f35e1@mail.gmail.com>
Message-ID: <44A2E3FD.4000608@activestate.com>

Brett Cannon wrote:
> > > The idea is that there be a separate Python interpreter per web
> > > browser page instance.
> 
> >     I think there may be scaling issues there. JavaScript isn't doing that
> >     is it, do you know? As well, that doesn't seem like it would translate
> >     well to sharing execution between separate chrome windows in a
> >     non-browser XUL/Mozilla-based app.
> 
> And if you don't think it is going to scale, how do you think it should 
> be done?

That was an ignorant response (I haven't read what you've suggested and 
really thought about it). Sorry for the unsubstantiated babbling.

To Bob's question on how much interpreter state *is* there: I don't 
know. Have you done any measuring of that, Brett?

Trent

-- 
Trent Mick
trentm at activestate.com

From fredrik at pythonware.com  Wed Jun 28 22:22:06 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Wed, 28 Jun 2006 22:22:06 +0200
Subject: [Python-Dev] xturtle.py a replacement for turtle.py(!?)
	ATTENTION PLEASE!
In-Reply-To: <44A2E0FF.2010808@aon.at>
References: <200606290116.38007.anthony@interlink.com.au>	<44A2A9EB.6050802@aon.at>	<20060628094550.109C.JCARLSON@uci.edu>	<43aa6ff70606281003t591aa254t3b0250154785773e@mail.gmail.com>	<44A2D18C.3000705@v.loewis.de>
	<44A2E0FF.2010808@aon.at>
Message-ID: <e7uod8$nnc$1@sea.gmane.org>

Gregor Lingl wrote:

> What a shame!! An immanent bug, persistent
> for years now!
> 
> Is this what Anthony Baxter calls
> "the most solid Python release ever"

do you really think stuff like this helps your cause ?

</F>


From brett at python.org  Wed Jun 28 22:33:39 2006
From: brett at python.org (Brett Cannon)
Date: Wed, 28 Jun 2006 13:33:39 -0700
Subject: [Python-Dev] doc for new restricted execution design for Python
In-Reply-To: <44A2E3FD.4000608@activestate.com>
References: <bbaeab100606211733x3dd58226g15e2919e8e2cdd7e@mail.gmail.com>
	<Pine.LNX.4.58.0606261937050.17937@server1.LFW.org>
	<bbaeab100606261800x5949cb89h97424fc052e33534@mail.gmail.com>
	<44A2BF2E.4020706@activestate.com>
	<bbaeab100606281054l49bc7442sd4d14ccb005f35e1@mail.gmail.com>
	<44A2E3FD.4000608@activestate.com>
Message-ID: <bbaeab100606281333q209eb5f1h787f2355ff81e4ce@mail.gmail.com>

On 6/28/06, Trent Mick <trentm at activestate.com> wrote:
>
> Brett Cannon wrote:
> > > > The idea is that there be a separate Python interpreter per web
> > > > browser page instance.
> >
> > >     I think there may be scaling issues there. JavaScript isn't doing that
> > >     is it, do you know? As well, that doesn't seem like it would translate
> > >     well to sharing execution between separate chrome windows in a
> > >     non-browser XUL/Mozilla-based app.
> >
> > And if you don't think it is going to scale, how do you think it should
> > be done?
>
> That was an ignorant response (I haven't read what you've suggested and
> really though about it). Sorry for the unsubstantiated babbling.
>
> To Bob's question on how much interpreter state *is* there: I don't
> know. Have you done any measuring of that, Brett?



Not yet; as of right now I just want a coherent security model since this
whole idea is dead in the water without it.  But I do know that interpreters
are basically an execution stack, a new sys module, and a new sys.modules.  It
isn't horrendously heavy.  And C extension modules are shared between them.

-Brett

From bob at redivi.com  Wed Jun 28 22:53:32 2006
From: bob at redivi.com (Bob Ippolito)
Date: Wed, 28 Jun 2006 13:53:32 -0700
Subject: [Python-Dev] xturtle.py a replacement for turtle.py(!?)
	ATTENTION PLEASE!
In-Reply-To: <44A2E0FF.2010808@aon.at>
References: <200606290116.38007.anthony@interlink.com.au>	<44A2A9EB.6050802@aon.at>	<20060628094550.109C.JCARLSON@uci.edu>	<43aa6ff70606281003t591aa254t3b0250154785773e@mail.gmail.com>
	<44A2D18C.3000705@v.loewis.de> <44A2E0FF.2010808@aon.at>
Message-ID: <48949D68-5260-4D8D-BE28-D199E165DA8B@redivi.com>


On Jun 28, 2006, at 1:05 PM, Gregor Lingl wrote:

> Martin v. Löwis schrieb:
>> Collin Winter wrote:
>>
>>> While I have no opinion on Gregor's app, and while I fully agree  
>>> that
>>> new language features and stdlib modules should generally stay  
>>> out of
>>> bug-fix point releases, xturtle doesn't seem to rise to that level
>>> (and hence, those restrictions).
>>>
>>
>> It's a stdlib module, even if no other stdlib modules depend on it;
>> try "import turtle".
>>
>> In the specific case, the problem with adding it to 2.5 is that  
>> xturtle
>> is a huge rewrite, so ideally, the code should be reviewed before  
>> being
>> added. Given that this is a lot of code, nobody will have the time to
>> perform a serious review. It will be hard enough to find somebody to
>> review it for 2.6 - often, changes of this size take several years to
>> review (primarily because it is so specialized that only few people
>> even consider reviewing it).
>>
> Sorry Martin, but to me this seems not to be the right way to  
> manage things.
> We have turtle.py revised in Python2.5b1
>
> Please try this example (as I  just did) :
>
> IDLE 1.2b1      ==== No Subprocess ====
> >>> from turtle import *
> >>> begin_fill()
> >>> circle(100,90)  # observe the turtle
> >>> backward(200)
> >>> circle(100,90)
> >>> color("red")
> >>> end_fill()
> IDLE internal error in runcode()
> Traceback (most recent call last):
>  File "<pyshell#6>", line 1, in <module>
>    end_fill()
>  File "C:\Python25\lib\lib-tk\turtle.py", line 724, in end_fill
>    def end_fill(): _getpen.end_fill()
> AttributeError: 'function' object has no attribute 'end_fill'
> >>>
>
> An error occurs, because in line 724 it should read
> def end_fill(): _getpen().end_fill()

File a patch; this is a bug fix and should definitely be appropriate  
for inclusion before the release of Python 2.5!

-bob


From guido at python.org  Wed Jun 28 23:00:34 2006
From: guido at python.org (Guido van Rossum)
Date: Wed, 28 Jun 2006 14:00:34 -0700
Subject: [Python-Dev] xturtle.py a replacement for turtle.py(!?)
	ATTENTION PLEASE!
In-Reply-To: <48949D68-5260-4D8D-BE28-D199E165DA8B@redivi.com>
References: <200606290116.38007.anthony@interlink.com.au>
	<44A2A9EB.6050802@aon.at> <20060628094550.109C.JCARLSON@uci.edu>
	<43aa6ff70606281003t591aa254t3b0250154785773e@mail.gmail.com>
	<44A2D18C.3000705@v.loewis.de> <44A2E0FF.2010808@aon.at>
	<48949D68-5260-4D8D-BE28-D199E165DA8B@redivi.com>
Message-ID: <ca471dc20606281400x15455529m38f54310d48915d6@mail.gmail.com>

It was already patched by the other Georg. Thanks for the quick fix, georgbot!

--Guido

On 6/28/06, Bob Ippolito <bob at redivi.com> wrote:
>
> On Jun 28, 2006, at 1:05 PM, Gregor Lingl wrote:
>
> > Martin v. Löwis schrieb:
> >> Collin Winter wrote:
> >>
> >>> While I have no opinion on Gregor's app, and while I fully agree
> >>> that
> >>> new language features and stdlib modules should generally stay
> >>> out of
> >>> bug-fix point releases, xturtle doesn't seem to rise to that level
> >>> (and hence, those restrictions).
> >>>
> >>
> >> It's a stdlib module, even if no other stdlib modules depend on it;
> >> try "import turtle".
> >>
> >> In the specific case, the problem with adding it to 2.5 is that
> >> xturtle
> >> is a huge rewrite, so ideally, the code should be reviewed before
> >> being
> >> added. Given that this is a lot of code, nobody will have the time to
> >> perform a serious review. It will be hard enough to find somebody to
> >> review it for 2.6 - often, changes of this size take several years to
> >> review (primarily because it is so specialized that only few people
> >> even consider reviewing it).
> >>
> > Sorry Martin, but to me this seems not to be the right way to
> > manage things.
> > We have turtle.py revised in Python2.5b1
> >
> > Please try this example (as I  just did) :
> >
> > IDLE 1.2b1      ==== No Subprocess ====
> > >>> from turtle import *
> > >>> begin_fill()
> > >>> circle(100,90)  # observe the turtle
> > >>> backward(200)
> > >>> circle(100,90)
> > >>> color("red")
> > >>> end_fill()
> > IDLE internal error in runcode()
> > Traceback (most recent call last):
> >  File "<pyshell#6>", line 1, in <module>
> >    end_fill()
> >  File "C:\Python25\lib\lib-tk\turtle.py", line 724, in end_fill
> >    def end_fill(): _getpen.end_fill()
> > AttributeError: 'function' object has no attribute 'end_fill'
> > >>>
> >
> > An error occurs, because in line 724 it should read
> > def end_fill(): _getpen().end_fill()
>
> File a patch, this is a bug fix and should definitely be appropriate
> for inclusion before the release of Python 2.5!
>
> -bob
>


-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From martin at v.loewis.de  Wed Jun 28 23:17:48 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Wed, 28 Jun 2006 23:17:48 +0200
Subject: [Python-Dev] xturtle.py a replacement for turtle.py(!?)
 ATTENTION PLEASE!
In-Reply-To: <44A2E0FF.2010808@aon.at>
References: <200606290116.38007.anthony@interlink.com.au>	<44A2A9EB.6050802@aon.at>	<20060628094550.109C.JCARLSON@uci.edu>	<43aa6ff70606281003t591aa254t3b0250154785773e@mail.gmail.com>
	<44A2D18C.3000705@v.loewis.de> <44A2E0FF.2010808@aon.at>
Message-ID: <44A2F1FC.4010307@v.loewis.de>

Gregor Lingl wrote:
> Sorry Martin, but to me this seems not to be the right way to manage
> things.

As you explain later, this is precisely the right way; it is unfortunate
that it isn't always followed.

> (Who reviewed it? This is a _newly_added_ function -
> did nobody try it out yet? Incredible!!)

Apparently not. Thanks for pointing that out; Georg (who committed the
patch originally) just fixed it in r47151.

This illustrates the main point: Even small changes need careful review.
Much more so large changes.

[turtle does not just fill the shape, but the entire boundary polygon]
> What a shame!! An immanent bug, persistent
> for years now!

If the bug had existed for years, somebody could have contributed a
patch.

> Bugs like the one I detected above (by chance) cannot occur in the code of
> my xturtle, because I don't have to type the definitions of those
> functions into the script by hand. Instead they are generated automatically
> from the corresponding methods of RawPen and Pen respectively.

That's all good and well. It still needs to be reviewed.

> And aren't 25+ bug-free sample scripts of great variety
> and broad range in complexity a more reliable proof of
> at least usability than the procedure you applied?

It's not only about finding bugs. It's also about studying the
consistency of the new API, and so on.

As for "reliable proofs": An automatic test suite for turtle.py
would be a good thing to have.

> A more courageous and less bureaucratic approach to the problem
> would be appropriate. Perhaps combined with some fantasy.

This bureaucracy has worked fine all these years, and in cases
where it was relaxed, we had to regret the changes we accepted
more often than not (just like the bug you discovered: the
patch should not have been accepted without test cases).

> P.S.: If this posting doesn't move points of view, at least
> it reveals one fixable bug in turtle.py (albeit also one unfixable!)

The approach used in xturtle (i.e. represent circles as polylines)
could also be used for turtle.py, no?
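
(For illustration only, a rough sketch of approximating a circle as a
polyline with the current turtle module - the helper name and the step
count are my own assumptions, nothing from xturtle itself:)

    import math
    import turtle

    def circle_polyline(radius, steps=36):
        # approximate a circle by many short straight segments
        step_len = 2 * math.pi * radius / steps
        for _ in range(steps):
            turtle.forward(step_len)
            turtle.left(360.0 / steps)

    circle_polyline(100)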

Regards,
Martin

From python at rcn.com  Wed Jun 28 23:10:31 2006
From: python at rcn.com (python at rcn.com)
Date: Wed, 28 Jun 2006 17:10:31 -0400 (EDT)
Subject: [Python-Dev] xturtle.py
Message-ID: <20060628171031.AWH96240@ms03.lnh.mail.rcn.net>

[Collin Winter]
>> While I have no opinion on Gregor's app, and while I fully agree that
>> new language features and stdlib modules should generally stay out of
>> bug-fix point releases, xturtle doesn't seem to rise to that level
>> (and hence, those restrictions).

[Martin]
> It's a stdlib module, even if no other stdlib modules depend on it;
> try "import turtle".
>
> In the specific case, the problem with adding it to 2.5 is that xturtle
> is a huge rewrite, so ideally, the code should be reviewed before being
> added. Given that this is a lot of code, nobody will have the time to
> perform a serious review. It will be hard enough to find somebody to
> review it for 2.6 - often, changes of this size take several years to
> review (primarily because it is so specialized that only few people
> even consider reviewing it).

As a compromise, we could tuck Gregor Lingl's module under 
the Tools directory. This makes the tool more readily available 
for student use and gives it a more liberal zone in which to evolve 
than if it were in the standard library.

One other thought -- at PyCon, I talked with a group of 
educators.  While they needed some minor tweaks to the Turtle 
module, there were no requests for an extensive rewrite or a 
fatter API.  The name of the game was to have a single module 
with a minimal toolset supporting a few simple programs, just 
rich enough to inspire, but small enough to fit into tiny slots in 
the curriculum (one sixth-grade class is allocated three 55-minute 
sessions to learn programming).


Raymond

From martin at v.loewis.de  Wed Jun 28 23:23:04 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Wed, 28 Jun 2006 23:23:04 +0200
Subject: [Python-Dev] xturtle.py a replacement for turtle.py(!?)
 ATTENTION PLEASE!
In-Reply-To: <44A2E0FF.2010808@aon.at>
References: <200606290116.38007.anthony@interlink.com.au>	<44A2A9EB.6050802@aon.at>	<20060628094550.109C.JCARLSON@uci.edu>	<43aa6ff70606281003t591aa254t3b0250154785773e@mail.gmail.com>
	<44A2D18C.3000705@v.loewis.de> <44A2E0FF.2010808@aon.at>
Message-ID: <44A2F338.8060506@v.loewis.de>

Gregor Lingl wrote:
> For example: put turtle.py and  xturtle.py both into beta2 and
> see which one stands better the (beta)test of time. Or perhaps you have
> an even better idea!

As a compromise, we could put an ad into the turtle document (a "see
also" link).

Regards,
Martin

From rrr at ronadam.com  Wed Jun 28 23:24:22 2006
From: rrr at ronadam.com (Ron Adam)
Date: Wed, 28 Jun 2006 16:24:22 -0500
Subject: [Python-Dev] once [was: Simple Switch statementZ]
In-Reply-To: <ca471dc20606280940x3871a0a5i23c33e315a3d520a@mail.gmail.com>
References: <fb6fbf560606280912v127b8ccal6f65c7fa4e2b2ab5@mail.gmail.com>
	<ca471dc20606280940x3871a0a5i23c33e315a3d520a@mail.gmail.com>
Message-ID: <44A2F386.7080003@ronadam.com>

> I believe at least one poster has pointed out that 'once' (if defined
> suitably) could be used as a better way to do this:
> 
>   def index_functions(n):
>     return [(lambda: once i) for i in range(n)]
> 
> But delaying the evaluation of the once argument until the function is
> called would break this, since none of these functions are called
> until after the loop is over, so the original bug would be back.


I've been trying to sort out the different terms once, const, and 
static.  Below is what feels right to me.  Not that it is necessarily 
right, but it is how I think I would interpret them if I were new to 
Python, and what would be easiest to understand.

The "once" below isn't what is being discussed but it seems to me what 
the word implies.


once = Evaluate an expression once and use that value in place of the 
expression if the same line is executed again.

     for n in range(10):
        print n                    # print 0 through 9

     for n in range(10):
        print (once n)             # prints 0 ten times

    a = (once 3 * pi)              # replaces 3 * pi with value

    b = i + (once sum(range(10)))  # evaluate 'sum()' only once
                                   # use the result many times
                                   # in a loop


const = Bind a name to a value and protect it from further change in the 
current scope at execution time.  This protects the name, but not the 
object.  A constant mutable is still mutable.  The name just can't be 
rebound to another object.

    def foo(i):
       i += 1        # this is ok
       const i       # it becomes locked at this point
       i = 2         # this gives an exception


static = Evaluate an expression at function compile time.  Any values in 
the expression need to be known at function compile time.  They can be 
static names in the current scope previously evaluated.

    a, b = 1, 2
    def foo(i):
       static j = a+b
       static k = j*2

       k = 25           # ok if not also const
       const j          # protect it from further change
       j = 12           # gives an exception


The term static does seem to suggest lack of change, so it could also 
have the const property by default.  If it were allowed to be changed, 
then would it keep the changed value on the next call?  Probably not a 
good idea for the use cases being discussed.


So, given the above, only the static version solves the lambda loop above 
and returns a list of functions that return the values 0 through n-1.
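
(For comparison, the effect I have in mind can be approximated in today's
Python with def-time evaluation of default arguments - just a sketch, not
part of any proposal:)

    def index_functions(n):
        # the default argument is evaluated at def-time, so each
        # lambda captures the current value of i from the loop
        return [(lambda i=i: i) for i in range(n)]

    funcs = index_functions(5)
    print [f() for f in funcs]    # prints [0, 1, 2, 3, 4]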

I think all three of these properties are useful, but I don't think we 
need all three of them.


Cheers,
    Ron

(* I'll be away from my computer for about a week after tomorrow morning.)



From g.brandl at gmx.net  Wed Jun 28 23:34:04 2006
From: g.brandl at gmx.net (Georg Brandl)
Date: Wed, 28 Jun 2006 23:34:04 +0200
Subject: [Python-Dev] xturtle.py a replacement for turtle.py(!?)
	ATTENTION PLEASE!
In-Reply-To: <ca471dc20606281400x15455529m38f54310d48915d6@mail.gmail.com>
References: <200606290116.38007.anthony@interlink.com.au>	<44A2A9EB.6050802@aon.at>
	<20060628094550.109C.JCARLSON@uci.edu>	<43aa6ff70606281003t591aa254t3b0250154785773e@mail.gmail.com>	<44A2D18C.3000705@v.loewis.de>
	<44A2E0FF.2010808@aon.at>	<48949D68-5260-4D8D-BE28-D199E165DA8B@redivi.com>
	<ca471dc20606281400x15455529m38f54310d48915d6@mail.gmail.com>
Message-ID: <e7uskc$76i$1@sea.gmane.org>

Guido van Rossum wrote:
> It was already patched by the other Georg. Thanks for the quick fix, georgbot!

My pleasure, even if there's a difference between "Georg" and "Gregor" ;)

cheers,
Georg


From martin at v.loewis.de  Wed Jun 28 23:39:49 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Wed, 28 Jun 2006 23:39:49 +0200
Subject: [Python-Dev] xturtle.py
In-Reply-To: <20060628171031.AWH96240@ms03.lnh.mail.rcn.net>
References: <20060628171031.AWH96240@ms03.lnh.mail.rcn.net>
Message-ID: <44A2F725.9070009@v.loewis.de>

python at rcn.com wrote:
> As a compromise. we could tack Gregor Lingl's module under 
> the Tools directory. This makes the tool more readily available 
> for student use and allows it a more liberal zone to evolve than 
> if it were in the standard library.

That could also work. See my other compromise proposal: advertising
it in the docs.

> One other thought -- at PyCon, I talked with a group of 
> educators.  While they needed some minor tweaks to the Turtle 
> module, there were no requests for an extensive rewrite or a 
> fatter API.  The name of the game was to have a single module 
> with a minimal toolset supporting a few simple programs, just 
> rich enough to inspire, but small enough to fit into tiny slots in 
> the curriculum (one sixth grade class is allocated three 55 
> minute sessions to learn programming).

Thanks for the report. xturtle does provide a fatter API; it goes
up from 50 turtle functions in turtle.py to 93 in xturtle.py
(counting with len([s for s in dir(turtle) if 'a' < s <'z']) - I
think turtle should grow an __all__ attribute).
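
(For reproducibility, the counting expression as a small script, plus one
possible - purely hypothetical - way turtle.py could spell an __all__:)

    import turtle

    # the counting expression from above, as runnable code
    public = [s for s in dir(turtle) if 'a' < s < 'z']
    print len(public)

    # inside turtle.py itself, an __all__ could be derived roughly like:
    # __all__ = [name for name in dir() if not name.startswith('_')]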

Regards,
Martin

From glingl at aon.at  Thu Jun 29 00:37:49 2006
From: glingl at aon.at (Gregor Lingl)
Date: Thu, 29 Jun 2006 00:37:49 +0200
Subject: [Python-Dev] xturtle.py a replacement for
 turtle.py(!?)	ATTENTION PLEASE!
In-Reply-To: <e7uod8$nnc$1@sea.gmane.org>
References: <200606290116.38007.anthony@interlink.com.au>	<44A2A9EB.6050802@aon.at>	<20060628094550.109C.JCARLSON@uci.edu>	<43aa6ff70606281003t591aa254t3b0250154785773e@mail.gmail.com>	<44A2D18C.3000705@v.loewis.de>	<44A2E0FF.2010808@aon.at>
	<e7uod8$nnc$1@sea.gmane.org>
Message-ID: <44A304BD.5020403@aon.at>

Fredrik Lundh schrieb:
> Gregor Lingl wrote:
>
>   
>> What a shame!! An immanent bug, persistent
>> for years now!
>>
>> Is this what Anthony Baxter calls
>> "the most solid Python release ever"
>>     
>
> do you really think stuff like this helps your cause ?
>
>   
Perhaps it doesn't help the turtle cause. (I confess, I was a bit
upset; please excuse me!)

But please let me clarify one point.

I made xturtle.py and that was a big effort. And I offer it to replace
turtle.py. I do this because I'm a Python enthusiast and I want a better
Python. (And I know very well that my contribution is rather marginal).
We all, I think, have this motive. And of course it was my
fault to submit it too late.

So, if you can't accept that offer - now, or even ever - because it 
contradicts your rules, that's OK. But it's not 'my cause'. I conceive 
it to be the community's cause.

I, for my part, can and will use xturtle.py, knowing and having the 
experience that it is far superior to turtle.py. So I have no problem. 
And I'll offer it for download from the xturtle webpage or from wherever 
you suggest. So it will be freely available. (Perhaps a SourceForge 
project would be appropriate. Give me your advice, please.)

The only point is that it leaves Python's turtle.py an (imho) 
unsatisfactory solution.
See for instance Vern Ceder's judgment:
http://mail.python.org/pipermail/edu-sig/2006-June/006625.html

Regards,
Gregor

Final remark: I know that my English is not very good, so I feel that I 
possibly don't have complete control over the 'undertones' of my writing. 
If somebody feels offended, please excuse me; that was not my intent.




> </F>
>


From greg.ewing at canterbury.ac.nz  Thu Jun 29 01:32:33 2006
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Thu, 29 Jun 2006 11:32:33 +1200
Subject: [Python-Dev] PEP 3103: A Switch/Case Statement
In-Reply-To: <44A2363D.8090300@gmail.com>
References: <ca471dc20606261223g55126e1fg457c30049dbbf34b@mail.gmail.com>
	<44A13461.508@gmail.com>
	<ca471dc20606270835g65d11893m1c069f9c0d003ba7@mail.gmail.com>
	<44A2363D.8090300@gmail.com>
Message-ID: <44A31191.3090705@canterbury.ac.nz>

Nick Coghlan wrote:

> By 'current namespace' I really do mean locals() - the cell objects themselves
> would be local variables from the point of view of the currently executing code.

This is wrong. Cells are *parameters* implicitly passed
in by the calling function. They may temporarily be
referenced from the current scope, but their "home"
has to be in an outer scope, otherwise they won't
survive between calls.

--
Greg


From greg.ewing at canterbury.ac.nz  Thu Jun 29 01:34:58 2006
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Thu, 29 Jun 2006 11:34:58 +1200
Subject: [Python-Dev] PEP 3103: A Switch/Case Statement
In-Reply-To: <44A2380F.6000503@acm.org>
References: <ca471dc20606271020s1ee6d48q324a0d0ef84e096e@mail.gmail.com>
	<44A21934.40801@acm.org> <20060627233313.1090.JCARLSON@uci.edu>
	<44A2380F.6000503@acm.org>
Message-ID: <44A31222.7070802@canterbury.ac.nz>

Talin wrote:

> The case -> sub mapping doesn't need to be defined every time - that's 
> the point, you as the programmer decide when and how to construct the 
> dictionary,

Then you seem to be proposing a variation on the constant-only
case option, with a more convoluted control flow.

--
Greg

From jimjjewett at gmail.com  Thu Jun 29 01:36:58 2006
From: jimjjewett at gmail.com (Jim Jewett)
Date: Wed, 28 Jun 2006 19:36:58 -0400
Subject: [Python-Dev] xturtle.py
Message-ID: <fb6fbf560606281636j41f0081dnffd142f301a51732@mail.gmail.com>

Raymond wrote:

> One other thought -- at PyCon, I talked with a group of
> educators.  While they needed some minor tweaks to the Turtle
> module, there were no requests for an extensive rewrite or a
> fatter API.  The name of the game was to have a single module
> with a minimal toolset supporting a few simple programs, just
> rich enough to inspire, but small enough to fit into tiny slots in
> the curriculum (one sixth grade class is allocated three 55
> minute sessions to learn programming).

This argues against xturtle as it stands today.

By all means, mention it in the docs as a possibly superior
replacement that people can install themselves.

Consider it for 2.6.

But give it some time to mature before freezing it into the stdlib.

(1)  The API got much bigger.
(2)  There are minor backwards compatibility problems, such as no longer
re-exporting math.*
(3)  The auto-generation of code is clever, but probably not the best
example to start with when teaching a raw beginner.

I think that by 2.6, it probably will be ready to replace the existing turtle.py.

But I also think that if it goes in today, there will be at least a
few decisions that we regret.  These are much easier to fix while it
is still an independent project.

-jJ

From greg.ewing at canterbury.ac.nz  Thu Jun 29 02:01:43 2006
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Thu, 29 Jun 2006 12:01:43 +1200
Subject: [Python-Dev] doc for new restricted execution design for Python
In-Reply-To: <fb6fbf560606280642r166514d8uba8876b97b76a0e1@mail.gmail.com>
References: <fb6fbf560606270750l29c0c5a8s521f121518cbbc67@mail.gmail.com>
	<bbaeab100606270847p6b210a86t88c091835e4ff501@mail.gmail.com>
	<ee2a432c0606272044x38efccaexe4c32b97d6005ff6@mail.gmail.com>
	<fb6fbf560606280642r166514d8uba8876b97b76a0e1@mail.gmail.com>
Message-ID: <44A31867.8070108@canterbury.ac.nz>

Jim Jewett wrote:

> IMHO, I would prefer that it limit disk consumption; a deleted or
> overwritten file would not count against the process, but even a
> temporary spike would need to be less than the cap.

The problem is that there's no easy way to reliably measure
disk consumption by a particular process, particularly on
Unix. For example, os.unlink() doesn't necessarily free
the space used by a file -- there could be other links to
it, or the same or another process may hold another file
descriptor referencing it.

Another problem is that Unix files can have "holes" in
them, e.g. if you create a file, seek to position
1000000, and write a byte, you're not using a megabyte
of disk.
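
(A quick illustration, assuming a Unix filesystem that supports holes;
the filename is made up:)

    import os

    f = open('sparse.dat', 'wb')
    f.seek(1000000)        # jump far past the end of the empty file
    f.write('\0')          # write a single byte at offset 1000000
    f.close()

    # stat reports the nominal size...
    print os.path.getsize('sparse.dat')   # 1000001
    # ...but "du sparse.dat" typically shows only a block or two
    # actually allocated, which is what makes accounting tricky.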

Accounting for all these possibilities reliably would
be very complicated, and maybe even impossible to get
exactly right.

--
Greg

From greg.ewing at canterbury.ac.nz  Thu Jun 29 02:49:20 2006
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Thu, 29 Jun 2006 12:49:20 +1200
Subject: [Python-Dev] xturtle.py a replacement for turtle.py(!?)
In-Reply-To: <44A2D18C.3000705@v.loewis.de>
References: <200606290116.38007.anthony@interlink.com.au>
	<44A2A9EB.6050802@aon.at> <20060628094550.109C.JCARLSON@uci.edu>
	<43aa6ff70606281003t591aa254t3b0250154785773e@mail.gmail.com>
	<44A2D18C.3000705@v.loewis.de>
Message-ID: <44A32390.8050600@canterbury.ac.nz>

Martin v. Löwis wrote:
> xturtle

BTW, I'm not sure if 'xturtle' is such a good name.
There's a tradition of X Windows executables having
names starting with 'x', whereas this is presumably
platform-independent.

Maybe 'turtleplus' or something?

--
Greg

From martin at v.loewis.de  Thu Jun 29 03:21:12 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Thu, 29 Jun 2006 03:21:12 +0200
Subject: [Python-Dev] xturtle.py a replacement for turtle.py(!?)
In-Reply-To: <44A32390.8050600@canterbury.ac.nz>
References: <200606290116.38007.anthony@interlink.com.au>	<44A2A9EB.6050802@aon.at>
	<20060628094550.109C.JCARLSON@uci.edu>	<43aa6ff70606281003t591aa254t3b0250154785773e@mail.gmail.com>	<44A2D18C.3000705@v.loewis.de>
	<44A32390.8050600@canterbury.ac.nz>
Message-ID: <44A32B08.2090700@v.loewis.de>

Greg Ewing wrote:
> BTW, I'm not sure if 'xturtle' is such a good name.
> There's a tradition of X Windows executables having
> names starting with 'x', whereas this is presumably
> platform-independent.
> 
> Maybe 'turtleplus' or something?

When it goes into Python, it will be 'turtle'.

Regards,
Martin

From mhammond at skippinet.com.au  Thu Jun 29 08:28:16 2006
From: mhammond at skippinet.com.au (Mark Hammond)
Date: Thu, 29 Jun 2006 16:28:16 +1000
Subject: [Python-Dev] doc for new restricted execution design for Python
In-Reply-To: <B8D7CBAD-6C47-40F7-B64F-0D99763912F5@redivi.com>
Message-ID: <043001c69b45$35742290$100a0a0a@enfoldsystems.local>

Bob writes:

> I don't know how JavaScript is doing it yet.  The critical thing
> for me for this month was trying to come up with a security model.

I don't fully understand how JS does it either, certainly not in any detail.
I know that it uses the concept of a "principal" (the IDL file can be seen
at http://lxr.mozilla.org/seamonkey/source/caps/idl/nsIPrincipal.idl) and I
think that the absence of any principals == "trusted code".  I believe the
principals are obtained either from the JS stack, or from the "event source"
and a few other obscure exceptions.  There is also lots of C code littered
with explicit "is this code trusted" calls that makes implicit and explicit
javascript assumptions - not particularly deep assumptions, but they exist.

Cross-language calls will also need consideration.  JS will be able to
implicitly or explicitly call Python functions, which again will implicitly
or explicitly call JS functions.  Some of those frames will always be
unrestricted (ie, they are "components" - often written in C++, they can do
*anything*), but some will not.  We have managed to punt on that given that
Python is currently always unrestricted.

In the early stages though, Mozilla is happy to have Python enabled only for
trusted sources - that means it is limited to Mozilla extensions, or even a
completely new app using the Mozilla framework.  From a practical viewpoint,
that helps "mozilla the platform" more than it helps "firebox the browser"
etc.  This sandboxing would help the browser, which is great!

I'm confident that when the time comes we will get the ear of Brendan Eich
to help steer us forward.

Cheers,

Mark.


From mhammond at skippinet.com.au  Thu Jun 29 08:43:21 2006
From: mhammond at skippinet.com.au (Mark Hammond)
Date: Thu, 29 Jun 2006 16:43:21 +1000
Subject: [Python-Dev] doc for new restricted execution design for Python
Message-ID: <044401c69b47$50a14f50$100a0a0a@enfoldsystems.local>

I wrote:

> Bob writes:

Ack - sorry about that - the HTML mail confused me :)  It was Brett, of
course.

Mark


From fredrik at pythonware.com  Thu Jun 29 09:24:08 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Thu, 29 Jun 2006 09:24:08 +0200
Subject: [Python-Dev] xturtle.py
References: <20060628171031.AWH96240@ms03.lnh.mail.rcn.net>
Message-ID: <e7vv6o$ug3$1@sea.gmane.org>

python at rcn.com wrote:

> One other thought -- at PyCon, I talked with a group of
> educators.  While they needed some minor tweaks to the Turtle
> module, there were no requests for an extensive rewrite or a
> fatter API.  The name of the game was to have a single module
> with a minimal toolset supporting a few simple programs, just
> rich enough to inspire, but small enough to fit into tiny slots in
> the curriculum (one sixth grade class is allocated three 55
> minute sessions to learn programming).

which makes RUR-PLE a much better choice than turtles, really.

</F> 




From ncoghlan at gmail.com  Thu Jun 29 12:56:28 2006
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 29 Jun 2006 20:56:28 +1000
Subject: [Python-Dev] PEP 328 and PEP 338, redux
In-Reply-To: <ca471dc20606280945j5891a132g6aad2d1c49b6c986@mail.gmail.com>
References: <44A11EA1.1000605@iinet.net.au>	
	<ca471dc20606270808p4fe32945lf6019005bc3b054f@mail.gmail.com>	
	<44A22C81.5070701@gmail.com> <44A281B2.2080309@gmail.com>
	<ca471dc20606280945j5891a132g6aad2d1c49b6c986@mail.gmail.com>
Message-ID: <44A3B1DC.70105@gmail.com>

Guido van Rossum wrote:
> On 6/28/06, Nick Coghlan <ncoghlan at gmail.com> wrote:
> 
>> The workaround to replace __name__ with __module_name__ in order to 
>> enable
>> relative imports turned out to be pretty ugly, so I also worked up a 
>> patch to
>> import.c to get it to treat __module_name__ as an override for 
>> __name__ when
>> __name__ == '__main__'.
> 
> Ah, clever. +1.

In that case, I'll check it straight in. It was actually surprisingly easy to 
do, given how finicky import.c can get (this particular change was able to be 
handled entirely inside get_parent()).

>> So given a test_foo.py that started like this:
>>
>>    import unittest
>>    import ..foo
> 
> Um, that's not legal syntax last I looked. Leading dots can only be
> used in "from ... import". Did you change that too? I really hope you
> didn't!

It's OK - I just spelt it wrong in the example. It should have said "from .. 
import foo".

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------
             http://www.boredomandlaziness.org

From p.f.moore at gmail.com  Thu Jun 29 13:03:30 2006
From: p.f.moore at gmail.com (Paul Moore)
Date: Thu, 29 Jun 2006 12:03:30 +0100
Subject: [Python-Dev] xturtle.py a replacement for turtle.py(!?)
	ATTENTION PLEASE!
In-Reply-To: <44A304BD.5020403@aon.at>
References: <200606290116.38007.anthony@interlink.com.au>
	<44A2A9EB.6050802@aon.at> <20060628094550.109C.JCARLSON@uci.edu>
	<43aa6ff70606281003t591aa254t3b0250154785773e@mail.gmail.com>
	<44A2D18C.3000705@v.loewis.de> <44A2E0FF.2010808@aon.at>
	<e7uod8$nnc$1@sea.gmane.org> <44A304BD.5020403@aon.at>
Message-ID: <79990c6b0606290403l77204fb7p58a357c464e9ce6e@mail.gmail.com>

On 6/28/06, Gregor Lingl <glingl at aon.at> wrote:
> I made xturtle.py and that was a big effort. And I offer it to replace
> turtle.py. I do this because I'm a Python enthusiast and I want a better
> Python. (And I know very well that my contribution is rather marginal).
> We all, I think, have this motive. And of course it was my
> fault to submit it too late.

I am certainly interested in your module, and will have a look at it
in due course (to use it, not as a review for inclusion in Python).

> So, if you can't accept that offer - now, or even ever - , because it
> contradicts your rules, that's o.k. But it's not 'my cause'. I concieve
>  it to be the community's cause.

It's purely a timing issue. You offered the module just before the
Python 2.5 feature freeze. At that point in time, a brand new module
intended to replace an existing one is almost certainly going to be
rejected, simply from time constraints.

I see no reason at all why you can't offer the module for Python 2.6, however.

> The only point is, that it leaves Python's turtle.py an (imho)
> unsatisfactory solution.

Please be aware that *someone* will need to champion your module for
inclusion into Python 2.6. As Martin points out, review will require
some effort - particularly if the proposal is to replace turtle.py
rather than have it sit alongside it. It will be necessary to persuade one
of the core developers to care enough to spend time on this. They are
all doing this in their spare time, and have their own interests, which
will come first.

I know from experience that getting developer time is hard. It's
possible that it would help to leave the module as an external project
for a while, until enough other people in the Python community have
acknowledged its usefulness, and can testify that it gives them no
issues. At that point, the job of a reviewer becomes much easier
(there's a user base confirming most of the things a reviewer has to
consider) and so it is more likely that your module will be accepted.

Paul.

From ncoghlan at gmail.com  Thu Jun 29 13:15:20 2006
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 29 Jun 2006 21:15:20 +1000
Subject: [Python-Dev] PEP 328 and PEP 338, redux
In-Reply-To: <039201c69ad7$4dc91df0$d503030a@trilan>
References: <44A11EA1.1000605@iinet.net.au>	<5.1.1.6.0.20060627120926.02021fe0@sparrow.telecommunity.com>	<033001c69a07$44140890$d503030a@trilan>	<44A1CE4A.2000900@canterbury.ac.nz>	<ca471dc20606271846l2a0de4fevf667c60dec004039@mail.gmail.com>
	<039201c69ad7$4dc91df0$d503030a@trilan>
Message-ID: <44A3B648.2060104@gmail.com>

Giovanni Bajo wrote:
> Guido van Rossum wrote:
> 
>>>> This is where I wonder why the "def __main__()" PEP was rejected in
>>>> the first place. It would have solved this problem as well.
>>> Could this be reconsidered for Py3k?
>> You have a point.
> 
> AFAICT, there's nothing preventing it from being added in 2.6. It won't
> break existing code with the "if name == main" paradigm.

Writing modules that use the approach but want to work with both 2.5 and 2.6 
becomes a little more annoying - such modules have to finish with the coda:

if __name__ == '__main__':
   import sys
   if sys.version_info < (2, 6):
       sys.exit(__main__(sys.argv))

The interpreter would also have to be careful to ensure that a __main__ 
variable in the globals isn't the result of a module doing "import __main__".

The above two reasons are what got PEP 299 killed the first time (the thread 
is even linked from the PEP ;).

Another downside I've discovered recently is that calling sys.exit() prevents 
the use of a post-mortem debugging session triggered by -i or PYTHONINSPECT. 
sys.exit() crashes out of the entire process, so the post-mortem interactive 
session never even gets started.

The only real upside I can see to PEP 299 is that "main is a function" is more 
familiar to people coming from languages like C where you can't have run-time 
code at the top level of a module. Python's a scripting language though, and 
having run-time logic at the top level of a script is perfectly normal!

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------
             http://www.boredomandlaziness.org

From ncoghlan at gmail.com  Thu Jun 29 13:27:26 2006
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 29 Jun 2006 21:27:26 +1000
Subject: [Python-Dev] RFC: trunk checkins between now and 2.5 final
In-Reply-To: <200606290342.58447.anthony@interlink.com.au>
References: <200606290342.58447.anthony@interlink.com.au>
Message-ID: <44A3B91E.3010008@gmail.com>

Anthony Baxter wrote:
> Anyway, this is the current thinking. Am I being too dogmatic here? 
> Comments solicited.

Seems like a fair policy to me.

> As far as people to sign off on things, Neal, myself or Guido should 
> be the ones to do it. Course, Guido will probably decide he doesn't 
> want this dubious honour <wink>.

I consider the proposed import change (looking for __module_name__ in the main 
module) a bug fix for the interaction between PEP 338 and 328, but I'll hold 
off on committing it until I get the OK from yourself or Neal (and put the 
patch on SF in the meantime).

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------
             http://www.boredomandlaziness.org

From ncoghlan at gmail.com  Thu Jun 29 14:04:06 2006
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 29 Jun 2006 22:04:06 +1000
Subject: [Python-Dev] RFC: trunk checkins between now and 2.5 final
In-Reply-To: <44A3B91E.3010008@gmail.com>
References: <200606290342.58447.anthony@interlink.com.au>
	<44A3B91E.3010008@gmail.com>
Message-ID: <44A3C1B6.4080703@gmail.com>

Nick Coghlan wrote:
> I consider the proposed import change (looking for __module_name__ in the main 
> module) a bug fix for the interaction between PEP 338 and 328, but I'll hold 
> off on committing it until I get the OK from yourself or Neal (and put the 
> patch on SF in the meantime).

Or maybe not, since SF is still broken :(

You can find the diff here instead:

http://members.iinet.net.au/~ncoghlan/main_relative_imports.diff

The patch includes updates to import.c so that relative imports from a main 
module executed with -m will work automatically, some additional tests in 
test_runpy to make sure this all works as intended, and a couple of paragraphs 
in the tutorial about using explicit relative imports instead of implicit ones.

The changes to make runpy set '__module_name__' as well as '__name__' (and the 
associated doc and test changes) have already been committed.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------
             http://www.boredomandlaziness.org

From ncoghlan at gmail.com  Thu Jun 29 14:23:18 2006
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 29 Jun 2006 22:23:18 +1000
Subject: [Python-Dev] PEP 3103: A Switch/Case Statement
In-Reply-To: <44A31191.3090705@canterbury.ac.nz>
References: <ca471dc20606261223g55126e1fg457c30049dbbf34b@mail.gmail.com>
	<44A13461.508@gmail.com>
	<ca471dc20606270835g65d11893m1c069f9c0d003ba7@mail.gmail.com>
	<44A2363D.8090300@gmail.com> <44A31191.3090705@canterbury.ac.nz>
Message-ID: <44A3C636.2000102@gmail.com>

Greg Ewing wrote:
> Nick Coghlan wrote:
> 
>> By 'current namespace' I really do mean locals() - the cell objects 
>> themselves
>> would be local variables from the point of view of the currently 
>> executing code.
> 
> This is wrong. Cells are *parameters* implicitly passed
> in by the calling function. They may temporarily be
> referenced from the current scope, but their "home"
> has to be in an outer scope, otherwise they won't
> survive between calls.

As far as I'm aware, the cell objects get kept alive by the references to them 
from the closure attribute of the inner function. The actual execution frame 
of the outer function still goes away. The cell values persist because the 
function object persists between calls - it's only the execution frame that 
gets reinitialised every time.
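
(A tiny sketch of what I mean, assuming the func_closure/cell_contents
introspection available in 2.5:)

    def outer():
        x = 42
        def inner():
            return x
        return inner

    f = outer()                  # outer's execution frame is gone now
    cell = f.func_closure[0]     # but f keeps the cell alive
    print cell.cell_contents     # 42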

However, I'm now clearer on the fact that Guido's main interest is in true 
once-per-process semantics for case expressions, which changes the design 
goals I was working towards.

So I think I'll try to take a break from this discussion, and let ideas 
percolate in the back of my head for a while :)

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------
             http://www.boredomandlaziness.org

From rasky at develer.com  Thu Jun 29 14:29:26 2006
From: rasky at develer.com (Giovanni Bajo)
Date: Thu, 29 Jun 2006 14:29:26 +0200
Subject: [Python-Dev] PEP 328 and PEP 338, redux
References: <44A11EA1.1000605@iinet.net.au>	<5.1.1.6.0.20060627120926.02021fe0@sparrow.telecommunity.com>	<033001c69a07$44140890$d503030a@trilan>	<44A1CE4A.2000900@canterbury.ac.nz>	<ca471dc20606271846l2a0de4fevf667c60dec004039@mail.gmail.com>
	<039201c69ad7$4dc91df0$d503030a@trilan>
	<44A3B648.2060104@gmail.com>
Message-ID: <018e01c69b77$a9172e40$d503030a@trilan>

Nick Coghlan wrote:

> Writing modules that use the approach but want to work with both 2.5
> and 2.6 becomes a little more annoying - such modules have to finish
> with the coda:
>
> if __name__ == '__main__':
>    import sys
>    if sys.version_info < (2, 6):
>        sys.exit(__main__(sys.argv))

Actually, this should be enough:

if __name__ == '__main__':
    import sys
    sys.exit(__main__(sys.argv))

and it will still work for the "python -mpackage.module" case which we're
discussing. The if suite can be dropped once you no longer need pre-2.6
compatibility.

> The interpreter would also have to be careful to ensure that a
> __main__ variable in the globals isn't the result of a module doing
> "import __main__".

Is there a real-world use case for "import __main__"? Otherwise, I say screw it :)

> Another downside I've discovered recently is that calling sys.exit()
> prevents the use of a post-mortem debugging session triggered by -i
> or PYTHONINSPECT. sys.exit() crashes out of the entire process, so
> the post-mortem interactive session never even gets started.

In fact, this is an *upside* of implementing the __main__ PEP, because the
call to sys.exit() is not needed in that case. All of my Python programs
right now need a sys.exit() *because* the __main__ PEP was not implemented.

> The only real upside I can see to PEP 299 is that "main is a
> function" is more familiar to people coming from languages like C
> where you can't have run-time code at the top level of a module.
> Python's a scripting language though, and having run-time logic at
> the top level of a script is perfectly normal!

My personal argument is that if __name__ == '__main__' is totally
counter-intuitive and unpythonic. My own memory proves the point: after many
years, I still have to think for a couple of seconds before remembering
whether I should use __file__, __name__ or __main__ and where to put the damn
quotes. The fact that you're comparing a variable name and a string literal
that look very similar (both with the double underscore syntax) is totally
confusing at best.

Also, try teaching it to a beginner and he will go "huh wtf". To fully
understand it, you must understand exactly how import works (that is, the
fact that importing a module means evaluating all of its statements one by
one). A function called __main__ which is magically invoked by Python
itself is much, much easier to grasp. A different, clearer spelling for the
if condition (like: "if not __imported__") would help as well.
-- 
Giovanni Bajo


From ncoghlan at gmail.com  Thu Jun 29 14:37:07 2006
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 29 Jun 2006 22:37:07 +1000
Subject: [Python-Dev] PEP 3103: A Switch/Case Statement
In-Reply-To: <eaaf21dc0606281125y731096f0u4bce8834f0943c91@mail.gmail.com>
References: <20060627233313.1090.JCARLSON@uci.edu>
	<44A2380F.6000503@acm.org>	<20060628090725.1099.JCARLSON@uci.edu>	<ca471dc20606280950i4bcc522leda3325e052d516d@mail.gmail.com>	<44A2BB87.50406@acm.org>	<ca471dc20606281036w8111c4bgd41339de2916f07b@mail.gmail.com>
	<eaaf21dc0606281125y731096f0u4bce8834f0943c91@mail.gmail.com>
Message-ID: <44A3C973.7070601@gmail.com>

Eric Sumner wrote:
>>> Forget subroutines for a moment - the main point of the thread was the
>>> idea that the dispatch table was built explicitly rather than
>>> automatically - that instead of arguing over first-use vs.
>>> function-definition, we let the user decide. I'm sure that my specific
>>> proposal isn't the only way that this could be done.
>> But anything that makes the build explicit is going to be so much more
>> ugly. And I still think you're trying to solve the wrong problem.
> 
> Only if the programmer has to see it.  The dispatch table need not
> include the behaviors of each of the cases; it only needs to define
> what the cases are.  In most of the use cases I've seen, switch is
> used to define behavior for different values of an enumeration.  The
> dispatch table for an enumeration can be built wherever the values for
> the enumeration are defined (such as in a module).  Programmers don't
> need to bother with making a dispatch table unless they are defining
> enumeration values themselves.

You mean something like this?:

   switch x in colours:
     case RED:
         # whatever
     case GREEN:
         # whatever
     case BLUE:
         # whatever

I think Guido's right. It doesn't solve the underlying problem because the 
compiler still has to figure out how to build a dispatch table from the 
possible values in colours to the actual bytecode offsets of the cases.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------
             http://www.boredomandlaziness.org

From ncoghlan at gmail.com  Thu Jun 29 14:42:49 2006
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 29 Jun 2006 22:42:49 +1000
Subject: [Python-Dev] [Python-checkins] r47142 - in
 python/trunk:	Doc/lib/librunpy.tex Lib/runpy.py Lib/test/test_runpy.py
In-Reply-To: <200606290120.06338.anthony@interlink.com.au>
References: <20060628104148.362A01E4004@bag.python.org>
	<200606290120.06338.anthony@interlink.com.au>
Message-ID: <44A3CAC9.2020409@gmail.com>

Anthony Baxter wrote:
> On Wednesday 28 June 2006 20:41, nick.coghlan wrote:
>> Author: nick.coghlan
>> Date: Wed Jun 28 12:41:47 2006
>> New Revision: 47142
>>
>> Modified:
>>    python/trunk/Doc/lib/librunpy.tex
>>    python/trunk/Lib/runpy.py
>>    python/trunk/Lib/test/test_runpy.py
>> Log:
>> Make full module name available as __module_name__ even when
>> __name__ is set to something else (like '__main__')
> 
> Er. Um. Feature freeze?

Sorry about that - I was trying to deal with a conflict between PEP 328 and 
338 (bug 1510172) and didn't even think about the fact that this counted as a 
new feature.

See my response to your RFC about tightening up control of the trunk - I'd 
really like to make these two PEPs play nicely together before beta 2.

Cheers,
Nick.


-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------
             http://www.boredomandlaziness.org

From ncoghlan at gmail.com  Thu Jun 29 15:21:00 2006
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 29 Jun 2006 23:21:00 +1000
Subject: [Python-Dev] once [was: Simple Switch statementZ]
In-Reply-To: <ca471dc20606281147l5efc3feevd1e902b4e1daf202@mail.gmail.com>
References: <fb6fbf560606280912v127b8ccal6f65c7fa4e2b2ab5@mail.gmail.com>	<ca471dc20606280940x3871a0a5i23c33e315a3d520a@mail.gmail.com>	<fb6fbf560606281017u22b9ec29nacdee07acbf5dfb9@mail.gmail.com>	<ca471dc20606281033w667d0f76la8b0624c72d149aa@mail.gmail.com>	<fb6fbf560606281138l6d46cceidf20cc96624ef7fd@mail.gmail.com>
	<ca471dc20606281147l5efc3feevd1e902b4e1daf202@mail.gmail.com>
Message-ID: <44A3D3BC.2040106@gmail.com>

Guido van Rossum wrote:
> So we have what seems to be an impasse. Some people would really like
> once-expressions to be captured at def-time rather than at the first
> execution per def; this is the only way to use it so solve the "outer
> loop variable reference" problem. Others would really hate it if a
> once could be hidden in unreachable code but still execute, possibly
> with a side effect.
> 
> I'm not sure that the possibility of writing obfuscated code should
> kill a useful feature. What do others think? It's basically impossible
> to prevent obfuscated code and we've had this argument before:
> preventing bad code is not the goal of the language; encouraging good
> code is.

I'm coming around to liking the idea of Fredrik's static expressions. def-time 
really is a clean way to define when something happens, it provides a nice 
readable solution to the early-vs-late binding question, and the only ways 
I've managed to break it are by deliberately writing code that's altogether 
too clever for its own good.

It should be possible to find some reasonable way to handle module level code, 
and pychecker and the like can warn about static expressions in unreachable code.

I even worked out how to rewrite my 
side-effect-free-but-still-too-clever-for-its-own-good example so it worked 
under Option 3:

    def outer(cases=None):
        if cases is None:
            # Use unmatchable cases
            cases = [() for x in range(3)]
        def inner(option, force_default=False):
            if not force_default:
                switch option:
                    case in cases[0]:
                        # case 0 handling
                    case in cases[1]:
                        # case 1 handling
                    case in cases[2]:
                        # case 2 handling
            # Default handling
        return inner

I'm happy I've made the best case I can for Option 2, and it's left even me 
thinking that Option 3 is a cleaner, more useful way to go :)

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------
             http://www.boredomandlaziness.org

From martin at v.loewis.de  Thu Jun 29 15:27:44 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Thu, 29 Jun 2006 15:27:44 +0200
Subject: [Python-Dev] xturtle.py a replacement for
 turtle.py(!?)	ATTENTION PLEASE!
In-Reply-To: <44A304BD.5020403@aon.at>
References: <200606290116.38007.anthony@interlink.com.au>	<44A2A9EB.6050802@aon.at>	<20060628094550.109C.JCARLSON@uci.edu>	<43aa6ff70606281003t591aa254t3b0250154785773e@mail.gmail.com>	<44A2D18C.3000705@v.loewis.de>	<44A2E0FF.2010808@aon.at>	<e7uod8$nnc$1@sea.gmane.org>
	<44A304BD.5020403@aon.at>
Message-ID: <44A3D550.2070207@v.loewis.de>

Gregor Lingl wrote:
> So, if you can't accept that offer - now, or even ever - because it
> contradicts your rules, that's OK. But it's not 'my cause'. I
> conceive it to be the community's cause.

All "we" said is that we cannot integrate it now, as a policy matter.
Nobody said it can't be integrated for 2.6; I am in favour of doing
that.

However, I do think that a number of changes need to be made still;
I'll post my first review on the SF tracker item when SF comes back.

> I, for my part, can and will use xturtle.py, knowing and having the
> experience that it is far superior to turtle.py. So I have no
> problem. And I'll offer it for download from the xturtle webpage or
> from wherever you suggest. So it will be freely available. (Perhaps a
> SourceForge project would be appropriate. Give me your advice,
> please.)

You should add it to the Cheeseshop: python.org/pypi
Notice that the Cheeseshop already knows about turtle2.py
by Mark Summerfield.

> The only point is that it leaves Python's turtle.py an (imho) 
> unsatisfactory solution.

Looking at the feature list on #1513695, I think none of the
new feature really make turtle.py look "unsatisfactory":

- better animation of turtle movements: yes, this is a good
  thing to have, but not absolutely necessary. The current
  turtle already displays the orientation after it has turned.
- different turtle shapes. It's probably fun to play with
  these, but (IMO) a distraction from the module's primary
  purpose (although fun certainly also is a purpose of the
  module). OTOH, perhaps the original Logo turtle icon
  should be the default?
- fine control over turtle movement (in particular speed)
  Why are these needed?
- Aliases for the most common functions. I guess it's useful,
  but if it was unsatisfactory not to have them, somebody
  would have contributed a patch for turtle.py already.
- scrollable canvas. I had a hard time figuring out what
  method to use to resize the canvas (and am still uncertain
  whether rescaling is supported or not)
- background color and image. Again, this looks like a
  distraction to me, but I see that Logo tutorials use
  this (along with turtle shapes like "C64 sprites"), so
  I guess there is a point to them, also.

The only respect in which I would consider turtle.py
unsatisfactory is the true bugs. At the moment, I can
only see one open turtle.py bug reported, namely
#1047540 (where the submitter later says it might be
an IDLE bug).

Regards,
Martin


From ashemedai at gmail.com  Thu Jun 29 15:52:56 2006
From: ashemedai at gmail.com (Jeroen Ruigrok van der Werven)
Date: Thu, 29 Jun 2006 15:52:56 +0200
Subject: [Python-Dev] msvccompiler.py: some remarks
Message-ID: <3e1553560606290652k40a4fe8k8bc8d7bb9825e9f@mail.gmail.com>

I am testing/working on some Python code on Windows.
While doing this I ran into some issues where I am told I don't have
the .Net SDK installed. So I started investigating the issue and came
across http://www.vrplumber.com/programming/mstoolkit/index.html

I also checked the latest repository version of msvccompiler.py and I
noticed a few potential issues:

1) If MSSdk is set it does not automatically mean that cl.exe and the
rest are available. With the latest SDKs, Windows 2003 R2 at least,
the bin directory contains no compilers, linkers or the like. On the
other hand, it is perfectly valid to set MSSdk to your Platform SDK
installation directory. So this is unfortunately a problematic
solution as introduced in revision 42515.

2) As far as I have been able to determine .Net 2.0 uses
sdkInstallRootv2.0. Also it installs by default under C:\Program
Files\Microsoft Visual Studio 8\SDK\v2.0\

3) The Windows 2003 R2 Platform SDK uses
HKLM\SOFTWARE\Microsoft\MicrosoftSDK\InstalledSDKs\D2FF9F89-8AA2-4373-8A31-C838BF4DBBE1,
which in turn has a entry for 'Install Dir' which lists the
installation directory for the Platform SDK.

4) One line has p = r"Software\Microsoft\NET Framework Setup\Product",
however, there's no subkey at all under my NET Framework Setup entry,
only NDP, which in itself has two subkeys, namely: v1.1.4322 and
v2.0.50727. The NET Framework Setup\Product seems to be limited to the
old 1.0 setup which used a subkey like:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework
Setup\Product\Microsoft .NET Framework Full v1.0.3705 (1033)
This is what my 1.1 and 2.0 give:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework
Setup\NDP\v1.1.4322 and HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET
Framework Setup\NDP\v2.0.50727
So effectively on newer installations (and 1.0 is more or less
deprecated in favour of 1.1) this piece of code is rendered unusable.

So basically a bunch of logic needs to be rewritten to support newer
versions, and I will investigate this. Are there any other people
working on this, so that I can throw some ideas back and forth to make
sure things keep working for the various versions?
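
(For what it's worth, a minimal sketch of probing the R2 Platform SDK key
mentioned above with _winreg - the helper name is made up, and this assumes
the registry layout I described:)

    import _winreg

    def get_platform_sdk_dir():
        # read the 'Install Dir' value of the Windows 2003 R2 Platform SDK
        # registry key described above; return None if it isn't there
        subkey = (r"SOFTWARE\Microsoft\MicrosoftSDK\InstalledSDKs"
                  r"\D2FF9F89-8AA2-4373-8A31-C838BF4DBBE1")
        try:
            key = _winreg.OpenKey(_winreg.HKEY_LOCAL_MACHINE, subkey)
        except WindowsError:
            return None
        try:
            value, _ = _winreg.QueryValueEx(key, "Install Dir")
            return value
        finally:
            _winreg.CloseKey(key)

    print get_platform_sdk_dir()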

-- 
Jeroen Ruigrok van der Werven

From tzot at mediconsa.com  Thu Jun 29 15:57:10 2006
From: tzot at mediconsa.com (Christos Georgiou)
Date: Thu, 29 Jun 2006 16:57:10 +0300
Subject: [Python-Dev] once [was: Simple Switch statementZ]
References: <fb6fbf560606280912v127b8ccal6f65c7fa4e2b2ab5@mail.gmail.com><ca471dc20606280940x3871a0a5i23c33e315a3d520a@mail.gmail.com>
	<44A2F386.7080003@ronadam.com>
Message-ID: <e80m6u$cil$1@sea.gmane.org>

I haven't followed the complete discussion about once, but I would assume it 
would be used as such:

once <name> = <expression>

that is, always an assignment, with the value stored as a cellvar, perhaps, 
on first execution of the code.

Typically I would use it as:

def function(a):
    once pathjoin = os.path.join
    <etc>



From kd5bjo at gmail.com  Thu Jun 29 16:25:03 2006
From: kd5bjo at gmail.com (Eric Sumner)
Date: Thu, 29 Jun 2006 09:25:03 -0500
Subject: [Python-Dev] PEP 3103: A Switch/Case Statement
In-Reply-To: <44A3C973.7070601@gmail.com>
References: <20060627233313.1090.JCARLSON@uci.edu> <44A2380F.6000503@acm.org>
	<20060628090725.1099.JCARLSON@uci.edu>
	<ca471dc20606280950i4bcc522leda3325e052d516d@mail.gmail.com>
	<44A2BB87.50406@acm.org>
	<ca471dc20606281036w8111c4bgd41339de2916f07b@mail.gmail.com>
	<eaaf21dc0606281125y731096f0u4bce8834f0943c91@mail.gmail.com>
	<44A3C973.7070601@gmail.com>
Message-ID: <eaaf21dc0606290725g52e78563o9cddc80700165055@mail.gmail.com>

On 6/29/06, Nick Coghlan <ncoghlan at gmail.com> wrote:
> You mean something like this?:
>
>    switch x in colours:
>      case RED:
>          # whatever
>      case GREEN:
>          # whatever
>      case BLUE:
>          # whatever
>
> I think Guido's right. It doesn't solve the underlying problem because the
> compiler still has to figure out how to build a dispatch table from the
> possible values in colours to the actual bytecode offsets of the cases.

To implement this, you actually need two lookup tables: one particular
to the switch that maps labels to bytecode offsets, and one in the
dispatch table to map values to labels.  The former is built when the
switch is compiled, and the latter is built wherever the dispatch
table is defined.  Each lookup is still O(1), so the whole operation
remains O(1).

It is O(n) or worse to check that all of the cases in the switch are
defined in the dispatch table, but that only has to be done once per
dispatch table/switch statement pair, and can then be stred in one or
the other (probably the dispatch table, as that will be a proper
object).
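
A rough Python-level model of the two tables, in case it helps (the
functions stand in for the compiled case bodies, and the hex values are
made up for the example):

    # Dispatch table: maps values to labels; defined wherever colours is.
    colours = {0xFF0000: 'RED', 0x00FF00: 'GREEN', 0x0000FF: 'BLUE'}

    # Per-switch table: maps labels to code; built when the switch is
    # compiled.
    cases = {
        'RED':   lambda: 'handle red',
        'GREEN': lambda: 'handle green',
        'BLUE':  lambda: 'handle blue',
    }

    def switch(x):
        # Two O(1) lookups: value -> label, then label -> code.
        return cases[colours[x]]()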

  -- Eric Sumner

From fredrik at pythonware.com  Thu Jun 29 16:38:20 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Thu, 29 Jun 2006 16:38:20 +0200
Subject: [Python-Dev] PEP 3103: A Switch/Case Statement
References: <20060627233313.1090.JCARLSON@uci.edu>
	<44A2380F.6000503@acm.org><20060628090725.1099.JCARLSON@uci.edu><ca471dc20606280950i4bcc522leda3325e052d516d@mail.gmail.com><44A2BB87.50406@acm.org><ca471dc20606281036w8111c4bgd41339de2916f07b@mail.gmail.com><eaaf21dc0606281125y731096f0u4bce8834f0943c91@mail.gmail.com><44A3C973.7070601@gmail.com>
	<eaaf21dc0606290725g52e78563o9cddc80700165055@mail.gmail.com>
Message-ID: <e80ol1$m2e$1@sea.gmane.org>

Eric Sumner wrote:

>> You mean something like this?:
>>
>>    switch x in colours:
>>      case RED:
>>          # whatever
>>      case GREEN:
>>          # whatever
>>      case BLUE:
>>          # whatever
>>
>> I think Guido's right. It doesn't solve the underlying problem because the
>> compiler still has to figure out how to build a dispatch table from the
>> possible values in colours to the actual bytecode offsets of the cases.
>
> To implement this, you actually need two lookup tables: one particular
> to the switch that maps labels to bytecode offsets, and one in the
> dispatch table to map values to labels.  The former is built when the
> switch is compiled, and the latter is built wherever the dispatch
> table is defined.  Each lookup is still O(1), so the whole operation
> remains O(1).

what's a "label" ?

</F> 




From anthony at interlink.com.au  Thu Jun 29 16:39:44 2006
From: anthony at interlink.com.au (Anthony Baxter)
Date: Fri, 30 Jun 2006 00:39:44 +1000
Subject: [Python-Dev] PEP 328 and PEP 338, redux
In-Reply-To: <44A3B1DC.70105@gmail.com>
References: <44A11EA1.1000605@iinet.net.au>
	<ca471dc20606280945j5891a132g6aad2d1c49b6c986@mail.gmail.com>
	<44A3B1DC.70105@gmail.com>
Message-ID: <200606300039.47715.anthony@interlink.com.au>

On Thursday 29 June 2006 20:56, Nick Coghlan wrote:
> Guido van Rossum wrote:
> > On 6/28/06, Nick Coghlan <ncoghlan at gmail.com> wrote:
> >> The workaround to replace __name__ with __module_name__ in order
> >> to enable
> >> relative imports turned out to be pretty ugly, so I also worked
> >> up a patch to
> >> import.c to get it to treat __module_name__ as an override for
> >> __name__ when
> >> __name__ == '__main__'.
> >
> > Ah, clever. +1.
>
> In that case, I'll check it straight in. It was actually
> surprisingly easy to do, given how finicky import.c can get (this
> particular change was able to be handled entirely inside
> get_parent()).

Please, please DON'T.

At this point in the release cycle, making a change like this without 
review (particularly to something as diabolically hairy as import.c) 
is going to make me _unbelievably_ cranky. I'll try to make time to 
review the patch you posted tomorrow.

Anthony


-- 
Anthony Baxter     <anthony at interlink.com.au>
It's never too late to have a happy childhood.

From kd5bjo at gmail.com  Thu Jun 29 16:47:31 2006
From: kd5bjo at gmail.com (Eric Sumner)
Date: Thu, 29 Jun 2006 09:47:31 -0500
Subject: [Python-Dev] PEP 3103: A Switch/Case Statement
In-Reply-To: <e80ol1$m2e$1@sea.gmane.org>
References: <20060627233313.1090.JCARLSON@uci.edu> <44A2380F.6000503@acm.org>
	<20060628090725.1099.JCARLSON@uci.edu>
	<ca471dc20606280950i4bcc522leda3325e052d516d@mail.gmail.com>
	<44A2BB87.50406@acm.org>
	<ca471dc20606281036w8111c4bgd41339de2916f07b@mail.gmail.com>
	<eaaf21dc0606281125y731096f0u4bce8834f0943c91@mail.gmail.com>
	<44A3C973.7070601@gmail.com>
	<eaaf21dc0606290725g52e78563o9cddc80700165055@mail.gmail.com>
	<e80ol1$m2e$1@sea.gmane.org>
Message-ID: <eaaf21dc0606290747l119fb364nf66cbca91cc501f7@mail.gmail.com>

> >> You mean something like this?:
> >>
> >>    switch x in colours:
> >>      case RED:
> >>          # whatever
> >>      case GREEN:
> >>          # whatever
> >>      case BLUE:
> >>          # whatever
> >>
> >> I think Guido's right. It doesn't solve the underlying problem because the
> >> compiler still has to figure out how to build a dispatch table from the
> >> possible values in colours to the actual bytecode offsets of the cases.
> >
> > To implement this, you actually need two lookup tables: one particular
> > to the switch that maps labels to bytecode offsets, and one in the
> > dispatch table to map values to labels.  The former is built when the
> > switch is compiled, and the latter is built wherever the dispatch
> > table is defined.  Each lookup is still O(1), so the whole operation
> > remains O(1).
>
> what's a "label" ?

In your example, RED, GREEN, and BLUE.  colours provides a mapping
from values to labels/cases, and the switch statement provides a
mapping from labels/cases to code.  Sorry about introducing a new term
without saying anything about it.

  -- Eric Sumner

From glingl at aon.at  Thu Jun 29 16:51:17 2006
From: glingl at aon.at (Gregor Lingl)
Date: Thu, 29 Jun 2006 16:51:17 +0200
Subject: [Python-Dev] xturtle.py a replacement for turtle.py(!?)
 ATTENTION PLEASE!
In-Reply-To: <44A2F1FC.4010307@v.loewis.de>
References: <200606290116.38007.anthony@interlink.com.au>	<44A2A9EB.6050802@aon.at>	<20060628094550.109C.JCARLSON@uci.edu>	<43aa6ff70606281003t591aa254t3b0250154785773e@mail.gmail.com>
	<44A2D18C.3000705@v.loewis.de> <44A2E0FF.2010808@aon.at>
	<44A2F1FC.4010307@v.loewis.de>
Message-ID: <44A3E8E5.9090409@aon.at>

Martin v. Löwis schrieb:
> Gregor Lingl wrote:
>   
> ...
>> (Who reviewed it? This is a _newly_added_ function -
>> did nobody try it out yet? Incredible!!)
>>     
>
> Apparently not. Thanks for pointing that out; Georg (who committed the
> patch originally) just fixed it in r47151.
>
> This illustrates the main point: Even small changes need careful review.
> Much more so large changes.
>
>   
I understand that now.
> [turtle does not just fill the shape, but the entire boundary polygon]
>   
>> What a shame!! An immanent bug, persistent
>> for years now!
>>     
>
> If the bug had existed for years, somebody could have contributed a
> patch.
>   
I have two remarks on this point:
(i) apparently turtle.py isn't used that much, or things like these
would necessarily have come to light earlier;
(ii) I had a discussion with Vern Ceder about exactly this point (on
the edupython list). He was of the opinion that this couldn't be fixed.
Somebody else promised to fix it anyway, but he didn't.
> ...
> It's not only about finding bugs. It's also about studying the
> consistency of the new API, and so on.
>   
That's right and very important. I would be very happy to have somebody
to discuss questions like these with. It was not so easy to make all
those decisions, and I know, of course, that others would necessarily
have decided differently on some points.

One question in this respect: how important do you consider backward
compatibility? When designing a new module, the requirement of backward
compatibility can have a big impact on the code, even though it may in
some parts be questionable. As an example let me mention the radians()
function.

> As for "reliable proofs": An automatic test suite for turtle.py
> would be a good thing to have.
>   
Yes, and I have some ideas in this respect, but mainly a question of
principle. I have read about using doctest and unittest, but how does
one devise automatic test suites for graphical output? In the end it
depends on how it looks. That was one reason why I made my example
scripts: I use them for (not automatic) testing and I can _see_ if
things go wrong. Example: how do you test automatically whether a
shape is filled correctly or not (as in the above-mentioned bug)?
>> A more courageous and less bureaucratic approach to the problem
>> would be appropriate. Perhaps combined with some fantasy.
>>     
>
> ...
> The approach used in xturtle (i.e. represent circles as polylines)
> could also be used for turtle.py, no?
>
>   
Yes. I've done that patch just now, and I'll put it (as a suggestion)
on the patch manager, along with a test script, when it's online
again. It works as expected. See if you like it.

Believe it or not, when testing this patch I discovered (within half
an hour) three more bugs in turtle.py:

I did the following interactive session:

 >>> from turtle import *
 >>> circle(100,90)
 >>> radians()
 >>> circle(100, pi/2)

Two bugs occurred:
(i) after calling radians() the turtle moves along a
wrong path (I assume because of misinterpretation
of its heading, which doesn't know of the change
of units), as it also does when executing e.g. forward(50);
(ii) it doesn't draw the arc(!), as if up() had been called - I don't
know why.

Restoring degrees() makes it draw again.
In the meantime I had put the drawing window away
from the center to be better able to use the Shell
window. When I constructed a new Pen:

 >>> p = Pen()

(iii) the graphics window jumped back to the center of the screen (and
it does so with every newly constructed Pen). Apparently setup()
shouldn't be called in Pen's __init__() method. This again seems to be
a newly introduced bug.

I'll put them on the bug tracker when it's online again.

Regards,
Gregor


> Regards,
> Martin
>
>
>   


From fredrik at pythonware.com  Thu Jun 29 16:56:40 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Thu, 29 Jun 2006 16:56:40 +0200
Subject: [Python-Dev] PEP 3103: A Switch/Case Statement
References: <20060627233313.1090.JCARLSON@uci.edu>
	<44A2380F.6000503@acm.org><20060628090725.1099.JCARLSON@uci.edu><ca471dc20606280950i4bcc522leda3325e052d516d@mail.gmail.com><44A2BB87.50406@acm.org><ca471dc20606281036w8111c4bgd41339de2916f07b@mail.gmail.com><eaaf21dc0606281125y731096f0u4bce8834f0943c91@mail.gmail.com><44A3C973.7070601@gmail.com><eaaf21dc0606290725g52e78563o9cddc80700165055@mail.gmail.com><e80ol1$m2e$1@sea.gmane.org>
	<eaaf21dc0606290747l119fb364nf66cbca91cc501f7@mail.gmail.com>
Message-ID: <e80pn9$q33$1@sea.gmane.org>

Eric Sumner wrote:

>> what's a "label" ?
>
> In your example, RED, GREEN, and BLUE.  colours provides a mapping
> from values to labels/cases, and the switch statement provides a
> mapping from labels/cases to code.  Sorry about introducing a new term
> without saying anything about it.

yeah, but what are they?  integers?  strings?  names without an associated value?
how do you create new labels?  where are they stored?  who keeps track of them?

</F> 




From kd5bjo at gmail.com  Thu Jun 29 17:18:12 2006
From: kd5bjo at gmail.com (Eric Sumner)
Date: Thu, 29 Jun 2006 10:18:12 -0500
Subject: [Python-Dev] PEP 3103: A Switch/Case Statement
In-Reply-To: <e80pn9$q33$1@sea.gmane.org>
References: <20060627233313.1090.JCARLSON@uci.edu>
	<ca471dc20606280950i4bcc522leda3325e052d516d@mail.gmail.com>
	<44A2BB87.50406@acm.org>
	<ca471dc20606281036w8111c4bgd41339de2916f07b@mail.gmail.com>
	<eaaf21dc0606281125y731096f0u4bce8834f0943c91@mail.gmail.com>
	<44A3C973.7070601@gmail.com>
	<eaaf21dc0606290725g52e78563o9cddc80700165055@mail.gmail.com>
	<e80ol1$m2e$1@sea.gmane.org>
	<eaaf21dc0606290747l119fb364nf66cbca91cc501f7@mail.gmail.com>
	<e80pn9$q33$1@sea.gmane.org>
Message-ID: <eaaf21dc0606290818jb1e8bfbhc6ded66484d907aa@mail.gmail.com>

> >> what's a "label" ?
> >
> > In your example, RED, GREEN, and BLUE.  colours provides a mapping
> > from values to labels/cases, and the switch statement provides a
> > mapping from labels/cases to code.  Sorry about introducing a new term
> > without saying anything about it.
>
> yeah, but what are they?  integers?  strings?  names without an associated value?
Syntactically, they are bare words (names).  They are constants, and
compare equal to other identical labels.

> how do you create new labels?
To the programmer, all valid labels exist; you just use them.  They
are only used in very particular places in the grammar.  Internally,
they are probably represented by strings.

> where are they stored?
They are stored by the internal representation of any construct that
uses them.  That would be dispatch table objects and compiled switch
statements.

> who keeps track of them?
Each construct keeps track of its own copies, and destroys them when
they are no longer needed.

  -- Eric Sumner

From martin at v.loewis.de  Thu Jun 29 17:27:16 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Thu, 29 Jun 2006 17:27:16 +0200
Subject: [Python-Dev] xturtle.py a replacement for turtle.py(!?)
 ATTENTION PLEASE!
In-Reply-To: <44A3E8E5.9090409@aon.at>
References: <200606290116.38007.anthony@interlink.com.au>	<44A2A9EB.6050802@aon.at>	<20060628094550.109C.JCARLSON@uci.edu>	<43aa6ff70606281003t591aa254t3b0250154785773e@mail.gmail.com>
	<44A2D18C.3000705@v.loewis.de> <44A2E0FF.2010808@aon.at>
	<44A2F1FC.4010307@v.loewis.de> <44A3E8E5.9090409@aon.at>
Message-ID: <44A3F154.4080001@v.loewis.de>

Gregor Lingl wrote:
> One question in this respect - how important do you  consider
> backward compatibility. When designing a new module the requirement
> backward compatibility can have a big impact on the code although it
> may in some parts be questionable. As an example let me mention the
> radians() function.

It's fairly important. Text books have been written that refer to
the turtle module; the examples in these text books must continue
to work. As we don't know what features these examples use, we
must rather err on the conservative side, breaking the API only
for a very good reason.

> Yes, and I have some ideas in this respect, but mainly a principal
> question. I read about using doctest and unittest, but how does one
> devise automatic test suites for graphical output.

It might be ok not to verify the output. OTOH, this is a canvas widget,
so it should be possible to get all items on the screen at any point
with primitive canvas methods. These could then be compared to
precompiled lists.
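
Something along these lines, for example (just a sketch; 'canvas'
stands for whatever Canvas instance turtle.py draws on):

    def snapshot(canvas):
        # Collect (item type, coordinates) for every item currently on
        # the canvas.
        items = []
        for item in canvas.find_all():
            items.append((canvas.type(item), canvas.coords(item)))
        return items

    # In a test, compare against a precompiled list:
    # assert snapshot(canvas) == expected_items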

> In the end it
> depends on how it looks. That was one reason why I made my
> example scripts. I use them for (not automatic) testing and I can
> _see_ if things go wrong. Example: how do you test automatically if a
>  shape is filled correctly or not (as in the above mentioned bug)?

You could check whether there is a polygon with the "right" shape,
where "right" is specified by a series of coordinates.

This is regression testing, and perhaps also coverage: we want to know
whether changes to the module affect the current behaviour. When the
test discovers a behaviour change, somebody will manually have to
determine whether it is the test or the new code that is wrong, and
update the test if it is the former.

Thanks for your investigations into the current turtle.py.

Regards,
Martin

From edloper at gradient.cis.upenn.edu  Thu Jun 29 17:34:39 2006
From: edloper at gradient.cis.upenn.edu (Edward Loper)
Date: Thu, 29 Jun 2006 11:34:39 -0400
Subject: [Python-Dev] xturtle.py a replacement for turtle.py(!?)
 ATTENTION PLEASE!
In-Reply-To: <44A3E8E5.9090409@aon.at>
References: <200606290116.38007.anthony@interlink.com.au>	<44A2A9EB.6050802@aon.at>	<20060628094550.109C.JCARLSON@uci.edu>	<43aa6ff70606281003t591aa254t3b0250154785773e@mail.gmail.com>	<44A2D18C.3000705@v.loewis.de>
	<44A2E0FF.2010808@aon.at>	<44A2F1FC.4010307@v.loewis.de>
	<44A3E8E5.9090409@aon.at>
Message-ID: <44A3F30F.2010603@gradient.cis.upenn.edu>

Gregor Lingl wrote:
> Yes, and I have some ideas in this respect, but mainly a principal
> question. I read about
> using doctest and unittest, but how does one devise
> automatic test suites for graphical output. In the end it depends on how
> it looks.

There are a few options here..  Two that come to mind are:

   - Check the output -- e.g., run a demo, and then use Tkinter.Canvas to
     write its output to postscript, and then check the contents of that
     postscript file against a known correct file.

   - Monkey-patching -- replace specific classes (e.g.,  ScrolledCanvas?)
     with new testing classes that simply intercept drawing primitives,
     rather than displaying graphics.  Then check that the right drawing
     primitives (lines, circles, etc) are generated in the right order.

The former may be more robust, but requires that you have a display 
surface available.  With the former approach, you may also run into some 
problems with different postscript files being generated on different 
systems (esp. with respect to font sizes -- I seem to remember that 
using negative font sizes might help there?).
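
For the first option, the core of such a test might look roughly like
this (a sketch; the reference file name is a placeholder):

    def check_demo_output(canvas, reference='demo_expected.ps'):
        # Canvas.postscript() returns the PostScript as a string when no
        # file argument is given; compare it against a known-good capture.
        current = canvas.postscript()
        expected = open(reference).read()
        return current == expected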

-Edward

From kd5bjo at gmail.com  Thu Jun 29 17:42:07 2006
From: kd5bjo at gmail.com (Eric Sumner)
Date: Thu, 29 Jun 2006 10:42:07 -0500
Subject: [Python-Dev] PEP 3103: A Switch/Case Statement
In-Reply-To: <e80pn9$q33$1@sea.gmane.org>
References: <20060627233313.1090.JCARLSON@uci.edu>
	<ca471dc20606280950i4bcc522leda3325e052d516d@mail.gmail.com>
	<44A2BB87.50406@acm.org>
	<ca471dc20606281036w8111c4bgd41339de2916f07b@mail.gmail.com>
	<eaaf21dc0606281125y731096f0u4bce8834f0943c91@mail.gmail.com>
	<44A3C973.7070601@gmail.com>
	<eaaf21dc0606290725g52e78563o9cddc80700165055@mail.gmail.com>
	<e80ol1$m2e$1@sea.gmane.org>
	<eaaf21dc0606290747l119fb364nf66cbca91cc501f7@mail.gmail.com>
	<e80pn9$q33$1@sea.gmane.org>
Message-ID: <eaaf21dc0606290842q2145250chd5a58b7b915d13e2@mail.gmail.com>

> yeah, but what are they?  integers?  strings?  names without an associated value?
> how do you create new labels?  where are they stored?  who keeps track of them?

In this scheme, dispatch tables can be considered to be reverse-lookup
namespaces.  Where a regular namespace is used to look up a value
given its name, a dispatch table is used to look up a name given its
value.  The switch statement then lets you actually do something based
on which name is returned.

  -- Eric Sumner

From Scott.Daniels at Acm.Org  Thu Jun 29 17:49:07 2006
From: Scott.Daniels at Acm.Org (Scott David Daniels)
Date: Thu, 29 Jun 2006 08:49:07 -0700
Subject: [Python-Dev] Joke: Rush Limbaugh (a joke in and of himself)
Message-ID: <e80so9$6cr$1@sea.gmane.org>

Rush Limbaugh was detained and questioned for transporting a possible
illegal Viagra prescription into the country.

Well... at least we know his back is feeling better.

-- Scott David Daniels
Scott.Daniels at Acm.Org


From guido at python.org  Thu Jun 29 18:39:42 2006
From: guido at python.org (Guido van Rossum)
Date: Thu, 29 Jun 2006 09:39:42 -0700
Subject: [Python-Dev] PEP 328 and PEP 338, redux
In-Reply-To: <018e01c69b77$a9172e40$d503030a@trilan>
References: <44A11EA1.1000605@iinet.net.au>
	<5.1.1.6.0.20060627120926.02021fe0@sparrow.telecommunity.com>
	<033001c69a07$44140890$d503030a@trilan>
	<44A1CE4A.2000900@canterbury.ac.nz>
	<ca471dc20606271846l2a0de4fevf667c60dec004039@mail.gmail.com>
	<039201c69ad7$4dc91df0$d503030a@trilan> <44A3B648.2060104@gmail.com>
	<018e01c69b77$a9172e40$d503030a@trilan>
Message-ID: <ca471dc20606290939y2cb512bbj7420a3371f51d5a5@mail.gmail.com>

On 6/29/06, Giovanni Bajo <rasky at develer.com> wrote:
> Real-world usage case for import __main__? Otherwise, I say screw it :)
[...]
> My personal argument is that if __name__ == '__main__' is totally
> counter-intuitive and unpythonic. It also proves my memory: after many years,
> I still have to think a couple of seconds before remembering whether I
> should use __file__, __name__ or __main__ and where to put the damn quotes.
> The fact that you're comparing a variable name and a string literal which
> seems very similar (both with the double underscore syntax) is totally
> confusing at best.
>
> Also, try teaching it to a beginner and he will go "huh wtf". To fully
> understand it, you must understand how import exactly works (that is, the
> fact that importing a module equals evaluating all of its statement one by
> one). A function called __main__ which is magically invoked by the python
> itself is much much easier to grasp. A different, clearer spelling for the
> if condition (like: "if not __imported__") would help as well.

You need to watch your attitude, and try to present better arguments
than "I don't like it".

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From g.brandl at gmx.net  Thu Jun 29 20:31:16 2006
From: g.brandl at gmx.net (Georg Brandl)
Date: Thu, 29 Jun 2006 20:31:16 +0200
Subject: [Python-Dev] document @property?
Message-ID: <e80vp5$gel$1@sea.gmane.org>

In followup to a clpy discussion, should the docs contain
a note that property can be used as a decorator for creating
read-only properties?
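
For reference, the pattern in question looks like this:

    class Circle(object):
        def __init__(self, radius):
            self._radius = radius

        @property
        def radius(self):
            # Read-only: no setter is defined, so assigning to
            # c.radius raises AttributeError.
            return self._radius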

Georg


From aahz at pythoncraft.com  Thu Jun 29 18:47:20 2006
From: aahz at pythoncraft.com (Aahz)
Date: Thu, 29 Jun 2006 09:47:20 -0700
Subject: [Python-Dev] Joke: Rush Limbaugh (a joke in and of himself)
In-Reply-To: <e80so9$6cr$1@sea.gmane.org>
References: <e80so9$6cr$1@sea.gmane.org>
Message-ID: <20060629164720.GA6114@panix.com>

On Thu, Jun 29, 2006, Scott David Daniels wrote:
>
> Rush Limbaugh was detained and questioned for transporting a possible
> illegal Viagra prescription into the country.
> 
> Well... at least we know his back is feeling better.

I'm hoping this was a typo of an e-mail address for sending, because
this is not appropriate for python-dev.
-- 
Aahz (aahz at pythoncraft.com)           <*>         http://www.pythoncraft.com/

"I saw `cout' being shifted "Hello world" times to the left and stopped
right there."  --Steve Gonedes

From theller at python.net  Thu Jun 29 18:47:23 2006
From: theller at python.net (Thomas Heller)
Date: Thu, 29 Jun 2006 18:47:23 +0200
Subject: [Python-Dev] PyGIL_ and --without-threads
Message-ID: <e8106r$gt2$1@sea.gmane.org>

The PyGIL_ function prototypes in the header files are not protected
within #ifdef WITH_THREAD ... #endif blocks.

I think it is worth implementing this, although I currently don't have
time for it.

Thanks,
Thomas


From alexander.belopolsky at gmail.com  Thu Jun 29 18:47:30 2006
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Thu, 29 Jun 2006 16:47:30 +0000 (UTC)
Subject: [Python-Dev] =?utf-8?q?Proposal_to_eliminate_PySet=5FFini?=
References: <d38f5330606271109s39f64022w53261832cd17c6b@mail.gmail.com>
	<e7solq$o6$1@sea.gmane.org>
Message-ID: <loom.20060629T183800-666@post.gmane.org>

Fredrik Lundh <fredrik <at> pythonware.com> writes:


> given that CPython has about a dozen Fini functions, what exactly is it 
> that makes PySet_Fini so problematic ?
> 

I have not been bitten by the other _Fini yet. ;-)

I was bitten by PySet_Fini when I tried to replace the "interned" dict with a
set. Since setobject is finalized before stringobject, the interpreter crashed
when cleaning up "interned".

I feel that set is a more basic object than dict, but the dictobject module is
never finalized (is this a bug or a feature?), so the dict API functions are
always safe. For example, I can use the dict API in atexit callbacks, but not
the set API.


From rasky at develer.com  Thu Jun 29 18:58:19 2006
From: rasky at develer.com (Giovanni Bajo)
Date: Thu, 29 Jun 2006 18:58:19 +0200
Subject: [Python-Dev] PEP 328 and PEP 338, redux
References: <44A11EA1.1000605@iinet.net.au>
	<5.1.1.6.0.20060627120926.02021fe0@sparrow.telecommunity.com>
	<033001c69a07$44140890$d503030a@trilan>
	<44A1CE4A.2000900@canterbury.ac.nz>
	<ca471dc20606271846l2a0de4fevf667c60dec004039@mail.gmail.com>
	<039201c69ad7$4dc91df0$d503030a@trilan>
	<44A3B648.2060104@gmail.com>
	<018e01c69b77$a9172e40$d503030a@trilan>
	<ca471dc20606290939y2cb512bbj7420a3371f51d5a5@mail.gmail.com>
Message-ID: <037401c69b9d$391adb70$d503030a@trilan>

Guido van Rossum wrote:

>> Real-world usage case for import __main__? Otherwise, I say screw it
>> :) [...] My personal argument is that if __name__ == '__main__' is
>> totally counter-intuitive and unpythonic. It also proves my memory:
>> after many years, I still have to think a couple of seconds before
>> remembering whether I should use __file__, __name__ or __main__ and
>> where to put the damn quotes. The fact that you're comparing a
>> variable name and a string literal which seems very similar (both
>> with the double underscore syntax) is totally confusing at best.
>>
>> Also, try teaching it to a beginner and he will go "huh wtf". To
>> fully understand it, you must understand how import exactly works
>> (that is, the fact that importing a module equals evaluating all of
>> its statement one by one). A function called __main__ which is
>> magically invoked by the python itself is much much easier to grasp.
>> A different, clearer spelling for the if condition (like: "if not
>> __imported__") would help as well.
>
> You need to watch your attitude, and try to present better arguments
> than "I don't like it".

Sorry for the attitude. I thought I brought arguments against if __name__:

- Harder to learn for beginners (requires deeper understanding of import
workings)
- Harder to remember (a string literal compared to a name with the same
naming convention)
- Often requires explicit sys.exit() which breaks python -i
- Broken by -mpkg.mod, and we ended up with another __magic_name__ just
because of this.
- (new) Defining a main() function is already the preferred style for
reusability, so __main__ would encourage the preferred style.

If you believe that these arguments collapse to "I don't like it", then no,
I don't have any arguments.
-- 
Giovanni Bajo


From fdrake at acm.org  Thu Jun 29 19:09:27 2006
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Thu, 29 Jun 2006 13:09:27 -0400
Subject: [Python-Dev] document @property?
In-Reply-To: <e80vp5$gel$1@sea.gmane.org>
References: <e80vp5$gel$1@sea.gmane.org>
Message-ID: <200606291309.28457.fdrake@acm.org>

On Thursday 29 June 2006 14:31, Georg Brandl wrote:
 > In followup to a clpy discussion, should the docs contain
 > a note that property can be used as a decorator for creating
 > read-only properties?

I certainly wouldn't object.  This is a very handy feature of property that I 
use frequently.


  -Fred

-- 
Fred L. Drake, Jr.   <fdrake at acm.org>

From fredrik at pythonware.com  Thu Jun 29 19:16:20 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Thu, 29 Jun 2006 19:16:20 +0200
Subject: [Python-Dev] document @property?
In-Reply-To: <e80vp5$gel$1@sea.gmane.org>
References: <e80vp5$gel$1@sea.gmane.org>
Message-ID: <e811sv$o6d$1@sea.gmane.org>

Georg Brandl wrote:

> In followup to a clpy discussion, should the docs contain
> a note that property can be used as a decorator for creating
> read-only properties?

feel free to steal the extended example and the read-only example from 
the pyref wiki:

     http://pyref.infogami.com/property

</F>


From brett at python.org  Thu Jun 29 19:21:14 2006
From: brett at python.org (Brett Cannon)
Date: Thu, 29 Jun 2006 10:21:14 -0700
Subject: [Python-Dev] doc for new restricted execution design for Python
In-Reply-To: <043001c69b45$35742290$100a0a0a@enfoldsystems.local>
References: <B8D7CBAD-6C47-40F7-B64F-0D99763912F5@redivi.com>
	<043001c69b45$35742290$100a0a0a@enfoldsystems.local>
Message-ID: <bbaeab100606291021q601d41cbpe69a1083bd06a69c@mail.gmail.com>

On 6/28/06, Mark Hammond <mhammond at skippinet.com.au> wrote:
>
> Bob writes:
>
> > I don't know how JavaScript is doing it yet.  The critical thing
> > for me for this month was trying to come up with a security model.
>
> I don't fully understand how JS does it either, certainly not in any
> detail.
> I know that it uses the concept of a "principal" (the IDL file can be seen
> at http://lxr.mozilla.org/seamonkey/source/caps/idl/nsIPrincipal.idl) and
> I
> think that the absence of any principals == "trusted code".  I believe the
> principals are obtained either from the JS stack, or from the "event
> source"
> and a few other obscure exceptions.  There is also lots of C code littered
> with explicit "is this code trusted" calls that makes implicit and
> explicit
> javascript assumptions - not particularly deep assumptions, but they
> exist.


Yeah.  Luckily I am interning at Google this summer and so I have access to
some Mozilla people internally to get help in pointing me in the right
direction.  =)

> Cross-language calls will also need consideration.  JS will be able to
> implicitly or explicitly call Python functions, which again will
> implicitly
> or explicitly call JS functions.  Some of those frames will always be
> unrestricted (ie, they are "components" - often written in C++, they can
> do
> *anything*), but some will not.  We have managed to punt on that given
> that
> Python is currently always unrestricted.


How to work with JS will need to be dealt with eventually.

> In the early stages though, Mozilla is happy to have Python enabled only for
> trusted sources - that means it is limited to Mozilla extensions, or even
> a
> completely new app using the Mozilla framework.  From a practical
> viewpoint,
> that helps "mozilla the platform" more than it helps "firebox the browser"
> etc.  This sandboxing would help the browser, which is great!


Yep!  Also, to help with the "contribution to the field" part of my
dissertation I hope to help develop ways to make developing web apps with
Python easier and better than with JS.  So the goal is to just make it a
no-brainer to dev with Python on the web.

> I'm confident that when the time comes we will get the ear of Brendan Eich
> to help steer us forward.


Cool.

Mark, can you email me (publicly or privately, don't care) links and stuff
about pyXPCOM so that when I start working on stuff I know where you are at
and such with integration?  Obviously I want to keep you in the loop overall
on this whole endeavour.

-Brett
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.python.org/pipermail/python-dev/attachments/20060629/ae62cb53/attachment.html 

From rrr at ronadam.com  Thu Jun 29 19:22:25 2006
From: rrr at ronadam.com (Ron Adam)
Date: Thu, 29 Jun 2006 12:22:25 -0500
Subject: [Python-Dev] once [was: Simple Switch statementZ]
In-Reply-To: <e80m6u$cil$1@sea.gmane.org>
References: <fb6fbf560606280912v127b8ccal6f65c7fa4e2b2ab5@mail.gmail.com><ca471dc20606280940x3871a0a5i23c33e315a3d520a@mail.gmail.com>	<44A2F386.7080003@ronadam.com>
	<e80m6u$cil$1@sea.gmane.org>
Message-ID: <44A40C51.6040100@ronadam.com>

Christos Georgiou wrote:
> I haven't followed the complete discussion about once, but I would assume it 
> would be used as such:
> 
> once <name> = <expression>
> 
> that is, always an assignment, with the value stored as a cellvar, perhaps, 
> on first execution of the code.
> 
> Typically I would use it as:
> 
> def function(a):
>     once pathjoin = os.path.join
>     <etc>


In the "name = (once expr)" form I gave, the property of a constant name 
that can't be rebound or that of a value that persists across function 
call invocations isn't needed.  I was trying to separate the different 
behaviors cleanly and clearly.

     # once as constant assignment and skipped line later.
     for n in range(x, 10):
         once startcube x**3     # assigns constant value, skips later
         print startcube
         startcube += 1          # gives an exception

So this is the same as "const startcube x**3", except it's ignored if it
is executed again instead of giving an exception.


Here the constantness property isn't needed.

     # once as calc once, use result many times expression.
     for n in range(x, 10):
         startcube = (once x**3)     # calculated once used many
         print startcube
         startcube += 1              # Ok to do this


I wasn't suggesting which behavior (or combination thereof) is correct.  That
would depend on what problem is meant to be solved.

A fourth property, that of being external, has been touched on in these
threads, where some of the suggestions require doing a calculation on a
yet-to-be-known value. That's usually handled by linkers in other
languages and probably isn't something desired in a dynamic language
like Python.


Cheers,
    Ron

* I may not be able to reply, due to leaving on a trip.  I should
already be gone. ;-)




From Scott.Daniels at Acm.Org  Thu Jun 29 20:34:09 2006
From: Scott.Daniels at Acm.Org (Scott David Daniels)
Date: Thu, 29 Jun 2006 11:34:09 -0700
Subject: [Python-Dev] Joke: Rush Limbaugh (a joke in and of himself)
In-Reply-To: <20060629164720.GA6114@panix.com>
References: <e80so9$6cr$1@sea.gmane.org> <20060629164720.GA6114@panix.com>
Message-ID: <e816dm$82o$1@sea.gmane.org>

Aahz wrote:
> On Thu, Jun 29, 2006, Scott David Daniels wrote:
>> <a quoted joke>.
> I'm hoping this was a typo of an e-mail address for sending, because
> this is not appropriate for python-dev.

This absolutely was a matter of clicking the wrong spot.  I completely
agree it would be inappropriate for this forum.  I retracted it as soon
as I could, and I apologize to the group.

-- Scott David Daniels
Scott.Daniels at Acm.Org


From brett at python.org  Thu Jun 29 20:48:36 2006
From: brett at python.org (Brett Cannon)
Date: Thu, 29 Jun 2006 11:48:36 -0700
Subject: [Python-Dev] For sandboxing: alternative to crippling file()
Message-ID: <bbaeab100606291148o1e03d44ardbb0d3358aae7513@mail.gmail.com>

I have gotten some questions from people about why I cripple 'file' (and
probably 'socket' if they cared), asking why I didn't just remove the
'file' built-in from built-ins.  The problem is that I still want to provide
some protection for files.

So an option I have been thinking of is making sure 'file' does not end up
in built-ins by just not inserting it into tstate->interp->builtins (see
Include/pystate.h to see what other fields there are; this also roughly
answers Trent's question about how "heavy" interpreters are).  Then, there
can be a file delegate class that can, at the C level, store a reference to
the actual 'file' object that is open.  Finally, whether a path is legal or
not can be checked by open().

And the open() thing is the key here.  Guido always talks about how open()
should be treated more like a factory function that could some day return
the proper object based on its argument.  Well, perhaps we should start
doing that and add support for HTTP addresses and IP addresses.  Then the
file and networking settings can be viewed more as global settings to be
followed for file handling instead of specific restrictions placed on the
'file' and socket types.
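
Sketched at the Python level, the idea would look roughly like this
(the real thing would live in C, and _allowed and _FileDelegate are
made-up names):

    _allowed = ['/tmp/sandbox']    # hypothetical global file settings

    class _FileDelegate(object):
        # Stores a reference to the real file object and forwards only
        # the operations the sandbox wants to expose.
        def __init__(self, real_file):
            self._file = real_file
        def read(self, size=-1):
            return self._file.read(size)
        def write(self, data):
            return self._file.write(data)
        def close(self):
            return self._file.close()

    def open(path, mode='r'):
        for prefix in _allowed:
            if path.startswith(prefix):
                return _FileDelegate(file(path, mode))
        raise IOError("access to %r is not permitted" % path)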

My worry, as it has been from the start, is containing 'file'.  The ``del
__builtins__`` bug for 'rexec' made me skittish towards hiding stuff
from the built-in namespace.  And knowing how easy it tends to be to get at
objects and types in Python in general makes me worry even more about hiding
objects and types properly from people (within reason, of course; if you
want to allow blatant 'file' access we should be able to give it to you
somehow).  But perhaps removing 'file' from the builtins dict in the
PyInterpreterState is enough to hide the type.

So, my question to python-dev, is:

1) Is removing 'file' from the builtins dict in PyInterpreterState (and
maybe some other things) going to be safe enough to sufficiently hide 'file'
confidently (short of someone being stupid in their C extension module and
exposing 'file' directly)?

2) Is changing open() to return C-implemented delegate objects for files
(which thus won't type check, but this is Python so I am not worried about
that too much) and delegate socket objects for IP and URL addresses
reasonable?


Basically, do people think doing this, instead of modifying the 'file'
object directly and crippling it, is better and safer in terms of possible
security breach issues?

-Brett
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.python.org/pipermail/python-dev/attachments/20060629/415095e2/attachment-0001.htm 

From amk at amk.ca  Thu Jun 29 21:14:42 2006
From: amk at amk.ca (A.M. Kuchling)
Date: Thu, 29 Jun 2006 15:14:42 -0400
Subject: [Python-Dev] For sandboxing: alternative to crippling file()
In-Reply-To: <bbaeab100606291148o1e03d44ardbb0d3358aae7513@mail.gmail.com>
References: <bbaeab100606291148o1e03d44ardbb0d3358aae7513@mail.gmail.com>
Message-ID: <20060629191442.GA15028@rogue.amk.ca>

On Thu, Jun 29, 2006 at 11:48:36AM -0700, Brett Cannon wrote:
> My worry, as has been from the start, is containing 'file'.  The ``del
> __builtins__`` bug for 'rexec' started me as skittish towards hiding stuff
> from the built-in namespace.  And knowing how easy it tends to be to get at
> objects and types in Python in general makes me worry even more about hiding
> objects and types properly from people (within reason, of course; if you

Random, only tangentially-related thought: what if each interpreter
had a blacklist of objects that should never be made available to
untrusted code?  You could then put __builtins__, file, and anything
else on this list.  Then, using some #ifdef'ery in ceval.c, check if
an object is on this blacklist before pushing it onto the evaluation
stack; if it's a blacklisted object, replace it with None (or raise an
exception).

This entails a performance hit and makes it impossible to support
Bastion-like functionality, where untrusted code could call code that
would be treated as trusted, but it also means that, even if you find
some type(foo).__dict__['blah'].co_magic incantation that lets you get
to a dangerous type object or module, it wouldn't matter because the
dangerous value is silently substituted, and the untrusted code has no
way of breaking out of this.  (Could you fool a C extension into doing
stuff with a dangerous object?  Don't know...)

This thought was sparked by the piece on failure-oblivious computing
in today's Linux Weekly News about this paper:
http://www.usenix.org/events/osdi04/tech/rinard.html.  The authors
tried continuing to run after a memory error instead of segfaulting:
out-of-bounds writes were ignored, and OOB reads returned generated
values.  See the LWN discussion for more (subscribers only).

--amk

From brett at python.org  Thu Jun 29 21:31:22 2006
From: brett at python.org (Brett Cannon)
Date: Thu, 29 Jun 2006 12:31:22 -0700
Subject: [Python-Dev] For sandboxing: alternative to crippling file()
In-Reply-To: <20060629191442.GA15028@rogue.amk.ca>
References: <bbaeab100606291148o1e03d44ardbb0d3358aae7513@mail.gmail.com>
	<20060629191442.GA15028@rogue.amk.ca>
Message-ID: <bbaeab100606291231yf024b6fp5898f6eac475cdb2@mail.gmail.com>

On 6/29/06, A.M. Kuchling <amk at amk.ca> wrote:
>
> On Thu, Jun 29, 2006 at 11:48:36AM -0700, Brett Cannon wrote:
> > My worry, as has been from the start, is containing 'file'.  The ``del
> > __builtins__`` bug for 'rexec' started me as skittish towards hiding
> stuff
> > from the built-in namespace.  And knowing how easy it tends to be to get
> at
> > objects and types in Python in general makes me worry even more about
> hiding
> > objects and types properly from people (within reason, of course; if you
>
> Random, only tangentially-related thought: what if each interpreter
> had a blacklist of objects that should never be made available to
> untrusted code?  You could then put __builtins__, file, and anything
> else on this list.  Then, using some #ifdef'ery in ceval.c, check if
> an object is on this blacklist before pushing it onto the evaluation
> stack; if it's a blacklisted object, replace it with None (or raise an
> exception).


Huh.  Interesting idea.  I would go with the exception position (pushing
None feels very Lua/JavaScript-like).

> This entails a performance hit and makes it impossible to support
> Bastion-like functionality, where untrusted code could call code that
> would be treated as trusted, but it also means that, even if you find
> some type(foo).__dict__['blah'].co_magic incantation that lets you get
> to a dangerous type object or module, it wouldn't matter because the
> dangerous value is silently substituted, and the untrusted code has no
> way of breaking out of this.  (Could you fool a C extension into doing
> stuff with a dangerous object?  Don't know...)


Yeah, that would definitely help with the issue.  And I am not even thinking
of Bastion functionality.  If you want something like that, write a delegate
in C.

And this could be extended so that a list of objects that should be banned
could be added to and checked as needed.  Performance would drop, but I
don't know by how much.

> This thought was sparked by the piece on failure-oblivious computing
> in today's Linux Weekly News about this paper:
> http://www.usenix.org/events/osdi04/tech/rinard.html.  The authors
> tried continuing to run after a memory error instead of segfaulting:
> out-of-bounds writes were ignored, and OOB reads returned generated
> values.  See the LWN discussion for more (subscribers only).


Thanks for the link, Andrew!

-Brett
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.python.org/pipermail/python-dev/attachments/20060629/f4f62fe1/attachment.html 

From martin at v.loewis.de  Thu Jun 29 21:47:55 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Thu, 29 Jun 2006 21:47:55 +0200
Subject: [Python-Dev] msvccompiler.py: some remarks
In-Reply-To: <3e1553560606290652k40a4fe8k8bc8d7bb9825e9f@mail.gmail.com>
References: <3e1553560606290652k40a4fe8k8bc8d7bb9825e9f@mail.gmail.com>
Message-ID: <44A42E6B.8040400@v.loewis.de>

Jeroen Ruigrok van der Werven wrote:
> I am testing/working on some Python code on Windows.
> During this I encounter some issues where I am being told I don't have
> the .Net SDK installed. So I started investigating this issue and came
> to http://www.vrplumber.com/programming/mstoolkit/index.html

We should remove/change this comment. It is utterly misleading.

> 1) If MSSdk is set it does not automatically mean that cl.exe and the
> rest are available. With the latest SDKs, Windows 2003 R2 at least,
> the bin directory contains no compilers, linkers or the like. On the
> other hand, it is perfectly valid to set MSSdk to your Platform SDK
> installation directory. So this is unfortunately a problematic
> solution as introduced in revision 42515.

I meant to leave this as a per-shell choice. If you set MSSdk, you
indicate that the environment you created is "right", and distutils
should not second-guess you. This is problematic if the user did
"register environment variables" when installing the SDK, so I plan
to change this to look for a different environment variable (in
addition).

> 2) As far as I have been able to determine .Net 2.0 uses
> sdkInstallRootv2.0. Also it installs by default under C:\Program
> Files\Microsoft Visual Studio 8\SDK\v2.0\

Forget about Visual Studio 8 and .NET 2.0. It won't help here.

> 3) The Windows 2003 R2 Platform SDK uses
> HKLM\SOFTWARE\Microsoft\MicrosoftSDK\InstalledSDKs\D2FF9F89-8AA2-4373-8A31-C838BF4DBBE1,
> which in turn has a entry for 'Install Dir' which lists the
> installation directory for the Platform SDK.

Correct. This helps for Itanium and AMD64 extension modules.

> So basically a bunch of logic needs to be rewritten for newer version
> support and I will investigate this.

No. The checks are all fine.

Regards,
Martin

From benji at benjiyork.com  Thu Jun 29 21:55:00 2006
From: benji at benjiyork.com (Benji York)
Date: Thu, 29 Jun 2006 15:55:00 -0400
Subject: [Python-Dev] For sandboxing: alternative to crippling file()
In-Reply-To: <20060629191442.GA15028@rogue.amk.ca>
References: <bbaeab100606291148o1e03d44ardbb0d3358aae7513@mail.gmail.com>
	<20060629191442.GA15028@rogue.amk.ca>
Message-ID: <44A43014.90602@benjiyork.com>

A.M. Kuchling wrote:
> This thought was sparked by the piece on failure-oblivious computing
> in today's Linux Weekly News about this paper:
> http://www.usenix.org/events/osdi04/tech/rinard.html. 

The paper is also available from one of the authors at 
http://www.cag.lcs.mit.edu/~rinard/paper/osdi04.pdf
--
Benji York

From martin at v.loewis.de  Thu Jun 29 22:01:38 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Thu, 29 Jun 2006 22:01:38 +0200
Subject: [Python-Dev] Proposal to eliminate PySet_Fini
In-Reply-To: <loom.20060629T183800-666@post.gmane.org>
References: <d38f5330606271109s39f64022w53261832cd17c6b@mail.gmail.com>	<e7solq$o6$1@sea.gmane.org>
	<loom.20060629T183800-666@post.gmane.org>
Message-ID: <44A431A2.5030707@v.loewis.de>

Alexander Belopolsky wrote:
> I feel that set is a more basic object than dict

I don't feel that way; dict is more basic, set is just a special case of
dict for performance reasons. Also, dict is used to define and implement
the language itself, set is "just" a predefined type.

> but dictobject module is never finalized (is this a bug or a feature?)

I guess it's a feature. What should PyDict_Fini do? Release the dummy
object? That can't work, and won't help.

> For example, I can use dict API in atexit callbacks, but not set API.

Right. It is by design that you can use the dict API everywhere, since
dict is part of the language itself. set wasn't designed with such a
goal (the same is true for many other types, I would guess).

Regards,
Martin

From alexander.belopolsky at gmail.com  Thu Jun 29 22:26:41 2006
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Thu, 29 Jun 2006 16:26:41 -0400
Subject: [Python-Dev] Proposal to eliminate PySet_Fini
In-Reply-To: <44A431A2.5030707@v.loewis.de>
References: <d38f5330606271109s39f64022w53261832cd17c6b@mail.gmail.com>
	<e7solq$o6$1@sea.gmane.org> <loom.20060629T183800-666@post.gmane.org>
	<44A431A2.5030707@v.loewis.de>
Message-ID: <d38f5330606291326g7e916651vc8a3fc509e8c7457@mail.gmail.com>

On 6/29/06, "Martin v. L?wis" <martin at v.loewis.de> wrote:
>... dict is more basic, set is just a special case of
> dict for performance reasons. Also, dict is used to define and implement
> the language itself, set is "just" a predefined type.
>
I guess it can be seen either way, just as a chicken and an egg.  Does
python-3000 still plan to integrate sets and dicts so that a set is a
view of a dict?  That would support the view that a set is more basic
(dict code will depend on set code but not the other way around).

If set has better performance than dict (which I have not noticed so
far), it will be appropriate to use it in the language implementation
where it can replace a dict.  The prime example is the "interned"
dict.

>.... What should PyDict_Fini do? Release the dummy
> object?

That and a list of free dicts.

> That can't work, and won't help.
>
Probably, but I am not arguing that PyDict_Fini is needed. The dict dummy
should be static as well, and the free-dicts list is probably not needed
in the presence of pymalloc.

> ... It is by design that you can use the dict API everywhere, since
> dict is part of the language itself. set wasn't designed with such a
> goal (the same is true for many other types, I would guess).

That's probably the heart of my proposal.  I would like to see sets
usable, as part of the language or of an application that embeds the
language, everywhere dicts can be used today.

From jimjjewett at gmail.com  Fri Jun 30 00:16:48 2006
From: jimjjewett at gmail.com (Jim Jewett)
Date: Thu, 29 Jun 2006 18:16:48 -0400
Subject: [Python-Dev] PEP 328 and PEP 338, redux
Message-ID: <fb6fbf560606291516r490dd263g7d2f6e49c97ab7a4@mail.gmail.com>

> Real-world usage case for import __main__?

Typically for inter-module communication.  A standard name (such as
lobby, or __settings__) would work as well, but __main__ is what we
have, for backwards compatibility.

-jJ

From Scott.Daniels at Acm.Org  Fri Jun 30 01:18:53 2006
From: Scott.Daniels at Acm.Org (Scott David Daniels)
Date: Thu, 29 Jun 2006 16:18:53 -0700
Subject: [Python-Dev] xturtle.py a replacement for turtle.py(!?)
In-Reply-To: <44A32B08.2090700@v.loewis.de>
References: <200606290116.38007.anthony@interlink.com.au>	<44A2A9EB.6050802@aon.at>	<20060628094550.109C.JCARLSON@uci.edu>	<43aa6ff70606281003t591aa254t3b0250154785773e@mail.gmail.com>	<44A2D18C.3000705@v.loewis.de>	<44A32390.8050600@canterbury.ac.nz>
	<44A32B08.2090700@v.loewis.de>
Message-ID: <e81n3i$vcn$1@sea.gmane.org>

Martin v. Löwis wrote:
> Greg Ewing wrote:
>> BTW, I'm not sure if 'xturtle' is such a good name.
>> There's a tradition of X Windows executables having
>> names starting with 'x', whereas this is presumably
>> platform-independent.
>>
>> Maybe 'turtleplus' or something?
> 
> When it goes into Python, it will be 'turtle'.
> 
Perhaps in the meantime (if xturtle is not loved),
you could go with "turtle_" as in "like the standard
turtle, but my definition."

-- 
-- Scott David Daniels
Scott.Daniels at Acm.Org


From jcarlson at uci.edu  Fri Jun 30 02:37:20 2006
From: jcarlson at uci.edu (Josiah Carlson)
Date: Thu, 29 Jun 2006 17:37:20 -0700
Subject: [Python-Dev] Proposal to eliminate PySet_Fini
In-Reply-To: <d38f5330606291326g7e916651vc8a3fc509e8c7457@mail.gmail.com>
References: <44A431A2.5030707@v.loewis.de>
	<d38f5330606291326g7e916651vc8a3fc509e8c7457@mail.gmail.com>
Message-ID: <20060629173028.10B3.JCARLSON@uci.edu>


"Alexander Belopolsky" <alexander.belopolsky at gmail.com> wrote:
> On 6/29/06, "Martin v. L?wis" <martin at v.loewis.de> wrote:
> >... dict is more basic, set is just a special case of
> > dict for performance reasons. Also, dict is used to define and implement
> > the language itself, set is "just" a predefined type.
> >
> I guess it can be seen either way, just as a chicken and an egg.  Does
> python-3000 still plan to integrate sets and dicts so that a set is a
> view of a dict?  That would support the view that a set is more basic
> (dict code will depend on set code but not the other way around).

I don't think that makes sense.  I see a basic structure as one that can
be used to implement other structures.  A dict can emulate a set, but a
set cannot emulate a dict. Thus, a set is a specialization of a dict
with fewer features than the regular dict.

> If set has better performance than dict (which I have not noticed so
> far), it will be appropriate to use it in the language implementation
> where it can replace a dict.  The prime example is the "interned"
> dict.

The performance, I believe, is based on a Python 2.5 optimization
that reduces memory consumption from 12 to 8 bytes per entry.

> > ... It is by design that you can use the dict API everywhere, since
> > dict is part of the language itself. set wasn't designed with such a
> > goal (the same is true for many other types, I would guess).
> 
> That's probably the hart of my proposal.  I would like to see sets
> usable as a part of the language, or an application that embeds the
> language everywhere where dicts can be used today.

I disagree.  You can get everything you need with a dict, and making
sets a part of the language (besides being a builtin type) would
necessarily add more overhead and maintenance to the language for little
gain.  If you need set-like functionality, and you need it to not be
finalized, use a dict; it is available today, can do all the same things, and
you don't need to wait at least 1.5 years until Python 2.6 is out.
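
For example, a minimal sketch of a dict standing in for a set:

    # A dict whose keys are the members gives set-like behaviour today.
    s = {}
    s['spam'] = True          # add
    'spam' in s               # membership test -> True
    del s['spam']             # remove
    members = s.keys()        # list the members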

 - Josiah


From alexander.belopolsky at gmail.com  Fri Jun 30 03:06:18 2006
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Thu, 29 Jun 2006 21:06:18 -0400
Subject: [Python-Dev] Proposal to eliminate PySet_Fini
In-Reply-To: <20060629173028.10B3.JCARLSON@uci.edu>
References: <44A431A2.5030707@v.loewis.de>
	<d38f5330606291326g7e916651vc8a3fc509e8c7457@mail.gmail.com>
	<20060629173028.10B3.JCARLSON@uci.edu>
Message-ID: <d38f5330606291806pf669d29u8afe42631e395cfd@mail.gmail.com>

On 6/29/06, Josiah Carlson <jcarlson at uci.edu> wrote:
> I disagree.  You can get everything you need with a dict, and making
> sets a part of the language (besides being a builtin type), would
> necessarily add more overhead and maintenance to the language for little
> gain.  If you need set-like functionality, and you need it to not be
> finalized, use a dict; it is available today, can do all the same things, and
> you don't need to wait at least 1.5 years until Python 2.6 is out.
>

That was a purely altruistic proposal.  I've already discovered that
sets are finalized and that some code that works with a dict emulating
a set may not work with a set.  It will not make much difference for me
whether my proposal is implemented in 2.6 or even in 3.0, but the sooner
it happens the fewer people will stumble on the same problem that I did.
I also feel that the dummy allocated on the heap and the free set list
complicate the code with no gain.

Given the negative feedback, I will probably not try to make a patch,
but such a patch would mostly consist of removed lines.

From t-bruch at microsoft.com  Fri Jun 30 03:17:29 2006
From: t-bruch at microsoft.com (Bruce Christensen)
Date: Thu, 29 Jun 2006 18:17:29 -0700
Subject: [Python-Dev] Pickle implementation questions
Message-ID: <3581AA168D87A2479D88EA319BDF7D32BC21A4@RED-MSG-80.redmond.corp.microsoft.com>

In developing a cPickle module for IronPython that's as compatible as
possible with CPython, these questions have come up: 

 - Where are object.__reduce__ and object.__reduce_ex__ defined, and how
does copy_reg._reduce_ex fit into the picture? PEP 307 states that the
default __reduce__ implementation for new-style classes implemented in
Python is copy_reg._reduce. However, in  Python 2.4.3 dir(copy_reg)
indicates that it has no _reduce method. (Tangentially, is there a way
to inspect a method_descriptor object to determine the function it's
bound to?)

 - When the optional constructor argument is passed to copy_reg.pickle,
where is it stored and how is it used by pickle?

 - What does copy_reg.constructor() do?

Thanks!

--Bruce

From martin at v.loewis.de  Fri Jun 30 07:09:19 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Fri, 30 Jun 2006 07:09:19 +0200
Subject: [Python-Dev] Pickle implementation questions
In-Reply-To: <3581AA168D87A2479D88EA319BDF7D32BC21A4@RED-MSG-80.redmond.corp.microsoft.com>
References: <3581AA168D87A2479D88EA319BDF7D32BC21A4@RED-MSG-80.redmond.corp.microsoft.com>
Message-ID: <44A4B1FF.7030709@v.loewis.de>

Bruce Christensen wrote:
> In developing a cPickle module for IronPython that's as compatible as
> possible with CPython, these questions have come up: 

[I wish you were allowed to read the source code of Python]

>  - Where are object.__reduce__ and object.__reduce_ex__ defined, and how
> does copy_reg._reduce_ex fit into the picture? 

See

http://docs.python.org/lib/node69.html


> PEP 307 states that the
> default __reduce__ implementation for new-style classes implemented in
> Python is copy_reg._reduce. However, in  Python 2.4.3 dir(copy_reg)
> indicates that it has no _reduce method.

Yes, it calls copy_reg._reduce_ex now (which also expects the protocol
version)

>  - When the optional constructor argument is passed to copy_reg.pickle,
> where is it stored and how is it used by pickle?

It's not used anymore. A comment says

    # The constructor_ob function is a vestige of safe for unpickling.
    # There is no reason for the caller to pass it anymore.

>  - What does copy_reg.constructor() do?

It does this:

def constructor(object):
    if not callable(object):
        raise TypeError("constructors must be callable")
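
For context, a small usage sketch of copy_reg.pickle (Point and
reduce_point are made-up names):

    import copy_reg

    class Point(object):
        def __init__(self, x, y):
            self.x, self.y = x, y

    def reduce_point(p):
        # Return (callable, args): pickle records these and calls
        # Point(x, y) again on unpickling.
        return Point, (p.x, p.y)

    copy_reg.pickle(Point, reduce_point)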

Regards,
Martin

From ashemedai at gmail.com  Fri Jun 30 07:29:41 2006
From: ashemedai at gmail.com (Jeroen Ruigrok van der Werven)
Date: Fri, 30 Jun 2006 07:29:41 +0200
Subject: [Python-Dev] msvccompiler.py: some remarks
In-Reply-To: <44A42E6B.8040400@v.loewis.de>
References: <3e1553560606290652k40a4fe8k8bc8d7bb9825e9f@mail.gmail.com>
	<44A42E6B.8040400@v.loewis.de>
Message-ID: <3e1553560606292229h4229026dwf93811878b9f0e8c@mail.gmail.com>

Hi Martin,

On 6/29/06, "Martin v. L?wis" <martin at v.loewis.de> wrote:
> We should remove/change this comment. It is utterly misleading.

To a warning/error stating that you are missing a compiler?

> I meant to leave this as a per-shell choice. If you set MSSdk, you
> indicate that the environment you created is "right", and distutils
> should not second-guess you. This is problematic if the user did
> "register environment variables" when installing the SDK, so I plan
> to change this to look for a different environment variable (in
> addition)

OK, that makes sense.

> > 2) As far as I have been able to determine .Net 2.0 uses
> > sdkInstallRootv2.0. Also it installs by default under C:\Program
> > Files\Microsoft Visual Studio 8\SDK\v2.0\
>
> Forget about Visual Studio 8 and .NET 2.0. It won't help here.

I only have .NET 1.1 and 2.0 and Visual Studio 2005 (8) installed. Why
should I forget about it? Is Python compiled with much older compilers
and thus unable to work nicely together with them, or?

> > So basically a bunch of logic needs to be rewritten for newer version
> > support and I will investigate this.
>
> No. The checks are all fine.

From what I can see, not if you have newer versions of .NET such as 2.0,
which is basically the de facto standard at the moment.
So please elaborate a bit more so that I gain some insight into this,
because I need it in order to build a working pyDB2 on my
Windows system to do some testing.

-- 
Jeroen Ruigrok van der Werven

From fredrik at pythonware.com  Fri Jun 30 08:59:10 2006
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Fri, 30 Jun 2006 08:59:10 +0200
Subject: [Python-Dev] Pickle implementation questions
References: <3581AA168D87A2479D88EA319BDF7D32BC21A4@RED-MSG-80.redmond.corp.microsoft.com>
	<44A4B1FF.7030709@v.loewis.de>
Message-ID: <e82i3u$alm$1@sea.gmane.org>

Martin v. Löwis wrote:

>> In developing a cPickle module for IronPython that's as compatible as
>> possible with CPython, these questions have come up:
>
> [I wish you were allowed to read the source code of Python]

on the other hand, it would be nice if someone actually used Bruce's questions
and the clarifications to update the documentation; the ideas behind the internal
pickle interfaces aren't exactly obvious, even if you have the source.

</F> 




From nnorwitz at gmail.com  Fri Jun 30 09:05:10 2006
From: nnorwitz at gmail.com (Neal Norwitz)
Date: Fri, 30 Jun 2006 00:05:10 -0700
Subject: [Python-Dev] 2.5 and beyond
Message-ID: <ee2a432c0606300005g256b3391na6e684123d3e9e93@mail.gmail.com>

I'm glad to see Anthony ratcheting down.  At this point, we need to be
fixing bugs and improving doc.  Maybe Anthony and I should have a
contest to see who can revert the most changes. :-)

There are at least 6 bugs that really, really need to be fixed before
release.  Several of these are AST bugs.  Jeremy knows about them and
plans to fix them once he's back from vacation.  Anyone else wanna
help out?  One is for a socket problem and another is for doc.  The
current list of serious bugs are in the PEP:

  http://www.python.org/dev/peps/pep-0356/

If there are any bugs you think should be considered show stoppers,
mail them to the list and I will update the PEP.  If you are a
committer, just update the PEP yourself.  We really need everyone to
help.  There were a lot of changes that didn't have tests and/or NEWS
entries.  I tried to reply to the checkin messages for those I
noticed.  I have tons of messages in my inbox where I don't know if
the issue was addressed or not.  Can everyone try to find the holes?
And new ones keep popping up!  Please let the author know they need to
fix the problem.  It's really tempting to just revert these changes...

We also need to fix the test suite.  This warning needs to be addressed!

    Lib/struct.py:63: DeprecationWarning: struct integer overflow masking is deprecated
      return o.pack(*args)

Since we are in feature freeze, now seems like a good time to make a
PEP for 2.6:

  http://www.python.org/dev/peps/pep-0361/

It's pretty empty right now.  The plan is to make the schedule in a
year from now.  Start adding your new features to the PEP, not the
code.

n

From arigo at tunes.org  Fri Jun 30 10:19:21 2006
From: arigo at tunes.org (Armin Rigo)
Date: Fri, 30 Jun 2006 10:19:21 +0200
Subject: [Python-Dev] PEP 3103: A Switch/Case Statement
In-Reply-To: <ca471dc20606261223g55126e1fg457c30049dbbf34b@mail.gmail.com>
References: <ca471dc20606261223g55126e1fg457c30049dbbf34b@mail.gmail.com>
Message-ID: <20060630081921.GA31034@code0.codespeak.net>

Hi,

On Mon, Jun 26, 2006 at 12:23:00PM -0700, Guido van Rossum wrote:
> Feedback (also about misrepresentation of alternatives I don't favor)
> is most welcome, either to me directly or as a followup to this post.

So my 2 cents, particularly about when things are computed and ways to
control that explicitly: there was a point in time where I could say
that I liked Python because language design was not constrained by
performance issues.  It looks like that is becoming a thing of the past, small
step by small step.  I'll have to get used to mentally filtering out
'static' or whatever the keyword will be, liberally sprinkled through
programs I read to make them slightly faster.

Maybe I should, more constructively, propose to start a thread on the
subject of: what would be required to achieve similar effects as the
intended one at the implementation level, without strange
early-computation semantics?

I'm not talking about Psyco stuff here; there are ways to do this with
reasonably simple refactorings of global variable accesses.  I
experimented a couple of years ago with making them more direct (just
as a lot of people did, as part of the "faster LOAD_GLOBAL" trend).  I
dropped this as it didn't make things much faster, but it had a nice
side-effect: allowing call-backs for binding changes.  This would be a
good base on top of which to make transparent, recomputed-when-changed
constant-folding of simple expressions.  Building dicts for switch and
keeping them up-to-date...  Does it make sense for me to continue
this discussion?
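
For concreteness, here is the sort of dict-based dispatch being discussed,
written out as a plain-Python sketch (all names are invented):

    def handle_red():
        return 'stop'

    def handle_green():
        return 'go'

    # the table a switch implementation would ideally build once and then keep
    # up to date whenever the bound handler names change
    _dispatch = {'red': handle_red, 'green': handle_green}

    def switch(colour):
        try:
            return _dispatch[colour]()
        except KeyError:
            return 'default'

    assert switch('red') == 'stop'
    assert switch('blue') == 'default'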


A bientot,

Armin.

From theller at python.net  Fri Jun 30 11:04:06 2006
From: theller at python.net (Thomas Heller)
Date: Fri, 30 Jun 2006 11:04:06 +0200
Subject: [Python-Dev] Moving the ctypes repository to python.org
In-Reply-To: <449C5157.5050004@v.loewis.de>
References: <e7ge79$kg8$1@sea.gmane.org>
	<20060623145602.GB10250@niemeyer.net>	<e7h24t$t7c$1@sea.gmane.org>
	<449C5157.5050004@v.loewis.de>
Message-ID: <e82pea$22d$1@sea.gmane.org>

Martin v. Löwis schrieb:
> Thomas Heller wrote:
>> What I did was at a certain time develop in the 'branch_1_0' branch, leaving
>> HEAD for experimental work.  Later I decided that this was wrong, cvs removed all
>> files in HEAD, and added them back from a branch_1_0 checkout.  Maybe doing
>> this was another bad idea, as the trunk in the converted SVN repository
>> only lists _ctypes.c revisions corresponding to the CVS version numbers
>> 1.307 up to the current CVS head 1.340.  All the older versions from 1.1 up to
>> 1.226.2.55 show up in the branch_1_0 branch that cvs2svn has created - although
>> in CVS only the versions 1.226.0.2 up to 1.226.2.55 were ever in the branch_1_0
>> branch.  Is that a bug in cvs2svnn?
> 
> I doubt it. I'm pretty sure the subversion repository *does* contain all
> the old files, in the old revisions. What happens if you do the
> following on your converted subversion repository:
> 
> 1. find out the oldest version of the files from svn log. Say this is
>    version 1000.
> 2. Explicitly check out the trunk at version 950 (i.e. some plenty
>    revisions before your copied the files from the branch).
> 
> I expect that this will give you the files just before you deleted
> them; doing "svn log" on this sandbox will then give you all the old
> log messages and versions.
> 
> If that is what happens, here is why: "svn log" will trace a file
> through all its revisions, and across "svn copy"s, back to when it
> was added into the repository. At that point, "svn log" stops.
> An earlier file with the same name which got removed is considered
> as a different file, so "svn log" does not show its revisions.
> 
> If you don't want that to happen, you could try to "outdate" (cvs -o)
> the deletion and readdition in CVS, purging that piece of history.
> I'm not entirely certain whether this should work.

You mean 'cvs admin -o', right?

Yes, that works.  Here is what I did:

- got a local copy of the cvs repository from SF with rsync.
  All the following was done on this local copy.

- I ran 'cvs log' over the whole repository, and noted the files
  that were removed in the HEAD and later readded from a branch.

- I wrote a script that calls 'cvs admin -o' on them (a sketch follows below).

- removed those directories that should not be added to Python SVN,
  these were the ctypes-java, misc and CVSROOT directories.


- called 'cvs2svn --dump-only' to create a subversion dumpfile.
  The file size is around 100 MB.

- Created a local SVN repository, called 'svnadmin load'.

Now I can checkout from the local SVN repository, and the whole
history on the files that I checked is present.
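
For reference, a sketch of the kind of outdating script mentioned above (the
file name and revision range are hypothetical; it assumes a checkout of the
rsync'ed copy and that 'cvs admin -o' is available):

    import os
    import subprocess

    # files that were removed on HEAD and later re-added from branch_1_0,
    # mapped to the dead revision ranges to purge (values are hypothetical)
    dead_revisions = {
        'source/_ctypes.c': '1.227:1.306',
    }

    workdir = os.path.expanduser('~/ctypes-work')   # checkout of the local copy

    for path, revs in dead_revisions.items():
        # 'cvs admin -oREV1:REV2 FILE' outdates (deletes) those revisions,
        # purging that piece of history before cvs2svn is run
        status = subprocess.call(['cvs', 'admin', '-o' + revs, path], cwd=workdir)
        if status != 0:
            raise RuntimeError('cvs admin -o failed for %s' % path)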

The svn checkout has this structure, which is OK imo:

ctypes
  branches
  tags
  trunk

The remaining questions are:

- Do I need special rights to call 'svnadmin load' to import this dumpfile
  into Python SVN, or are the normal commit rights sufficient?
  What exactly is the URL/PATH where it should be imported (some sandbox,
  I assume)?

- What about the Python trunk?  Should changes from the sandbox be merged
  into Modules/_ctypes and Lib/ctypes, or would it be better (or possible at all)
  to use the external mechanism?

Thanks,
Thomas


From python-dev at zesty.ca  Fri Jun 30 11:36:38 2006
From: python-dev at zesty.ca (Ka-Ping Yee)
Date: Fri, 30 Jun 2006 04:36:38 -0500 (CDT)
Subject: [Python-Dev] 2.5 and beyond
In-Reply-To: <ee2a432c0606300005g256b3391na6e684123d3e9e93@mail.gmail.com>
References: <ee2a432c0606300005g256b3391na6e684123d3e9e93@mail.gmail.com>
Message-ID: <Pine.LNX.4.58.0606300428100.17937@server1.LFW.org>

On Fri, 30 Jun 2006, Neal Norwitz wrote:
> The current list of serious bugs are in the PEP:
>
>   http://www.python.org/dev/peps/pep-0356/

Among them is this one:

    Incorrect LOAD/STORE_GLOBAL generation
    http://python.org/sf/1501934

The question is, what behaviour is preferable for this code:

    g = 1
    def f():
        g += 1

    f()

Should this raise an UnboundLocalError or should it increment g?

(Or, in other words, should augmented assignment be considered
a local binding like regular assignment, or not?)


-- ?!ng

From kristjan at ccpgames.com  Fri Jun 30 11:46:56 2006
From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_V=2E_J=F3nsson?=)
Date: Fri, 30 Jun 2006 09:46:56 -0000
Subject: [Python-Dev] Proposal to eliminate PySet_Fini
Message-ID: <129CEF95A523704B9D46959C922A280002FE982C@nemesis.central.ccp.cc>

 

> 
> That was a purely altruistic proposal.  I've already 
> discovered that sets are finalized and that some code that 
> works with dict emulating a set may not work with a set.  It 
> will not make much difference for me if my proposal will be 
> implemented in 2.6 or even in 3.0, but the sooner it will 
> happen the fewer people will stumble on the same problem that 
> I did. I also feel that dummy allocated on the heap and the 
> free set list are complicating the code with no gain.
> 

Can this not be resolved by carefully adjusting the order of finalization?  If code can be bootstrapped it can be strootbapped.
As a side note, is there a finalization order list for imported modules?
Kristján

From p.f.moore at gmail.com  Fri Jun 30 11:58:06 2006
From: p.f.moore at gmail.com (Paul Moore)
Date: Fri, 30 Jun 2006 10:58:06 +0100
Subject: [Python-Dev] msvccompiler.py: some remarks
In-Reply-To: <3e1553560606292229h4229026dwf93811878b9f0e8c@mail.gmail.com>
References: <3e1553560606290652k40a4fe8k8bc8d7bb9825e9f@mail.gmail.com>
	<44A42E6B.8040400@v.loewis.de>
	<3e1553560606292229h4229026dwf93811878b9f0e8c@mail.gmail.com>
Message-ID: <79990c6b0606300258qa929efar65584a1f88dd3279@mail.gmail.com>

On 6/30/06, Jeroen Ruigrok van der Werven <ashemedai at gmail.com> wrote:
> > Forget about Visual Studio 8 and .NET 2.0. It won't help here.
>
> I only have .NET 1.1 and 2.0 and Visual Studio 2005 (8) installed. Why
> should I forget about it? Is Python compiled with much older compilers
> and thus unable to work together in a nice way or?

The standard Python binary uses the MSVC 7.1 CRT (msvcr71.dll). Visual
Studio 2005 will not compile code which uses that CRT, so Python
extensions built with that compiler are not compatible with Python
built to use msvcr71.dll.

The only compilers supported for building extensions compatible with
the standard Python binary are gcc (mingw) and VS 2003 (MSVC 7.1)
(including the free MS Toolkit compiler 2003 if you have it, but sadly
MS have withdrawn it from distribution) - precisely because they have
options to link with msvcr71.dll.
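
For example, a minimal distutils script (module and file names are invented
here) can then be pointed at mingw explicitly, so that the resulting extension
links against msvcr71.dll just like the standard binary:

    # setup.py -- bare-bones build script for a C extension module
    from distutils.core import setup, Extension

    setup(
        name='example',
        version='0.1',
        ext_modules=[Extension('example', sources=['example.c'])],
    )

Building it with "python setup.py build --compiler=mingw32" makes distutils
drive gcc instead of whatever MSVC happens to be installed.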

Paul.

From tim.peters at gmail.com  Fri Jun 30 12:01:17 2006
From: tim.peters at gmail.com (Tim Peters)
Date: Fri, 30 Jun 2006 06:01:17 -0400
Subject: [Python-Dev] 2.5 and beyond
In-Reply-To: <Pine.LNX.4.58.0606300428100.17937@server1.LFW.org>
References: <ee2a432c0606300005g256b3391na6e684123d3e9e93@mail.gmail.com>
	<Pine.LNX.4.58.0606300428100.17937@server1.LFW.org>
Message-ID: <1f7befae0606300301u160f5193wb2d5dc2bb0da3ff7@mail.gmail.com>

[Ka-Ping Yee, on
  http://www.python.org/dev/peps/pep-0356/
]
> Among them is this one:
>
>     Incorrect LOAD/STORE_GLOBAL generation
>     http://python.org/sf/1501934
>
> The question is, what behaviour is preferable for this code:
>
>     g = 1
>     def f():
>         g += 1
>
>     f()
>
> Should this raise an UnboundLocalError or should it increment g?
>
> (Or, in other words, should augmented assignment be considered
> a local binding like regular assignment, or not?)

Of course it should, since that's the way it _is_ treated in all
released Pythons, and there was no intent to change the semantics (let
alone a FutureWarning in 2.4 to alert users that the meaning was going
to change in 2.5).  The Reference Manual also makes no distinction
between assignment statements and augmented assignment statements in
any relevant respect.  The change in behavior here in 2.5 is plainly a
bug (although someone may want to argue "it should" be different in
P3K).
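
To spell out the intended (pre-2.5) semantics, here is a sketch that can be
pasted into a 2.4 interpreter (the function names are only for illustration):

    g = 1

    def broken():
        g += 1          # augmented assignment makes g local, so this raises

    def ok():
        global g        # rebinding the module-level name needs an explicit global
        g += 1

    ok()
    assert g == 2

    try:
        broken()
    except UnboundLocalError:
        pass            # the long-standing behaviour described above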

From mwh at python.net  Fri Jun 30 12:05:48 2006
From: mwh at python.net (Michael Hudson)
Date: Fri, 30 Jun 2006 11:05:48 +0100
Subject: [Python-Dev] 2.5 and beyond
In-Reply-To: <Pine.LNX.4.58.0606300428100.17937@server1.LFW.org> (Ka-Ping
	Yee's message of "Fri, 30 Jun 2006 04:36:38 -0500 (CDT)")
References: <ee2a432c0606300005g256b3391na6e684123d3e9e93@mail.gmail.com>
	<Pine.LNX.4.58.0606300428100.17937@server1.LFW.org>
Message-ID: <2mr7177ybn.fsf@starship.python.net>

Ka-Ping Yee <python-dev at zesty.ca> writes:

> On Fri, 30 Jun 2006, Neal Norwitz wrote:
>> The current list of serious bugs are in the PEP:
>>
>>   http://www.python.org/dev/peps/pep-0356/
>
> Among them is this one:
>
>     Incorrect LOAD/STORE_GLOBAL generation
>     http://python.org/sf/1501934
>
> The question is, what behaviour is preferable for this code:
>
>     g = 1
>     def f():
>         g += 1
>
>     f()
>
> Should this raise an UnboundLocalError or should it increment g?

I didn't think there was any question: this change in behaviour from
2.4 is just an accidental change, a bug that should be fixed.  If you
want to elevate it to feature status, I think we've missed the freeze
:) (and also, I oppose the idea).

Cheers,
mwh

-- 
  If I didn't have my part-time performance art income to help 
  pay the bills, I could never afford to support my programming 
  lifestyle.                                -- Jeff Bauer, 21 Apr 2000

From skip at pobox.com  Fri Jun 30 13:33:03 2006
From: skip at pobox.com (skip at pobox.com)
Date: Fri, 30 Jun 2006 06:33:03 -0500
Subject: [Python-Dev] 2.5 and beyond
In-Reply-To: <Pine.LNX.4.58.0606300428100.17937@server1.LFW.org>
References: <ee2a432c0606300005g256b3391na6e684123d3e9e93@mail.gmail.com>
	<Pine.LNX.4.58.0606300428100.17937@server1.LFW.org>
Message-ID: <17573.3055.448488.573754@montanaro.dyndns.org>


    Ping> The question is, what behaviour is preferable for this code:

    Ping>     g = 1
    Ping>     def f():
    Ping>         g += 1

    Ping>     f()

If you treat "g += 1" as "g = g + 1" then it should create a local variable
with a value of 2.  There being no global statement in f() it must not
modify the global variable.

Skip

From exarkun at divmod.com  Fri Jun 30 14:49:11 2006
From: exarkun at divmod.com (Jean-Paul Calderone)
Date: Fri, 30 Jun 2006 08:49:11 -0400
Subject: [Python-Dev] 2.5 and beyond
In-Reply-To: <ee2a432c0606300005g256b3391na6e684123d3e9e93@mail.gmail.com>
Message-ID: <20060630124911.29014.360583188.divmod.quotient.16329@ohm>

On Fri, 30 Jun 2006 00:05:10 -0700, Neal Norwitz <nnorwitz at gmail.com> wrote:
>I'm glad to see Anthony ratcheting down.  At this point, we need to be
>fixing bugs and improving doc.  Maybe Anthony and I should have a
>contest to see who can revert the most changes. :-)
>
>There are at least 6 bugs that really, really need to be fixed before
>release.  Several of these are AST bugs.  Jeremy knows about them and
>plans to fix them once he's back from vacation.  Anyone else wanna
>help out?  One is for a socket problem and another is for doc.  The
>current list of serious bugs are in the PEP:
>
>  http://www.python.org/dev/peps/pep-0356/
>

Please add #1494314 to the list.

http://sourceforge.net/tracker/index.php?func=detail&aid=1494314&group_id=5470&atid=105470

Jean-Paul

From fwierzbicki at gmail.com  Fri Jun 30 16:05:14 2006
From: fwierzbicki at gmail.com (Frank Wierzbicki)
Date: Fri, 30 Jun 2006 10:05:14 -0400
Subject: [Python-Dev] Cleanup of test harness for Python
Message-ID: <4dab5f760606300705l41c208c8tfb83f09f74badf2e@mail.gmail.com>

Hello all,

According to the thread that includes
http://mail.python.org/pipermail/python-dev/2006-June/065727.html
there will be some effort in 2.6 to make the tests in Python more
consistent.  I would like to help with that effort, partly to sneak in
some checks for CPython internal tests that should be excluded from
Jython, but mainly to understand the future implementation of Python
for which the tests provide the only real spec.  Which of the current
tests is closest to an "ideal" test, so I can use it as a model?

Thanks,

-Frank Wierzbicki

From foom at fuhm.net  Fri Jun 30 16:54:09 2006
From: foom at fuhm.net (James Y Knight)
Date: Fri, 30 Jun 2006 10:54:09 -0400
Subject: [Python-Dev] 2.5 and beyond
In-Reply-To: <ee2a432c0606300005g256b3391na6e684123d3e9e93@mail.gmail.com>
References: <ee2a432c0606300005g256b3391na6e684123d3e9e93@mail.gmail.com>
Message-ID: <86FF939A-5D9F-42C0-B633-95837FD7C991@fuhm.net>

On Jun 30, 2006, at 3:05 AM, Neal Norwitz wrote:
> If there are any bugs you think should be considered show stoppers,
> mail them to the list and I will update the PEP.

I just submitted http://python.org/sf/1515169 for the ImportWarning  
issue previously discussed here. IMO it's important.

James

From martin at v.loewis.de  Fri Jun 30 17:52:37 2006
From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=)
Date: Fri, 30 Jun 2006 17:52:37 +0200
Subject: [Python-Dev] msvccompiler.py: some remarks
In-Reply-To: <3e1553560606292229h4229026dwf93811878b9f0e8c@mail.gmail.com>
References: <3e1553560606290652k40a4fe8k8bc8d7bb9825e9f@mail.gmail.com>	
	<44A42E6B.8040400@v.loewis.de>
	<3e1553560606292229h4229026dwf93811878b9f0e8c@mail.gmail.com>
Message-ID: <44A548C5.7010602@v.loewis.de>

Jeroen Ruigrok van der Werven wrote:
> On 6/29/06, "Martin v. Löwis" <martin at v.loewis.de> wrote:
>> We should remove/change this comment. It is utterly misleading.
> 
> To a warning/error stating that you miss a compiler?

Correct: that you are missing VS 2003, or should use mingw instead.

>> Forget about Visual Studio 8 and .NET 2.0. It won't help here.
> 
> I only have .NET 1.1 and 2.0 and Visual Studio 2005 (8) installed. Why
> should I forget about it? Is Python compiled with much older compilers
> and thus unable to work together in a nice way or?

"Much" is a relative thing, but yes. Python 2.3 and before is compiled
with VC6, Python 2.4 and 2.5 are compiled with VS 2003. You cannot
compile extensions with a different compiler version because the
CRT versions will clash (msvcrt4 vs. msvcr71 vs. msvcr8)

Google for details, this has been discussed both technically and
politically many times before.

Regards,
Martin

From jcarlson at uci.edu  Fri Jun 30 18:46:55 2006
From: jcarlson at uci.edu (Josiah Carlson)
Date: Fri, 30 Jun 2006 09:46:55 -0700
Subject: [Python-Dev] sys.settrace() in Python 2.3 vs. 2.4
Message-ID: <20060630094140.10C8.JCARLSON@uci.edu>


I've previously asked on python-list, but have received no responses or
explanations.  Maybe someone here with a better memory can help, and I
apologize for asking a somewhat off-topic question about such an archaic
version of Python.

According to my reading of Python 2.3 docs, the call to goo() should
exit with a KeyboardInterrupt...

    import sys

    def goo():
        while 1:
            pass

    count = [100]
    def foo(frame, event, arg):
        count[0] -= 1
        if not count[0]:
            raise KeyboardInterrupt
        return foo

    sys.settrace(foo)

    goo()

In Python 2.3, the above call to goo() doesn't return.  Adding print
statements suggests that foo() is called only once for each line executed
in goo().  It exits with a KeyboardInterrupt in 2.4, as expected,
where foo() is called for essentially every operation performed.  Does
anyone have an idea why this is the case? I've checked the release notes
for both 2.3 and 2.4 and found no discussion of trace functions in them
or in sourceforge (I could be entering the wrong search terms, of course).

Any pointers as to why there is a difference would be appreciated. Thank
you,
 - Josiah 


From ark at acm.org  Fri Jun 30 18:30:28 2006
From: ark at acm.org (Andrew Koenig)
Date: Fri, 30 Jun 2006 12:30:28 -0400
Subject: [Python-Dev] 2.5 and beyond
In-Reply-To: <Pine.LNX.4.58.0606300428100.17937@server1.LFW.org>
Message-ID: <000d01c69c62$7fac6390$6402a8c0@arkdesktop>

> The question is, what behaviour is preferable for this code:
> 
>     g = 1
>     def f():
>         g += 1
> 
>     f()
> 
> Should this raise an UnboundLocalError or should it increment g?

I think it should increment (i.e. rebind) g, for the same reason that

	g = [1]
	def f():
		g[0] += 1
	f()

rebinds g[0].




From ark at acm.org  Fri Jun 30 18:52:53 2006
From: ark at acm.org (Andrew Koenig)
Date: Fri, 30 Jun 2006 12:52:53 -0400
Subject: [Python-Dev] 2.5 and beyond
In-Reply-To: <000d01c69c62$7fac6390$6402a8c0@arkdesktop>
Message-ID: <001201c69c65$b3869750$6402a8c0@arkdesktop>

> I think it should increment (i.e. rebind) g, for the same reason that
> 
> 	g = [1]
> 	def f():
> 		g[0] += 1
> 	f()
> 
> rebinds g[0].

I saw messages out of sequence and did not realize that this would be a
change in behavior from 2.4.  Sigh.

I hope Py3000 has lexical scoping a la Scheme...




From brett at python.org  Fri Jun 30 18:56:40 2006
From: brett at python.org (Brett Cannon)
Date: Fri, 30 Jun 2006 09:56:40 -0700
Subject: [Python-Dev] Cleanup of test harness for Python
In-Reply-To: <4dab5f760606300705l41c208c8tfb83f09f74badf2e@mail.gmail.com>
References: <4dab5f760606300705l41c208c8tfb83f09f74badf2e@mail.gmail.com>
Message-ID: <bbaeab100606300956u235e1304y7cef2e4106096d5c@mail.gmail.com>

On 6/30/06, Frank Wierzbicki <fwierzbicki at gmail.com> wrote:
>
> Hello all,
>
> According to the thread that includes
> http://mail.python.org/pipermail/python-dev/2006-June/065727.html
> there will be some effort in 2.6 to make the tests in Python more
> consistent.  I would like to help with that effort, partly to sneak in
> some checks for CPython internal tests that should be excluded from
> Jython, but mainly to understand the future implementation of Python
> for which the tests provide the only real spec.  Which of the current
> tests is closest to an "ideal" test, so I can use it as a model?


We don't have any labeled as "ideal".  Either doctests or unittest tests are
considered good form these days.  Probably looking at the newer tests would
be a good start.
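
For what it's worth, here is a minimal sketch of the shape most of the newer
unittest-based tests take (the class and test names are made up):

    import unittest
    from test import test_support    # the Lib/test helper module in 2.x

    class ExampleTest(unittest.TestCase):

        def test_addition(self):
            self.assertEqual(1 + 1, 2)

    def test_main():
        # regrtest looks for test_main(); run_unittest handles verbosity etc.
        test_support.run_unittest(ExampleTest)

    if __name__ == '__main__':
        test_main()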

-Brett
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.python.org/pipermail/python-dev/attachments/20060630/ba2e1a7d/attachment.htm 

From t-bruch at microsoft.com  Fri Jun 30 19:16:54 2006
From: t-bruch at microsoft.com (Bruce Christensen)
Date: Fri, 30 Jun 2006 10:16:54 -0700
Subject: [Python-Dev] Pickle implementation questions
In-Reply-To: <44A4B1FF.7030709@v.loewis.de>
References: <3581AA168D87A2479D88EA319BDF7D32BC21A4@RED-MSG-80.redmond.corp.microsoft.com>
	<44A4B1FF.7030709@v.loewis.de>
Message-ID: <3581AA168D87A2479D88EA319BDF7D32BC24C0@RED-MSG-80.redmond.corp.microsoft.com>

Thanks for your responses, Martin!

Martin v. Löwis wrote:
> Bruce Christensen wrote:
> >  - Where are object.__reduce__ and object.__reduce_ex__ defined, and how
> > does copy_reg._reduce_ex fit into the picture? 
> 
> See
> 
> http://docs.python.org/lib/node69.html

So just to be clear, is it something like this?

class object:
    def __reduce__(self):
        return copy_reg._reduce_ex(self, -1)

    def __reduce_ex__(self, protocol):
        return copy_reg._reduce_ex(self, protocol)

Does _reduce_ex's behavior actually change depending on the specified protocol version? The only difference that I can see or think of is that an assert causes it to fail if the protocol is >= 2.

> >  - What does copy_reg.constructor() do?
> 
> It does this:
> 
> def constructor(object):
>     if not callable(object):
>         raise TypeError("constructors must be callable")

So it is part of the public interface? It's exported in __all__, but it appears
that it's undocumented.

Thanks,

--Bruce

From t-bruch at microsoft.com  Fri Jun 30 19:22:34 2006
From: t-bruch at microsoft.com (Bruce Christensen)
Date: Fri, 30 Jun 2006 10:22:34 -0700
Subject: [Python-Dev] Pickle implementation questions
In-Reply-To: <e82i3u$alm$1@sea.gmane.org>
References: <3581AA168D87A2479D88EA319BDF7D32BC21A4@RED-MSG-80.redmond.corp.microsoft.com><44A4B1FF.7030709@v.loewis.de>
	<e82i3u$alm$1@sea.gmane.org>
Message-ID: <3581AA168D87A2479D88EA319BDF7D32BC24D8@RED-MSG-80.redmond.corp.microsoft.com>

Fredrik Lundh wrote:
> on the other hand, it would be nice if someone actually used Bruce's
questions
> and the clarifications to update the documentation; the ideas behind
the
> internal pickle interfaces aren't exactly obvious, even if you have
the
> source.

I've found a few other places where the docs are misleading at best, and
nonexistent or simply wrong at worst. I've been able to figure out most
things
by reverse-engineering pickle's behavior, but that's often a slow
process.

If anyone is interested, I'd be happy to compile a list of places the
docs could
be improved.

--Bruce

From nnorwitz at gmail.com  Fri Jun 30 19:25:17 2006
From: nnorwitz at gmail.com (Neal Norwitz)
Date: Fri, 30 Jun 2006 10:25:17 -0700
Subject: [Python-Dev] Pickle implementation questions
In-Reply-To: <3581AA168D87A2479D88EA319BDF7D32BC24D8@RED-MSG-80.redmond.corp.microsoft.com>
References: <3581AA168D87A2479D88EA319BDF7D32BC21A4@RED-MSG-80.redmond.corp.microsoft.com>
	<44A4B1FF.7030709@v.loewis.de> <e82i3u$alm$1@sea.gmane.org>
	<3581AA168D87A2479D88EA319BDF7D32BC24D8@RED-MSG-80.redmond.corp.microsoft.com>
Message-ID: <ee2a432c0606301025s79ac853bgaecaa81af338bcfd@mail.gmail.com>

Please do help us improve the docs.  Patches are the best (most likely
to be applied the fastest), bug reports are welcome too.  Especially
when they contain your preferred wording in the text.

n
--

On 6/30/06, Bruce Christensen <t-bruch at microsoft.com> wrote:
> Fredrik Lundh wrote:
> > on the other hand, it would be nice if someone actually used Bruce's
> questions
> > and the clarifications to update the documentation; the ideas behind
> the
> > internal pickle interfaces aren't exactly obvious, even if you have
> the
> > source.
>
> I've found a few other places where the docs are misleading at best, and
> nonexistent or simply wrong at worst. I've been able to figure out most
> things
> by reverse-engineering pickle's behavior, but that's often a slow
> process.
>
> If anyone is interested, I'd be happy to compile a list of places the
> docs could
> be improved.
>
> --Bruce
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/nnorwitz%40gmail.com
>

From arigo at tunes.org  Fri Jun 30 19:52:05 2006
From: arigo at tunes.org (Armin Rigo)
Date: Fri, 30 Jun 2006 19:52:05 +0200
Subject: [Python-Dev] For sandboxing: alternative to crippling file()
In-Reply-To: <bbaeab100606291148o1e03d44ardbb0d3358aae7513@mail.gmail.com>
References: <bbaeab100606291148o1e03d44ardbb0d3358aae7513@mail.gmail.com>
Message-ID: <20060630175205.GA17748@code0.codespeak.net>

Hi Brett,

On Thu, Jun 29, 2006 at 11:48:36AM -0700, Brett Cannon wrote:
> 1) Is removing 'file' from the builtins dict in PyInterpreterState (and
> maybe some other things) going to be safe enough to sufficiently hide 'file'
> confidently (short of someone being stupid in their C extension module and
> exposing 'file' directly)?

No.

    >>> object.__subclasses__()
    [..., <type 'file'>]

Maybe this one won't work if __subclasses__ is forbidden, but in general
I think there *will* be a way to find this object.
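
A sketch of the kind of rediscovery meant here (the path is purely for
illustration); it works even after the 'file' builtin has been removed:

    # walk object's direct subclasses and pull the file type back out
    file_type = [cls for cls in object.__subclasses__()
                 if cls.__name__ == 'file'][0]

    f = file_type('/etc/passwd')    # hypothetical target, just to show the hole
    data = f.read()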


A bientot,

Armin

From brett at python.org  Fri Jun 30 20:09:58 2006
From: brett at python.org (Brett Cannon)
Date: Fri, 30 Jun 2006 11:09:58 -0700
Subject: [Python-Dev] For sandboxing: alternative to crippling file()
In-Reply-To: <20060630175205.GA17748@code0.codespeak.net>
References: <bbaeab100606291148o1e03d44ardbb0d3358aae7513@mail.gmail.com>
	<20060630175205.GA17748@code0.codespeak.net>
Message-ID: <bbaeab100606301109v558005bdrab4d5b39c18c7654@mail.gmail.com>

On 6/30/06, Armin Rigo <arigo at tunes.org> wrote:
>
> Hi Brett,
>
> On Thu, Jun 29, 2006 at 11:48:36AM -0700, Brett Cannon wrote:
> > 1) Is removing 'file' from the builtins dict in PyInterpreterState (and
> > maybe some other things) going to be safe enough to sufficiently hide
> 'file'
> > confidently (short of someone being stupid in their C extension module
> and
> > exposing 'file' directly)?
>
> No.
>
>     >>> object.__subclasses__()
>     [..., <type 'file'>]
>
> Maybe this one won't work if __subclasses__ is forbidden, but in general
> I think there *will* be a way to find this object.



Yeah, that's been my (what I thought was paranoid) feeling.  Glad I am not
the only one who thinks that hiding file() is near impossible.

-Brett
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.python.org/pipermail/python-dev/attachments/20060630/79a5d0be/attachment.htm 

From brett at python.org  Fri Jun 30 20:19:18 2006
From: brett at python.org (Brett Cannon)
Date: Fri, 30 Jun 2006 11:19:18 -0700
Subject: [Python-Dev] how long to wait for expat to incorporate a fix to
	prevent a crasher?
Message-ID: <bbaeab100606301119jebac400vc10b96d8ac1d3029@mail.gmail.com>

Lib/test/crashers/xml_parsers.py is a crasher that involves expat (bug
report at http://python.org/sf/1296433).  What is at issue here is that
there is a 'for' loop in expat where the status of the parser is not
checked.  Because of this, the loop continues on its merry way, which is a
problem because pyexpat sets all handlers to 0 upon error and the 'for' loop
executes a handler.  =)  We all know what happens if you try to execute
memory location 0x0.

Anyway, the fault is not on our end since expat should be checking the
status of the parser before going around the loop again instead of blindly
assuming that everything is fine after a characterDataHandler() call
(especially since there is no error return code and there is a parser status
flag for this exact reason).  I have filed a bug report at
http://sourceforge.net/support/tracker.php?aid=1515266 and attached a
possible patch.

The question is how long do we wait for the expat developers to patch and do
a micro release?  Do we just leave this possible crasher in and just rely
entirely on the expat developers, or do we patch our copy and use that until
they get around to doing their next version push?

-Brett
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.python.org/pipermail/python-dev/attachments/20060630/37261cfc/attachment.html 

From alexander.belopolsky at gmail.com  Fri Jun 30 20:21:22 2006
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Fri, 30 Jun 2006 18:21:22 +0000 (UTC)
Subject: [Python-Dev] =?utf-8?q?Proposal_to_eliminate_PySet=5FFini?=
References: <129CEF95A523704B9D46959C922A280002FE982C@nemesis.central.ccp.cc>
Message-ID: <loom.20060630T201221-247@post.gmane.org>

Kristján V. Jónsson <kristjan <at> ccpgames.com> writes:

> Can this not be resolved by carefully adjusting the order of finalization?

Absolutely.  This is exactly what I did in my "interned" patch and this
is what prompted my proposal.

> If code can be bootstrapped it can be strootbapped.

Agreed.  However, code that does not need bootstrapping is often
simpler and less fragile for the same reason.

> As a side note, is there a finalization order list for imported modules?

I did not know that imported modules could be finalized.  If they can be,
I would guess the proper order would be the reverse of the order of
initialization.





From tim.peters at gmail.com  Fri Jun 30 20:29:43 2006
From: tim.peters at gmail.com (Tim Peters)
Date: Fri, 30 Jun 2006 14:29:43 -0400
Subject: [Python-Dev] Pickle implementation questions
In-Reply-To: <3581AA168D87A2479D88EA319BDF7D32BC24C0@RED-MSG-80.redmond.corp.microsoft.com>
References: <3581AA168D87A2479D88EA319BDF7D32BC21A4@RED-MSG-80.redmond.corp.microsoft.com>
	<44A4B1FF.7030709@v.loewis.de>
	<3581AA168D87A2479D88EA319BDF7D32BC24C0@RED-MSG-80.redmond.corp.microsoft.com>
Message-ID: <1f7befae0606301129o3985969blafaa44e3aa53e978@mail.gmail.com>

[Bruce Christensen]
> So just to be clear, is it something like this?

I hope you've read PEP 307:

    http://www.python.org/dev/peps/pep-0307/

That's where __reduce_ex__ was introduced (along with all the rest of
pickle protocol 2).

> class object:
>     def __reduce__(self):
>         return copy_reg._reduce_ex(self, -1)
>
>     def __reduce_ex__(self, protocol):
>         return copy_reg._reduce_ex(self, protocol)

The implementation is more like:

class object:
    def __common_reduce__(self, proto=0):

        if self.__class__.__reduce__ is not object.__reduce__:
            # The class overrode __reduce__, so call the override.
            # From PEP 307:
            #     The 'object' class implements both __reduce__ and
            #     __reduce_ex__;  however, if a subclass overrides __reduce__
            #     but not __reduce_ex__,  the __reduce_ex__ implementation
            #     detects this and calls   __reduce__.
            return self.__reduce__()

        elif proto < 2:
            return copy_reg._reduce_ex(self, proto)

        else:
            # about 130 lines of C code exploiting proto 2

    __reduce__ = __reduce_ex__ = __common_reduce__

> Does _reduce_ex's behavior actually change depending on the specified protocol
> version? The only difference that I can see or think of is that an assert causes it to
> fail if the protocol is >= 2.

That's right.  As above, the object reduce methods never call
copy_reg._reduce_ex() when proto >= 2.

Note that __reduce_ex__ doesn't exist for the _benefit_ of object:  it
was introduced in protocol 2 for the benefit of user classes that want
to exploit protocol-specific pickle opcodes in their own __reduce__
methods.  They couldn't do that using the old-time __reduce__ because
__reduce__ wasn't passed the protocol version.
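
A small sketch of that dispatch seen from the user side (the Point class is
invented): overriding __reduce__ alone is enough, because the default
__reduce_ex__ notices the override and calls it whatever protocol was asked
for:

    import pickle

    class Point(object):
        def __init__(self, x, y):
            self.x, self.y = x, y

        def __reduce__(self):
            # old-style reduce: no protocol argument is available here
            return (Point, (self.x, self.y))

    p = pickle.loads(pickle.dumps(Point(1, 2), 2))   # request protocol 2
    assert (p.x, p.y) == (1, 2)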

copy_reg._reduce_ex exists only because Guido got fatally weary of
writing mountains of C code, so left what "should be" a rarely-taken
path coded in Python.

>>>  - What does copy_reg.constructor() do?

>> It does this:
>>
>> def constructor(object):
>>     if not callable(object):
>>         raise TypeError("constructors must be callable")

> So it is part of the public interface? It's exported in __all__, but it appears
> that it's undocumented.

It's documented in the Library Reference Manual, in the `copy_reg` docs:

"""
constructor(object)

Declares object to be a valid constructor. If object is not callable
(and hence not valid as a constructor), raises TypeError.
"""

Unfortunately, while all the "safe for unpickling?" gimmicks (of which
this is one -- see PEP 307 again) were abandoned in Python 2.3, the
docs and code comments still haven't entirely caught up.
copy_reg.constructor() exists now only for backward compatibility (old
code may still call it, but it no longer has any real use).

From g.brandl at gmx.net  Fri Jun 30 20:39:13 2006
From: g.brandl at gmx.net (Georg Brandl)
Date: Fri, 30 Jun 2006 20:39:13 +0200
Subject: [Python-Dev] LOAD_CONST POP_TOP
Message-ID: <e83r4i$t31$1@sea.gmane.org>

Hi,

the following patch tries to fix the LOAD_CONST POP_TOP optimization
lost in 2.5 (bug #1333982).

An example for this is:

def f():
  'a' # docstring
  'b'

Georg

PS: Hmm. While looking, I see that 2.4 doesn't optimize away other constants like

def g():
  1


Index: Python/compile.c
===================================================================
--- Python/compile.c    (Revision 47150)
+++ Python/compile.c    (Arbeitskopie)
@@ -775,10 +775,16 @@
                                 }
                                 break;

+                       case LOAD_CONST:
+                               cumlc = lastlc + 1;
+                               /* Skip over LOAD_CONST POP_TOP */
+                               if (codestr[i+3] == POP_TOP) {
+                                       memset(codestr+i, NOP, 4);
+                                       cumlc = 0;
+                                       break;
+                               }
                                 /* Skip over LOAD_CONST trueconst
                                     JUMP_IF_FALSE xx  POP_TOP */
-                       case LOAD_CONST:
-                               cumlc = lastlc + 1;
                                 j = GETARG(codestr, i);
                                 if (codestr[i+3] != JUMP_IF_FALSE  ||
                                     codestr[i+6] != POP_TOP  ||
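
An easy way to check whether the peephole change takes effect (the exact
disassembly naturally differs between builds, so this is only a sketch):

    import dis

    def f():
        'a'   # docstring, stored as f.__doc__, never touched by the optimizer
        'b'   # bare constant statement; its LOAD_CONST/POP_TOP pair is the target

    dis.dis(f)   # with the patch applied, no POP_TOP should be left for 'b'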


From fdrake at acm.org  Fri Jun 30 20:40:17 2006
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Fri, 30 Jun 2006 14:40:17 -0400
Subject: [Python-Dev] how long to wait for expat to incorporate a fix to
	prevent a crasher?
In-Reply-To: <bbaeab100606301119jebac400vc10b96d8ac1d3029@mail.gmail.com>
References: <bbaeab100606301119jebac400vc10b96d8ac1d3029@mail.gmail.com>
Message-ID: <200606301440.17859.fdrake@acm.org>

On Friday 30 June 2006 14:19, Brett Cannon wrote:
 > The question is how long do we wait for the expat developers to patch and
 > do a micro release?  Do we just leave this possible crasher in and just
 > rely entirely on the expat developers, or do we patch our copy and use
 > that until they get around to doing their next version push?

Sigh.  Too much to do all around.

I'll try to take a look at this over the weekend.


  -Fred

-- 
Fred L. Drake, Jr.   <fdrake at acm.org>

From nmm1 at cus.cam.ac.uk  Fri Jun 30 20:44:22 2006
From: nmm1 at cus.cam.ac.uk (Nick Maclaren)
Date: Fri, 30 Jun 2006 19:44:22 +0100
Subject: [Python-Dev] Python memory model (low level)
Message-ID: <E1FwNyY-0006z0-S0@virgo.cus.cam.ac.uk>

I have been thinking about software floating point, and there are
some aspects of Python and decimal that puzzle me.  Basically, they
are things that are wanted for this sort of thing and seem to be
done in very contorted ways, so I may have missed something.

Firstly, can Python C code assume no COMPACTING garbage collector,
or should it allow for things shifting under its feet?

Secondly, is there any documentation on the constraints and necessary
ritual when allocating chunks of raw data and/or types of variable
size?  Decimal avoids the latter.

Thirdly, I can't find an efficient way for object-mangling code to
access class data and/or have some raw data attached to a class (as
distinct from an instance).

Fourthly, can I assume that no instance of a class will remain active
AFTER the class disappears?  This would mean that it could use a
pointer to class-level raw data.

I can explain why all of those are the 'right' way to approach the
problem, at an abstract level, but it is quite possible that Python
does not support the abstract model of class implementation that I
am thinking of.


Regards,
Nick Maclaren,
University of Cambridge Computing Service,
New Museums Site, Pembroke Street, Cambridge CB2 3QH, England.
Email:  nmm1 at cam.ac.uk
Tel.:  +44 1223 334761    Fax:  +44 1223 334679

From aahz at pythoncraft.com  Fri Jun 30 20:58:28 2006
From: aahz at pythoncraft.com (Aahz)
Date: Fri, 30 Jun 2006 11:58:28 -0700
Subject: [Python-Dev] Python memory model (low level)
In-Reply-To: <E1FwNyY-0006z0-S0@virgo.cus.cam.ac.uk>
References: <E1FwNyY-0006z0-S0@virgo.cus.cam.ac.uk>
Message-ID: <20060630185828.GA17859@panix.com>

On Fri, Jun 30, 2006, Nick Maclaren wrote:
>
> I have been thinking about software floating point, and there are
> some aspects of Python and decimal that puzzle me.  Basically, they
> are things that are wanted for this sort of thing and seem to be
> done in very contorted ways, so I may have missed something.
> 
> Firstly, can Python C code assume no COMPACTING garbage collector,
> or should it allow for things shifting under its feet?
> 
> Secondly, is there any documentation on the constraints and necessary
> ritual when allocating chunks of raw data and/or types of variable
> size?  Decimal avoids the latter.

Without answering your specific questions, keep in mind that Python and
Python-C code are very different things.  The current Decimal
implementation was designed to be *readable* and efficient *Python* code.
For a look at what the Python-C implementation of Decimal might look
closer to, take a look at the Python long implementation.
-- 
Aahz (aahz at pythoncraft.com)           <*>         http://www.pythoncraft.com/

"I saw `cout' being shifted "Hello world" times to the left and stopped
right there."  --Steve Gonedes

From nmm1 at cus.cam.ac.uk  Fri Jun 30 21:13:07 2006
From: nmm1 at cus.cam.ac.uk (Nick Maclaren)
Date: Fri, 30 Jun 2006 20:13:07 +0100
Subject: [Python-Dev] Python memory model (low level)
In-Reply-To: Your message of "Fri, 30 Jun 2006 11:58:28 PDT."
	<20060630185828.GA17859@panix.com> 
Message-ID: <E1FwOQN-0007CY-DV@virgo.cus.cam.ac.uk>

Aahz <aahz at pythoncraft.com> wrote:
> 
> Without answering your specific questions, keep in mind that Python and
> Python-C code are very different things.  The current Decimal
> implementation was designed to be *readable* and efficient *Python* code.
> For a look at what the Python-C implementation of Decimal might look
> closer to, take a look at the Python long implementation.

Er, perhaps I should have said explicitly that I was looking at the
Decimal-in-C code and not the Python.  Most of my questions don't
make any sense at the Python level.

But you have a good point.  The long code will be both simpler and
have had a LOT more work done on it - but it will address only the
variable-size-object issue, as it doesn't need class-level data
in the same way as Decimal and I do.


Regards,
Nick Maclaren,
University of Cambridge Computing Service,
New Museums Site, Pembroke Street, Cambridge CB2 3QH, England.
Email:  nmm1 at cam.ac.uk
Tel.:  +44 1223 334761    Fax:  +44 1223 334679

From martin at v.loewis.de  Fri Jun 30 21:14:06 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Fri, 30 Jun 2006 21:14:06 +0200
Subject: [Python-Dev] 2.5 and beyond
In-Reply-To: <17573.3055.448488.573754@montanaro.dyndns.org>
References: <ee2a432c0606300005g256b3391na6e684123d3e9e93@mail.gmail.com>	<Pine.LNX.4.58.0606300428100.17937@server1.LFW.org>
	<17573.3055.448488.573754@montanaro.dyndns.org>
Message-ID: <44A577FE.20701@v.loewis.de>

skip at pobox.com wrote:
>     Ping> The question is, what behaviour is preferable for this code:
> 
>     Ping>     g = 1
>     Ping>     def f():
>     Ping>         g += 1
> 
>     Ping>     f()
> 
> If you treat "g += 1" as "g = g + 1" then it should create a local variable
> with a value of 2.

py> g = 1
py> def f():
...   g = g + 1
...
py> f()
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "<stdin>", line 2, in f
UnboundLocalError: local variable 'g' referenced before assignment

Regards,
Martin

From tim.peters at gmail.com  Fri Jun 30 21:14:24 2006
From: tim.peters at gmail.com (Tim Peters)
Date: Fri, 30 Jun 2006 15:14:24 -0400
Subject: [Python-Dev] Python memory model (low level)
In-Reply-To: <E1FwNyY-0006z0-S0@virgo.cus.cam.ac.uk>
References: <E1FwNyY-0006z0-S0@virgo.cus.cam.ac.uk>
Message-ID: <1f7befae0606301214o7a88087cnc5feeb8e25ec3dd8@mail.gmail.com>

[Nick Maclaren]
> I have been thinking about software floating point, and there are
> some aspects of Python and decimal that puzzle me.  Basically, they
> are things that are wanted for this sort of thing and seem to be
> done in very contorted ways, so I may have missed something.
>
> Firstly, can Python C code assume no COMPACTING garbage collector,
> or should it allow for things shifting under its feet?

CPython never relocates objects.  The address of a Python object is
fixed from birth to death.
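
A tiny illustration of that guarantee (CPython-specific, since id() happens
to be the object's address there):

    x = []
    addr = id(x)             # address at birth
    x.extend(range(100000))  # the list's item buffer may well be reallocated...
    assert id(x) == addr     # ...but the object header itself never moves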

> Secondly, is there any documentation on the constraints and necessary
> ritual when allocating chunks of raw data and/or types of variable
> size?  Decimal avoids the latter.

Note that because CPython never relocates objects, it's impossible to
allocate "in one gulp" an object of a mutable type with a dynamically
size-varying member.  So, e.g., list objects have a fixed-size object
header (which never moves), while the list "guts" are separately
allocated; OTOH, tuples are immutable, so can (& do) allocate all the
space they need in one gulp.

See the Python/C API Reference Manual, esp. "Allocating Objects on the
Heap", "Supporting Cyclic Garbage Collection", and "Memory
Management".

> Thirdly, I can't find an efficient way for object-mangling code to
> access class data and/or have some raw data attached to a class (as
> distinct from an instance).

Don't know what "raw data" might mean here.  Any Python object can be
bound to any attribute of a class.  In Python, e.g.,

class MyClass:

     mydata = ['xyz', 12]

     def method(self):
         MyClass.mydata.append(-1)
         # or, more inheritance-friendly
         self.__class__.mydata.append(-1)

This is substantially more long-winded in C.

> Fourthly, can I assume that no instance of a class will remain active
> AFTER the class disappears?  This would mean that it could use a
> pointer to class-level raw data.

It would be an error to free the memory for a class if the class is
reachable.  So long as an instance I of a class C is reachable, C is
also reachable (at least via I.__class__) so C _must_ not disappear.
C disappears only if it becomes unreachable, and one of the garbage
collection mechanisms then gets around to freeing its memory.  All gc
mechanisms in CPython rely on accurate reference counts.

> I can explain why all of those are the 'right' way to approach the
> problem, at an abstract level, but it is quite possible that Python
> does not support the abstract model of class implementation that I
> am thinking of.

Since we don't know what you're thinking of, you're on your own there ;-)

From martin at v.loewis.de  Fri Jun 30 21:19:48 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Fri, 30 Jun 2006 21:19:48 +0200
Subject: [Python-Dev] 2.5 and beyond
In-Reply-To: <86FF939A-5D9F-42C0-B633-95837FD7C991@fuhm.net>
References: <ee2a432c0606300005g256b3391na6e684123d3e9e93@mail.gmail.com>
	<86FF939A-5D9F-42C0-B633-95837FD7C991@fuhm.net>
Message-ID: <44A57954.7030905@v.loewis.de>

James Y Knight wrote:
> On Jun 30, 2006, at 3:05 AM, Neal Norwitz wrote:
>> If there are any bugs you think should be considered show stoppers,
>> mail them to the list and I will update the PEP.
> 
> I just submitted http://python.org/sf/1515169 for the ImportWarning  
> issue previously discussed here. IMO it's important.

At the moment (i.e. without an acceptable alternative implementation)
it's primarily a policy issue. There really isn't any bug here;
(to speak with Microsoft's words): This behavior is by design.

Only the release manager or the BDFL could revert the feature, and
Guido already stated that the warning stays until Python 3, and
probably even after that. I personally believe the only chance to
get this changed now is a well-designed alternative implementation
(although this is no promise that such an alternative would actually
be accepted).

Regards,
Martin

From t-bruch at microsoft.com  Fri Jun 30 21:20:45 2006
From: t-bruch at microsoft.com (Bruce Christensen)
Date: Fri, 30 Jun 2006 12:20:45 -0700
Subject: [Python-Dev] Pickle implementation questions
In-Reply-To: <1f7befae0606301129o3985969blafaa44e3aa53e978@mail.gmail.com>
References: <3581AA168D87A2479D88EA319BDF7D32BC21A4@RED-MSG-80.redmond.corp.microsoft.com>
	<44A4B1FF.7030709@v.loewis.de>
	<3581AA168D87A2479D88EA319BDF7D32BC24C0@RED-MSG-80.redmond.corp.microsoft.com>
	<1f7befae0606301129o3985969blafaa44e3aa53e978@mail.gmail.com>
Message-ID: <3581AA168D87A2479D88EA319BDF7D32BC26A6@RED-MSG-80.redmond.corp.microsoft.com>

Tim Peters wrote:

> I hope you've read PEP 307:

I have. Thanks to you and Guido for writing it! It's been a huge help.

> The implementation is more like:
[snip]

Thanks! That helps a lot. PEP 307 and the pickle module docs describe the end
result pretty well, but they don't always make it clear where things are
implemented. I'm trying to make sure that I'm getting the right interaction
between object.__reduce(_ex)__, pickle, and copy_reg.

One (hopefully) last question: is object.__reduce(_ex)__ really implemented in
object? The tracebacks below would indicate that pickle directly implements the
behavior that the specs say is implemented in object. However, that could be
because frames from C code don't show up in tracebacks. I'm not familiar enough
with CPython to know for sure.

>>> import copy_reg
>>> def bomb(*args, **kwargs):
...     raise Exception('KABOOM! %r %r' % (args, kwargs))
...
>>> copy_reg._reduce_ex = bomb
>>> import pickle
>>> pickle.dumps(object())
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "C:\Python24\lib\pickle.py", line 1386, in dumps
    Pickler(file, protocol, bin).dump(obj)
  File "C:\Python24\lib\pickle.py", line 231, in dump
    self.save(obj)
  File "C:\Python24\lib\pickle.py", line 313, in save
    rv = reduce(self.proto)
  File "<stdin>", line 2, in bomb
Exception: KABOOM! (<object object at 0x01E3C448>, 0) {}

>>> class NewObj(object):
...     def __reduce__(self):
...             raise Exception("reducing NewObj")
...
>>> import pickle
>>> pickle.dumps(NewObj())
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "C:\Python24\lib\pickle.py", line 1386, in dumps
    Pickler(file, protocol, bin).dump(obj)
  File "C:\Python24\lib\pickle.py", line 231, in dump
    self.save(obj)
  File "C:\Python24\lib\pickle.py", line 313, in save
    rv = reduce(self.proto)
  File "<stdin>", line 3, in __reduce__
Exception: reducing NewObj

> It's documented in the Library Reference Manual, in the `copy_reg` docs:

Oops. :)

Again, thanks for the help.

--Bruce

From nmm1 at cus.cam.ac.uk  Fri Jun 30 21:33:44 2006
From: nmm1 at cus.cam.ac.uk (Nick Maclaren)
Date: Fri, 30 Jun 2006 20:33:44 +0100
Subject: [Python-Dev] Python memory model (low level)
Message-ID: <E1FwOkK-0007OB-JK@virgo.cus.cam.ac.uk>

"Tim Peters" <tim.peters at gmail.com> wrote:

[ Many useful answers ]

Thanks very much!  That helps.  Here are a few points where we are at
cross-purposes.

I am talking about the C level.  What I am thinking of is the standard
method of implementing the complicated housekeeping of a class (e.g.
inheritance) in Python, and the basic operations in C (for efficiency).
The model that I would like to stick to is that the Python layer never
knows about the actual object implementation, and the C never knows
about the housekeeping.

The housekeeping would include the class derivation, which would (inter
alia) fix the size of a number.  The C code would need to allocate
some space to store various constants and workspace, shared between
all instances of the derived class.  This would be accessible from the
object it returns.

Each instance would be of a length specified by its derivation (i.e.
like Decimal), but would be constant for all members of the class
(i.e. not like long).  So it would be most similar to tuple in that
respect.

Operations like addition would copy the pointer to the class data
from the arguments, and ones like creation would need to be passed
the appropriate class and whatever input data they need.

I believe that, using the above approach, it would be possible to
achieve good efficiency with very little C - certainly, it has worked
in other languages.


Regards,
Nick Maclaren,
University of Cambridge Computing Service,
New Museums Site, Pembroke Street, Cambridge CB2 3QH, England.
Email:  nmm1 at cam.ac.uk
Tel.:  +44 1223 334761    Fax:  +44 1223 334679

From martin at v.loewis.de  Fri Jun 30 21:37:17 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Fri, 30 Jun 2006 21:37:17 +0200
Subject: [Python-Dev] Moving the ctypes repository to python.org
In-Reply-To: <e82pea$22d$1@sea.gmane.org>
References: <e7ge79$kg8$1@sea.gmane.org>	<20060623145602.GB10250@niemeyer.net>	<e7h24t$t7c$1@sea.gmane.org>	<449C5157.5050004@v.loewis.de>
	<e82pea$22d$1@sea.gmane.org>
Message-ID: <44A57D6D.70701@v.loewis.de>

Thomas Heller wrote:
> - Do I need special rights to call 'svnadmin load' to import this dumpfile
>   into Python SVN, or are the normal commit rights sufficient?

It's called "svnadmin" for a reason :-)

Neal Norwitz or myself will have to do that; we need to do it on the
repository machine locally. I would likely take subversion write
access offline for the time of the import, so that I can rollback
the entire repository in case of an operator mistake.

>   What exactly is the URL/PATH where it should be imported (some sandbox,
>   I assume)?

My view is that this is the "projects" repository; with ctypes being a
project, it should go into the root directory (i.e. as a sibling to
python, peps, distutils, stackless, ...). If you prefer to see it in
sandbox, this could work as well.

> - What about the Python trunk?  Should changes from the sandbox be merged
>   into Modules/_ctypes and Lib/ctypes, or would it be better (or possible at all)
>   to use the external mechanism?

I would prefer to see two-way merges going on, at least until 2.5 is
released (i.e. no changes to Modules/ctypes except for bug fixes).

Using svn:external is risky with respect to branching/tagging:

When we tag the Python tree, we want to tag the entire source tree.
With svn:external, only the external link would be in the tag,
i.e. later changes to the external link would modify old tags.
This is undesirable.

This problem could be solved with a versioned external link;
this would mean that ctypes could not be edited directly, but
that one would have to go through the original repository
URL to perform modifications, and then update the external
link.

So I think I still would prefer two-way merges. There are
tools to make the merges pretty mechanic.

Regards,
Martin

From martin at v.loewis.de  Fri Jun 30 21:41:43 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Fri, 30 Jun 2006 21:41:43 +0200
Subject: [Python-Dev] Proposal to eliminate PySet_Fini
In-Reply-To: <129CEF95A523704B9D46959C922A280002FE982C@nemesis.central.ccp.cc>
References: <129CEF95A523704B9D46959C922A280002FE982C@nemesis.central.ccp.cc>
Message-ID: <44A57E77.5090304@v.loewis.de>

Kristján V. Jónsson wrote:
> As a side note, is there a finalization order list for imported modules?

If they are Python modules, more or less, yes. Extension modules
cannot currently be finalized (I plan to change that for Py3k).
See PyImport_Cleanup for the precise algorithm used; there are
patches floating around to make this rely more on the garbage
collector.

Regards,
Martin



From exarkun at divmod.com  Fri Jun 30 21:43:31 2006
From: exarkun at divmod.com (Jean-Paul Calderone)
Date: Fri, 30 Jun 2006 15:43:31 -0400
Subject: [Python-Dev] ImportWarning flood
In-Reply-To: <ca471dc20606251751q36f2accbr3ff3fe8fbd24b20c@mail.gmail.com>
Message-ID: <20060630194331.29014.836424634.divmod.quotient.16780@ohm>

On Sun, 25 Jun 2006 17:51:17 -0700, Guido van Rossum <guido at python.org> wrote:
>On 6/24/06, Jean-Paul Calderone <exarkun at divmod.com> wrote:
>> >Actually, your application *was* pretty close to being broken a few
>> >weeks ago, when Guido wanted to drop the requirement that a package
>> >must contain an __init__ file. In that case, "import math" would have
>> >imported the directory, and given you an empty package.
>>
>>But this change was *not* made, and afaict it is not going to be made.
>
>Correct. We'll stick with the warning. (At least until Py3k but most
>likely also in Py3k.)
>

Even given that it emits completely spurious warnings for any package that
happens to share a name with a directory in whatever the working path is
(say, your home directory)?

How about if someone grovels through import.c and figures out how to make
the warning information only show up if the import actually fails?

Jean-Paul

From guido at python.org  Fri Jun 30 21:51:34 2006
From: guido at python.org (Guido van Rossum)
Date: Fri, 30 Jun 2006 12:51:34 -0700
Subject: [Python-Dev] ImportWarning flood
In-Reply-To: <20060630194331.29014.836424634.divmod.quotient.16780@ohm>
References: <ca471dc20606251751q36f2accbr3ff3fe8fbd24b20c@mail.gmail.com>
	<20060630194331.29014.836424634.divmod.quotient.16780@ohm>
Message-ID: <ca471dc20606301251m6e05f50ayd81e6ba3afd33095@mail.gmail.com>

On 6/30/06, Jean-Paul Calderone <exarkun at divmod.com> wrote:
> On Sun, 25 Jun 2006 17:51:17 -0700, Guido van Rossum <guido at python.org> wrote:
> >On 6/24/06, Jean-Paul Calderone <exarkun at divmod.com> wrote:
> >> >Actually, your application *was* pretty close to being broken a few
> >> >weeks ago, when Guido wanted to drop the requirement that a package
> >> >must contain an __init__ file. In that case, "import math" would have
> >> >imported the directory, and given you an empty package.
> >>
> >>But this change was *not* made, and afaict it is not going to be made.
> >
> >Correct. We'll stick with the warning. (At least until Py3k but most
> >likely also in Py3k.)
> >
>
> Even given that it emits completely spurious warnings for any package that
> happens to share a name with a directory in whatever the working path is
> (say, your home directory)?
>
> How about if someone grovels through import.c and figures out how to make
> the warning information only show up if the import actually fails?

That would work I think. But it's not easy.
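
(For anyone bitten by the flood in the meantime, it can be silenced with
the ordinary warnings machinery -- a minimal sketch, assuming 2.5 where
ImportWarning is a builtin:

    import warnings
    warnings.simplefilter("ignore", ImportWarning)

or, equivalently, on the command line:

    python -W "ignore::ImportWarning" myscript.py

That only hides the noise, of course; it doesn't make the warning fire
only when the import actually fails.)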

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From martin at v.loewis.de  Fri Jun 30 21:59:42 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Fri, 30 Jun 2006 21:59:42 +0200
Subject: [Python-Dev] sys.settrace() in Python 2.3 vs. 2.4
In-Reply-To: <20060630094140.10C8.JCARLSON@uci.edu>
References: <20060630094140.10C8.JCARLSON@uci.edu>
Message-ID: <44A582AE.7010902@v.loewis.de>

Josiah Carlson wrote:
> Any pointers as to why there is a difference would be appreciated. 

This was fixed in r35540, r35541, r35542, r35543, by Nick Bastin
and Armin Rigo, in response to #765624. Enough pointers :-?

Regards,
Martin

From martin at v.loewis.de  Fri Jun 30 22:03:50 2006
From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=)
Date: Fri, 30 Jun 2006 22:03:50 +0200
Subject: [Python-Dev] how long to wait for expat to incorporate a fix
 to	prevent a crasher?
In-Reply-To: <bbaeab100606301119jebac400vc10b96d8ac1d3029@mail.gmail.com>
References: <bbaeab100606301119jebac400vc10b96d8ac1d3029@mail.gmail.com>
Message-ID: <44A583A6.6010602@v.loewis.de>

Brett Cannon wrote:
> The question is how long do we wait for the expat developers to patch
> and do a micro release?  Do we just leave this possible crasher in and
> just rely entirely on the expat developers, or do we patch our copy and
> use that until they get around to doing their next version push?

If you have a patch, you should commit it to our copy. Make sure you
activate the test case, so that somebody incorporating the next Expat
release doesn't mistakenly roll back your change.

Of course, you might wait a few days to see whether Fred creates another
release that we could incorporate without introducing new features.

Regards,
Martin


From skip at pobox.com  Fri Jun 30 22:07:47 2006
From: skip at pobox.com (skip at pobox.com)
Date: Fri, 30 Jun 2006 15:07:47 -0500
Subject: [Python-Dev] Empty Subscript PEP on Wiki - keep or toss?
Message-ID: <17573.33939.473091.920283@montanaro.dyndns.org>

Noam Raphael posted an empty subscript PEP on the Python Wiki:

    http://wiki.python.org/moin/EmptySubscriptListPEP

It's not linked to by any other pages on the wiki.  Is there a reason it
wasn't added to the peps repository?

Skip

From martin at v.loewis.de  Fri Jun 30 22:12:57 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Fri, 30 Jun 2006 22:12:57 +0200
Subject: [Python-Dev] Pickle implementation questions
In-Reply-To: <3581AA168D87A2479D88EA319BDF7D32BC26A6@RED-MSG-80.redmond.corp.microsoft.com>
References: <3581AA168D87A2479D88EA319BDF7D32BC21A4@RED-MSG-80.redmond.corp.microsoft.com>	<44A4B1FF.7030709@v.loewis.de>	<3581AA168D87A2479D88EA319BDF7D32BC24C0@RED-MSG-80.redmond.corp.microsoft.com>	<1f7befae0606301129o3985969blafaa44e3aa53e978@mail.gmail.com>
	<3581AA168D87A2479D88EA319BDF7D32BC26A6@RED-MSG-80.redmond.corp.microsoft.com>
Message-ID: <44A585C9.1000508@v.loewis.de>

Bruce Christensen wrote:
> Thanks! That helps a lot. PEP 307 and the pickle module docs describe the end
> result pretty well, but they don't always make it clear where things are
> implemented. I'm trying to make sure that I'm getting the right interaction
> between object.__reduce(_ex)__, pickle, and copy_reg..

You really should ignore the existence of copy_reg._reduce_ex. It's an
implementation detail - it could have been implemented just as well in
C directly, in which case you couldn't as easily replace it with a
different function. It might be implemented that way in the next
release, or the entire __reduce_ex__ implementation might be lifted
to Python some day.

> One (hopefully) last question: is object.__reduce(_ex)__ really implemented in
> object?

Sure.

> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
>   File "C:\Python24\lib\pickle.py", line 1386, in dumps
>     Pickler(file, protocol, bin).dump(obj)
>   File "C:\Python24\lib\pickle.py", line 231, in dump
>     self.save(obj)
>   File "C:\Python24\lib\pickle.py", line 313, in save
>     rv = reduce(self.proto)
>   File "<stdin>", line 2, in bomb
> Exception: KABOOM! (<object object at 0x01E3C448>, 0) {}

You don't get a stack frame for C functions (normally, anyway):
there is no file/line number information available.

The reduce thing you are seeing really comes from

   # Check for a __reduce_ex__ method, fall back to __reduce__
   reduce = getattr(obj, "__reduce_ex__", None)

Regards,
Martin

From jcarlson at uci.edu  Fri Jun 30 22:16:09 2006
From: jcarlson at uci.edu (Josiah Carlson)
Date: Fri, 30 Jun 2006 13:16:09 -0700
Subject: [Python-Dev] Python memory model (low level)
In-Reply-To: <E1FwOkK-0007OB-JK@virgo.cus.cam.ac.uk>
References: <E1FwOkK-0007OB-JK@virgo.cus.cam.ac.uk>
Message-ID: <20060630131131.10CB.JCARLSON@uci.edu>


Nick Maclaren <nmm1 at cus.cam.ac.uk> wrote:
> "Tim Peters" <tim.peters at gmail.com> wrote:
> 
> [ Many useful answers ]
> 
> Thanks very much!  That helps.  Here are a few points where we are at
> cross-purposes.
[snip]
> I believe that, using the above approach, it would be possible to
> achieve good efficiency with very little C - certainly, it has worked
> in other languages.

If I understand you correctly (and I apologize if I don't), you are talking
about subclassing. Subclassing already has a mechanism and
implementation in CPython, both in the Python-language and
C-implementation levels. Further, I would expect that everything
relating to Decimal's current functionality (contexts, etc.) will be
implemented appropriately.  If I were to offer any advice to you, it
would be to relax for a few months and let the Google Summer of Code
project complete.  I have faith in Facundo's mentoring capability (in
that I have faith in his original decimal implementation), and I expect
that the C implementation of Decimal will satisfy the vast majority of
your concerns about floating point math and Python.

 - Josiah


From martin at v.loewis.de  Fri Jun 30 22:14:42 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Fri, 30 Jun 2006 22:14:42 +0200
Subject: [Python-Dev] Empty Subscript PEP on Wiki - keep or toss?
In-Reply-To: <17573.33939.473091.920283@montanaro.dyndns.org>
References: <17573.33939.473091.920283@montanaro.dyndns.org>
Message-ID: <44A58632.20505@v.loewis.de>

skip at pobox.com wrote:
> Noam Raphael posted an empty subscript PEP on the Python Wiki:
> 
>     http://wiki.python.org/moin/EmptySubscriptListPEP
> 
> It's not linked to by any other pages on the wiki.  Is there a reason it
> wasn't added to the peps repository?

The most likely reason is that he didn't submit the PEP to the PEP
editors. The next most likely reason is that the PEP editors haven't
had time to add it yet.

Regards,
Martin

From g.brandl at gmx.net  Fri Jun 30 22:15:51 2006
From: g.brandl at gmx.net (Georg Brandl)
Date: Fri, 30 Jun 2006 22:15:51 +0200
Subject: [Python-Dev] Empty Subscript PEP on Wiki - keep or toss?
In-Reply-To: <17573.33939.473091.920283@montanaro.dyndns.org>
References: <17573.33939.473091.920283@montanaro.dyndns.org>
Message-ID: <e840po$k1i$1@sea.gmane.org>

skip at pobox.com wrote:
> Noam Raphael posted an empty subscript PEP on the Python Wiki:
> 
>     http://wiki.python.org/moin/EmptySubscriptListPEP
> 
> It's not linked to by any other pages on the wiki.  Is there a reason it
> wasn't added to the peps repository?

Perhaps the author forgot to submit it to the PEP editor, or he decided
to abandon it after the mostly negative discussion here.

Georg


From tim.peters at gmail.com  Fri Jun 30 22:24:10 2006
From: tim.peters at gmail.com (Tim Peters)
Date: Fri, 30 Jun 2006 16:24:10 -0400
Subject: [Python-Dev] Pickle implementation questions
In-Reply-To: <3581AA168D87A2479D88EA319BDF7D32BC26A6@RED-MSG-80.redmond.corp.microsoft.com>
References: <3581AA168D87A2479D88EA319BDF7D32BC21A4@RED-MSG-80.redmond.corp.microsoft.com>
	<44A4B1FF.7030709@v.loewis.de>
	<3581AA168D87A2479D88EA319BDF7D32BC24C0@RED-MSG-80.redmond.corp.microsoft.com>
	<1f7befae0606301129o3985969blafaa44e3aa53e978@mail.gmail.com>
	<3581AA168D87A2479D88EA319BDF7D32BC26A6@RED-MSG-80.redmond.corp.microsoft.com>
Message-ID: <1f7befae0606301324y7d7ab15q3b63babd01058f08@mail.gmail.com>

[Tim Peters]
>> I hope you've read PEP 307:

[Bruce Christensen]
> I have. Thanks to you and Guido for writing it! It's been a huge help.

You're welcome -- although we were paid for that, so thanks aren't needed ;-)

>> The implementation is more like:
>> [snip]

> Thanks! That helps a lot. PEP 307 and the pickle module docs describe the end
> result pretty well, but they don't always make it clear where things are
> implemented.

Well, "where" and "how" are implementation details.  Alas, those
aren't always clearly separated from the semantics (and since Guido &
I both like operational definitions, stuff we write is especially
prone to muddiness on such points).  The layers of backward
compatibility for now out-of-favor gimmicks don't help either -- this
is akin to reading the Windows API docs, finding around six functions
that _sound_ relevant, and then painfully discovering none of them
actually do what you hope they do, one at a time :-)

> I'm trying to make sure that I'm getting the right interaction between
> object.__reduce(_ex)__, pickle, and copy_reg.

Alas, I'm sure I don't remember sufficient details anymore myself.

> One (hopefully) last question: is object.__reduce(_ex)__ really implemented in
> object?

Yes, although I think you're overlooking this bit of the "acts as if"
pseudo-implementation from my last note:

       elif proto < 2:
           return copy_reg._reduce_ex(self, proto)

That is, the `object` implementation left the proto < 2 cases coded in
Python.  You won't get to the (hoped to be) common path:

        else:
           # about 130 lines of C code exploiting proto 2

unless you ask for proto 2.
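
Stitching those two fragments together, the whole dispatch "acts as if" it
were written roughly like this -- a Python sketch only, not the real C code
in typeobject.c, and the override test is an approximation of what the C
level actually checks:

    import copy_reg

    def object_reduce_ex_sketch(obj, proto=0):
        # A subclass that supplies its own __reduce__ wins outright.
        if type(obj).__reduce__ is not object.__reduce__:
            return obj.__reduce__()
        elif proto < 2:
            # Protocols 0 and 1 are delegated to the Python helper.
            return copy_reg._reduce_ex(obj, proto)
        else:
            # Protocol 2 is the ~130 lines of C; defer to the real thing.
            return object.__reduce_ex__(obj, proto)

Only the last branch ever reaches the C fast path, which is why copy_reg
shows up in tracebacks for protocols 0 and 1.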

> The tracebacks below would indicate that pickle directly implements the
> behavior that the specs say is implemented in object. However, that could be
> because frames from C code don't show up in tracebacks.

That's right, they don't, and the C `object` code calls back into
copy_reg in proto < 2 cases.

> I'm not familiar enough with CPython to know for sure.
>
> >>> import copy_reg
> >>> def bomb(*args, **kwargs):
> ...     raise Exception('KABOOM! %r %r' % (args, kwargs))
> ...
> >>> copy_reg._reduce_ex = bomb
> >>> import pickle
> >>> pickle.dumps(object())

You're defaulting to protocol 0 there, so, as above, the `object`
implementation actually calls copy_reg._reduce_ex(self, 0) in this
case.  Much the same if you do:

>>> pickle.dumps(object(), 1)

I think it's a misfeature of pickle that it defaults to the oldest
protocol instead of the newest, but not much to be done about that in
Python 2.

Do one of these instead and the traceback will go away:

>>> pickle.dumps(object(), 2)
>>> pickle.dumps(object(), -1)
>>> pickle.dumps(object(), pickle.HIGHEST_PROTOCOL)

> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
>   File "C:\Python24\lib\pickle.py", line 1386, in dumps
>     Pickler(file, protocol, bin).dump(obj)
>   File "C:\Python24\lib\pickle.py", line 231, in dump
>     self.save(obj)
>   File "C:\Python24\lib\pickle.py", line 313, in save
>     rv = reduce(self.proto)
>   File "<stdin>", line 2, in bomb
> Exception: KABOOM! (<object object at 0x01E3C448>, 0) {}

It's _certainly_ an implementation accident that the `object` coding
happens to call back into `copy_reg` here.  There was no intent that
users be able to monkey-patch copy_reg and replace _reduce_ex().  It
was left coded in Python purely as a cost/benefit tradeoff.

> >>> class NewObj(object):
> ...     def __reduce__(self):
> ...             raise Exception("reducing NewObj")

In this case, it doesn't matter at all how `object` implements
__reduce__ or __reduce_ex__, because you're explicitly saying that
NewObj has its own __reduce__ method, and that overrides `object`'s
implementation.  IOW, you're getting exactly what you ask for in this
case, and regardless of pickle protocol specified:

> >>> import pickle
> >>> pickle.dumps(NewObj())

Ask for protocols 1 or 2 here, and you'll get the same traceback.  It
would be a bug if you didn't :-)

> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
>   File "C:\Python24\lib\pickle.py", line 1386, in dumps
>     Pickler(file, protocol, bin).dump(obj)
>   File "C:\Python24\lib\pickle.py", line 231, in dump
>     self.save(obj)
>   File "C:\Python24\lib\pickle.py", line 313, in save
>     rv = reduce(self.proto)
>   File "<stdin>", line 3, in __reduce__
> Exception: reducing NewObj

From jcarlson at uci.edu  Fri Jun 30 22:27:24 2006
From: jcarlson at uci.edu (Josiah Carlson)
Date: Fri, 30 Jun 2006 13:27:24 -0700
Subject: [Python-Dev] sys.settrace() in Python 2.3 vs. 2.4
In-Reply-To: <44A582AE.7010902@v.loewis.de>
References: <20060630094140.10C8.JCARLSON@uci.edu>
	<44A582AE.7010902@v.loewis.de>
Message-ID: <20060630131620.10CE.JCARLSON@uci.edu>


"Martin v. L?wis" <martin at v.loewis.de> wrote:
> 
> Josiah Carlson wrote:
> > Any pointers as to why there is a difference would be appreciated. 
> 
> This was fixed in r35540, r35541, r35542, r35543, by Nick Bastin
> and Armin Rigo, in response to #765624. Enough pointers :-?

Yes, thank you Martin.  I would guess it wasn't backported to the 2.3
branch due to a change in the maybe_call_line_trace() definition; that
answers my question, but makes me sad (I wanted this functionality
*pout*).

I'll just have to gracefully degrade functionality for older Pythons. 
Thank you again Martin,
 - Josiah


From martin at v.loewis.de  Fri Jun 30 22:30:47 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Fri, 30 Jun 2006 22:30:47 +0200
Subject: [Python-Dev] Python memory model (low level)
In-Reply-To: <1f7befae0606301214o7a88087cnc5feeb8e25ec3dd8@mail.gmail.com>
References: <E1FwNyY-0006z0-S0@virgo.cus.cam.ac.uk>
	<1f7befae0606301214o7a88087cnc5feeb8e25ec3dd8@mail.gmail.com>
Message-ID: <44A589F7.3000506@v.loewis.de>

Tim Peters wrote:
> Don't know what "raw data" might mean here.  Any Python object can be
> bound to any attribute of a class.  In Python, e.g.,
> 
> class MyClass:
> 
>      mydata = ['xyz', 12]
> 
>      def method(self):
>          MyClass.mydata.append(-1)
>          # or, more inheritance-friendly
>          self.__class__.mydata.append(-1)
> 
> This is substantially more long-winded in C.

To get a starting point,

    PyDict_GetItemString(MyClass->tp_dict, "mydata")

is the equivalent of

    self.__class__.mydata

That way, the raw data would get exposed to the Python level.
If you don't want this to happen, you could also reverse the
lookup:

    static PyObject *mydata; /* = PyDict_New() */

and then

    PyDict_GetItem(mydata, MyClass)

If "raw" means "non-PyObject", you would have to wrap the
raw data pointer with a CObject first.

Regards,
Martin

From brett at python.org  Fri Jun 30 22:54:13 2006
From: brett at python.org (Brett Cannon)
Date: Fri, 30 Jun 2006 13:54:13 -0700
Subject: [Python-Dev] how long to wait for expat to incorporate a fix to
	prevent a crasher?
In-Reply-To: <44A583A6.6010602@v.loewis.de>
References: <bbaeab100606301119jebac400vc10b96d8ac1d3029@mail.gmail.com>
	<44A583A6.6010602@v.loewis.de>
Message-ID: <bbaeab100606301354w41ce6d63gcfd0b93aef2d2965@mail.gmail.com>

On 6/30/06, "Martin v. L?wis" <martin at v.loewis.de> wrote:
>
> Brett Cannon wrote:
> > The question is how long do we wait for the expat developers to patch
> > and do a micro release?  Do we just leave this possible crasher in and
> > just rely entirely on the expat developers, or do we patch our copy and
> > use that until they get around to doing their next version push?
>
> If you have a patch, you should commit it to our copy. Make sure you
> activate the test case, so that somebody incorporating the next Expat
> release doesn't mistakenly roll back your change.


OK, will do.

> Of course, you might wait a few days to see whether Fred creates another
> release that we could incorporate without introducing new features.

Yeah, I am going to wait a little while.

-Brett

From jack at psynchronous.com  Tue Jun 27 20:44:05 2006
From: jack at psynchronous.com (Jack Diederich)
Date: Tue, 27 Jun 2006 14:44:05 -0400
Subject: [Python-Dev] Proposal to eliminate PySet_Fini
In-Reply-To: <d38f5330606271109s39f64022w53261832cd17c6b@mail.gmail.com>
References: <d38f5330606271109s39f64022w53261832cd17c6b@mail.gmail.com>
Message-ID: <20060627184404.GL10485@performancedrivers.com>

On Tue, Jun 27, 2006 at 02:09:19PM -0400, Alexander Belopolsky wrote:
> Setobject code allocates several internal objects on the heap that are
> cleaned up by the PySet_Fini function.  This is a fine design choice,
> but it often makes debugging applications with embedded python more
> difficult.
> 
> I propose to eliminate the need for PySet_Fini as follows:
> 
> 1. Make dummy and emptyfrozenset static objects similar to Py_None
> 2. Eliminate the free sets reuse scheme.
> 
> The second proposal is probably more controversial, but is there any
> real benefit from that scheme when pymalloc is enabled?

These are optimizations and not likely to go away; tuples especially get
reused frequently.  In the case of tuples you can #define MAXSAVEDTUPLES
to zero in a custom Python build to disable free-listing.  You can submit
a patch that #ifdefs the other free lists in a similar way (sets don't
currently have such a check, for instance) and hope it gets accepted.
I don't see why it wouldn't.

PyObject_MALLOC does a good job of reusing small allocations but it
can't quite manage the same speed as a free list, especially for things that
have some extra setup involved (tuples have a free list for each length).

-Jack

From yi.s.ding at gmail.com  Wed Jun 28 23:40:49 2006
From: yi.s.ding at gmail.com (Yi Ding)
Date: Wed, 28 Jun 2006 14:40:49 -0700
Subject: [Python-Dev] bug #1513646
Message-ID: <cbc7af680606281440i21443532n3d431e01b4a160d@mail.gmail.com>

Hi guys,

I filed this bug but sourceforge is down so I can't update it:
http://sourceforge.net/tracker/index.php?func=detail&aid=1513646&group_id=5470&atid=105470

Basically, os.access returns the wrong result for W_OK, because instead
of using & it uses && to test whether the file is read-only.

diff -urdN stock/python/Modules/posixmodule.c accessfix/python/Modules/posixmodule.c
--- stock/python/Modules/posixmodule.c  2006-06-28 14:20:26.138047100 -0700
+++ accessfix/python/Modules/posixmodule.c      2006-06-28 14:15:31.368649100 -0700
@@ -1402,7 +1402,7 @@
                return PyBool_FromLong(0);
        /* Access is possible if either write access wasn't requested, or
           the file isn't read-only. */
-       return PyBool_FromLong(!(mode & 2) || !(attr && FILE_ATTRIBUTE_READONLY));
+       return PyBool_FromLong(!(mode & 2) || !(attr & FILE_ATTRIBUTE_READONLY));
 #else
        int res;
        if (!PyArg_ParseTuple(args, "eti:access",
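
To see why the one-character change matters, here is the same mistake in
Python terms (the flag values are the usual Win32 constants, used purely
for illustration):

    FILE_ATTRIBUTE_READONLY = 0x01
    FILE_ATTRIBUTE_HIDDEN   = 0x02

    attr = FILE_ATTRIBUTE_HIDDEN   # file is hidden, but writable

    print bool(attr and FILE_ATTRIBUTE_READONLY)  # True:  &&-style test is wrong
    print bool(attr & FILE_ATTRIBUTE_READONLY)    # False: bitmask test is right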

Thanks,
Yi