From ncoghlan at gmail.com  Fri Apr  1 00:29:29 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Fri, 1 Apr 2011 08:29:29 +1000
Subject: [Python-Dev] devguide: Add a table of contents to the FAQ.
In-Reply-To: <20110331163426.C4ABCD64A7@kimball.webabinitio.net>
References: <E1Q4e9O-0001X5-Jr@dinsdale.python.org>
	<20110330222004.5b23bcc5@pitrou.net> <4D94A21B.9040501@gmail.com>
	<20110331163426.C4ABCD64A7@kimball.webabinitio.net>
Message-ID: <AANLkTinJ+4SeLBHzQkwBgtQppq=ow-UJx2rtZPQhGvMH@mail.gmail.com>

On Fri, Apr 1, 2011 at 2:34 AM, R. David Murray <rdmurray at bitdance.com> wrote:
> I agree with this point.  The sidebar list of questions is effectively
> useless.

Indeed. If it's simple, I'd actually be inclined to reduce the depth
of the sidebar in this case to only show the categories and not the
individual questions.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From ncoghlan at gmail.com  Fri Apr  1 00:37:51 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Fri, 1 Apr 2011 08:37:51 +1000
Subject: [Python-Dev] Issue 11715: building Python from source on
 multiarch Debian/Ubuntu
In-Reply-To: <4D94BB4D.8030405@netwok.org>
References: <20110330161709.756b27f7@neurotica.wooz.org>
	<4D94BB4D.8030405@netwok.org>
Message-ID: <AANLkTi=xHN2jffUkjz=7m+dbVvjernOQEg1_cHk+SQ+q@mail.gmail.com>

On Fri, Apr 1, 2011 at 3:35 AM, Éric Araujo <merwok at netwok.org> wrote:
> If I understand the policy correctly, 2.5 and 2.6 are not considered
> active branches, so any doc, build or bug fixes are not acceptable.

Actual build fixes may be acceptable, if they're needed to allow
people to build from a version control checkout or from source (since
they need to be able to do that in order to apply security patches).

However, the combination of "running on Ubuntu 11.04+" and "need to
build security patched version of old Python" seems unlikely.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From barry at python.org  Fri Apr  1 01:01:06 2011
From: barry at python.org (Barry Warsaw)
Date: Thu, 31 Mar 2011 19:01:06 -0400
Subject: [Python-Dev] Issue 11715: building Python from source on
 multiarch Debian/Ubuntu
In-Reply-To: <AANLkTi=xHN2jffUkjz=7m+dbVvjernOQEg1_cHk+SQ+q@mail.gmail.com>
References: <20110330161709.756b27f7@neurotica.wooz.org>
	<4D94BB4D.8030405@netwok.org>
	<AANLkTi=xHN2jffUkjz=7m+dbVvjernOQEg1_cHk+SQ+q@mail.gmail.com>
Message-ID: <20110331190106.7b557db1@neurotica.wooz.org>

On Apr 01, 2011, at 08:37 AM, Nick Coghlan wrote:

>On Fri, Apr 1, 2011 at 3:35 AM, Éric Araujo <merwok at netwok.org> wrote:
>> If I understand the policy correctly, 2.5 and 2.6 are not considered
>> active branches, so any doc, build or bug fixes are not acceptable.
>
>Actual build fixes may be acceptable, if they're needed to allow
>people to build from a version control checkout or from source (since
>they need to be able to do that in order to apply security patches).
>
>However, the combination of "running on Ubuntu 11.04+" and "need to
>build security patched version of old Python" seems unlikely.

I'll just plead as RM for 2.6 that it's not as unlikely as it seems :).

I'm happy to defer to MvL on its (non) applicability to 2.5.

Cheers,
-Barry

From v+python at g.nevcal.com  Fri Apr  1 01:06:03 2011
From: v+python at g.nevcal.com (Glenn Linderman)
Date: Thu, 31 Mar 2011 16:06:03 -0700
Subject: [Python-Dev] Please revert autofolding of tracker edit form
In-Reply-To: <in2bg7$v5q$1@dough.gmane.org>
References: <in07nd$5nb$1@dough.gmane.org>	<AANLkTimtQVdm-9_cveQtbeCdkOj6cQ7C=Qa+qMqp+zBY@mail.gmail.com>	<20110331040513.A82B54B71F@kimball.webabinitio.net>	<19860.23698.260679.739252@montanaro.dyndns.org>	<19860.25443.131458.388306@montanaro.dyndns.org>	<20110331141612.5ad2097a@pitrou.net>	<AANLkTik927x3CBJHdrXBuGNkOhtZCV9NSS7A_4UHnaoB@mail.gmail.com>	<1301579959.3535.10.camel@localhost.localdomain>
	<in2bg7$v5q$1@dough.gmane.org>
Message-ID: <4D9508DB.2080703@g.nevcal.com>

On 3/31/2011 9:52 AM, Terry Reedy wrote:
> I would like to try putting the comment box after the last (most 
> recent) comment, as that is the message one most ofter responds to. 
> Having to now scroll up and down between comment box and last 
> message(s) is often of a nuisance. 

+1.   Or +0 for putting the messages in reverse time sequence.

From rdmurray at bitdance.com  Fri Apr  1 01:08:21 2011
From: rdmurray at bitdance.com (R. David Murray)
Date: Thu, 31 Mar 2011 19:08:21 -0400
Subject: [Python-Dev] Please revert autofolding of tracker edit form
In-Reply-To: <4D94F948.3010203@v.loewis.de>
References: <in07nd$5nb$1@dough.gmane.org>
	<AANLkTimtQVdm-9_cveQtbeCdkOj6cQ7C=Qa+qMqp+zBY@mail.gmail.com>
	<20110331013208.0a708867@pitrou.net> <4D94F948.3010203@v.loewis.de>
Message-ID: <20110331230745.AFDAD2B673@kimball.webabinitio.net>

On Thu, 31 Mar 2011 23:59:36 +0200, "Martin v. Löwis" <martin at v.loewis.de> wrote:
> Notice that the issue title was always there, in your browser's title
> bar (unless you have a web browser that doesn't display the page title).

I do.

--
R. David Murray           http://www.bitdance.com

From ethan at stoneleaf.us  Fri Apr  1 01:21:31 2011
From: ethan at stoneleaf.us (Ethan Furman)
Date: Thu, 31 Mar 2011 16:21:31 -0700
Subject: [Python-Dev] Please revert autofolding of tracker edit form
In-Reply-To: <4D9508DB.2080703@g.nevcal.com>
References: <in07nd$5nb$1@dough.gmane.org>	<AANLkTimtQVdm-9_cveQtbeCdkOj6cQ7C=Qa+qMqp+zBY@mail.gmail.com>	<20110331040513.A82B54B71F@kimball.webabinitio.net>	<19860.23698.260679.739252@montanaro.dyndns.org>	<19860.25443.131458.388306@montanaro.dyndns.org>	<20110331141612.5ad2097a@pitrou.net>	<AANLkTik927x3CBJHdrXBuGNkOhtZCV9NSS7A_4UHnaoB@mail.gmail.com>	<1301579959.3535.10.camel@localhost.localdomain>	<in2bg7$v5q$1@dough.gmane.org>
	<4D9508DB.2080703@g.nevcal.com>
Message-ID: <4D950C7B.4060106@stoneleaf.us>

Glenn Linderman wrote:
>   On 3/31/2011 9:52 AM, Terry Reedy wrote:
>> I would like to try putting the comment box after the last (most 
>> recent) comment, as that is the message one most ofter responds to. 
>> Having to now scroll up and down between comment box and last 
>> message(s) is often of a nuisance. 
> 
> +1.   Or +0 reverse time sequence the messages.

-1 on reverse time sequence of messages -- no top posting!  ;)

~Ethan~

From benjamin at python.org  Fri Apr  1 01:11:46 2011
From: benjamin at python.org (Benjamin Peterson)
Date: Thu, 31 Mar 2011 18:11:46 -0500
Subject: [Python-Dev] warn_unused_result warnings
Message-ID: <AANLkTik96YJkR4oxYJRmNaZY9ymPX4ahOMAqfjUMDTfz@mail.gmail.com>

I'm rather sick of seeing these warnings on all compiles, so I propose
we enable the -Wno-unused-result option. I judge that most of the
cases where this occurs are error-reporting functions, where not much
can be done with the return code.
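
For illustration, a minimal hedged sketch of the kind of call that triggers
the warning (the helper below is hypothetical; the attribute comes from
glibc's headers, which mark functions such as write() with
warn_unused_result when _FORTIFY_SOURCE is enabled, not from CPython
itself):

    #include <string.h>
    #include <unistd.h>

    /* Hypothetical error-reporting helper: nothing useful can be done if
       write() fails here, yet gcc's -Wunused-result complains on every
       build because the return value is silently discarded. */
    static void report_error(const char *msg)
    {
        write(2, msg, strlen(msg));
    }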

-- 
Regards,
Benjamin

From rdmurray at bitdance.com  Fri Apr  1 01:12:01 2011
From: rdmurray at bitdance.com (R. David Murray)
Date: Thu, 31 Mar 2011 19:12:01 -0400
Subject: [Python-Dev] devguide: Add a table of contents to the FAQ.
In-Reply-To: <AANLkTinJ+4SeLBHzQkwBgtQppq=ow-UJx2rtZPQhGvMH@mail.gmail.com>
References: <E1Q4e9O-0001X5-Jr@dinsdale.python.org>
	<20110330222004.5b23bcc5@pitrou.net> <4D94A21B.9040501@gmail.com>
	<20110331163426.C4ABCD64A7@kimball.webabinitio.net>
	<AANLkTinJ+4SeLBHzQkwBgtQppq=ow-UJx2rtZPQhGvMH@mail.gmail.com>
Message-ID: <20110331231125.7B06B2B673@kimball.webabinitio.net>

On Fri, 01 Apr 2011 08:29:29 +1000, Nick Coghlan <ncoghlan at gmail.com> wrote:
> On Fri, Apr 1, 2011 at 2:34 AM, R. David Murray <rdmurray at bitdance.com> wrote:
> > I agree with this point.  The sidebar list of questions is effectively
> > useless.
> 
> Indeed. If it's simple, I'd actually be inclined to reduce the depth
> of the sidebar in this case to only show the categories and not the
> individual questions.

I believe that requires editing the sphinx page template and adding
a special case of some sort.

--
R. David Murray           http://www.bitdance.com

From raymond.hettinger at gmail.com  Fri Apr  1 01:15:48 2011
From: raymond.hettinger at gmail.com (Raymond Hettinger)
Date: Thu, 31 Mar 2011 16:15:48 -0700
Subject: [Python-Dev] Impaired Usability of the Mercurial Source Viewer
Message-ID: <BE0A9A1F-89FA-4ECF-9CC1-8B09027370FF@gmail.com>

The Hg source viewer needs to be tweaked to improve its usability.
What we've got now is a step backwards from the previous svn viewer.

Looking at http://hg.python.org/cpython/file/default/Lib/linecache.py for example,
there are two issues.   1) the code cannot be cut-and-pasted because the
line numbers are commingled with the source text.  2) the code is hard
to read because of the alternating white and gray bars.

Contrast that to the more typical, beautiful presentations with a solid
background and the ability to cut-and-paste without grabbing line
numbers:

  http://dpaste.org/qyKv/

  http://code.activestate.com/recipes/577629-namedtupleabc-abstract-base-class-mix-in-for-named/


Raymond


P.S.  The old svn viewer worked great (looked good and could be cut),
but that was changed just before the Mercurial switchover (the fonts
changed, the line numbering code changed, and the leading changed),
so it is not a good comparison anymore.

From raymond.hettinger at gmail.com  Fri Apr  1 01:27:27 2011
From: raymond.hettinger at gmail.com (Raymond Hettinger)
Date: Thu, 31 Mar 2011 16:27:27 -0700
Subject: [Python-Dev] Please revert autofolding of tracker edit form
In-Reply-To: <in2bg7$v5q$1@dough.gmane.org>
References: <in07nd$5nb$1@dough.gmane.org>	<AANLkTimtQVdm-9_cveQtbeCdkOj6cQ7C=Qa+qMqp+zBY@mail.gmail.com>	<20110331040513.A82B54B71F@kimball.webabinitio.net>	<19860.23698.260679.739252@montanaro.dyndns.org>	<19860.25443.131458.388306@montanaro.dyndns.org>	<20110331141612.5ad2097a@pitrou.net>	<AANLkTik927x3CBJHdrXBuGNkOhtZCV9NSS7A_4UHnaoB@mail.gmail.com>
	<1301579959.3535.10.camel@localhost.localdomain>
	<in2bg7$v5q$1@dough.gmane.org>
Message-ID: <1166C08E-76BB-4E55-BA6D-827D57A994DF@gmail.com>


On Mar 31, 2011, at 9:52 AM, Terry Reedy wrote:

> I would like to try putting the comment box after the last (most recent) comment, as that is the message one most ofter responds to. Having to now scroll up and down between comment box and last message(s) is often of a nuisance.

While that sounds logical, I think it will be a usability problem.  If someone doesn't see the comment box immediately, they may not know to scroll down past dozens of messages to find it.

Rather than being trial-and-error amateur web page designers, it would be better to follow proven models.  All of the following have the comment box at the top and the messages in reverse chronological order:

* http://news.ycombinator.com/item?id=2393587  
* http://digg.com/news/entertainment/top_12_game_shows_of_all_time
* https://twitter.com/


Raymond

From benjamin at python.org  Fri Apr  1 01:38:48 2011
From: benjamin at python.org (Benjamin Peterson)
Date: Thu, 31 Mar 2011 18:38:48 -0500
Subject: [Python-Dev] Please revert autofolding of tracker edit form
In-Reply-To: <1166C08E-76BB-4E55-BA6D-827D57A994DF@gmail.com>
References: <in07nd$5nb$1@dough.gmane.org>
	<AANLkTimtQVdm-9_cveQtbeCdkOj6cQ7C=Qa+qMqp+zBY@mail.gmail.com>
	<20110331040513.A82B54B71F@kimball.webabinitio.net>
	<19860.23698.260679.739252@montanaro.dyndns.org>
	<19860.25443.131458.388306@montanaro.dyndns.org>
	<20110331141612.5ad2097a@pitrou.net>
	<AANLkTik927x3CBJHdrXBuGNkOhtZCV9NSS7A_4UHnaoB@mail.gmail.com>
	<1301579959.3535.10.camel@localhost.localdomain>
	<in2bg7$v5q$1@dough.gmane.org>
	<1166C08E-76BB-4E55-BA6D-827D57A994DF@gmail.com>
Message-ID: <AANLkTimJc6m=M3T2c55M6-Xocjd9PtLZJEpDb5Bxe-wg@mail.gmail.com>

2011/3/31 Raymond Hettinger <raymond.hettinger at gmail.com>:
>
> On Mar 31, 2011, at 9:52 AM, Terry Reedy wrote:
>
> I would like to try putting the comment box after the last (most recent)
> comment, as that is the message one most ofter responds to. Having to now
> scroll up and down between comment box and last message(s) is often of a
> nuisance.
>
> While that sounds logical, I think it will be a usability problem.  If
> someone doesn't see a the comment box immediately, they may not know to
> scroll down past dozens of messages to find it.
> Rather that being trial-and-error amateur web page designers, it would be
> better to follow proven models.  All of the following have the comment box
> at the top and the messages in reverse chronological order:

Please, no reverse chronological order! Every bug tracker I know of that
isn't an under-configured Roundup uses chronological order.



-- 
Regards,
Benjamin

From anikom15 at gmail.com  Fri Apr  1 02:02:44 2011
From: anikom15 at gmail.com (Westley Martínez)
Date: Thu, 31 Mar 2011 17:02:44 -0700
Subject: [Python-Dev] Please revert autofolding of tracker edit form
In-Reply-To: <1166C08E-76BB-4E55-BA6D-827D57A994DF@gmail.com>
References: <in07nd$5nb$1@dough.gmane.org>
	<AANLkTimtQVdm-9_cveQtbeCdkOj6cQ7C=Qa+qMqp+zBY@mail.gmail.com>
	<20110331040513.A82B54B71F@kimball.webabinitio.net>
	<19860.23698.260679.739252@montanaro.dyndns.org>
	<19860.25443.131458.388306@montanaro.dyndns.org>
	<20110331141612.5ad2097a@pitrou.net>
	<AANLkTik927x3CBJHdrXBuGNkOhtZCV9NSS7A_4UHnaoB@mail.gmail.com>
	<1301579959.3535.10.camel@localhost.localdomain>
	<in2bg7$v5q$1@dough.gmane.org>
	<1166C08E-76BB-4E55-BA6D-827D57A994DF@gmail.com>
Message-ID: <1301616164.7031.5.camel@localhost.localdomain>

On Thu, 2011-03-31 at 16:27 -0700, Raymond Hettinger wrote:
> 
> On Mar 31, 2011, at 9:52 AM, Terry Reedy wrote:
> 
> > I would like to try putting the comment box after the last (most
> > recent) comment, as that is the message one most ofter responds to.
> > Having to now scroll up and down between comment box and last
> > message(s) is often of a nuisance.
> > 
> 
> 
> While that sounds logical, I think it will be a usability problem.  If
> someone doesn't see a the comment box immediately, they may not know
> to scroll down past dozens of messages to find it.
> 
> 
> Rather that being trial-and-error amateur web page designers, it would
> be better to follow proven models.  All of the following have the
> comment box at the top and the messages in reverse chronological
> order:
> 
> 
> * http://news.ycombinator.com/item?id=2393587  
> * http://digg.com/news/entertainment/top_12_game_shows_of_all_time
> * https://twitter.com/
> 
> 
> 
> 
> Raymond

How 'bout no? YouTube uses this and it's horrid and unnatural, and
bulletin boards have been using chronological order for ages with
great success. Reverse chronological order has a niche for feeds,
updates, whatever you want to call it, but when it comes to following a
discussion it's much easier to start with the first word.


From raymond.hettinger at gmail.com  Fri Apr  1 02:22:23 2011
From: raymond.hettinger at gmail.com (Raymond Hettinger)
Date: Thu, 31 Mar 2011 17:22:23 -0700
Subject: [Python-Dev] Please revert autofolding of tracker edit form
In-Reply-To: <1301616164.7031.5.camel@localhost.localdomain>
References: <in07nd$5nb$1@dough.gmane.org>
	<AANLkTimtQVdm-9_cveQtbeCdkOj6cQ7C=Qa+qMqp+zBY@mail.gmail.com>
	<20110331040513.A82B54B71F@kimball.webabinitio.net>
	<19860.23698.260679.739252@montanaro.dyndns.org>
	<19860.25443.131458.388306@montanaro.dyndns.org>
	<20110331141612.5ad2097a@pitrou.net>
	<AANLkTik927x3CBJHdrXBuGNkOhtZCV9NSS7A_4UHnaoB@mail.gmail.com>
	<1301579959.3535.10.camel@localhost.localdomain>
	<in2bg7$v5q$1@dough.gmane.org>
	<1166C08E-76BB-4E55-BA6D-827D57A994DF@gmail.com>
	<1301616164.7031.5.camel@localhost.localdomain>
Message-ID: <8DBBAB30-756D-46E2-9E2E-22B73836285E@gmail.com>


On Mar 31, 2011, at 5:02 PM, Westley Martínez wrote:
> 
> How 'bout no? YouTube uses this and it's horrid and unnatural, and
> bulletin boards have been using chronological order for whiles with
> great success. Reverse chronological order has a niche for feeds,
> updates, whatever you want to call it, but when it comes to following a
> discussion it's much easier to start with the first word.

Perhaps the most important part is that the comment box goes at the top.


Raymond


From tjreedy at udel.edu  Fri Apr  1 02:22:18 2011
From: tjreedy at udel.edu (Terry Reedy)
Date: Thu, 31 Mar 2011 20:22:18 -0400
Subject: [Python-Dev] Please revert autofolding of tracker edit form
In-Reply-To: <1166C08E-76BB-4E55-BA6D-827D57A994DF@gmail.com>
References: <in07nd$5nb$1@dough.gmane.org>	<AANLkTimtQVdm-9_cveQtbeCdkOj6cQ7C=Qa+qMqp+zBY@mail.gmail.com>	<20110331040513.A82B54B71F@kimball.webabinitio.net>	<19860.23698.260679.739252@montanaro.dyndns.org>	<19860.25443.131458.388306@montanaro.dyndns.org>	<20110331141612.5ad2097a@pitrou.net>	<AANLkTik927x3CBJHdrXBuGNkOhtZCV9NSS7A_4UHnaoB@mail.gmail.com>	<1301579959.3535.10.camel@localhost.localdomain>	<in2bg7$v5q$1@dough.gmane.org>
	<1166C08E-76BB-4E55-BA6D-827D57A994DF@gmail.com>
Message-ID: <in35rp$jt4$1@dough.gmane.org>

On 3/31/2011 7:27 PM, Raymond Hettinger wrote:
>
> On Mar 31, 2011, at 9:52 AM, Terry Reedy wrote:
>
>> I would like to try putting the comment box after the last (most
>> recent) comment, as that is the message one most ofter responds to.
>> Having to now scroll up and down between comment box and last
>> message(s) is often of a nuisance.
>
> While that sounds logical, I think it will be a usability problem. If
> someone doesn't see a the comment box immediately, they may not know to
> scroll down past dozens of messages to find it.

Even though such is standard for the majority of web fora?

> Rather that being trial-and-error amateur web page designers, it would
> be better to follow proven models. All of the following have the comment
> box at the top and the messages in reverse chronological order:

I really hate that since it means scrolling down before reading, and 
because this is unusual, so by habit I start reading at the top.
>
> * http://news.ycombinator.com/item?id=2393587
> * http://digg.com/news/entertainment/top_12_game_shows_of_all_time
> * https://twitter.com/

In my experience, reverse order is used by maybe 20% of the sites I have 
visited. Forward: Guido's blog (and I presume others at that site), Ars 
Technica, Stack Overflow, Slashdot, most or all PHP-based web fora, ....

-- 
Terry Jan Reedy


From solipsis at pitrou.net  Fri Apr  1 02:25:18 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Fri, 1 Apr 2011 02:25:18 +0200
Subject: [Python-Dev] Please revert autofolding of tracker edit form
References: <in07nd$5nb$1@dough.gmane.org>
	<AANLkTimtQVdm-9_cveQtbeCdkOj6cQ7C=Qa+qMqp+zBY@mail.gmail.com>
	<20110331013208.0a708867@pitrou.net> <4D94F948.3010203@v.loewis.de>
Message-ID: <20110401022518.3ae186b3@pitrou.net>

On Thu, 31 Mar 2011 23:59:36 +0200
"Martin v. L?wis" <martin at v.loewis.de> wrote:
> > What's more, it lacks the most important: the issue title.
> 
> Notice that the issue title was always there, in your browser's title
> bar (unless you have a web browser that doesn't display the page title).

Sure, but it's far removed from where the rest of the issue is
displayed.

Regards

Antoine.



From tjreedy at udel.edu  Fri Apr  1 02:28:27 2011
From: tjreedy at udel.edu (Terry Reedy)
Date: Thu, 31 Mar 2011 20:28:27 -0400
Subject: [Python-Dev] Impaired Usability of the Mercurial Source Viewer
In-Reply-To: <BE0A9A1F-89FA-4ECF-9CC1-8B09027370FF@gmail.com>
References: <BE0A9A1F-89FA-4ECF-9CC1-8B09027370FF@gmail.com>
Message-ID: <in3679$l8g$1@dough.gmane.org>

On 3/31/2011 7:15 PM, Raymond Hettinger wrote:
> The Hg source viewer needs to be tweaked to improve its usability.
> What we've got now is a step backwards from the previous svn viewer.
>
> Looking at http://hg.python.org/cpython/file/default/Lib/linecache.py
> for example,
> there are two issues. 1) the code cannot be cut-and-pasted because the
> line numbers are commingled with the source text. 2) the code is hard
> to read because of the alternating white and gray bars.
>
> Contrast that to the more typical, beautiful presentations with a solid
> background and the ability to cut-and-paste without grabbing line
> numbers:

I completely agree with this. The bars are for super-long lines, 
especially of data, as with 132-character Fortran output on old IBM 
printers. Even then, the bars were three lines wide. 80-character text 
lines do not need them.


-- 
Terry Jan Reedy


From solipsis at pitrou.net  Fri Apr  1 02:26:36 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Fri, 1 Apr 2011 02:26:36 +0200
Subject: [Python-Dev] Please revert autofolding of tracker edit form
References: <in07nd$5nb$1@dough.gmane.org>
	<AANLkTimtQVdm-9_cveQtbeCdkOj6cQ7C=Qa+qMqp+zBY@mail.gmail.com>
	<20110331040513.A82B54B71F@kimball.webabinitio.net>
	<19860.23698.260679.739252@montanaro.dyndns.org>
	<19860.25443.131458.388306@montanaro.dyndns.org>
	<20110331141612.5ad2097a@pitrou.net>
	<AANLkTik927x3CBJHdrXBuGNkOhtZCV9NSS7A_4UHnaoB@mail.gmail.com>
	<1301579959.3535.10.camel@localhost.localdomain>
	<in2bg7$v5q$1@dough.gmane.org>
Message-ID: <20110401022636.14945972@pitrou.net>

On Thu, 31 Mar 2011 12:52:23 -0400
Terry Reedy <tjreedy at udel.edu> wrote:
> 
> Here is my proposal for a redesign based on an analysis of my usage ;-).
> I have a 1600x1050 (or thereabouts), 20" (measured) diagonal, 17" across 
> screen.
> 
> The left column has a 7/8" margin, 2 3/8" text area, and 1" gutter. 
> These could be shrunk to say, 1/4, 2, 1/4, saving 1 3/8".
> The comment box is 8 1/2", message boxes are wider, but the extra width 
> is not used if one uses hard returns in the comment box. In any case, 
> the message boxes could be narrowed by 1 1/8".
> This would allow a right column of 1/4+2+1/4".

Let's say that by using non-metric units you have already lost me,
sorry.

Regards

Antoine.



From solipsis at pitrou.net  Fri Apr  1 02:30:32 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Fri, 1 Apr 2011 02:30:32 +0200
Subject: [Python-Dev] Impaired Usability of the Mercurial Source Viewer
References: <BE0A9A1F-89FA-4ECF-9CC1-8B09027370FF@gmail.com>
Message-ID: <20110401023032.3837e0a6@pitrou.net>

On Thu, 31 Mar 2011 16:15:48 -0700
Raymond Hettinger <raymond.hettinger at gmail.com> wrote:
> The Hg source viewer needs to be tweaked to improve its usability.
> What we've got now is a step backwards from the previous svn viewer.
> 
> Looking at http://hg.python.org/cpython/file/default/Lib/linecache.py for example,
> there are two issues.   1) the code cannot be cut-and-pasted because the
> line numbers are commingled with the source text.  2) the code is hard
> to read because of the alternating white and gray bars.
> 
> Contrast that to the more typical, beautiful presentations with a solid
> background and the ability to cut-and-paste without grabbing line
> numbers:

This is something you need to discuss with the Mercurial project.
See http://mercurial.selenic.com/bts/ and
http://mercurial.selenic.com/wiki/ContributingChanges

The advantage of Mercurial over SVN is that it's written in Python ;)

Regards

Antoine.



From solipsis at pitrou.net  Fri Apr  1 02:28:42 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Fri, 1 Apr 2011 02:28:42 +0200
Subject: [Python-Dev] warn_unused_result warnings
References: <AANLkTik96YJkR4oxYJRmNaZY9ymPX4ahOMAqfjUMDTfz@mail.gmail.com>
Message-ID: <20110401022842.56b6b492@pitrou.net>

On Thu, 31 Mar 2011 18:11:46 -0500
Benjamin Peterson <benjamin at python.org> wrote:
> I'm rather sick of seeing this warnings on all compiles, so I propose
> we enable the -Wno-unused-results option. I judge that most of the
> cases where this occurs are error reporting functions, where not much
> with return code can be done.

If you manage to hack this gcc-specific option into the build chain
without breaking other compilers, why not :)

Regards

Antoine.



From raymond.hettinger at gmail.com  Fri Apr  1 02:46:12 2011
From: raymond.hettinger at gmail.com (Raymond Hettinger)
Date: Thu, 31 Mar 2011 17:46:12 -0700
Subject: [Python-Dev] Impaired Usability of the Mercurial Source Viewer
In-Reply-To: <20110401023032.3837e0a6@pitrou.net>
References: <BE0A9A1F-89FA-4ECF-9CC1-8B09027370FF@gmail.com>
	<20110401023032.3837e0a6@pitrou.net>
Message-ID: <B96CF3A2-37C0-4B35-AE60-1B1330A9EC7C@gmail.com>


On Mar 31, 2011, at 5:30 PM, Antoine Pitrou wrote:

> On Thu, 31 Mar 2011 16:15:48 -0700
> Raymond Hettinger <raymond.hettinger at gmail.com> wrote:
>> The Hg source viewer needs to be tweaked to improve its usability.
>> What we've got now is a step backwards from the previous svn viewer.
>> 
>> Looking at http://hg.python.org/cpython/file/default/Lib/linecache.py for example,
>> there are two issues.   1) the code cannot be cut-and-pasted because the
>> line numbers are commingled with the source text.  2) the code is hard
>> to read because of the alternating white and gray bars.
>> 
>> Contrast that to the more typical, beautiful presentations with a solid
>> background and the ability to cut-and-paste without grabbing line
>> numbers:
> 
> This is something you need to discuss with the Mercurial project.
> See http://mercurial.selenic.com/bts/ and
> http://mercurial.selenic.com/wiki/ContributingChanges

Are you saying that our official code viewer isn't configurable
without getting a change through the Hg project itself?

Does that mean that we have to live with it in its crippled form?


Raymond


From solipsis at pitrou.net  Fri Apr  1 02:55:10 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Fri, 01 Apr 2011 02:55:10 +0200
Subject: [Python-Dev] Impaired Usability of the Mercurial Source Viewer
In-Reply-To: <B96CF3A2-37C0-4B35-AE60-1B1330A9EC7C@gmail.com>
References: <BE0A9A1F-89FA-4ECF-9CC1-8B09027370FF@gmail.com>
	<20110401023032.3837e0a6@pitrou.net>
	<B96CF3A2-37C0-4B35-AE60-1B1330A9EC7C@gmail.com>
Message-ID: <1301619310.3523.4.camel@localhost.localdomain>

On Thursday, March 31, 2011 at 17:46 -0700, Raymond Hettinger wrote:
> On Mar 31, 2011, at 5:30 PM, Antoine Pitrou wrote:
> 
> > On Thu, 31 Mar 2011 16:15:48 -0700
> > Raymond Hettinger <raymond.hettinger at gmail.com> wrote:
> >> The Hg source viewer needs to be tweaked to improve its usability.
> >> What we've got now is a step backwards from the previous svn viewer.
> >> 
> >> Looking at http://hg.python.org/cpython/file/default/Lib/linecache.py for example,
> >> there are two issues.   1) the code cannot be cut-and-pasted because the
> >> line numbers are commingled with the source text.  2) the code is hard
> >> to read because of the alternating white and gray bars.
> >> 
> >> Contrast that to the more typical, beautiful presentations with a solid
> >> background and the ability to cut-and-paste without grabbing line
> >> numbers:
> > 
> > This is something you need to discuss with the Mercurial project.
> > See http://mercurial.selenic.com/bts/ and
> > http://mercurial.selenic.com/wiki/ContributingChanges
> 
> Are you saying that our official code viewer isn't configurable
> without getting a change through the Hg project itself?

Well, it is something that is configurable through patching.
You might want to keep the patch private to hg.python.org, of course.
But perhaps you can also convince the Mercurial devs that they should do it
themselves, if you are persuasive enough ;)

> Does that mean that we have have to live with it in its crippled form?

Well, I'm sure we have lived with lots of things in "crippled form"
over the years, including SVN itself. I don't think the "source code
viewer" is impacting anybody's ability to contribute. At worst you can
click the "raw" link on the left and get a nice clean view of the source
in your editor of choice.

Regards

Antoine.



From ethan at stoneleaf.us  Fri Apr  1 03:06:30 2011
From: ethan at stoneleaf.us (Ethan Furman)
Date: Thu, 31 Mar 2011 18:06:30 -0700
Subject: [Python-Dev] Please revert autofolding of tracker edit form
In-Reply-To: <1166C08E-76BB-4E55-BA6D-827D57A994DF@gmail.com>
References: <in07nd$5nb$1@dough.gmane.org>	<AANLkTimtQVdm-9_cveQtbeCdkOj6cQ7C=Qa+qMqp+zBY@mail.gmail.com>	<20110331040513.A82B54B71F@kimball.webabinitio.net>	<19860.23698.260679.739252@montanaro.dyndns.org>	<19860.25443.131458.388306@montanaro.dyndns.org>	<20110331141612.5ad2097a@pitrou.net>	<AANLkTik927x3CBJHdrXBuGNkOhtZCV9NSS7A_4UHnaoB@mail.gmail.com>	<1301579959.3535.10.camel@localhost.localdomain>	<in2bg7$v5q$1@dough.gmane.org>
	<1166C08E-76BB-4E55-BA6D-827D57A994DF@gmail.com>
Message-ID: <4D952516.70405@stoneleaf.us>

Raymond Hettinger wrote:
> On Mar 31, 2011, at 9:52 AM, Terry Reedy wrote:
>> I would like to try putting the comment box after the last (most 
>> recent) comment, as that is the message one most ofter responds to. 
>> Having to now scroll up and down between comment box and last 
>> message(s) is often of a nuisance.
> 
> While that sounds logical, I think it will be a usability problem.  If 
> someone doesn't see a the comment box immediately, they may not know to 
> scroll down past dozens of messages to find it.

Are there cases where someone should be posting new comments who 
/hasn't/ read the existing comments?  I would hope that new comments 
would come only after reading what has already transpired -- in which 
case one would find the comment box when one runs out of previous 
comments to read.

~Ethan~

From raymond.hettinger at gmail.com  Fri Apr  1 03:20:53 2011
From: raymond.hettinger at gmail.com (Raymond Hettinger)
Date: Thu, 31 Mar 2011 18:20:53 -0700
Subject: [Python-Dev] Impaired Usability of the Mercurial Source Viewer
In-Reply-To: <1301619310.3523.4.camel@localhost.localdomain>
References: <BE0A9A1F-89FA-4ECF-9CC1-8B09027370FF@gmail.com>
	<20110401023032.3837e0a6@pitrou.net>
	<B96CF3A2-37C0-4B35-AE60-1B1330A9EC7C@gmail.com>
	<1301619310.3523.4.camel@localhost.localdomain>
Message-ID: <16094AE6-5063-424C-82A9-0CF2DBC594DE@gmail.com>


On Mar 31, 2011, at 5:55 PM, Antoine Pitrou wrote:

> On Thursday, March 31, 2011 at 17:46 -0700, Raymond Hettinger wrote:
>> On Mar 31, 2011, at 5:30 PM, Antoine Pitrou wrote:
>> 
>>> On Thu, 31 Mar 2011 16:15:48 -0700
>>> Raymond Hettinger <raymond.hettinger at gmail.com> wrote:
>>>> The Hg source viewer needs to be tweaked to improve its usability.
>>>> What we've got now is a step backwards from the previous svn viewer.
>>>> 
>>>> Looking at http://hg.python.org/cpython/file/default/Lib/linecache.py for example,
>>>> there are two issues.   1) the code cannot be cut-and-pasted because the
>>>> line numbers are commingled with the source text.  2) the code is hard
>>>> to read because of the alternating white and gray bars.
>>>> 
>>>> Contrast that to the more typical, beautiful presentations with a solid
>>>> background and the ability to cut-and-paste without grabbing line
>>>> numbers:
>>> 
>>> This is something you need to discuss with the Mercurial project.
>>> See http://mercurial.selenic.com/bts/ and
>>> http://mercurial.selenic.com/wiki/ContributingChanges
>> 
>> Are you saying that our official code viewer isn't configurable
>> without getting a change through the Hg project itself?
> 
> Well, it is something that is configurable through patching.
> You might want to keep the patch private to hg.python.org, of course.
> But perhaps you can also convince Mercurial devs that they should it
> themselves, if you are persuasive enough ;)

Surely, we at least have control over our own CSS.
At http://hg.python.org/cpython/static/style-paper.css 
there are two lines that control the alternating bars:

.parity0 { background-color: #f0f0f0; }
.parity1 { background-color: white; }

One of those could be changed to match the other so that we
can at least get a solid background.


Raymond

From victor.stinner at haypocalc.com  Fri Apr  1 03:23:10 2011
From: victor.stinner at haypocalc.com (Victor Stinner)
Date: Fri, 01 Apr 2011 03:23:10 +0200
Subject: [Python-Dev] warn_unused_result warnings
In-Reply-To: <AANLkTik96YJkR4oxYJRmNaZY9ymPX4ahOMAqfjUMDTfz@mail.gmail.com>
References: <AANLkTik96YJkR4oxYJRmNaZY9ymPX4ahOMAqfjUMDTfz@mail.gmail.com>
Message-ID: <4D9528FE.2060609@haypocalc.com>

On 01/04/2011 01:11, Benjamin Peterson wrote:
> I'm rather sick of seeing this warnings on all compiles, so I propose
> we enable the -Wno-unused-results option. I judge that most of the
> cases where this occurs are error reporting functions, where not much
> with return code can be done.
Can't we try to fix the warnings instead of turning them off? Or is it 
possible to only turn off these warnings on a specific function?

Modules/faulthandler.c emits a lot of such compiler warnings, but there 
is nothing useful to do on a write() error. I tried to turn off the 
warning on one statement using (void)write(...), but gcc doesn't 
understand that I don't care about write()'s result here.

Victor

From benjamin at python.org  Fri Apr  1 03:28:42 2011
From: benjamin at python.org (Benjamin Peterson)
Date: Thu, 31 Mar 2011 20:28:42 -0500
Subject: [Python-Dev] warn_unused_result warnings
In-Reply-To: <4D9528FE.2060609@haypocalc.com>
References: <AANLkTik96YJkR4oxYJRmNaZY9ymPX4ahOMAqfjUMDTfz@mail.gmail.com>
	<4D9528FE.2060609@haypocalc.com>
Message-ID: <AANLkTikTiFNoVf-oG8+RXWp0F-KJjA+RCb9neQB+y8J_@mail.gmail.com>

2011/3/31 Victor Stinner <victor.stinner at haypocalc.com>:
> On 01/04/2011 01:11, Benjamin Peterson wrote:
>>
>> I'm rather sick of seeing this warnings on all compiles, so I propose
>> we enable the -Wno-unused-results option. I judge that most of the
>> cases where this occurs are error reporting functions, where not much
>> with return code can be done.
>
> Can't we try to fix the warnings instead of turning them off? Or is it
> possible to only turn off these warnings on a specific function?

It strikes me as excessively ugly. (see below)

>
> Modules/faulthandler.c emits a lot of such compiler warning, but there is
> nothing interesting to do on write() error. I tried to turn off the warning
> on an instruction using (void)write(...), but gcc doesn't understand that I
> don't care of write() result here.

You have to actually have an assignment, like x = write().
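
A minimal C sketch of that workaround, for illustration only (the helper
and names are hypothetical; the exact form used in CPython may differ):

    #include <unistd.h>

    static void dump_message(int fd, const char *buf, size_t len)
    {
        ssize_t res;

        /* Assigning the result satisfies the warn_unused_result
           attribute, so gcc's -Wunused-result stays quiet; the (void)
           read marks res as deliberately ignored. */
        res = write(fd, buf, len);
        (void)res;
    }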


-- 
Regards,
Benjamin

From victor.stinner at haypocalc.com  Fri Apr  1 03:28:39 2011
From: victor.stinner at haypocalc.com (Victor Stinner)
Date: Fri, 01 Apr 2011 03:28:39 +0200
Subject: [Python-Dev] Impaired Usability of the Mercurial Source Viewer
In-Reply-To: <BE0A9A1F-89FA-4ECF-9CC1-8B09027370FF@gmail.com>
References: <BE0A9A1F-89FA-4ECF-9CC1-8B09027370FF@gmail.com>
Message-ID: <4D952A47.6060707@haypocalc.com>

On 01/04/2011 01:15, Raymond Hettinger wrote:
> The Hg source viewer needs to be tweaked to improve its usability.
> What we've got now is a step backwards from the previous svn viewer.
>
> Looking at 
> http://hg.python.org/cpython/file/default/Lib/linecache.py for example,
> there are two issues.   1) the code cannot be cut-and-pasted because the
> line numbers are commingled with the source text.  2) the code is hard
> to read because of the alternating white and gray bars.
You can use mirrors like:
https://bitbucket.org/mirror/cpython/

On Bitbucket, line numbers are displayed, but you can copy/paste code 
without the line number. And the background is just white. For example:
https://bitbucket.org/mirror/cpython/src/3558eecd84f0/Modules/faulthandler.c

Victor

From raymond.hettinger at gmail.com  Fri Apr  1 03:42:35 2011
From: raymond.hettinger at gmail.com (Raymond Hettinger)
Date: Thu, 31 Mar 2011 18:42:35 -0700
Subject: [Python-Dev] Impaired Usability of the Mercurial Source Viewer
In-Reply-To: <4D952A47.6060707@haypocalc.com>
References: <BE0A9A1F-89FA-4ECF-9CC1-8B09027370FF@gmail.com>
	<4D952A47.6060707@haypocalc.com>
Message-ID: <571B3262-D22D-433A-B79B-9015BAC49FE1@gmail.com>


On Mar 31, 2011, at 6:28 PM, Victor Stinner wrote:

> On 01/04/2011 01:15, Raymond Hettinger wrote:
>> The Hg source viewer needs to be tweaked to improve its usability.
>> What we've got now is a step backwards from the previous svn viewer.
>> 
>> Looking at http://hg.python.org/cpython/file/default/Lib/linecache.py for example,
>> there are two issues.   1) the code cannot be cut-and-pasted because the
>> line numbers are commingled with the source text.  2) the code is hard
>> to read because of the alternating white and gray bars.
> You can use mirrors like:
> https://bitbucket.org/mirror/cpython/
> 
> On Bitbucket, line numbers are displayed, but you can copy/paste code without the line number. And the background is just white. For example:
> https://bitbucket.org/mirror/cpython/src/3558eecd84f0/Modules/faulthandler.c
> 

That's *way* better:

  https://bitbucket.org/mirror/cpython/src/3558eecd84f0/Lib/linecache.py

Why can't we have that for our primary source viewer?


Raymond


From benjamin at python.org  Fri Apr  1 03:44:09 2011
From: benjamin at python.org (Benjamin Peterson)
Date: Thu, 31 Mar 2011 20:44:09 -0500
Subject: [Python-Dev] Impaired Usability of the Mercurial Source Viewer
In-Reply-To: <571B3262-D22D-433A-B79B-9015BAC49FE1@gmail.com>
References: <BE0A9A1F-89FA-4ECF-9CC1-8B09027370FF@gmail.com>
	<4D952A47.6060707@haypocalc.com>
	<571B3262-D22D-433A-B79B-9015BAC49FE1@gmail.com>
Message-ID: <AANLkTi=JK4U9nSf+PFOxXFqRKVhVnoHmy1fntFWDQZWy@mail.gmail.com>

2011/3/31 Raymond Hettinger <raymond.hettinger at gmail.com>:
>
> On Mar 31, 2011, at 6:28 PM, Victor Stinner wrote:
>
> On 01/04/2011 01:15, Raymond Hettinger wrote:
>
> The Hg source viewer needs to be tweaked to improve its usability.
>
> What we've got now is a step backwards from the previous svn viewer.
>
> Looking at http://hg.python.org/cpython/file/default/Lib/linecache.py for
> example,
>
> there are two issues.  1) the code cannot be cut-and-pasted because the
>
> line numbers are commingled with the source text.  2) the code is hard
>
> to read because of the alternating white and gray bars.
>
> You can use mirrors like:
> https://bitbucket.org/mirror/cpython/
>
> On Bitbucket, line numbers are displayed, but you can copy/paste code
> without the line number. And the background is just white. For example:
> https://bitbucket.org/mirror/cpython/src/3558eecd84f0/Modules/faulthandler.c
>
>
> That's *way* better:
>   https://bitbucket.org/mirror/cpython/src/3558eecd84f0/Lib/linecache.py
> Why can't we have that for our primary source viewer.

Because it's closed source.



-- 
Regards,
Benjamin

From scott+python-dev at scottdial.com  Fri Apr  1 03:18:21 2011
From: scott+python-dev at scottdial.com (Scott Dial)
Date: Thu, 31 Mar 2011 21:18:21 -0400
Subject: [Python-Dev] Impaired Usability of the Mercurial Source Viewer
In-Reply-To: <20110401023032.3837e0a6@pitrou.net>
References: <BE0A9A1F-89FA-4ECF-9CC1-8B09027370FF@gmail.com>
	<20110401023032.3837e0a6@pitrou.net>
Message-ID: <4D9527DD.4050109@scottdial.com>

On 3/31/2011 8:30 PM, Antoine Pitrou wrote:
> On Thu, 31 Mar 2011 16:15:48 -0700
> Raymond Hettinger <raymond.hettinger at gmail.com> wrote:
>> The Hg source viewer needs to be tweaked to improve its usability.
>> What we've got now is a step backwards from the previous svn viewer.
>>
>> Looking at http://hg.python.org/cpython/file/default/Lib/linecache.py for example,
>> there are two issues.   1) the code cannot be cut-and-pasted because the
>> line numbers are commingled with the source text.  2) the code is hard
>> to read because of the alternating white and gray bars.
>>
>> Contrast that to the more typical, beautiful presentations with a solid
>> background and the ability to cut-and-paste without grabbing line
>> numbers:
> 
> This is something you need to discuss with the Mercurial project.
> See http://mercurial.selenic.com/bts/ and
> http://mercurial.selenic.com/wiki/ContributingChanges

The hgweb interface is templated. You can already change it via the
"style" setting in hgweb.conf. There are several styles already available
in the templates folder of the install, and you could provide your own if
you like, too.

-- 
Scott Dial
scott at scottdial.com
scodial at cs.indiana.edu

From guido at python.org  Fri Apr  1 03:49:22 2011
From: guido at python.org (Guido van Rossum)
Date: Thu, 31 Mar 2011 18:49:22 -0700
Subject: [Python-Dev] Impaired Usability of the Mercurial Source Viewer
In-Reply-To: <AANLkTi=JK4U9nSf+PFOxXFqRKVhVnoHmy1fntFWDQZWy@mail.gmail.com>
References: <BE0A9A1F-89FA-4ECF-9CC1-8B09027370FF@gmail.com>
	<4D952A47.6060707@haypocalc.com>
	<571B3262-D22D-433A-B79B-9015BAC49FE1@gmail.com>
	<AANLkTi=JK4U9nSf+PFOxXFqRKVhVnoHmy1fntFWDQZWy@mail.gmail.com>
Message-ID: <AANLkTine3RsqpvtCQBzOcdZS_tQzd2rJxZbSr+qOHzwE@mail.gmail.com>

Can someone give Raymond write access to the website already so he can
fix it himself?

On Thu, Mar 31, 2011 at 6:44 PM, Benjamin Peterson <benjamin at python.org> wrote:
> 2011/3/31 Raymond Hettinger <raymond.hettinger at gmail.com>:
>>
>> On Mar 31, 2011, at 6:28 PM, Victor Stinner wrote:
>>
>> On 01/04/2011 01:15, Raymond Hettinger wrote:
>>
>> The Hg source viewer needs to be tweaked to improve its usability.
>>
>> What we've got now is a step backwards from the previous svn viewer.
>>
>> Looking at http://hg.python.org/cpython/file/default/Lib/linecache.py for
>> example,
>>
>> there are two issues.  1) the code cannot be cut-and-pasted because the
>>
>> line numbers are commingled with the source text.  2) the code is hard
>>
>> to read because of the alternating white and gray bars.
>>
>> You can use mirrors like:
>> https://bitbucket.org/mirror/cpython/
>>
>> On Bitbucket, line numbers are displayed, but you can copy/paste code
>> without the line number. And the background is just white. For example:
>> https://bitbucket.org/mirror/cpython/src/3558eecd84f0/Modules/faulthandler.c
>>
>>
>> That's *way* better:
>>   https://bitbucket.org/mirror/cpython/src/3558eecd84f0/Lib/linecache.py
>> Why can't we have that for our primary source viewer.
>
> Because it's closed source.
>
>
>
> --
> Regards,
> Benjamin



-- 
--Guido van Rossum (python.org/~guido)

From stephen at xemacs.org  Fri Apr  1 04:02:48 2011
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Fri, 01 Apr 2011 11:02:48 +0900
Subject: [Python-Dev] Please revert autofolding of tracker edit form
In-Reply-To: <4D950C7B.4060106@stoneleaf.us>
References: <in07nd$5nb$1@dough.gmane.org>
	<AANLkTimtQVdm-9_cveQtbeCdkOj6cQ7C=Qa+qMqp+zBY@mail.gmail.com>
	<20110331040513.A82B54B71F@kimball.webabinitio.net>
	<19860.23698.260679.739252@montanaro.dyndns.org>
	<19860.25443.131458.388306@montanaro.dyndns.org>
	<20110331141612.5ad2097a@pitrou.net>
	<AANLkTik927x3CBJHdrXBuGNkOhtZCV9NSS7A_4UHnaoB@mail.gmail.com>
	<1301579959.3535.10.camel@localhost.localdomain>
	<in2bg7$v5q$1@dough.gmane.org> <4D9508DB.2080703@g.nevcal.com>
	<4D950C7B.4060106@stoneleaf.us>
Message-ID: <87d3l6vid3.fsf@uwakimon.sk.tsukuba.ac.jp>

Ethan Furman writes:

 > -1 on reverse time sequence of messages -- no top posting!  ;)

I'd really like this to be a browse-time option, and for bonus points,
it should be "sticky".  For issues I'm not familiar with, I want to
read in more or less chronological order.  For issues I *am* familiar
with, I want reverse chronological order.

From skip at pobox.com  Fri Apr  1 04:12:47 2011
From: skip at pobox.com (skip at pobox.com)
Date: Thu, 31 Mar 2011 21:12:47 -0500
Subject: [Python-Dev] Please revert autofolding of tracker edit form
In-Reply-To: <1166C08E-76BB-4E55-BA6D-827D57A994DF@gmail.com>
References: <in07nd$5nb$1@dough.gmane.org>
	<AANLkTimtQVdm-9_cveQtbeCdkOj6cQ7C=Qa+qMqp+zBY@mail.gmail.com>
	<20110331040513.A82B54B71F@kimball.webabinitio.net>
	<19860.23698.260679.739252@montanaro.dyndns.org>
	<19860.25443.131458.388306@montanaro.dyndns.org>
	<20110331141612.5ad2097a@pitrou.net>
	<AANLkTik927x3CBJHdrXBuGNkOhtZCV9NSS7A_4UHnaoB@mail.gmail.com>
	<1301579959.3535.10.camel@localhost.localdomain>
	<in2bg7$v5q$1@dough.gmane.org>
	<1166C08E-76BB-4E55-BA6D-827D57A994DF@gmail.com>
Message-ID: <19861.13471.594291.609821@montanaro.dyndns.org>


    >> I would like to try putting the comment box after the last (most
    >> recent) comment, as that is the message one most ofter responds
    >> to. Having to now scroll up and down between comment box and last
    >> message(s) is often of a nuisance.

    Raymond> While that sounds logical, I think it will be a usability
    Raymond> problem.  If someone doesn't see a the comment box immediately,
    Raymond> they may not know to scroll down past dozens of messages to
    Raymond> find it.

For me, the comment box is never available immediately.  I always have to
scroll.  After the first two sections have been filled in, most of that
information is static, excepting the nosy list changes and the occasional
change of state.  Almost all the action is going to be in the comments,
review buttons and patches which you always have to scroll to see.  That's
one reason I asked for the collapsible expanders.

If nothing else, you could make it easy to jump to the comments or the list
of patches with a link near the top of the page labelled "Jump to comments"
(or similar) which links to an anchor further down the page.

    Raymond> Rather that being trial-and-error amateur web page designers,
    Raymond> it would be better to follow proven models. 

I don't know who here uses Chrome, but if you have access to it, take a look
at the bookmark manager.  While it doesn't have what I had in mind at the
leaf level (I'd like the leaves to expand under their parents, not in a
separate pane), it does use expanders to reveal different amounts of detail.
It's a model many other "directory traversal" GUIs use.  Admittedly, we have
a bit flatter hierarchy, but the leaves are huge.  See attached.  (Apologies
for the image size.  Who would have thought such a modest image would be so
hard to compress?)

Skip

-------------- next part --------------
A non-text attachment was scrubbed...
Name: expanders.jpg
Type: image/jpeg
Size: 26587 bytes
Desc: not available
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110331/24197c02/attachment.jpg>

From skip at pobox.com  Fri Apr  1 04:18:07 2011
From: skip at pobox.com (skip at pobox.com)
Date: Thu, 31 Mar 2011 21:18:07 -0500
Subject: [Python-Dev] Please revert autofolding of tracker edit form
In-Reply-To: <in35rp$jt4$1@dough.gmane.org>
References: <in07nd$5nb$1@dough.gmane.org>
	<AANLkTimtQVdm-9_cveQtbeCdkOj6cQ7C=Qa+qMqp+zBY@mail.gmail.com>
	<20110331040513.A82B54B71F@kimball.webabinitio.net>
	<19860.23698.260679.739252@montanaro.dyndns.org>
	<19860.25443.131458.388306@montanaro.dyndns.org>
	<20110331141612.5ad2097a@pitrou.net>
	<AANLkTik927x3CBJHdrXBuGNkOhtZCV9NSS7A_4UHnaoB@mail.gmail.com>
	<1301579959.3535.10.camel@localhost.localdomain>
	<in2bg7$v5q$1@dough.gmane.org>
	<1166C08E-76BB-4E55-BA6D-827D57A994DF@gmail.com>
	<in35rp$jt4$1@dough.gmane.org>
Message-ID: <19861.13791.237601.49550@montanaro.dyndns.org>


    Terry> Even though such is standard for the majority of web fora?

I participate in a couple different web forums (914s and swimming).  Both
present their topics in chronological order and provide a link at the top of
every page which jumps the user to the first unread message (no matter how
many pages there are in the thread or where you happen to be at the moment).

Reverse chronological order is a nightmare for anybody trying to bring
themselves up-to-speed for the first time on a long discussion.  In my mind,
the only place where reverse order makes sense is in cases where messages
rapidly become less important as they age.  Think following the Japanese
earthquake/tsunami aftermath.

Skip

From skip at pobox.com  Fri Apr  1 04:21:45 2011
From: skip at pobox.com (skip at pobox.com)
Date: Thu, 31 Mar 2011 21:21:45 -0500
Subject: [Python-Dev] Please revert autofolding of tracker edit form
In-Reply-To: <20110401022636.14945972@pitrou.net>
References: <in07nd$5nb$1@dough.gmane.org>
	<AANLkTimtQVdm-9_cveQtbeCdkOj6cQ7C=Qa+qMqp+zBY@mail.gmail.com>
	<20110331040513.A82B54B71F@kimball.webabinitio.net>
	<19860.23698.260679.739252@montanaro.dyndns.org>
	<19860.25443.131458.388306@montanaro.dyndns.org>
	<20110331141612.5ad2097a@pitrou.net>
	<AANLkTik927x3CBJHdrXBuGNkOhtZCV9NSS7A_4UHnaoB@mail.gmail.com>
	<1301579959.3535.10.camel@localhost.localdomain>
	<in2bg7$v5q$1@dough.gmane.org> <20110401022636.14945972@pitrou.net>
Message-ID: <19861.14009.499410.635920@montanaro.dyndns.org>


    Antoine> Let's say that by using non-metric units you have already lost
    Antoine> me, sorry.

Wouldn't it be cool if you could feed the text to Google Translator and have
it not only translate the English to bad French, but translate the units to
metric (hopefully with more accuracy than the language conversion)? :-)

Skip

From skip at pobox.com  Fri Apr  1 04:22:51 2011
From: skip at pobox.com (skip at pobox.com)
Date: Thu, 31 Mar 2011 21:22:51 -0500
Subject: [Python-Dev] Please revert autofolding of tracker edit form
In-Reply-To: <4D952516.70405@stoneleaf.us>
References: <in07nd$5nb$1@dough.gmane.org>
	<AANLkTimtQVdm-9_cveQtbeCdkOj6cQ7C=Qa+qMqp+zBY@mail.gmail.com>
	<20110331040513.A82B54B71F@kimball.webabinitio.net>
	<19860.23698.260679.739252@montanaro.dyndns.org>
	<19860.25443.131458.388306@montanaro.dyndns.org>
	<20110331141612.5ad2097a@pitrou.net>
	<AANLkTik927x3CBJHdrXBuGNkOhtZCV9NSS7A_4UHnaoB@mail.gmail.com>
	<1301579959.3535.10.camel@localhost.localdomain>
	<in2bg7$v5q$1@dough.gmane.org>
	<1166C08E-76BB-4E55-BA6D-827D57A994DF@gmail.com>
	<4D952516.70405@stoneleaf.us>
Message-ID: <19861.14075.988070.438343@montanaro.dyndns.org>


    Ethan> Are there cases where someone should be posting new comments who
    Ethan> /hasn't/ read the existing comments?  I would hope that new
    Ethan> comments would come only after reading what has already
    Ethan> transpired -- in which case one would find the comment box when
    Ethan> one runs out of previous comments to read.

Again, drawing on the forum model, the new message box is always at the
bottom (of each page) in my experience.

Skip

From v+python at g.nevcal.com  Fri Apr  1 04:32:23 2011
From: v+python at g.nevcal.com (Glenn Linderman)
Date: Thu, 31 Mar 2011 19:32:23 -0700
Subject: [Python-Dev] Please revert autofolding of tracker edit form
In-Reply-To: <8DBBAB30-756D-46E2-9E2E-22B73836285E@gmail.com>
References: <in07nd$5nb$1@dough.gmane.org>	<AANLkTimtQVdm-9_cveQtbeCdkOj6cQ7C=Qa+qMqp+zBY@mail.gmail.com>	<20110331040513.A82B54B71F@kimball.webabinitio.net>	<19860.23698.260679.739252@montanaro.dyndns.org>	<19860.25443.131458.388306@montanaro.dyndns.org>	<20110331141612.5ad2097a@pitrou.net>	<AANLkTik927x3CBJHdrXBuGNkOhtZCV9NSS7A_4UHnaoB@mail.gmail.com>	<1301579959.3535.10.camel@localhost.localdomain>	<in2bg7$v5q$1@dough.gmane.org>	<1166C08E-76BB-4E55-BA6D-827D57A994DF@gmail.com>	<1301616164.7031.5.camel@localhost.localdomain>
	<8DBBAB30-756D-46E2-9E2E-22B73836285E@gmail.com>
Message-ID: <4D953937.9040102@g.nevcal.com>

On 3/31/2011 5:22 PM, Raymond Hettinger wrote:
> Perhaps the most important part is that the comment box goes at the top.

As long as it is adjacent to the last comment, that would be fine.

(said while tongue removing the last bit of supper from the space 
outside the teeth)

From solipsis at pitrou.net  Fri Apr  1 05:44:16 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Fri, 1 Apr 2011 05:44:16 +0200
Subject: [Python-Dev] Impaired Usability of the Mercurial Source Viewer
In-Reply-To: <16094AE6-5063-424C-82A9-0CF2DBC594DE@gmail.com>
References: <BE0A9A1F-89FA-4ECF-9CC1-8B09027370FF@gmail.com>
	<20110401023032.3837e0a6@pitrou.net>
	<B96CF3A2-37C0-4B35-AE60-1B1330A9EC7C@gmail.com>
	<1301619310.3523.4.camel@localhost.localdomain>
	<16094AE6-5063-424C-82A9-0CF2DBC594DE@gmail.com>
Message-ID: <20110401054416.16e265bb@pitrou.net>

On Thu, 31 Mar 2011 18:20:53 -0700
Raymond Hettinger <raymond.hettinger at gmail.com> wrote:
> 
> Surely, we at least have control over our own CSS.
> At http://hg.python.org/cpython/static/style-paper.css 
> there are two lines that control the alternating bars:
> 
> .parity0 { background-color: #f0f0f0; }
> .parity1 { background-color: white; }
> 
> One of those could be changed to match the other so that we
> at can at least get a solid background.

It also applies to the changelog and therefore would make the changelog
uglier (you had already asked me to make that change and I reverted it
after I tried it). The changelog is, IMHO, a bit more important than the
source viewer.

Impacting only the source viewer looks like it would require a patch
to the generation logic, although I could be mistaken.

Regards

Antoine.

From anikom15 at gmail.com  Fri Apr  1 05:44:46 2011
From: anikom15 at gmail.com (Westley Martínez)
Date: Thu, 31 Mar 2011 20:44:46 -0700
Subject: [Python-Dev] Please revert autofolding of tracker edit form
In-Reply-To: <19861.13791.237601.49550@montanaro.dyndns.org>
References: <in07nd$5nb$1@dough.gmane.org>
	<AANLkTimtQVdm-9_cveQtbeCdkOj6cQ7C=Qa+qMqp+zBY@mail.gmail.com>
	<20110331040513.A82B54B71F@kimball.webabinitio.net>
	<19860.23698.260679.739252@montanaro.dyndns.org>
	<19860.25443.131458.388306@montanaro.dyndns.org>
	<20110331141612.5ad2097a@pitrou.net>
	<AANLkTik927x3CBJHdrXBuGNkOhtZCV9NSS7A_4UHnaoB@mail.gmail.com>
	<1301579959.3535.10.camel@localhost.localdomain>
	<in2bg7$v5q$1@dough.gmane.org>
	<1166C08E-76BB-4E55-BA6D-827D57A994DF@gmail.com>
	<in35rp$jt4$1@dough.gmane.org>
	<19861.13791.237601.49550@montanaro.dyndns.org>
Message-ID: <1301629486.8671.2.camel@localhost.localdomain>

On Thu, 2011-03-31 at 21:18 -0500, skip at pobox.com wrote:
> Terry> Even though such is standard for the majority of web fora?
> 
> I participate in a couple different web forums (914s and swimming).  Both
> present their topics in chronological order and provide a link at the top of
> every page which jumps the user to the first unread message (no matter how
> many pages there are in the thread or where you happen to be at the moment).
> 
> Reverse chronological order is a nightmare for anybody trying to bring
> themselves up-to-speed for the first time on a long discussion.  In my mind,
> the only place where reverse order makes sense is in cases where messages
> rapidly become less important as they age.  Think following the Japanese
> earthquake/tsunami aftermath.
> 
> Skip

Exactly; my blog is in reverse chronological order because it's more of
a news bulletin and less of a discussion thread.  As for the comment
box, why not have it at the top AND the bottom?  The top could have the
entire form and the bottom could have just a small quick-reply box, or
perhaps a back-to-top button (which probably exists already).


From anikom15 at gmail.com  Fri Apr  1 05:46:18 2011
From: anikom15 at gmail.com (Westley Martínez)
Date: Thu, 31 Mar 2011 20:46:18 -0700
Subject: [Python-Dev] Please revert autofolding of tracker edit form
In-Reply-To: <19861.14009.499410.635920@montanaro.dyndns.org>
References: <in07nd$5nb$1@dough.gmane.org>
	<AANLkTimtQVdm-9_cveQtbeCdkOj6cQ7C=Qa+qMqp+zBY@mail.gmail.com>
	<20110331040513.A82B54B71F@kimball.webabinitio.net>
	<19860.23698.260679.739252@montanaro.dyndns.org>
	<19860.25443.131458.388306@montanaro.dyndns.org>
	<20110331141612.5ad2097a@pitrou.net>
	<AANLkTik927x3CBJHdrXBuGNkOhtZCV9NSS7A_4UHnaoB@mail.gmail.com>
	<1301579959.3535.10.camel@localhost.localdomain>
	<in2bg7$v5q$1@dough.gmane.org> <20110401022636.14945972@pitrou.net>
	<19861.14009.499410.635920@montanaro.dyndns.org>
Message-ID: <1301629578.8671.3.camel@localhost.localdomain>

On Thu, 2011-03-31 at 21:21 -0500, skip at pobox.com wrote:
> Antoine> Let's say that by using non-metric units you have already lost
>     Antoine> me, sorry.
> 
> Wouldn't it be cool if you could feed the text to Google Translator and have
> it not only translate the English to bad French, but translate the units to
> metric (hopefully with more accuracy than the language conversion)? :-)
> 
> Skip

It'd be accurate enough for most cases, but still limited by double
precision floating-point math.


From tjreedy at udel.edu  Fri Apr  1 06:02:18 2011
From: tjreedy at udel.edu (Terry Reedy)
Date: Fri, 01 Apr 2011 00:02:18 -0400
Subject: [Python-Dev] Please revert autofolding of tracker edit form
In-Reply-To: <20110401022636.14945972@pitrou.net>
References: <in07nd$5nb$1@dough.gmane.org>	<AANLkTimtQVdm-9_cveQtbeCdkOj6cQ7C=Qa+qMqp+zBY@mail.gmail.com>	<20110331040513.A82B54B71F@kimball.webabinitio.net>	<19860.23698.260679.739252@montanaro.dyndns.org>	<19860.25443.131458.388306@montanaro.dyndns.org>	<20110331141612.5ad2097a@pitrou.net>	<AANLkTik927x3CBJHdrXBuGNkOhtZCV9NSS7A_4UHnaoB@mail.gmail.com>	<1301579959.3535.10.camel@localhost.localdomain>	<in2bg7$v5q$1@dough.gmane.org>
	<20110401022636.14945972@pitrou.net>
Message-ID: <in3io8$8a7$1@dough.gmane.org>

On 3/31/2011 8:26 PM, Antoine Pitrou wrote:
> On Thu, 31 Mar 2011 12:52:23 -0400
> Terry Reedy<tjreedy at udel.edu>  wrote:
>>
>> Here is my proposal for a redesign based on an analysis of my usage ;-).
>> I have a 1600x1050 (or thereabouts), 20" (measured) diagonal, 17" across
>> screen.
>>
>> The left column has a 7/8" margin, 2 3/8" text area, and 1" gutter.
>> These could be shrunk to say, 1/4, 2, 1/4, saving 1 3/8".
>> The comment box is 8 1/2", message boxes are wider, but the extra width
>> is not used if one uses hard returns in the comment box. In any case,
>> the message boxes could be narrowed by 1 1/8".
>> This would allow a right column of 1/4+2+1/4".
>
> Let's say that by using non-metric units you have already lost me,

My bad. In science contexts I have always used S.I. units, and wish the U.S.
would switch. Just forgot here. Multiply everything by 2.4 for cm.

Screen 43 cm wide. Left column: 2.2 + 6 + 2.4, perhaps shrink to 0.6 + 4.8
+ 0.6. Add a right column with the same. Comment box is 21 cm. Message boxes
are wider and could lose 2.7.

-- 
Terry Jan Reedy


From brendan at kublai.com  Fri Apr  1 05:56:39 2011
From: brendan at kublai.com (Brendan Cully)
Date: Thu, 31 Mar 2011 20:56:39 -0700
Subject: [Python-Dev] Impaired Usability of the Mercurial Source Viewer
In-Reply-To: <20110401054416.16e265bb@pitrou.net>
References: <BE0A9A1F-89FA-4ECF-9CC1-8B09027370FF@gmail.com>
	<20110401023032.3837e0a6@pitrou.net>
	<B96CF3A2-37C0-4B35-AE60-1B1330A9EC7C@gmail.com>
	<1301619310.3523.4.camel@localhost.localdomain>
	<16094AE6-5063-424C-82A9-0CF2DBC594DE@gmail.com>
	<20110401054416.16e265bb@pitrou.net>
Message-ID: <62832192-6E2D-40F1-8E9E-BA7EE3C8D135@kublai.com>


On 2011-03-31, at 8:44 PM, Antoine Pitrou wrote:

> On Thu, 31 Mar 2011 18:20:53 -0700
> Raymond Hettinger <raymond.hettinger at gmail.com> wrote:
>> 
>> Surely, we at least have control over our own CSS.
>> At http://hg.python.org/cpython/static/style-paper.css 
>> there are two lines that control the alternating bars:
>> 
>> .parity0 { background-color: #f0f0f0; }
>> .parity1 { background-color: white; }
>> 
>> One of those could be changed to match the other so that we
>> can at least get a solid background.
> 
> It also applies to the changelog and therefore would make the changelog
> uglier (you had already asked me to make that change and I reverted it
> after I tried it). The changelog is, IMHO, a bit more important than the
> source viewer.
> 
> Impacting only the source viewer looks like it would require a patch
> to the generation logic, although I could be mistaken.

It shouldn't. You just need to change the template. The easiest thing to do is probably to copy the 'paper' style into a new directory, adjust your hgweb style parameter to point to it, and edit the 'fileline' entry in the 'map' file for your new style.
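
For example (untested sketch -- the paths, and the exact markup for the
'fileline' entry, are assumptions from memory rather than a recipe):

    # copy the stock 'paper' templates into a site-local style
    cp -r /usr/share/mercurial/templates/paper /srv/hg/templates/python-paper

    # point hgweb at it, e.g. in the hgweb config or the repo's hgrc
    [web]
    templates = /srv/hg/templates
    style = python-paper

    # then edit /srv/hg/templates/python-paper/map so that file lines get
    # their own class, roughly along these lines:
    fileline = '<span class="sourceline">{line|escape}</span>'

Once the file view emits its own class, a one-line CSS rule can give it a
solid background without touching the parity striping used by the changelog.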
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 1691 bytes
Desc: not available
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110331/3beed963/attachment.bin>

From v+python at g.nevcal.com  Fri Apr  1 06:32:51 2011
From: v+python at g.nevcal.com (Glenn Linderman)
Date: Thu, 31 Mar 2011 21:32:51 -0700
Subject: [Python-Dev] Impaired Usability of the Mercurial Source Viewer
In-Reply-To: <20110401054416.16e265bb@pitrou.net>
References: <BE0A9A1F-89FA-4ECF-9CC1-8B09027370FF@gmail.com>	<20110401023032.3837e0a6@pitrou.net>	<B96CF3A2-37C0-4B35-AE60-1B1330A9EC7C@gmail.com>	<1301619310.3523.4.camel@localhost.localdomain>	<16094AE6-5063-424C-82A9-0CF2DBC594DE@gmail.com>
	<20110401054416.16e265bb@pitrou.net>
Message-ID: <4D955573.6080204@g.nevcal.com>

On 3/31/2011 8:44 PM, Antoine Pitrou wrote:
> Impacting only the source viewer looks like it would require a patch
> to the generation logic, although I could be mistaken.

Is there not something in the context surrounding the changelog and the 
source viewer that is different?  And therefore something like


.something-specific-to-sourceviewer .parity0 { background-color: white; }


might be possible, to affect only one of them, and not the other?  That 
is the whole point of CSS, of course.
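
For instance, if the file view turns out to have some distinguishing wrapper
(the class name below is purely hypothetical), two extra rules at the end of
style-paper.css could flatten the stripes there only:

    /* hypothetical wrapper class around the file/source view */
    .filerevision .parity0 { background-color: white; }
    .filerevision .parity1 { background-color: white; }

leaving the changelog's alternating bars exactly as they are.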
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110331/82757366/attachment.html>

From dirkjan at ochtman.nl  Fri Apr  1 09:25:19 2011
From: dirkjan at ochtman.nl (Dirkjan Ochtman)
Date: Fri, 1 Apr 2011 09:25:19 +0200
Subject: [Python-Dev] Impaired Usability of the Mercurial Source Viewer
In-Reply-To: <20110401023032.3837e0a6@pitrou.net>
References: <BE0A9A1F-89FA-4ECF-9CC1-8B09027370FF@gmail.com>
	<20110401023032.3837e0a6@pitrou.net>
Message-ID: <AANLkTimjuq_4MCNyQ5zs3hGCXvH9HXSq_WLuodPK7cHQ@mail.gmail.com>

On Fri, Apr 1, 2011 at 02:30, Antoine Pitrou <solipsis at pitrou.net> wrote:
> This is something you need to discuss with the Mercurial project.
> See http://mercurial.selenic.com/bts/ and
> http://mercurial.selenic.com/wiki/ContributingChanges

There's a lot you can change by just starting a new set of templates
(with Mercurial's templating language).

I even wrote an extension that'll let you use Jinja for the
templating, so it shouldn't be hard to make changes here -- changes
like Raymond proposes most certainly don't require code changes inside
Mercurial.

Cheers,

Dirkjan

From g.brandl at gmx.net  Fri Apr  1 12:42:06 2011
From: g.brandl at gmx.net (Georg Brandl)
Date: Fri, 01 Apr 2011 12:42:06 +0200
Subject: [Python-Dev] Impaired Usability of the Mercurial Source Viewer
In-Reply-To: <AANLkTi=JK4U9nSf+PFOxXFqRKVhVnoHmy1fntFWDQZWy@mail.gmail.com>
References: <BE0A9A1F-89FA-4ECF-9CC1-8B09027370FF@gmail.com>	<4D952A47.6060707@haypocalc.com>	<571B3262-D22D-433A-B79B-9015BAC49FE1@gmail.com>
	<AANLkTi=JK4U9nSf+PFOxXFqRKVhVnoHmy1fntFWDQZWy@mail.gmail.com>
Message-ID: <in4a62$ran$1@dough.gmane.org>

Am 01.04.2011 03:44, schrieb Benjamin Peterson:
> 2011/3/31 Raymond Hettinger <raymond.hettinger at gmail.com>:
>>
>> On Mar 31, 2011, at 6:28 PM, Victor Stinner wrote:
>>
>> Le 01/04/2011 01:15, Raymond Hettinger a écrit :
>>
>> The Hg source viewer needs to be tweaked to improve its usability.
>>
>> What we've got now is a step backwards from the previous svn viewer.
>>
>> Looking at http://hg.python.org/cpython/file/default/Lib/linecache.py for
>> example, there are two issues.  1) the code cannot be cut-and-pasted
>> because the line numbers are commingled with the source text.  2) the code
>> is hard to read because of the alternating white and gray bars.
>>
>> You can use mirrors like:
>> https://bitbucket.org/mirror/cpython/
>>
>> On Bitbucket, line numbers are displayed, but you can copy/paste code
>> without the line number. And the background is just white. For example:
>> https://bitbucket.org/mirror/cpython/src/3558eecd84f0/Modules/faulthandler.c
>>
>>
>> That's *way* better:
>>   https://bitbucket.org/mirror/cpython/src/3558eecd84f0/Lib/linecache.py
>> Why can't we have that for our primary source viewer?
> 
> Because it's closed source.

There are of course other Mercurial-web frontends that are free.  hgweb is just
the first choice because it's included.  (Just like Tkinter.)

For example, I was recently pointed to RhodeCode
(http://pypi.python.org/pypi/RhodeCode/), but I haven't had a closer look yet.

Georg


From g.brandl at gmx.net  Fri Apr  1 12:44:27 2011
From: g.brandl at gmx.net (Georg Brandl)
Date: Fri, 01 Apr 2011 12:44:27 +0200
Subject: [Python-Dev] Please revert autofolding of tracker edit form
In-Reply-To: <in3io8$8a7$1@dough.gmane.org>
References: <in07nd$5nb$1@dough.gmane.org>	<AANLkTimtQVdm-9_cveQtbeCdkOj6cQ7C=Qa+qMqp+zBY@mail.gmail.com>	<20110331040513.A82B54B71F@kimball.webabinitio.net>	<19860.23698.260679.739252@montanaro.dyndns.org>	<19860.25443.131458.388306@montanaro.dyndns.org>	<20110331141612.5ad2097a@pitrou.net>	<AANLkTik927x3CBJHdrXBuGNkOhtZCV9NSS7A_4UHnaoB@mail.gmail.com>	<1301579959.3535.10.camel@localhost.localdomain>	<in2bg7$v5q$1@dough.gmane.org>	<20110401022636.14945972@pitrou.net>
	<in3io8$8a7$1@dough.gmane.org>
Message-ID: <in4aae$ran$2@dough.gmane.org>

Am 01.04.2011 06:02, schrieb Terry Reedy:
> On 3/31/2011 8:26 PM, Antoine Pitrou wrote:
>> On Thu, 31 Mar 2011 12:52:23 -0400
>> Terry Reedy<tjreedy at udel.edu>  wrote:
>>>
>>> Here is my proposal for a redesign based on an analysis of my usage ;-).
>>> I have a 1600x1050 (or thereabouts), 20" (measured) diagonal, 17" across
>>> screen.
>>>
>>> The left column has a 7/8" margin, 2 3/8" text area, and 1" gutter.
>>> These could be shrunk to say, 1/4, 2, 1/4, saving 1 3/8".
>>> The comment box is 8 1/2", message boxes are wider, but the extra width
>>> is not used if one uses hard returns in the comment box. In any case,
>>> the message boxes could be narrowed by 1 1/8".
>>> This would allow a right column of 1/4+2+1/4".
>>
>> Let's say that by using non-metric units you have already lost me,
> 
> My bad. In science contexts I have always used S.I. units, and wish the U.S.
> would switch. Just forgot here. Multiply everything by 2.4 for cm.

Or by 2.54, if you're using SI cm :)

Georg


From g.brandl at gmx.net  Fri Apr  1 12:47:12 2011
From: g.brandl at gmx.net (Georg Brandl)
Date: Fri, 01 Apr 2011 12:47:12 +0200
Subject: [Python-Dev] devguide: Add a table of contents to the FAQ.
In-Reply-To: <20110331231125.7B06B2B673@kimball.webabinitio.net>
References: <E1Q4e9O-0001X5-Jr@dinsdale.python.org>	<20110330222004.5b23bcc5@pitrou.net>
	<4D94A21B.9040501@gmail.com>	<20110331163426.C4ABCD64A7@kimball.webabinitio.net>	<AANLkTinJ+4SeLBHzQkwBgtQppq=ow-UJx2rtZPQhGvMH@mail.gmail.com>
	<20110331231125.7B06B2B673@kimball.webabinitio.net>
Message-ID: <in4afj$ran$4@dough.gmane.org>

Am 01.04.2011 01:12, schrieb R. David Murray:
> On Fri, 01 Apr 2011 08:29:29 +1000, Nick Coghlan <ncoghlan at gmail.com> wrote:
>> On Fri, Apr 1, 2011 at 2:34 AM, R. David Murray <rdmurray at bitdance.com> wrote:
>> > I agree with this point.  The sidebar list of questions is effectively
>> > useless.
>> 
>> Indeed. If it's simple, I'd actually be inclined to reduce the depth
>> of the sidebar in this case to only show the categories and not the
>> individual questions.
> 
> I believe that requires editing the sphinx page template and adding
> a special case of some sort.

Use

:tocdepth: x

at the top of the rst file.
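
E.g., at the top of the FAQ's reST source (the title shown here is only
illustrative):

    :tocdepth: 1

    ===
    FAQ
    ===

which should limit the sidebar entries for that page to the top-level
sections.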

Georg


From timwintle at gmail.com  Fri Apr  1 12:54:07 2011
From: timwintle at gmail.com (Tim Wintle)
Date: Fri, 01 Apr 2011 11:54:07 +0100
Subject: [Python-Dev] Issue 11715: building Python from source on
 multiarch Debian/Ubuntu
In-Reply-To: <AANLkTi=xHN2jffUkjz=7m+dbVvjernOQEg1_cHk+SQ+q@mail.gmail.com>
References: <20110330161709.756b27f7@neurotica.wooz.org>
	<4D94BB4D.8030405@netwok.org>
	<AANLkTi=xHN2jffUkjz=7m+dbVvjernOQEg1_cHk+SQ+q@mail.gmail.com>
Message-ID: <1301655247.6531.65.camel@tim-laptop>

On Fri, 2011-04-01 at 08:37 +1000, Nick Coghlan wrote:
> On Fri, Apr 1, 2011 at 3:35 AM, Éric Araujo <merwok at netwok.org> wrote:
> > If I understand the policy correctly, 2.5 and 2.6 are not considered
> > active branches, so any doc, build or bug fixes are not acceptable.
> 
> Actual build fixes may be acceptable, if they're needed to allow
> people to build from a version control checkout or from source (since
> they need to be able to do that in order to apply security patches).
> 
> However, the combination of "running on Ubuntu 11.04+" and "need to
> build security patched version of old Python" seems unlikely.

I disagree.

FWIW - I maintain legacy code for python2.4, and 2.5 (mainly 2.5).

I've reviewed upgrading this code to run on 2.7 - and it's too much work
to do in the near future.

I develop on Ubuntu (and will probably update to 11.04 in a few months)
- so this will directly affect me.

I'm fairly sure that others will be in the same situation.

Even if their servers won't run Ubuntu 11.04+ (or something with the
same library paths), their development environments will.

As a result, I'm very much +1 on integrating this patch to previous
versions.

Tim Wintle


From g.brandl at gmx.net  Fri Apr  1 12:46:41 2011
From: g.brandl at gmx.net (Georg Brandl)
Date: Fri, 01 Apr 2011 12:46:41 +0200
Subject: [Python-Dev] Issue 11715: building Python from source on
	multiarch Debian/Ubuntu
In-Reply-To: <4D94BB4D.8030405@netwok.org>
References: <20110330161709.756b27f7@neurotica.wooz.org>
	<4D94BB4D.8030405@netwok.org>
Message-ID: <in4aek$ran$3@dough.gmane.org>

Am 31.03.2011 19:35, schrieb Éric Araujo:
>> I would like to apply this patch (or its moral equivalent) to all active,
>> affected branches of Python, meaning 2.5 through 2.7, and 3.1 through 3.3, as
>> soon as possible.  Without this, it will be very difficult for anyone on
>> future Ubuntu or Debian releases to build Python.  Since it's not a new
>> feature, but just a minor fix to the build process, I think it should be okay
>> to back port.
> 
> If I understand the policy correctly, 2.5 and 2.6 are not considered
> active branches, so any doc, build or bug fixes are not acceptable.

I wouldn't say doc fixes are not acceptable, but they are rather pointless
since there won't be any more online docs or released docs for those versions.

Georg


From fuzzyman at voidspace.org.uk  Fri Apr  1 13:57:41 2011
From: fuzzyman at voidspace.org.uk (Michael Foord)
Date: Fri, 01 Apr 2011 12:57:41 +0100
Subject: [Python-Dev] Issue 11715: building Python from source
 on	multiarch Debian/Ubuntu
In-Reply-To: <in4aek$ran$3@dough.gmane.org>
References: <20110330161709.756b27f7@neurotica.wooz.org>	<4D94BB4D.8030405@netwok.org>
	<in4aek$ran$3@dough.gmane.org>
Message-ID: <4D95BDB5.9080601@voidspace.org.uk>

On 01/04/2011 11:46, Georg Brandl wrote:
> Am 31.03.2011 19:35, schrieb Éric Araujo:
>>> I would like to apply this patch (or its moral equivalent) to all active,
>>> affected branches of Python, meaning 2.5 through 2.7, and 3.1 through 3.3, as
>>> soon as possible.  Without this, it will be very difficult for anyone on
>>> future Ubuntu or Debian releases to build Python.  Since it's not a new
>>> feature, but just a minor fix to the build process, I think it should be okay
>>> to back port.
>> If I understand the policy correctly, 2.5 and 2.6 are not considered
>> active branches, so any doc, build or bug fixes are not acceptable.
> I wouldn't say doc fixes are not acceptable, but they are rather pointless
> since there won't be any more online docs or released docs for those versions.
In the case that docs are wrong for unmaintained (but still used) 
versions of Python, is there any reason other than policy not to fix and 
update online docs?

All the best,

Michael

> Georg
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk


-- 
http://www.voidspace.org.uk/

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing http://www.sqlite.org/different.html


From eric at trueblade.com  Fri Apr  1 13:57:53 2011
From: eric at trueblade.com (Eric Smith)
Date: Fri, 01 Apr 2011 07:57:53 -0400
Subject: [Python-Dev] Issue 11715: building Python from source
 on	multiarch Debian/Ubuntu
In-Reply-To: <in4aek$ran$3@dough.gmane.org>
References: <20110330161709.756b27f7@neurotica.wooz.org>	<4D94BB4D.8030405@netwok.org>
	<in4aek$ran$3@dough.gmane.org>
Message-ID: <4D95BDC1.70504@trueblade.com>

On 4/1/2011 6:46 AM, Georg Brandl wrote:
> Am 31.03.2011 19:35, schrieb Éric Araujo:
>>> I would like to apply this patch (or its moral equivalent) to all active,
>>> affected branches of Python, meaning 2.5 through 2.7, and 3.1 through 3.3, as
>>> soon as possible.  Without this, it will be very difficult for anyone on
>>> future Ubuntu or Debian releases to build Python.  Since it's not a new
>>> feature, but just a minor fix to the build process, I think it should be okay
>>> to back port.
>>
>> If I understand the policy correctly, 2.5 and 2.6 are not considered
>> active branches, so any doc, build or bug fixes are not acceptable.
> 
> I wouldn't say doc fixes are not acceptable, but they are rather pointless
> since there won't be any more online docs or released docs for those versions.

And I don't see a problem with build fixes. It's not like we're adding
language features. If it makes someone's life easier, then what's the harm?

Eric.

From solipsis at pitrou.net  Fri Apr  1 14:07:47 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Fri, 1 Apr 2011 14:07:47 +0200
Subject: [Python-Dev] Issue 11715: building Python from source
 on	multiarch Debian/Ubuntu
References: <20110330161709.756b27f7@neurotica.wooz.org>
	<4D94BB4D.8030405@netwok.org> <in4aek$ran$3@dough.gmane.org>
	<4D95BDC1.70504@trueblade.com>
Message-ID: <20110401140747.5366c5cd@pitrou.net>

On Fri, 01 Apr 2011 07:57:53 -0400
Eric Smith <eric at trueblade.com> wrote:
> On 4/1/2011 6:46 AM, Georg Brandl wrote:
> > Am 31.03.2011 19:35, schrieb Éric Araujo:
> >>> I would like to apply this patch (or its moral equivalent) to all active,
> >>> affected branches of Python, meaning 2.5 through 2.7, and 3.1 through 3.3, as
> >>> soon as possible.  Without this, it will be very difficult for anyone on
> >>> future Ubuntu or Debian releases to build Python.  Since it's not a new
> >>> feature, but just a minor fix to the build process, I think it should be okay
> >>> to back port.
> >>
> >> If I understand the policy correctly, 2.5 and 2.6 are not considered
> >> active branches, so any doc, build or bug fixes are not acceptable.
> > 
> > I wouldn't say doc fixes are not acceptable, but they are rather pointless
> > since there won't be any more online docs or released docs for those versions.
> 
> And I don't see a problem with build fixes. It's not like we're adding
> language features. If it makes someone's life easier, then what's the harm?

Well, how is this different from bug fixes?
The policy is that we don't do bug fixes in security branches. We could
change it of course, but introducing special cases through a weird
interpretation of the rule sounds like a recipe for confusion,
theirs and ours.

(and, no, I don't think building an old Python on a new Debian/Ubuntu
system is any more important than other kinds of bug or build fixes;
let's stop implying that Ubuntu is the dominant OS out there, because
it's really not)

Regards

Antoine.



From fuzzyman at voidspace.org.uk  Fri Apr  1 14:12:06 2011
From: fuzzyman at voidspace.org.uk (Michael Foord)
Date: Fri, 01 Apr 2011 13:12:06 +0100
Subject: [Python-Dev] Issue 11715: building Python from source
 on	multiarch Debian/Ubuntu
In-Reply-To: <20110401140747.5366c5cd@pitrou.net>
References: <20110330161709.756b27f7@neurotica.wooz.org>	<4D94BB4D.8030405@netwok.org>
	<in4aek$ran$3@dough.gmane.org>	<4D95BDC1.70504@trueblade.com>
	<20110401140747.5366c5cd@pitrou.net>
Message-ID: <4D95C116.8000504@voidspace.org.uk>

On 01/04/2011 13:07, Antoine Pitrou wrote:
> On Fri, 01 Apr 2011 07:57:53 -0400
> Eric Smith<eric at trueblade.com>  wrote:
>> On 4/1/2011 6:46 AM, Georg Brandl wrote:
>>> Am 31.03.2011 19:35, schrieb Éric Araujo:
>>>>> I would like to apply this patch (or its moral equivalent) to all active,
>>>>> affected branches of Python, meaning 2.5 through 2.7, and 3.1 through 3.3, as
>>>>> soon as possible.  Without this, it will be very difficult for anyone on
>>>>> future Ubuntu or Debian releases to build Python.  Since it's not a new
>>>>> feature, but just a minor fix to the build process, I think it should be okay
>>>>> to back port.
>>>> If I understand the policy correctly, 2.5 and 2.6 are not considered
>>>> active branches, so any doc, build or bug fixes are not acceptable.
>>> I wouldn't say doc fixes are not acceptable, but they are rather pointless
>>> since there won't be any more online docs or released docs for those versions.
>> And I don't see a problem with build fixes. It's not like we're adding
>> language features. If it makes someone's life easier, then what's the harm?
> Well, how is this different from bug fixes?
> The policy is that we don't do bug fixes in security branches. We could
> change it of course, but introducing special cases through a weird
> interpretation of the rule sounds like a recipe for confusion,
> theirs and ours.
Possibly. But online doc fixes feel like a very particular special
case that isn't hard to understand or likely to cause confusion.

All the best,

Michael

> (and, no, I don't think building an old Python on a new Debian/Ubuntu
> system is anymore important than other kinds of bug or build fixes;
> let's stop implying that Ubuntu is the dominant OS out there, because
> it's really not)
>
> Regards
>
> Antoine.
>
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk


-- 
http://www.voidspace.org.uk/

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing http://www.sqlite.org/different.html


From merwok at netwok.org  Fri Apr  1 14:28:07 2011
From: merwok at netwok.org (=?UTF-8?B?w4lyaWMgQXJhdWpv?=)
Date: Fri, 01 Apr 2011 14:28:07 +0200
Subject: [Python-Dev] [Python-checkins] cpython (3.2): Add links to make
 the math docs more usable.
In-Reply-To: <4D9511CC.3010901@udel.edu>
References: <E1Q5NDA-0007qk-7g@dinsdale.python.org>	<4D94D2A8.3000405@gmail.com>
	<4D9511CC.3010901@udel.edu>
Message-ID: <4D95C4D7.1030405@netwok.org>

>> There's a space missing here, and the link doesn't work.
> It does for me. This may depend on the mail reader and whether it parses 
> the url out in spite of the missing space.

Victor was talking about the rendered HTML, not his email client. :)

Cheers

From g.brandl at gmx.net  Fri Apr  1 14:32:01 2011
From: g.brandl at gmx.net (Georg Brandl)
Date: Fri, 01 Apr 2011 14:32:01 +0200
Subject: [Python-Dev] Issue 11715: building Python from source on
	multiarch Debian/Ubuntu
In-Reply-To: <4D95BDB5.9080601@voidspace.org.uk>
References: <20110330161709.756b27f7@neurotica.wooz.org>	<4D94BB4D.8030405@netwok.org>	<in4aek$ran$3@dough.gmane.org>
	<4D95BDB5.9080601@voidspace.org.uk>
Message-ID: <in4gk5$4h4$1@dough.gmane.org>

Am 01.04.2011 13:57, schrieb Michael Foord:
> On 01/04/2011 11:46, Georg Brandl wrote:
>> Am 31.03.2011 19:35, schrieb Éric Araujo:
>>>> I would like to apply this patch (or its moral equivalent) to all active,
>>>> affected branches of Python, meaning 2.5 through 2.7, and 3.1 through 3.3, as
>>>> soon as possible.  Without this, it will be very difficult for anyone on
>>>> future Ubuntu or Debian releases to build Python.  Since it's not a new
>>>> feature, but just a minor fix to the build process, I think it should be okay
>>>> to back port.
>>> If I understand the policy correctly, 2.5 and 2.6 are not considered
>>> active branches, so any doc, build or bug fixes are not acceptable.
>> I wouldn't say doc fixes are not acceptable, but they are rather pointless
>> since there won't be any more online docs or released docs for those versions.
> In the case that docs are wrong for unmaintained (but still used) 
> versions of Python, is there any reason other than policy not to fix and 
> update online docs?

I think I was unclear: I'm not advocating doing doc fixes in security-only
branches; I'm just explaining why it wouldn't even make sense to do these
fixes.

Let's not make life harder for the RMs of security-only branches...

Georg


From fuzzyman at voidspace.org.uk  Fri Apr  1 14:37:42 2011
From: fuzzyman at voidspace.org.uk (Michael Foord)
Date: Fri, 01 Apr 2011 13:37:42 +0100
Subject: [Python-Dev] Issue 11715: building Python from source
 on	multiarch Debian/Ubuntu
In-Reply-To: <in4gk5$4h4$1@dough.gmane.org>
References: <20110330161709.756b27f7@neurotica.wooz.org>	<4D94BB4D.8030405@netwok.org>	<in4aek$ran$3@dough.gmane.org>	<4D95BDB5.9080601@voidspace.org.uk>
	<in4gk5$4h4$1@dough.gmane.org>
Message-ID: <4D95C716.4090502@voidspace.org.uk>

On 01/04/2011 13:32, Georg Brandl wrote:
> Am 01.04.2011 13:57, schrieb Michael Foord:
>> On 01/04/2011 11:46, Georg Brandl wrote:
>>> Am 31.03.2011 19:35, schrieb Éric Araujo:
>>>>> I would like to apply this patch (or its moral equivalent) to all active,
>>>>> affected branches of Python, meaning 2.5 through 2.7, and 3.1 through 3.3, as
>>>>> soon as possible.  Without this, it will be very difficult for anyone on
>>>>> future Ubuntu or Debian releases to build Python.  Since it's not a new
>>>>> feature, but just a minor fix to the build process, I think it should be okay
>>>>> to back port.
>>>> If I understand the policy correctly, 2.5 and 2.6 are not considered
>>>> active branches, so any doc, build or bug fixes are not acceptable.
>>> I wouldn't say doc fixes are not acceptable, but they are rather pointless
>>> since there won't be any more online docs or released docs for those versions.
>> In the case that docs are wrong for unmaintained (but still used)
>> versions of Python, is there any reason other than policy not to fix and
>> update online docs?
> I think I was unclear: I'm not advocating doing doc fixes in security-only
> branches; I'm just explaining why it wouldn't even make sense to do these
> fixes.
>
I understood. I was suggesting we modify the policy to allow doc changes
that fix errors and push updated docs *online* (not do fresh releases), and
asking why not do that (other than policy)?

I don't see any advantage in leaving erroneous docs online even if we 
aren't going to do any new releases.

Michael

> Let's not make life harder for the RMs of security-only branches...
>
> Georg
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk


-- 
http://www.voidspace.org.uk/

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing http://www.sqlite.org/different.html


From merwok at netwok.org  Fri Apr  1 14:42:49 2011
From: merwok at netwok.org (=?UTF-8?B?w4lyaWMgQXJhdWpv?=)
Date: Fri, 01 Apr 2011 14:42:49 +0200
Subject: [Python-Dev] Issue 11715: building Python from source on
 multiarch Debian/Ubuntu
In-Reply-To: <4D95C716.4090502@voidspace.org.uk>
References: <20110330161709.756b27f7@neurotica.wooz.org>	<4D94BB4D.8030405@netwok.org>	<in4aek$ran$3@dough.gmane.org>	<4D95BDB5.9080601@voidspace.org.uk>	<in4gk5$4h4$1@dough.gmane.org>
	<4D95C716.4090502@voidspace.org.uk>
Message-ID: <4D95C849.301@netwok.org>

> I don't see any advantage in leaving erroneous docs online even if we 
> aren't going to do any new releases.

See thread starting at
http://mail.python.org/pipermail/python-dev/2010-August/103263.html

Regards

From solipsis at pitrou.net  Fri Apr  1 14:49:10 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Fri, 1 Apr 2011 14:49:10 +0200
Subject: [Python-Dev] Issue 11715: building Python from source
 on	multiarch Debian/Ubuntu
References: <20110330161709.756b27f7@neurotica.wooz.org>
	<4D94BB4D.8030405@netwok.org> <in4aek$ran$3@dough.gmane.org>
	<4D95BDB5.9080601@voidspace.org.uk> <in4gk5$4h4$1@dough.gmane.org>
	<4D95C716.4090502@voidspace.org.uk>
Message-ID: <20110401144911.39ee36ce@pitrou.net>

On Fri, 01 Apr 2011 13:37:42 +0100
Michael Foord <fuzzyman at voidspace.org.uk> wrote:
> > I think I was unclear: I'm not advocating doing doc fixes in security-only
> > branches; I'm just explaining why it wouldn't even make sense to do these
> > fixes.
> >
> I understood. I was suggesting we modify to allow doc changes that fix 
> errors and push updated docs *online* (not do fresh releases) and asking 
> why not do that (other than policy)?

Well, I think the tradeoff is simply: do you want to do more work?
(or, given the same amount of work, do you think allocating your
workforce to backporting doc fixes is worthwhile?)

I'm sure that if enough people want to do such backports, it can happen.

Regards

Antoine.



From fuzzyman at voidspace.org.uk  Fri Apr  1 15:45:49 2011
From: fuzzyman at voidspace.org.uk (Michael Foord)
Date: Fri, 01 Apr 2011 14:45:49 +0100
Subject: [Python-Dev] Issue 11715: building Python from source on
 multiarch Debian/Ubuntu
In-Reply-To: <4D95C849.301@netwok.org>
References: <20110330161709.756b27f7@neurotica.wooz.org>	<4D94BB4D.8030405@netwok.org>	<in4aek$ran$3@dough.gmane.org>	<4D95BDB5.9080601@voidspace.org.uk>	<in4gk5$4h4$1@dough.gmane.org>
	<4D95C716.4090502@voidspace.org.uk> <4D95C849.301@netwok.org>
Message-ID: <4D95D70D.2050303@voidspace.org.uk>

On 01/04/2011 13:42, Éric Araujo wrote:
>> I don't see any advantage in leaving erroneous docs online even if we
>> aren't going to do any new releases.
> See thread starting at
> http://mail.python.org/pipermail/python-dev/2010-August/103263.html
As far as I can tell there was no clear decision there either. :-) 
(Other than no *need* to bother, which doesn't answer the question of 
what if developers *want* to fix errors in the docs - and I'm in favour 
of *permitting* but not requiring it.)

All the best,

Michael Foord

> Regards


-- 
http://www.voidspace.org.uk/

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing http://www.sqlite.org/different.html


From merwok at netwok.org  Fri Apr  1 15:49:09 2011
From: merwok at netwok.org (=?UTF-8?B?w4lyaWMgQXJhdWpv?=)
Date: Fri, 01 Apr 2011 15:49:09 +0200
Subject: [Python-Dev] Issue 11715: building Python from source on
 multiarch Debian/Ubuntu
In-Reply-To: <4D95D70D.2050303@voidspace.org.uk>
References: <20110330161709.756b27f7@neurotica.wooz.org>	<4D94BB4D.8030405@netwok.org>	<in4aek$ran$3@dough.gmane.org>	<4D95BDB5.9080601@voidspace.org.uk>	<in4gk5$4h4$1@dough.gmane.org>
	<4D95C716.4090502@voidspace.org.uk> <4D95C849.301@netwok.org>
	<4D95D70D.2050303@voidspace.org.uk>
Message-ID: <4D95D7D5.6010304@netwok.org>

> As far as I can tell there was no clear decision there either. :-)
Not my understanding:
http://mail.python.org/pipermail/python-dev/2010-August/103351.html

Regards

From fuzzyman at voidspace.org.uk  Fri Apr  1 16:07:18 2011
From: fuzzyman at voidspace.org.uk (Michael Foord)
Date: Fri, 01 Apr 2011 15:07:18 +0100
Subject: [Python-Dev] Issue 11715: building Python from source on
 multiarch Debian/Ubuntu
In-Reply-To: <4D95D7D5.6010304@netwok.org>
References: <20110330161709.756b27f7@neurotica.wooz.org>	<4D94BB4D.8030405@netwok.org>	<in4aek$ran$3@dough.gmane.org>	<4D95BDB5.9080601@voidspace.org.uk>	<in4gk5$4h4$1@dough.gmane.org>
	<4D95C716.4090502@voidspace.org.uk> <4D95C849.301@netwok.org>
	<4D95D70D.2050303@voidspace.org.uk> <4D95D7D5.6010304@netwok.org>
Message-ID: <4D95DC16.9060905@voidspace.org.uk>

On 01/04/2011 14:49, Éric Araujo wrote:
>> As far as I can tell there was no clear decision there either. :-)
> Not my understanding:
> http://mail.python.org/pipermail/python-dev/2010-August/103351.html

That was about whether the release manager should backport doc fixes 
from 2.7 to the 2.6 branch and the conclusion was "not to bother", which 
is very different from saying that individual developers *can't* apply 
doc fixes if *they want*.

Of course if the release manager says *do not* (which is different from 
*we won't be bothering*) then that is their decision and should be honoured.

All the best,

Michael

> Regards


-- 
http://www.voidspace.org.uk/

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing http://www.sqlite.org/different.html


From barry at python.org  Fri Apr  1 17:03:50 2011
From: barry at python.org (Barry Warsaw)
Date: Fri, 1 Apr 2011 11:03:50 -0400
Subject: [Python-Dev] Issue 11715: building Python from source on
 multiarch Debian/Ubuntu
In-Reply-To: <20110401140747.5366c5cd@pitrou.net>
References: <20110330161709.756b27f7@neurotica.wooz.org>
	<4D94BB4D.8030405@netwok.org> <in4aek$ran$3@dough.gmane.org>
	<4D95BDC1.70504@trueblade.com> <20110401140747.5366c5cd@pitrou.net>
Message-ID: <20110401110350.3d193447@neurotica.wooz.org>

On Apr 01, 2011, at 02:07 PM, Antoine Pitrou wrote:

>(and, no, I don't think building an old Python on a new Debian/Ubuntu
>system is any more important than other kinds of bug or build fixes;
>let's stop implying that Ubuntu is the dominant OS out there, because
>it's really not)

For the record, I wouldn't object to build fixes required to continue to build
Python on, say, Windows 7 after some security patch broke the build.  Or Gentoo,
or OS X.  I think there's no harm in build system or doc fixes that will have
no effect on functionality.  The difference is that even the simplest bug fix
can change behavior, but a build system fix or doc fix will not.

I agree with the position that back porting such fixes should not be required
but also should not be prohibited.

-Barry
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 836 bytes
Desc: not available
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110401/e3d37f33/attachment.pgp>

From barry at python.org  Fri Apr  1 17:17:27 2011
From: barry at python.org (Barry Warsaw)
Date: Fri, 1 Apr 2011 11:17:27 -0400
Subject: [Python-Dev] Issue 11715: building Python from source on
 multiarch Debian/Ubuntu
In-Reply-To: <4D95DC16.9060905@voidspace.org.uk>
References: <20110330161709.756b27f7@neurotica.wooz.org>
	<4D94BB4D.8030405@netwok.org> <in4aek$ran$3@dough.gmane.org>
	<4D95BDB5.9080601@voidspace.org.uk> <in4gk5$4h4$1@dough.gmane.org>
	<4D95C716.4090502@voidspace.org.uk> <4D95C849.301@netwok.org>
	<4D95D70D.2050303@voidspace.org.uk> <4D95D7D5.6010304@netwok.org>
	<4D95DC16.9060905@voidspace.org.uk>
Message-ID: <20110401111727.4f8d0e29@neurotica.wooz.org>

On Apr 01, 2011, at 03:07 PM, Michael Foord wrote:

>That was about whether the release manager should backport doc fixes from 2.7
>to the 2.6 branch and the conclusion was "not to bother", which is very
>different from saying that individual developers *can't* apply doc fixes if
>*they want*.
>
>Of course if the release manager says *do not* (which is different from *we
>won't be bothering*) then that is their decision and should be honoured.

Yeah, I know what I said before but I really am still on the fence about
non-behavior changing fixes.  Both sides have valid positions, IMO. :/

But as before, I'll abide by consensus, if such a thing can be determined.
Not applying the patch to 2.6 will make things harder for me if I ever have to
do another 2.6 release, but not impossible.

However, because of the hg forward porting policy, I would like to decide
asap on how far back to port the fix.  As I see it, the patch is
uncontroversial for 3.3, 3.2, and 2.7.  And it definitely will not be applied
to 3.0.  That leaves 2.5, 2.6, and 3.1.  If you really care one way or the
other, please register your vote in the tracker.

http://bugs.python.org/issue11715

(Hey, tracker voting would be a cool GSoC project perhaps)

-Barry
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 836 bytes
Desc: not available
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110401/6b254418/attachment.pgp>

From g.brandl at gmx.net  Fri Apr  1 17:39:59 2011
From: g.brandl at gmx.net (Georg Brandl)
Date: Fri, 01 Apr 2011 17:39:59 +0200
Subject: [Python-Dev] Issue 11715: building Python from source on
	multiarch Debian/Ubuntu
In-Reply-To: <20110401144911.39ee36ce@pitrou.net>
References: <20110330161709.756b27f7@neurotica.wooz.org>	<4D94BB4D.8030405@netwok.org>
	<in4aek$ran$3@dough.gmane.org>	<4D95BDB5.9080601@voidspace.org.uk>
	<in4gk5$4h4$1@dough.gmane.org>	<4D95C716.4090502@voidspace.org.uk>
	<20110401144911.39ee36ce@pitrou.net>
Message-ID: <in4rki$9d7$1@dough.gmane.org>

Am 01.04.2011 14:49, schrieb Antoine Pitrou:
> On Fri, 01 Apr 2011 13:37:42 +0100
> Michael Foord <fuzzyman at voidspace.org.uk> wrote:
>> > I think I was unclear: I'm not advocating doing doc fixes in security-only
>> > branches; I'm just explaining why it wouldn't even make sense to do these
>> > fixes.
>> >
>> I understood. I was suggesting we modify to allow doc changes that fix 
>> errors and push updated docs *online* (not do fresh releases) and asking 
>> why not do that (other than policy)?
> 
> Well, I think the tradeoff is simply: do you want to do more work?
> (or, given the same amount of work, do you think allocating your
> workforce to backporting doc fixes is worthwhile?)

Absolutely.  I don't want to maintain that infrastructure.

Georg


From ncoghlan at gmail.com  Fri Apr  1 17:58:03 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sat, 2 Apr 2011 01:58:03 +1000
Subject: [Python-Dev] Impaired Usability of the Mercurial Source Viewer
In-Reply-To: <in4a62$ran$1@dough.gmane.org>
References: <BE0A9A1F-89FA-4ECF-9CC1-8B09027370FF@gmail.com>
	<4D952A47.6060707@haypocalc.com>
	<571B3262-D22D-433A-B79B-9015BAC49FE1@gmail.com>
	<AANLkTi=JK4U9nSf+PFOxXFqRKVhVnoHmy1fntFWDQZWy@mail.gmail.com>
	<in4a62$ran$1@dough.gmane.org>
Message-ID: <AANLkTi=TNYHt-YLtcqPXoztX7qUZoQvQWjrROB0Q9tc_@mail.gmail.com>

On Fri, Apr 1, 2011 at 8:42 PM, Georg Brandl <g.brandl at gmx.net> wrote:
> There are of course other Mercurial-web frontends that are free.  hgweb is just
> the first choice because it's included.  (Just like Tkinter.)
>
> For example, I was recently pointed to RhodeCode
> (http://pypi.python.org/pypi/RhodeCode/), but I haven't had a closer look yet.

If you find one that will stay on the symbolic tip of a branch while
browsing, instead of hard linking to the *current* tip the way hgweb
does, that would be really nice :)

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From tjreedy at udel.edu  Fri Apr  1 18:00:38 2011
From: tjreedy at udel.edu (Terry Reedy)
Date: Fri, 01 Apr 2011 12:00:38 -0400
Subject: [Python-Dev] Issue 11715: building Python from source on
	multiarch Debian/Ubuntu
In-Reply-To: <4D95D70D.2050303@voidspace.org.uk>
References: <20110330161709.756b27f7@neurotica.wooz.org>	<4D94BB4D.8030405@netwok.org>	<in4aek$ran$3@dough.gmane.org>	<4D95BDB5.9080601@voidspace.org.uk>	<in4gk5$4h4$1@dough.gmane.org>	<4D95C716.4090502@voidspace.org.uk>
	<4D95C849.301@netwok.org> <4D95D70D.2050303@voidspace.org.uk>
Message-ID: <in4sr6$i5e$1@dough.gmane.org>

On 4/1/2011 9:45 AM, Michael Foord wrote:

>> See thread starting at
>> http://mail.python.org/pipermail/python-dev/2010-August/103263.html
> As far as I can tell there was no clear decision there either. :-)

I read it as deciding no doc fixes.

> (Other than no *need* to bother, which doesn't answer the question of
> what if developers *want* to fix errors in the docs - and I'm in favour
> of *permitting* but not requiring it.)

I see three reasons not to backport doc fixes:

1. we have too few people and too little time to do all we can/should 
with current releases.

2. anyone wanting up-to-date 2.6 docs should really consult 2.7 docs 
which include 2.6, with differences carefully noted. It was suggested in 
the thread that older docs, such as 2.6, say so. The point we should 
advertise is that the 'x.y' docs are really the cumulative Python x 
docs. We do extra work to make them be that.

(If nothing else, restarting the docs fresh will eventually be a reason 
for a Python4 release.)

3. sporadic updates to 2.6 docs will not benefit Windows users or
anyone else with a local copy at all; they will only deceptively benefit
site visitors, who will still miss out on everything not backported.

-- 
Terry Jan Reedy


From status at bugs.python.org  Fri Apr  1 18:07:19 2011
From: status at bugs.python.org (Python tracker)
Date: Fri,  1 Apr 2011 18:07:19 +0200 (CEST)
Subject: [Python-Dev] Summary of Python tracker Issues
Message-ID: <20110401160719.E61001CC83@psf.upfronthosting.co.za>


ACTIVITY SUMMARY (2011-03-25 - 2011-04-01)
Python tracker at http://bugs.python.org/

To view or respond to any of the issues listed below, click on the issue.
Do NOT respond to this message.

Issues counts and deltas:
  open    2733 ( -2)
  closed 20787 (+69)
  total  23520 (+67)

Open issues with patches: 1167 


Issues opened (49)
==================

#11557: Increase coverage in logging module
http://bugs.python.org/issue11557  reopened by r.david.murray

#11674: list(obj), tuple(obj) swallow TypeError (in _PyObject_LengthHi
http://bugs.python.org/issue11674  opened by Elvis.Pranskevichus

#11676: imp.load_module and submodules - doc issue, or bug?
http://bugs.python.org/issue11676  opened by Dave Peck

#11677: make test has horrendous performance on an ecryptfs
http://bugs.python.org/issue11677  opened by barry

#11678: Add support for Arch Linux to platform.linux_distributions()
http://bugs.python.org/issue11678  opened by anikom15

#11679: readline interferes with characters beginning with byte \xe9
http://bugs.python.org/issue11679  opened by takluyver

#11681: -b option undocumented
http://bugs.python.org/issue11681  opened by eric.araujo

#11682: PEP 380 reference implementation for 3.3
http://bugs.python.org/issue11682  opened by ncoghlan

#11683: unittest discover should recurse into packages which are alrea
http://bugs.python.org/issue11683  opened by Calvin.Spealman

#11684: (Maybe) Add email.parser.BytesHeaderParser
http://bugs.python.org/issue11684  opened by sdaoden

#11685: possible SQL injection into db APIs via table names... sqlite3
http://bugs.python.org/issue11685  opened by illume

#11686: Update of some email/ __all__ lists
http://bugs.python.org/issue11686  opened by sdaoden

#11688: SQLite trace callback
http://bugs.python.org/issue11688  opened by torsten

#11689: sqlite: Incorrect unit test fails to detect failure
http://bugs.python.org/issue11689  opened by torsten

#11690: Devguide: Add "communication" FAQ
http://bugs.python.org/issue11690  opened by ncoghlan

#11691: sqlite3 Cursor.description doesn't set type_code
http://bugs.python.org/issue11691  opened by wesclemens

#11693: memory leak in email.generator.Generator().flatten() method
http://bugs.python.org/issue11693  opened by Kaushik.Kannan

#11694: xdrlib raises ConversionError in inconsistent way
http://bugs.python.org/issue11694  opened by gruszczy

#11695: Improve argparse usage/help customization
http://bugs.python.org/issue11695  opened by bethard

#11697: Unsigned type in mmap_move_method
http://bugs.python.org/issue11697  opened by rmib

#11698: Improve repr for structseq objects to show named, but unindexe
http://bugs.python.org/issue11698  opened by rhettinger

#11699: Documentation for get_option_group is wrong
http://bugs.python.org/issue11699  opened by weeble

#11700: mailbox.py proxy updates
http://bugs.python.org/issue11700  opened by sdaoden

#11701: email.parser.BytesParser() uses TextIOWrapper
http://bugs.python.org/issue11701  opened by sdaoden

#11702: dir on return value of msilib.OpenDatabase() crashes python
http://bugs.python.org/issue11702  opened by markm

#11703: Bug in python >= 2.7 with urllib2 fragment
http://bugs.python.org/issue11703  opened by Ivan.Ivanenko

#11704: functools.partial doesn't create bound methods
http://bugs.python.org/issue11704  opened by alex

#11705: sys.excepthook doesn't work in imported modules
http://bugs.python.org/issue11705  opened by mikez302

#11707: Create C version of functools.cmp_to_key()
http://bugs.python.org/issue11707  opened by rhettinger

#11708: argparse: suggestion for formatting optional positional args
http://bugs.python.org/issue11708  opened by pwil3058

#11709: help-method crashes if sys.stdin is None
http://bugs.python.org/issue11709  opened by palm.kevin

#11710: Landing pages in docs for standard library packages
http://bugs.python.org/issue11710  opened by ncoghlan

#11714: threading.Semaphore does not use try...finally
http://bugs.python.org/issue11714  opened by glglgl

#11715: Building Python on multiarch Debian and Ubuntu
http://bugs.python.org/issue11715  opened by barry

#11717: conflicting definition of ssize_t in pyconfig.h
http://bugs.python.org/issue11717  opened by wrohdewald

#11718: Teach IDLE's open-module command to find packages
http://bugs.python.org/issue11718  opened by rhettinger

#11719: test_msilib skip unexpected on non-Windows platforms
http://bugs.python.org/issue11719  opened by nvawda

#11722: mingw64 does not link when building extensions
http://bugs.python.org/issue11722  opened by moog

#11723: No proper support for mingw64 - patch to add
http://bugs.python.org/issue11723  opened by moog

#11726: linecache becomes specific to Python scripts in Python 3
http://bugs.python.org/issue11726  opened by haypo

#11728: mbox parser incorrect behaviour
http://bugs.python.org/issue11728  opened by wally1980

#11729: libffi assembler relocation check is not robust, fails with cl
http://bugs.python.org/issue11729  opened by cartman

#11730: Setting sys.stdin to an invalid input stream causes interprete
http://bugs.python.org/issue11730  opened by ysj.ray

#11731: Simplify email API via 'policy' objects
http://bugs.python.org/issue11731  opened by r.david.murray

#11732: Skip decorator for tests requiring manual intervention on Wind
http://bugs.python.org/issue11732  opened by brian.curtin

#11733: Implement a `Counter.elements_count` method
http://bugs.python.org/issue11733  opened by cool-RR

#11734: Add half-float (16-bit) support to struct module
http://bugs.python.org/issue11734  opened by Eli.Stevens

#11736: windows installers ssl module / openssl broken for some sites
http://bugs.python.org/issue11736  opened by kiilerix

#11738: ThreadSignals.test_signals() of test_threadsignals hangs on PP
http://bugs.python.org/issue11738  opened by haypo



Most recent 15 issues with no replies (15)
==========================================

#11736: windows installers ssl module / openssl broken for some sites
http://bugs.python.org/issue11736

#11731: Simplify email API via 'policy' objects
http://bugs.python.org/issue11731

#11730: Setting sys.stdin to an invalid input stream causes interprete
http://bugs.python.org/issue11730

#11726: linecache becomes specific to Python scripts in Python 3
http://bugs.python.org/issue11726

#11719: test_msilib skip unexpected on non-Windows platforms
http://bugs.python.org/issue11719

#11718: Teach IDLE's open-module command to find packages
http://bugs.python.org/issue11718

#11710: Landing pages in docs for standard library packages
http://bugs.python.org/issue11710

#11708: argparse: suggestion for formatting optional positional args
http://bugs.python.org/issue11708

#11707: Create C version of functools.cmp_to_key()
http://bugs.python.org/issue11707

#11701: email.parser.BytesParser() uses TextIOWrapper
http://bugs.python.org/issue11701

#11699: Documentation for get_option_group is wrong
http://bugs.python.org/issue11699

#11698: Improve repr for structseq objects to show named, but unindexe
http://bugs.python.org/issue11698

#11695: Improve argparse usage/help customization
http://bugs.python.org/issue11695

#11694: xdrlib raises ConversionError in inconsistent way
http://bugs.python.org/issue11694

#11690: Devguide: Add "communication" FAQ
http://bugs.python.org/issue11690



Most recent 15 issues waiting for review (15)
=============================================

#11734: Add half-float (16-bit) support to struct module
http://bugs.python.org/issue11734

#11732: Skip decorator for tests requiring manual intervention on Wind
http://bugs.python.org/issue11732

#11731: Simplify email API via 'policy' objects
http://bugs.python.org/issue11731

#11723: No proper support for mingw64 - patch to add
http://bugs.python.org/issue11723

#11719: test_msilib skip unexpected on non-Windows platforms
http://bugs.python.org/issue11719

#11717: conflicting definition of ssize_t in pyconfig.h
http://bugs.python.org/issue11717

#11715: Building Python on multiarch Debian and Ubuntu
http://bugs.python.org/issue11715

#11714: threading.Semaphore does not use try...finally
http://bugs.python.org/issue11714

#11709: help-method crashes if sys.stdin is None
http://bugs.python.org/issue11709

#11703: Bug in python >= 2.7 with urllib2 fragment
http://bugs.python.org/issue11703

#11702: dir on return value of msilib.OpenDatabase() crashes python
http://bugs.python.org/issue11702

#11700: mailbox.py proxy updates
http://bugs.python.org/issue11700

#11691: sqlite3 Cursor.description doesn't set type_code
http://bugs.python.org/issue11691

#11689: sqlite: Incorrect unit test fails to detect failure
http://bugs.python.org/issue11689

#11688: SQLite trace callback
http://bugs.python.org/issue11688



Top 10 most discussed issues (10)
=================================

#6498: Py_Main() does not return on SystemExit
http://bugs.python.org/issue6498  21 msgs

#11549: Rewrite peephole to work on AST
http://bugs.python.org/issue11549  15 msgs

#1294959: Problems with /usr/lib64 builds.
http://bugs.python.org/issue1294959  12 msgs

#8052: subprocess close_fds behavior should only close open fds
http://bugs.python.org/issue8052  10 msgs

#11340: test_distutils fails
http://bugs.python.org/issue11340   9 msgs

#11647: function decorated with a context manager can only be invoked 
http://bugs.python.org/issue11647   8 msgs

#11678: Add support for Arch Linux to platform.linux_distributions()
http://bugs.python.org/issue11678   8 msgs

#11685: possible SQL injection into db APIs via table names... sqlite3
http://bugs.python.org/issue11685   8 msgs

#11610: Improving property to accept abstract methods
http://bugs.python.org/issue11610   8 msgs

#1690608: email.utils.formataddr() should be rfc2047 aware
http://bugs.python.org/issue1690608   8 msgs



Issues closed (62)
==================

#1128: msilib.Directory.make_short only handles file names with a sin
http://bugs.python.org/issue1128  closed by loewis

#2694: msilib file names check too strict ?
http://bugs.python.org/issue2694  closed by loewis

#4676: python3 closes + home keys
http://bugs.python.org/issue4676  closed by kbk

#5872: New C API for declaring Python types
http://bugs.python.org/issue5872  closed by loewis

#6457: subprocess.Popen.communicate can lose data from output/error s
http://bugs.python.org/issue6457  closed by rosslagerwall

#7124: idle.py -n : help() doesn't work in a reopened shell window
http://bugs.python.org/issue7124  closed by sandro.tosi

#7440: distutils shows incorrect Python version in MSI installers
http://bugs.python.org/issue7440  closed by loewis

#7639: bdist_msi fails on files with long names
http://bugs.python.org/issue7639  closed by python-dev

#8150: urllib needs ability to set METHOD for HTTP requests
http://bugs.python.org/issue8150  closed by brian.curtin

#8554: suspicious comment in msilib.py/__init__.py
http://bugs.python.org/issue8554  closed by loewis

#8624: Aliasing warnings in multiprocessing.c
http://bugs.python.org/issue8624  closed by sandro.tosi

#8919: python should read ~/.pythonrc.py by default
http://bugs.python.org/issue8919  closed by eric.araujo

#8976: subprocess module causes segmentation fault
http://bugs.python.org/issue8976  closed by rosslagerwall

#8982: argparse docs cross reference Namespace as a class but the Nam
http://bugs.python.org/issue8982  closed by bethard

#9026: argparse subcommands not printed in the same order they were a
http://bugs.python.org/issue9026  closed by bethard

#9181: Solaris extension building does not work with 64 bit python
http://bugs.python.org/issue9181  closed by pitrou

#9331: sys.setprofile is not described as CPython implementation deta
http://bugs.python.org/issue9331  closed by sandro.tosi

#9343: Document that argparse "parents" must be fully declared before
http://bugs.python.org/issue9343  closed by bethard

#9348: Calling argparse's add_argument with the wrong number of metav
http://bugs.python.org/issue9348  closed by bethard

#9557: test_mailbox failure under a Windows VM
http://bugs.python.org/issue9557  closed by r.david.murray

#9652: Enhance argparse help output customizability
http://bugs.python.org/issue9652  closed by bethard

#9653: New default argparse output to be added
http://bugs.python.org/issue9653  closed by bethard

#9696: xdrlib's pack_int generates DeprecationWarnings for negative i
http://bugs.python.org/issue9696  closed by mark.dickinson

#9929: subprocess.Popen unbuffered not work
http://bugs.python.org/issue9929  closed by rosslagerwall

#10219: BufferedReader.read1 does not check for closed file
http://bugs.python.org/issue10219  closed by amaury.forgeotdarc

#10234: ResourceWarnings in test_subprocess
http://bugs.python.org/issue10234  closed by sandro.tosi

#10617: Collections ABCs can't be linked to
http://bugs.python.org/issue10617  closed by ezio.melotti

#10680: argparse: titles and add_mutually_exclusive_group don't mix (e
http://bugs.python.org/issue10680  closed by bethard

#10998: Remove last traces of -Q / sys.flags.division_warning / Py_Div
http://bugs.python.org/issue10998  closed by eric.araujo

#11144: int(float) may return a long for no reason
http://bugs.python.org/issue11144  closed by mark.dickinson

#11174: add argparse formatting option to display type names for metav
http://bugs.python.org/issue11174  closed by bethard

#11256: inspect.getcallargs raises TypeError on valid arguments
http://bugs.python.org/issue11256  closed by python-dev

#11284: slow close file descriptors in subprocess, popen2, os.popen*
http://bugs.python.org/issue11284  closed by rosslagerwall

#11370: Fix distutils to carry configure's LIBS through to extension m
http://bugs.python.org/issue11370  closed by jszakmeister

#11393: Integrate faulthandler module into Python 3.3
http://bugs.python.org/issue11393  closed by haypo

#11584: email.decode_header fails if msg.__getitem__ returns Header ob
http://bugs.python.org/issue11584  closed by r.david.murray

#11635: concurrent.futures uses polling
http://bugs.python.org/issue11635  closed by pitrou

#11639: Documentation for *Config functions in logging module should b
http://bugs.python.org/issue11639  closed by vinay.sajip

#11655: map() must not swallow exceptions from PyObject_GetIter
http://bugs.python.org/issue11655  closed by terry.reedy

#11659: Fix ResourceWarning in test_subprocess
http://bugs.python.org/issue11659  closed by rosslagerwall

#11662: Redirect vulnerability in urllib/urllib2
http://bugs.python.org/issue11662  closed by gvanrossum

#11663: multiprocessing (and concurrent.futures) doesn't detect killed
http://bugs.python.org/issue11663  closed by haypo

#11666: Teach pydoc to display full help for named tuples
http://bugs.python.org/issue11666  closed by rhettinger

#11672: multiprocessing.Array fails if size parameter is a long
http://bugs.python.org/issue11672  closed by mark.dickinson

#11673: RawArray does not accept long
http://bugs.python.org/issue11673  closed by mark.dickinson

#11675: multiprocessing Arrays not automatically zeroed.
http://bugs.python.org/issue11675  closed by mark.dickinson

#11680: decimal module generates AttributeError: on call to as_integer
http://bugs.python.org/issue11680  closed by mark.dickinson

#11687: distutils register does not work from the command line
http://bugs.python.org/issue11687  closed by brian.curtin

#11692: subprocess demo functions
http://bugs.python.org/issue11692  closed by rosslagerwall

#11696: msilib.make_id() is not safe for non ASCII characters.
http://bugs.python.org/issue11696  closed by python-dev

#11706: Build from hg fails in Modules/getbuildinfo.c when built using
http://bugs.python.org/issue11706  closed by dmalcolm

#11711: socketpair does not accept AF_INET family argument [Linux]
http://bugs.python.org/issue11711  closed by Cloudberry

#11712: Doc list.sort(cmp=,key=) result.
http://bugs.python.org/issue11712  closed by terry.reedy

#11713: collections.deque docstring wrong/misleading
http://bugs.python.org/issue11713  closed by rhettinger

#11716: mixing calls to io.TextIOWrapper.write and io.BufferedWriter.w
http://bugs.python.org/issue11716  closed by Sean.Sherrard

#11720: PyErr_WriteUnraisable while running cProfile
http://bugs.python.org/issue11720  closed by tleeuwenburg at gmail.com

#11721: socket.accept() with a timout socket creates bogus socket
http://bugs.python.org/issue11721  closed by krisvale

#11724: concurrent.futures: executor.submit() runs until completion ev
http://bugs.python.org/issue11724  closed by pitrou

#11725: httplib and urllib2 failed ssl connection httplib.BadStatusLin
http://bugs.python.org/issue11725  closed by ned.deily

#11727: Add a --timeout option to regrtest.py using the faulthandler m
http://bugs.python.org/issue11727  closed by haypo

#11735: Python Crash on strftime with %f
http://bugs.python.org/issue11735  closed by amaury.forgeotdarc

#11737: and is not a logical conjugation
http://bugs.python.org/issue11737  closed by ezio.melotti

From tjreedy at udel.edu  Fri Apr  1 18:08:01 2011
From: tjreedy at udel.edu (Terry Reedy)
Date: Fri, 01 Apr 2011 12:08:01 -0400
Subject: [Python-Dev] Please revert autofolding of tracker edit form
In-Reply-To: <in4aae$ran$2@dough.gmane.org>
References: <in07nd$5nb$1@dough.gmane.org>	<AANLkTimtQVdm-9_cveQtbeCdkOj6cQ7C=Qa+qMqp+zBY@mail.gmail.com>	<20110331040513.A82B54B71F@kimball.webabinitio.net>	<19860.23698.260679.739252@montanaro.dyndns.org>	<19860.25443.131458.388306@montanaro.dyndns.org>	<20110331141612.5ad2097a@pitrou.net>	<AANLkTik927x3CBJHdrXBuGNkOhtZCV9NSS7A_4UHnaoB@mail.gmail.com>	<1301579959.3535.10.camel@localhost.localdomain>	<in2bg7$v5q$1@dough.gmane.org>	<20110401022636.14945972@pitrou.net>	<in3io8$8a7$1@dough.gmane.org>
	<in4aae$ran$2@dough.gmane.org>
Message-ID: <in4t8v$kql$1@dough.gmane.org>

On 4/1/2011 6:44 AM, Georg Brandl wrote:
> Am 01.04.2011 06:02, schrieb Terry Reedy:
>
>> would switch. Just forgot here. Multiply everything by 2.4 for cm.
>
> Or by 2.54, if you're using SI cm :)

Then it's a good thing I did the conversions with a dual-scale ruler ;-).
So the numbers were accurate.
I envy all of you who only have to learn and use one relatively sensible 
unit system.

-- 
Terry Jan Reedy


From rdmurray at bitdance.com  Fri Apr  1 18:17:38 2011
From: rdmurray at bitdance.com (R. David Murray)
Date: Fri, 01 Apr 2011 12:17:38 -0400
Subject: [Python-Dev] devguide: Add a table of contents to the FAQ.
In-Reply-To: <in4afj$ran$4@dough.gmane.org>
References: <E1Q4e9O-0001X5-Jr@dinsdale.python.org>
	<20110330222004.5b23bcc5@pitrou.net> <4D94A21B.9040501@gmail.com>
	<20110331163426.C4ABCD64A7@kimball.webabinitio.net>
	<AANLkTinJ+4SeLBHzQkwBgtQppq=ow-UJx2rtZPQhGvMH@mail.gmail.com>
	<20110331231125.7B06B2B673@kimball.webabinitio.net>
	<in4afj$ran$4@dough.gmane.org>
Message-ID: <20110401161734.22B1717569F@kimball.webabinitio.net>

On Fri, 01 Apr 2011 12:47:12 +0200, Georg Brandl <g.brandl at gmx.net> wrote:
> Am 01.04.2011 01:12, schrieb R. David Murray:
> > On Fri, 01 Apr 2011 08:29:29 +1000, Nick Coghlan <ncoghlan at gmail.com> wrote:
> >> On Fri, Apr 1, 2011 at 2:34 AM, R. David Murray <rdmurray at bitdance.com> wrote:
> >> > I agree with this point.  The sidebar list of questions is effectively
> >> > useless.
> >> 
> >> Indeed. If it's simple, I'd actually be inclined to reduce the depth
> >> of the sidebar in this case to only show the categories and not the
> >> individual questions.
> > 
> > I believe that requires editing the sphinx page template and adding
> > a special case of some sort.
> 
> Use
> 
> :tocdepth: x
> 
> at the top of the rst file.

Ah, nice.

--
R. David Murray           http://www.bitdance.com

From solipsis at pitrou.net  Fri Apr  1 18:30:14 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Fri, 1 Apr 2011 18:30:14 +0200
Subject: [Python-Dev] Issue 11715: building Python from source on
 multiarch Debian/Ubuntu
References: <20110330161709.756b27f7@neurotica.wooz.org>
	<4D94BB4D.8030405@netwok.org> <in4aek$ran$3@dough.gmane.org>
	<4D95BDB5.9080601@voidspace.org.uk> <in4gk5$4h4$1@dough.gmane.org>
	<4D95C716.4090502@voidspace.org.uk> <4D95C849.301@netwok.org>
	<4D95D70D.2050303@voidspace.org.uk> <4D95D7D5.6010304@netwok.org>
	<4D95DC16.9060905@voidspace.org.uk>
	<20110401111727.4f8d0e29@neurotica.wooz.org>
Message-ID: <20110401183014.1df42de4@pitrou.net>

On Fri, 1 Apr 2011 11:17:27 -0400
Barry Warsaw <barry at python.org> wrote:
> 
> Yeah, I know what I said before but I really am still on the fence about
> non-behavior changing fixes.  Both sides have valid positions, IMO. :/

Well, how can you be sure it's non-behaviour changing? A bugfix can
always introduce a regression.

> However, because of the hg forward porting policy, I would like to decide
> asap on how far back to port the fix.  As I see it, the patch is
> uncontroversial for 3.3, 3.2, and 2.7.  And it definitely will not be applied
> to 3.0.  That leaves 2.5, 2.6, and 3.1.

I think it's Martin who ultimately decides what goes into 2.5. He
seemed quite conservative about it.

Regards

Antoine.



From solipsis at pitrou.net  Fri Apr  1 18:31:10 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Fri, 1 Apr 2011 18:31:10 +0200
Subject: [Python-Dev] Issue 11715: building Python from source on
 multiarch Debian/Ubuntu
References: <20110330161709.756b27f7@neurotica.wooz.org>
	<4D94BB4D.8030405@netwok.org> <in4aek$ran$3@dough.gmane.org>
	<4D95BDB5.9080601@voidspace.org.uk> <in4gk5$4h4$1@dough.gmane.org>
	<4D95C716.4090502@voidspace.org.uk>
	<20110401144911.39ee36ce@pitrou.net> <in4rki$9d7$1@dough.gmane.org>
Message-ID: <20110401183110.7181fdbe@pitrou.net>

On Fri, 01 Apr 2011 17:39:59 +0200
Georg Brandl <g.brandl at gmx.net> wrote:
> Am 01.04.2011 14:49, schrieb Antoine Pitrou:
> > On Fri, 01 Apr 2011 13:37:42 +0100
> > Michael Foord <fuzzyman at voidspace.org.uk> wrote:
> >> > I think I was unclear: I'm not advocating doing doc fixes in security-only
> >> > branches; I'm just explaining why it wouldn't even make sense to do these
> >> > fixes.
> >> >
> >> I understood. I was suggesting we modify to allow doc changes that fix 
> >> errors and push updated docs *online* (not do fresh releases) and asking 
> >> why not do that (other than policy)?
> > 
> > Well, I think the tradeoff is simply: do you want to do more work?
> > (or, given the same amount of work, do you think allocating your
> > workforce to backporting doc fixes is worthwhile?)
> 
> Absolutely.  I don't want to maintain that infrastructure.

But perhaps Michael would like to maintain it? He could be given an
account on dinsdale if he wants to.

Regards

Antoine.



From ncoghlan at gmail.com  Fri Apr  1 18:48:53 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sat, 2 Apr 2011 02:48:53 +1000
Subject: [Python-Dev] Issue 11715: building Python from source on
 multiarch Debian/Ubuntu
In-Reply-To: <20110401183110.7181fdbe@pitrou.net>
References: <20110330161709.756b27f7@neurotica.wooz.org>
	<4D94BB4D.8030405@netwok.org> <in4aek$ran$3@dough.gmane.org>
	<4D95BDB5.9080601@voidspace.org.uk> <in4gk5$4h4$1@dough.gmane.org>
	<4D95C716.4090502@voidspace.org.uk>
	<20110401144911.39ee36ce@pitrou.net> <in4rki$9d7$1@dough.gmane.org>
	<20110401183110.7181fdbe@pitrou.net>
Message-ID: <AANLkTin5qC9u6X_i2JzFjpnsS3Z9D4dp9PwAhtLXExLD@mail.gmail.com>

On Sat, Apr 2, 2011 at 2:31 AM, Antoine Pitrou <solipsis at pitrou.net> wrote:
> On Fri, 01 Apr 2011 17:39:59 +0200
> Georg Brandl <g.brandl at gmx.net> wrote:
>> Am 01.04.2011 14:49, schrieb Antoine Pitrou:
>> > Well, I think the tradeoff is simply: do you want to do more work?
>> > (or, given the same amount of work, do you think allocating your
>> > workforce to backporting doc fixes is worthwhile?)
>>
>> Absolutely.  I don't want to maintain that infrastructure.
>
> But perhaps Michael would like to maintain it? He could be given an
> account on dinsdale if he wants to.

As Terry pointed out, better to point people to the 2.7 docs, and
remind them to keep an eye out for "new in 2.7" or "changed in 2.7" if
they're using 2.6.

Really, the older versions should only be referenced if you're looking
at an offline version, or you want information on a deprecated feature
that doesn't exist in the latest version.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From ethan at stoneleaf.us  Fri Apr  1 19:39:30 2011
From: ethan at stoneleaf.us (Ethan Furman)
Date: Fri, 01 Apr 2011 10:39:30 -0700
Subject: [Python-Dev] faulthandler is now part of Python 3.3
In-Reply-To: <1301536474.23107.16.camel@marge>
References: <1301536474.23107.16.camel@marge>
Message-ID: <4D960DD2.6020805@stoneleaf.us>

Victor Stinner wrote:
> I pushed my faulthandler module into the default branch (Python 3.3).
> Over the past week, I fixed a lot of bugs (platform issues), improved the
> tests and Antoine wrote a new implementation of dump_backtraces_later()
> using a thread (instead of SIGALRM+alarm()). It should now work on all
> platforms (but register() is not available on Windows).
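
For anyone who has not tried the module yet, basic usage is roughly the
following -- a minimal sketch based only on the description quoted above,
not a tested recipe:

    import faulthandler

    # Install handlers for SIGSEGV, SIGFPE, SIGABRT, SIGBUS and SIGILL so
    # that a fatal error dumps the Python traceback instead of dying silently.
    faulthandler.enable()

    # Dump the Python traceback to stderr on demand (handy from a debug hook).
    faulthandler.dump_traceback()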


I apologize -- I'm not going to have time to test this myself, and I'm 
really curious to know if it works:

Issue11603 describes a problem where Python either hangs or crashes 
depending on Python version/OS version... does the faulthandler work for 
this problem?

Remodeling-at-home-and-swamped-at-work-ly yours,
~Ethan~

From martin at v.loewis.de  Fri Apr  1 21:40:51 2011
From: martin at v.loewis.de ("Martin v. Löwis")
Date: Fri, 01 Apr 2011 21:40:51 +0200
Subject: [Python-Dev] Impaired Usability of the Mercurial Source Viewer
In-Reply-To: <571B3262-D22D-433A-B79B-9015BAC49FE1@gmail.com>
References: <BE0A9A1F-89FA-4ECF-9CC1-8B09027370FF@gmail.com>	<4D952A47.6060707@haypocalc.com>
	<571B3262-D22D-433A-B79B-9015BAC49FE1@gmail.com>
Message-ID: <4D962A43.6040906@v.loewis.de>

> That's *way* better:
> 
>   https://bitbucket.org/mirror/cpython/src/3558eecd84f0/Lib/linecache.py
> 
> Why can't we have that for our primary source viewer.

Would you like to install this, or something else, or change the
templates? If so, please let me know so I can give you access to
dinsdale.

Regards,
Martin

From martin at v.loewis.de  Fri Apr  1 21:47:07 2011
From: martin at v.loewis.de ("Martin v. Löwis")
Date: Fri, 01 Apr 2011 21:47:07 +0200
Subject: [Python-Dev] Issue 11715: building Python from source on
 multiarch Debian/Ubuntu
In-Reply-To: <1301655247.6531.65.camel@tim-laptop>
References: <20110330161709.756b27f7@neurotica.wooz.org>	<4D94BB4D.8030405@netwok.org>	<AANLkTi=xHN2jffUkjz=7m+dbVvjernOQEg1_cHk+SQ+q@mail.gmail.com>
	<1301655247.6531.65.camel@tim-laptop>
Message-ID: <4D962BBB.6040205@v.loewis.de>

> FWIW - I maintain legacy code for python2.4, and 2.5 (mainly 2.5).
[...]
> As a result, I'm very much +1 on integrating this patch to previous
> versions.

Updating 2.4 is clearly out of question; and I veto changing 2.5 in
that respect.

> I develop on Ubuntu (and will probably update to 11.04 in a few months)
> - so this will directly affect me.

I think it is really Ubuntu's fault, not Python's, that it fails to
build. They fail to provide backwards compatibility. It also seems to me
that they fail to comply with the FHS with that change...

In any case, it's not that you can't build Python 2.4 anymore on Ubuntu
11.04. You just have to edit Modules/Setup (which *is* a standard build
procedure) to point it to the right library paths and names.

> Even if their servers won't run ubuntu 11.04+ (or something with the
> same library paths), their development environments will.

They can also patch the Python releases themselves, or use Ubuntu
packages that someone else made for them (they can probably just install
the old 2.4 packages just fine).

Regards,
Martin

From martin at v.loewis.de  Fri Apr  1 21:48:46 2011
From: martin at v.loewis.de ("Martin v. Löwis")
Date: Fri, 01 Apr 2011 21:48:46 +0200
Subject: [Python-Dev] Issue 11715: building Python from source
 on	multiarch Debian/Ubuntu
In-Reply-To: <in4aek$ran$3@dough.gmane.org>
References: <20110330161709.756b27f7@neurotica.wooz.org>	<4D94BB4D.8030405@netwok.org>
	<in4aek$ran$3@dough.gmane.org>
Message-ID: <4D962C1E.7000509@v.loewis.de>

> I wouldn't say doc fixes are not acceptable, but they are rather pointless
> since there won't be any more online docs or released docs for those versions.

That's the reason I don't want to see them in the tree, though - if
people commit something, they expect to see it released at some point.

So by refusing these changes, I hope to reduce the frustration of not
getting them released.

Regards,
Martin

From martin at v.loewis.de  Fri Apr  1 21:50:14 2011
From: martin at v.loewis.de ("Martin v. Löwis")
Date: Fri, 01 Apr 2011 21:50:14 +0200
Subject: [Python-Dev] Issue 11715: building Python from source
 on	multiarch Debian/Ubuntu
In-Reply-To: <4D95C716.4090502@voidspace.org.uk>
References: <20110330161709.756b27f7@neurotica.wooz.org>	<4D94BB4D.8030405@netwok.org>	<in4aek$ran$3@dough.gmane.org>	<4D95BDB5.9080601@voidspace.org.uk>	<in4gk5$4h4$1@dough.gmane.org>
	<4D95C716.4090502@voidspace.org.uk>
Message-ID: <4D962C76.1040703@v.loewis.de>

> I understood. I was suggesting we modify to allow doc changes that fix
> errors and push updated docs *online* (not do fresh releases) and asking
> why not do that (other than policy)?

It's too much effort in the release process. I don't actually remember
anymore how to do 2.5 documentation releases.

Regards,
Martin

From martin at v.loewis.de  Fri Apr  1 21:52:28 2011
From: martin at v.loewis.de ("Martin v. Löwis")
Date: Fri, 01 Apr 2011 21:52:28 +0200
Subject: [Python-Dev] Issue 11715: building Python from source
 on	multiarch Debian/Ubuntu
In-Reply-To: <4D95BDC1.70504@trueblade.com>
References: <20110330161709.756b27f7@neurotica.wooz.org>	<4D94BB4D.8030405@netwok.org>	<in4aek$ran$3@dough.gmane.org>
	<4D95BDC1.70504@trueblade.com>
Message-ID: <4D962CFC.3060102@v.loewis.de>

> And I don't see a problem with build fixes. It's not like we're adding
> language features. If it makes someone's life easier, then what's the harm?

It's extra work with no volunteer doing it.

Regards,
Martin

From eric at trueblade.com  Fri Apr  1 21:54:54 2011
From: eric at trueblade.com (Eric Smith)
Date: Fri, 01 Apr 2011 15:54:54 -0400
Subject: [Python-Dev] Issue 11715: building Python from source
 on	multiarch Debian/Ubuntu
In-Reply-To: <4D962CFC.3060102@v.loewis.de>
References: <20110330161709.756b27f7@neurotica.wooz.org>	<4D94BB4D.8030405@netwok.org>	<in4aek$ran$3@dough.gmane.org>
	<4D95BDC1.70504@trueblade.com> <4D962CFC.3060102@v.loewis.de>
Message-ID: <4D962D8E.6040009@trueblade.com>

On 4/1/2011 3:52 PM, "Martin v. Löwis" wrote:
>> And I don't see a problem with build fixes. It's not like we're adding
>> language features. If it makes someone's life easier, then what's the harm?
> 
> It's extra work with no volunteer doing it.

I understood Barry was volunteering. Certainly if no one is motivated to
do the work, it won't get done.

Eric.

From martin at v.loewis.de  Fri Apr  1 22:01:32 2011
From: martin at v.loewis.de ("Martin v. Löwis")
Date: Fri, 01 Apr 2011 22:01:32 +0200
Subject: [Python-Dev] Issue 11715: building Python from source on
 multiarch Debian/Ubuntu
In-Reply-To: <20110401110350.3d193447@neurotica.wooz.org>
References: <20110330161709.756b27f7@neurotica.wooz.org>	<4D94BB4D.8030405@netwok.org>
	<in4aek$ran$3@dough.gmane.org>	<4D95BDC1.70504@trueblade.com>
	<20110401140747.5366c5cd@pitrou.net>
	<20110401110350.3d193447@neurotica.wooz.org>
Message-ID: <4D962F1C.70307@v.loewis.de>

Am 01.04.2011 17:03, schrieb Barry Warsaw:
> I think there's no harm in build system or doc fixes that will have
> no effect on functionality. 

I do believe that the build system changes can actually break things.
The first version of your patch produced additional output on stderr,
which may cause breakage on build infrastructures that filter the build
output (and, say, suddenly start sending cron email messages, every
fifteen minutes).

Your current change creates the temp build directories if they aren't
there. This may cause breakage on systems that create them themselves at
some point, and then fail because the directories are already there.

The change can also break build systems that patch setup.py, and now
fail since the patch doesn't apply anymore.

*Any* change to behavior can potentially break something. In a
security-only release, the only acceptable tradeoff to this breakage is
that a security concern is resolved in return for the breakage.

Regards,
Martin

From martin at v.loewis.de  Fri Apr  1 22:04:23 2011
From: martin at v.loewis.de ("Martin v. Löwis")
Date: Fri, 01 Apr 2011 22:04:23 +0200
Subject: [Python-Dev] Issue 11715: building Python from source
 on	multiarch Debian/Ubuntu
In-Reply-To: <4D962D8E.6040009@trueblade.com>
References: <20110330161709.756b27f7@neurotica.wooz.org>	<4D94BB4D.8030405@netwok.org>	<in4aek$ran$3@dough.gmane.org>
	<4D95BDC1.70504@trueblade.com> <4D962CFC.3060102@v.loewis.de>
	<4D962D8E.6040009@trueblade.com>
Message-ID: <4D962FC7.6040409@v.loewis.de>

Am 01.04.2011 21:54, schrieb Eric Smith:
> On 4/1/2011 3:52 PM, "Martin v. Löwis" wrote:
>>> And I don't see a problem with build fixes. It's not like we're adding
>>> language features. If it makes someone's life easier, then what's the harm?
>>
>> It's extra work with no volunteer doing it.
> 
> I understood Barry was volunteering. Certainly if no one is motivated to
> do the work, it won't get done.

Ah, I somehow misread that you were talking about documentation changes
(where Barry didn't volunteer to produce the documentation set for an
upcoming 2.5 release - but for the change at hand, it wouldn't be
necessary, either).

Wrt. the build changes, I think they can actually break stuff, and
therefore shouldn't be applied - see my other message.

Regards,
Martin

From v+python at g.nevcal.com  Fri Apr  1 22:06:47 2011
From: v+python at g.nevcal.com (Glenn Linderman)
Date: Fri, 01 Apr 2011 13:06:47 -0700
Subject: [Python-Dev] Please revert autofolding of tracker edit form
In-Reply-To: <in4t8v$kql$1@dough.gmane.org>
References: <in07nd$5nb$1@dough.gmane.org>	<AANLkTimtQVdm-9_cveQtbeCdkOj6cQ7C=Qa+qMqp+zBY@mail.gmail.com>	<20110331040513.A82B54B71F@kimball.webabinitio.net>	<19860.23698.260679.739252@montanaro.dyndns.org>	<19860.25443.131458.388306@montanaro.dyndns.org>	<20110331141612.5ad2097a@pitrou.net>	<AANLkTik927x3CBJHdrXBuGNkOhtZCV9NSS7A_4UHnaoB@mail.gmail.com>	<1301579959.3535.10.camel@localhost.localdomain>	<in2bg7$v5q$1@dough.gmane.org>	<20110401022636.14945972@pitrou.net>	<in3io8$8a7$1@dough.gmane.org>	<in4aae$ran$2@dough.gmane.org>
	<in4t8v$kql$1@dough.gmane.org>
Message-ID: <4D963057.3020509@g.nevcal.com>

On 4/1/2011 9:08 AM, Terry Reedy wrote:
>
> I envy all of you who only have to learn and use one relatively 
> sensible unit system. 

Me too.  But anyone who calls themselves a programmer should be able to 
realize that the numbers are proportional and Google happily finds 
online conversion calculators.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110401/fd700f97/attachment-0001.html>

From g.brandl at gmx.net  Fri Apr  1 22:24:51 2011
From: g.brandl at gmx.net (Georg Brandl)
Date: Fri, 01 Apr 2011 22:24:51 +0200
Subject: [Python-Dev] Issue 11715: building Python from source on
	multiarch Debian/Ubuntu
In-Reply-To: <20110401183110.7181fdbe@pitrou.net>
References: <20110330161709.756b27f7@neurotica.wooz.org>	<4D94BB4D.8030405@netwok.org>
	<in4aek$ran$3@dough.gmane.org>	<4D95BDB5.9080601@voidspace.org.uk>
	<in4gk5$4h4$1@dough.gmane.org>	<4D95C716.4090502@voidspace.org.uk>	<20110401144911.39ee36ce@pitrou.net>
	<in4rki$9d7$1@dough.gmane.org> <20110401183110.7181fdbe@pitrou.net>
Message-ID: <in5cao$e9i$1@dough.gmane.org>

Am 01.04.2011 18:31, schrieb Antoine Pitrou:
> On Fri, 01 Apr 2011 17:39:59 +0200
> Georg Brandl <g.brandl at gmx.net> wrote:
>> Am 01.04.2011 14:49, schrieb Antoine Pitrou:
>> > On Fri, 01 Apr 2011 13:37:42 +0100
>> > Michael Foord <fuzzyman at voidspace.org.uk> wrote:
>> >> > I think I was unclear: I'm not advocating doing doc fixes in security-only
>> >> > branches; I'm just explaining why it wouldn't even make sense to do these
>> >> > fixes.
>> >> >
>> >> I understood. I was suggesting we modify to allow doc changes that fix 
>> >> errors and push updated docs *online* (not do fresh releases) and asking 
>> >> why not do that (other than policy)?
>> > 
>> > Well, I think the tradeoff is simply: do you want to do more work?
>> > (or, given the same amount of work, do you think allocating your
>> > workforce to backporting doc fixes is worthwhile?)
>> 
>> Absolutely.  I don't want to maintain that infrastructure.
> 
> But perhaps Michael would like to maintain it? He could be given an
> account on dinsdale if he wants to.

I seriously doubt that Michael would like to resurrect the old tex2html
toolchain, and personally I also think that there are much better things
he can do with his volunteer time...

Georg


From raymond.hettinger at gmail.com  Fri Apr  1 22:41:14 2011
From: raymond.hettinger at gmail.com (Raymond Hettinger)
Date: Fri, 1 Apr 2011 13:41:14 -0700
Subject: [Python-Dev] Impaired Usability of the Mercurial Source Viewer
In-Reply-To: <4D962A43.6040906@v.loewis.de>
References: <BE0A9A1F-89FA-4ECF-9CC1-8B09027370FF@gmail.com>	<4D952A47.6060707@haypocalc.com>
	<571B3262-D22D-433A-B79B-9015BAC49FE1@gmail.com>
	<4D962A43.6040906@v.loewis.de>
Message-ID: <195CC7B1-DF26-4D7B-83F6-CA124718F327@gmail.com>


On Apr 1, 2011, at 12:40 PM, Martin v. Löwis wrote:

>> That's *way* better:
>> 
>>  https://bitbucket.org/mirror/cpython/src/3558eecd84f0/Lib/linecache.py
>> 
>> Why can't we have that for our primary source viewer.
> 
> Would you like to install this, or something else, or change the
> templates? If so, please let me know so I can give you access to
> dinsdale.


Yes please.


Raymond


From arfrever.fta at gmail.com  Fri Apr  1 23:13:04 2011
From: arfrever.fta at gmail.com (Arfrever Frehtes Taifersar Arahesis)
Date: Fri, 1 Apr 2011 23:13:04 +0200
Subject: [Python-Dev] warn_unused_result warnings
In-Reply-To: <4D9528FE.2060609@haypocalc.com>
References: <AANLkTik96YJkR4oxYJRmNaZY9ymPX4ahOMAqfjUMDTfz@mail.gmail.com>
	<4D9528FE.2060609@haypocalc.com>
Message-ID: <201104012313.05699.Arfrever.FTA@gmail.com>

2011-04-01 03:23:10 Victor Stinner napisał(a):
> Le 01/04/2011 01:11, Benjamin Peterson a écrit :
> > I'm rather sick of seeing these warnings on all compiles, so I propose
> > we enable the -Wno-unused-results option. I judge that most of the
> > cases where this occurs are error reporting functions, where not much
> > can be done with the return code.
> Can't we try to fix the warnings instead of turning them off? Or is it 
> possible to only turn off these warnings on a specific function?

http://gcc.gnu.org/gcc-4.6/changes.html
"Support for selectively enabling and disabling warnings via #pragma GCC diagnostic has been added."

-- 
Arfrever Frehtes Taifersar Arahesis
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 836 bytes
Desc: This is a digitally signed message part.
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110401/bf054424/attachment.pgp>

From victor.stinner at haypocalc.com  Sat Apr  2 00:47:18 2011
From: victor.stinner at haypocalc.com (Victor Stinner)
Date: Sat, 02 Apr 2011 00:47:18 +0200
Subject: [Python-Dev] Use regrtest.py --timeout on buildbots
In-Reply-To: <1301589343.29322.7.camel@marge>
References: <1301589343.29322.7.camel@marge>
Message-ID: <1301698038.4439.8.camel@marge>

Le jeudi 31 mars 2011 à 18:35 +0200, Victor Stinner a écrit :
> Hi,
> 
> I just added a --timeout option to Lib/test/regrtest.py: if a test (one
> function, not a whole file) takes more than TIMEOUT seconds, the
> traceback is dumped and it exits. I tested it on 3 buildbots with a
> timeout of 5 minutes and it worked as expected: see #11727 for
> examples. 
> 
> It would be nice to have this feature enabled on all buildbots.

I enabled this with a timeout of 15 minutes. Thanks to this
timeout, I now have a traceback for the strange test_threadsignals
hang:
http://bugs.python.org/issue11738

But I also got 3 buildbots (2 FreeBSD, 1 Solaris) failing on test_io,
test_subprocess and test_signal.

I changed the default timeout to 30 minutes. The timeout was too long to
catch a test_ssl failure on Windows 7 (which uses a timeout of 20
minutes), but also long enough to avoid false positives on Solaris. The 2
FreeBSD buildbots still fail (test_io, test_socket).

I am not sure yet that the failures with timeouts of 15 or 30 minutes
are just false positives. For example, test_interrupted_write_buffered()
in test_io was interrupted after 30 minutes on "x86 FreeBSD 3.x", whereas
this test takes less than 5 seconds on my Linux box (and on my FreeBSD
VM).

Anyway, I am happy to have a working tool to get more information on
buildbot hangs. At least, it is possible to enable it temporarily to try
to learn more about a failure.

The timeout also protects the buildbots against bugs (hangs or infinite
loops) in Python or in the test suite: bugs are detected earlier.
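
For anyone curious, the watchdog idea itself is simple: arrange for the
tracebacks to be dumped if the test is still running after N seconds. A
rough sketch in pure Python (an illustration only, not the actual regrtest
code):

    import threading
    import faulthandler

    def start_watchdog(timeout):
        # After `timeout` seconds, dump the Python tracebacks to stderr so
        # a hanging test can be diagnosed.
        timer = threading.Timer(timeout, faulthandler.dump_traceback)
        timer.daemon = True
        timer.start()
        return timer

    watchdog = start_watchdog(15 * 60)
    # ... run one test ...
    watchdog.cancel()  # the test finished in time, so disarm the watchdog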

Victor


From barry at python.org  Sat Apr  2 01:52:53 2011
From: barry at python.org (Barry Warsaw)
Date: Fri, 1 Apr 2011 19:52:53 -0400
Subject: [Python-Dev] Issue 11715: building Python from source on
 multiarch Debian/Ubuntu
In-Reply-To: <4D962BBB.6040205@v.loewis.de>
References: <20110330161709.756b27f7@neurotica.wooz.org>
	<4D94BB4D.8030405@netwok.org>
	<AANLkTi=xHN2jffUkjz=7m+dbVvjernOQEg1_cHk+SQ+q@mail.gmail.com>
	<1301655247.6531.65.camel@tim-laptop>
	<4D962BBB.6040205@v.loewis.de>
Message-ID: <20110401195253.77e48df3@neurotica.wooz.org>

On Apr 01, 2011, at 09:47 PM, Martin v. Löwis wrote:

>> FWIW - I maintain legacy code for python2.4, and 2.5 (mainly 2.5).
>[...]
>> As a result, I'm very much +1 on integrating this patch to previous
>> versions.
>
>Updating 2.4 is clearly out of question; and I veto changing 2.5 in
>that respect.

Fair enough.  I respect your decision for 2.5.

>> I develop on Ubuntu (and will probably update to 11.04 in a few months)
>> - so this will directly affect me.
>
>I think it is really Ubuntu's fault, not Python's, that it fails to
>build. They fail to provide backwards compatibility. It also STM that
>they fail to comply to the FHS with that change...

When I saw this change happen, I did let out a little groan knowing what kind
of resistance I'd likely encounter in python-dev.  ;)

>In any case, it's not that you can't build Python 2.4 anymore on Ubuntu
>11.04. You just have to edit Modules/Setup (which *is* a standard build
>procedure) to point it to the right library paths and names.

Yes, but it's something I'd prefer not to do when cutting a release, because
that's also error prone and could mask problems that users would have.  But I
do agree that we've ruled out any future full releases of Python 2.6, so the
kind of testing I would normally go through before a release will not be
necessary.

>> Even if their servers won't run ubuntu 11.04+ (or something with the
>> same library paths), their development environments will.
>
>They can also patch the Python releases themselves, or use Ubuntu
>packages that someone else made for them (they can probably just install
>the old 2.4 packages just fine).

The Python 2.6, 2.7, and 3.2 packages in Ubuntu 11.04 already have essentially
the same patch that I posted, so folks using Python 2.6 from the operating
system will not have a problem.  Without this patch in our repository, folks
building Python 2.6 from source will have to be aware of it.

Since it's easy enough to backport the patch to 2.6 later should it be
necessary, I'll leave it alone.  I think we're still due one last bug fix release
of Python 3.1, right?  So that leaves applying this patch to 2.7, and 3.1
through 3.3.

-Barry
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 836 bytes
Desc: not available
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110401/963d2dee/attachment.pgp>

From solipsis at pitrou.net  Sat Apr  2 02:03:09 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sat, 2 Apr 2011 02:03:09 +0200
Subject: [Python-Dev] Issue 11715: building Python from source on
 multiarch Debian/Ubuntu
References: <20110330161709.756b27f7@neurotica.wooz.org>
	<4D94BB4D.8030405@netwok.org>
	<AANLkTi=xHN2jffUkjz=7m+dbVvjernOQEg1_cHk+SQ+q@mail.gmail.com>
	<1301655247.6531.65.camel@tim-laptop>
	<4D962BBB.6040205@v.loewis.de>
	<20110401195253.77e48df3@neurotica.wooz.org>
Message-ID: <20110402020309.7c7299c3@pitrou.net>


> >> Even if their servers won't run ubuntu 11.04+ (or something with the
> >> same library paths), their development environments will.
> >
> >They can also patch the Python releases themselves, or use Ubuntu
> >packages that someone else made for them (they can probably just install
> >the old 2.4 packages just fine).
> 
> The Python 2.6, 2.7, and 3.2 packages in Ubuntu 11.04 already have essentially
> the same patch that I posted, so folks using Python 2.6 from the operating
> system will not have a problem.  Without this patch in our repository, folks
> building Python 2.6 from source will have to be aware of it.

So let them use Python 2.6 from Ubuntu. Case closed!

cheers

Antoine.



From tjreedy at udel.edu  Sat Apr  2 03:24:57 2011
From: tjreedy at udel.edu (Terry Reedy)
Date: Fri, 01 Apr 2011 21:24:57 -0400
Subject: [Python-Dev] Issue 11715: building Python from source on
	multiarch Debian/Ubuntu
In-Reply-To: <20110401195253.77e48df3@neurotica.wooz.org>
References: <20110330161709.756b27f7@neurotica.wooz.org>	<4D94BB4D.8030405@netwok.org>	<AANLkTi=xHN2jffUkjz=7m+dbVvjernOQEg1_cHk+SQ+q@mail.gmail.com>	<1301655247.6531.65.camel@tim-laptop>	<4D962BBB.6040205@v.loewis.de>
	<20110401195253.77e48df3@neurotica.wooz.org>
Message-ID: <in5tt6$umg$1@dough.gmane.org>

On 4/1/2011 7:52 PM, Barry Warsaw wrote:

> necessary, I leave it alone.  I think we're still due one last bug fix release
> of Python 3.1, right?

Yes, hopefully soon.

-- 
Terry Jan Reedy


From stefan at bytereef.org  Sat Apr  2 10:55:45 2011
From: stefan at bytereef.org (Stefan Krah)
Date: Sat, 2 Apr 2011 10:55:45 +0200
Subject: [Python-Dev] Issue 11715: building Python from source
	on	multiarch Debian/Ubuntu
In-Reply-To: <20110402020309.7c7299c3@pitrou.net>
References: <20110330161709.756b27f7@neurotica.wooz.org>
	<4D94BB4D.8030405@netwok.org>
	<AANLkTi=xHN2jffUkjz=7m+dbVvjernOQEg1_cHk+SQ+q@mail.gmail.com>
	<1301655247.6531.65.camel@tim-laptop>
	<4D962BBB.6040205@v.loewis.de>
	<20110401195253.77e48df3@neurotica.wooz.org>
	<20110402020309.7c7299c3@pitrou.net>
Message-ID: <20110402085545.GA1381@sleipnir.bytereef.org>

Antoine Pitrou <solipsis at pitrou.net> wrote:
> > >> Even if their servers won't run ubuntu 11.04+ (or something with the
> > >> same library paths), their development environments will.
> > >
> > >They can also patch the Python releases themselves, or use Ubuntu
> > >packages that someone else made for them (they can probably just install
> > >the old 2.4 packages just fine).
> > 
> > The Python 2.6, 2.7, and 3.2 packages in Ubuntu 11.04 already have essentially
> > the same patch that I posted, so folks using Python 2.6 from the operating
> > system will not have a problem.  Without this patch in our repository, folks
> > building Python 2.6 from source will have to be aware of it.
> 
> So let them use Python 2.6 from Ubuntu. Case closed!

It isn't that simple. For example, I have automated scripts that test
cdecimal against decimal.py for *every* release from r25 to r32.

This is also the reason why I was unhappy that r25 did not build from
Mercurial initially.


There has been a lot of churn lately for module authors, starting with
__pycache__ and cpython-32m.so suffixes and ending with the Mercurial
transition.


In this case, it's clearly Ubuntu who is going to break things. Still,
the proposed patch could make life a lot easier for many people.


Stefan Krah



From techtonik at gmail.com  Sat Apr  2 15:00:53 2011
From: techtonik at gmail.com (anatoly techtonik)
Date: Sat, 2 Apr 2011 16:00:53 +0300
Subject: [Python-Dev] Python 3.3 release schedule posted
In-Reply-To: <imdj8n$dq0$1@dough.gmane.org>
References: <imdj8n$dq0$1@dough.gmane.org>
Message-ID: <BANLkTim+uBdXYQOYLmGMtktEoqs0Scd+KA@mail.gmail.com>

On Wed, Mar 23, 2011 at 9:56 PM, Georg Brandl <g.brandl at gmx.net> wrote:
> I've posted a very preliminary Python 3.3 release schedule as PEP 398.
> The final release is set to be about 18 months after 3.2 final, which
> is in August 2012.

Why isn't this being added to the Release Calendar on the front page?
Do you have an estimate for the Python 3.2.1 release?

From techtonik at gmail.com  Sat Apr  2 15:06:53 2011
From: techtonik at gmail.com (anatoly techtonik)
Date: Sat, 2 Apr 2011 16:06:53 +0300
Subject: [Python-Dev] Unicode module names (Was: Python 3.3 release schedule
	posted)
Message-ID: <BANLkTimxErMr6e+Em-RGf+OmPvSFwKeCZA@mail.gmail.com>

On Thu, Mar 24, 2011 at 2:41 AM, Victor Stinner
<victor.stinner at haypocalc.com> wrote:
>
> I am still working on the import machinery to fix the last bugs related to
> Unicode. So it will be possible to do a useless "import café" in Python
> 3.3, on any platform. But it is not really a huge change (for the user,
> but a huge change in the code ;-)).

I don't like the idea of reading code with some kind of Chinese
variable names in it. I'd prefer that I-and-l confusion remain the
only thing I should have to care about in valid Python syntax.

From techtonik at gmail.com  Sat Apr  2 15:11:07 2011
From: techtonik at gmail.com (anatoly techtonik)
Date: Sat, 2 Apr 2011 16:11:07 +0300
Subject: [Python-Dev] Decisions about workflow
In-Reply-To: <19854.3141.877605.103648@montanaro.dyndns.org>
References: <AANLkTikcxFtWuA84+7eaS6w_G+XURua3pVSy70RHjMpE@mail.gmail.com>
	<20110326164946.1e5bb78b@pitrou.net>
	<19854.3141.877605.103648@montanaro.dyndns.org>
Message-ID: <BANLkTikGvvy7-1cEtwZ_zEeftC3=B0Jj7w@mail.gmail.com>

On Sat, Mar 26, 2011 at 5:54 PM,  <skip at pobox.com> wrote:
>
>    Antoine> Take a look at:
>    Antoine> http://docs.python.org/devguide/committing.html
>
> What form should directed graphs be in for inclusion?

Pictures.

But so far I haven't seen any Graphviz-like tools in pure Python.
http://code.google.com/p/rainforce/issues/detail?id=4

From g.brandl at gmx.net  Sat Apr  2 15:32:03 2011
From: g.brandl at gmx.net (Georg Brandl)
Date: Sat, 02 Apr 2011 15:32:03 +0200
Subject: [Python-Dev] Unicode module names (Was: Python 3.3 release
	schedule posted)
In-Reply-To: <BANLkTimxErMr6e+Em-RGf+OmPvSFwKeCZA@mail.gmail.com>
References: <BANLkTimxErMr6e+Em-RGf+OmPvSFwKeCZA@mail.gmail.com>
Message-ID: <in78gq$boq$1@dough.gmane.org>

Am 02.04.2011 15:06, schrieb anatoly techtonik:
> On Thu, Mar 24, 2011 at 2:41 AM, Victor Stinner
> <victor.stinner at haypocalc.com> wrote:
>>
>> I am still working on the import machinery to fix the last bugs related to
>> Unicode. So it will be possible to do a useless "import café" in Python
>> 3.3, on any platform. But it is not really a huge change (for the user,
>> but a huge change in the code ;-)).
> 
> I don't like the idea of reading code with some kind of Chinese
> variable names in it. I'd prefer that I-and-l confusion remain the
> only thing I should have to care about in valid Python syntax.

Sorry, that ship sailed long ago: Unicode identifiers have been allowed since
Python 3.0.  Victor's work just extends this to importable module names.
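
For example, this already works today (a trivial sketch; only the
commented-out import at the end is what Victor's changes would add):

    # Non-ASCII identifiers have been valid Python 3 syntax since 3.0.
    café = "espresso"
    print(café)

    # What the import machinery work would allow (sketch): importing a
    # module whose *file name* is non-ASCII, e.g. a café.py on sys.path.
    # import café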

Georg


From skip at pobox.com  Sat Apr  2 17:36:01 2011
From: skip at pobox.com (skip at pobox.com)
Date: Sat, 2 Apr 2011 10:36:01 -0500
Subject: [Python-Dev] Decisions about workflow
In-Reply-To: <BANLkTikGvvy7-1cEtwZ_zEeftC3=B0Jj7w@mail.gmail.com>
References: <AANLkTikcxFtWuA84+7eaS6w_G+XURua3pVSy70RHjMpE@mail.gmail.com>
	<20110326164946.1e5bb78b@pitrou.net>
	<19854.3141.877605.103648@montanaro.dyndns.org>
	<BANLkTikGvvy7-1cEtwZ_zEeftC3=B0Jj7w@mail.gmail.com>
Message-ID: <19863.16993.310043.706967@montanaro.dyndns.org>

    >>    Antoine> Take a look at:
    >>    Antoine> http://docs.python.org/devguide/committing.html
    >> 
    >> What form should directed graphs be in for inclusion?

    anatoly> Pictures.

    anatoly> But so far I haven't seen any Graphviz-like tools in pure Python.
    anatoly> http://code.google.com/p/rainforce/issues/detail?id=4

Yeah, I sort of figured that. :-) I meant JPEG? PNG?  ASCII art?  Some sort
of graph notation (like Graphviz)?  MoinMoin .draw notation?  Does ReST
support any sort of embedded images or diagrams?

Skip

From ncoghlan at gmail.com  Sat Apr  2 18:03:19 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 3 Apr 2011 02:03:19 +1000
Subject: [Python-Dev] Decisions about workflow
In-Reply-To: <19863.16993.310043.706967@montanaro.dyndns.org>
References: <AANLkTikcxFtWuA84+7eaS6w_G+XURua3pVSy70RHjMpE@mail.gmail.com>
	<20110326164946.1e5bb78b@pitrou.net>
	<19854.3141.877605.103648@montanaro.dyndns.org>
	<BANLkTikGvvy7-1cEtwZ_zEeftC3=B0Jj7w@mail.gmail.com>
	<19863.16993.310043.706967@montanaro.dyndns.org>
Message-ID: <BANLkTikFvpeZrJr-nQDR02DqfMRE4rbUvg@mail.gmail.com>

On Sun, Apr 3, 2011 at 1:36 AM,  <skip at pobox.com> wrote:
> Yeah, I sort of figured that. :-) I meant JPEG? PNG?  ASCII art?  Some sort
> of graph notation (like Graphviz)?  MoinMoin .draw notation?  Does ReST
> support any sort of embedded images or diagrams?

Taking PEP 1 as the precedent, I would suggest going with PNG (looking
at the PEP 1 source also shows how to embed an image in the ReST file).

Cheers,
Nick.


-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From lac at openend.se  Sat Apr  2 18:14:11 2011
From: lac at openend.se (Laura Creighton)
Date: Sat, 02 Apr 2011 18:14:11 +0200
Subject: [Python-Dev] Decisions about workflow
In-Reply-To: Message from skip@pobox.com of "Sat, 02 Apr 2011 10:36:01 CDT."
	<19863.16993.310043.706967@montanaro.dyndns.org> 
References: <AANLkTikcxFtWuA84+7eaS6w_G+XURua3pVSy70RHjMpE@mail.gmail.com>
	<20110326164946.1e5bb78b@pitrou.net>
	<19854.3141.877605.103648@montanaro.dyndns.org>
	<BANLkTikGvvy7-1cEtwZ_zEeftC3=B0Jj7w@mail.gmail.com>
	<19863.16993.310043.706967@montanaro.dyndns.org> 
Message-ID: <201104021614.p32GEB7a020150@theraft.openend.se>

In a message of Sat, 02 Apr 2011 10:36:01 CDT, skip at pobox.com writes:
>    >>    Antoine> Take a look at:
>    >>    Antoine> http://docs.python.org/devguide/committing.html
>    >>
>    >> What form should directed graphs be in for inclusion?
>
>    anatoly> Pictures.
>
>    anatoly> But so far I haven't seen any Graphviz-like tools in pure Python.
>    anatoly> http://code.google.com/p/rainforce/issues/detail?id=4
>
>Yeah, I sort of figured that. :-) I meant JPEG? PNG?  ASCII art?  Some sort
>of graph notation (like Graphviz)?  MoinMoin .draw notation?  Does ReST
>support any sort of embedded images or diagrams?
>
>Skip

Sphinx lets you embed graphviz.
http://sphinx.pocoo.org/ext/graphviz.html?highlight=image

Laura

From wy at tungwaiyip.info  Sat Apr  2 17:55:28 2011
From: wy at tungwaiyip.info (Tung Wai Yip)
Date: Sat, 02 Apr 2011 08:55:28 -0700
Subject: [Python-Dev]  The path module PEP
Message-ID: <op.vtbiiqnewoerwv@wtung-think>

Hello Björn,

Like you, I've used the Path module at one time or another. It is an excellent
concept for refactoring the confusing collection of file handling methods. I
have not used it consistently, mainly because the module has not been
maintained and it is not in the standard library, as it should be.

I found a PEP proposal from a while ago. I wonder what its status is?

http://mail.python.org/pipermail/python-dev/2006-January/060026.html

Are you still actively using Python and the Path module? I have a couple
of ideas that might improve it further. I hope to find enough fans of it
to get it polished.

Cheers,

Wai Yip

From benjamin at python.org  Sat Apr  2 18:39:35 2011
From: benjamin at python.org (Benjamin Peterson)
Date: Sat, 2 Apr 2011 11:39:35 -0500
Subject: [Python-Dev] not possible to install python 3.2
In-Reply-To: <AANLkTimqf_SifJWBFREqzpSKmmk3EgodvDiEtUeUkrM0@mail.gmail.com>
References: <AANLkTimqf_SifJWBFREqzpSKmmk3EgodvDiEtUeUkrM0@mail.gmail.com>
Message-ID: <BANLkTimGWNVvMnr2DPB5tXmXo3ne5PGHWQ@mail.gmail.com>

You should ask python-list instead of here.

2011/3/29 Laura <lauramdf at gmail.com>:
> In order to install Python as python3 on Linux, I did:
>
>
>     ./configure
>     make
>     make test
>     sudo make install
>
>
> However, when I typed "make test", I got two error messages:
>
> "test test_distutils failed -- multiple errors occurred; run in verbose mode
> for details"
>
> "sys:1: ResourceWarning: unclosed file <_io.TextIOWrapper name='/dev/null'
> mode='a' encoding='ANSI_X3.4-1968'>
> make: *** [test] Error 1"
>
> The Result? When I type python on Linux, I get the older version 2.7.1
> instead of the version that I just installed (python 3.2).

Actually, it's because Python 3 is installed as python3.



-- 
Regards,
Benjamin

From skip at pobox.com  Sat Apr  2 19:56:13 2011
From: skip at pobox.com (skip at pobox.com)
Date: Sat, 2 Apr 2011 12:56:13 -0500
Subject: [Python-Dev] Decisions about workflow
In-Reply-To: <201104021614.p32GEB7a020150@theraft.openend.se>
References: <AANLkTikcxFtWuA84+7eaS6w_G+XURua3pVSy70RHjMpE@mail.gmail.com>
	<20110326164946.1e5bb78b@pitrou.net>
	<19854.3141.877605.103648@montanaro.dyndns.org>
	<BANLkTikGvvy7-1cEtwZ_zEeftC3=B0Jj7w@mail.gmail.com>
	<19863.16993.310043.706967@montanaro.dyndns.org>
	<201104021614.p32GEB7a020150@theraft.openend.se>
Message-ID: <19863.25405.188019.644099@montanaro.dyndns.org>


    Laura> Sphinx lets you embed graphviz.
    Laura> http://sphinx.pocoo.org/ext/graphviz.html?highlight=image

Cool, thanks.  I'm going to try to reproduce Nick's setup as he described
it.  That would certainly be a whole lot easier for me to understand,
hopefully for others as well.

Skip


From lac at openend.se  Sat Apr  2 20:25:27 2011
From: lac at openend.se (Laura Creighton)
Date: Sat, 02 Apr 2011 20:25:27 +0200
Subject: [Python-Dev] Decisions about workflow
In-Reply-To: Message from skip@pobox.com of "Sat, 02 Apr 2011 12:56:13 CDT."
	<19863.25405.188019.644099@montanaro.dyndns.org> 
References: <AANLkTikcxFtWuA84+7eaS6w_G+XURua3pVSy70RHjMpE@mail.gmail.com>
	<20110326164946.1e5bb78b@pitrou.net>
	<19854.3141.877605.103648@montanaro.dyndns.org>
	<BANLkTikGvvy7-1cEtwZ_zEeftC3=B0Jj7w@mail.gmail.com>
	<19863.16993.310043.706967@montanaro.dyndns.org>
	<201104021614.p32GEB7a020150@theraft.openend.se>
	<19863.25405.188019.644099@montanaro.dyndns.org> 
Message-ID: <201104021825.p32IPR4D031988@theraft.openend.se>

In a message of Sat, 02 Apr 2011 12:56:13 CDT, skip at pobox.com writes:
>
>    Laura> Sphinx lets you embed graphviz.
>    Laura> http://sphinx.pocoo.org/ext/graphviz.html?highlight=image
>
>Cool, thanks.  I'm going to try to reproduce Nick's setup as he described
>it.  That would certainly be a whole lot easier for me to understand,
>hopefully for others as well.
>
>Skip

*DEFINITELY* for me too!

Laura


From eltoder at gmail.com  Sun Apr  3 03:55:45 2011
From: eltoder at gmail.com (Eugene Toder)
Date: Sat, 2 Apr 2011 21:55:45 -0400
Subject: [Python-Dev] Policy for making changes to the AST
Message-ID: <AANLkTimuKcuK3jN-vzLU9QPiXpOEPypUp4WQTzTogoVN@mail.gmail.com>

Hello,

While working on a rewrite of peephole optimizer that works on AST
(http://bugs.python.org/issue11549) I had to make a few changes to the
structure of AST. The changes are described in the issue. This raises the
question -- what is the policy for making changes to the AST? Documentation
for ast module does not warn about possible changes, but obviously changes
can occur, for example, when new constructs are introduced. What about other
changes? Is there a policy for what's acceptable and how this should be
handled?

Assuming we do want to make changes (not just extensions for new language
features), I see 3 options for handling them:

1. Do nothing. This will break code that currently uses AST, but doesn't add
any complexity to cpython.

2. Write a pure-Python compatibility layer. This will convert the AST between
old and new formats, so that old code continues working. To do this:
a) Introduce an ast.compile function (similar to ast.parse), which should be the
recommended way of compiling to an AST.
b) Add an ast_version argument to ast.parse and ast.compile, defaulting to 1.
c) Preserve the old AST node classes and attributes in Python.
d) If the ast_version specified is not the latest, ast.parse and ast.compile
would convert from/to the latest version in Python using ast.NodeTransformer.

This is not fully backward compatible, but it allows all of the staging to be
done in Python.

3. Full backward compatibility (with Python code). This means the conversion is
done in compile(). It can either call Python conversion code from the ast
module, or actually implement the conversion in C using AST visitors. Using my
visitors generator this should not be very hard. The downsides here are a lot
of C code and no clear separation of deprecated AST nodes (they will remain in
Python.asdl). Otherwise it's similar to 2, with the ast_version argument added
to compile() and ast.parse.

For 2 and 3 we can add a PendingDeprecationWarning when ast_version 1 is used.

In any case, C extensions that manipulate AST will be broken, but 3 provides a
simple way to update them -- insert calls to C conversion functions.

Eugene

From benjamin at python.org  Sun Apr  3 04:07:06 2011
From: benjamin at python.org (Benjamin Peterson)
Date: Sat, 2 Apr 2011 21:07:06 -0500
Subject: [Python-Dev] Policy for making changes to the AST
In-Reply-To: <AANLkTimuKcuK3jN-vzLU9QPiXpOEPypUp4WQTzTogoVN@mail.gmail.com>
References: <AANLkTimuKcuK3jN-vzLU9QPiXpOEPypUp4WQTzTogoVN@mail.gmail.com>
Message-ID: <BANLkTikuweiKSHCTa5GV+oFmQMGaKVod2A@mail.gmail.com>

2011/4/2 Eugene Toder <eltoder at gmail.com>:
> Hello,
>
> While working on a rewrite of peephole optimizer that works on AST
> (http://bugs.python.org/issue11549) I had to make a few changes to the
> structure of AST. The changes are described in the issue. This raises the
> question -- what is the policy for making changes to the AST? Documentation
> for ast module does not warn about possible changes, but obviously changes
> can occur, for example, when new constructs are introduced. What about other
> changes? Is there a policy for what's acceptable and how this should be
> handled?
>
> Assuming we do want to make changes (not just extensions for new language
> features), I see 3 options for handling them:
>
> 1. Do nothing. This will break code that currently uses AST, but doesn't add
> any complexity to cpython.

I must say I prefer this option. I don't know how many people are
depending on the AST, but I think they can expect it to be broken
occasionally. AST-analyzing code can expect to be broken anyway in every
version where new semantics are added.

Maintaining versioned asts and converting between them is just clunky.

There are some AST cleanups I'd like to happen, too, such as unifying
TryExcept and TryFinally into a single node.

-- 
Regards,
Benjamin

From ncoghlan at gmail.com  Sun Apr  3 04:09:09 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 3 Apr 2011 12:09:09 +1000
Subject: [Python-Dev] not possible to install python 3.2
In-Reply-To: <AANLkTimqf_SifJWBFREqzpSKmmk3EgodvDiEtUeUkrM0@mail.gmail.com>
References: <AANLkTimqf_SifJWBFREqzpSKmmk3EgodvDiEtUeUkrM0@mail.gmail.com>
Message-ID: <BANLkTi=Qhp0r2pauspjWeVd9mjijh5cf+g@mail.gmail.com>

On Wed, Mar 30, 2011 at 8:48 AM, Laura <lauramdf at gmail.com> wrote:
> The Result? When I type python on Linux, I get the older version 2.7.1
> instead of the version that I just installed (python 3.2).
>
> Could you help me?

As Benjamin noted, this is a question about using (/building) a
released version of Python, and hence more appropriate for
python-list. python-dev is for discussing the development of *future*
versions of Python.

Regards,
Nick.

P.S. However, also note that, due to the fact that it cannot run many
current Python 2.x scripts, the default installation process for
Python 3.x creates a "python3" command, rather than a "python"
command.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From ncoghlan at gmail.com  Sun Apr  3 04:41:24 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 3 Apr 2011 12:41:24 +1000
Subject: [Python-Dev] Decisions about workflow
In-Reply-To: <201104021825.p32IPR4D031988@theraft.openend.se>
References: <AANLkTikcxFtWuA84+7eaS6w_G+XURua3pVSy70RHjMpE@mail.gmail.com>
	<20110326164946.1e5bb78b@pitrou.net>
	<19854.3141.877605.103648@montanaro.dyndns.org>
	<BANLkTikGvvy7-1cEtwZ_zEeftC3=B0Jj7w@mail.gmail.com>
	<19863.16993.310043.706967@montanaro.dyndns.org>
	<201104021614.p32GEB7a020150@theraft.openend.se>
	<19863.25405.188019.644099@montanaro.dyndns.org>
	<201104021825.p32IPR4D031988@theraft.openend.se>
Message-ID: <BANLkTi=BSL1Wb+oQOTw1g0-UGO87Hco1TA@mail.gmail.com>

On Sun, Apr 3, 2011 at 4:25 AM, Laura Creighton <lac at openend.se> wrote:
> In a message of Sat, 02 Apr 2011 12:56:13 CDT, skip at pobox.com writes:
>>
>>    Laura> Sphinx lets you embed graphviz.
>>    Laura> http://sphinx.pocoo.org/ext/graphviz.html?highlight=image
>>
>>Cool, thanks.  I'm going to try to reproduce Nick's setup as he described
>>it.  That would certainly be a whole lot easier for me to understand,
>>hopefully for others as well.
>>
>>Skip
>
> *DEFINITELY* for me too!

I'll reproduce it in dodgy ASCII art here, but a real diagram would
definitely help in making the flow of changes clearer:

public sandbox (hg.python.org/sandbox/ncoghlan)
<=> (push and pull)
local sandbox
<= (pull only)
main repository (hg.python.org/cpython)
<=>
local py3k <=> local python27
<=>
local python32
<=>
local python31

Once python31 drops into security-fixes-only mode after the next
maintenance release, I'll likely ditch that local repository.

In the sandbox, I try to leave the default branch as a clean copy of
cpython#default (and ditto for the maintenance branches), with
anything I am working on off in a named branch (usually branched from
default, but I could also branch from an older maintenance branch,
such as 2.7, if the situation called for it).

Having the separate sandbox also allows me to defer the decision on
how far back to apply a change until such time as I am actually
committing the patch to the official repositories.

To commit a fix that applies to 2.7, 3.1 and all subsequent branches
is a matter of doing:

cd python27
hg import --no-commit patch-for-27.diff (I'm still trying to get in
the habit of using this over patch, though)
# build and test
hg commit -m "Fix #123456789: Feed the half a bee"
hg push
cd ../python31
hg import --no-commit patch-for-31.diff
# build and test
hg commit -m "Fix #123456789: Feed the half a bee"
hg push
cd ../python32
hg merge 3.1
# build and quick test
hg commit -m "Merge from 3.1. Fix #123456789"
hg push
cd ../py3k
hg merge 3.2
# build and quick test
hg commit -m "Merge from 3.2. Fix #123456789"
hg push

The final push uploads the whole thing to hg.python.org/cpython as a
single consistent block - no temporary unmerged heads are created on
the maintenance branches.

If someone else has committed changes in the meantime, then I need to
hg pull and merge the changes all the way back down the chain. (This
is somewhat annoying for a full 4-branch commit like the one shown,
but most commits aren't that bad. New features are only on default,
and a lot of other changes are 3.2+default only)

If using the "share" extension, you could drop all of the "hg push"
commands except the last one (since there is only one local repository
in that case, there aren't any push/pull commands needed to
synchronise things).
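
For reference, the setup for that variant is roughly the following, assuming
the share extension is enabled in ~/.hgrc (a sketch, not a tested recipe):

hg share py3k python32
cd python32
hg update 3.2

and similarly for python31 and python27, each giving an extra working
directory backed by the single py3k repository.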

The other thing I like about this setup is that you can use it as a
basis to explain other possible approaches to managing your local
workflow:

- Using "mq" is essentially an alternative to having a separate local sandbox
- Using "share" means that python27/32/31 become additional working
copies attached to the py3k repository rather than local repositories
in their own right
- You can leave out python27/32/31 entirely, and just do a lot more
switching and rebuilding in the py3k directory

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From ncoghlan at gmail.com  Sun Apr  3 04:51:37 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 3 Apr 2011 12:51:37 +1000
Subject: [Python-Dev] Policy for making changes to the AST
In-Reply-To: <BANLkTikuweiKSHCTa5GV+oFmQMGaKVod2A@mail.gmail.com>
References: <AANLkTimuKcuK3jN-vzLU9QPiXpOEPypUp4WQTzTogoVN@mail.gmail.com>
	<BANLkTikuweiKSHCTa5GV+oFmQMGaKVod2A@mail.gmail.com>
Message-ID: <BANLkTikWXkqbRb530tGmxNTHESpYY-1zZQ@mail.gmail.com>

On Sun, Apr 3, 2011 at 12:07 PM, Benjamin Peterson <benjamin at python.org> wrote:
>> Assuming we do want to make changes (not just extensions for new language
>> features), I see 3 options for handling them:
>>
>> 1. Do nothing. This will break code that currently uses AST, but doesn't add
>> any complexity to cpython.
>
> I must say I prefer this option. I don't know how many people are
> depending on AST, but I think they can expect it be broken
> occasionally. AST analyzing code can expect to be broken anyway every
> version new semantics are added.
>
> Maintaining versioned asts and converting between them is just clunky.

This is my preference as well, but I wanted to give *consumers* of the
AST a chance to scream at us before we break their world. The
compatibility problem doesn't go away if we ignore it - it just
devolves to the people doing the AST manipulation to invent their own
way of handling any cross-version compatibility dramas that arise.

However, I'm not sure we *can* do a general-purpose AST transformation
that handles both new nodes and changes to existing nodes correctly
for all applications. What changes are needed and/or acceptable will
likely be very heavily dependent on the specifics of what people are
doing with the AST.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From skip at pobox.com  Sun Apr  3 05:23:18 2011
From: skip at pobox.com (skip at pobox.com)
Date: Sat, 2 Apr 2011 22:23:18 -0500
Subject: [Python-Dev] Decisions about workflow
In-Reply-To: <BANLkTi=BSL1Wb+oQOTw1g0-UGO87Hco1TA@mail.gmail.com>
References: <AANLkTikcxFtWuA84+7eaS6w_G+XURua3pVSy70RHjMpE@mail.gmail.com>
	<20110326164946.1e5bb78b@pitrou.net>
	<19854.3141.877605.103648@montanaro.dyndns.org>
	<BANLkTikGvvy7-1cEtwZ_zEeftC3=B0Jj7w@mail.gmail.com>
	<19863.16993.310043.706967@montanaro.dyndns.org>
	<201104021614.p32GEB7a020150@theraft.openend.se>
	<19863.25405.188019.644099@montanaro.dyndns.org>
	<201104021825.p32IPR4D031988@theraft.openend.se>
	<BANLkTi=BSL1Wb+oQOTw1g0-UGO87Hco1TA@mail.gmail.com>
Message-ID: <19863.59430.539225.671985@montanaro.dyndns.org>


    Skip> I'm going to try to reproduce Nick's setup as he described it.
    Skip> That would certainly be a whole lot easy for me to understand,
    Skip> hopefully for others as well.

    Laura> *DEFINITELY* for me too!

    Nick> I'll reproduce it in dodgy ASCII art here, but a real diagram
    Nick> would definitely help in making the flow of changes clearer:

    ...

This isn't exactly Nick's setup, and I've never used graphviz/dot before, so
I don't yet know how to lay things out, but, here's a first crack at it:

    http://www.smontanaro.net/hgpython.png
    http://www.smontanaro.net/hgpython.gv

Skip

From ncoghlan at gmail.com  Sun Apr  3 05:51:26 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 3 Apr 2011 13:51:26 +1000
Subject: [Python-Dev] Decisions about workflow
In-Reply-To: <19863.59430.539225.671985@montanaro.dyndns.org>
References: <AANLkTikcxFtWuA84+7eaS6w_G+XURua3pVSy70RHjMpE@mail.gmail.com>
	<20110326164946.1e5bb78b@pitrou.net>
	<19854.3141.877605.103648@montanaro.dyndns.org>
	<BANLkTikGvvy7-1cEtwZ_zEeftC3=B0Jj7w@mail.gmail.com>
	<19863.16993.310043.706967@montanaro.dyndns.org>
	<201104021614.p32GEB7a020150@theraft.openend.se>
	<19863.25405.188019.644099@montanaro.dyndns.org>
	<201104021825.p32IPR4D031988@theraft.openend.se>
	<BANLkTi=BSL1Wb+oQOTw1g0-UGO87Hco1TA@mail.gmail.com>
	<19863.59430.539225.671985@montanaro.dyndns.org>
Message-ID: <BANLkTikmOTijLDV+TYVp7HCHSBWQCBMOpg@mail.gmail.com>

On Sun, Apr 3, 2011 at 1:23 PM,  <skip at pobox.com> wrote:
> This isn't exactly Nick's setup, and I've never used graphviz/dot before, so
> I don't yet know how to lay things out, but, here's a first crack at it:
>
>    http://www.smontanaro.net/hgpython.png
>    http://www.smontanaro.net/hgpython.gv

I don't think you can easily push changes between 2 remote
repositories like that. It's also somewhat painful to have local
changes that you *don't* want to push - you have to be selective about
what gets sent to the remote repository, so a simple "hg push" doesn't
work any more.

I forgot to mention the other advantage of having a local sandbox that
corresponds to your public sandbox - it makes it easy to keep your
work area in sync across multiple machines.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From eltoder at gmail.com  Sun Apr  3 06:24:04 2011
From: eltoder at gmail.com (Eugene Toder)
Date: Sun, 3 Apr 2011 00:24:04 -0400
Subject: [Python-Dev] Policy for making changes to the AST
In-Reply-To: <BANLkTikWXkqbRb530tGmxNTHESpYY-1zZQ@mail.gmail.com>
References: <AANLkTimuKcuK3jN-vzLU9QPiXpOEPypUp4WQTzTogoVN@mail.gmail.com>
	<BANLkTikuweiKSHCTa5GV+oFmQMGaKVod2A@mail.gmail.com>
	<BANLkTikWXkqbRb530tGmxNTHESpYY-1zZQ@mail.gmail.com>
Message-ID: <AANLkTikf_8hrzuq0qK+yWoGGtLYTbr33g3WJym9dd8=x@mail.gmail.com>

> However, I'm not sure we *can* do a general-purpose AST transformation
> that handles both new nodes and changes to existing nodes correctly
> for all applications.

As long as both versions contain the same information we can write a
transformation that does a near-perfect job.
E.g. for my changes I can write a converter that produces an AST in
almost the same form as the current one, the only change being the new
'docstring' attribute set to None. (That's for converting the AST before
optimization; after optimization it can contain nodes that couldn't
be represented before.) I believe it's similar for the Try change that
Benjamin mentioned above.

Also, if written in Python, conversion can at least serve as a
template even if it doesn't work out of the box.
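
A rough sketch of what such a converter could look like, assuming a
hypothetical 'docstring' field on FunctionDef nodes (that field is not
part of the current ast module; it is only here to illustrate the idea):

    import ast

    class DowngradeDocstrings(ast.NodeTransformer):
        # Move the hypothetical 'docstring' field back into the body so
        # the tree matches the shape that current ast consumers expect.
        def visit_FunctionDef(self, node):
            self.generic_visit(node)
            doc = getattr(node, 'docstring', None)
            if doc is not None:
                node.body.insert(0, ast.Expr(value=ast.Str(s=doc)))
            return node

    # old_tree = ast.fix_missing_locations(DowngradeDocstrings().visit(new_tree))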

Eugene

From ncoghlan at gmail.com  Sun Apr  3 07:25:04 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 3 Apr 2011 15:25:04 +1000
Subject: [Python-Dev] Policy for making changes to the AST
In-Reply-To: <AANLkTikf_8hrzuq0qK+yWoGGtLYTbr33g3WJym9dd8=x@mail.gmail.com>
References: <AANLkTimuKcuK3jN-vzLU9QPiXpOEPypUp4WQTzTogoVN@mail.gmail.com>
	<BANLkTikuweiKSHCTa5GV+oFmQMGaKVod2A@mail.gmail.com>
	<BANLkTikWXkqbRb530tGmxNTHESpYY-1zZQ@mail.gmail.com>
	<AANLkTikf_8hrzuq0qK+yWoGGtLYTbr33g3WJym9dd8=x@mail.gmail.com>
Message-ID: <BANLkTik51CYUczJxLsyhYWfDt=X6nSidWA@mail.gmail.com>

On Sun, Apr 3, 2011 at 2:24 PM, Eugene Toder <eltoder at gmail.com> wrote:
>> However, I'm not sure we *can* do a general-purpose AST transformation
>> that handles both new nodes and changes to existing nodes correctly
>> for all applications.
>
> As long as both versions contain the same information we can write a
> transformation that does a near-perfect job.
> E.g. for my changes I can write a convertor that produces AST in
> almost the same form as the current one, the only change being the new
> 'docstring' attribute set to None. (That's for converting AST before
> optimizations, after optimizations it can contain nodes that couldn't
> be represented before). I believe it's similar for Try change that
> Benjamin mentioned above.
>
> Also, if written in Python, conversion can at least serve as a
> template even if it doesn't work out of the box.

If it's do-able, your option 2 is probably the way to go. Out of the
box, it may just need to raise an exception if asked to down-convert
code that uses new constructs that can't readily be expressed using
the old AST (I'm specifically thinking of the challenge of converting
PEP 380's yield-from).

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From eltoder at gmail.com  Sun Apr  3 07:42:24 2011
From: eltoder at gmail.com (Eugene Toder)
Date: Sun, 3 Apr 2011 01:42:24 -0400
Subject: [Python-Dev] Policy for making changes to the AST
In-Reply-To: <BANLkTik51CYUczJxLsyhYWfDt=X6nSidWA@mail.gmail.com>
References: <AANLkTimuKcuK3jN-vzLU9QPiXpOEPypUp4WQTzTogoVN@mail.gmail.com>
	<BANLkTikuweiKSHCTa5GV+oFmQMGaKVod2A@mail.gmail.com>
	<BANLkTikWXkqbRb530tGmxNTHESpYY-1zZQ@mail.gmail.com>
	<AANLkTikf_8hrzuq0qK+yWoGGtLYTbr33g3WJym9dd8=x@mail.gmail.com>
	<BANLkTik51CYUczJxLsyhYWfDt=X6nSidWA@mail.gmail.com>
Message-ID: <AANLkTi=tW8LcFDKW3cAryc-j+F-PNo+pGn__cAntNOEx@mail.gmail.com>

> If it's do-able, your option 2 is probably the way to go. Out of the
> box, it may just need to raise an exception if asked to down-convert
> code that uses new constructs that can't readily be expressed using
> the old AST (I'm specifically thinking of the challenge of converting
> PEP 380's yield-from).

I was talking only about changes in the AST for existing constructs. New
language features are another dimension. For example, we can leave them
even in "old" trees, so that they can be supported in existing code
with minimal changes. Or we can raise an exception, forcing everyone who
wants to process them to catch up with all other AST changes.

I realized I overlooked one problem with supporting multiple versions
of the AST. Functions from the ast module might need to know which AST
version they've got. For example, ast.get_docstring will need to know
whether the docstring was populated or whether it needs to look in the
body. This can be solved by attaching ast_version to affected nodes when
converting.
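
A minimal sketch of that idea (both the 'ast_version' marker and the
new-style 'docstring' field are hypothetical, shown only to make the
dispatch concrete):

    import ast

    def tag_node(node, version):
        # Mark a converted node so helpers can tell which shape they got.
        node.ast_version = version
        return node

    def get_docstring_compat(node):
        # Untagged nodes are assumed to be the old (version 1) shape.
        if getattr(node, 'ast_version', 1) >= 2:
            return node.docstring           # hypothetical new-style field
        return ast.get_docstring(node)      # current lookup in node.body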

Eugene

From g.brandl at gmx.net  Sun Apr  3 07:51:32 2011
From: g.brandl at gmx.net (Georg Brandl)
Date: Sun, 03 Apr 2011 07:51:32 +0200
Subject: [Python-Dev] Python 3.3 release schedule posted
In-Reply-To: <BANLkTim+uBdXYQOYLmGMtktEoqs0Scd+KA@mail.gmail.com>
References: <imdj8n$dq0$1@dough.gmane.org>
	<BANLkTim+uBdXYQOYLmGMtktEoqs0Scd+KA@mail.gmail.com>
Message-ID: <in91tc$2rp$1@dough.gmane.org>

On 02.04.2011 15:00, anatoly techtonik wrote:
> On Wed, Mar 23, 2011 at 9:56 PM, Georg Brandl <g.brandl at gmx.net> wrote:
>> I've posted a very preliminary Python 3.3 release schedule as PEP 398.
>> The final release is set to be about 18 months after 3.2 final, which
>> is in August 2012.
> 
> Why this isn't being added to Release Calendar on the front page?

It is.

> Do you have an estimate of Python 3.2.1 release?

Not yet.

Georg


From ncoghlan at gmail.com  Sun Apr  3 08:44:41 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 3 Apr 2011 16:44:41 +1000
Subject: [Python-Dev] Policy for making changes to the AST
In-Reply-To: <AANLkTi=tW8LcFDKW3cAryc-j+F-PNo+pGn__cAntNOEx@mail.gmail.com>
References: <AANLkTimuKcuK3jN-vzLU9QPiXpOEPypUp4WQTzTogoVN@mail.gmail.com>
	<BANLkTikuweiKSHCTa5GV+oFmQMGaKVod2A@mail.gmail.com>
	<BANLkTikWXkqbRb530tGmxNTHESpYY-1zZQ@mail.gmail.com>
	<AANLkTikf_8hrzuq0qK+yWoGGtLYTbr33g3WJym9dd8=x@mail.gmail.com>
	<BANLkTik51CYUczJxLsyhYWfDt=X6nSidWA@mail.gmail.com>
	<AANLkTi=tW8LcFDKW3cAryc-j+F-PNo+pGn__cAntNOEx@mail.gmail.com>
Message-ID: <BANLkTikFjvSKdrxnuaq9VOQbDoDEXifNDg@mail.gmail.com>

On Sun, Apr 3, 2011 at 3:42 PM, Eugene Toder <eltoder at gmail.com> wrote:
>> If it's do-able, your option 2 is probably the way to go. Out of the
>> box, it may just need to raise an exception if asked to down-convert
>> code that uses new constructs that can't readily be expressed using
>> the old AST (I'm specifically thinking of the challenge of converting
>> PEP 380's yield-from).
>
> I was talking only about changes in AST for existing constructs. New
> language features is another dimension. For example, we can leave them
> even in "old" trees, so that they can be supported in existing code
> with minimal changes. Or we can throw, forcing everyone who wants to
> process them to catch up with all other AST changes.

I wonder if there's any existing research on the topic - we can't be
the first people to confront these kinds of problems.

> I realized I overlooked one problem with supporting multiple versions
> of AST. Functions from ast module might need to know which AST version
> they've got. For example, ast.get_docstring will need to know whether
> docstring was populated or it needs to look in the body. This can be
> solved by attaching ast_version to affected nodes when converting.

Or just have a top level version node. If it isn't there, then it's
version 1. (Although that could make working on subsections of the AST
a little trickier)

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From martin at v.loewis.de  Sun Apr  3 08:55:40 2011
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Sun, 03 Apr 2011 08:55:40 +0200
Subject: [Python-Dev] Policy for making changes to the AST
In-Reply-To: <AANLkTimuKcuK3jN-vzLU9QPiXpOEPypUp4WQTzTogoVN@mail.gmail.com>
References: <AANLkTimuKcuK3jN-vzLU9QPiXpOEPypUp4WQTzTogoVN@mail.gmail.com>
Message-ID: <4D9819EC.7040507@v.loewis.de>

> 1. Do nothing. This will break code that currently uses AST, but doesn't add
> any complexity to cpython.

I'm in favor of this approach as well. Notice that there is
ast.__version__ precisely so that applications can support multiple AST
versions.

Regards,
Martin

From victor.stinner at haypocalc.com  Sun Apr  3 10:31:36 2011
From: victor.stinner at haypocalc.com (Victor Stinner)
Date: Sun, 03 Apr 2011 10:31:36 +0200
Subject: [Python-Dev] Unicode module names (Was: Python 3.3 release
 schedule posted)
In-Reply-To: <BANLkTimxErMr6e+Em-RGf+OmPvSFwKeCZA@mail.gmail.com>
References: <BANLkTimxErMr6e+Em-RGf+OmPvSFwKeCZA@mail.gmail.com>
Message-ID: <1301819496.8798.2.camel@marge>

On Saturday, 2 April 2011 at 16:06 +0300, anatoly techtonik wrote:
> On Thu, Mar 24, 2011 at 2:41 AM, Victor Stinner
> <victor.stinner at haypocalc.com> wrote:
> >
> > I am still working on the import machinery to fix the last bugs related to
> > Unicode. So it will be possible to do a useless "import café" in Python
> > 3.3, on any platform. But it is not really a huge change (for the user,
> > but a huge change in the code ;-)).
> 
> I don't like the idea of reading code with some kind of Chinese
> variable names in it. I'd prefer that I and l confusion remain the
> only kind I should have to care about in valid Python syntax.

Please read the PEP 3131 (Supporting Non-ASCII Identifiers):
http://www.python.org/dev/peps/pep-3131/

And the thread "Import and unicode: part two" (19 Jan 2011) on
python-dev.
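
PEP 3131 already allows non-ASCII identifiers in Python 3; a toy example
(independent of the import machinery work):

    café = 1
    print(café)

The remaining piece is letting module names, and hence imports, be
non-ASCII on every platform.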

Victor


From victor.stinner at haypocalc.com  Sun Apr  3 10:56:45 2011
From: victor.stinner at haypocalc.com (Victor Stinner)
Date: Sun, 03 Apr 2011 10:56:45 +0200
Subject: [Python-Dev] Python 3.3 release schedule posted
In-Reply-To: <BANLkTim+uBdXYQOYLmGMtktEoqs0Scd+KA@mail.gmail.com>
References: <imdj8n$dq0$1@dough.gmane.org>
	<BANLkTim+uBdXYQOYLmGMtktEoqs0Scd+KA@mail.gmail.com>
Message-ID: <1301821005.8798.7.camel@marge>

On Saturday, 2 April 2011 at 16:00 +0300, anatoly techtonik wrote:
> Do you have an estimate of Python 3.2.1 release?

FYI I introduced (and then fixed) two regressions specific to Windows in
Python 3.2:
http://bugs.python.org/issue11272 (input)
http://bugs.python.org/issue11395 (print)

Issue #11272 is annoying: input() returns a string ending with '\r'.
The workaround is input().rstrip('\r') or input().rstrip().

The second bug is more of a corner case: print() fails if you write more
than 60,000 bytes at once.

Victor


From p.f.moore at gmail.com  Sun Apr  3 13:17:12 2011
From: p.f.moore at gmail.com (Paul Moore)
Date: Sun, 3 Apr 2011 12:17:12 +0100
Subject: [Python-Dev] Policy for making changes to the AST
In-Reply-To: <4D9819EC.7040507@v.loewis.de>
References: <AANLkTimuKcuK3jN-vzLU9QPiXpOEPypUp4WQTzTogoVN@mail.gmail.com>
	<4D9819EC.7040507@v.loewis.de>
Message-ID: <BANLkTim5u6sTFSFt_VC=jbTcvPgMxcG5nw@mail.gmail.com>

On 3 April 2011 07:55, "Martin v. Löwis" <martin at v.loewis.de> wrote:
>> 1. Do nothing. This will break code that currently uses AST, but doesn't add
>> any complexity to cpython.
>
> I'm in favor of this approach as well. Notice that there is
> ast.__version__ precisely so that applications can support multiple AST
> versions.

This might be a suitable topic for a post to "Python Insider" - as
it's a policy question, that would make the discussion known to a
wider audience, giving people with an interest the opportunity to
comment. Hopefully, that also makes the "Do Nothing" approach more
acceptable to the wider Python community by publicising the
implications well in advance.

If there are no objections (I'm conscious that we don't want every
python-dev discussion dumped onto Python Insider) I'll post a short
article once a consensus has been reached.

Paul.

From merwok at netwok.org  Sun Apr  3 18:55:33 2011
From: merwok at netwok.org (=?UTF-8?B?w4lyaWMgQXJhdWpv?=)
Date: Sun, 03 Apr 2011 18:55:33 +0200
Subject: [Python-Dev] [Python-checkins] cpython: Issue #5863: Rewrite
 BZ2File in pure Python, and allow it to accept
In-Reply-To: <E1Q6OvO-000604-Qj@dinsdale.python.org>
References: <E1Q6OvO-000604-Qj@dinsdale.python.org>
Message-ID: <4D98A685.1080401@netwok.org>

Hi,

> changeset:   69112:2cb07a46f4b5
> user:        Antoine Pitrou <solipsis at pitrou.net>
> date:        Sun Apr 03 17:05:46 2011 +0200
> summary:
>   Issue #5863: Rewrite BZ2File in pure Python, and allow it to accept
> file-like objects using a new `fileobj` constructor argument.  Patch by
> Nadeem Vawda.
> 
> files:
>   Doc/ACKS.txt         |     1 +

I think we use Misc/ACKS for code+docs contributions like this one,
Doc/ACKS.txt being used for doc-only changes.  This second file is
neither comprehensive nor always used, though, so maybe it should be
superseded by the former.

Regards

>   Doc/library/bz2.rst  |   221 +-
>   Lib/bz2.py           |   392 +++++
>   Lib/test/test_bz2.py |   142 +-
>   Misc/NEWS            |     4 +
>   Modules/bz2module.c  |  2281 ++++-------------------------
>   PCbuild/bz2.vcproj   |     4 +-
>   PCbuild/pcbuild.sln  |     2 +-
>   PCbuild/readme.txt   |     6 +-
>   setup.py             |     4 +-

From digitalxero at gmail.com  Sun Apr  3 19:08:59 2011
From: digitalxero at gmail.com (Dj Gilcrease)
Date: Sun, 3 Apr 2011 13:08:59 -0400
Subject: [Python-Dev] Please revert autofolding of tracker edit form
In-Reply-To: <4D963057.3020509@g.nevcal.com>
References: <in07nd$5nb$1@dough.gmane.org>
	<AANLkTimtQVdm-9_cveQtbeCdkOj6cQ7C=Qa+qMqp+zBY@mail.gmail.com>
	<20110331040513.A82B54B71F@kimball.webabinitio.net>
	<19860.23698.260679.739252@montanaro.dyndns.org>
	<19860.25443.131458.388306@montanaro.dyndns.org>
	<20110331141612.5ad2097a@pitrou.net>
	<AANLkTik927x3CBJHdrXBuGNkOhtZCV9NSS7A_4UHnaoB@mail.gmail.com>
	<1301579959.3535.10.camel@localhost.localdomain>
	<in2bg7$v5q$1@dough.gmane.org>
	<20110401022636.14945972@pitrou.net> <in3io8$8a7$1@dough.gmane.org>
	<in4aae$ran$2@dough.gmane.org> <in4t8v$kql$1@dough.gmane.org>
	<4D963057.3020509@g.nevcal.com>
Message-ID: <BANLkTi=p86wErL8GAtgOGG9MNtAr3sp5Aw@mail.gmail.com>

How about something like
http://andurin.com/python-issue-tracker/issue5863.htm but with proper
click-to-expand JS rather than CSS hover expansion, since the pure CSS
solution gets a little jumpy.

Dj Gilcrease

From eric at trueblade.com  Sun Apr  3 19:29:28 2011
From: eric at trueblade.com (Eric Smith)
Date: Sun, 03 Apr 2011 13:29:28 -0400
Subject: [Python-Dev] [Python-checkins] cpython (merge 3.2 -> default):
 Merge fix for issue #11746
In-Reply-To: <E1Q6Q2G-0002zA-Vj@dinsdale.python.org>
References: <E1Q6Q2G-0002zA-Vj@dinsdale.python.org>
Message-ID: <4D98AE78.8090508@trueblade.com>

On 4/3/2011 12:20 PM, antoine.pitrou wrote:
> http://hg.python.org/cpython/rev/c11e05a60d36
> changeset:   69115:c11e05a60d36
> parent:      69113:ff105faf1bac
> parent:      69114:88ed3de28520
> user:        Antoine Pitrou <solipsis at pitrou.net>
> date:        Sun Apr 03 18:16:50 2011 +0200
> summary:
>   Merge fix for issue #11746
> 
> files:
>   Misc/NEWS      |  3 +++
>   Modules/_ssl.c |  2 +-
>   2 files changed, 4 insertions(+), 1 deletions(-)

Test?

From solipsis at pitrou.net  Sun Apr  3 20:02:21 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sun, 3 Apr 2011 20:02:21 +0200
Subject: [Python-Dev] [Python-checkins] cpython: Issue #5863: Rewrite
 BZ2File in pure Python, and allow it to accept
References: <E1Q6OvO-000604-Qj@dinsdale.python.org>
	<4D98A685.1080401@netwok.org>
Message-ID: <20110403200221.3c509a45@pitrou.net>

On Sun, 03 Apr 2011 18:55:33 +0200
Éric Araujo <merwok at netwok.org> wrote:
> Hi,
> 
> > changeset:   69112:2cb07a46f4b5
> > user:        Antoine Pitrou <solipsis at pitrou.net>
> > date:        Sun Apr 03 17:05:46 2011 +0200
> > summary:
> >   Issue #5863: Rewrite BZ2File in pure Python, and allow it to accept
> > file-like objects using a new `fileobj` constructor argument.  Patch by
> > Nadeem Vawda.
> > 
> > files:
> >   Doc/ACKS.txt         |     1 +
> 
> I think we use Misc/ACKS for code+docs contribution like this one,
> Doc/ACKS.txt being used for doc-only changes.  This second file is not
> comprehensive nor always used though, so maybe it should be superseded
> by the former.

Nadeem is already in Misc/ACKS.  I don't know what the policy is for
Doc/ACKS.txt, but since he added himself in the patch, I saw no good
reason for reverting the change.

+1 for merging these files by the way.

Regards

Antoine.



From solipsis at pitrou.net  Sun Apr  3 20:03:18 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sun, 3 Apr 2011 20:03:18 +0200
Subject: [Python-Dev] [Python-checkins] cpython (merge 3.2 -> default):
 Merge fix for issue #11746
References: <E1Q6Q2G-0002zA-Vj@dinsdale.python.org>
	<4D98AE78.8090508@trueblade.com>
Message-ID: <20110403200318.646d1ea1@pitrou.net>

On Sun, 03 Apr 2011 13:29:28 -0400
Eric Smith <eric at trueblade.com> wrote:
> On 4/3/2011 12:20 PM, antoine.pitrou wrote:
> > http://hg.python.org/cpython/rev/c11e05a60d36
> > changeset:   69115:c11e05a60d36
> > parent:      69113:ff105faf1bac
> > parent:      69114:88ed3de28520
> > user:        Antoine Pitrou <solipsis at pitrou.net>
> > date:        Sun Apr 03 18:16:50 2011 +0200
> > summary:
> >   Merge fix for issue #11746
> > 
> > files:
> >   Misc/NEWS      |  3 +++
> >   Modules/_ssl.c |  2 +-
> >   2 files changed, 4 insertions(+), 1 deletions(-)
> 
> Test?

Good point.  If someone knows how to generate elliptic curve keys, a
patch for test_ssl.py is welcome.
(the patch was trivial enough in itself to commit it)

Regards

Antoine.



From skip at pobox.com  Sun Apr  3 20:12:54 2011
From: skip at pobox.com (skip at pobox.com)
Date: Sun, 3 Apr 2011 13:12:54 -0500
Subject: [Python-Dev] Please revert autofolding of tracker edit form
In-Reply-To: <BANLkTi=p86wErL8GAtgOGG9MNtAr3sp5Aw@mail.gmail.com>
References: <in07nd$5nb$1@dough.gmane.org>
	<AANLkTimtQVdm-9_cveQtbeCdkOj6cQ7C=Qa+qMqp+zBY@mail.gmail.com>
	<20110331040513.A82B54B71F@kimball.webabinitio.net>
	<19860.23698.260679.739252@montanaro.dyndns.org>
	<19860.25443.131458.388306@montanaro.dyndns.org>
	<20110331141612.5ad2097a@pitrou.net>
	<AANLkTik927x3CBJHdrXBuGNkOhtZCV9NSS7A_4UHnaoB@mail.gmail.com>
	<1301579959.3535.10.camel@localhost.localdomain>
	<in2bg7$v5q$1@dough.gmane.org> <20110401022636.14945972@pitrou.net>
	<in3io8$8a7$1@dough.gmane.org> <in4aae$ran$2@dough.gmane.org>
	<in4t8v$kql$1@dough.gmane.org> <4D963057.3020509@g.nevcal.com>
	<BANLkTi=p86wErL8GAtgOGG9MNtAr3sp5Aw@mail.gmail.com>
Message-ID: <19864.47270.321672.803783@montanaro.dyndns.org>


    Dj> How about something like
    Dj> http://andurin.com/python-issue-tracker/issue5863.htm but with
    Dj> proper click to expand js not css hover expansion since the pure css
    Dj> solution gets a little jumpy.

That's part of it.  Note the files list as well:

    bz2module-v1.diff       nvawda, 2011-01-24 20:15        ...
    bz2module-v2.diff       nvawda, 2011-01-25 17:07        
    bz2-v3.diff             nvawda, 2011-01-30 14:12        
    bz2-doc.diff            nvawda, 2011-02-05 20:31        
    bz2-v3b.diff            nvawda, 2011-02-08 21:57        
    bz2-v4.diff             nvawda, 2011-03-20 19:15                     
    bz2-v4-doc.diff         nvawda, 2011-03-20 19:18                        
    bz2-v5.diff             nvawda, 2011-04-02 07:34        
    bz2-v5-doc.diff         nvawda, 2011-04-02 07:38                        
    bz2-v6.diff             nvawda, 2011-04-02 18:14                        
    bz2-v6-doc.diff         nvawda, 2011-04-02 18:14        

It looks like there are seven versions of the bz2 patch and four versions of
the doc patch.  I think this list should be collapsed as well (ignoring that
the author seems to have changed his mind about the name of the patch):

    bz2.diff >       nvawda, 2011-04-02 18:14
    bz2-doc.diff >   nvawda, 2011-04-02 18:14

expanding the patch proper:

    bz2.diff v       nvawda, 2011-04-02 18:14
      bz2.diff       nvawda, 2011-04-02 07:34
      bz2.diff       nvawda, 2011-03-20 19:15
      ...
    bz2-doc.diff >   nvawda, 2011-04-02 18:14

Skip

From guido at python.org  Sun Apr  3 21:11:16 2011
From: guido at python.org (Guido van Rossum)
Date: Sun, 3 Apr 2011 12:11:16 -0700
Subject: [Python-Dev] Policy for making changes to the AST
In-Reply-To: <BANLkTim5u6sTFSFt_VC=jbTcvPgMxcG5nw@mail.gmail.com>
References: <AANLkTimuKcuK3jN-vzLU9QPiXpOEPypUp4WQTzTogoVN@mail.gmail.com>
	<4D9819EC.7040507@v.loewis.de>
	<BANLkTim5u6sTFSFt_VC=jbTcvPgMxcG5nw@mail.gmail.com>
Message-ID: <AANLkTi=937PcWXN9-AWOwm4neK5Lk4nod=cKumr4xpjx@mail.gmail.com>

On Sun, Apr 3, 2011 at 4:17 AM, Paul Moore <p.f.moore at gmail.com> wrote:
> On 3 April 2011 07:55, "Martin v. Löwis" <martin at v.loewis.de> wrote:
>>> 1. Do nothing. This will break code that currently uses AST, but doesn't add
>>> any complexity to cpython.
>>
>> I'm in favor of this approach as well. Notice that there is
>> ast.__version__ precisely so that applications can support multiple AST
>> versions.
>
> This might be a suitable topic for a post to "Python Insider" - as
> it's a policy question, that would make the discussion known to a
> wider audience, giving people with an interest the opportunity to
> comment. Hopefully, that also makes the "Do Nothing" approach more
> acceptable to the wider Python community by publicising the
> implications well in advance.
>
> If there are no objections (I'm conscious that we don't want every
> python-dev discussion dumped onto Python Insider) I'll post a short
> article once a consensus has been reached.

Agreed, it would be good to know what people do with AST nodes before
going too far.

In the mean time, until we hear differently, I'm also in favor of #1
(do nothing). I would (perhaps redundantly) say that such changes
should only go into new major releases (i.e. 3.3 right now), not
backported into bugfix releases (e.g. 3.2.1). AFAIK the AST is
CPython-specific so should be treated with the same attitude as
changes to the bytecode. That means, do it conservatively, since there
*are* people who like to write tools that manipulate or analyze this,
and while they know they're doing something CPython and
version-specific, they should not be broken by bugfix releases, since
the people who *use* their code probably have no idea of the deep
magic they're depending on.

-- 
--Guido van Rossum (python.org/~guido)

From brett at python.org  Sun Apr  3 21:40:24 2011
From: brett at python.org (Brett Cannon)
Date: Sun, 3 Apr 2011 12:40:24 -0700
Subject: [Python-Dev] Policy for making changes to the AST
In-Reply-To: <4D9819EC.7040507@v.loewis.de>
References: <AANLkTimuKcuK3jN-vzLU9QPiXpOEPypUp4WQTzTogoVN@mail.gmail.com>
	<4D9819EC.7040507@v.loewis.de>
Message-ID: <BANLkTinJz6372FMGBM2nVFuUPG0v+VUh5Q@mail.gmail.com>

On Sat, Apr 2, 2011 at 23:55, "Martin v. Löwis" <martin at v.loewis.de> wrote:

> > 1. Do nothing. This will break code that currently uses AST, but doesn't
> add
> > any complexity to cpython.
>
> I'm in favor of this approach as well. Notice that there is
> ast.__version__ precisely so that applications can support multiple AST
> versions.
>

As someone who actually does use the AST (http://code.google.com/p/mnfy/), I
am in favour of #1 thanks to ast.__version__. I actually have a version
check in my code to make sure that if a change occurs my tests fail and I
know I need to update things.
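
Such a guard can be as small as a single assertion in the test suite (a
sketch; the expected value is just a placeholder for whichever revision
the tool was last updated against):

    import ast
    import unittest

    class ASTVersionGuard(unittest.TestCase):
        def test_known_ast_version(self):
            # Fail loudly when the AST revision changes, so the
            # node-handling code gets reviewed before anything else runs.
            self.assertEqual(ast.__version__, '82163')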


>
> Regards,
> Martin
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/brett%40python.org
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110403/5139debb/attachment.html>

From nadeem.vawda at gmail.com  Sun Apr  3 23:28:05 2011
From: nadeem.vawda at gmail.com (Nadeem Vawda)
Date: Sun, 3 Apr 2011 23:28:05 +0200
Subject: [Python-Dev] [Python-checkins] cpython: Issue #5863: Rewrite
 BZ2File in pure Python, and allow it to accept
In-Reply-To: <20110403200221.3c509a45@pitrou.net>
References: <E1Q6OvO-000604-Qj@dinsdale.python.org>
	<4D98A685.1080401@netwok.org> <20110403200221.3c509a45@pitrou.net>
Message-ID: <BANLkTin-PSEMPfCP-JLdLRNaqGfr-KJ-kw@mail.gmail.com>

On Sun, Apr 3, 2011 at 8:02 PM, Antoine Pitrou <solipsis at pitrou.net> wrote:
> On Sun, 03 Apr 2011 18:55:33 +0200
> Éric Araujo <merwok at netwok.org> wrote:
>> I think we use Misc/ACKS for code+docs contribution like this one,
>> Doc/ACKS.txt being used for doc-only changes.  This second file is not
>> comprehensive nor always used though, so maybe it should be superseded
>> by the former.
>
> Nadeem is already in Misc/ACKS.  I don't know what the policy is for
> Doc/ACKS.txt, but since he added himself in the patch, I saw no good
> reason for reverting the change.

I added myself because I assumed the policy for Doc/ACKS.txt to be the same
as the policy for Misc/ACKS - if you submit a patch, add your name. Looking
at the devguide, though, I can't find any mention of Doc/ACKS.txt.

> +1 for merging these files by the way.

Sounds good to me. The intro at the top of Misc/ACKS is pretty broad,
thanking people for all contributions (not just code). Unless there's some
plan to split the documentation off into a separate repository, I can't
think of any reason not to merge them.

Regards,
Nadeem

From martin at v.loewis.de  Sun Apr  3 23:58:18 2011
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Sun, 03 Apr 2011 23:58:18 +0200
Subject: [Python-Dev] [Python-checkins] cpython (merge 3.2 -> default):
 Merge fix for issue #11746
In-Reply-To: <20110403200318.646d1ea1@pitrou.net>
References: <E1Q6Q2G-0002zA-Vj@dinsdale.python.org>	<4D98AE78.8090508@trueblade.com>
	<20110403200318.646d1ea1@pitrou.net>
Message-ID: <4D98ED7A.7020607@v.loewis.de>

> Good point.  If someone knows how to generate elliptic curve keys, a
> patch for test_ssl.py is welcome.

You can generate EC keys and certificates like this:

openssl ecparam -out server.key -name secp112r2  -genkey
openssl req -new -x509 -key server.key -out server.pem -subj /CN=www.test

(see "openssl ecparam -list_curves" for a list of valid names)

Regards,
Martin

From eltoder at gmail.com  Mon Apr  4 03:32:26 2011
From: eltoder at gmail.com (Eugene Toder)
Date: Sun, 3 Apr 2011 21:32:26 -0400
Subject: [Python-Dev] Policy for versions of system python
Message-ID: <BANLkTim_0z8R0s+FyhgC5dGH4-pX4qJviA@mail.gmail.com>

Hello,

CPython source code currently contains a number of Python scripts (e.g.
Python/makeopcodetargets.py, Objects/typeslots.py, Parser/asdl_c.py)
that are used during the build of the Python interpreter itself. For
this reason they are run with the system-installed Python. What is the
policy regarding the range of Python versions that they should support?

I looked at some of the scripts and they seem to support both 2 and 3,
starting from at most 2.4. Python/makeopcodetargets.py says at the
top:
# This code should stay compatible with Python 2.3, at least while
# some of the buildbots have Python 2.3 as their system Python.
Is this the official minimal version or do we have this spelled out
more explicitly somewhere?

Eugene

From ncoghlan at gmail.com  Mon Apr  4 03:43:46 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 4 Apr 2011 11:43:46 +1000
Subject: [Python-Dev] Policy for making changes to the AST
In-Reply-To: <AANLkTi=937PcWXN9-AWOwm4neK5Lk4nod=cKumr4xpjx@mail.gmail.com>
References: <AANLkTimuKcuK3jN-vzLU9QPiXpOEPypUp4WQTzTogoVN@mail.gmail.com>
	<4D9819EC.7040507@v.loewis.de>
	<BANLkTim5u6sTFSFt_VC=jbTcvPgMxcG5nw@mail.gmail.com>
	<AANLkTi=937PcWXN9-AWOwm4neK5Lk4nod=cKumr4xpjx@mail.gmail.com>
Message-ID: <BANLkTinrWiwVJvzjH6r2xM4OtWLFPkO+WA@mail.gmail.com>

On Mon, Apr 4, 2011 at 5:11 AM, Guido van Rossum <guido at python.org> wrote:
> In the mean time, until we hear differently, I'm also in favor of #1
> (do nothing). I would (perhaps redundantly) say that such changes
> should only go into new major releases (i.e. 3.3 right now), not
> backported into bugfix releases (e.g. 3.2.1). AFAIK the AST is
> CPython-specific so should be treated with the same attitude as
> changes to the bytecode. That means, do it conservatively, since there
> *are* people who like to write tools that manipulate or analyze this,
> and while they know they're doing something CPython and
> version-specific, they should not be broken by bugfix releases, since
> the people who *use* their code probably have no idea of the deep
> magic they're depending on.

Perhaps we should add a warning to the ast module docs similar to the
one we have for the dis module, and use it to explicitly remind people
to check ast.__version__ before proceeding with AST manipulation?

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From guido at python.org  Mon Apr  4 04:02:33 2011
From: guido at python.org (Guido van Rossum)
Date: Sun, 3 Apr 2011 19:02:33 -0700
Subject: [Python-Dev] Policy for making changes to the AST
In-Reply-To: <BANLkTinrWiwVJvzjH6r2xM4OtWLFPkO+WA@mail.gmail.com>
References: <AANLkTimuKcuK3jN-vzLU9QPiXpOEPypUp4WQTzTogoVN@mail.gmail.com>
	<4D9819EC.7040507@v.loewis.de>
	<BANLkTim5u6sTFSFt_VC=jbTcvPgMxcG5nw@mail.gmail.com>
	<AANLkTi=937PcWXN9-AWOwm4neK5Lk4nod=cKumr4xpjx@mail.gmail.com>
	<BANLkTinrWiwVJvzjH6r2xM4OtWLFPkO+WA@mail.gmail.com>
Message-ID: <BANLkTi=mcZn6uhJ0L4tECM_jTTovJ+MKSw@mail.gmail.com>

On Sun, Apr 3, 2011 at 6:43 PM, Nick Coghlan <ncoghlan at gmail.com> wrote:
> On Mon, Apr 4, 2011 at 5:11 AM, Guido van Rossum <guido at python.org> wrote:
>> In the mean time, until we hear differently, I'm also in favor of #1
>> (do nothing). I would (perhaps redundantly) say that such changes
>> should only go into new major releases (i.e. 3.3 right now), not
>> backported into bugfix releases (e.g. 3.2.1). AFAIK the AST is
>> CPython-specific so should be treated with the same attitude as
>> changes to the bytecode. That means, do it conservatively, since there
>> *are* people who like to write tools that manipulate or analyze this,
>> and while they know they're doing something CPython and
>> version-specific, they should not be broken by bugfix releases, since
>> the people who *use* their code probably have no idea of the deep
>> magic they're depending on.
>
> Perhaps we should add a warning to the ast module docs similar to the
> one we have for the dis module, and use it to explicitly remind people
> to check ast.__version__ before proceeding with AST manipulation?

Sure, but do we have any indication that the warnings for dis make a difference?

-- 
--Guido van Rossum (python.org/~guido)

From ncoghlan at gmail.com  Mon Apr  4 05:03:37 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 4 Apr 2011 13:03:37 +1000
Subject: [Python-Dev] Policy for making changes to the AST
In-Reply-To: <BANLkTi=mcZn6uhJ0L4tECM_jTTovJ+MKSw@mail.gmail.com>
References: <AANLkTimuKcuK3jN-vzLU9QPiXpOEPypUp4WQTzTogoVN@mail.gmail.com>
	<4D9819EC.7040507@v.loewis.de>
	<BANLkTim5u6sTFSFt_VC=jbTcvPgMxcG5nw@mail.gmail.com>
	<AANLkTi=937PcWXN9-AWOwm4neK5Lk4nod=cKumr4xpjx@mail.gmail.com>
	<BANLkTinrWiwVJvzjH6r2xM4OtWLFPkO+WA@mail.gmail.com>
	<BANLkTi=mcZn6uhJ0L4tECM_jTTovJ+MKSw@mail.gmail.com>
Message-ID: <BANLkTinvEF_QjWbexnCVSR8-2nLYRt3=Xw@mail.gmail.com>

On Mon, Apr 4, 2011 at 12:02 PM, Guido van Rossum <guido at python.org> wrote:
>> Perhaps we should add a warning to the ast module docs similar to the
>> one we have for the dis module, and use it to explicitly remind people
>> to check ast.__version__ before proceeding with AST manipulation?
>
> Sure, but do we have any indication that the warnings for dis make a difference?

I know it makes *me* feel better when I commit anything that messes
with the bytecode. I don't know how much it really matters to end
users - bytecode hackery has been frowned upon for so long, the
warning is probably somewhat redundant.

Still, if we do articulate a clearer policy on the topic, it should be
officially documented somewhere, and the AST module docs are probably
the most discoverable of the available options.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From tjreedy at udel.edu  Mon Apr  4 07:05:12 2011
From: tjreedy at udel.edu (Terry Reedy)
Date: Mon, 04 Apr 2011 01:05:12 -0400
Subject: [Python-Dev] Policy for making changes to the AST
In-Reply-To: <AANLkTimuKcuK3jN-vzLU9QPiXpOEPypUp4WQTzTogoVN@mail.gmail.com>
References: <AANLkTimuKcuK3jN-vzLU9QPiXpOEPypUp4WQTzTogoVN@mail.gmail.com>
Message-ID: <inbjia$e1h$1@dough.gmane.org>

On 4/2/2011 9:55 PM, Eugene Toder wrote:
>  Documentation for ast module does not warn about possible changes,

The current boxed warning at the top of the dis doc is fairly recent.
The ast doc should gain something similar.  It currently does say:
"__version__ which is the decimal Subversion revision number of the file 
shown below."
which clearly implies that ast details can change. I presume that sentence 
needs revision for hg.

 > but obviously changes
> can occur, for example, when new constructs are introduced. What about other
> changes? Is there a policy for what's acceptable and how this should be
> handled?

Thanks for bringing this up. We need to be able to make changes to 
improve compilation as well as to add new features.

-- 
Terry Jan Reedy


From g.brandl at gmx.net  Mon Apr  4 11:55:07 2011
From: g.brandl at gmx.net (Georg Brandl)
Date: Mon, 04 Apr 2011 11:55:07 +0200
Subject: [Python-Dev] [Python-checkins] cpython: Issue #5863: Rewrite
 BZ2File in pure Python, and allow it to accept
In-Reply-To: <BANLkTin-PSEMPfCP-JLdLRNaqGfr-KJ-kw@mail.gmail.com>
References: <E1Q6OvO-000604-Qj@dinsdale.python.org>	<4D98A685.1080401@netwok.org>
	<20110403200221.3c509a45@pitrou.net>
	<BANLkTin-PSEMPfCP-JLdLRNaqGfr-KJ-kw@mail.gmail.com>
Message-ID: <inc4i4$di0$1@dough.gmane.org>

On 03.04.2011 23:28, Nadeem Vawda wrote:
> On Sun, Apr 3, 2011 at 8:02 PM, Antoine Pitrou <solipsis at pitrou.net> wrote:
>> On Sun, 03 Apr 2011 18:55:33 +0200
>> Éric Araujo <merwok at netwok.org> wrote:
>>> I think we use Misc/ACKS for code+docs contribution like this one,
>>> Doc/ACKS.txt being used for doc-only changes.  This second file is not
>>> comprehensive nor always used though, so maybe it should be superseded
>>> by the former.
>>
>> Nadeem is already in Misc/ACKS.  I don't know what the policy is for
>> Doc/ACKS.txt, but since he added himself in the patch, I saw no good
>> reason for reverting the change.
> 
> I added myself because I assumed the policy for Doc/ACKS.txt to be the same
> as the policy for Misc/ACKS - if you submit a patch, add your name. Looking
> at the devguide, though, I can't find any mention of Doc/ACKS.txt.
> 
>> +1 for merging these files by the way.
> 
> Sounds good to me. The intro at the top of Misc/ACKS is pretty broad,
> thanking people for all contributions (not just code). Unless there's some
> plan to split the documentation off into a separate repository, I can't
> think of any reason not to merge them.

If we can get Misc/ACKS in a format that is includable in reST, I would be
+1 for a merger.  (That way we can still list acknowledgements in the docs.)

Georg


From nas at arctrix.com  Mon Apr  4 17:17:04 2011
From: nas at arctrix.com (Neil Schemenauer)
Date: Mon, 4 Apr 2011 15:17:04 +0000 (UTC)
Subject: [Python-Dev] Policy for making changes to the AST
References: <AANLkTimuKcuK3jN-vzLU9QPiXpOEPypUp4WQTzTogoVN@mail.gmail.com>
	<4D9819EC.7040507@v.loewis.de>
	<BANLkTinJz6372FMGBM2nVFuUPG0v+VUh5Q@mail.gmail.com>
Message-ID: <incndg$3of$1@dough.gmane.org>

As a user of the AST, I as well favor just changing the AST and the
version.  IMHO, it is not intended to be stable between Python
releases (similar to bytecode).

  Neil


From tjreedy at udel.edu  Mon Apr  4 18:19:15 2011
From: tjreedy at udel.edu (Terry Reedy)
Date: Mon, 04 Apr 2011 12:19:15 -0400
Subject: [Python-Dev] Policy for making changes to the AST
In-Reply-To: <BANLkTi=mcZn6uhJ0L4tECM_jTTovJ+MKSw@mail.gmail.com>
References: <AANLkTimuKcuK3jN-vzLU9QPiXpOEPypUp4WQTzTogoVN@mail.gmail.com>	<4D9819EC.7040507@v.loewis.de>	<BANLkTim5u6sTFSFt_VC=jbTcvPgMxcG5nw@mail.gmail.com>	<AANLkTi=937PcWXN9-AWOwm4neK5Lk4nod=cKumr4xpjx@mail.gmail.com>	<BANLkTinrWiwVJvzjH6r2xM4OtWLFPkO+WA@mail.gmail.com>
	<BANLkTi=mcZn6uhJ0L4tECM_jTTovJ+MKSw@mail.gmail.com>
Message-ID: <incr24$thr$1@dough.gmane.org>

On 4/3/2011 10:02 PM, Guido van Rossum wrote:

> Sure, but do we have any indication that the warnings for dis make a difference?

I think there had been a few grumbles about bytecode not being stable. 
Without that, it is part of the newish effort to specify in the docs 
what is CPython specific. In http://bugs.python.org/issue11762
I propose a lighter version of the dis notice:

"CPython implementation detail: The ast definition is specific to the 
CPython interpreter! Ast nodes may be added, removed, or changed between 
versions. Use *ast.__version__* to work across versions."

and that ast.__version__ get a normal formal entry

ast.__version__
     String constant with version number of the abstract grammar file.
     3.1: '67616'; 3.2: '82163'; 3.3: 'xxxxxxxxx'

-- 
Terry Jan Reedy


From fwierzbicki at gmail.com  Mon Apr  4 19:05:30 2011
From: fwierzbicki at gmail.com (fwierzbicki at gmail.com)
Date: Mon, 4 Apr 2011 10:05:30 -0700
Subject: [Python-Dev] Policy for making changes to the AST
In-Reply-To: <AANLkTimuKcuK3jN-vzLU9QPiXpOEPypUp4WQTzTogoVN@mail.gmail.com>
References: <AANLkTimuKcuK3jN-vzLU9QPiXpOEPypUp4WQTzTogoVN@mail.gmail.com>
Message-ID: <BANLkTin5C+gGxo8gtsh=QRnb-EVBAQEvZw@mail.gmail.com>

As a re-implementor of ast.py that tries to be node for node
compatible, I'm fine with #1 but would really like to have tests that
will fail in test_ast.py to alert me!

-Frank

From amauryfa at gmail.com  Mon Apr  4 19:11:52 2011
From: amauryfa at gmail.com (Amaury Forgeot d'Arc)
Date: Mon, 4 Apr 2011 19:11:52 +0200
Subject: [Python-Dev] Policy for versions of system python
In-Reply-To: <BANLkTim_0z8R0s+FyhgC5dGH4-pX4qJviA@mail.gmail.com>
References: <BANLkTim_0z8R0s+FyhgC5dGH4-pX4qJviA@mail.gmail.com>
Message-ID: <BANLkTinGGMvQw17Uex5AayGBCRe6Jv9=cQ@mail.gmail.com>

2011/4/4 Eugene Toder <eltoder at gmail.com>:
> Hello,
>
> CPython source code currently contains a number of python scripts (e.g
> Python/makeopcodetargets.py, Objects/typeslots.py, Parser/asdl_c.py)
> that are used during the build of the python interpreter itself. For
> this reason they are run with system installed python. What is the
> policy regarding
> the range of python versions that they should support?
>
> I looked at some of the scripts and they seem to support both 2 and 3,
> starting from at most 2.4. Python/makeopcodetargets.py says at the
> top:
> # This code should stay compatible with Python 2.3, at least while
> # some of the buildbots have Python 2.3 as their system Python.
> Is this the official minimal version or do we have this spelled out
> more explicitly somewhere?

Normally PEP 291 lists the packages which should remain compatible
with previous versions of Python:
http://www.python.org/dev/peps/pep-0291/

makeopcodetargets.py is not mentioned there, though.

-- 
Amaury Forgeot d'Arc

From solipsis at pitrou.net  Mon Apr  4 19:19:49 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Mon, 4 Apr 2011 19:19:49 +0200
Subject: [Python-Dev] Policy for versions of system python
References: <BANLkTim_0z8R0s+FyhgC5dGH4-pX4qJviA@mail.gmail.com>
	<BANLkTinGGMvQw17Uex5AayGBCRe6Jv9=cQ@mail.gmail.com>
Message-ID: <20110404191949.07e13129@pitrou.net>

On Mon, 4 Apr 2011 19:11:52 +0200
"Amaury Forgeot d'Arc" <amauryfa at gmail.com> wrote:
> 2011/4/4 Eugene Toder <eltoder at gmail.com>:
> > Hello,
> >
> > CPython source code currently contains a number of python scripts (e.g
> > Python/makeopcodetargets.py, Objects/typeslots.py, Parser/asdl_c.py)
> > that are used during the build of the python interpreter itself. For
> > this reason they are run with system installed python. What is the
> > policy regarding
> > the range of python versions that they should support?
> >
> > I looked at some of the scripts and they seem to support both 2 and 3,
> > starting from at most 2.4. Python/makeopcodetargets.py says at the
> > top:
> > # This code should stay compatible with Python 2.3, at least while
> > # some of the buildbots have Python 2.3 as their system Python.
> > Is this the official minimal version or do we have this spelled out
> > more explicitly somewhere?
> 
> Normally PEP291 lists the packages which should remain compatible
> with previous versions of Python:
> http://www.python.org/dev/peps/pep-0291/

That's quite orthogonal. PEP 291 is about public stdlib modules, not
build scripts. Furthermore, “this PEP has no bearing on the Python 3
standard library”.

To answer Eugene's question, there's no official policy, but the
comment at the top of Python/makeopcodetargets.py can indeed serve as
a useful guideline. I wonder if we still have buildbots with 2.3 as
the system Python, by the way.

Regards

Antoine.



From fuzzyman at voidspace.org.uk  Mon Apr  4 19:38:24 2011
From: fuzzyman at voidspace.org.uk (Michael Foord)
Date: Mon, 04 Apr 2011 18:38:24 +0100
Subject: [Python-Dev] Policy for making changes to the AST
In-Reply-To: <BANLkTin5C+gGxo8gtsh=QRnb-EVBAQEvZw@mail.gmail.com>
References: <AANLkTimuKcuK3jN-vzLU9QPiXpOEPypUp4WQTzTogoVN@mail.gmail.com>
	<BANLkTin5C+gGxo8gtsh=QRnb-EVBAQEvZw@mail.gmail.com>
Message-ID: <4D9A0210.4000406@voidspace.org.uk>

On 04/04/2011 18:05, fwierzbicki at gmail.com wrote:
> As a re-implementor of ast.py that tries to be node for node
> compatible, I'm fine with #1 but would really like to have tests that
> will fail in test_ast.py to alert me!
>

A lot of tools that work with Python source code use ast - so even 
though other implementations may not use the same ast "under the hood" 
they will probably at least *want* to provide a compatible 
implementation. IronPython is in that boat too (although I don't know if 
we *have* a compatible implementation yet - we certainly feel like we 
*should* have one).

Michael

> -Frank
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk


-- 
http://www.voidspace.org.uk/

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing http://www.sqlite.org/different.html


From guido at python.org  Mon Apr  4 20:00:51 2011
From: guido at python.org (Guido van Rossum)
Date: Mon, 4 Apr 2011 11:00:51 -0700
Subject: [Python-Dev] Policy for making changes to the AST
In-Reply-To: <4D9A0210.4000406@voidspace.org.uk>
References: <AANLkTimuKcuK3jN-vzLU9QPiXpOEPypUp4WQTzTogoVN@mail.gmail.com>
	<BANLkTin5C+gGxo8gtsh=QRnb-EVBAQEvZw@mail.gmail.com>
	<4D9A0210.4000406@voidspace.org.uk>
Message-ID: <BANLkTimRGz8vDHG63TG9qEg_gm7g8b9RYQ@mail.gmail.com>

On Mon, Apr 4, 2011 at 10:05 AM, fwierzbicki at gmail.com
<fwierzbicki at gmail.com> wrote:
> As a re-implementor of ast.py that tries to be node for node
> compatible, I'm fine with #1 but would really like to have tests that
> will fail in test_ast.py to alert me!

[and]

On Mon, Apr 4, 2011 at 10:38 AM, Michael Foord
<fuzzyman at voidspace.org.uk> wrote:
> A lot of tools that work with Python source code use ast - so even though
> other implementations may not use the same ast "under the hood" they will
> probably at least *want* to provide a compatible implementation. IronPython
> is in that boat too (although I don't know if we *have* a compatible
> implementation yet - we certainly feel like we *should* have one).

Ok, so it sounds like ast is *not* limited to CPython? That makes it
harder to justify changing it just so as to ease the compilation
process in CPython (as opposed to adding new language features).

-- 
--Guido van Rossum (python.org/~guido)

From glyph at twistedmatrix.com  Mon Apr  4 20:07:47 2011
From: glyph at twistedmatrix.com (Glyph Lefkowitz)
Date: Mon, 4 Apr 2011 14:07:47 -0400
Subject: [Python-Dev] Policy for making changes to the AST
In-Reply-To: <BANLkTimRGz8vDHG63TG9qEg_gm7g8b9RYQ@mail.gmail.com>
References: <AANLkTimuKcuK3jN-vzLU9QPiXpOEPypUp4WQTzTogoVN@mail.gmail.com>
	<BANLkTin5C+gGxo8gtsh=QRnb-EVBAQEvZw@mail.gmail.com>
	<4D9A0210.4000406@voidspace.org.uk>
	<BANLkTimRGz8vDHG63TG9qEg_gm7g8b9RYQ@mail.gmail.com>
Message-ID: <CFB1DD2E-20FC-43C6-8A71-3500FC0E5E29@twistedmatrix.com>


On Apr 4, 2011, at 2:00 PM, Guido van Rossum wrote:

> On Mon, Apr 4, 2011 at 10:05 AM, fwierzbicki at gmail.com
> <fwierzbicki at gmail.com> wrote:
>> As a re-implementor of ast.py that tries to be node for node
>> compatible, I'm fine with #1 but would really like to have tests that
>> will fail in test_ast.py to alert me!
> 
> [and]
> 
> On Mon, Apr 4, 2011 at 10:38 AM, Michael Foord
> <fuzzyman at voidspace.org.uk> wrote:
>> A lot of tools that work with Python source code use ast - so even though
>> other implementations may not use the same ast "under the hood" they will
>> probably at least *want* to provide a compatible implementation. IronPython
>> is in that boat too (although I don't know if we *have* a compatible
>> implementation yet - we certainly feel like we *should* have one).
> 
> Ok, so it sounds like ast is *not* limited to CPython?

Oh, definitely not.  I would be pretty dismayed if tools like <http://bazaar.launchpad.net/~divmod-dev/divmod.org/trunk/files/head:/Pyflakes/> would not run on Jython & PyPy.


From dinov at microsoft.com  Mon Apr  4 20:07:55 2011
From: dinov at microsoft.com (Dino Viehland)
Date: Mon, 4 Apr 2011 18:07:55 +0000
Subject: [Python-Dev] Policy for making changes to the AST
In-Reply-To: <BANLkTimRGz8vDHG63TG9qEg_gm7g8b9RYQ@mail.gmail.com>
References: <AANLkTimuKcuK3jN-vzLU9QPiXpOEPypUp4WQTzTogoVN@mail.gmail.com>
	<BANLkTin5C+gGxo8gtsh=QRnb-EVBAQEvZw@mail.gmail.com>
	<4D9A0210.4000406@voidspace.org.uk>
	<BANLkTimRGz8vDHG63TG9qEg_gm7g8b9RYQ@mail.gmail.com>
Message-ID: <6C7ABA8B4E309440B857D74348836F2E150320A6@TK5EX14MBXC133.redmond.corp.microsoft.com>


Guido wrote:
> On Mon, Apr 4, 2011 at 10:05 AM, fwierzbicki at gmail.com
> <fwierzbicki at gmail.com> wrote:
> > As a re-implementor of ast.py that tries to be node for node
> > compatible, I'm fine with #1 but would really like to have tests that
> > will fail in test_ast.py to alert me!
> 
> [and]
> 
> On Mon, Apr 4, 2011 at 10:38 AM, Michael Foord
> <fuzzyman at voidspace.org.uk> wrote:
> > A lot of tools that work with Python source code use ast - so even
> > though other implementations may not use the same ast "under the hood"
> > they will probably at least *want* to provide a compatible
> > implementation. IronPython is in that boat too (although I don't know
> > if we *have* a compatible implementation yet - we certainly feel like we
> *should* have one).
> 
> Ok, so it sounds like ast is *not* limited to CPython? That makes it harder to
> justify changing it just so as to ease the compilation process in CPython (as
> opposed to add new language features).

Even so I think adding new features does allow new changes to the AST.  We'll
need to do the work to add support for the new features anyway so updating the
AST accordingly won't be much more work.  I agree with Frank that as long as there
are tests for the new features it's fine.  I think it'll also be better for consumers who
would probably prefer to see a YieldFrom node rather than its expansion (and not
all new language features will necessarily have a reasonable  expansion - consider if
goto ever happened ;) ).

Also, IronPython doesn't have ast yet but I think it has been requested that we 
implement it - we just haven't gotten around to it yet.

So I'm +1 on allowing changes to it.

From nad at acm.org  Mon Apr  4 20:20:11 2011
From: nad at acm.org (Ned Deily)
Date: Mon, 04 Apr 2011 11:20:11 -0700
Subject: [Python-Dev] Policy for versions of system python
References: <BANLkTim_0z8R0s+FyhgC5dGH4-pX4qJviA@mail.gmail.com>
	<BANLkTinGGMvQw17Uex5AayGBCRe6Jv9=cQ@mail.gmail.com>
	<20110404191949.07e13129@pitrou.net>
Message-ID: <nad-EAE92F.11201004042011@news.gmane.org>

In article <20110404191949.07e13129 at pitrou.net>,
 Antoine Pitrou <solipsis at pitrou.net> wrote:
> To answer Eugene's question, there's no official policy but the
> comment at the top of Python/makeopcodetargets.py can indeed serve as
> an useful guideline. I wonder if we still have buildbots with 2.3 as
> the system Python, by the way.

The system Python on Mac OS X 10.4 (Tiger) is Python 2.3.  For 10.5 
(Leopard) it's 2.5.  10.6 (Snow Leopard) has both 2.6 and 2.5.

-- 
 Ned Deily,
 nad at acm.org


From flub at devork.be  Mon Apr  4 20:45:34 2011
From: flub at devork.be (Floris Bruynooghe)
Date: Mon, 4 Apr 2011 19:45:34 +0100
Subject: [Python-Dev] Policy for making changes to the AST
In-Reply-To: <CFB1DD2E-20FC-43C6-8A71-3500FC0E5E29@twistedmatrix.com>
References: <AANLkTimuKcuK3jN-vzLU9QPiXpOEPypUp4WQTzTogoVN@mail.gmail.com>
	<BANLkTin5C+gGxo8gtsh=QRnb-EVBAQEvZw@mail.gmail.com>
	<4D9A0210.4000406@voidspace.org.uk>
	<BANLkTimRGz8vDHG63TG9qEg_gm7g8b9RYQ@mail.gmail.com>
	<CFB1DD2E-20FC-43C6-8A71-3500FC0E5E29@twistedmatrix.com>
Message-ID: <BANLkTinfJq8w5HyBMfONLQKxPkS36FLd8w@mail.gmail.com>

On 4 April 2011 19:07, Glyph Lefkowitz <glyph at twistedmatrix.com> wrote:
>
> On Apr 4, 2011, at 2:00 PM, Guido van Rossum wrote:
>
>> On Mon, Apr 4, 2011 at 10:05 AM, fwierzbicki at gmail.com
>> <fwierzbicki at gmail.com> wrote:
>>> As a re-implementor of ast.py that tries to be node for node
>>> compatible, I'm fine with #1 but would really like to have tests that
>>> will fail in test_ast.py to alert me!
>>
>> [and]
>>
>> On Mon, Apr 4, 2011 at 10:38 AM, Michael Foord
>> <fuzzyman at voidspace.org.uk> wrote:
>>> A lot of tools that work with Python source code use ast - so even though
>>> other implementations may not use the same ast "under the hood" they will
>>> probably at least *want* to provide a compatible implementation. IronPython
>>> is in that boat too (although I don't know if we *have* a compatible
>>> implementation yet - we certainly feel like we *should* have one).
>>
>> Ok, so it sounds like ast is *not* limited to CPython?
>
> Oh, definitely not.  I would be pretty dismayed if tools like <http://bazaar.launchpad.net/~divmod-dev/divmod.org/trunk/files/head:/Pyflakes/> would not run on Jython & PyPy.

Add py.test as an application that uses the AST to support Jython,
PyPy and CPython in a portable way.  I always assumed AST was created
*because* bytecode was too CPython specific (but then I've never
implemented a language).

-- 
Debian GNU/Linux -- The Power of Freedom
www.debian.org | www.gnu.org | www.kernel.org

From tjreedy at udel.edu  Mon Apr  4 21:56:16 2011
From: tjreedy at udel.edu (Terry Reedy)
Date: Mon, 04 Apr 2011 15:56:16 -0400
Subject: [Python-Dev] Policy for making changes to the AST
In-Reply-To: <BANLkTimRGz8vDHG63TG9qEg_gm7g8b9RYQ@mail.gmail.com>
References: <AANLkTimuKcuK3jN-vzLU9QPiXpOEPypUp4WQTzTogoVN@mail.gmail.com>	<BANLkTin5C+gGxo8gtsh=QRnb-EVBAQEvZw@mail.gmail.com>	<4D9A0210.4000406@voidspace.org.uk>
	<BANLkTimRGz8vDHG63TG9qEg_gm7g8b9RYQ@mail.gmail.com>
Message-ID: <ind7p1$fvn$1@dough.gmane.org>

On 4/4/2011 2:00 PM, Guido van Rossum wrote:
> On Mon, Apr 4, 2011 at 10:05 AM, fwierzbicki at gmail.com
> <fwierzbicki at gmail.com>  wrote:
>> As a re-implementor of ast.py that tries to be node for node
>> compatible, I'm fine with #1 but would really like to have tests that
>> will fail in test_ast.py to alert me!
>
> [and]
>
> On Mon, Apr 4, 2011 at 10:38 AM, Michael Foord
> <fuzzyman at voidspace.org.uk>  wrote:
>> A lot of tools that work with Python source code use ast - so even though
>> other implementations may not use the same ast "under the hood" they will
>> probably at least *want* to provide a compatible implementation. IronPython
>> is in that boat too (although I don't know if we *have* a compatible
>> implementation yet - we certainly feel like we *should* have one).
>
> Ok, so it sounds like ast is *not* limited to CPython? That makes it
> harder to justify changing it just so as to ease the compilation
> process in CPython (as opposed to add new language features).

Harder, but not impossible. Moving optimizations from bytecode (where 
they are demonstrably a bit fragile) to ast manipulations (where we 
presume they will be more robust and can be broader) should be a win in 
itself and it also makes them potentially available to *other* 
implementations. (There would have been some advantage to making this 
change for 3.0 But there was also reason for as little change as needed, 
just as with unittest.)

Are at least some of the implementation methods similar enough that they 
could use the same AST? It is, after all, a *semantic* translation into 
another language, and that need not depend on subsequent transformation 
and compilation to the ultimate target. A multiple-implementation AST 
could still be x.y dependent.

-- 
Terry Jan Reedy


From dinov at microsoft.com  Mon Apr  4 22:05:11 2011
From: dinov at microsoft.com (Dino Viehland)
Date: Mon, 4 Apr 2011 20:05:11 +0000
Subject: [Python-Dev] Policy for making changes to the AST
In-Reply-To: <ind7p1$fvn$1@dough.gmane.org>
References: <AANLkTimuKcuK3jN-vzLU9QPiXpOEPypUp4WQTzTogoVN@mail.gmail.com>
	<BANLkTin5C+gGxo8gtsh=QRnb-EVBAQEvZw@mail.gmail.com>
	<4D9A0210.4000406@voidspace.org.uk>
	<BANLkTimRGz8vDHG63TG9qEg_gm7g8b9RYQ@mail.gmail.com>
	<ind7p1$fvn$1@dough.gmane.org>
Message-ID: <6C7ABA8B4E309440B857D74348836F2E1503570D@TK5EX14MBXC133.redmond.corp.microsoft.com>

Terry wrote:
> Are at least some of the implementation methods similar enough that they
> could use the same AST? It is, after all, a *semantic* translation into another
> language, and that need not depend on subsequent transforation and
> compilation to the ultimate target. A multiple-implementation AST could still
> be x.y dependent.

For IronPython we have our own AST, which is closely tied to the DLR ASTs (our
AST nodes are actually subclasses of the core DLR Expression node, which then
"reduce" to the core DLR nodes on demand).  We already do a huge amount of
manipulation of those ASTs, from optimizations (constant folding being the
primary one) to rewriting them completely for things like generators,
sys.settrace support and other optimizations such as runtime-optimized fast
exception support.  But our ASTs are probably sufficiently different and
sufficiently tied to the DLR that we couldn't share the exact same
optimizations on the ASTs, though it would probably make it easier to steal
ideas from CPython if you did them at the AST level as well.

They also have other differences such as the fact that they're effectively immutable.
Likely when we implement the _ast module it'll just transform our ASTs into the shared
ASTs via some additional attributes we attach to our ASTs rather than making them the
core AST implementation.




From fuzzyman at voidspace.org.uk  Mon Apr  4 22:16:21 2011
From: fuzzyman at voidspace.org.uk (Michael Foord)
Date: Mon, 4 Apr 2011 21:16:21 +0100
Subject: [Python-Dev] [Python-checkins] cpython: Revert the
 Lib/test/test_bigmem.py changes from commit 17891566a478 (and a
In-Reply-To: <20110325180928.7ef6f692@pitrou.net>
References: <E1Q34UC-0000Yl-SC@dinsdale.python.org>
	<4D8CC66A.5080405@netwok.org> <20110325180928.7ef6f692@pitrou.net>
Message-ID: <BANLkTikm2VSXZhtDVfmfs0VKNR-uiivUwQ@mail.gmail.com>

On 25 March 2011 17:09, Antoine Pitrou <solipsis at pitrou.net> wrote:

> On Fri, 25 Mar 2011 17:44:26 +0100
> Éric Araujo <merwok at netwok.org> wrote:
> > Hi,
> >
> > > changeset:   68921:11dc3f270594
> > > user:        Thomas Wouters <thomas at python.org>
> > > date:        Fri Mar 25 11:42:37 2011 +0100
> > > summary:
> > >   Revert the Lib/test/test_bigmem.py changes from commit 17891566a478
> (and a
> > > few other assertEqual tests that snuck in), and expand the docstrings
> and
> > > comments explaining why and how these tests are supposed to work.
> >
> > Your commit message does not explain why you reverted the changes.  The
> > specific assert* methods give more useful messages than assertEqual in
> > case of failure.
>
> Because they don't go well with huge inputs?
>
> >>> s = "x" * (2**29)
> >>> case.assertEqual(s + "a", s + "b")
> Traceback (most recent call last):
>  File "<stdin>", line 1, in <module>
>  File "/home/antoine/cpython/default/Lib/unittest/case.py", line 643,
> in assertEqual assertion_func(first, second, msg=msg)
>  File "/home/antoine/cpython/default/Lib/unittest/case.py", line 984,
> in assertMultiLineEqual secondlines = [second + '\n']
> MemoryError
>
>
http://bugs.python.org/issue11763

Michael


>
> (of course, the functions could just be fixed)
>
>



-- 

http://www.voidspace.org.uk/

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing http://www.sqlite.org/different.html

From guido at python.org  Mon Apr  4 22:31:40 2011
From: guido at python.org (Guido van Rossum)
Date: Mon, 4 Apr 2011 13:31:40 -0700
Subject: [Python-Dev] Policy for making changes to the AST
In-Reply-To: <ind7p1$fvn$1@dough.gmane.org>
References: <AANLkTimuKcuK3jN-vzLU9QPiXpOEPypUp4WQTzTogoVN@mail.gmail.com>
	<BANLkTin5C+gGxo8gtsh=QRnb-EVBAQEvZw@mail.gmail.com>
	<4D9A0210.4000406@voidspace.org.uk>
	<BANLkTimRGz8vDHG63TG9qEg_gm7g8b9RYQ@mail.gmail.com>
	<ind7p1$fvn$1@dough.gmane.org>
Message-ID: <BANLkTi=L+CiB7iWEaeuQcnUkY1H8ksxxOQ@mail.gmail.com>

On Mon, Apr 4, 2011 at 12:56 PM, Terry Reedy <tjreedy at udel.edu> wrote:
> Moving optimizations from bytecode (where they
> are demonstrably a bit fragile) to ast manipulations (where we presume they
> will be more robust and can be broader) should be a win in itself

I am still doubtful of that. While in theory it is easier to become
confused about what the bytecode means, in practice the bugs we had
due to bytecode optimization were based on misunderstandings and
unintended consequences that would have caused the *exact* same bug if
the optimization was done at the AST level. (E.g. various
mistreatments of -0, ignoring possible floating point misbehavior for
extreme values or situations.)

-- 
--Guido van Rossum (python.org/~guido)

From brian.curtin at gmail.com  Mon Apr  4 22:38:08 2011
From: brian.curtin at gmail.com (Brian Curtin)
Date: Mon, 4 Apr 2011 15:38:08 -0500
Subject: [Python-Dev] Supporting Visual Studio 2010
Message-ID: <AANLkTimk52ipfYg=H0=Y8D_YQvMxa09YsOystqsA+ANT@mail.gmail.com>

Would it be reasonable to begin supporting Visual Studio 2010 for Windows
builds of 3.3? I now have a personal interest in this happening for some
stuff at work, and there's been a lot of questions in the last few months
about when we'll support it coming from python-list, #python-dev, and in
person at PyCon.

I wasn't around for the transition from 2005 to 2008, but I see we have a
few sub-folders in PC/ for previous versions, so apparently we may support
multiple versions at one time. Does it make sense to start this process now
for a change to 2010?

If it's not feasible to release 3.3 from a 2010 build, when might we be able
to make the change? Keep in mind the 3.3 final release is almost a year and
a half away, and we already know that Microsoft is likely to pull the cord
on VS2008 Express at some point now that 2010 has been out for a while.


I'm willing to do the work on this, but I just want to make sure it's a
worthwhile effort.

From martin at v.loewis.de  Mon Apr  4 23:40:33 2011
From: martin at v.loewis.de (Martin v. Löwis)
Date: Mon, 04 Apr 2011 23:40:33 +0200
Subject: [Python-Dev] Supporting Visual Studio 2010
In-Reply-To: <AANLkTimk52ipfYg=H0=Y8D_YQvMxa09YsOystqsA+ANT@mail.gmail.com>
References: <AANLkTimk52ipfYg=H0=Y8D_YQvMxa09YsOystqsA+ANT@mail.gmail.com>
Message-ID: <4D9A3AD1.3000403@v.loewis.de>

On 04.04.2011 22:38, Brian Curtin wrote:
> Would it be reasonable to begin supporting Visual Studio 2010 for
> Windows builds of 3.3?

Interesting question. The following concerns have played a role in the
past:
- depending on the timing of the next VS release, Python may actually
  want to skip VS 2010, and move right to VS 2012 (say).
- users have expressed concerns that they constantly need to upgrade
  VS releases when developing for Python. With VS Express, that concern
  may be reduced - but you still need to buy a full license if you want
  to support AMD64.
- users have also expressed concerns that old VS versions become
  unavailable; of course, these are different users (since the first
  ones have already bought copies of VS 2008). The counter-argument is
  that you can still get cheap copies on Ebay, but that may be a red
  herring.
- every time this comes up, people also suggest that we should stop
  building with VS, and use gcc in the first place.

> Does it make sense to start this
> process now for a change to 2010?

I'll abstain from a vote here, and I think it essentially comes down to
voting (or somebody putting the foot down saying "I want this now",
which really was the way it worked the last time(s)).

Somebody would need to take charge of this, and fix all the issues that
come up: incompatibilities, generation of backwards-compatible project
files, MSI packaging, getting licenses to buildbot operators.

So if you want to lead this, and the votes are generally in favor,
go ahead. Be prepared to do this *again* before the 3.3 release when
switching to the next VS release (and yes, Microsoft's timing had
been so unfortunate in the past that such a switch would have occurred
just before the first beta release of Python).

> If it's not feasible to release 3.3 from a 2010 build, when might we be
> able to make the change?

If we don't switch for 3.3, we'll definitely switch to VS 2012.

> I'm willing to do the work on this, but I just want to make sure it's a
> worthwhile effort.

See above: it may or may not.

Regards,
Martin

From solipsis at pitrou.net  Tue Apr  5 00:21:55 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Tue, 5 Apr 2011 00:21:55 +0200
Subject: [Python-Dev] Supporting Visual Studio 2010
References: <AANLkTimk52ipfYg=H0=Y8D_YQvMxa09YsOystqsA+ANT@mail.gmail.com>
	<4D9A3AD1.3000403@v.loewis.de>
Message-ID: <20110405002155.3ded22cc@pitrou.net>

On Mon, 04 Apr 2011 23:40:33 +0200
"Martin v. L?wis" <martin at v.loewis.de> wrote:
> - users have expressed concerns that they constantly need to upgrade
>   VS releases when developing for Python.

Isn't that kind of a misguided argument? It's not Python who decides the
lifecycle of MSVC releases, it's Microsoft. We can't be blamed for the
churn.

If getting old (Microsoft-unsupported) MSVC releases is difficult, then
I think switching to the newest MSVC as soon as possible is the best
strategy, since it minimizes the annoyance for people wanting to build
extensions several years after a release is made.

Regards

Antoine.



From fuzzyman at voidspace.org.uk  Tue Apr  5 00:43:29 2011
From: fuzzyman at voidspace.org.uk (Michael Foord)
Date: Mon, 04 Apr 2011 23:43:29 +0100
Subject: [Python-Dev] Supporting Visual Studio 2010
In-Reply-To: <20110405002155.3ded22cc@pitrou.net>
References: <AANLkTimk52ipfYg=H0=Y8D_YQvMxa09YsOystqsA+ANT@mail.gmail.com>	<4D9A3AD1.3000403@v.loewis.de>
	<20110405002155.3ded22cc@pitrou.net>
Message-ID: <4D9A4991.9090800@voidspace.org.uk>

On 04/04/2011 23:21, Antoine Pitrou wrote:
> On Mon, 04 Apr 2011 23:40:33 +0200
> "Martin v. L?wis"<martin at v.loewis.de>  wrote:
>> - users have expressed concerns that they constantly need to upgrade
>>    VS releases when developing for Python.
> Isn't that kind of a misguided argument? It's not Python who decides the
> lifecycle of MSVC releases, it's Microsoft. We can't be blamed for the
> churn.
>
> If getting old (Microsoft-unsupported) MSVC releases is difficult, then
> I think switching to the newest MSVC as soon as possible is the best
> strategy, since it minimizes the annoyance for people wanting to build
> extensions several years after a release is made.

Won't that still be an issue despite the stable ABI? Extensions on 
Windows should be linked to the same version of MSVCRT used to compile 
Python - and every time we switch version of Visual Studio it is usually 
accompanied by a switch in MSVCRT version. So for C extensions compiled 
with a specific version of Python will need to be recompiled for later 
versions of Python, even if they only use the stable ABI, if the newer 
version of Python is compiled against a different MSVCRT. (?)

This would seem to circumvent one of the core use-cases of the stable 
ABI which was not needing to recompile extensions for new versions of 
Python. Of course I could be completely wrong about all this.

All the best,

Michael
> Regards
>
> Antoine.
>
>


-- 
http://www.voidspace.org.uk/

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing http://www.sqlite.org/different.html


From scott+python-dev at scottdial.com  Tue Apr  5 01:12:12 2011
From: scott+python-dev at scottdial.com (Scott Dial)
Date: Mon, 04 Apr 2011 19:12:12 -0400
Subject: [Python-Dev] Supporting Visual Studio 2010
In-Reply-To: <4D9A4991.9090800@voidspace.org.uk>
References: <AANLkTimk52ipfYg=H0=Y8D_YQvMxa09YsOystqsA+ANT@mail.gmail.com>	<4D9A3AD1.3000403@v.loewis.de>	<20110405002155.3ded22cc@pitrou.net>
	<4D9A4991.9090800@voidspace.org.uk>
Message-ID: <4D9A504C.7080407@scottdial.com>

On 4/4/2011 6:43 PM, Michael Foord wrote:
> Won't that still be an issue despite the stable ABI? Extensions on
> Windows should be linked to the same version of MSVCRT used to compile
> Python - and every time we switch version of Visual Studio it is usually
> accompanied by a switch in MSVCRT version.

My understanding (but I haven't looked closely) was that the stable ABI
specifically excluded anything that would expose a problem due to a CRT
mismatch -- making this a moot point. I'm sure Martin will correct me if
I am wrong.

-Scott

-- 
Scott Dial
scott at scottdial.com
scodial at cs.indiana.edu

From tjreedy at udel.edu  Tue Apr  5 01:27:25 2011
From: tjreedy at udel.edu (Terry Reedy)
Date: Mon, 04 Apr 2011 19:27:25 -0400
Subject: [Python-Dev] Policy for making changes to the AST
In-Reply-To: <6C7ABA8B4E309440B857D74348836F2E1503570D@TK5EX14MBXC133.redmond.corp.microsoft.com>
References: <AANLkTimuKcuK3jN-vzLU9QPiXpOEPypUp4WQTzTogoVN@mail.gmail.com>	<BANLkTin5C+gGxo8gtsh=QRnb-EVBAQEvZw@mail.gmail.com>	<4D9A0210.4000406@voidspace.org.uk>	<BANLkTimRGz8vDHG63TG9qEg_gm7g8b9RYQ@mail.gmail.com>	<ind7p1$fvn$1@dough.gmane.org>
	<6C7ABA8B4E309440B857D74348836F2E1503570D@TK5EX14MBXC133.redmond.corp.microsoft.com>
Message-ID: <indk4s$kv8$1@dough.gmane.org>

On 4/4/2011 4:05 PM, Dino Viehland wrote:

> "reduce" to the core DLR nodes on-demand).  We already do a huge amount of
> manipulation of those ASTs from optimizations (constant folding being the primary
> one) to re-writing them completely for things like generators or sys.settrace support and
> other optimizations like runtime optimized fast exception support.  But our ASTs are

I meant to add that doing optimization (and other manipulations) with 
AST would also make it easier to borrow from other implementations.

-- 
Terry Jan Reedy


From brett at python.org  Tue Apr  5 01:46:35 2011
From: brett at python.org (Brett Cannon)
Date: Mon, 4 Apr 2011 16:46:35 -0700
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
 Compatibility Requirements
Message-ID: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>

At both the VM and language summits at PyCon this year, the issue of
compatibility of the stdlib amongst the various VMs came up. Two issues came
about in regards to modules that use C code. One is that code that comes in
only as C code sucks for all other VMs that are not CPython since they all
end up having to re-implement that module themselves. Two is that modules
that have an accelerator module (e.g., heapq, warnings, etc.) can end up
with compatibility options (sorry, Raymond, for picking on heapq, but it was
what bit the PyPy people most recently =).

In light of all of this, here is a draft PEP to more clearly state the policy
for the stdlib when it comes to C code. Since this has come up before and
this was discussed so much at the summits I have gone ahead and checked this
in so that even if this PEP gets rejected there can be a written record as
to why.

And before anyone asks, I have already run this past the lead devs of PyPy,
Jython, and IronPython and they all support what this PEP proposes. And with
the devs of the other VMs gaining push privileges there shouldn't be an
added developer burden on everyone to make this PEP happen.

==========================================================
PEP: 399
Title: Pure Python/C Accelerator Module Compatibility Requirements
Version: $Revision: 88219 $
Last-Modified: $Date: 2011-01-27 13:47:00 -0800 (Thu, 27 Jan 2011) $
Author: Brett Cannon <brett at python.org>
Status: Draft
Type: Informational
Content-Type: text/x-rst
Created: 04-Apr-2011
Python-Version: 3.3
Post-History:

Abstract
========

The Python standard library under CPython contains various instances
of modules implemented in both pure Python and C. This PEP requires
that in these instances both the Python and C code *must* be
semantically identical (except in cases where implementation details
of a VM prevent it entirely). It is also required that new C-based
modules lacking a pure Python equivalent implementation get special
permission to be added to the standard library.


Rationale
=========

Python has grown beyond the CPython virtual machine (VM). IronPython_,
Jython_, and PyPy_ are all currently viable alternatives to the
CPython VM. This VM ecosystem that has sprung up around the Python
programming language has led to Python being used in many different
areas where CPython cannot be used, e.g., Jython allowing Python to be
used in Java applications.

A problem all of the VMs other than CPython face is handling modules
from the standard library that are implemented in C. Since they do not
typically support the entire `C API of Python`_ they are unable to use
the code used to create the module. Often times this leads these other
VMs to either re-implement the modules in pure Python or in the
programming language used to implement the VM (e.g., in C# for
IronPython). This duplication of effort between CPython, PyPy, Jython,
and IronPython is extremely unfortunate, as implementing a module *at
least* in pure Python would help mitigate it.

The purpose of this PEP is to minimize this duplicate effort by
mandating that all new modules added to Python's standard library
*must* have a pure Python implementation *unless* special dispensation
is given. This makes sure that a module in the stdlib is available to
all VMs and not just to CPython.

Re-implementing parts (or all) of a module in C (in the case
of CPython) is still allowed for performance reasons, but any such
accelerated code must semantically match the pure Python equivalent to
prevent divergence. To accomplish this, the pure Python and C code must
be thoroughly tested with the *same* test suite to verify compliance.
This is to prevent users from accidentally relying
on semantics that are specific to the C code and are not reflected in
the pure Python implementation that other VMs rely upon, e.g., in
CPython 3.2.0, ``heapq.heappop()`` raises different exceptions
depending on whether the accelerated C code is used or not::

    from test.support import import_fresh_module

    c_heapq = import_fresh_module('heapq', fresh=['_heapq'])
    py_heapq = import_fresh_module('heapq', blocked=['_heapq'])


    class Spam:
        """Tester class which defines no other magic methods but
        __len__()."""
        def __len__(self):
            return 0


    try:
        c_heapq.heappop(Spam())
    except TypeError:
        # "heap argument must be a list"
        pass

    try:
        py_heapq.heappop(Spam())
    except AttributeError:
        # "'Foo' object has no attribute 'pop'"
        pass

This kind of divergence is a problem for users as they unwittingly
write code that is CPython-specific. This is also an issue for other
VM teams as they have to deal with bug reports from users thinking
that they incorrectly implemented the module when in fact it was
caused by an untested case.


Details
=======

Starting in Python 3.3, any modules added to the standard library must
have a pure Python implementation. This rule can only be ignored if
the Python development team grants a special exemption for the module.
Typically the exemption would be granted only when a module wraps a
specific C-based library (e.g., sqlite3_). In granting an exemption it
will be recognized that the module will most likely be considered
exclusive to CPython and not part of Python's standard library that
other VMs are expected to support. Usage of ``ctypes`` to provide an
API for a C library will continue to be frowned upon as ``ctypes``
lacks compiler guarantees that C code typically relies upon to prevent
certain errors from occurring (e.g., API changes).

Even though a pure Python implementation is mandated by this PEP, it
does not preclude the use of a companion acceleration module. If an
acceleration module is provided it is to be named the same as the
module it is accelerating with an underscore attached as a prefix,
e.g., ``_warnings`` for ``warnings``. The common pattern to access
the accelerated code from the pure Python implementation is to import
it with an ``import *``, e.g., ``from _warnings import *``. This is
typically done at the end of the module to allow it to overwrite
specific Python objects with their accelerated equivalents. This kind
of import can also be done before the end of the module when needed,
e.g., an accelerated base class is provided but is then subclassed by
Python code. This PEP does not mandate that pre-existing modules in
the stdlib that lack a pure Python equivalent gain such a module. But
if people do volunteer to provide and maintain a pure Python
equivalent (e.g., the PyPy team volunteering their pure Python
implementation of the ``csv`` module and maintaining it) then such
code will be accepted.
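
A minimal sketch of this layout, using a hypothetical ``spam`` module
and its ``_spam`` accelerator (the names are illustrative only)::

    # spam.py -- the pure Python implementation, usable by any VM
    def eggs(n):
        """Return a list containing n copies of 'spam'."""
        return ["spam"] * n

    # At the end of the module, overwrite selected objects with their
    # accelerated equivalents when the CPython extension is available.
    try:
        from _spam import *
    except ImportError:
        pass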

Any accelerated code must be semantically identical to the pure Python
implementation. The only time any semantics are allowed to be
different is when technical details of the VM providing the
accelerated code prevent matching semantics from being possible, e.g.,
a class being a ``type`` when implemented in C. The semantic
equivalence requirement also dictates that no public API be provided
in accelerated code that does not exist in the pure Python code.
Without this requirement people could accidentally come to rely on a
detail in the accelerated code which is not made available to other VMs
that use the pure Python implementation. To help verify that the
contract of semantic equivalence is being met, a module must be tested
both with and without its accelerated code as thoroughly as possible.

As an example, to write tests which exercise both the pure Python and
C accelerated versions of a module, a basic idiom can be followed::

    import collections.abc
    from test.support import import_fresh_module, run_unittest
    import unittest

    c_heapq = import_fresh_module('heapq', fresh=['_heapq'])
    py_heapq = import_fresh_module('heapq', blocked=['_heapq'])


    class ExampleTest(unittest.TestCase):

        def test_heappop_exc_for_non_MutableSequence(self):
            # Raise TypeError when heap is not a
            # collections.abc.MutableSequence.
            class Spam:
                """Test class lacking many ABC-required methods
                (e.g., pop())."""
                def __len__(self):
                    return 0

            heap = Spam()
            self.assertFalse(isinstance(heap,
                                collections.abc.MutableSequence))
            with self.assertRaises(TypeError):
                self.heapq.heappop(heap)


    class AcceleratedExampleTest(ExampleTest):

        """Test using the acclerated code."""

        heapq = c_heapq


    class PyExampleTest(ExampleTest):

        """Test with just the pure Python code."""

        heapq = py_heapq


    def test_main():
        run_unittest(AcceleratedExampleTest, PyExampleTest)


    if __name__ == '__main__':
        test_main()

Thoroughness of the tests can be verified using coverage measurements
with branch coverage on the pure Python code, confirming that all
possible scenarios are exercised both with and without the accelerator
code.
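
For instance, a rough sketch of driving such a measurement with
coverage.py (assuming it is installed, and that the heapq tests follow
the idiom shown above)::

    import coverage

    cov = coverage.coverage(branch=True)  # enable branch coverage
    cov.start()

    from test import test_heapq  # imports heapq.py under measurement
    test_heapq.test_main()       # runs the accelerated and pure Python tests

    cov.stop()
    cov.report()  # inspect the branch coverage reported for Lib/heapq.py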


Copyright
=========

This document has been placed in the public domain.


.. _IronPython: http://ironpython.net/
.. _Jython: http://www.jython.org/
.. _PyPy: http://pypy.org/
.. _C API of Python: http://docs.python.org/py3k/c-api/index.html
.. _sqlite3: http://docs.python.org/py3k/library/sqlite3.html

From fuzzyman at voidspace.org.uk  Tue Apr  5 01:48:20 2011
From: fuzzyman at voidspace.org.uk (Michael Foord)
Date: Tue, 05 Apr 2011 00:48:20 +0100
Subject: [Python-Dev] Supporting Visual Studio 2010
In-Reply-To: <4D9A504C.7080407@scottdial.com>
References: <AANLkTimk52ipfYg=H0=Y8D_YQvMxa09YsOystqsA+ANT@mail.gmail.com>	<4D9A3AD1.3000403@v.loewis.de>	<20110405002155.3ded22cc@pitrou.net>
	<4D9A4991.9090800@voidspace.org.uk>
	<4D9A504C.7080407@scottdial.com>
Message-ID: <4D9A58C4.1090200@voidspace.org.uk>

On 05/04/2011 00:12, Scott Dial wrote:
> On 4/4/2011 6:43 PM, Michael Foord wrote:
>> Won't that still be an issue despite the stable ABI? Extensions on
>> Windows should be linked to the same version of MSVCRT used to compile
>> Python - and every time we switch version of Visual Studio it is usually
>> accompanied by a switch in MSVCRT version.
> My understanding (but I haven't looked closely) was that the stable ABI
> specifically excluded anything that would expose a problem due to a CRT
> mismatch -- making this a moot point. I'm sure Martin will correct me if
> I am wrong.
>
Ah, wouldn't surprise me at all to know he'd already thought of that. :-)

Michael

> -Scott
>


-- 
http://www.voidspace.org.uk/

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing http://www.sqlite.org/different.html


From eltoder at gmail.com  Tue Apr  5 04:25:49 2011
From: eltoder at gmail.com (Eugene Toder)
Date: Mon, 4 Apr 2011 22:25:49 -0400
Subject: [Python-Dev] Policy for making changes to the AST
In-Reply-To: <BANLkTimRGz8vDHG63TG9qEg_gm7g8b9RYQ@mail.gmail.com>
References: <AANLkTimuKcuK3jN-vzLU9QPiXpOEPypUp4WQTzTogoVN@mail.gmail.com>
	<BANLkTin5C+gGxo8gtsh=QRnb-EVBAQEvZw@mail.gmail.com>
	<4D9A0210.4000406@voidspace.org.uk>
	<BANLkTimRGz8vDHG63TG9qEg_gm7g8b9RYQ@mail.gmail.com>
Message-ID: <BANLkTi=EQNukcPXEdYa0PHg3nAy74vB6Cw@mail.gmail.com>

> Ok, so it sounds like ast is *not* limited to CPython? That makes it
> harder to justify changing it just so as to ease the compilation
> process in CPython (as opposed to add new language features).

The changes above are not just for CPython, but to simplify processing
of AST in general, by reducing redundancy and separating syntax from
semantics. It just happens that the current structure of AST doesn't
allow important cases of constant folding at all, so I had to make
*some* changes. However, if the goal is to preserve the current AST as
much as possible, I can instead make a very simple completely backward
compatible change -- add one new node type that will never be present
in unoptimized AST. This is much less elegant and will add more cruft
to cpython's code (rather than removing it like the current patch
does), but it will work.

Eugene

From eltoder at gmail.com  Tue Apr  5 04:27:17 2011
From: eltoder at gmail.com (Eugene Toder)
Date: Mon, 4 Apr 2011 22:27:17 -0400
Subject: [Python-Dev] Policy for versions of system python
In-Reply-To: <20110404191949.07e13129@pitrou.net>
References: <BANLkTim_0z8R0s+FyhgC5dGH4-pX4qJviA@mail.gmail.com>
	<BANLkTinGGMvQw17Uex5AayGBCRe6Jv9=cQ@mail.gmail.com>
	<20110404191949.07e13129@pitrou.net>
Message-ID: <BANLkTi=uhiJSPHcANBe=thP0afTLWbwXtQ@mail.gmail.com>

> To answer Eugene's question, there's no official policy but the
> comment at the top of Python/makeopcodetargets.py can indeed serve as
> an useful guideline. I wonder if we still have buildbots with 2.3 as
> the system Python, by the way.

Ok, I'll use 2.3 as my target. Thanks.

Eugene

From stefan_ml at behnel.de  Tue Apr  5 10:26:45 2011
From: stefan_ml at behnel.de (Stefan Behnel)
Date: Tue, 05 Apr 2011 10:26:45 +0200
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
 Compatibility Requirements
In-Reply-To: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
Message-ID: <inejo5$45u$1@dough.gmane.org>

Brett Cannon, 05.04.2011 01:46:
> At both the VM and language summits at PyCon this year, the issue of
> compatibility of the stdlib amongst the various VMs came up. Two issues came
> about in regards to modules that use C code. One is that code that comes in
> only as C code sucks for all other VMs that are not CPython since they all
> end up having to re-implement that module themselves. Two is that modules
> that have an accelerator module (e.g., heapq, warnings, etc.) can end up
> with compatibility options (sorry, Raymond, for picking on heapq, but it was
> what bit the PyPy people most recently =).
>
> In light of all of this, here is a draft PEP to more clearly state the policy
> for the stdlib when it comes to C code. Since this has come up before and
> this was discussed so much at the summits I have gone ahead and checked this
> in so that even if this PEP gets rejected there can be a written record as
> to why.
>
> And before anyone asks, I have already run this past the lead devs of PyPy,
> Jython, and IronPython and they all support what this PEP proposes.

We recently had the discussion about reimplementing stdlib C modules in 
Cython. Accelerator modules are the obvious first step here, as they could 
be implemented in Python and compiled with Cython, instead of actually 
writing them in C in the first place. Wouldn't this be worth mentioning in 
the PEP?

Stefan


From martin at v.loewis.de  Tue Apr  5 11:46:15 2011
From: martin at v.loewis.de (Martin v. Löwis)
Date: Tue, 05 Apr 2011 11:46:15 +0200
Subject: [Python-Dev] Policy for making changes to the AST
In-Reply-To: <BANLkTimRGz8vDHG63TG9qEg_gm7g8b9RYQ@mail.gmail.com>
References: <AANLkTimuKcuK3jN-vzLU9QPiXpOEPypUp4WQTzTogoVN@mail.gmail.com>	<BANLkTin5C+gGxo8gtsh=QRnb-EVBAQEvZw@mail.gmail.com>	<4D9A0210.4000406@voidspace.org.uk>
	<BANLkTimRGz8vDHG63TG9qEg_gm7g8b9RYQ@mail.gmail.com>
Message-ID: <4D9AE4E7.60004@v.loewis.de>

> Ok, so it sounds like ast is *not* limited to CPython? That makes it
> harder to justify changing it just so as to ease the compilation
> process in CPython (as opposed to add new language features).

I propose a different view: the AST *is* implementation specific,
although implementations are certainly encouraged to use a similar AST
if they provide an AST module at all.

Applications of it then explicitly need to be ported to each version
of each Python implementation that supports an AST module. If the
ASTs are similar, this porting will hopefully be easy.

The only alternative I can see is to freeze the AST structure, allowing
for extensions at best. I don't think any of the implementations are in
a state where such an approach is feasible.

Regards,
Martin


From martin at v.loewis.de  Tue Apr  5 11:49:49 2011
From: martin at v.loewis.de (Martin v. Löwis)
Date: Tue, 05 Apr 2011 11:49:49 +0200
Subject: [Python-Dev] Policy for making changes to the AST
In-Reply-To: <BANLkTinfJq8w5HyBMfONLQKxPkS36FLd8w@mail.gmail.com>
References: <AANLkTimuKcuK3jN-vzLU9QPiXpOEPypUp4WQTzTogoVN@mail.gmail.com>	<BANLkTin5C+gGxo8gtsh=QRnb-EVBAQEvZw@mail.gmail.com>	<4D9A0210.4000406@voidspace.org.uk>	<BANLkTimRGz8vDHG63TG9qEg_gm7g8b9RYQ@mail.gmail.com>	<CFB1DD2E-20FC-43C6-8A71-3500FC0E5E29@twistedmatrix.com>
	<BANLkTinfJq8w5HyBMfONLQKxPkS36FLd8w@mail.gmail.com>
Message-ID: <4D9AE5BD.1030407@v.loewis.de>

> Add py.test as an application that uses the AST to support Jython,
> PyPy and CPython in a portable way.  I always assumed AST was created
> *because* bytecode was too CPython specific (but then I've never
> implemented a language).

Historically, that's incorrect. The AST structure was introduced to
simplify the implementation of (C)Python. Exposing it to Python modules
was done primarily because it's neat and was easy to do - not because
any specific use was expected, and certainly not as the primary
motivation for having an AST. It was clear (to me, at least) back then
that the AST will change over time (hence the module was originally
called _ast).

Regards,
Martin

From martin at v.loewis.de  Tue Apr  5 11:55:40 2011
From: martin at v.loewis.de (Martin v. Löwis)
Date: Tue, 05 Apr 2011 11:55:40 +0200
Subject: [Python-Dev] Supporting Visual Studio 2010
In-Reply-To: <20110405002155.3ded22cc@pitrou.net>
References: <AANLkTimk52ipfYg=H0=Y8D_YQvMxa09YsOystqsA+ANT@mail.gmail.com>	<4D9A3AD1.3000403@v.loewis.de>
	<20110405002155.3ded22cc@pitrou.net>
Message-ID: <4D9AE71C.9050108@v.loewis.de>

On 05.04.2011 00:21, Antoine Pitrou wrote:
> On Mon, 04 Apr 2011 23:40:33 +0200
> "Martin v. L?wis" <martin at v.loewis.de> wrote:
>> - users have expressed concerns that they constantly need to upgrade
>>   VS releases when developing for Python.
> 
> Isn't that kind of a misguided argument? It's not Python who decides the
> lifecycle of MSVC releases, it's Microsoft. We can't be blamed for the
> churn.

But we *can* be blamed for closely following the MS release cycle (if
we actually did that). For Python 3.2, we resisted.

> If getting old (Microsoft-unsupported) MSVC releases is difficult, then
> I think switching to the newest MSVC as soon as possible is the best
> strategy, since it minimizes the annoyance for people wanting to build
> extensions several years after a release is made.

OTOH, the very same people will have to buy licenses for all MSVC
releases. If we manage to skip some of them, the zoo of products you
need to install to support Python gets smaller.

Of course, if you use the stable ABI, going forward, you can decouple
from Python's product management.

Regards,
Martin

From martin at v.loewis.de  Tue Apr  5 11:58:13 2011
From: martin at v.loewis.de (Martin v. Löwis)
Date: Tue, 05 Apr 2011 11:58:13 +0200
Subject: [Python-Dev] Supporting Visual Studio 2010
In-Reply-To: <4D9A4991.9090800@voidspace.org.uk>
References: <AANLkTimk52ipfYg=H0=Y8D_YQvMxa09YsOystqsA+ANT@mail.gmail.com>	<4D9A3AD1.3000403@v.loewis.de>	<20110405002155.3ded22cc@pitrou.net>
	<4D9A4991.9090800@voidspace.org.uk>
Message-ID: <4D9AE7B5.5060900@v.loewis.de>

> Won't that still be an issue despite the stable ABI? Extensions on
> Windows should be linked to the same version of MSVCRT used to compile
> Python

Not if they use the stable ABI. There still might be issues if you
mix CRTs, but none related to the Python ABI - in particular, none
of those crashing conditions can arise from the stable ABI.

> This would seem to circumvent one of the core use-cases of the stable
> ABI which was not needing to recompile extensions for new versions of
> Python. Of course I could be completely wrong about all this.

Not completely, but slightly (I hope).

Regards,
Martin

From victor.stinner at haypocalc.com  Tue Apr  5 12:12:22 2011
From: victor.stinner at haypocalc.com (Victor Stinner)
Date: Tue, 05 Apr 2011 12:12:22 +0200
Subject: [Python-Dev] [Python-checkins] cpython: Issue #11707: Fast C
 version of functools.cmp_to_key()
In-Reply-To: <E1Q72eE-0004mi-9P@dinsdale.python.org>
References: <E1Q72eE-0004mi-9P@dinsdale.python.org>
Message-ID: <1301998342.6838.1.camel@marge>

I don't think that the following change conforms to the PEP 399: there
is only a test for the C version.

Victor

On Tuesday 05 April 2011 at 11:34 +0200, raymond.hettinger wrote:
> http://hg.python.org/cpython/rev/a03fb2fc3ed8
> changeset:   69150:a03fb2fc3ed8
> user:        Raymond Hettinger <python at rcn.com>
> date:        Tue Apr 05 02:33:54 2011 -0700
> summary:
>   Issue #11707: Fast C version of functools.cmp_to_key()
> 
> diff --git a/Lib/test/test_functools.py b/Lib/test/test_functools.py
> --- a/Lib/test/test_functools.py
> +++ b/Lib/test/test_functools.py
> @@ -435,18 +435,81 @@
>          self.assertEqual(self.func(add, d), "".join(d.keys()))
>  
>  class TestCmpToKey(unittest.TestCase):
> +
>      def test_cmp_to_key(self):
> +        def cmp1(x, y):
> +            return (x > y) - (x < y)
> +        key = functools.cmp_to_key(cmp1)
> +        self.assertEqual(key(3), key(3))
> +        self.assertGreater(key(3), key(1))
> +        def cmp2(x, y):
> +            return int(x) - int(y)
> +        key = functools.cmp_to_key(cmp2)
> +        self.assertEqual(key(4.0), key('4'))
> +        self.assertLess(key(2), key('35'))
> +
> +    def test_cmp_to_key_arguments(self):
> +        def cmp1(x, y):
> +            return (x > y) - (x < y)
> +        key = functools.cmp_to_key(mycmp=cmp1)
> +        self.assertEqual(key(obj=3), key(obj=3))
> +        self.assertGreater(key(obj=3), key(obj=1))
> +        with self.assertRaises((TypeError, AttributeError)):
> +            key(3) > 1    # rhs is not a K object
> +        with self.assertRaises((TypeError, AttributeError)):
> +            1 < key(3)    # lhs is not a K object
> +        with self.assertRaises(TypeError):
> +            key = functools.cmp_to_key()             # too few args
> +        with self.assertRaises(TypeError):
> +            key = functools.cmp_to_key(cmp1, None)   # too many args
> +        key = functools.cmp_to_key(cmp1)
> +        with self.assertRaises(TypeError):
> +            key()                                    # too few args
> +        with self.assertRaises(TypeError):
> +            key(None, None)                          # too many args
> +
> +    def test_bad_cmp(self):
> +        def cmp1(x, y):
> +            raise ZeroDivisionError
> +        key = functools.cmp_to_key(cmp1)
> +        with self.assertRaises(ZeroDivisionError):
> +            key(3) > key(1)
> +
> +        class BadCmp:
> +            def __lt__(self, other):
> +                raise ZeroDivisionError
> +        def cmp1(x, y):
> +            return BadCmp()
> +        with self.assertRaises(ZeroDivisionError):
> +            key(3) > key(1)
> +
> +    def test_obj_field(self):
> +        def cmp1(x, y):
> +            return (x > y) - (x < y)
> +        key = functools.cmp_to_key(mycmp=cmp1)
> +        self.assertEqual(key(50).obj, 50)
> +
> +    def test_sort_int(self):
>          def mycmp(x, y):
>              return y - x
>          self.assertEqual(sorted(range(5), key=functools.cmp_to_key(mycmp)),
>                           [4, 3, 2, 1, 0])
>  
> +    def test_sort_int_str(self):
> +        def mycmp(x, y):
> +            x, y = int(x), int(y)
> +            return (x > y) - (x < y)
> +        values = [5, '3', 7, 2, '0', '1', 4, '10', 1]
> +        values = sorted(values, key=functools.cmp_to_key(mycmp))
> +        self.assertEqual([int(value) for value in values],
> +                         [0, 1, 1, 2, 3, 4, 5, 7, 10])
> +
>      def test_hash(self):
>          def mycmp(x, y):
>              return y - x
>          key = functools.cmp_to_key(mycmp)
>          k = key(10)
> -        self.assertRaises(TypeError, hash(k))
> +        self.assertRaises(TypeError, hash, k) 




From ncoghlan at gmail.com  Tue Apr  5 14:01:58 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 5 Apr 2011 22:01:58 +1000
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
 Compatibility Requirements
In-Reply-To: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
Message-ID: <BANLkTi=7bajBfuvxCrC2Kn82EKj3PWMiBg@mail.gmail.com>

On Tue, Apr 5, 2011 at 9:46 AM, Brett Cannon <brett at python.org> wrote:
>     try:
>         c_heapq.heappop(Spam())
>     except TypeError:
>         # "heap argument must be a list"
>         pass
>
>     try:
>         py_heapq.heappop(Spam())
>     except AttributeError:
>         # "'Spam' object has no attribute 'pop'"
>         pass
>
> This kind of divergence is a problem for users as they unwittingly
> write code that is CPython-specific. This is also an issue for other
> VM teams as they have to deal with bug reports from users thinking
> that they incorrectly implemented the module when in fact it was
> caused by an untested case.

While I agree with the PEP in principle, I disagree with the way this
example is written. Guido has stated in the past that code simply
*cannot* rely on TypeError being consistently thrown instead of
AttributeError (or vice-versa) when it comes to duck-typing. Code that
cares which of the two is thrown is wrong.

However, there actually *is* a significant semantic discrepancy in the
heapq case, which is that py_heapq is duck-typed, while c_heapq is
not:

>>> from test.support import import_fresh_module
>>> c_heapq = import_fresh_module('heapq', fresh=['_heapq'])
>>> py_heapq = import_fresh_module('heapq', blocked=['_heapq'])
>>> from collections import UserList
>>> class Seq(UserList): pass
...
>>> c_heapq.heappop(UserList())
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: heap argument must be a list
>>> py_heapq.heappop(UserList())
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/ncoghlan/devel/py3k/Lib/heapq.py", line 140, in heappop
    lastelt = heap.pop()    # raises appropriate IndexError if heap is empty
  File "/home/ncoghlan/devel/py3k/Lib/collections/__init__.py", line 848, in pop
    def pop(self, i=-1): return self.data.pop(i)
IndexError: pop from empty list

Cheers,
Nick.

P.S. The reason I was bugging Guido to answer the TypeError vs
AttributeError question in the first place was to find out whether or
not I needed to get rid of the following gross inconsistency in the
behaviour of the with statement relative to other language constructs:

>>> 1()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'int' object is not callable
>>> with 1: pass
...
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'int' object has no attribute '__exit__'



Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From exarkun at twistedmatrix.com  Tue Apr  5 14:48:50 2011
From: exarkun at twistedmatrix.com (exarkun at twistedmatrix.com)
Date: Tue, 05 Apr 2011 12:48:50 -0000
Subject: [Python-Dev] Supporting Visual Studio 2010
In-Reply-To: <4D9AE71C.9050108@v.loewis.de>
References: <AANLkTimk52ipfYg=H0=Y8D_YQvMxa09YsOystqsA+ANT@mail.gmail.com>
	<4D9A3AD1.3000403@v.loewis.de> <20110405002155.3ded22cc@pitrou.net>
	<4D9AE71C.9050108@v.loewis.de>
Message-ID: <20110405124850.1992.307167274.divmod.xquotient.209@localhost.localdomain>

On 09:55 am, martin at v.loewis.de wrote:
>On 05.04.2011 00:21, Antoine Pitrou wrote:
>>On Mon, 04 Apr 2011 23:40:33 +0200
>>"Martin v. L?wis" <martin at v.loewis.de> wrote:
>>>- users have expressed concerns that they constantly need to upgrade
>>>   VS releases when developing for Python.
>>
>>Isn't that kind of a misguided argument? It's not Python who decides 
>>the
>>lifecycle of MSVC releases, it's Microsoft. We can't be blamed for the
>>churn.
>
>But we *can* be blamed for closely following the MS release cycle (if
>we actually did that). For Python 3.2, we resisted.
>>If getting old (Microsoft-unsupported) MSVC releases is difficult, 
>>then
>>I think switching to the newest MSVC as soon as possible is the best
>>strategy, since it minimizes the annoyance for people wanting to build
>>extensions several years after a release is made.
>
>OTOH, the very same people will have to buy licenses for all MSVC
>releases. If we manage to skip some of them, the zoo of products you
>need to install to support Python gets smaller.

Recent Visual Studio Express editions are available as free downloads.

Jean-Paul

From exarkun at twistedmatrix.com  Tue Apr  5 14:52:42 2011
From: exarkun at twistedmatrix.com (exarkun at twistedmatrix.com)
Date: Tue, 05 Apr 2011 12:52:42 -0000
Subject: [Python-Dev] Supporting Visual Studio 2010
In-Reply-To: <4D9AE7B5.5060900@v.loewis.de>
References: <AANLkTimk52ipfYg=H0=Y8D_YQvMxa09YsOystqsA+ANT@mail.gmail.com>
	<4D9A3AD1.3000403@v.loewis.de> <20110405002155.3ded22cc@pitrou.net>
	<4D9A4991.9090800@voidspace.org.uk> <4D9AE7B5.5060900@v.loewis.de>
Message-ID: <20110405125242.1992.1543759088.divmod.xquotient.216@localhost.localdomain>

On 09:58 am, martin at v.loewis.de wrote:
>>Won't that still be an issue despite the stable ABI? Extensions on
>>Windows should be linked to the same version of MSVCRT used to compile
>>Python
>
>Not if they use the stable ABI. There still might be issues if you
>mix CRTs, but none related to the Python ABI - in particular, none
>of those crashing conditions can arise from the stable ABI.

Does this mean new versions of distutils let you build_ext with any C 
compiler, instead of enforcing the same compiler as it has done 
previously?  That would be great.

Jean-Paul

From brian.curtin at gmail.com  Tue Apr  5 15:09:04 2011
From: brian.curtin at gmail.com (Brian Curtin)
Date: Tue, 5 Apr 2011 08:09:04 -0500
Subject: [Python-Dev] Supporting Visual Studio 2010
In-Reply-To: <20110405124850.1992.307167274.divmod.xquotient.209@localhost.localdomain>
References: <AANLkTimk52ipfYg=H0=Y8D_YQvMxa09YsOystqsA+ANT@mail.gmail.com>
	<4D9A3AD1.3000403@v.loewis.de> <20110405002155.3ded22cc@pitrou.net>
	<4D9AE71C.9050108@v.loewis.de>
	<20110405124850.1992.307167274.divmod.xquotient.209@localhost.localdomain>
Message-ID: <BANLkTi=i4LfqkW0egOCaKwvS=v0e7=MW6w@mail.gmail.com>

On Tue, Apr 5, 2011 at 07:48, <exarkun at twistedmatrix.com> wrote:

> On 09:55 am, martin at v.loewis.de wrote:
>
>> On 05.04.2011 00:21, Antoine Pitrou wrote:
>>
>>> On Mon, 04 Apr 2011 23:40:33 +0200
>>> "Martin v. L?wis" <martin at v.loewis.de> wrote:
>>>
>>>> - users have expressed concerns that they constantly need to upgrade
>>>>  VS releases when developing for Python.
>>>>
>>>
>>> Isn't that kind of a misguided argument? It's not Python who decides the
>>> lifecycle of MSVC releases, it's Microsoft. We can't be blamed for the
>>> churn.
>>>
>>
>> But we *can* be blamed for closely following the MS release cycle (if
>> we actually did that). For Python 3.2, we resisted.
>>
>>> If getting old (Microsoft-unsupported) MSVC releases is difficult, then
>>> I think switching to the newest MSVC as soon as possible is the best
>>> strategy, since it minimizes the annoyance for people wanting to build
>>> extensions several years after a release is made.
>>>
>>
>> OTOH, the very same people will have to buy licenses for all MSVC
>> releases. If we manage to skip some of them, the zoo of products you
>> need to install to support Python gets smaller.
>>
>
> Recent Visual Studio Express editions are available as free downloads.
>
> Jean-Paul


On top of that, since you and others have asked on IRC: Visual Studio 2010
Express supports x64 compilation if you have the Windows SDK installed
alongside VS2010. No more "support" via registry and config file hacking.

http://msdn.microsoft.com/en-us/library/9yb4317s.aspx

From jimjjewett at gmail.com  Tue Apr  5 15:10:45 2011
From: jimjjewett at gmail.com (Jim Jewett)
Date: Tue, 5 Apr 2011 09:10:45 -0400
Subject: [Python-Dev] clarification: subset vs equality Re:
 [Python-checkins] peps: Draft of PEP 399: Pure Python/C Accelerator Module
 Compatibility Requirements
Message-ID: <BANLkTimZwrh5vK9v9Tcy+VKnCds8mehfTg@mail.gmail.com>

On 4/4/11, brett.cannon <python-checkins at python.org> wrote:
>   Draft of PEP 399: Pure Python/C Accelerator Module Compatibility
> Requirements

> +Abstract
> +========
> +
> +The Python standard library under CPython contains various instances
> +of modules implemented in both pure Python and C. This PEP requires
> +that in these instances both the Python and C code *must* be
> +semantically identical (except in cases where implementation details
> +of a VM prevent it entirely). It is also required that new C-based
> +modules lacking a pure Python equivalent implementation get special
> +permission to be added to the standard library.

I think it is worth stating explicitly that the C version can even be
a strict subset.  It is OK for the accelerated C code to rely on the
common python version; it is just the reverse that is not OK.
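
That is, something along these lines would be fine (hypothetical names,
just to illustrate the subset case):

    # spam.py -- the complete, pure Python API
    def eggs(n):
        return ["spam"] * n

    def ham(n):
        # only exists in Python; the accelerator may rely on it
        return len(eggs(n))

    try:
        from _spam import eggs   # the C module replaces just the hot function
    except ImportError:
        pass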

-jJ

From ncoghlan at gmail.com  Tue Apr  5 15:37:43 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 5 Apr 2011 23:37:43 +1000
Subject: [Python-Dev] Policy for making changes to the AST
In-Reply-To: <4D9AE5BD.1030407@v.loewis.de>
References: <AANLkTimuKcuK3jN-vzLU9QPiXpOEPypUp4WQTzTogoVN@mail.gmail.com>
	<BANLkTin5C+gGxo8gtsh=QRnb-EVBAQEvZw@mail.gmail.com>
	<4D9A0210.4000406@voidspace.org.uk>
	<BANLkTimRGz8vDHG63TG9qEg_gm7g8b9RYQ@mail.gmail.com>
	<CFB1DD2E-20FC-43C6-8A71-3500FC0E5E29@twistedmatrix.com>
	<BANLkTinfJq8w5HyBMfONLQKxPkS36FLd8w@mail.gmail.com>
	<4D9AE5BD.1030407@v.loewis.de>
Message-ID: <BANLkTi=9CvWRFNbOC0MgqCmt6j=Sx9htgA@mail.gmail.com>

On Tue, Apr 5, 2011 at 7:49 PM, "Martin v. Löwis" <martin at v.loewis.de> wrote:
> Historically, that's incorrect. The AST structure was introduced to
> simplify the implementation of (C)Python. Exposing it to Python modules
> was done primarily because it's neat and was easy to do - not because
> any specific use was expected, and certainly not as the primary
> motivation for having an AST. It was clear (to me, at least) back then
> that the AST will change over time (hence the module was originally
> called _ast).

_ast is actually still there under the hood - the pure Python ast
module just adds some nicer tools for working with it.

It's probably worth mentioning the specific non-new-feature-related
changes to the AST that Eugene proposed on the tracker:

1. Making "docstring" an attribute of the Function node rather than
leaving it embedded as the first statement in the suite (this avoids
issues where AST-based constant folding could potentially corrupt the
docstring)
2. Collapsing Num, Str, Bytes, Ellipsis into a single Literal node
type (the handling of those nodes is the same in a lot of cases)
3. Since they're keywords now, pick up True, False, None at the
parsing stage and turn them into instances of the Literal node type,
allowing the current Name-based special casing to be removed.

These are the proposed changes that would be visible to someone using
the ast.PyCF_ONLY_AST flag. Any further changes (i.e. the actual
constant folding) aren't exposed in the AST produced by that flag -
those changes happen as a pre-transformation step in the process of
turning the submitted AST into CPython bytecode.
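
For context, here is roughly what those nodes look like today when you
ask for the AST (a 3.2 session; under the proposal the three literals
below would presumably all come back as Literal nodes instead):

>>> import ast
>>> ast.dump(ast.parse("42", mode="eval"))
'Expression(body=Num(n=42))'
>>> ast.dump(ast.parse("'spam'", mode="eval"))
"Expression(body=Str(s='spam'))"
>>> ast.dump(ast.parse("None", mode="eval"))
"Expression(body=Name(id='None', ctx=Load()))"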

As I said in the tracker item, I actually *like* those 3 changes and
think they streamline the AST (and the associated code) quite nicely.
However, I didn't want to tell Eugene to proceed on that basis without
getting feedback from a wider audience first.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From solipsis at pitrou.net  Tue Apr  5 16:05:55 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Tue, 5 Apr 2011 16:05:55 +0200
Subject: [Python-Dev] Buildbot status
Message-ID: <20110405160555.7de069f5@pitrou.net>


Hello,

For the record, we have 9 stable buildbots, one of which is currently
offline: 3 Windows, 2 OS X, 3 Linux and 1 Solaris.
Paul Moore's XP buildbot is back in the stable stable.
(http://www.python.org/dev/buildbot/all/waterfall?category=3.x.stable)

We also have a new 64-bit FreeBSD 8.2 buildbot donated and managed by
Stefan Krah.
(http://www.python.org/dev/buildbot/all/buildslaves/krah-freebsd)

Regards

Antoine.



From brett at python.org  Tue Apr  5 20:20:48 2011
From: brett at python.org (Brett Cannon)
Date: Tue, 5 Apr 2011 11:20:48 -0700
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
 Compatibility Requirements
In-Reply-To: <inejo5$45u$1@dough.gmane.org>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<inejo5$45u$1@dough.gmane.org>
Message-ID: <BANLkTinOWLNNuBLrQHyx8aGw9n5ofS9Zsg@mail.gmail.com>

On Tue, Apr 5, 2011 at 01:26, Stefan Behnel <stefan_ml at behnel.de> wrote:

> Brett Cannon, 05.04.2011 01:46:
>
>  At both the VM and language summits at PyCon this year, the issue of
>> compatibility of the stdlib amongst the various VMs came up. Two issues
>> came
>> about in regards to modules that use C code. One is that code that comes
>> in
>> only as C code sucks for all other VMs that are not CPython since they all
>> end up having to re-implement that module themselves. Two is that modules
>> that have an accelerator module (e.g., heapq, warnings, etc.) can end up
>> with compatibility options (sorry, Raymond, for picking on heapq, but it
>> was
>> what bit the PyPy people most recently =).
>>
>> In light of all of this, here is a draft PEP to more clearly state the
>> policy
>> for the stdlib when it comes to C code. Since this has come up before and
>> this was discussed so much at the summits I have gone ahead and checked
>> this
>> in so that even if this PEP gets rejected there can be a written record as
>> to why.
>>
>> And before anyone asks, I have already run this past the lead devs of
>> PyPy,
>> Jython, and IronPython and they all support what this PEP proposes.
>>
>
> We recently had the discussion about reimplementing stdlib C modules in
> Cython. Accelerator modules are the obvious first step here, as they could
> be implemented in Python and compiled with Cython, instead of actually
> writing them in C in the first place. Wouldn't this be worth mentioning in
> the PEP?
>

I consider whether Cython is used to be orthogonal to the PEP. If Cython is
actually used and it is deemed necessary, I will update the PEP.

From brett at python.org  Tue Apr  5 20:24:49 2011
From: brett at python.org (Brett Cannon)
Date: Tue, 5 Apr 2011 11:24:49 -0700
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
 Compatibility Requirements
In-Reply-To: <BANLkTi=7bajBfuvxCrC2Kn82EKj3PWMiBg@mail.gmail.com>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<BANLkTi=7bajBfuvxCrC2Kn82EKj3PWMiBg@mail.gmail.com>
Message-ID: <BANLkTimZ1JiO0iCeKgzKV4=SN2MAifD8Gg@mail.gmail.com>

On Tue, Apr 5, 2011 at 05:01, Nick Coghlan <ncoghlan at gmail.com> wrote:

> On Tue, Apr 5, 2011 at 9:46 AM, Brett Cannon <brett at python.org> wrote:
> >     try:
> >         c_heapq.heappop(Spam())
> >     except TypeError:
> >         # "heap argument must be a list"
> >         pass
> >
> >     try:
> >         py_heapq.heappop(Spam())
> >     except AttributeError:
> >         # "'Spam' object has no attribute 'pop'"
> >         pass
> >
> > This kind of divergence is a problem for users as they unwittingly
> > write code that is CPython-specific. This is also an issue for other
> > VM teams as they have to deal with bug reports from users thinking
> > that they incorrectly implemented the module when in fact it was
> > caused by an untested case.
>
> While I agree with the PEP in principle, I disagree with the way this
> example is written. Guido has stated in the past that code simply
> *cannot* rely on TypeError being consistently thrown instead of
> AttributeError (or vice-versa) when it comes to duck-typing. Code that
> cares which of the two is thrown is wrong.
>

Which is unfortunate since the least common base class is Exception. But I can
add a note to the PEP saying that this is the case and change the example.


>
> However, there actually *is* a significant semantic discrepancy in the
> heapq case, which is that py_heapq is duck-typed, while c_heapq is
> not:
>
> >>> from test.support import import_fresh_module
> >>> c_heapq = import_fresh_module('heapq', fresh=['_heapq'])
> >>> py_heapq = import_fresh_module('heapq', blocked=['_heapq'])
> >>> from collections import UserList
> >>> class Seq(UserList): pass
> ...
> >>> c_heapq.heappop(UserList())
> Traceback (most recent call last):
>  File "<stdin>", line 1, in <module>
> TypeError: heap argument must be a list
> >>> py_heapq.heappop(UserList())
> Traceback (most recent call last):
>  File "<stdin>", line 1, in <module>
>  File "/home/ncoghlan/devel/py3k/Lib/heapq.py", line 140, in heappop
>    lastelt = heap.pop()    # raises appropriate IndexError if heap is empty
>  File "/home/ncoghlan/devel/py3k/Lib/collections/__init__.py", line 848, in
> pop
>    def pop(self, i=-1): return self.data.pop(i)
> IndexError: pop from empty list
>
> Cheers,
> Nick.
>
> P.S. The reason I was bugging Guido to answer the TypeError vs
> AttributeError question in the first place was to find out whether or
> not I needed to get rid of the following gross inconsistency in the
> behaviour of the with statement relative to other language constructs:
>
> >>> 1()
> Traceback (most recent call last):
>  File "<stdin>", line 1, in <module>
> TypeError: 'int' object is not callable
> >>> with 1: pass
> ...
> Traceback (most recent call last):
>  File "<stdin>", line 1, in <module>
> AttributeError: 'int' object has no attribute '__exit__'
>
>
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
>

From barry at python.org  Tue Apr  5 20:52:13 2011
From: barry at python.org (Barry Warsaw)
Date: Tue, 5 Apr 2011 14:52:13 -0400
Subject: [Python-Dev] PEP 396, Module Version Numbers
Message-ID: <20110405145213.29f706aa@neurotica.wooz.org>

I just checked in PEP 396, Module Version Numbers.  This is an informational
PEP describing how to specify version numbers using the __version__
attribute.  This has already made one round through distutils-sig so it's time
to post it here.  Comments welcome of course!

Cheers,
-Barry

P.S. Yeah, I know the $Keyword$ strings here are wrong.  I'm not sure what to
do about them though.

PEP: 396
Title: Module Version Numbers
Version: $Revision: 65628 $
Last-Modified: $Date: 2008-08-10 09:59:20 -0400 (Sun, 10 Aug 2008) $
Author: Barry Warsaw <barry at python.org>
Status: Draft
Type: Informational
Content-Type: text/x-rst
Created: 2011-03-16
Post-History: 2011-04-05


Abstract
========

Given that it is useful and common to specify version numbers for
Python modules, and given that different ways of doing this have grown
organically within the Python community, it is useful to establish
standard conventions for module authors to adhere to and reference.
This informational PEP describes best practices for Python module
authors who want to define the version number of their Python module.

Conformance with this PEP is optional; however, other Python tools
(such as ``distutils2`` [1]_) may be adapted to use the conventions
defined here.


User Stories
============

Alice is writing a new module, called ``alice``, which she wants to
share with other Python developers.  ``alice`` is a simple module and
lives in one file, ``alice.py``.  Alice wants to specify a version
number so that her users can tell which version they are using.
Because her module lives entirely in one file, she wants to add the
version number to that file.

Bob has written a module called ``bob`` which he has shared with many
users.  ``bob.py`` contains a version number for the convenience of
his users.  Bob learns about the Cheeseshop [2]_, and adds some simple
packaging using classic distutils so that he can upload *The Bob
Bundle* to the Cheeseshop.  Because ``bob.py`` already specifies a
version number which his users can access programmatically, he wants
the same API to continue to work even though his users now get it from
the Cheeseshop.

Carole maintains several namespace packages, each of which is
independently developed and distributed.  In order for her users to
properly specify dependencies on the right versions of her packages,
she specifies the version numbers in the namespace package's
``setup.py`` file.  Because Carole only wants to update one version
number per package, she specifies the version number in her module and
has the ``setup.py`` extract the module version number when she builds
the *sdist* archive.

David maintains a package in the standard library, and also produces
standalone versions for other versions of Python.  The standard
library copy defines the version number in the module, and this same
version number is used for the standalone distributions as well.


Rationale
=========

Python modules, both in the standard library and available from third
parties, have long included version numbers.  There are established
de-facto standards for describing version numbers, and many ad-hoc
ways have grown organically over the years.  Often, version numbers
can be retrieved from a module programmatically, by importing the
module and inspecting an attribute.  Classic Python distutils
``setup()`` functions [3]_ describe a ``version`` argument where the
release's version number can be specified.  PEP 8 [4]_ describes the
use of a module attribute called ``__version__`` for recording
"Subversion, CVS, or RCS" version strings using keyword expansion.  In
the PEP author's own email archives, the earliest example of the use
of an ``__version__`` module attribute by independent module
developers dates back to 1995.

Another example of version information is the sqlite3 [5]_ library
with its ``sqlite_version_info``, ``version``, and ``version_info``
attributes.  It may not be immediately obvious which attribute
contains a version number for the module, and which contains a version
number for the underlying SQLite3 library.

This informational PEP codifies established practice, and recommends
standard ways of describing module version numbers, along with some
use cases for when -- and when *not* -- to include them.  Its adoption
by module authors is purely voluntary; packaging tools in the standard
library will provide optional support for the standards defined
herein, and other tools in the Python universe may comply as well.


Specification
=============

#. In general, modules in the standard library SHOULD NOT have version
   numbers.  They implicitly carry the version number of the Python
   release they are included in.

#. On a case-by-case basis, standard library modules which are also
   released in standalone form for other Python versions MAY include a
   module version number when included in the standard library, and
   SHOULD include a version number when packaged separately.

#. When a module includes a version number, it SHOULD be available in
   the ``__version__`` attribute on that module.

#. For modules which are also packages, the module namespace SHOULD
   include the ``__version__`` attribute.

#. For modules which live inside a namespace package, the sub-package
   name SHOULD include the ``__version__`` attribute.  The namespace
   module itself SHOULD NOT include its own ``__version__`` attribute.

#. The ``__version__`` attribute's value SHOULD be a string.

#. Module version numbers SHOULD conform to the normalized version
   format specified in PEP 386 [6]_.

#. Module version numbers SHOULD NOT contain version control system
   supplied revision numbers, or any other semantically different
   version numbers (e.g. underlying library version number).

#. Wherever a ``__version__`` attribute exists, a module MAY also
   include a ``__version_info__`` attribute, containing a tuple
   representation of the module version number, for easy comparisons.

#. ``__version_info__`` SHOULD be of the format returned by PEP 386's
   ``parse_version()`` function.

#. The ``version`` attribute in a classic distutils ``setup.py``
   file, or the PEP 345 [7]_ ``Version`` metadata field SHOULD be
   derived from the ``__version__`` field, or vice versa.


Examples
========

Retrieving the version number from a third party package::

    >>> import bzrlib
    >>> bzrlib.__version__
    '2.3.0'

Retrieving the version number from a standard library package that is
also distributed as a standalone module::

    >>> import email
    >>> email.__version__
    '5.1.0'

Version numbers for namespace packages::

    >>> import flufl.i18n
    >>> import flufl.enum
    >>> import flufl.lock

    >>> print flufl.i18n.__version__
    1.0.4
    >>> print flufl.enum.__version__
    3.1
    >>> print flufl.lock.__version__
    2.1

    >>> import flufl
    >>> flufl.__version__
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    AttributeError: 'module' object has no attribute '__version__'
    >>>


Deriving
========

Module version numbers can appear in at least two places, and
sometimes more.  For example, in accordance with this PEP, they are
available programmatically on the module's ``__version__`` attribute.
In a classic distutils ``setup.py`` file, the ``setup()`` function
takes a ``version`` argument, while the distutils2 ``setup.cfg`` file
has a ``version`` key.  The version number must also get into the PEP
345 metadata, preferably when the *sdist* archive is built.  It's
desirable for module authors to only have to specify the version
number once, and have all the other uses derive from this single
definition.

While there are any number of ways this could be done, this section
describes one possible approach, for each scenario.

Let's say Elle adds this attribute to her module file ``elle.py``::

    __version__ = '3.1.1'


Classic distutils
-----------------

In classic distutils, the simplest way to add the version string to
the ``setup()`` function in ``setup.py`` is to do something like
this::

    from elle import __version__
    setup(name='elle', version=__version__)

In the PEP author's experience however, this can fail in some cases,
such as when the module uses automatic Python 3 conversion via the
``2to3`` program (because ``setup.py`` is executed by Python 3 before
the ``elle`` module has been converted).

In that case, it's not much more difficult to write a little code to
parse the ``__version__`` from the file rather than importing it::

    import re
    DEFAULT_VERSION_RE = re.compile(r'(?P<version>\d+\.\d(?:\.\d+)?)')

    def get_version(filename, pattern=None):
        if pattern is None:
            cre = DEFAULT_VERSION_RE
        else:
            cre = re.compile(pattern)
        with open(filename) as fp:
            for line in fp:
                if line.startswith('__version__'):
                    mo = cre.search(line)
                    assert mo, 'No valid __version__ string found'
                    return mo.group('version')
        raise AssertionError('No __version__ assignment found')

    setup(name='elle', version=get_version('elle.py'))


Distutils2
----------

Because the distutils2 style ``setup.cfg`` is declarative, we can't
run any code to extract the ``__version__`` attribute, either via
import or via parsing.  This PEP suggests a special key be added to
the ``[metadata]`` section of the ``setup.cfg`` file to indicate "get
the version from this file".  Something like this might work::

    [metadata]
    version-from-file: elle.py

where the version would be extracted from the named file, using a
parsing method similar to the above.  The exact recipe for doing this will
be discussed in the appropriate distutils2 development forum.

An alternative is to only define the version number in ``setup.cfg``
and use the ``pkgutil`` module [8]_ to make it available
programmatically.  E.g. in ``elle.py``::

    from distutils2._backport import pkgutil
    __version__ = pkgutil.get_distribution('elle').metadata['version']


PEP 376 metadata
================

PEP 376 [9]_ defines a standard for static metadata, but doesn't
describe the process by which this metadata gets created.  It is
highly desirable for the derived version information to be placed into
the PEP 376 ``.dist-info`` metadata at build-time rather than
install-time.  This way, the metadata will be available for
introspection even when the code is not installed.


References
==========

.. [1] Distutils2 documentation
   (http://distutils2.notmyidea.org/)

.. [2] The Cheeseshop (Python Package Index)
   (http://pypi.python.org)

.. [3] http://docs.python.org/distutils/setupscript.html

.. [4] PEP 8, Style Guide for Python Code
   (http://www.python.org/dev/peps/pep-0008)

.. [5] sqlite3 module documentation
   (http://docs.python.org/library/sqlite3.html)

.. [6] PEP 386, Changing the version comparison module in Distutils
   (http://www.python.org/dev/peps/pep-0386/)

.. [7] PEP 345, Metadata for Python Software Packages 1.2
   (http://www.python.org/dev/peps/pep-0345/#version)

.. [8] pkgutil - Package utilities
   (http://distutils2.notmyidea.org/library/pkgutil.html)

.. [9] PEP 376, Database of Installed Python Distributions
   (http://www.python.org/dev/peps/pep-0376/)


Copyright
=========

This document has been placed in the public domain.



..
   Local Variables:
   mode: indented-text
   indent-tabs-mode: nil
   sentence-end-double-space: t
   fill-column: 70
   coding: utf-8
   End:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 836 bytes
Desc: not available
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110405/d3726374/attachment.pgp>

From glyph at twistedmatrix.com  Tue Apr  5 21:34:10 2011
From: glyph at twistedmatrix.com (Glyph Lefkowitz)
Date: Tue, 5 Apr 2011 15:34:10 -0400
Subject: [Python-Dev] Supporting Visual Studio 2010
In-Reply-To: <20110405125242.1992.1543759088.divmod.xquotient.216@localhost.localdomain>
References: <AANLkTimk52ipfYg=H0=Y8D_YQvMxa09YsOystqsA+ANT@mail.gmail.com>
	<4D9A3AD1.3000403@v.loewis.de> <20110405002155.3ded22cc@pitrou.net>
	<4D9A4991.9090800@voidspace.org.uk> <4D9AE7B5.5060900@v.loewis.de>
	<20110405125242.1992.1543759088.divmod.xquotient.216@localhost.localdomain>
Message-ID: <8C900B78-556D-4EF0-A0E4-ED4356D45766@twistedmatrix.com>


On Apr 5, 2011, at 8:52 AM, exarkun at twistedmatrix.com wrote:

> On 09:58 am, martin at v.loewis.de wrote:
>>> Won't that still be an issue despite the stable ABI? Extensions on
>>> Windows should be linked to the same version of MSVCRT used to compile
>>> Python
>> 
>> Not if they use the stable ABI. There still might be issues if you
>> mix CRTs, but none related to the Python ABI - in particular, none
>> of those crashing conditions can arise from the stable ABI.
> 
> Does this mean new versions of distutils let you build_ext with any C compiler, instead of enforcing the same compiler as it has done previously?  That would be great.

That *would* be great.  But is it possible?

<http://www.python.org/dev/peps/pep-0384/> says "functions expecting FILE* are not part of the ABI, to avoid depending on a specific version of the Microsoft C runtime DLL on Windows".  Can extension modules that need to read and write files practically avoid all of those functions?  (If your extension module links a library with a different CRT, but doesn't pass functions back and forth to Python, is that OK?)

The PEP also says that it will allow users to "check whether their modules conform to the ABI", but it doesn't say how that will be done.  How can we build extension modules so that we're sure we're ABI-conformant?

From raymond.hettinger at gmail.com  Tue Apr  5 21:57:13 2011
From: raymond.hettinger at gmail.com (Raymond Hettinger)
Date: Tue, 5 Apr 2011 12:57:13 -0700
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
	Compatibiilty Requirements
In-Reply-To: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
Message-ID: <334B298C-B87D-4A74-B113-46A422779A8E@gmail.com>

[Brett]
> This PEP requires that in these instances that both
> the Python and C code must be semantically identical

Are you talking about the guaranteed semantics
promised by the docs or are you talking about
every possible implementation detail?

ISTM that even with pure python code, we get problems
with people relying on implementation specific details.

* Two functions accept a sequence, but one accesses
  it using __len__ and __getitem__ while the other
  uses __iter__.   (This is like the Spam example
  in the PEP; a sketch follows this list).

* Given pure python library code like:
       if x < y: ...
  I've seen people only implement __lt__
  but not __gt__, making it impossible to
  make even minor adjustments to the code such as:
       if y > x:  ...

* We also suffer from inconsistency in choice of
  exceptions (i.e. overly large sequence indices
  raising either an IndexError, OverflowError, or
  ValueError).
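
A sketch of the first point above (illustrative code only, not taken
from any stdlib module):

    def total_by_iteration(seq):
        return sum(x for x in seq)

    def total_by_indexing(seq):
        return sum(seq[i] for i in range(len(seq)))

    total_by_iteration(x * 10 for x in range(3))  # works, returns 30
    total_by_indexing(x * 10 for x in range(3))   # TypeError: generators
                                                  # have no len()

Both are documented as taking "a sequence", yet only one of them
happens to accept an arbitrary iterable.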

With C code, I wonder if certain implementation
differences go with the territory:

* Concurrency issues are a common semantic difference.
  For example, deque.pop() is atomic because the C
  code holds the GIL but a pure python equivalent
  would have to use locks to achieve same effect
  (and even then might introduce liveness or deadlock
  issues).

* Heapq is one of the rare examples of purely
  algorithmic code.  Much of the code in CPython
  accesses libraries (i.e. the math module),
  interfaces with the OS, accesses binary data
  structures, links to third-party tools (sqlite3
  and Tkinter) or does something else that doesn't
  have pure python equivalents (at least without
  using C types).

* The C API for parsing argument tuples and keywords
  does not readily parallel the way the same are
  written in Python.  And with iterators, the argument
  checking in the C versions tends to happen when the
  iterator is instantiated, but code written with
  pure python generators doesn't have its setup and
  checking section run until next() is called the 
  first time (see the sketch after this list).

* We've had a very difficult time bridging the gulf
  between python's infinite precision numbers and
  C's fixed width numbers (for example, it took
  years to get range() to handle values greater than
  a word size).

* C code tends to be written in a way that takes
  advantage of that language's features instead of
  in a form that is a direct translation of pure
  python.  For example, I think the work being done
  on a C implementation of decimal has vastly different
  internal structures and it would be a huge challenge
  to make it semantically identical to the pure python
  version with respect to its implementation details.
  Likewise, a worthwhile C implementation of OrderedDict
  can only achieve massive space savings by having
  majorly different implementation details.
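
To illustrate the generator part of the argument-checking point above,
a rough sketch (hypothetical function, not from the stdlib):

    def take(iterable, n):
        # in a pure python generator this check only runs once the
        # generator is first advanced, not when take() is called
        if n < 0:
            raise ValueError('n must be non-negative')
        for i, item in enumerate(iterable):
            if i >= n:
                break
            yield item

    g = take([1, 2, 3], -1)   # no error yet -- the body has not started
    next(g)                   # the ValueError shows up here instead

A typical C implementation would validate n when take() is called,
before any iteration happens.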

Instead of expressing the wishful thought that C
versions and pure Python versions are semantically
identical with respect to implementation details,
I would like to see more thought put into specific
limitations on C coding techniques and general
agreement on which implementation specific details
should be guaranteed:

* I would like to see a restriction on the use of
  the concrete C API such that it is *only* used
  when an exact type match has been found or created
  (i.e. if someone writes Py_ListNew(), then it
  is okay to use Py_ListSetItem()).  See 
  http://bugs.python.org/issue10977 for a discussion
  of what can go wrong.  The original C version
  of json was an example of code that used the
  concrete C API in a way that precluded pure
  python subclasses of list and dict.

* I would like to see better consistency on when to 
  use OverflowError vs ValueError vs IndexError.

* There should also be a discussion of whether the
  possible exceptions should be a guaranteed part
  of the API as it is in Java.  Because there were
  no guarantees (i.e. ord(x) can raise this, that,
  and the other), people tend to run an experiment
  and then rely on whatever CPython happens to do.

* There should be a discussion on when it is okay
  for a C implementation to handle only a value
  range that fits in a word.

* When there is C code, when is it okay for a user
  to assume atomic access?  Even with pure python
  code, we're not always consistent about it 
  (i.e. OrderedDict implementation is not threadsafe
  but the LRU_Cache is).

* There should be some agreement that people 
  implementing rich comparisons will implement
  all six operations so that client code doesn't
  become dependent on (x<y versus y>x).  For
  example, we had to add special-case logic to
  heapq years ago because Twisted implemented
  a task object that defined __le__ instead of
  __lt__, so it was usable only with an older
  version of heapq but not with min, sort, etc.
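
A minimal sketch of that failure mode (illustrative class, not the
actual Twisted code):

    class Task:
        def __init__(self, priority):
            self.priority = priority
        def __le__(self, other):
            return self.priority <= other.priority

    a, b = Task(1), Task(2)
    a <= b           # fine: True
    sorted([a, b])   # raises TypeError under Python 3, because sorting
                     # uses < and neither __lt__ nor a reflected __gt__
                     # is defined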

A good PEP should address these issues head-on.
Just saying that C and python code have to
be semantically identical in all implementation
details doesn't really address the issue.


[Brett]
> (sorry, Raymond, for picking on heapq, but is
> was what bit the PyPy people most recently =).

No worries, it wasn't even my code.  Someone
donated it.  There was a discussion on python-dev
and collective agreement to allow it to have 
semantic differences that would let it run faster.
IIRC, the final call was made by Uncle Timmy.

That being said, I would like to see a broader set
of examples rather than extrapolating from
a single piece of 7+ year-old code.  It is purely
algorithmic, so it really just represents the
simplest case.  It would be much more interesting
to discuss what should be done with
future C implementations for threading, decimal,
OrderedDict, or some existing non-trivial C 
accelerators like that for JSON or XML.

Brett, thanks for bringing the issue up.
I've been bugged for a good while about
issues like overbroad use of the concrete C API.


Raymond


From martin at v.loewis.de  Tue Apr  5 22:02:23 2011
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Tue, 05 Apr 2011 22:02:23 +0200
Subject: [Python-Dev] Supporting Visual Studio 2010
In-Reply-To: <8C900B78-556D-4EF0-A0E4-ED4356D45766@twistedmatrix.com>
References: <AANLkTimk52ipfYg=H0=Y8D_YQvMxa09YsOystqsA+ANT@mail.gmail.com>	<4D9A3AD1.3000403@v.loewis.de>
	<20110405002155.3ded22cc@pitrou.net>	<4D9A4991.9090800@voidspace.org.uk>
	<4D9AE7B5.5060900@v.loewis.de>	<20110405125242.1992.1543759088.divmod.xquotient.216@localhost.localdomain>
	<8C900B78-556D-4EF0-A0E4-ED4356D45766@twistedmatrix.com>
Message-ID: <4D9B754F.7090603@v.loewis.de>

> <http://www.python.org/dev/peps/pep-0384/> says "functions expecting
> FILE* are not part of the ABI, to avoid depending on a specific
> version of the Microsoft C runtime DLL on Windows".  Can extension
> modules that need to read and write files practically avoid all of
> those functions?

Certainly! fread/fwrite/fprintf is not part of the Python API at all,
so clearly doesn't need to be part of the ABI.

> (If your extension module links a library with a
> different CRT, but doesn't pass functions back and forth to Python,
> is that OK?)

It is (and always was). The difficult functions are PyRun_AnyFileFlags
and friends.

> The PEP also says that it will allow users to "check whether their
> modules conform to the ABI", but it doesn't say how that will be
> done.  How can we build extension modules so that we're sure we're
> ABI-conformant?

If it compiles with Py_LIMITED_API defined, and links successfully
on Windows, it should be ABI-conforming (unless you deliberately
bypass the test, e.g. by replicating struct definitions in your
own code).
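
E.g. something along these lines in setup.py (just a sketch; the module
name is made up):

    from distutils.core import setup, Extension

    setup(
        name='spam',
        version='1.0',
        ext_modules=[
            Extension(
                'spam',
                sources=['spammodule.c'],
                # compile against the limited API / stable ABI only
                define_macros=[('Py_LIMITED_API', None)],
            ),
        ],
    )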

Regards,
Martin

From martin at v.loewis.de  Tue Apr  5 21:58:31 2011
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Tue, 05 Apr 2011 21:58:31 +0200
Subject: [Python-Dev] Supporting Visual Studio 2010
In-Reply-To: <20110405125242.1992.1543759088.divmod.xquotient.216@localhost.localdomain>
References: <AANLkTimk52ipfYg=H0=Y8D_YQvMxa09YsOystqsA+ANT@mail.gmail.com>	<4D9A3AD1.3000403@v.loewis.de>
	<20110405002155.3ded22cc@pitrou.net>	<4D9A4991.9090800@voidspace.org.uk>
	<4D9AE7B5.5060900@v.loewis.de>
	<20110405125242.1992.1543759088.divmod.xquotient.216@localhost.localdomain>
Message-ID: <4D9B7467.7050309@v.loewis.de>

> Does this mean new versions of distutils let you build_ext with any C
> compiler, instead of enforcing the same compiler as it has done
> previously? 

No, it doesn't. distutils was considered frozen, and changes to it to
better support the ABI were rejected.

Regards,
Martin

From v+python at g.nevcal.com  Tue Apr  5 22:22:55 2011
From: v+python at g.nevcal.com (Glenn Linderman)
Date: Tue, 05 Apr 2011 13:22:55 -0700
Subject: [Python-Dev] PEP 396, Module Version Numbers
In-Reply-To: <20110405145213.29f706aa@neurotica.wooz.org>
References: <20110405145213.29f706aa@neurotica.wooz.org>
Message-ID: <4D9B7A1F.3070106@g.nevcal.com>

On 4/5/2011 11:52 AM, Barry Warsaw wrote:
> #. Module version numbers SHOULD conform to the normalized version
>     format specified in PEP 386 [6]_.
 From PEP 386:

>
>   Roadmap <http://www.python.org/dev/peps/pep-0386/#id21>
>
> Distutils will deprecate its existing versions class in favor of 
> NormalizedVersion. The verlib module presented in this PEP will be 
> renamed to version and placed into the distutils package.
>

With more standardization of versions, should the version module be 
promoted to stdlib directly?


On 4/5/2011 11:52 AM, Barry Warsaw wrote:
>      DEFAULT_VERSION_RE = re.compile(r'(?P<version>\d+\.\d(?:\.\d+)?)')
>      __version__ = pkgutil.get_distribution('elle').metadata['version']

The RE as given won't match alpha, beta, rc, dev, and post suffixes that 
are discussed in PEP 386.

Nor will it match the code shown and quoted for the alternative 
distutils2 case.
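
For instance, a quick check with the RE from the PEP draft:

    import re
    DEFAULT_VERSION_RE = re.compile(r'(?P<version>\d+\.\d(?:\.\d+)?)')
    mo = DEFAULT_VERSION_RE.search("__version__ = '2.0b1'")
    print(mo.group('version'))   # prints '2.0' -- the 'b1' suffix is
                                 # silently dropped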


Other comments:

Are there issues for finding and loading multiple versions of the same 
module?

Should it be possible to determine a version before loading a module?  
If yes, the version module would have to be able to find and parse version 
strings in any of the many places this PEP suggests they could be... so 
that would be somewhat complex, but the complexity shouldn't be used to 
change the answer... but if the answer is yes, it might encourage fewer 
variant cases to be supported for acceptable version definition 
locations for this PEP.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110405/1c51e985/attachment.html>

From greg.ewing at canterbury.ac.nz  Tue Apr  5 22:43:45 2011
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Wed, 06 Apr 2011 08:43:45 +1200
Subject: [Python-Dev] Supporting Visual Studio 2010
In-Reply-To: <4D9AE7B5.5060900@v.loewis.de>
References: <AANLkTimk52ipfYg=H0=Y8D_YQvMxa09YsOystqsA+ANT@mail.gmail.com>
	<4D9A3AD1.3000403@v.loewis.de> <20110405002155.3ded22cc@pitrou.net>
	<4D9A4991.9090800@voidspace.org.uk> <4D9AE7B5.5060900@v.loewis.de>
Message-ID: <4D9B7F01.9060306@canterbury.ac.nz>

Martin v. Löwis wrote:
> Not if they use the stable ABI. There still might be issues if you
> mix CRTs, but none related to the Python ABI - in particular, none
> of those crashing conditions can arise from the stable ABI.

Won't there still be a problem of your extension module
being linked with a CRT that may not be present on the
target system?

-- 
Greg

From greg.ewing at canterbury.ac.nz  Tue Apr  5 22:53:34 2011
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Wed, 06 Apr 2011 08:53:34 +1200
Subject: [Python-Dev] Policy for making changes to the AST
In-Reply-To: <BANLkTi=9CvWRFNbOC0MgqCmt6j=Sx9htgA@mail.gmail.com>
References: <AANLkTimuKcuK3jN-vzLU9QPiXpOEPypUp4WQTzTogoVN@mail.gmail.com>
	<BANLkTin5C+gGxo8gtsh=QRnb-EVBAQEvZw@mail.gmail.com>
	<4D9A0210.4000406@voidspace.org.uk>
	<BANLkTimRGz8vDHG63TG9qEg_gm7g8b9RYQ@mail.gmail.com>
	<CFB1DD2E-20FC-43C6-8A71-3500FC0E5E29@twistedmatrix.com>
	<BANLkTinfJq8w5HyBMfONLQKxPkS36FLd8w@mail.gmail.com>
	<4D9AE5BD.1030407@v.loewis.de>
	<BANLkTi=9CvWRFNbOC0MgqCmt6j=Sx9htgA@mail.gmail.com>
Message-ID: <4D9B814E.2000003@canterbury.ac.nz>

Nick Coghlan wrote:

> 1. Making "docstring" an attribute of the Function node rather than
> leaving it embedded as the first statement in the suite (this avoids
> issues where AST-based constant folding could potentially corrupt the
> docstring)
> 2. Collapsing Num, Str, Bytes, Ellipsis into a single Literal node
> type (the handling of those nodes is the same in a lot of cases)
> 3. Since they're keywords now, pick up True, False, None at the
> parsing stage and turn them into instances of the Literal node type,
> allowing the current Name-based special casing to be removed.

These all sound good to me.

-- 
Greg



From martin at v.loewis.de  Tue Apr  5 23:31:19 2011
From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=)
Date: Tue, 05 Apr 2011 23:31:19 +0200
Subject: [Python-Dev] Supporting Visual Studio 2010
In-Reply-To: <4D9B7F01.9060306@canterbury.ac.nz>
References: <AANLkTimk52ipfYg=H0=Y8D_YQvMxa09YsOystqsA+ANT@mail.gmail.com>	<4D9A3AD1.3000403@v.loewis.de>
	<20110405002155.3ded22cc@pitrou.net>	<4D9A4991.9090800@voidspace.org.uk>
	<4D9AE7B5.5060900@v.loewis.de> <4D9B7F01.9060306@canterbury.ac.nz>
Message-ID: <4D9B8A27.2030102@v.loewis.de>

On 05.04.2011 22:43, Greg Ewing wrote:
> Martin v. Löwis wrote:
>> Not if they use the stable ABI. There still might be issues if you
>> mix CRTs, but none related to the Python ABI - in particular, none
>> of those crashing conditions can arise from the stable ABI.
> 
> Won't there still be a problem of your extension module
> being linked with a CRT that may not be present on the
> target system?

Certainly. Anybody packaging an extension module needs to make sure
all libraries it uses are either already on the target system, or
delivered along with the extension module. Developers could refer
users to the redist package, or they could literally include the CRT
with their package (which is easier with VS2010 than it was with
VS2008).

Regards,
Martin

From barry at python.org  Wed Apr  6 01:22:26 2011
From: barry at python.org (Barry Warsaw)
Date: Tue, 5 Apr 2011 19:22:26 -0400
Subject: [Python-Dev] Issue 11715: building Python from source on
 multiarch Debian/Ubuntu
In-Reply-To: <20110402085545.GA1381@sleipnir.bytereef.org>
References: <20110330161709.756b27f7@neurotica.wooz.org>
	<4D94BB4D.8030405@netwok.org>
	<AANLkTi=xHN2jffUkjz=7m+dbVvjernOQEg1_cHk+SQ+q@mail.gmail.com>
	<1301655247.6531.65.camel@tim-laptop>
	<4D962BBB.6040205@v.loewis.de>
	<20110401195253.77e48df3@neurotica.wooz.org>
	<20110402020309.7c7299c3@pitrou.net>
	<20110402085545.GA1381@sleipnir.bytereef.org>
Message-ID: <20110405192226.386b3bad@neurotica.wooz.org>

On Apr 02, 2011, at 10:55 AM, Stefan Krah wrote:

>In this case, it's clearly Ubuntu who is going to break things. Still,
>the proposed patch could make life a lot easier for many people.

I'd be more concerned about adding some Debian/Ubuntu special code to setup.py
if it weren't already a rat's nest of specialization.

$ grep darwin setup.py | wc -l
41

Not to mention the checks for osf1, unixware7, and openunix8 (!).  It's a
little ugly trying to run dpkg-architecture on every platform, but I'm not
sure anything better can be done, and of course it only needs to run once.

I'm going to apply the patch to Python 2.7, 3.1, 3.2, and 3.3 tomorrow.

Cheers,
-Barry
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 836 bytes
Desc: not available
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110405/68b25287/attachment-0001.pgp>

From exarkun at twistedmatrix.com  Wed Apr  6 03:39:45 2011
From: exarkun at twistedmatrix.com (exarkun at twistedmatrix.com)
Date: Wed, 06 Apr 2011 01:39:45 -0000
Subject: [Python-Dev] Supporting Visual Studio 2010
In-Reply-To: <4D9B7467.7050309@v.loewis.de>
References: <AANLkTimk52ipfYg=H0=Y8D_YQvMxa09YsOystqsA+ANT@mail.gmail.com>
	<4D9A3AD1.3000403@v.loewis.de> <20110405002155.3ded22cc@pitrou.net>
	<4D9A4991.9090800@voidspace.org.uk> <4D9AE7B5.5060900@v.loewis.de>
	<20110405125242.1992.1543759088.divmod.xquotient.216@localhost.localdomain>
	<4D9B7467.7050309@v.loewis.de>
Message-ID: <20110406013945.1992.1463920715.divmod.xquotient.218@localhost.localdomain>

On 5 Apr, 07:58 pm, martin at v.loewis.de wrote:
>>Does this mean new versions of distutils let you build_ext with any C
>>compiler, instead of enforcing the same compiler as it has done
>>previously?
>
>No, it doesn't. distutils was considered frozen, and changes to it to
>better support the ABI where rejected.

How about distutils2 then?

Jean-Paul

From tjreedy at udel.edu  Wed Apr  6 07:02:05 2011
From: tjreedy at udel.edu (Terry Reedy)
Date: Wed, 06 Apr 2011 01:02:05 -0400
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
	Compatibiilty Requirements
In-Reply-To: <334B298C-B87D-4A74-B113-46A422779A8E@gmail.com>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<334B298C-B87D-4A74-B113-46A422779A8E@gmail.com>
Message-ID: <ings4d$qse$1@dough.gmane.org>

On 4/5/2011 3:57 PM, Raymond Hettinger wrote:
> [Brett]
>> This PEP requires that in these instances that both
>> the Python and C code must be semantically identical
>
> Are you talking about the guaranteed semantics
> promised by the docs or are you talking about
> every possible implementation detail?

I personally would limit the guarantee to what the docs promise. That is 
all people should expect anyway if the Python code were executed by some 
other implementation, or by someone else's system-coded version, or even 
a different version of CPython.

This assumes that the docs have reasonably complete specifications. These 
were improved in 3.2 and should improve further as system-code 
implementers find more holes.

Exceptions are a bit of a gray area. The docs are quite uneven about 
specifying exceptions. They sometimes do, sometimes do not, even for 
similar functions. This should be another PEP though.

-- 
Terry Jan Reedy


From john at arbash-meinel.com  Wed Apr  6 11:04:08 2011
From: john at arbash-meinel.com (John Arbash Meinel)
Date: Wed, 06 Apr 2011 11:04:08 +0200
Subject: [Python-Dev] PEP 396, Module Version Numbers
In-Reply-To: <20110405145213.29f706aa@neurotica.wooz.org>
References: <20110405145213.29f706aa@neurotica.wooz.org>
Message-ID: <4D9C2C88.8020604@arbash-meinel.com>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1


...
> #. ``__version_info__`` SHOULD be of the format returned by PEP 386's
>    ``parse_version()`` function.

The only reference to parse_version in PEP 386 I could find was the
setuptools implementation which is pretty odd:

> 
> In other words, parse_version will return a tuple for each version string, that is compatible with StrictVersion but also accept arbitrary version and deal with them so they can be compared:
> 
>>>> from pkg_resources import parse_version as V
>>>> V('1.2')
> ('00000001', '00000002', '*final')
>>>> V('1.2b2')
> ('00000001', '00000002', '*b', '00000002', '*final')
>>>> V('FunkyVersion')
> ('*funkyversion', '*final')

bzrlib has certainly used 'version_info' as a tuple indicating the version, such as:

version_info = (2, 4, 0, 'dev', 2)

and

version_info = (2, 4, 0, 'beta', 1)

and

version_info = (2, 3, 1, 'final', 0)

etc.

This mirrors what we could work out from Python's "sys.version_info".

The *really* nice bit is that you can do:

if sys.version_info >= (2, 6):
  # do stuff for python 2.6(.0) and beyond

Doing that as:

if sys.version_info >= ('00000002', '00000006'):

is pretty ugly.

John
=:->

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.10 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iEYEARECAAYFAk2cLIcACgkQJdeBCYSNAAPT9wCg01L2s0DcqXE+zBAVPB7/Ts0W
HwgAnRRrzR1yiQCSeFGh9jZzuXYrHwPz
=0l4b
-----END PGP SIGNATURE-----

From ncoghlan at gmail.com  Wed Apr  6 15:55:59 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 6 Apr 2011 23:55:59 +1000
Subject: [Python-Dev] Buildbot status
In-Reply-To: <20110405160555.7de069f5@pitrou.net>
References: <20110405160555.7de069f5@pitrou.net>
Message-ID: <BANLkTikyYKqrr=rwvjOJVD26p6JGZ6qL=A@mail.gmail.com>

On Wed, Apr 6, 2011 at 12:05 AM, Antoine Pitrou <solipsis at pitrou.net> wrote:
>
> Hello,
>
> For the record, we have 9 stable buildbots, one of which is currently
> offline: 3 Windows, 2 OS X, 3 Linux and 1 Solaris.
> Paul Moore's XP buildbot is back in the stable stable.
> (http://www.python.org/dev/buildbot/all/waterfall?category=3.x.stable)

Huzzah!

Since it appears the intermittent failures affecting these platforms
have been dealt with, is it time to switch python-committers email
notifications back on for buildbot failures that turn the stable bots
red?

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From solipsis at pitrou.net  Wed Apr  6 15:59:58 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Wed, 6 Apr 2011 15:59:58 +0200
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
 Compatibiilty Requirements
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<334B298C-B87D-4A74-B113-46A422779A8E@gmail.com>
Message-ID: <20110406155958.738fa14b@pitrou.net>

On Tue, 5 Apr 2011 12:57:13 -0700
Raymond Hettinger <raymond.hettinger at gmail.com> wrote:
> 
> * I would like to see a restriction on the use of
>   the concrete C API such that it is *only* used
>   when a exact type match has been found or created
>   (i.e. if someone writes Py_ListNew(), then it
>   is okay to use Py_ListSetItem()).

That should be qualified.
For example, not being able to use PyUnicode_AS_STRING in some
performance-critical code (such as the io lib) would be a large
impediment.

Regards

Antoine.



From solipsis at pitrou.net  Wed Apr  6 16:01:15 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Wed, 06 Apr 2011 16:01:15 +0200
Subject: [Python-Dev] Buildbot status
In-Reply-To: <BANLkTikyYKqrr=rwvjOJVD26p6JGZ6qL=A@mail.gmail.com>
References: <20110405160555.7de069f5@pitrou.net>
	<BANLkTikyYKqrr=rwvjOJVD26p6JGZ6qL=A@mail.gmail.com>
Message-ID: <1302098475.3700.0.camel@localhost.localdomain>

On Wednesday, 6 April 2011 at 23:55 +1000, Nick Coghlan wrote:
> On Wed, Apr 6, 2011 at 12:05 AM, Antoine Pitrou <solipsis at pitrou.net> wrote:
> >
> > Hello,
> >
> > For the record, we have 9 stable buildbots, one of which is currently
> > offline: 3 Windows, 2 OS X, 3 Linux and 1 Solaris.
> > Paul Moore's XP buildbot is back in the stable stable.
> > (http://www.python.org/dev/buildbot/all/waterfall?category=3.x.stable)
> 
> Huzzah!
> 
> Since it appears the intermittent failures affecting these platforms
> have been dealt with, is it time to switch python-committers email
> notifications back on for buildbot failures that turn the stable bots
> red?

They have not been "dealt with" (not all of them anyway), you are just
lucky that they are all green at the moment :)

Regards

Antoine.



From ncoghlan at gmail.com  Wed Apr  6 16:08:45 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 7 Apr 2011 00:08:45 +1000
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
 Compatibiilty Requirements
In-Reply-To: <334B298C-B87D-4A74-B113-46A422779A8E@gmail.com>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<334B298C-B87D-4A74-B113-46A422779A8E@gmail.com>
Message-ID: <BANLkTimeUCB_h-U8LgHhNBDXySt-Osx97w@mail.gmail.com>

On Wed, Apr 6, 2011 at 5:57 AM, Raymond Hettinger
<raymond.hettinger at gmail.com> wrote:
> [Brett]
>> This PEP requires that in these instances that both
>> the Python and C code must be semantically identical
>
> Are you talking about the guaranteed semantics
> promised by the docs or are you talking about
> every possible implementation detail?
>
> ISTM that even with pure python code, we get problems
> with people relying on implementation specific details.

Indeed.

Argument handling is certainly a tricky one - getting positional only
arguments requires a bit of a hack in pure Python code (accepting
*args and unpacking the arguments manually), but it comes reasonably
naturally when parsing arguments directly using the C API.
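
A rough sketch of that *args workaround (made-up function, purely
illustrative):

    def distance(*args):
        """distance(x, y) -- arguments cannot be passed by keyword."""
        if len(args) != 2:
            raise TypeError('distance() takes exactly 2 positional '
                            'arguments (%d given)' % len(args))
        x, y = args
        return abs(x - y)

Calling distance(x=1, y=4) fails with a TypeError, much as a C function
parsing its arguments with PyArg_ParseTuple would.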

As another example where these questions will arise (this time going
the other way) is that I would like to see a pure-Python version of
partial added back in to functools, with the C version becoming an
accelerated override for it.
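
Roughly along these lines (a sketch only, not the exact code I have in
mind):

    # in functools.py
    class partial:
        """Pure Python fallback for functools.partial."""
        def __init__(self, func, *args, **keywords):
            self.func = func
            self.args = args
            self.keywords = keywords

        def __call__(self, *args, **keywords):
            merged = dict(self.keywords)
            merged.update(keywords)
            return self.func(*(self.args + args), **merged)

    try:
        from _functools import partial  # C accelerator overrides the above
    except ImportError:
        pass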

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From ncoghlan at gmail.com  Wed Apr  6 16:11:43 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 7 Apr 2011 00:11:43 +1000
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
 Compatibiilty Requirements
In-Reply-To: <20110406155958.738fa14b@pitrou.net>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<334B298C-B87D-4A74-B113-46A422779A8E@gmail.com>
	<20110406155958.738fa14b@pitrou.net>
Message-ID: <BANLkTi=GxE+5p+gyQ=Kjdt-EA89d7F236A@mail.gmail.com>

On Wed, Apr 6, 2011 at 11:59 PM, Antoine Pitrou <solipsis at pitrou.net> wrote:
> On Tue, 5 Apr 2011 12:57:13 -0700
> Raymond Hettinger <raymond.hettinger at gmail.com> wrote:
>>
>> * I would like to see a restriction on the use of
>>   the concrete C API such that it is *only* used
>>   when a exact type match has been found or created
>>   (i.e. if someone writes Py_ListNew(), then it
>>   is okay to use Py_ListSetItem()).
>
> That should be qualified.
> For example, not being able to use PyUnicode_AS_STRING in some
> performance-critical code (such as the io lib) would be a large
> impediment.

Str/unicode/bytes are really an exception to most rules when it comes
to duck-typing. There's so much code out there that only works with
"real" strings, nobody is surprised when an API doesn't accept string
look-alikes. (There aren't any standard ABCs for those interfaces, and
I haven't really encountered anyone clamouring for them, either).

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From fuzzyman at voidspace.org.uk  Wed Apr  6 16:17:05 2011
From: fuzzyman at voidspace.org.uk (Michael Foord)
Date: Wed, 06 Apr 2011 15:17:05 +0100
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator
 Module	Compatibiilty Requirements
In-Reply-To: <334B298C-B87D-4A74-B113-46A422779A8E@gmail.com>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<334B298C-B87D-4A74-B113-46A422779A8E@gmail.com>
Message-ID: <4D9C75E1.5070907@voidspace.org.uk>

On 05/04/2011 20:57, Raymond Hettinger wrote:
> [snip...]
> [Brett]
>> (sorry, Raymond, for picking on heapq, but is
>> was what bit the PyPy people most recently =).
> No worries, it wasn't even my code.  Someone
> donated it.  The was a discusion on python-dev
> and collective agreement to allow it to have
> semantic differences that would let it run faster.
> IIRC, the final call was made by Uncle Timmy.
>

The major problem that pypy had with heapq, aside from semantic 
differences, was (is?) that if you run the tests against the pure-Python 
version (without the C accelerator) then tests *fail*. This means they 
have to patch the CPython tests in order to be able to use the pure 
Python version.

Ensuring that the tests run against both (even if there are some
unavoidable differences, such as exception types, with the tests
allowing for both or skipping some cases) would at least prevent this
from happening.
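
Something along these lines would do it (a sketch using
test.support.import_fresh_module, as seen earlier in the thread):

    import unittest
    from test.support import import_fresh_module

    py_heapq = import_fresh_module('heapq', blocked=['_heapq'])
    c_heapq = import_fresh_module('heapq', fresh=['_heapq'])

    class HeappopTests:
        def test_pop_smallest(self):
            heap = [3, 1, 2]
            self.module.heapify(heap)
            self.assertEqual(self.module.heappop(heap), 1)

    class PyHeappopTests(HeappopTests, unittest.TestCase):
        module = py_heapq

    class CHeappopTests(HeappopTests, unittest.TestCase):
        module = c_heapq

    if __name__ == '__main__':
        unittest.main()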

All the best,

Michael

> That being said, I would like to see a broader set
> of examples rather rather than extrapolating from
> a single piece 7+ year-old code.  It is purely
> algorithmic, so it really just represents the
> simplest case.  It would be much more interesting
> to discuss something what should be done with
> future C implementations for threading, decimal,
> OrderedDict, or some existing non-trivial C
> accelerators like that for JSON or XML.
>
> Brett, thanks for bringing the issue up.
> I've been bugged for a good while about
> issues like overbroad use of the concrete C API.
>
>
> Raymond
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk


-- 
http://www.voidspace.org.uk/

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing http://www.sqlite.org/different.html


From ncoghlan at gmail.com  Wed Apr  6 16:26:27 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 7 Apr 2011 00:26:27 +1000
Subject: [Python-Dev] PEP 396, Module Version Numbers
In-Reply-To: <4D9B7A1F.3070106@g.nevcal.com>
References: <20110405145213.29f706aa@neurotica.wooz.org>
	<4D9B7A1F.3070106@g.nevcal.com>
Message-ID: <BANLkTi==iQ1jwgUG7XFejwnapPreViKfLw@mail.gmail.com>

On Wed, Apr 6, 2011 at 6:22 AM, Glenn Linderman <v+python at g.nevcal.com> wrote:
> With more standardization of versions, should the version module be promoted
> to stdlib directly?

When Tarek lands "packaging" (i.e. what distutils2 becomes in the
Python 3.3 stdlib), the standardised version handling will come with
it.

> On 4/5/2011 11:52 AM, Barry Warsaw wrote:
>
>     DEFAULT_VERSION_RE = re.compile(r'(?P<version>\d+\.\d(?:\.\d+)?)')
>
>     __version__ = pkgutil.get_distribution('elle').metadata['version']
>
> The RE as given won't match alpha, beta, rc, dev, and post suffixes that are
> discussed in PEP 386.

Indeed, I really don't like the RE suggestion - better to tell people
to just move the version info into the static config file and use
pkgutil to make it available as shown. That solves the build time vs
install time problem as well.

> Nor will it match the code shown and quoted for the alternative distutils2
> case.
>
>
> Other comments:
>
> Are there issues for finding and loading multiple versions of the same
> module?

No, you simply can't do it. Python's import semantics are already
overly complicated even without opening that particular can of worms.

> Should it be possible to determine a version before loading a module?  If
> yes, the version module would have to be able to find a parse version
> strings in any of the many places this PEP suggests they could be... so that
> would be somewhat complex, but the complexity shouldn't be used to change
> the answer... but if the answer is yes, it might encourage fewer variant
> cases to be supported for acceptable version definition locations for this
> PEP.

Yep, this is why the version information should be in the setup.cfg
file, and hence available via pkgutil without loading the module
first.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From ncoghlan at gmail.com  Wed Apr  6 16:27:50 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 7 Apr 2011 00:27:50 +1000
Subject: [Python-Dev] Buildbot status
In-Reply-To: <1302098475.3700.0.camel@localhost.localdomain>
References: <20110405160555.7de069f5@pitrou.net>
	<BANLkTikyYKqrr=rwvjOJVD26p6JGZ6qL=A@mail.gmail.com>
	<1302098475.3700.0.camel@localhost.localdomain>
Message-ID: <BANLkTin7P1vbxgrs2U78JxEqtcfu6C1Z2Q@mail.gmail.com>

On Thu, Apr 7, 2011 at 12:01 AM, Antoine Pitrou <solipsis at pitrou.net> wrote:
> On Wednesday, 6 April 2011 at 23:55 +1000, Nick Coghlan wrote:
>> Since it appears the intermittent failures affecting these platforms
>> have been dealt with, is it time to switch python-committers email
>> notifications back on for buildbot failures that turn the stable bots
>> red?
>
> They have not been "dealt with" (not all of them anyway), you are just
> lucky that they are all green at the moment :)

Ah, 'twas mere unfounded optimism, then. We'll get there one day :)

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From solipsis at pitrou.net  Wed Apr  6 16:30:53 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Wed, 6 Apr 2011 16:30:53 +0200
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator
 Module	Compatibiilty Requirements
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<334B298C-B87D-4A74-B113-46A422779A8E@gmail.com>
	<4D9C75E1.5070907@voidspace.org.uk>
Message-ID: <20110406163053.12de863a@pitrou.net>

On Wed, 06 Apr 2011 15:17:05 +0100
Michael Foord <fuzzyman at voidspace.org.uk> wrote:
> On 05/04/2011 20:57, Raymond Hettinger wrote:
> > [snip...]
> > [Brett]
> >> (sorry, Raymond, for picking on heapq, but is
> >> was what bit the PyPy people most recently =).
> > No worries, it wasn't even my code.  Someone
> > donated it.  The was a discusion on python-dev
> > and collective agreement to allow it to have
> > semantic differences that would let it run faster.
> > IIRC, the final call was made by Uncle Timmy.
> >
> 
> The major problem that pypy had with heapq, aside from semantic 
> differences, was (is?) that if you run the tests against the pure-Python 
> version (without the C accelerator) then tests *fail*. This means they 
> have to patch the CPython tests in order to be able to use the pure 
> Python version.

Was the tests patch contributed back?

Regards

Antoine.



From brian.curtin at gmail.com  Wed Apr  6 16:38:40 2011
From: brian.curtin at gmail.com (Brian Curtin)
Date: Wed, 6 Apr 2011 09:38:40 -0500
Subject: [Python-Dev] Buildbot status
In-Reply-To: <20110405160555.7de069f5@pitrou.net>
References: <20110405160555.7de069f5@pitrou.net>
Message-ID: <BANLkTinhY6skCtYxCPk6Hwj3DV1xt4qneg@mail.gmail.com>

On Tue, Apr 5, 2011 at 09:05, Antoine Pitrou <solipsis at pitrou.net> wrote:

>
> Hello,
>
> For the record, we have 9 stable buildbots, one of which is currently
> offline: 3 Windows, 2 OS X, 3 Linux and 1 Solaris.
> Paul Moore's XP buildbot is back in the stable stable.
> (http://www.python.org/dev/buildbot/all/waterfall?category=3.x.stable)
>
> We also have a new 64-bit FreeBSD 8.2 buildbot donated and managed by
> Stefan Krah.
> (http://www.python.org/dev/buildbot/all/buildslaves/krah-freebsd)
>
> Regards
>
> Antoine.


Apologies to anyone hoping to see Windows Server 2008 in this list...or
maybe you Linux guys are laughing :)

That build slave has had more problems than I've had time to deal with, so
it's resting for now.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110406/8cf7ada3/attachment.html>

From ncoghlan at gmail.com  Wed Apr  6 17:13:26 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 7 Apr 2011 01:13:26 +1000
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
 Compatibiilty Requirements
In-Reply-To: <75FCB6C3-46EB-4455-B905-2B3BDD96F3AF@fuhm.net>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<334B298C-B87D-4A74-B113-46A422779A8E@gmail.com>
	<BANLkTimeUCB_h-U8LgHhNBDXySt-Osx97w@mail.gmail.com>
	<75FCB6C3-46EB-4455-B905-2B3BDD96F3AF@fuhm.net>
Message-ID: <BANLkTi=vJ0=W=5CGF9+_EgwOQSsv14WWXg@mail.gmail.com>

On Thu, Apr 7, 2011 at 1:03 AM, James Y Knight <foom at fuhm.net> wrote:
> Perhaps the argument handling for C functions ought to be enhanced to work like python's argument handling, instead of trying to hack it the other way around?

Oh, definitely. It is just that you pretty much have to use the *args
hack when providing Python versions of C functions that accept both
positional-only arguments and arbitrary keyword arguments.

For "ordinary" calls, simply switching to PyArg_ParseTupleAndKeywords
over other alternatives basically deals with the problem.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From foom at fuhm.net  Wed Apr  6 17:03:28 2011
From: foom at fuhm.net (James Y Knight)
Date: Wed, 6 Apr 2011 11:03:28 -0400
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
	Compatibiilty Requirements
In-Reply-To: <BANLkTimeUCB_h-U8LgHhNBDXySt-Osx97w@mail.gmail.com>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<334B298C-B87D-4A74-B113-46A422779A8E@gmail.com>
	<BANLkTimeUCB_h-U8LgHhNBDXySt-Osx97w@mail.gmail.com>
Message-ID: <75FCB6C3-46EB-4455-B905-2B3BDD96F3AF@fuhm.net>


On Apr 6, 2011, at 10:08 AM, Nick Coghlan wrote:

> On Wed, Apr 6, 2011 at 5:57 AM, Raymond Hettinger
> <raymond.hettinger at gmail.com> wrote:
>> [Brett]
>>> This PEP requires that in these instances that both
>>> the Python and C code must be semantically identical
>> 
>> Are you talking about the guaranteed semantics
>> promised by the docs or are you talking about
>> every possible implementation detail?
>> 
>> ISTM that even with pure python code, we get problems
>> with people relying on implementation specific details.
> 
> Indeed.
> 
> Argument handling is certainly a tricky one - getting positional only
> arguments requires a bit of a hack in pure Python code (accepting
> *args and unpacking the arguments manually), but it comes reasonably
> naturally when parsing arguments directly using the C API.

Perhaps the argument handling for C functions ought to be enhanced to work like python's argument handling, instead of trying to hack it the other way around?

James

From merwok at netwok.org  Wed Apr  6 18:33:50 2011
From: merwok at netwok.org (=?UTF-8?Q?=C3=89ric_Araujo?=)
Date: Wed, 06 Apr 2011 18:33:50 +0200
Subject: [Python-Dev] Supporting Visual Studio 2010
In-Reply-To: <20110406013945.1992.1463920715.divmod.xquotient.218@localhost.localdomain>
References: "<AANLkTimk52ipfYg=H0=Y8D_YQvMxa09YsOystqsA+ANT@mail.gmail.com>"
	<4D9A3AD1.3000403@v.loewis.de> "<20110405002155.3ded22cc@pitrou.net>"
	<4D9A4991.9090800@voidspace.org.uk> "\"<4D9AE7B5.5060900@v.loewis.de>"
	<20110405125242.1992.1543759088.divmod.xquotient.216@localhost.localdomain>"
	<4D9B7467.7050309@v.loewis.de>
	<20110406013945.1992.1463920715.divmod.xquotient.218@localhost.localdomain>
Message-ID: <b32ae0db9ad5f04c5e82cce11a88a02c@netwok.org>

 On 06/04/2011 03:39, exarkun at twistedmatrix.com wrote:
> On 5 Apr, 07:58 pm, martin at v.loewis.de wrote:
>>> Does this mean new versions of distutils let you build_ext with any 
>>> C
>>> compiler, instead of enforcing the same compiler as it has done
>>> previously?
>>
>> No, it doesn't. distutils was considered frozen, and changes to it 
>> to
>> better support the ABI where rejected.
>
> How about distutils2 then?

 If there isn't already an open bug about that, it would be welcome.

 Regards

From dasdasich at googlemail.com  Wed Apr  6 18:52:24 2011
From: dasdasich at googlemail.com (DasIch)
Date: Wed, 6 Apr 2011 18:52:24 +0200
Subject: [Python-Dev] [GSoC] Developing a benchmark suite (for Python 3.x)
Message-ID: <BANLkTim80hX2_K393XugVfOfU1oCvyboNA@mail.gmail.com>

Hello Guys,
I would like to present my proposal for the Google Summer of Code,
concerning the idea of porting the benchmarks to Python 3.x for
speed.pypy.org. I think I have successfully integrated the feedback I
got from prior discussions on the topic and I would like to hear your
opinion.

Abstract
=======

As of now there are several benchmark suites used by Python
implementations: PyPy[1] uses the benchmarks developed for the Unladen
Swallow[2] project as well as several other benchmarks they
implemented on their own, while CPython[3] uses the Unladen Swallow
benchmarks and several "crap benchmarks used for historical
reasons"[4].

This makes comparisons unnecessarily hard and causes confusion. As a
solution to this problem I propose merging the existing benchmarks -
at least those considered worth having - into a single benchmark suite
which can be shared by all implementations and ported to Python 3.x.

Milestones
==========

The project can be divided into several milestones:

1. Definition of the benchmark suite. This will entail contacting
developers of Python implementations (CPython, PyPy, IronPython and
Jython), via discussion on the appropriate mailing lists. This might
be achievable as part of this proposal.

2. Implementing the benchmark suite. Based on the prior agreed upon
definition, the suite will be implemented, which means that the
benchmarks will be merged into a single mercurial repository on
Bitbucket[5].

3. Porting the suite to Python 3.x. The suite will be ported to 3.x
using 2to3[6], as far as possible. The usage of 2to3 will make it
easier to make changes to the repository, especially for those still
focusing on 2.x. It is to be expected that some benchmarks cannot be
ported due to dependencies which are not available on Python 3.x.
Those will be ignored by this project to be ported at a later time,
when the necessary requirements are met.

Start of Program (May 24)
======================

Before the coding (milestones 2 and 3) can begin, it is necessary to
agree upon a set of benchmarks everyone is happy with, as described.

Midterm Evaluation (July 12)
=======================

During the midterm I want to finish the second milestone and before
the evaluation I want to start on the third milestone.

Final Evaluation (Aug 16)
=====================

In this period the benchmark suite will be ported. If everything works
out perfectly I will even have some time left; if there are problems, I
have a buffer here.

Probably Asked Questions
======================

Why not use one of the existing benchmark suites for porting?

The effort would be wasted if there were no good base to build upon;
creating a new benchmark suite based upon the existing ones ensures
that there is one.

Why not use Git/Bazaar/...?

Mercurial is used by CPython, PyPy and is fairly well known and used
in the Python community. This ensures easy accessibility for everyone.

What will happen with the Repository after GSoC/How will access to the
repository be handled?

I propose to give administrative rights to one or two representatives
of each project. Those will provide other developers with write
access.

Communication
=============

Communication of the progress will be done via Twitter[7] and my
blog[8]; if desired I can also send an email with the contents of the
blog post to the mailing lists of the implementations. Furthermore I
am usually quick to answer via IRC (DasIch on freenode), Twitter or
E-Mail (dasdasich at gmail.com) if anyone has any questions.

Contact with the mentor can be established via the means mentioned above
or via Skype.

About Me
========

My name is Daniel Neuhäuser, I am 19 years old and currently a student
at the Bergstadt-Gymnasium Lüdenscheid[9]. I started programming (with
Python) about 4 years ago and became a member of the Pocoo Team[10]
after successfully participating in the Google Summer of Code last
year, during which I ported Sphinx[11] to Python 3.x and implemented
an algorithm to diff abstract syntax trees to preserve comments and
translated strings which has been used by the other GSoC projects
targeting Sphinx.

.. [1]: https://bitbucket.org/pypy/benchmarks/src
.. [2]: http://code.google.com/p/unladen-swallow/
.. [3]: http://hg.python.org/benchmarks/file/tip/performance
.. [4]: http://hg.python.org/benchmarks/file/62e754c57a7f/performance/README
.. [5]: http://bitbucket.org/
.. [6]: http://docs.python.org/library/2to3.html
.. [7]: http://twitter.com/#!/DasIch
.. [8]: http://dasdasich.blogspot.com/
.. [9]: http://bergstadt-gymnasium.de/
.. [10]: http://www.pocoo.org/team/#daniel-neuhauser
.. [11]: http://sphinx.pocoo.org/

P.S.: I would like to get in touch with the IronPython developers as
well; unfortunately I was not able to find a mailing list or IRC
channel. Is there anybody who can point me in the right direction?

From brett at python.org  Wed Apr  6 19:06:30 2011
From: brett at python.org (Brett Cannon)
Date: Wed, 6 Apr 2011 10:06:30 -0700
Subject: [Python-Dev] clarification: subset vs equality Re:
 [Python-checkins] peps: Draft of PEP 399: Pure Python/C Accelerator Module
 Compatibiilty Requirements
In-Reply-To: <BANLkTimZwrh5vK9v9Tcy+VKnCds8mehfTg@mail.gmail.com>
References: <BANLkTimZwrh5vK9v9Tcy+VKnCds8mehfTg@mail.gmail.com>
Message-ID: <BANLkTimXOT_WXBjp46evO351h6u6iFmBog@mail.gmail.com>

On Tue, Apr 5, 2011 at 06:10, Jim Jewett <jimjjewett at gmail.com> wrote:

> On 4/4/11, brett.cannon <python-checkins at python.org> wrote:
> >   Draft of PEP 399: Pure Python/C Accelerator Module Compatibiilty
> > Requirements
>
> > +Abstract
> > +========
> > +
> > +The Python standard library under CPython contains various instances
> > +of modules implemented in both pure Python and C. This PEP requires
> > +that in these instances that both the Python and C code *must* be
> > +semantically identical (except in cases where implementation details
> > +of a VM prevents it entirely). It is also required that new C-based
> > +modules lacking a pure Python equivalent implementation get special
> > +permissions to be added to the standard library.
>
> I think it is worth stating explicitly that the C version can be even
> a strict subset.  It is OK for the accelerated C code to rely on the
> common python version; it is just the reverse that is not OK.
>

I thought that was obvious, but I went ahead and tweaked the abstract and
rationale to make this more explicit.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110406/751eb8ff/attachment.html>

From fijall at gmail.com  Wed Apr  6 19:24:08 2011
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Wed, 6 Apr 2011 19:24:08 +0200
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
 Compatibiilty Requirements
In-Reply-To: <334B298C-B87D-4A74-B113-46A422779A8E@gmail.com>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<334B298C-B87D-4A74-B113-46A422779A8E@gmail.com>
Message-ID: <BANLkTikc2h6bqBGd3GBvCpzwDc+y-i7DBg@mail.gmail.com>

> No worries, it wasn't even my code.  Someone
> donated it.  There was a discussion on python-dev
> and collective agreement to allow it to have
> semantic differences that would let it run faster.
> IIRC, the final call was made by Uncle Timmy.
>

The bug link is here:

http://bugs.python.org/issue3051

I think this PEP is precisely targeting this:

"I saw no need to complicate the pure python code for this."

if you complicate the C code for this, then please complicate the
python code for it as well, since the difference is breaking stuff.

And this:

"FWIW, the C code is not guaranteed to be exactly the same in terms of
implementation details, only the published API should be the same.
And, for this module, a decision was made for the C code to support
> only lists even though the pure python version supports any sequence."

The idea of the PEP is for C code to be guaranteed to be the same as
Python where it matters to people.

Cheers,
fijal

From brett at python.org  Wed Apr  6 19:39:09 2011
From: brett at python.org (Brett Cannon)
Date: Wed, 6 Apr 2011 10:39:09 -0700
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
 Compatibiilty Requirements
In-Reply-To: <334B298C-B87D-4A74-B113-46A422779A8E@gmail.com>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<334B298C-B87D-4A74-B113-46A422779A8E@gmail.com>
Message-ID: <BANLkTikNVAOsyOWdhO4dz7T=hPXWaA2JNA@mail.gmail.com>

On Tue, Apr 5, 2011 at 12:57, Raymond Hettinger <raymond.hettinger at gmail.com
> wrote:

> [Brett]
> > This PEP requires that in these instances that both
> > the Python and C code must be semantically identical
>
> Are you talking about the guaranteed semantics
> promised by the docs or are you talking about
> every possible implementation detail?
>
> ISTM that even with pure python code, we get problems
> with people relying on implementation specific details.
>
> * Two functions accept a sequence, but one accesses
>  it using __len__ and __getitem__ while the other
>  uses __iter__.   (This is like the Spam example
>  in the PEP).
>

That's a consistency problem in all of our C code and not unique to Python/C
modules.
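
To make the divergence concrete, here is a tiny illustration (the helper
functions and the Counter3 class are hypothetical, not code from the PEP
or the stdlib):

def sum_by_index(seq):
    # relies on __len__ and __getitem__
    total = 0
    for i in range(len(seq)):
        total += seq[i]
    return total

def sum_by_iter(seq):
    # relies only on __iter__
    total = 0
    for item in seq:
        total += item
    return total

class Counter3:
    # a duck type that only supports iteration
    def __iter__(self):
        return iter((1, 2, 3))

sum_by_iter(Counter3())     # 6
# sum_by_index(Counter3())  # TypeError: object of type 'Counter3' has no len()

Both functions accept any list, but only one of them accepts the duck type.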


>
> * Given pure python library code like:
>       if x < y: ...
>  I've seen people only implement __lt__
>  but not __gt__, making it impossible to
>  make even minor adjustments to the code such as:
>       if y > x:  ...
>

How is that an issue here? Because someone was lazy in the C code but not
the Python code? That is an issue as that is a difference in what methods
are provided.


>
> * We also suffer from inconsistency in choice of
>  exceptions (i.e. overly large sequence indices
>  raising either an IndexError, OverflowError, or
>  ValueError).
>

Once again, a general issue in our C code and not special to this PEP.


>
> With C code, I wonder if certain implementation
> differences go with the territory:
>
> * Concurrency issues are a common semantic difference.
>  For example, deque.pop() is atomic because the C
>  code holds the GIL but a pure python equivalent
>  would have to use locks to achieve same effect
>  (and even then might introduce liveness or deadlock
>  issues).
>

That's just a CPython-specific issue that will always be tough to work
around. Obviously we can do the best we can, but since the other VMs don't
necessarily have the same concurrency guarantees per Python expression, it is
nearly impossible to define.


>
> * Heapq is one of the rare examples of purely
>  algorithmic code.  Much of the code in CPython
>  accesses libraries (i.e. the math module),
>  interfaces with the OS, accesses binary data
>  structures, links to third-party tools (sqlite3
>  and Tkinter) or does something else that doesn't
>  have pure python equivalents (at least without
>  using C types).
>

Those C modules are outside the scope of the PEP.


>
> * The C API for parsing argument tuples and keywords
>  do not readily parallel the way the same are
>  written in Python.  And with iterators, the argument
>  checking in the C versions tends to happen when the
>  iterator is instantiated, but code written with
>  pure python generators doesn't have its setup and
>  checking section run until next() is called the
>  first time.
>
> * We've had a very difficult time bridging the gulf
>  between python's infinite precision numbers and
>  and C's fixed width numbers (for example, it took
>  years to get range() to handle values greater than
>  a word size).
>

I don't expect that to be an issue as this is a limitation in CPython that
the other VMs never run into. If anything it puts the other VMs at an
advantage when we rely on C code.


>
> * C code tends to be written in a way that takes
>  advantage of that language's features instead of
>  in a form that is a direct translation of pure
>  python.  For example, I think the work being done
>  on a C implementation of decimal has vastly different
>  internal structures and it would be a huge challenge
>  to make it semantically identical to the pure python
>  version with respect to its implementation details.
>  Likewise, a worthwhile C implementation of OrderedDict
>  can only achieve massive space savings by having
>  majorly different implementation details.
>
> Instead of expressing the wishful thought that C
> versions and pure Python versions are semantically
> identical with respect to implementation details,
> I would like to see more thought put into specific
> limitations on C coding techniques and general
> agreement on which implementation specific details
> should be guaranteed:
>
> * I would like to see a restriction on the use of
>  the concrete C API such that it is *only* used
>  when an exact type match has been found or created
>  (i.e. if someone writes PyList_New(), then it
>  is okay to use PyList_SetItem()).  See
>  http://bugs.python.org/issue10977 for a discussion
>  of what can go wrong.  The original json C
>  was an example of code that used the concrete
>  C API in a way that precluded pure python
>  subclasses of list and dict.
>

That's a general coding policy that is not special to this PEP.


>
> * I would like to see better consistency on when to
>  use OverflowError vs ValueError vs IndexError.
>
>
Once again, not specific to this PEP.


> * There should also be a discussion of whether the
>  possible exceptions should be a guaranteed part
>  of the API as it is in Java.  Because there were
>  no guarantees (i.e. ord(x) can raise this, that,
>  and the other), people tend to run an experiment
>  and then rely on whatever C Python happens to do.
>
>
Still not part of this PEP and I am going to stop saying this. =)


> * There should be a discussion on when it is okay
>  for a C implementation to handle only a value
>  range that fits in a word.
>
> * When there is C code, when is it okay for a user
>  to assume atomic access?  Even with pure python
>  code, we're not always consistent about it
>  (i.e. OrderedDict implementation is not threadsafe
>  but the LRU_Cache is).
>
> * There should be some agreement that people
>  implementing rich comparisons will implement
>  all six operations so that client code doesn't
>  become dependent on (x<y versus y>x).  For
>  example, we had to add special-case logic to
>  heapq years ago because Twisted implemented
>  a task object that defined __le__ instead of
>  __lt__, so it was usable only with an older
>  version of heapq but not with min, sort, etc.
>
> A good PEP should address these issues head-on.
> Just saying that C and python code have to
> be semantically identical in all implementation
> details doesn't really address the issue.
>
>
> [Brett]
> > (sorry, Raymond, for picking on heapq, but is
> > was what bit the PyPy people most recently =).
>
> No worries, it wasn't even my code.  Someone
> donated it.  There was a discussion on python-dev
> and collective agreement to allow it to have
> semantic differences that would let it run faster.
> IIRC, the final call was made by Uncle Timmy.
>
> That being said, I would like to see a broader set
> of examples rather than extrapolating from
> a single piece of 7+ year-old code.  It is purely
> algorithmic, so it really just represents the
> simplest case.  It would be much more interesting
> to discuss what should be done with
> future C implementations for threading, decimal,
> OrderedDict, or some existing non-trivial C
> accelerators like that for JSON or XML.
>

This is a known issue and is a priori something that needs to be worked out.
If one of the other VM teams want to dig up some more examples they can, but
I'm not going to put them through that for something that is so obviously
something we want written down in a PEP.


>
> Brett, thanks for bringing the issue up.
> I've been bugged for a good while about
> issues like overbroad use of the concrete C API.
>

Since people are taking my "semantically identical" point too strongly for
what I mean (there is a reason I said "except in cases
where implementation details of a VM prevents [semantic equivalency]
entirely"), how about we change the requirement that C acceleration code
must pass the same test suite (sans C specific issues such as refcount tests
or word size) and adhere to the documented semantics the same? It should get
us the same result without ruffling so many feathers. And if the other VMs
find an inconsistency they can add a proper test and then we fix the code
(as would be the case regardless). And in instances where it is simply not
possible because of C limitations the test won't get written since the test
will never pass.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110406/2178cbfb/attachment.html>

From brett at python.org  Wed Apr  6 19:40:16 2011
From: brett at python.org (Brett Cannon)
Date: Wed, 6 Apr 2011 10:40:16 -0700
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
 Compatibiilty Requirements
In-Reply-To: <BANLkTi=7bajBfuvxCrC2Kn82EKj3PWMiBg@mail.gmail.com>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<BANLkTi=7bajBfuvxCrC2Kn82EKj3PWMiBg@mail.gmail.com>
Message-ID: <BANLkTim1=ghH_ak9mbDjD5yKnwFPyE0RWw@mail.gmail.com>

On Tue, Apr 5, 2011 at 05:01, Nick Coghlan <ncoghlan at gmail.com> wrote:

> On Tue, Apr 5, 2011 at 9:46 AM, Brett Cannon <brett at python.org> wrote:
> >     try:
> >         c_heapq.heappop(Spam())
> >     except TypeError:
> >         # "heap argument must be a list"
> >         pass
> >
> >     try:
> >         py_heapq.heappop(Spam())
> >     except AttributeError:
> >         # "'Foo' object has no attribute 'pop'"
> >         pass
> >
> > This kind of divergence is a problem for users as they unwittingly
> > write code that is CPython-specific. This is also an issue for other
> > VM teams as they have to deal with bug reports from users thinking
> > that they incorrectly implemented the module when in fact it was
> > caused by an untested case.
>
> While I agree with the PEP in principle, I disagree with the way this
> example is written. Guido has stated in the past that code simply
> *cannot* rely on TypeError being consistently thrown instead of
> AttributeError (or vice-versa) when it comes to duck-typing. Code that
> cares which of the two is thrown is wrong.
>
> However, there actually *is* a significant semantic discrepancy in the
> heapq case, which is that py_heapq is duck-typed, while c_heapq is
> not:
>

That's true. I will re-word it to point that out. The example code still
shows it, I just didn't explicitly state that in the example.

-Brett


>
> >>> from test.support import import_fresh_module
> >>> c_heapq = import_fresh_module('heapq', fresh=['_heapq'])
> >>> py_heapq = import_fresh_module('heapq', blocked=['_heapq'])
> >>> from collections import UserList
> >>> class Seq(UserList): pass
> ...
> >>> c_heapq.heappop(UserList())
> Traceback (most recent call last):
>  File "<stdin>", line 1, in <module>
> TypeError: heap argument must be a list
> >>> py_heapq.heappop(UserList())
> Traceback (most recent call last):
>  File "<stdin>", line 1, in <module>
>  File "/home/ncoghlan/devel/py3k/Lib/heapq.py", line 140, in heappop
>    lastelt = heap.pop()    # raises appropriate IndexError if heap is empty
>  File "/home/ncoghlan/devel/py3k/Lib/collections/__init__.py", line 848, in
> pop
>    def pop(self, i=-1): return self.data.pop(i)
> IndexError: pop from empty list
>
> Cheers,
> Nick.
>
> P.S. The reason I was bugging Guido to answer the TypeError vs
> AttributeError question in the first place was to find out whether or
> not I needed to get rid of the following gross inconsistency in the
> behaviour of the with statement relative to other language constructs:
>
> >>> 1()
> Traceback (most recent call last):
>  File "<stdin>", line 1, in <module>
> TypeError: 'int' object is not callable
> >>> with 1: pass
> ...
> Traceback (most recent call last):
>  File "<stdin>", line 1, in <module>
> AttributeError: 'int' object has no attribute '__exit__'
>
>
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110406/79246962/attachment-0001.html>

From raymond.hettinger at gmail.com  Wed Apr  6 20:36:04 2011
From: raymond.hettinger at gmail.com (Raymond Hettinger)
Date: Wed, 6 Apr 2011 11:36:04 -0700
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
	Compatibiilty Requirements
In-Reply-To: <BANLkTikc2h6bqBGd3GBvCpzwDc+y-i7DBg@mail.gmail.com>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<334B298C-B87D-4A74-B113-46A422779A8E@gmail.com>
	<BANLkTikc2h6bqBGd3GBvCpzwDc+y-i7DBg@mail.gmail.com>
Message-ID: <E0F31CC3-4648-40C5-8B9D-2A34FD3210A1@gmail.com>


On Apr 6, 2011, at 10:24 AM, Maciej Fijalkowski wrote:
> 
> "I saw no need to complicate the pure python code for this."
> 
> if you complicate the C code for this, then please as well complicate
> python code for this since it's breaking stuff.


Do you really need a PEP for this one extraordinary and weird case?
The code is long since gone (never in 3.x).  If you disagreed with
the closing of the bug report, just re-open it and a patch can go
into a 2.7 point release.  The downside is that it would not be a
pretty piece of python.


> And this:
> 
> "FWIW, the C code is not guaranteed to be exactly the same in terms of
> implementation details, only the published API should be the same.
> And, for this module, a decision was made for the C code to support
> only lists even though the pure python version supports any sequence."
> 
> The idea of the PEP is for C code to be guaranteed to be the same as
> Python where it matters to people.


That is a good goal.  Unfortunately, people can choose to rely on
all manner of implementation details (whether in C or pure Python).

If we want a pure python version of map() for example, the straight-forward
way doesn't work very well because "map(chr, 3)" raises a TypeError
right away in C code, but a python version using a generator wouldn't
raise until next() is called.  Would this be considered a detail that
matters to people?  If so, it means that all the pure python equivalents
for itertools would have to be written as classes, making them hard
to read and making them run slowly on all implementations except for PyPy.
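
To make that concrete, here is a minimal sketch (gen_map and class_map are
names I made up for illustration; neither is a real stdlib object):

def gen_map(func, iterable):
    # A generator function: nothing in the body runs until the first
    # next() call, so gen_map(chr, 3) does not raise here.
    it = iter(iterable)
    for item in it:
        yield func(item)

class class_map:
    # A class-based version checks its arguments eagerly, matching the
    # built-in C map(): class_map(chr, 3) raises TypeError immediately.
    def __init__(self, func, iterable):
        self._func = func
        self._it = iter(iterable)
    def __iter__(self):
        return self
    def __next__(self):
        return self._func(next(self._it))

g = gen_map(chr, 3)      # no error yet
# next(g)                # TypeError: 'int' object is not iterable
# class_map(chr, 3)      # TypeError raised right away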

The bug report you mentioned originally arose because a major
tool relied on the pure python heapq code comparing "not b <= a"
rather than the equivalent "a < b".  So this was an implementation
detail that mattered to someone, but it went *far* beyond any guaranteed
behaviors.
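
A small sketch of that situation (the Task class here is hypothetical, not
the actual Twisted code):

class Task:
    def __init__(self, priority):
        self.priority = priority
    def __le__(self, other):        # only __le__ is provided
        return self.priority <= other.priority

a, b = Task(1), Task(2)
print(not b <= a)    # True  -- the spelling the old pure-Python heapq used
# print(a < b)       # TypeError in Python 3: '<' not supported between
#                    # instances of 'Task' and 'Task'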

Tracebacks are another area where C code and pure python code
can't be identical.  This may or may not matter to someone.

The example in the PEP focused on which particular exception,
a TypeError or AttributeError, was raised in response to an
oddly constructed Spam() class.  I don't know that that was
foreseeable or that there would have been a reasonable way
to eliminate the difference.  It does sound like the difference
mattered to someone though.

C code tends to use direct internal calls such as Py_SIZE(obj)
rather than doing a lookup using obj.__len__().  This is often
a detail that matters to people because it prevents them from
hooking the call to  __len__.   The C code has to take this 
approach in order to protect its internal invariants and not crash.
If the pure python code tried to emulate this, then every call to
len(self) would need to be replaced by self.__internal_len()
where __internal_len is the real length method and __len__
is made equal to it.
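
A short illustration of the difference (LyingList is a made-up example):

class LyingList(list):
    def __len__(self):
        return 0                 # claim to be empty, whatever the contents

items = LyingList([1, 2, 3])
print(len(items))                # 0 -- the Python-level __len__ hook is used
print(items == [1, 2, 3])        # True -- list's C comparison reads the real
                                 # size (Py_SIZE) and never calls __len__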

In C to Python translations, do we want locks to be required
so that atomicity behaviors are matched?  That property
likely matters to some users.

ISTM that every person who sets out to translate code from
C to Python or vice versa is already trying their best to make them
behave as similarly as possible.  That is always the goal.

However, the PEP seems to be raising the bar by insisting
on the code being functionally identical.  I think we should
make some decisions about what that really means; otherwise,
every piece of code will be in violation of the PEP for someone
choosing to rely on an implementation detail that isn't the same.

In my opinion, a good outcome of this discussion would be
a list of implementation details that we want to guarantee
and ones that we explicitly say that are allowed to vary.

I would also like to see strong guidance on the use of the
concrete C API which can make it impossible for client code
to use subclasses of builtin types (issue 10977).  That is
another area where differences will arise that will matter
to some users.


Raymond


P.S.  It would be great if the PEP were to supply a
complete, real-world example of code that is considered
to be identical.  A pure python version of map() could
serve as a good example, trying to make it model all 
the C behaviors as exactly as possible (argument handling,
choice of exceptions, length estimation and presizing, etc).








-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110406/3c0cf00a/attachment.html>

From martin at v.loewis.de  Wed Apr  6 20:36:44 2011
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Wed, 06 Apr 2011 20:36:44 +0200
Subject: [Python-Dev] Supporting Visual Studio 2010
In-Reply-To: <20110406013945.1992.1463920715.divmod.xquotient.218@localhost.localdomain>
References: <AANLkTimk52ipfYg=H0=Y8D_YQvMxa09YsOystqsA+ANT@mail.gmail.com>	<4D9A3AD1.3000403@v.loewis.de>
	<20110405002155.3ded22cc@pitrou.net>	<4D9A4991.9090800@voidspace.org.uk>
	<4D9AE7B5.5060900@v.loewis.de>	<20110405125242.1992.1543759088.divmod.xquotient.216@localhost.localdomain>	<4D9B7467.7050309@v.loewis.de>
	<20110406013945.1992.1463920715.divmod.xquotient.218@localhost.localdomain>
Message-ID: <4D9CB2BC.9070605@v.loewis.de>

On 06.04.2011 03:39, exarkun at twistedmatrix.com wrote:
> On 5 Apr, 07:58 pm, martin at v.loewis.de wrote:
>>> Does this mean new versions of distutils let you build_ext with any C
>>> compiler, instead of enforcing the same compiler as it has done
>>> previously?
>>
>> No, it doesn't. distutils was considered frozen, and changes to it to
>> better support the ABI were rejected.
> 
> How about distutils2 then?

That certainly will be changed to support the ABI better.

Regards,
Martin

From tjreedy at udel.edu  Wed Apr  6 20:54:02 2011
From: tjreedy at udel.edu (Terry Reedy)
Date: Wed, 06 Apr 2011 14:54:02 -0400
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
	Compatibiilty Requirements
In-Reply-To: <BANLkTikc2h6bqBGd3GBvCpzwDc+y-i7DBg@mail.gmail.com>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>	<334B298C-B87D-4A74-B113-46A422779A8E@gmail.com>
	<BANLkTikc2h6bqBGd3GBvCpzwDc+y-i7DBg@mail.gmail.com>
Message-ID: <inicsa$oo9$1@dough.gmane.org>

On 4/6/2011 1:24 PM, Maciej Fijalkowski wrote:
>> No worries, it wasn't even my code.  Someone
>> donated it.  The was a discusion on python-dev
>> and collective agreement to allow it to have
>> semantic differences that would let it run faster.
>> IIRC, the final call was made by Uncle Timmy.
...
> And, for this module, a decision was made for the C code to support
> only lists even though the pure python version supports any sequence."

I believe that at the time of that decision, the Python code was only 
intended for humans, like the Python (near) equivalents in the itertools 
docs to C-coded itertool functions. Now that we are aiming to have 
stdlib Python code be a reference implementation for all interpreters, 
that decision should be revisited. Either the C code should be 
generalized to sequences or the Python code specialized to lists, making 
sure the doc matches either way.

> The idea of the PEP is for C code to be guaranteed to be the same as
> Python where it matters to people.

-- 
Terry Jan Reedy


From stefan at bytereef.org  Wed Apr  6 21:08:24 2011
From: stefan at bytereef.org (Stefan Krah)
Date: Wed, 6 Apr 2011 21:08:24 +0200
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator
	Module	Compatibiilty Requirements
In-Reply-To: <BANLkTikNVAOsyOWdhO4dz7T=hPXWaA2JNA@mail.gmail.com>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<334B298C-B87D-4A74-B113-46A422779A8E@gmail.com>
	<BANLkTikNVAOsyOWdhO4dz7T=hPXWaA2JNA@mail.gmail.com>
Message-ID: <20110406190824.GA14538@sleipnir.bytereef.org>

Brett Cannon <brett at python.org> wrote:
>     * We also suffer from inconsistency in choice of
>      exceptions (i.e. overly large sequence indices
>      raising either an IndexError, OverflowError, or
>      ValueError).
> 
> 
> Once again, a general issue in our C code and not special to this PEP.

Not only in the C code. I get the impression that exceptions are
sometimes handled somewhat arbitrarily. Example:


decimal.py encodes the rounding mode as strings. For a simple invalid
argument we have the following three cases:


# I would prefer a ValueError:
>>> Decimal("1").quantize(Decimal('2'), "this is not a rounding mode")
Decimal('1')

# I would prefer a ValueError:
>>> Decimal("1.11111111111").quantize(Decimal('1e100000'), "this is not a rounding mode")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/stefan/pydev/cpython/Lib/decimal.py", line 2494, in quantize
    ans = self._rescale(exp._exp, rounding)
  File "/home/stefan/pydev/cpython/Lib/decimal.py", line 2557, in _rescale
    this_function = getattr(self, self._pick_rounding_function[rounding])
KeyError: 'this is not a rounding mode'

# I would prefer a TypeError:
>>> Decimal("1.23456789").quantize(Decimal('1e-100000'), ROUND_UP, "this is not a context")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/stefan/pydev/cpython/Lib/decimal.py", line 2478, in quantize
    if not (context.Etiny() <= exp._exp <= context.Emax):
AttributeError: 'str' object has no attribute 'Etiny'


cdecimal naturally encodes the rounding mode as integers and raises a
TypeError in all three cases. The context in cdecimal is a custom
type that translates the flag dictionaries to simple C integers.

This is extremely fast since the slow dictionaries are only updated
on actual accesses. In normal usage, there is no visible difference
to the decimal.py semantics, but there is no way that one could
use a custom context (why would one anyway?).


I think Raymond is right that these issues need to be addressed. Other
C modules will have similar discrepancies to their Python counterparts.


A start would be:

  1) Module constants (like ROUND_UP) should be treated as opaque. If
     a user relies on a specific type, he is on his own.

  2) If it is not expected that custom types will be used for a certain
     data structure, then a fixed type can be used.


For cdecimal, the context actually falls under the recently added subset
clause of the PEP, but 2) might be interesting for other modules.
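
As a small illustration of point 1) (just a sketch, not code from either
implementation):

from decimal import Decimal, ROUND_UP

# Portable: the constant is treated as an opaque token.
Decimal("1.05").quantize(Decimal("0.1"), rounding=ROUND_UP)

# Works with decimal.py only because ROUND_UP happens to be the string
# 'ROUND_UP'; an implementation that encodes rounding modes as integers
# would reject it.
Decimal("1.05").quantize(Decimal("0.1"), rounding="ROUND_UP")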



Stefan Krah



From raymond.hettinger at gmail.com  Wed Apr  6 21:45:25 2011
From: raymond.hettinger at gmail.com (Raymond Hettinger)
Date: Wed, 6 Apr 2011 12:45:25 -0700
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
	Compatibiilty Requirements
In-Reply-To: <BANLkTikNVAOsyOWdhO4dz7T=hPXWaA2JNA@mail.gmail.com>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<334B298C-B87D-4A74-B113-46A422779A8E@gmail.com>
	<BANLkTikNVAOsyOWdhO4dz7T=hPXWaA2JNA@mail.gmail.com>
Message-ID: <EFAA8EA5-C32E-4CAC-B5DA-183B802BA0A6@gmail.com>


On Apr 6, 2011, at 10:39 AM, Brett Cannon wrote:
> Since people are taking my "semantically identical" point too strongly for what I mean (there is a reason I said "except in cases
> where implementation details of a VM prevents [semantic equivalency] entirely"), how about we change the requirement that C acceleration code must pass the same test suite (sans C specific issues such as refcount tests or word size) and adhere to the documented semantics the same? It should get us the same result without ruffling so many feathers. And if the other VMs find an inconsistency they can add a proper test and then we fix the code (as would be the case regardless). And in instances where it is simply not possible because of C limitations the test won't get written since the test will never pass.

Does the whole PEP just boil down to "if a test is C specific, it should be marked as such"?

Anyone setting out to create equivalent code is already trying to make it as functionally equivalent as possible.   At some point, we should help implementers by thinking out what kinds of implementation details are guaranteed.


Raymond


P.S.  We also need a PEP 8 entry or somesuch giving specific advice about rich comparisons (i.e. never just supply one ordering method, always implement all six); otherwise, rich comparisons will be a never ending source of headaches.

From brett at python.org  Wed Apr  6 22:22:09 2011
From: brett at python.org (Brett Cannon)
Date: Wed, 6 Apr 2011 13:22:09 -0700
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
 Compatibiilty Requirements
In-Reply-To: <EFAA8EA5-C32E-4CAC-B5DA-183B802BA0A6@gmail.com>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<334B298C-B87D-4A74-B113-46A422779A8E@gmail.com>
	<BANLkTikNVAOsyOWdhO4dz7T=hPXWaA2JNA@mail.gmail.com>
	<EFAA8EA5-C32E-4CAC-B5DA-183B802BA0A6@gmail.com>
Message-ID: <BANLkTi=nP57Vt_AitaYqM6V0A3Gyd_ETCw@mail.gmail.com>

On Wed, Apr 6, 2011 at 12:45, Raymond Hettinger <raymond.hettinger at gmail.com
> wrote:

>
> On Apr 6, 2011, at 10:39 AM, Brett Cannon wrote:
> > Since people are taking my "semantically identical" point too strongly
> for what I mean (there is a reason I said "except in cases
> > where implementation details of a VM prevents [semantic equivalency]
> entirely"), how about we change the requirement that C acceleration code
> must pass the same test suite (sans C specific issues such as refcount tests
> or word size) and adhere to the documented semantics the same? It should get
> us the same result without ruffling so many feathers. And if the other VMs
> find an inconsistency they can add a proper test and then we fix the code
> (as would be the case regardless). And in instances where it is simply not
> possible because of C limitations the test won't get written since the test
> will never pass.
>
> Does the whole PEP just boil down to "if a test is C specific, it should be
> marked as such"?
>

How about the test suite needs to have 100% test coverage (or as close as
possible) on the pure Python version? That will guarantee that the C code
which passes that level of test detail is as semantically equivalent as
possible. It also allows the other VMs to write their own acceleration code
without falling into the same trap as CPython can.

There is also the part of the PEP strongly stating that any module that gets
added with no pure Python equivalent will be considered CPython-only and you
better have a damned good reason for it to be only in C from this point
forward.


>
> Anyone setting out to create equivalent code is already trying to make it
> as functionally equivalent as possible.   At some point, we should help
> implementers by thinking out what kinds of implementation details are
> guaranteed.
>

I suspect 100% test coverage will be as good of a metric as any without
bogging ourselves down with every minute detail of C code that could change
as time goes on.

If we want a more thorough definition of what C code should be trying to do
to be as compatible with Python practices as possible, it should be in a doc
in the devguide rather than this PEP.


>
>
> Raymond
>
>
> P.S.  We also need a PEP 8 entry or somesuch giving specific advice about
> rich comparisons (i.e. never just supply one ordering method, always
> implement all six); otherwise, rich comparisons will be a never ending
> source of headaches.



Fine by me, but I will let you handle that one.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110406/7a8b057c/attachment.html>

From stefan_ml at behnel.de  Wed Apr  6 22:35:17 2011
From: stefan_ml at behnel.de (Stefan Behnel)
Date: Wed, 06 Apr 2011 22:35:17 +0200
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
	Compatibiilty Requirements
In-Reply-To: <75FCB6C3-46EB-4455-B905-2B3BDD96F3AF@fuhm.net>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>	<334B298C-B87D-4A74-B113-46A422779A8E@gmail.com>	<BANLkTimeUCB_h-U8LgHhNBDXySt-Osx97w@mail.gmail.com>
	<75FCB6C3-46EB-4455-B905-2B3BDD96F3AF@fuhm.net>
Message-ID: <iniiq5$t2v$1@dough.gmane.org>

James Y Knight, 06.04.2011 17:03:
> On Apr 6, 2011, at 10:08 AM, Nick Coghlan wrote:
>> Argument handling is certainly a tricky one - getting positional only
>> arguments requires a bit of a hack in pure Python code (accepting
>> *args and unpacking the arguments manually), but it comes reasonably
>> naturally when parsing arguments directly using the C API.
>
> Perhaps the argument handling for C functions ought to be enhanced to work like python's argument handling, instead of trying to hack it the other way around?

FWIW, Cython implemented functions have full Python 3 semantics for 
argument unpacking but the generated code is usually faster (and sometimes 
much faster) than the commonly used C-API function calls because it is 
tightly adapted to the typed function signature.

Stefan


From solipsis at pitrou.net  Wed Apr  6 22:37:00 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Wed, 6 Apr 2011 22:37:00 +0200
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
 Compatibiilty Requirements
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<334B298C-B87D-4A74-B113-46A422779A8E@gmail.com>
	<BANLkTikNVAOsyOWdhO4dz7T=hPXWaA2JNA@mail.gmail.com>
	<EFAA8EA5-C32E-4CAC-B5DA-183B802BA0A6@gmail.com>
	<BANLkTi=nP57Vt_AitaYqM6V0A3Gyd_ETCw@mail.gmail.com>
Message-ID: <20110406223700.0bb972be@pitrou.net>

On Wed, 6 Apr 2011 13:22:09 -0700
Brett Cannon <brett at python.org> wrote:
> On Wed, Apr 6, 2011 at 12:45, Raymond Hettinger <raymond.hettinger at gmail.com
> > wrote:
> 
> >
> > On Apr 6, 2011, at 10:39 AM, Brett Cannon wrote:
> > > Since people are taking my "semantically identical" point too strongly
> > for what I mean (there is a reason I said "except in cases
> > > where implementation details of a VM prevents [semantic equivalency]
> > entirely"), how about we change the requirement that C acceleration code
> > must pass the same test suite (sans C specific issues such as refcount tests
> > or word size) and adhere to the documented semantics the same? It should get
> > us the same result without ruffling so many feathers. And if the other VMs
> > find an inconsistency they can add a proper test and then we fix the code
> > (as would be the case regardless). And in instances where it is simply not
> > possible because of C limitations the test won't get written since the test
> > will never pass.
> >
> > Does the whole PEP just boil down to "if a test is C specific, it should be
> > marked as such"?
> >
> 
> How about the test suite needs to have 100% test coverage (or as close as
> possible) on the pure Python version?

Let's say "as good coverage as the C code has", instead ;)

Regards

Antoine.



From skip at pobox.com  Wed Apr  6 22:44:17 2011
From: skip at pobox.com (skip at pobox.com)
Date: Wed, 6 Apr 2011 15:44:17 -0500
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
 Compatibiilty Requirements
In-Reply-To: <BANLkTi=nP57Vt_AitaYqM6V0A3Gyd_ETCw@mail.gmail.com>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<334B298C-B87D-4A74-B113-46A422779A8E@gmail.com>
	<BANLkTikNVAOsyOWdhO4dz7T=hPXWaA2JNA@mail.gmail.com>
	<EFAA8EA5-C32E-4CAC-B5DA-183B802BA0A6@gmail.com>
	<BANLkTi=nP57Vt_AitaYqM6V0A3Gyd_ETCw@mail.gmail.com>
Message-ID: <19868.53409.915361.560799@montanaro.dyndns.org>


    Brett> How about the test suite needs to have 100% test coverage (or as
    Brett> close as possible) on the pure Python version?

Works for me, but you will have to define what "100%" is fairly clearly.
100% of the lines get executed?  All the branches are taken?  Under what
circumstances might the 100% rule be relaxed?

Skip

From raymond.hettinger at gmail.com  Wed Apr  6 23:15:40 2011
From: raymond.hettinger at gmail.com (Raymond Hettinger)
Date: Wed, 6 Apr 2011 14:15:40 -0700
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
	Compatibiilty Requirements
In-Reply-To: <BANLkTi=nP57Vt_AitaYqM6V0A3Gyd_ETCw@mail.gmail.com>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<334B298C-B87D-4A74-B113-46A422779A8E@gmail.com>
	<BANLkTikNVAOsyOWdhO4dz7T=hPXWaA2JNA@mail.gmail.com>
	<EFAA8EA5-C32E-4CAC-B5DA-183B802BA0A6@gmail.com>
	<BANLkTi=nP57Vt_AitaYqM6V0A3Gyd_ETCw@mail.gmail.com>
Message-ID: <21BB2BFC-9F55-45CE-AA9D-B1EBB3A41C9C@gmail.com>


On Apr 6, 2011, at 1:22 PM, Brett Cannon wrote:

> 
> 
> On Wed, Apr 6, 2011 at 12:45, Raymond Hettinger <raymond.hettinger at gmail.com> wrote:
> 
> On Apr 6, 2011, at 10:39 AM, Brett Cannon wrote:
> > Since people are taking my "semantically identical" point too strongly for what I mean (there is a reason I said "except in cases
> > where implementation details of a VM prevents [semantic equivalency] entirely"), how about we change the requirement that C acceleration code must pass the same test suite (sans C specific issues such as refcount tests or word size) and adhere to the documented semantics the same? It should get us the same result without ruffling so many feathers. And if the other VMs find an inconsistency they can add a proper test and then we fix the code (as would be the case regardless). And in instances where it is simply not possible because of C limitations the test won't get written since the test will never pass.
> 
> Does the whole PEP just boil down to "if a test is C specific, it should be marked as such"?
> 
> How about the test suite needs to have 100% test coverage (or as close as possible) on the pure Python version? That will guarantee that the C code which passes that level of test detail is as semantically equivalent as possible. It also allows the other VMs to write their own acceleration code without falling into the same trap as CPython can.

Sounds good.

> 
> There is also the part of the PEP strongly stating that any module that gets added with no pure Python equivalent will be considered CPython-only and you better have a damned good reason for it to be only in C from this point forward.

That seems reasonable for purely algorithmic modules.  I presume if an xz compressor gets added, there won't be a requirement that it be coded in Python ;-)

Also, I'm not sure the current wording of the PEP makes it clear that this is a going-forward requirement.  We don't want to set off an avalanche of new devs rewriting all the current C components (struct, math, cmath, bz2, defaultdict, arraymodule, sha1, mersenne twister, etc).

For the most part, I expect that people writing algorithmic C modules will start-off by writing a pure python version anyway, so this shouldn't be a big change to their development process.


> 
> P.S.  We also need a PEP 8 entry or somesuch giving specific advice about rich comparisons (i.e. never just supply one ordering method, always implement all six); otherwise, rich comparisons will be a never ending source of headaches.
> 
> 
> Fine by me, but I will let you handle that one. 
> 

Done!



Raymond

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110406/fa6543ee/attachment.html>

From raymond.hettinger at gmail.com  Wed Apr  6 23:27:20 2011
From: raymond.hettinger at gmail.com (Raymond Hettinger)
Date: Wed, 6 Apr 2011 14:27:20 -0700
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
	Compatibiilty Requirements
In-Reply-To: <BANLkTi=nP57Vt_AitaYqM6V0A3Gyd_ETCw@mail.gmail.com>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<334B298C-B87D-4A74-B113-46A422779A8E@gmail.com>
	<BANLkTikNVAOsyOWdhO4dz7T=hPXWaA2JNA@mail.gmail.com>
	<EFAA8EA5-C32E-4CAC-B5DA-183B802BA0A6@gmail.com>
	<BANLkTi=nP57Vt_AitaYqM6V0A3Gyd_ETCw@mail.gmail.com>
Message-ID: <2742C0F0-980F-491D-BF2F-6A16AA12A520@gmail.com>


On Apr 6, 2011, at 1:22 PM, Brett Cannon wrote:

> How about the test suite needs to have 100% test coverage (or as close as possible) on the pure Python version? That will guarantee that the C code which passes that level of test detail is as semantically equivalent as possible. It also allows the other VMs to write their own acceleration code without falling into the same trap as CPython can.

One other thought:  we should probably make a specific exception for pure python code using generators.  It is common for generators to defer argument checking until the next() method is called while the C equivalent makes the check immediately upon instantiation (i.e. map(chr, 3) raises TypeError immediately in C but a pure python generator won't raise until the generator is actually run).


Raymond 

From solipsis at pitrou.net  Wed Apr  6 23:40:10 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Wed, 6 Apr 2011 23:40:10 +0200
Subject: [Python-Dev] Force build form
Message-ID: <20110406234010.31dd64af@pitrou.net>


Hello,

For the record, I've tried to make the force build form clearer on the
buildbot Web UI. See e.g.:
http://www.python.org/dev/buildbot/all/builders/x86%20OpenIndiana%20custom

Regards

Antoine.



From ncoghlan at gmail.com  Wed Apr  6 23:40:41 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 7 Apr 2011 07:40:41 +1000
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
 Compatibiilty Requirements
In-Reply-To: <BANLkTi=nP57Vt_AitaYqM6V0A3Gyd_ETCw@mail.gmail.com>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<334B298C-B87D-4A74-B113-46A422779A8E@gmail.com>
	<BANLkTikNVAOsyOWdhO4dz7T=hPXWaA2JNA@mail.gmail.com>
	<EFAA8EA5-C32E-4CAC-B5DA-183B802BA0A6@gmail.com>
	<BANLkTi=nP57Vt_AitaYqM6V0A3Gyd_ETCw@mail.gmail.com>
Message-ID: <BANLkTimoRy3tJukQQ8wME5YZ5SZHiNWSfA@mail.gmail.com>

On Thu, Apr 7, 2011 at 6:22 AM, Brett Cannon <brett at python.org> wrote:
> How about the test suite needs to have 100% test coverage (or as close as
> possible) on the pure Python version? That will guarantee that the C code
> which passes that level of test detail is as semantically equivalent as
> possible. It also allows the other VMs to write their own acceleration code
> without falling into the same trap as CPython can.

Independent of coverage numbers, C acceleration code should really be
tested with 3 kinds of arguments:
- builtin types
- subclasses of builtin types
- duck types

Those are (often) 2 or 3 different code paths in accelerated C code,
but will usually be a single path in the Python code.

(e.g. I'd be willing to bet that it is possible to get the Python
version of heapq to 100% coverage without testing the second two
cases, since the Python code doesn't special-case list in any way)
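
A rough sketch of what such a test matrix could look like (the test class
and names below are hypothetical, not existing stdlib tests):

import unittest
from test.support import import_fresh_module

py_heapq = import_fresh_module('heapq', blocked=['_heapq'])
c_heapq = import_fresh_module('heapq', fresh=['_heapq'])

class ListSubclass(list):
    pass

class HeapPushCases(unittest.TestCase):
    def _check(self, heapq_mod, factory):
        heap = factory()
        for value in (3, 1, 2):
            heapq_mod.heappush(heap, value)
        self.assertEqual(heap[0], 1)

    def test_python_builtin(self):
        self._check(py_heapq, list)

    def test_python_subclass(self):
        self._check(py_heapq, ListSubclass)

    def test_c_builtin(self):
        self._check(c_heapq, list)

    def test_c_subclass(self):
        self._check(c_heapq, ListSubclass)

    # The third case, a duck-typed heap (e.g. collections.UserList), is
    # exactly where the two versions currently diverge: it passes with
    # py_heapq but raises TypeError with c_heapq.

if __name__ == '__main__':
    unittest.main()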

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From v+python at g.nevcal.com  Wed Apr  6 23:58:00 2011
From: v+python at g.nevcal.com (Glenn Linderman)
Date: Wed, 06 Apr 2011 14:58:00 -0700
Subject: [Python-Dev] PEP 396, Module Version Numbers
In-Reply-To: <BANLkTi==iQ1jwgUG7XFejwnapPreViKfLw@mail.gmail.com>
References: <20110405145213.29f706aa@neurotica.wooz.org>	<4D9B7A1F.3070106@g.nevcal.com>
	<BANLkTi==iQ1jwgUG7XFejwnapPreViKfLw@mail.gmail.com>
Message-ID: <4D9CE1E8.9000203@g.nevcal.com>

On 4/6/2011 7:26 AM, Nick Coghlan wrote:
> On Wed, Apr 6, 2011 at 6:22 AM, Glenn Linderman<v+python at g.nevcal.com>  wrote:
>> With more standardization of versions, should the version module be promoted
>> to stdlib directly?
> When Tarek lands "packaging" (i.e. what distutils2 becomes in the
> Python 3.3 stdlib), the standardised version handling will come with
> it.

I thought that might be part of the answer :)  But that, and below, seem 
to indicate that use of "packaging" suddenly becomes a requirement for 
all modules that want to include versions.  The packaging of "version" 
inside a version of "packaging" implies more dependencies on a larger 
body of code for a simple function.


>> On 4/5/2011 11:52 AM, Barry Warsaw wrote:
>>
>>      DEFAULT_VERSION_RE = re.compile(r'(?P<version>\d+\.\d(?:\.\d+)?)')
>>
>>      __version__ = pkgutil.get_distribution('elle').metadata['version']
>>
>> The RE as given won't match alpha, beta, rc, dev, and post suffixes that are
>> discussed in POP 386.
> Indeed, I really don't like the RE suggestion - better to tell people
> to just move the version info into the static config file and use
> pkgutil to make it available as shown. That solves the build time vs
> install time problem as well.
>
>> Nor will it match the code shown and quoted for the alternative distutils2
>> case.
>>
>>
>> Other comments:
>>
>> Are there issues for finding and loading multiple versions of the same
>> module?
> No, you simply can't do it. Python's import semantics are already
> overly complicated even without opening that particular can of worms.

OK, I just recalled some discussion about multiple coexisting versions 
in the past, not that it produced any conclusion that such should or 
would ever be implemented.

>> Should it be possible to determine a version before loading a module?  If
>> yes, the version module would have to be able to find a parse version
>> strings in any of the many places this PEP suggests they could be... so that
>> would be somewhat complex, but the complexity shouldn't be used to change
>> the answer... but if the answer is yes, it might encourage fewer variant
>> cases to be supported for acceptable version definition locations for this
>> PEP.
> Yep, this is why the version information should be in the setup.cfg
> file, and hence available via pkgutil without loading the module
> first.

So, no support for single .py file modules, then?

If "packaging" truly is the only thing that knows the version of 
something, and "version" lives in "packaging", then perhaps packaging 
"__version__" as part of the module is inappropriate, and the API to 
obtain the version of a module should be inside "packaging" with the 
module (or its name) as a parameter, rather than asking the module, 
which may otherwise not need a dependency on the internals of 
"packaging" except to obtain its own version, which, it doesn't likely 
need for its own use anyway, except to report it.

Which is likely why Barry offered so many choices as to where the 
version of a package or module might live in the first place.

Perhaps a different technique would be that if packaging is in use, that 
it could somehow inject the version from setup.cfg into the module, 
either by tweaking the source as it gets packaged, or installed, or 
tweaking the module as/after it gets loaded (the latter still required 
some runtime dependency on code from the packaging system).  A line like 
the following in some designated-to-"packaging" source file could be 
replaced during packaging:

__version__ = "7.9.7xxxx" # replaced by "packaging"

could be used for source codes that use "packaging" which would replace 
it by the version from setup.cfg during the packaging process, whereas a 
module that doesn't use "packaging" would put in the real version, and 
avoid the special comment.  The reason the fake version would have a 
(redundant) number would be to satisfy dependencies during 
pre-"packaging" testing.  (The above would add a new parsing requirement 
to "version" for "xxxx" at the end.  Something different than "dev" so 
that development releases that still go through the packaging process 
are still different than developers test code.  "packaging" should 
probably complain if the versions are numerically different and the 
version in the source file doesn't have "xxxx" or doesn't exactly match 
the version in setup.cfg, and if the special comment is not found.)

Caveat: I'm not 100% clear on when/how any of "distutils", "setuptools", 
or "packaging" are invoked (I was sort of waiting for the dust to settle 
on "packaging" to learn how to use this latest way of doing things), but 
based on historical experience with other systems, and expectations 
about how things "should" work, I would expect that a packaging system 
is something that should be used after a module is complete, to wrap it 
up for distribution and installation, but that the module itself should 
not have significant knowledge of or dependency on such a packaging 
system, so that when the module is invoked at runtime, it doesn't bring 
overhead of such packaging systems to the runtime.  I've seen the 
setuptools "egg" where the module stays inside the egg after being 
installed, and while that might have benefits for reducing the installed 
file count, and perhaps other benefits, I've not tried to figure out 
whether the module is dependent on "setuptools" or whether the 
"setuptools" just provides the benefits mentioned, without the module 
needing to be dependent on it, but still it seems the application winds 
up being dependent on some "setuptools" stuff at runtime, when it uses 
such a module.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110406/c7874896/attachment.html>

From tseaver at palladion.com  Thu Apr  7 00:05:57 2011
From: tseaver at palladion.com (Tres Seaver)
Date: Wed, 06 Apr 2011 18:05:57 -0400
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
	Compatibiilty Requirements
In-Reply-To: <20110406223700.0bb972be@pitrou.net>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>	<334B298C-B87D-4A74-B113-46A422779A8E@gmail.com>	<BANLkTikNVAOsyOWdhO4dz7T=hPXWaA2JNA@mail.gmail.com>	<EFAA8EA5-C32E-4CAC-B5DA-183B802BA0A6@gmail.com>	<BANLkTi=nP57Vt_AitaYqM6V0A3Gyd_ETCw@mail.gmail.com>
	<20110406223700.0bb972be@pitrou.net>
Message-ID: <inio44$s8t$1@dough.gmane.org>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On 04/06/2011 04:37 PM, Antoine Pitrou wrote:
> On Wed, 6 Apr 2011 13:22:09 -0700
> Brett Cannon <brett at python.org> wrote:
>> On Wed, Apr 6, 2011 at 12:45, Raymond Hettinger <raymond.hettinger at gmail.com
>>> wrote:
>>
>>>
>>> On Apr 6, 2011, at 10:39 AM, Brett Cannon wrote:
>>>> Since people are taking my "semantically identical" point too strongly
>>> for what I mean (there is a reason I said "except in cases
>>>> where implementation details of a VM prevents [semantic equivalency]
>>> entirely"), how about we change the requirement that C acceleration code
>>> must pass the same test suite (sans C specific issues such as refcount tests
>>> or word size) and adhere to the documented semantics the same? It should get
>>> us the same result without ruffling so many feathers. And if the other VMs
>>> find an inconsistency they can add a proper test and then we fix the code
>>> (as would be the case regardless). And in instances where it is simply not
>>> possible because of C limitations the test won't get written since the test
>>> will never pass.
>>>
>>> Does the whole PEP just boil down to "if a test is C specific, it should be
>>> marked as such"?
>>>
>>
>> How about the test suite needs to have 100% test coverage (or as close as
>> possible) on the pure Python version?
> 
> Let's say "as good coverage as the C code has", instead ;)

The point is to require that the *Python* version be the "reference
implementation", which means that the tests should be fully covering it
(especially for any non-grandfathered module).


Tres.
- -- 
===================================================================
Tres Seaver          +1 540-429-0999          tseaver at palladion.com
Palladion Software   "Excellence by Design"    http://palladion.com
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.10 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iEYEARECAAYFAk2c48QACgkQ+gerLs4ltQ4p2ACgjds89LnzLnSEZOykwZKzqFVn
VVAAn10q1x74JOW2gi/DlYDgf9hkRCuv
=ee3b
-----END PGP SIGNATURE-----


From raymond.hettinger at gmail.com  Wed Apr  6 23:57:44 2011
From: raymond.hettinger at gmail.com (Raymond Hettinger)
Date: Wed, 6 Apr 2011 14:57:44 -0700
Subject: [Python-Dev] Force build form
In-Reply-To: <20110406234010.31dd64af@pitrou.net>
References: <20110406234010.31dd64af@pitrou.net>
Message-ID: <525F2DBB-D6C1-4780-BBBC-EBCE6803F4B6@gmail.com>


On Apr 6, 2011, at 2:40 PM, Antoine Pitrou wrote:

> For the record, I've tried to make the force build form clearer on the
> buildbot Web UI. See e.g.:
> http://www.python.org/dev/buildbot/all/builders/x86%20OpenIndiana%20custom


Much improved.  Thanks.


Raymond


From rdmurray at bitdance.com  Thu Apr  7 02:36:19 2011
From: rdmurray at bitdance.com (R. David Murray)
Date: Wed, 06 Apr 2011 20:36:19 -0400
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
	Compatibiilty Requirements
In-Reply-To: <inio44$s8t$1@dough.gmane.org>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<334B298C-B87D-4A74-B113-46A422779A8E@gmail.com>
	<BANLkTikNVAOsyOWdhO4dz7T=hPXWaA2JNA@mail.gmail.com>
	<EFAA8EA5-C32E-4CAC-B5DA-183B802BA0A6@gmail.com>
	<BANLkTi=nP57Vt_AitaYqM6V0A3Gyd_ETCw@mail.gmail.com>
	<20110406223700.0bb972be@pitrou.net> <inio44$s8t$1@dough.gmane.org>
Message-ID: <20110407003649.C25F12505E6@mailhost.webabinitio.net>

On Wed, 06 Apr 2011 18:05:57 -0400, Tres Seaver <tseaver at palladion.com> wrote:
> On 04/06/2011 04:37 PM, Antoine Pitrou wrote:
> > On Wed, 6 Apr 2011 13:22:09 -0700 Brett Cannon <brett at python.org> wrote:
> >> How about the test suite needs to have 100% test coverage (or as close as
> >> possible) on the pure Python version?
> > 
> > Let's say "as good coverage as the C code has", instead ;)
> 
> The point is to require that the *Python* version be the "reference
> implementation", which means that the tests should be fully covering it
> (especially for any non-grandfathered module).

There are two slightly different requirements covered by these two
suggested rules.  The Python one says "any test the Python package passes
the C version should also pass, and let's make sure we test all of the
Python code".  The C one says "any test that the C code passes the Python
code should also pass".   These are *almost* the same rule, but not quite.

Brett's point in asking for 100% coverage of the Python code is to make
sure the C implementation covers the same ground as the Python code.
Antoine's point in asking that the Python tests be at least as good as
the C tests is to make sure that the Python code covers the same ground
as the C code.  The former is most important for modules that are
getting new accelerator code, the latter for existing modules that
already have accelerators or are newly acquiring Python versions.

The PEP actually already contains the combined rule:  both the C and
the Python version must pass the *same* test suite (unless there are
virtual machine issues that simply can't be worked around).  I think
the thing that we are talking about adding to the PEP is that there
should be no untested features in *either* the Python or the C version,
insofar as we can make that happen (that is, we are testing also that
the feature sets are the same).  And that passing that comprehensive
test suite is the definition of compliance with the PEP, not abstract
arguments about semantics.  (We can argue about the semantics when we
discuss individual tests :)

100% branch coverage as measured by coverage.py is one good place to
start for building such a comprehensive test suite.  Existing tests
for C versions getting (or newly testing) Python code is another.
Bug reports from alternate VMs will presumably fill out the remainder.
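
For what it's worth, the pattern PEP 399 describes can be sketched
roughly like this (assuming test.support.import_fresh_module, as in
recent 3.x; the test body is only illustrative):

    import unittest
    from test.support import import_fresh_module

    py_heapq = import_fresh_module('heapq', blocked=['_heapq'])
    c_heapq = import_fresh_module('heapq', fresh=['_heapq'])

    class HeapqTests:
        # Shared tests, written once against self.module, so both
        # implementations are exercised by the same suite.
        def test_push_pop(self):
            heap = []
            for n in (3, 1, 2):
                self.module.heappush(heap, n)
            self.assertEqual(self.module.heappop(heap), 1)

    class PyHeapqTests(HeapqTests, unittest.TestCase):
        module = py_heapq

    class CHeapqTests(HeapqTests, unittest.TestCase):
        module = c_heapq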

--
R. David Murray           http://www.bitdance.com

PS: testing that Python code handles subclasses and duck typing is by
no means wasted effort; I've found some bugs in the email package using
such tests, and it is pure Python.

From foom at fuhm.net  Thu Apr  7 03:23:01 2011
From: foom at fuhm.net (James Y Knight)
Date: Wed, 6 Apr 2011 21:23:01 -0400
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
	Compatibiilty Requirements
In-Reply-To: <19868.53409.915361.560799@montanaro.dyndns.org>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<334B298C-B87D-4A74-B113-46A422779A8E@gmail.com>
	<BANLkTikNVAOsyOWdhO4dz7T=hPXWaA2JNA@mail.gmail.com>
	<EFAA8EA5-C32E-4CAC-B5DA-183B802BA0A6@gmail.com>
	<BANLkTi=nP57Vt_AitaYqM6V0A3Gyd_ETCw@mail.gmail.com>
	<19868.53409.915361.560799@montanaro.dyndns.org>
Message-ID: <A3058D4F-11E8-4C67-BE9E-071FF07C93BB@fuhm.net>


On Apr 6, 2011, at 4:44 PM, skip at pobox.com wrote:

>    Brett> How about the test suite needs to have 100% test coverage (or as
>    Brett> close as possible) on the pure Python version?
> 
> Works for me, but you will have to define what "100%" is fairly clearly.
> 100% of the lines get executed?  All the branches are taken?  Under what
> circumstances might the 100% rule be relaxed?

And...does that include all branches taken within the interpreter too? :)

E.g. check whether all possible exceptions are thrown in all possible places an exception could be thrown? (As per the exception compatibility subthread)

And what about all the possible crazy stuff you could do in callbacks back to user code (e.g. mutating arguments passed to the initial function, or installing a trace hook or...)?

Does use of the function as a class attribute need to be covered? (see previous discussion on differences in behavior due to descriptors).

Etcetc.

I'd love it if CPython C modules acted equivalently to python code, but there is almost an endless supply of differences...100% test coverage of the behavior seems completely infeasible if interpreted strictly; some explicit subset of all possible behavior needs to be defined for what users cannot reasonably depend on. (sys.settrace almost certainly belonging on that list :).)

James

From tjreedy at udel.edu  Thu Apr  7 04:42:21 2011
From: tjreedy at udel.edu (Terry Reedy)
Date: Wed, 06 Apr 2011 22:42:21 -0400
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
	Compatibiilty Requirements
In-Reply-To: <inicsa$oo9$1@dough.gmane.org>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>	<334B298C-B87D-4A74-B113-46A422779A8E@gmail.com>	<BANLkTikc2h6bqBGd3GBvCpzwDc+y-i7DBg@mail.gmail.com>
	<inicsa$oo9$1@dough.gmane.org>
Message-ID: <inj8ad$8dl$1@dough.gmane.org>

On 4/6/2011 2:54 PM, Terry Reedy wrote:

> I believe that at the time of that decision, the Python [heapq] code was only
> intended for humans, like the Python (near) equivalents in the itertools
> docs to C-coded itertool functions. Now that we are aiming to have
> stdlib Python code be a reference implementation for all interpreters,
> that decision should be revisited.

OK so far.

 > Either the C code should be generalized to sequences or
 > the Python code specialized to lists, making sure the doc matches
 > either way.

After rereading the heapq doc and .py file and thinking some more, I 
retract this statement for the following reasons.

1. The heapq doc clearly states that a list is required. It leaves the 
behavior for other types undefined. Let it be so.

2. Both _heapq.c (or its actual name) and heapq.py meet (I presume) the 
documented requirements and pass (or would pass) a complete test suite 
based on using lists as heaps. In that regard, both are conformant and 
should be considered 'equivalent'.

3. _heapq.c is clearly optimized for speed. It allows a list subclass as
input and will heapify such, but it ignores a custom __getitem__. My
informal test on the result of random.shuffle(list(range(9999999))) shows
that heapify is over 10x as fast as .sort(). Let it be so.
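
For reference, the kind of quick informal timing meant here looks
roughly like the following (not a rigorous benchmark; the exact ratio
will vary by machine and list size):

    import heapq, random, time

    data = list(range(1000000))
    random.shuffle(data)
    a, b = list(data), list(data)

    t0 = time.time()
    heapq.heapify(a)   # O(n), C-accelerated in CPython
    t1 = time.time()
    b.sort()           # O(n log n)
    t2 = time.time()
    print('heapify: %.3fs   sort: %.3fs' % (t1 - t0, t2 - t1))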

4. When I suggested changing heapq.py, I had forgotten that heapq.py
defines several functions rather than a wrapper class with methods. I
was thinking of putting a type check in .__init__, where it would be
applied once per heap (and possibly bypassed), and could easily be
removed. Instead every function would require a type check for every
call. This would be too obnoxious to me. I love duck typing and held my
nose a bit when suggesting a one-time type check.

5. Python already has an "extras allowed" principle. In other words, an 
implementation does not have to bother to enforce documented 
restrictions. For one example, Python 2 manuals restrict identifiers to 
ascii letters. CPython 2 (at least in recent versions) actually allows 
extended ascii letters, as in latin-1. For another, namespaces (globals 
and attribute namespaces), by their name, only need to map identifiers 
to objects. However, CPython uses general dicts rather than specialized 
string dicts with validity checks. People have exploited both loopholes. 
But those who have should not complain to us if such code fails on a 
different implementation that adheres to the doc.

I think the Language and Library references should start with something 
a bit more specific than at present:

"The Python x.y Language and Library References define the Python x.y 
language, its builtin objects, and standard library. Code written to 
these docs should run on any implementation that includes the features 
used. Code that exploits or depends on any implementation-specific 
feature or behavior may not be portable."

_x.c and x.py are separate implementations of module x. I think they 
should be subject to the same disclaimer.


Therefore, I currently think that the only change needed for heapq 
(assuming both versions pass complete tests as per the doc) is an 
explanation at the top of heapq.py that goes something like this:

"Heapq.py is a reference implementation of the heapq module for both 
humans and implementations that do not have an accelerated version. For 
CPython, most of the functions are replaced by much faster C-coded versions.

Heapq is documented to require a Python list as input to the heap 
functions. The C functions enforce this restriction. The Python versions 
do not and should work with any mutable random-access sequence. Should 
you wish to run the Python code with CPython, copy this file, give it a 
new name, delete the following lines:

try:
     from _heapq import *
except ImportError:
     pass

make any other changes you wish, and do not expect the result to be 
portable."

-- 
Terry Jan Reedy


From techtonik at gmail.com  Thu Apr  7 05:37:03 2011
From: techtonik at gmail.com (anatoly techtonik)
Date: Thu, 7 Apr 2011 06:37:03 +0300
Subject: [Python-Dev] Code highlighting in tracker
Message-ID: <BANLkTinaP0cAO-tinA1n=HKNABsoMp4_9Q@mail.gmail.com>

Is it a good idea to have code highlighting in tracker?

I'd like to gather independent, unbiased opinions for a little research
on Python development. Unfortunately, there is no way to create a
poll, but if you just say yes or no without reading all the other
comments - that would be fine. Thanks.
-- 
anatoly t.

From benjamin at python.org  Thu Apr  7 06:01:28 2011
From: benjamin at python.org (Benjamin Peterson)
Date: Wed, 6 Apr 2011 23:01:28 -0500
Subject: [Python-Dev] Code highlighting in tracker
In-Reply-To: <BANLkTinaP0cAO-tinA1n=HKNABsoMp4_9Q@mail.gmail.com>
References: <BANLkTinaP0cAO-tinA1n=HKNABsoMp4_9Q@mail.gmail.com>
Message-ID: <BANLkTi=bGnpANFE9D42AeNCLEdM__qKbyQ@mail.gmail.com>

2011/4/6 anatoly techtonik <techtonik at gmail.com>:
> Is it a good idea to have code highlighting in tracker?

Why would we need it?



-- 
Regards,
Benjamin

From ncoghlan at gmail.com  Thu Apr  7 06:08:21 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 7 Apr 2011 14:08:21 +1000
Subject: [Python-Dev] PEP 396, Module Version Numbers
In-Reply-To: <4D9CE1E8.9000203@g.nevcal.com>
References: <20110405145213.29f706aa@neurotica.wooz.org>
	<4D9B7A1F.3070106@g.nevcal.com>
	<BANLkTi==iQ1jwgUG7XFejwnapPreViKfLw@mail.gmail.com>
	<4D9CE1E8.9000203@g.nevcal.com>
Message-ID: <BANLkTindwJsPf=tGsve6WD-6Yj-6u7MrcA@mail.gmail.com>

On Thu, Apr 7, 2011 at 7:58 AM, Glenn Linderman <v+python at g.nevcal.com> wrote:
> Perhaps a different technique would be that if packaging is in use, that it
> could somehow inject the version from setup.cfg into the module, either by
> tweaking the source as it gets packaged, or installed, or tweaking the
> module as/after it gets loaded (the latter still required some runtime
> dependency on code from the packaging system). A line like the following in
> some designated-to-"packaging" source file could be replaced during
> packaging:
>
> __version__ = "7.9.7xxxx" # replaced by "packaging"

If you don't upload your module to PyPI, then you can do whatever you
want with your versioning info. If you *do* upload it to PyPI, then
part of doing so properly is to package it so that your metadata is
where other utilities expect it to be. At that point, you can move the
version info over to setup.cfg and add the code into the module to
read it from the metadata store.

The guidelines in 396 really only apply to distributed packages, so it
doesn't make sense to obfuscate by catering to esoteric use cases. If
private modules don't work with the standard tools, who is going to
care? The module author clearly doesn't, and they aren't distributing
it to anyone else. Once they *do* start distributing it, then their
new users will help bring them into line. Having the recommended
practice clearly documented just makes it easier for those users to
point new module distributors in the right direction.

(Also, tsk, tsk, Barry for including Standards track proposals in an
Informational PEP!)

Cheers,
Nick.

P.S. A nice coincidental progression: PEP 376, 386 and 396 are all
related to versioning and package metadata

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From ncoghlan at gmail.com  Thu Apr  7 06:18:31 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 7 Apr 2011 14:18:31 +1000
Subject: [Python-Dev] Force build form
In-Reply-To: <20110406234010.31dd64af@pitrou.net>
References: <20110406234010.31dd64af@pitrou.net>
Message-ID: <BANLkTinmTkURiJVx=D9FmLKRPViKtj8AmQ@mail.gmail.com>

On Thu, Apr 7, 2011 at 7:40 AM, Antoine Pitrou <solipsis at pitrou.net> wrote:
> For the record, I've tried to make the force build form clearer on the
> buildbot Web UI. See e.g.:
> http://www.python.org/dev/buildbot/all/builders/x86%20OpenIndiana%20custom

Looks good - trying it out on my LHS precedence correction branch to
confirm I am using it correctly.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From ncoghlan at gmail.com  Thu Apr  7 06:24:49 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 7 Apr 2011 14:24:49 +1000
Subject: [Python-Dev] Code highlighting in tracker
In-Reply-To: <BANLkTinaP0cAO-tinA1n=HKNABsoMp4_9Q@mail.gmail.com>
References: <BANLkTinaP0cAO-tinA1n=HKNABsoMp4_9Q@mail.gmail.com>
Message-ID: <BANLkTikOgXJSnTfDLg8e7TDXS5ZGc4YbHw@mail.gmail.com>

On Thu, Apr 7, 2011 at 1:37 PM, anatoly techtonik <techtonik at gmail.com> wrote:
> Is it a good idea to have code highlighting in tracker?

The tracker doesn't display code. Only the code review tool and the
repository browser display code (and syntax highlighting is useful but
not essential for those use cases, just as it is useful but not
essential during actual coding).

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From v+python at g.nevcal.com  Thu Apr  7 06:55:49 2011
From: v+python at g.nevcal.com (Glenn Linderman)
Date: Wed, 06 Apr 2011 21:55:49 -0700
Subject: [Python-Dev] PEP 396, Module Version Numbers
In-Reply-To: <BANLkTindwJsPf=tGsve6WD-6Yj-6u7MrcA@mail.gmail.com>
References: <20110405145213.29f706aa@neurotica.wooz.org>	<4D9B7A1F.3070106@g.nevcal.com>	<BANLkTi==iQ1jwgUG7XFejwnapPreViKfLw@mail.gmail.com>	<4D9CE1E8.9000203@g.nevcal.com>
	<BANLkTindwJsPf=tGsve6WD-6Yj-6u7MrcA@mail.gmail.com>
Message-ID: <4D9D43D5.40603@g.nevcal.com>

On 4/6/2011 9:08 PM, Nick Coghlan wrote:
> On Thu, Apr 7, 2011 at 7:58 AM, Glenn Linderman<v+python at g.nevcal.com>  wrote:
>> Perhaps a different technique would be that if packaging is in use, that it
>> could somehow inject the version from setup.cfg into the module, either by
>> tweaking the source as it gets packaged, or installed, or tweaking the
>> module as/after it gets loaded (the latter still required some runtime
>> dependency on code from the packaging system).  A line like the following in
>> some designated-to-"packaging" source file could be replaced during
>> packaging:
>>
>> __version__ = "7.9.7xxxx" # replaced by "packaging"
> If you don't upload your module to PyPI, then you can do whatever you
> want with your versioning info. If you *do* upload it to PyPI, then
> part of doing so properly is to package it so that your metadata is
> where other utilities expect it to be. At that point, you can move the
> version info over to setup.cfg and add the code into the module to
> read it from the metadata store.

The PEP doesn't mention PyPI, and at present none of the modules there 
use "packaging" :)  So it wasn't obvious to me that the PEP applies only 
to PyPI, and I have used modules that were not available from PyPI yet 
were still distributed and packaged somehow (not using "packaging" clearly).

While there has been much effort (discussion by many) to make 
"packaging" useful to many, and that is probably a good thing, I still 
wonder why a packaging system should be loaded into applications when 
all the code has already been installed.  Or is the runtime of 
"packaging" packaged so that only a small amount of code has to be 
loaded to obtain "version" and "__version__"?  I don't recall that being 
discussed on this list, but maybe it has been on more focused lists, 
sorry for my ignorance... but I also read about embedded people 
complaining about how many files Python opens at start up, and see no 
need for a full packaging system to be loaded, just to do version checking.


> The guidelines in 396 really only apply to distributed packages, so it
> doesn't make sense to obfuscate by catering to esoteric use cases. If
> prviate modules don't work with the standard tools, who is going to
> care? The module author clearly doesn't, and they aren't distributing
> it to anyone else. Once they *do* start distributing it, then their
> new users will help bring them into line. Having the recommended
> practice clearly documented just makes it easier for those users to
> point new module distributors in the right direction.

Oh, I fully agree that there should be a PEP with guidelines, and yesterday 
converted my private versioning system to conform with the names in the 
PEP, and the style of version string in the referenced PEP.  And I 
distribute my modules -- so far only in a private group, and so far as 
straight .py files... no use of "packaging".  And even if I never use 
"packaging", it seems like a good thing to conform to this PEP, if I 
can.  Version checking is useful.

> (Also, tsk, tsk, Barry for including Standards track proposals in an
> Informational PEP!)
>
> Cheers,
> Nick.
>
> P.S. A nice coincidental progression: PEP 376, 386 and 396 are all
> related to versioning and package metadata
>

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110406/f1117b32/attachment.html>

From stefan_ml at behnel.de  Thu Apr  7 07:15:09 2011
From: stefan_ml at behnel.de (Stefan Behnel)
Date: Thu, 07 Apr 2011 07:15:09 +0200
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
	Compatibiilty Requirements
In-Reply-To: <BANLkTim1=ghH_ak9mbDjD5yKnwFPyE0RWw@mail.gmail.com>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>	<BANLkTi=7bajBfuvxCrC2Kn82EKj3PWMiBg@mail.gmail.com>
	<BANLkTim1=ghH_ak9mbDjD5yKnwFPyE0RWw@mail.gmail.com>
Message-ID: <injh8t$dan$1@dough.gmane.org>

Brett Cannon, 06.04.2011 19:40:
> On Tue, Apr 5, 2011 at 05:01, Nick Coghlan wrote:
>> However, there actually *is* a significant semantic discrepancy in the
>> heapq case, which is that py_heapq is duck-typed, while c_heapq is
>> not:
>>
>> TypeError: heap argument must be a list
>
> That's true. I will re-word it to point that out. The example code still
> shows it, I just didn't explicitly state that in the example.

Assuming there always is an "equivalent" Python implementation anyway, what 
about using that as a fallback for input types that the C implementation 
cannot deal with?

Or would it be a larger surprise for users if the code ran slower when 
passing in a custom type than if it throws an exception instead?

Stefan


From ncoghlan at gmail.com  Thu Apr  7 08:53:50 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 7 Apr 2011 16:53:50 +1000
Subject: [Python-Dev] PEP 396, Module Version Numbers
In-Reply-To: <4D9D43D5.40603@g.nevcal.com>
References: <20110405145213.29f706aa@neurotica.wooz.org>
	<4D9B7A1F.3070106@g.nevcal.com>
	<BANLkTi==iQ1jwgUG7XFejwnapPreViKfLw@mail.gmail.com>
	<4D9CE1E8.9000203@g.nevcal.com>
	<BANLkTindwJsPf=tGsve6WD-6Yj-6u7MrcA@mail.gmail.com>
	<4D9D43D5.40603@g.nevcal.com>
Message-ID: <BANLkTimD06e01Rzsz_k=A39Ak=f8n-iKEA@mail.gmail.com>

On Thu, Apr 7, 2011 at 2:55 PM, Glenn Linderman <v+python at g.nevcal.com> wrote:
> __version__ = "7.9.7xxxx" # replaced by "packaging"
>
> If you don't upload your module to PyPI, then you can do whatever you
> want with your versioning info. If you *do* upload it to PyPI, then
> part of doing so properly is to package it so that your metadata is
> where other utilities expect it to be. At that point, you can move the
> version info over to setup.cfg and add the code into the module to
> read it from the metadata store.
>
> The PEP doesn't mention PyPI, and at present none of the modules there use
> "packaging" :)

They all use distutils (or setuptools or distutils2) though, which is
what packaging replaces.

(Sorry for not making that clear - it's easy to forget which aspects
of these issues aren't common knowledge as yet)

> So it wasn't obvious to me that the PEP applies only to
> PyPI, and I have used modules that were not available from PyPI yet were
> still distributed and packaged somehow (not using "packaging" clearly).

packaging is the successor to the current distutils package.
Distribution via PyPI is the main reason to bother with creating a
correctly structured package - for internal distribution, people use
all sorts of ad hoc schemes (often just the packaging systems of their
internal target platforms). I'll grant that some people do use
properly structured packages for purely internal use, but I'd also be
willing to bet that they're the exception rather than the rule.

What I would like to see the PEP say is that if you don't *have* a
setup.cfg file, then go ahead and embed the version directly in your
Python source file. If you *do* have one, then put the version there
and retrieve it with "pkgutil" if you want to provide a __version__
attribute.

Barry is welcome to make a feature request to allow that dependency to
go the other way, with the packaging system reading the version number
out of the source file, but such a suggestion doesn't belong in an
Informational PEP. If such a feature is ever accepted, then the
recommendation in the PEP could be updated.

> While there has been much effort (discussion by many) to make "packaging"
> useful to many, and that is probably a good thing, I still wonder why a
> packaging system should be loaded into applications when all the code has
> already been installed. Or is the runtime of "packaging" packaged so that
> only a small amount of code has to be loaded to obtain "version" and
> "__version__"? I don't recall that being discussed on this list, but maybe
> it has been on more focused lists, sorry for my ignorance... but I also read
> about embedded people complaining about how many files Python opens at start
> up, and see no need for a full packaging system to be loaded, just to do
> version checking.

pkgutil will be able to read the metadata - it is a top level standard
library module, *not* a submodule of distutils/packaging.

It may make sense for the version parsing support to be in pkgutil as
well, since PEP 345 calls for it to be stored as a string in the
package metadata, but it needs to be converted with NormalizedVersion
to be safe to use in arbitrary version range checks. That's Tarek's
call as to whether to provide it that way, or as a submodule of
packaging. As you say, the fact that distutils/packaging are usually
first on the chopping block when distros are looking to save space is
a strong point in favour of having that particular functionality
somewhere else.

That said, I've seen people have problems because a Python 2.6
redistributor decided "contextlib" wasn't important and left it out,
so YMMV regardless of where the code ends up.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From ncoghlan at gmail.com  Thu Apr  7 08:59:24 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 7 Apr 2011 16:59:24 +1000
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
 Compatibiilty Requirements
In-Reply-To: <injh8t$dan$1@dough.gmane.org>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<BANLkTi=7bajBfuvxCrC2Kn82EKj3PWMiBg@mail.gmail.com>
	<BANLkTim1=ghH_ak9mbDjD5yKnwFPyE0RWw@mail.gmail.com>
	<injh8t$dan$1@dough.gmane.org>
Message-ID: <BANLkTikB_B6D3G3wMN7U6yxtmk93Dj9yoA@mail.gmail.com>

On Thu, Apr 7, 2011 at 3:15 PM, Stefan Behnel <stefan_ml at behnel.de> wrote:
> Assuming there always is an "equivalent" Python implementation anyway, what
> about using that as a fallback for input types that the C implementation
> cannot deal with?
>
> Or would it be a larger surprise for users if the code ran slower when
> passing in a custom type than if it throws an exception instead?

It often isn't practical - the internal structures of the two don't
necessarily play nicely together.

It's an interesting idea for heapq in particular, though. (The C
module could fairly easily alias the Python versions with underscore
prefixes, then fall back to those instead of raising an error if
PyList_CheckExact fails).
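
Expressed at the Python level, that fallback idea would look something
like the sketch below (the real proposal concerns the C module; the
pure Python push here exists only to keep the example self-contained):

    try:
        from _heapq import heappush as _c_heappush
    except ImportError:
        _c_heappush = None

    def _py_heappush(heap, item):
        # Duck-typed push: works on any mutable random-access sequence.
        heap.append(item)
        pos = len(heap) - 1
        while pos > 0:
            parent = (pos - 1) >> 1
            if heap[pos] < heap[parent]:
                heap[pos], heap[parent] = heap[parent], heap[pos]
                pos = parent
            else:
                break

    def heappush(heap, item):
        if _c_heappush is not None and type(heap) is list:
            _c_heappush(heap, item)    # fast path: list-only accelerator
        else:
            _py_heappush(heap, item)   # fallback for other sequences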

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From v+python at g.nevcal.com  Thu Apr  7 09:05:31 2011
From: v+python at g.nevcal.com (Glenn Linderman)
Date: Thu, 07 Apr 2011 00:05:31 -0700
Subject: [Python-Dev] PEP 396, Module Version Numbers
In-Reply-To: <BANLkTimD06e01Rzsz_k=A39Ak=f8n-iKEA@mail.gmail.com>
References: <20110405145213.29f706aa@neurotica.wooz.org>	<4D9B7A1F.3070106@g.nevcal.com>	<BANLkTi==iQ1jwgUG7XFejwnapPreViKfLw@mail.gmail.com>	<4D9CE1E8.9000203@g.nevcal.com>	<BANLkTindwJsPf=tGsve6WD-6Yj-6u7MrcA@mail.gmail.com>	<4D9D43D5.40603@g.nevcal.com>
	<BANLkTimD06e01Rzsz_k=A39Ak=f8n-iKEA@mail.gmail.com>
Message-ID: <4D9D623B.6060603@g.nevcal.com>

On 4/6/2011 11:53 PM, Nick Coghlan wrote:
> On Thu, Apr 7, 2011 at 2:55 PM, Glenn Linderman<v+python at g.nevcal.com>  wrote:
>> __version__ = "7.9.7xxxx" # replaced by "packaging"
>>
>> If you don't upload your module to PyPI, then you can do whatever you
>> want with your versioning info. If you *do* upload it to PyPI, then
>> part of doing so properly is to package it so that your metadata is
>> where other utilities expect it to be. At that point, you can move the
>> version info over to setup.cfg and add the code into the module to
>> read it from the metadata store.
>>
>> The PEP doesn't mention PyPI, and at present none of the modules there use
>> "packaging" :)
> They all use distutils (or setuptools or distutils2) though, which is
> what packaging replaces.
>
> (Sorry for not making that clear - it's easy to forget which aspects
> of these issues aren't common knowledge as yet)

I knew that packaging replaced those others, but was unaware that those 
were the only two methods used on PyPI.  Not that I'd heard of or 
experienced any others from that source, but there are many packages there.


>> So it wasn't obvious to me that the PEP applies only to
>> PyPI, and I have used modules that were not available from PyPI yet were
>> still distributed and packaged somehow (not using "packaging" clearly).
> packaging is the successor to the current distutils package.
> Distribution via PyPI is the main reason to bother with creating a
> correctly structured package - for internal distribution, people use
> all sorts of ad hoc schemes (often just the packaging systems of their
> internal target platforms). I'll grant that some people do use
> properly structured packages for purely internal use, but I'd also be
> willing to bet that they're the exception rather than the rule.
>
> What I would like to see the PEP say is that if you don't *have* a
> setup.cfg file, then go ahead and embed the version directly in your
> Python source file. If you *do* have one, then put the version there
> and retrieve it with "pkgutil" if you want to provide a __version__
> attribute.
>
> Barry is welcome to make a feature request to allow that dependency to
> go the other way, with the packaging system reading the version number
> out of the source file, but such a suggestion doesn't belong in an
> Informational PEP. If such a feature is ever accepted, then the
> recommendation in the PEP could be updated.
>
>> While there has been much effort (discussion by many) to make "packaging"
>> useful to many, and that is probably a good thing, I still wonder why a
>> packaging system should be loaded into applications when all the code has
>> already been installed.  Or is the runtime of "packaging" packaged so that
>> only a small amount of code has to be loaded to obtain "version" and
>> "__version__"?  I don't recall that being discussed on this list, but maybe
>> it has been on more focused lists, sorry for my ignorance... but I also read
>> about embedded people complaining about how many files Python opens at start
>> up, and see no need for a full packaging system to be loaded, just to do
>> version checking.
> pkgutil will be able to read the metadata - it is a top level standard
> library module, *not* a submodule of distutils/packaging.
>
> It may make sense for the version parsing support to be in pkgutil as
> well, since PEP 345 calls for it to be stored as a string in the
> package metadata, but it needs to be converted with NormalizedVersion
> to be safe to use in arbitrary version range checks. That's Tarek's
> call as to whether to provide it that way, or as a submodule of
> packaging. As you say, the fact that distutils/packaging are usually
> first on the chopping block when distros are looking to save space is
> a strong point in favour of having that particular functionality
> somewhere else.

This sounds more practical; if I recall prior discussions correctly,
pkgutil reads a standard set of metadata that packaging systems should
provide, and the version would seem to be part of that, more so than of
packaging itself... things would have a better (smaller at runtime)
dependency tree that way, from what I understand about it.

> That said, I've seen people have problems because a Python 2.6
> redistributor decided "contextlib" wasn't important and left it out,
> so YMMV regardless of where the code ends up.

:)

> Cheers,
> Nick

Thanks Nick, for the info in this thread.   This is mostly a thank you 
note for helping me understand better.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110407/781fc3bb/attachment.html>

From ncoghlan at gmail.com  Thu Apr  7 11:24:30 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 7 Apr 2011 19:24:30 +1000
Subject: [Python-Dev] PEP 396, Module Version Numbers
In-Reply-To: <4D9D623B.6060603@g.nevcal.com>
References: <20110405145213.29f706aa@neurotica.wooz.org>
	<4D9B7A1F.3070106@g.nevcal.com>
	<BANLkTi==iQ1jwgUG7XFejwnapPreViKfLw@mail.gmail.com>
	<4D9CE1E8.9000203@g.nevcal.com>
	<BANLkTindwJsPf=tGsve6WD-6Yj-6u7MrcA@mail.gmail.com>
	<4D9D43D5.40603@g.nevcal.com>
	<BANLkTimD06e01Rzsz_k=A39Ak=f8n-iKEA@mail.gmail.com>
	<4D9D623B.6060603@g.nevcal.com>
Message-ID: <BANLkTinn93VKc5-XXJS2ReBKDwdV-OScXw@mail.gmail.com>

On Thu, Apr 7, 2011 at 5:05 PM, Glenn Linderman <v+python at g.nevcal.com> wrote:
> On 4/6/2011 11:53 PM, Nick Coghlan wrote:
> They all use distutils (or setuptools or distutils2) though, which is
> what packaging replaces.
>
> (Sorry for not making that clear - it's easy to forget which aspects
> of these issues aren't common knowledge as yet)
>
> I knew that packaging replaced those others, but was unaware that those were
> the only two methods used on PyPI. Not that I'd heard of or experienced any
> others from that source, but there are many packages there.

I believe it is possible to get stuff up onto PyPI without actually
using one of the various packaging utilities, but such entries
generally won't play well with others (including automated tools like
pip and cheesecake).

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From fuzzyman at voidspace.org.uk  Thu Apr  7 13:10:59 2011
From: fuzzyman at voidspace.org.uk (Michael Foord)
Date: Thu, 07 Apr 2011 12:10:59 +0100
Subject: [Python-Dev] PEP 396, Module Version Numbers
In-Reply-To: <BANLkTi==iQ1jwgUG7XFejwnapPreViKfLw@mail.gmail.com>
References: <20110405145213.29f706aa@neurotica.wooz.org>	<4D9B7A1F.3070106@g.nevcal.com>
	<BANLkTi==iQ1jwgUG7XFejwnapPreViKfLw@mail.gmail.com>
Message-ID: <4D9D9BC3.7040101@voidspace.org.uk>

On 06/04/2011 15:26, Nick Coghlan wrote:
> On Wed, Apr 6, 2011 at 6:22 AM, Glenn Linderman<v+python at g.nevcal.com>  wrote:
>> With more standardization of versions, should the version module be promoted
>> to stdlib directly?
> When Tarek lands "packaging" (i.e. what distutils2 becomes in the
> Python 3.3 stdlib), the standardised version handling will come with
> it.
>
>> On 4/5/2011 11:52 AM, Barry Warsaw wrote:
>>
>>      DEFAULT_VERSION_RE = re.compile(r'(?P<version>\d+\.\d(?:\.\d+)?)')
>>
>>      __version__ = pkgutil.get_distribution('elle').metadata['version']
>>

I really dislike this way of specifying the version. For a start it is 
really ugly.

More importantly it means the version information is *only* available if 
the package has been installed by "packaging", and so isn't available 
for the parts of my pre-build process like building the documentation 
(which import the version number to put into the docs).

Currently all my packages have the canonical version number information 
in the package itself using:

     __version__ = '1.2.3'

Anything that needs the version number, including setup.py for upload to 
pypi, has one place to look for it and it doesn't depend on any other 
tools or processes. If switching to "packaging" prevents me from doing 
this then it will inhibit me using "packaging".

What I may have to do is use a python script that will generate the 
static metadata, which is not such a bad thing I guess as it will only 
need to be executed at package build time. I won't be switching to that 
horrible technique for specifying versions within my packages though.
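
For completeness, the usual way to keep __version__ in the module as
the single source of truth, without importing the package at build
time, is a small helper along these lines ('mypkg' is a placeholder):

    import re

    def read_version(path='mypkg/__init__.py'):
        with open(path) as f:
            source = f.read()
        match = re.search(r"^__version__\s*=\s*['\"]([^'\"]+)['\"]",
                          source, re.MULTILINE)
        if match is None:
            raise RuntimeError('no __version__ found in %s' % path)
        return match.group(1)

A setup.py or metadata-generating script can then call read_version()
instead of importing the package.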

All the best,

Michael
>> The RE as given won't match alpha, beta, rc, dev, and post suffixes that are
>> discussed in PEP 386.
> Indeed, I really don't like the RE suggestion - better to tell people
> to just move the version info into the static config file and use
> pkgutil to make it available as shown. That solves the build time vs
> install time problem as well.
>
>> Nor will it match the code shown and quoted for the alternative distutils2
>> case.
>>
>>
>> Other comments:
>>
>> Are there issues for finding and loading multiple versions of the same
>> module?
> No, you simply can't do it. Python's import semantics are already
> overly complicated even without opening that particular can of worms.
>
>> Should it be possible to determine a version before loading a module?  If
>> yes, the version module would have to be able to find and parse version
>> strings in any of the many places this PEP suggests they could be... so that
>> would be somewhat complex, but the complexity shouldn't be used to change
>> the answer... but if the answer is yes, it might encourage fewer variant
>> cases to be supported for acceptable version definition locations for this
>> PEP.
> Yep, this is why the version information should be in the setup.cfg
> file, and hence available via pkgutil without loading the module
> first.
>
> Cheers,
> Nick.
>


-- 
http://www.voidspace.org.uk/

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing http://www.sqlite.org/different.html


From fuzzyman at voidspace.org.uk  Thu Apr  7 13:51:23 2011
From: fuzzyman at voidspace.org.uk (Michael Foord)
Date: Thu, 07 Apr 2011 12:51:23 +0100
Subject: [Python-Dev] PEP 396, Module Version Numbers
In-Reply-To: <4D9D9BC3.7040101@voidspace.org.uk>
References: <20110405145213.29f706aa@neurotica.wooz.org>	<4D9B7A1F.3070106@g.nevcal.com>	<BANLkTi==iQ1jwgUG7XFejwnapPreViKfLw@mail.gmail.com>
	<4D9D9BC3.7040101@voidspace.org.uk>
Message-ID: <4D9DA53B.9070805@voidspace.org.uk>

On 07/04/2011 12:10, Michael Foord wrote:
> On 06/04/2011 15:26, Nick Coghlan wrote:
>> On Wed, Apr 6, 2011 at 6:22 AM, Glenn 
>> Linderman<v+python at g.nevcal.com>  wrote:
>>> With more standardization of versions, should the version module be 
>>> promoted
>>> to stdlib directly?
>> When Tarek lands "packaging" (i.e. what distutils2 becomes in the
>> Python 3.3 stdlib), the standardised version handling will come with
>> it.
>>
>>> On 4/5/2011 11:52 AM, Barry Warsaw wrote:
>>>
>>>      DEFAULT_VERSION_RE = re.compile(r'(?P<version>\d+\.\d(?:\.\d+)?)')
>>>
>>>      __version__ = pkgutil.get_distribution('elle').metadata['version']
>>>
>
> I really dislike this way of specifying the version. For a start it is 
> really ugly.
>
> More importantly it means the version information is *only* available 
> if the package has been installed by "packaging", and so isn't 
> available for the parts of my pre-build process like building the 
> documentation (which import the version number to put into the docs).
>

And in fact it would make the module itself unimportable unless
installed by "packaging", so not compatible with other installation
methods (including the ever-loved 'just drop it somewhere on sys.path')
or earlier versions of Python that don't have the required APIs (or
don't have packaging installed).

So I don't think recommending 
"pkgutil.get_distribution('elle').metadata['version']" as a way for 
packages to provide version information is good advice.

All the best,

Michael Foord

> Currently all my packages have the canonical version number 
> information in the package itself using:
>
>     __version__ = '1.2.3'
>
> Anything that needs the version number, including setup.py for upload 
> to pypi, has one place to look for it and it doesn't depend on any 
> other tools or processes. If switching to "packaging" prevents me from 
> doing this then it will inhibit me using "packaging".
>
> What I may have to do is use a python script that will generate the 
> static metadata, which is not such a bad thing I guess as it will only 
> need to be executed at package build time. I won't be switching to 
> that horrible technique for specifying versions within my packages 
> though.
>
> All the best,
>
> Michael
>>> The RE as given won't match alpha, beta, rc, dev, and post suffixes 
>>> that are
>>> discussed in PEP 386.
>> Indeed, I really don't like the RE suggestion - better to tell people
>> to just move the version info into the static config file and use
>> pkgutil to make it available as shown. That solves the build time vs
>> install time problem as well.
>>
>>> Nor will it match the code shown and quoted for the alternative 
>>> distutils2
>>> case.
>>>
>>>
>>> Other comments:
>>>
>>> Are there issues for finding and loading multiple versions of the same
>>> module?
>> No, you simply can't do it. Python's import semantics are already
>> overly complicated even without opening that particular can of worms.
>>
>>> Should it be possible to determine a version before loading a 
>>> module?  If
>> yes, the version module would have to be able to find and parse version
>>> strings in any of the many places this PEP suggests they could be... 
>>> so that
>>> would be somewhat complex, but the complexity shouldn't be used to 
>>> change
>>> the answer... but if the answer is yes, it might encourage fewer 
>>> variant
>>> cases to be supported for acceptable version definition locations 
>>> for this
>>> PEP.
>> Yep, this is why the version information should be in the setup.cfg
>> file, and hence available via pkgutil without loading the module
>> first.
>>
>> Cheers,
>> Nick.
>>
>
>


-- 
http://www.voidspace.org.uk/

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing http://www.sqlite.org/different.html


From solipsis at pitrou.net  Thu Apr  7 13:51:48 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Thu, 7 Apr 2011 13:51:48 +0200
Subject: [Python-Dev] PEP 396, Module Version Numbers
References: <20110405145213.29f706aa@neurotica.wooz.org>
	<4D9B7A1F.3070106@g.nevcal.com>
	<BANLkTi==iQ1jwgUG7XFejwnapPreViKfLw@mail.gmail.com>
	<4D9D9BC3.7040101@voidspace.org.uk>
Message-ID: <20110407135148.18705d45@pitrou.net>

On Thu, 07 Apr 2011 12:10:59 +0100
Michael Foord <fuzzyman at voidspace.org.uk> wrote:
> On 06/04/2011 15:26, Nick Coghlan wrote:
> > On Wed, Apr 6, 2011 at 6:22 AM, Glenn Linderman<v+python at g.nevcal.com>  wrote:
> >> With more standardization of versions, should the version module be promoted
> >> to stdlib directly?
> > When Tarek lands "packaging" (i.e. what distutils2 becomes in the
> > Python 3.3 stdlib), the standardised version handling will come with
> > it.
> >
> >> On 4/5/2011 11:52 AM, Barry Warsaw wrote:
> >>
> >>      DEFAULT_VERSION_RE = re.compile(r'(?P<version>\d+\.\d(?:\.\d+)?)')
> >>
> >>      __version__ = pkgutil.get_distribution('elle').metadata['version']
> >>
> 
> I really dislike this way of specifying the version. For a start it is 
> really ugly.

Agreed, it is incredibly obscure and unpleasantly opaque.

Regards

Antoine.



From ncoghlan at gmail.com  Thu Apr  7 13:59:12 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 7 Apr 2011 21:59:12 +1000
Subject: [Python-Dev] PEP 396, Module Version Numbers
In-Reply-To: <4D9D9BC3.7040101@voidspace.org.uk>
References: <20110405145213.29f706aa@neurotica.wooz.org>
	<4D9B7A1F.3070106@g.nevcal.com>
	<BANLkTi==iQ1jwgUG7XFejwnapPreViKfLw@mail.gmail.com>
	<4D9D9BC3.7040101@voidspace.org.uk>
Message-ID: <BANLkTi=OWrd06dj8CRCO_B7c9XnKWQZbUw@mail.gmail.com>

On Thu, Apr 7, 2011 at 9:10 PM, Michael Foord <fuzzyman at voidspace.org.uk> wrote:
> I really dislike this way of specifying the version. For a start it is
> really ugly.
>
> More importantly it means the version information is *only* available if the
> package has been installed by "packaging", and so isn't available for the
> parts of my pre-build process like building the documentation (which import
> the version number to put into the docs).
>
> Currently all my packages have the canonical version number information in
> the package itself using:
>
>     __version__ = '1.2.3'
>
> Anything that needs the version number, including setup.py for upload to
> pypi, has one place to look for it and it doesn't depend on any other tools
> or processes. If switching to "packaging" prevents me from doing this then
> it will inhibit me using "packaging".
>
> What I may have to do is use a python script that will generate the static
> metadata, which is not such a bad thing I guess as it will only need to be
> executed at package build time. I won't be switching to that horrible
> technique for specifying versions within my packages though.

It sounds like part of the PEP needs another trip through
distutils-sig. An informational PEP really shouldn't be advocating
standard library changes, but it would make sense for this point of
view to inform any updates to PEP 386 (the version handling
standardisation PEP).

As I see it, there appear to be two main requests:
1. Normalised version parsing and comparison should be available even
if packaging itself is not installed (e.g. as part of pkgutil)
2. packaging should support extraction of the version metadata from
the source files when bundling a package for distribution

On point 2, rather than requiring that it be explicitly requested, I
would suggest the following semantics for determining the version when
bundling a package ready for distribution:

- if present in the metadata, use that
- if not present in the metadata, look for __version__ in the module
source code (or the __init__ source code for an actual package)
- otherwise warn the developer that no version information has been
provided so it is automatically being set to "0.0.0a0"
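
A rough sketch of those semantics, purely to pin the idea down (nothing
in packaging implements this today; 'metadata' is assumed to be
whatever mapping the tool built from setup.cfg):

    import re
    import warnings

    def resolve_version(metadata, module_source):
        if metadata.get('version'):
            return metadata['version']
        match = re.search(r"^__version__\s*=\s*['\"]([^'\"]+)['\"]",
                          module_source, re.MULTILINE)
        if match:
            return match.group(1)
        warnings.warn('no version information found; using "0.0.0a0"')
        return '0.0.0a0'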

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From fuzzyman at voidspace.org.uk  Thu Apr  7 14:22:24 2011
From: fuzzyman at voidspace.org.uk (Michael Foord)
Date: Thu, 07 Apr 2011 13:22:24 +0100
Subject: [Python-Dev] PEP 396, Module Version Numbers
In-Reply-To: <BANLkTi=OWrd06dj8CRCO_B7c9XnKWQZbUw@mail.gmail.com>
References: <20110405145213.29f706aa@neurotica.wooz.org>	<4D9B7A1F.3070106@g.nevcal.com>	<BANLkTi==iQ1jwgUG7XFejwnapPreViKfLw@mail.gmail.com>	<4D9D9BC3.7040101@voidspace.org.uk>
	<BANLkTi=OWrd06dj8CRCO_B7c9XnKWQZbUw@mail.gmail.com>
Message-ID: <4D9DAC80.3020006@voidspace.org.uk>

On 07/04/2011 12:59, Nick Coghlan wrote:
> On Thu, Apr 7, 2011 at 9:10 PM, Michael Foord<fuzzyman at voidspace.org.uk>  wrote:
>> I really dislike this way of specifying the version. For a start it is
>> really ugly.
>>
>> More importantly it means the version information is *only* available if the
>> package has been installed by "packaging", and so isn't available for the
>> parts of my pre-build process like building the documentation (which import
>> the version number to put into the docs).
>>
>> Currently all my packages have the canonical version number information in
>> the package itself using:
>>
>>     __version__ = '1.2.3'
>>
>> Anything that needs the version number, including setup.py for upload to
>> pypi, has one place to look for it and it doesn't depend on any other tools
>> or processes. If switching to "packaging" prevents me from doing this then
>> it will inhibit me using "packaging".
>>
>> What I may have to do is use a python script that will generate the static
>> metadata, which is not such a bad thing I guess as it will only need to be
>> executed at package build time. I won't be switching to that horrible
>> technique for specifying versions within my packages though.
> It sounds like part of the PEP needs another trip through
> distutils-sig. An informational PEP really shouldn't be advocating
> standard library changes, but it would make sense for this point of
> view to inform any updates to PEP 386 (the version handling
> standardisation PEP).
>
> As I see it, there appear to be two main requests:
> 1. Normalised version parsing and comparison should be available even
> if packaging itself is not installed (e.g. as part of pkgutil)
> 2. packaging should support extraction of the version metadata from
> the source files when bundling a package for distribution
>
> On point 2, rather than requiring that it be explicitly requested, I
> would suggest the following semantics for determining the version when
> bundling a package ready for distribution:
>
> - if present in the metadata, use that
> - if not present in the metadata, look for __version__ in the module
> source code (or the __init__ source code for an actual package)
> - otherwise warn the developer that no version information has been
> provided so it is automatically being set to "0.0.0a0"
>
This sounds good to me.

As an added consideration, the suggested technique may not work for tools
like py2exe / py2app, embedded Python and alternative implementations -
which may not have the full "packaging" machinery available.

All the best,

Michael Foord


> Cheers,
> Nick.
>


-- 
http://www.voidspace.org.uk/

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing http://www.sqlite.org/different.html


From a.badger at gmail.com  Thu Apr  7 18:13:55 2011
From: a.badger at gmail.com (Toshio Kuratomi)
Date: Thu, 7 Apr 2011 09:13:55 -0700
Subject: [Python-Dev] PEP 396, Module Version Numbers
In-Reply-To: <4D9C2C88.8020604@arbash-meinel.com>
References: <20110405145213.29f706aa@neurotica.wooz.org>
	<4D9C2C88.8020604@arbash-meinel.com>
Message-ID: <20110407161143.GA9851@unaka.lan>

On Wed, Apr 06, 2011 at 11:04:08AM +0200, John Arbash Meinel wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
> 
> 
> ...
> > #. ``__version_info__`` SHOULD be of the format returned by PEP 386's
> >    ``parse_version()`` function.
> 
> The only reference to parse_version in PEP 386 I could find was the
> setuptools implementation which is pretty odd:
> 
> > 
> > In other words, parse_version will return a tuple for each version string, that is compatible with StrictVersion but also accept arbitrary version and deal with them so they can be compared:
> > 
> >>>> from pkg_resources import parse_version as V
> >>>> V('1.2')
> > ('00000001', '00000002', '*final')
> >>>> V('1.2b2')
> > ('00000001', '00000002', '*b', '00000002', '*final')
> >>>> V('FunkyVersion')
> > ('*funkyversion', '*final')
> 
Barry -- I think we want to talk about NormalizedVersion.from_parts() rather
than parse_version().

> bzrlib has certainly used 'version_info' as a tuple indication such as:
> 
> version_info = (2, 4, 0, 'dev', 2)
> 
> and
> 
> version_info = (2, 4, 0, 'beta', 1)
> 
> and
> 
> version_info = (2, 3, 1, 'final', 0)
> 
> etc.
> 
> This is mapping what we could sort out from Python's "sys.version_info".
> 
> The *really* nice bit is that you can do:
> 
> if sys.version_info >= (2, 6):
>   # do stuff for python 2.6(.0) and beyond
> 
<nod>  People like to compare versions and the tuple forms allow that.  Note
that the tuples you give don't compare correctly.  This is the order that
they sort:

(2, 4, 0)
(2, 4, 0, 'beta', 1)
(2, 4, 0, 'dev', 2)
(2, 4, 0, 'final', 0)

So that means, snapshot releases will always sort after the alpha and beta
releases (and release candidate if you use 'c' to mean release candidate).
Since the simple (2, 4, 0) tuple sorts before everything else, a comparison
that isn't meant to match the 2.4.0 alphas (or betas or arbitrary dev
snapshots) would need to specify something like:

(2, 4, 0, 'z')

NormalizedVersion.from_parts() uses nested tuples to handle this better.
But I think that even with nested tuples a naive comparison fails since most
of the suffixes are prerelease strings.  ie: ((2, 4, 0),) < ((2, 4, 0),
('beta', 1))

So you can't escape needing a function to compare versions.
(NormalizedVersion does this by letting you compare NormalizedVersions
together).  Barry if this is correct, maybe __version_info__ is useless and
I shouldn't have brought it up at pycon?
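
A toy illustration of the sorting wart (this is not NormalizedVersion,
just a key function papering over the bare-tuple case):

    def release_key(vinfo):
        # Pad bare (major, minor, micro) tuples with a suffix that sorts
        # after 'beta', 'dev', etc., so a final release compares as newer
        # than its prereleases.  Note that 'dev' snapshots still land
        # between 'beta' and final, which is why a real comparison
        # function is needed.
        if len(vinfo) == 3:
            return vinfo + ('final', 0)
        return vinfo

    versions = [(2, 4, 0), (2, 4, 0, 'beta', 1), (2, 4, 0, 'dev', 2)]
    print(sorted(versions, key=release_key))
    # [(2, 4, 0, 'beta', 1), (2, 4, 0, 'dev', 2), (2, 4, 0)]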

-Toshio
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 198 bytes
Desc: not available
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110407/ac5eeef8/attachment.pgp>

From fabiofz at gmail.com  Thu Apr  7 18:18:52 2011
From: fabiofz at gmail.com (Fabio Zadrozny)
Date: Thu, 7 Apr 2011 13:18:52 -0300
Subject: [Python-Dev] Test cases not garbage collected after run
Message-ID: <BANLkTimKhJVdNHYfCh9Hu5NLkFo1ie8x5A@mail.gmail.com>

I actually created a bug entry for this
(http://bugs.python.org/issue11798) and just later it occurred that I
should've asked in the list first :)

So, here's the text for opinions:

Right now, when doing a test case, one must clear all the variables
created in the test class, and I believe this shouldn't be needed...

E.g.:

class Test(TestCase):
  def setUp(self):
    self.obj1 = MyObject()

  ...

  def tearDown(self):
    del self.obj1

Ideally (in my view), right after running the test, it should be
garbage-collected and the explicit tearDown just for deleting the
object wouldn't be needed (as the test would be garbage-collected,
that reference would automatically die), because this is currently
very error prone... (and probably a source of leaks for any
sufficiently big test suite).

If that's accepted, I can provide a patch.

Thanks,

Fabio

From techtonik at gmail.com  Thu Apr  7 18:21:41 2011
From: techtonik at gmail.com (anatoly techtonik)
Date: Thu, 7 Apr 2011 19:21:41 +0300
Subject: [Python-Dev] Force build form
In-Reply-To: <20110406234010.31dd64af@pitrou.net>
References: <20110406234010.31dd64af@pitrou.net>
Message-ID: <BANLkTiksfihem95aTsGsfY9PLkBJJUCv=Q@mail.gmail.com>

On Thu, Apr 7, 2011 at 12:40 AM, Antoine Pitrou <solipsis at pitrou.net> wrote:
>
> For the record, I've tried to make the force build form clearer on the
> buildbot Web UI. See e.g.:
> http://www.python.org/dev/buildbot/all/builders/x86%20OpenIndiana%20custom

Cool. I've recently discovered the buildbot page for Twisted. It is more
convenient to have the build request form on the right.
http://buildbot.twistedmatrix.com/builders/winxp32-py2.6-msi/
--
anatoly t.

From techtonik at gmail.com  Thu Apr  7 18:22:58 2011
From: techtonik at gmail.com (anatoly techtonik)
Date: Thu, 7 Apr 2011 19:22:58 +0300
Subject: [Python-Dev] Code highlighting in tracker
In-Reply-To: <BANLkTi=bGnpANFE9D42AeNCLEdM__qKbyQ@mail.gmail.com>
References: <BANLkTinaP0cAO-tinA1n=HKNABsoMp4_9Q@mail.gmail.com>
	<BANLkTi=bGnpANFE9D42AeNCLEdM__qKbyQ@mail.gmail.com>
Message-ID: <BANLkTikcz8o7cz+X5NEaeFjf-Dpyq512nw@mail.gmail.com>

On Thu, Apr 7, 2011 at 7:01 AM, Benjamin Peterson <benjamin at python.org> wrote:
> 2011/4/6 anatoly techtonik <techtonik at gmail.com>:
>> Is it a good idea to have code highlighting in tracker?
>
> Why would we need it?

Because tracker is ugly.

From benjamin at python.org  Thu Apr  7 18:29:09 2011
From: benjamin at python.org (Benjamin Peterson)
Date: Thu, 7 Apr 2011 11:29:09 -0500
Subject: [Python-Dev] Code highlighting in tracker
In-Reply-To: <BANLkTikcz8o7cz+X5NEaeFjf-Dpyq512nw@mail.gmail.com>
References: <BANLkTinaP0cAO-tinA1n=HKNABsoMp4_9Q@mail.gmail.com>
	<BANLkTi=bGnpANFE9D42AeNCLEdM__qKbyQ@mail.gmail.com>
	<BANLkTikcz8o7cz+X5NEaeFjf-Dpyq512nw@mail.gmail.com>
Message-ID: <BANLkTikx=_pnsProLK9L+CMau65yED+TnA@mail.gmail.com>

2011/4/7 anatoly techtonik <techtonik at gmail.com>:
> On Thu, Apr 7, 2011 at 7:01 AM, Benjamin Peterson <benjamin at python.org> wrote:
>> 2011/4/6 anatoly techtonik <techtonik at gmail.com>:
>>> Is it a good idea to have code highlighting in tracker?
>>
>> Why would we need it?
>
> Because tracker is ugly.

So we should add some highlighted code to spice it up? :)



-- 
Regards,
Benjamin

From eric at trueblade.com  Thu Apr  7 18:39:03 2011
From: eric at trueblade.com (Eric Smith)
Date: Thu, 7 Apr 2011 12:39:03 -0400 (EDT)
Subject: [Python-Dev] Code highlighting in tracker
In-Reply-To: <BANLkTikcz8o7cz+X5NEaeFjf-Dpyq512nw@mail.gmail.com>
References: <BANLkTinaP0cAO-tinA1n=HKNABsoMp4_9Q@mail.gmail.com>
	<BANLkTi=bGnpANFE9D42AeNCLEdM__qKbyQ@mail.gmail.com>
	<BANLkTikcz8o7cz+X5NEaeFjf-Dpyq512nw@mail.gmail.com>
Message-ID: <9153882f5d8a81edfc6c0b4782226b60.squirrel@mail.trueblade.com>

> On Thu, Apr 7, 2011 at 7:01 AM, Benjamin Peterson <benjamin at python.org>
> wrote:
>> 2011/4/6 anatoly techtonik <techtonik at gmail.com>:
>>> Is it a good idea to have code highlighting in tracker?
>>
>> Why would we need it?
>
> Because tracker is ugly.

That's not a good enough reason. I'm -1 on adding this: it's yet another
thing to maintain, and adding markup to the tracker would increase the
mental burden for using it.

Eric.


From fuzzyman at voidspace.org.uk  Thu Apr  7 18:49:29 2011
From: fuzzyman at voidspace.org.uk (Michael Foord)
Date: Thu, 07 Apr 2011 17:49:29 +0100
Subject: [Python-Dev] Test cases not garbage collected after run
In-Reply-To: <BANLkTimKhJVdNHYfCh9Hu5NLkFo1ie8x5A@mail.gmail.com>
References: <BANLkTimKhJVdNHYfCh9Hu5NLkFo1ie8x5A@mail.gmail.com>
Message-ID: <4D9DEB19.10307@voidspace.org.uk>

On 07/04/2011 17:18, Fabio Zadrozny wrote:
> I actually created a bug entry for this
> (http://bugs.python.org/issue11798) and just later it occurred that I
> should've asked in the list first :)
>
> So, here's the text for opinions:
>
> Right now, when doing a test case, one must clear all the variables
> created in the test class, and I believe this shouldn't be needed...
>
> E.g.:
>
> class Test(TestCase):
>    def setUp(self):
>      self.obj1 = MyObject()
>
>    ...
>
>    def tearDown(self):
>      del self.obj1
>
> Ideally (in my view), right after running the test, it should be
> garbage-collected and the explicit tearDown just for deleting the
> object wouldn't be needed (as the test would be garbage-collected,
> that reference would automatically die), because this is currently
> very error prone... (and probably a source of leaks for any
> sufficiently big test suite).
>
> If that's accepted, I can provide a patch.

You mean that the test run keeps the test instances alive for the whole
test run, so instance attributes are also kept alive. How would you solve
this - by having a TestSuite (which is how a test run is executed) remove
members from itself after each test execution? (Any failure tracebacks
etc. stored by the TestResult would also have to not keep the test
alive.)

My only concern would be backwards compatibility due to the change in 
behaviour.
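
Something along those lines might look like the sketch below (it pokes
at TestSuite._tests, which is an implementation detail, and ignores the
TestResult side of the problem):

    import unittest

    class ForgetfulSuite(unittest.TestSuite):
        def run(self, result):
            for i, test in enumerate(self._tests):
                if result.shouldStop:
                    break
                test(result)
                # Drop the reference so the test case (and anything
                # assigned in its setUp) can be garbage collected.
                self._tests[i] = None
            return result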

All the best,

Michael Foord

> Thanks,
>
> Fabio
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk


-- 
http://www.voidspace.org.uk/

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing http://www.sqlite.org/different.html


From brian.curtin at gmail.com  Thu Apr  7 18:48:51 2011
From: brian.curtin at gmail.com (Brian Curtin)
Date: Thu, 7 Apr 2011 11:48:51 -0500
Subject: [Python-Dev] Code highlighting in tracker
In-Reply-To: <BANLkTikcz8o7cz+X5NEaeFjf-Dpyq512nw@mail.gmail.com>
References: <BANLkTinaP0cAO-tinA1n=HKNABsoMp4_9Q@mail.gmail.com>
	<BANLkTi=bGnpANFE9D42AeNCLEdM__qKbyQ@mail.gmail.com>
	<BANLkTikcz8o7cz+X5NEaeFjf-Dpyq512nw@mail.gmail.com>
Message-ID: <BANLkTint1GZy3mrFviwLLTAjKjt2tNW7wQ@mail.gmail.com>

On Thu, Apr 7, 2011 at 11:22, anatoly techtonik <techtonik at gmail.com> wrote:

> On Thu, Apr 7, 2011 at 7:01 AM, Benjamin Peterson <benjamin at python.org>
> wrote:
> > 2011/4/6 anatoly techtonik <techtonik at gmail.com>:
> >> Is it a good idea to have code highlighting in tracker?
> >
> > Why would we need it?
>
> Because tracker is ugly.


It's a bug tracker, not a Myspace profile.

Unless the lack of syntax highlighting is causing a work stoppage, I don't
think we need this.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110407/d15b2998/attachment.html>

From raymond.hettinger at gmail.com  Thu Apr  7 18:57:10 2011
From: raymond.hettinger at gmail.com (Raymond Hettinger)
Date: Thu, 7 Apr 2011 09:57:10 -0700
Subject: [Python-Dev] Code highlighting in tracker
In-Reply-To: <BANLkTikcz8o7cz+X5NEaeFjf-Dpyq512nw@mail.gmail.com>
References: <BANLkTinaP0cAO-tinA1n=HKNABsoMp4_9Q@mail.gmail.com>
	<BANLkTi=bGnpANFE9D42AeNCLEdM__qKbyQ@mail.gmail.com>
	<BANLkTikcz8o7cz+X5NEaeFjf-Dpyq512nw@mail.gmail.com>
Message-ID: <7B5F1EFC-F440-4D6A-9F98-2D5A22B9C31E@gmail.com>


On Apr 7, 2011, at 9:22 AM, anatoly techtonik wrote:

> On Thu, Apr 7, 2011 at 7:01 AM, Benjamin Peterson <benjamin at python.org> wrote:
>> 2011/4/6 anatoly techtonik <techtonik at gmail.com>:
>>> Is it a good idea to have code highlighting in tracker?

+0

That has its high points:

* give tracker entries a more professional appearance
  closer to what is done on code paste sites, code viewers, and wikis

* provide a clean way to post code snippets
  (we've had past issues with whitespace being gobbled-up)


The downsides:
* it would probably need a preview button and markup help screen
* it's just one more thing to learn and maintain
* many ways to do it (code paste, rietveld, attaching a patch, plain text, etc)
* smells of feature creep


Raymond


From alexander.belopolsky at gmail.com  Thu Apr  7 19:28:41 2011
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Thu, 7 Apr 2011 13:28:41 -0400
Subject: [Python-Dev] Code highlighting in tracker
In-Reply-To: <7B5F1EFC-F440-4D6A-9F98-2D5A22B9C31E@gmail.com>
References: <BANLkTinaP0cAO-tinA1n=HKNABsoMp4_9Q@mail.gmail.com>
	<BANLkTi=bGnpANFE9D42AeNCLEdM__qKbyQ@mail.gmail.com>
	<BANLkTikcz8o7cz+X5NEaeFjf-Dpyq512nw@mail.gmail.com>
	<7B5F1EFC-F440-4D6A-9F98-2D5A22B9C31E@gmail.com>
Message-ID: <BANLkTi=nROCvs8+KWh05dUxDWtaWX+LhHw@mail.gmail.com>

On Thu, Apr 7, 2011 at 12:57 PM, Raymond Hettinger
<raymond.hettinger at gmail.com> wrote:
..
> * provide a clean way to post code snippets
> ?(we've had past issues with whitespace being gobbled-up)
>

What would really help is if someone would figure out how to stop the
tracker from removing the lines that start with the python >>> prompt
from comments sent by e-mail.

http://psf.upfronthosting.co.za/roundup/meta/issue321
http://psf.upfronthosting.co.za/roundup/meta/issue264

From robertc at robertcollins.net  Thu Apr  7 21:18:07 2011
From: robertc at robertcollins.net (Robert Collins)
Date: Fri, 8 Apr 2011 07:18:07 +1200
Subject: [Python-Dev] Test cases not garbage collected after run
In-Reply-To: <4D9DEB19.10307@voidspace.org.uk>
References: <BANLkTimKhJVdNHYfCh9Hu5NLkFo1ie8x5A@mail.gmail.com>
	<4D9DEB19.10307@voidspace.org.uk>
Message-ID: <BANLkTineVvCOJfbgL5-UqiGauSvQ0YYoOQ@mail.gmail.com>

On Fri, Apr 8, 2011 at 4:49 AM, Michael Foord <fuzzyman at voidspace.org.uk> wrote:
> You mean that the test run keeps the test instances alive for the whole test
> run so instance attributes are also kept alive. How would you solve this -
> by having calling a TestSuite (which is how a test run is executed) remove
> members from themselves after each test execution? (Any failure tracebacks
> etc stored by the TestResult would also have to not keep the test alive.)
>
> My only concern would be backwards compatibility due to the change in
> behaviour.

An alternative is in TestCase.run() / TestCase.__call__(), make a copy
and immediately delegate to it; that leaves the original untouched,
permitting run-in-a-loop style helpers to still work.

Testtools did something to address this problem, but I forget what it
was offhand.
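
A rough sketch of that copy-and-delegate idea (illustrative only, not
testtools' actual code; the _running_copy flag is made up for the example):

import copy
import unittest

class CopyRunTestCase(unittest.TestCase):
    # The original instance hands the real run off to a throwaway copy,
    # so attributes created in setUp live on the copy and can be
    # collected once it is gone.
    def run(self, result=None):
        if getattr(self, '_running_copy', False):
            return unittest.TestCase.run(self, result)
        clone = copy.copy(self)
        clone._running_copy = True
        return clone.run(result)

A test that can't be copied would simply not opt in to this and keep the
default run().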

-Rob

From fijall at gmail.com  Thu Apr  7 21:41:39 2011
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Thu, 7 Apr 2011 21:41:39 +0200
Subject: [Python-Dev] Policy for making changes to the AST
In-Reply-To: <AANLkTi=937PcWXN9-AWOwm4neK5Lk4nod=cKumr4xpjx@mail.gmail.com>
References: <AANLkTimuKcuK3jN-vzLU9QPiXpOEPypUp4WQTzTogoVN@mail.gmail.com>
	<4D9819EC.7040507@v.loewis.de>
	<BANLkTim5u6sTFSFt_VC=jbTcvPgMxcG5nw@mail.gmail.com>
	<AANLkTi=937PcWXN9-AWOwm4neK5Lk4nod=cKumr4xpjx@mail.gmail.com>
Message-ID: <BANLkTinJvzxpG92uhcfwxT+nLsbYaOubhg@mail.gmail.com>

> AFAIK the AST is
> CPython-specific so should be treated with the same attitude as
> changes to the bytecode. That means, do it conservatively, since there
> *are* people who like to write tools that manipulate or analyze this,
> and while they know they're doing something CPython and
> version-specific, they should not be broken by bugfix releases, since
> the people who *use* their code probably have no idea of the deep
> magic they're depending on.

PyPy implements exactly the same AST. I think Jython also does,
although I'm not that sure. There have already been issues - with, say,
subclassing ast nodes - where PyPy was incompatible with CPython. That
said, it's completely fine from PyPy's perspective to change the AST
between major releases.

From scopatz at gmail.com  Thu Apr  7 21:54:13 2011
From: scopatz at gmail.com (Anthony Scopatz)
Date: Thu, 7 Apr 2011 14:54:13 -0500
Subject: [Python-Dev] [GSoC] Developing a benchmark suite (for Python
	3.x)
In-Reply-To: <BANLkTim80hX2_K393XugVfOfU1oCvyboNA@mail.gmail.com>
References: <BANLkTim80hX2_K393XugVfOfU1oCvyboNA@mail.gmail.com>
Message-ID: <BANLkTi=hPxW998svwE3D=3K-gBShO89SXA@mail.gmail.com>

Hi Daniel,

Thanks for putting this together.  I am a huge supporter of benchmarking
efforts.  My brief comment is below.

On Wed, Apr 6, 2011 at 11:52 AM, DasIch <dasdasich at googlemail.com> wrote:

>
> 1. Definition of the benchmark suite. This will entail contacting
> developers of Python implementations (CPython, PyPy, IronPython and
> Jython), via discussion on the appropriate mailing lists. This might
> be achievable as part of this proposal.
>
>
If you are reaching out to other projects at this stage, I think you should
also be in touch with the Cython people  (even if its 'implementation'
sits on top of CPython).

As a scientist/engineer, what I care about is how Cython benchmarks
against CPython.  I believe that they have some ideas on benchmarking and
have also explored this space.  Their inclusion would help me consider
this GSoC successful at the end of the day (summer).

Thanks for your consideration.
Be Well
Anthony


> 2. Implementing the benchmark suite. Based on the prior agreed upon
> definition, the suite will be implemented, which means that the
> benchmarks will be merged into a single mercurial repository on
> Bitbucket[5].
>
> 3. Porting the suite to Python 3.x. The suite will be ported to 3.x
> using 2to3[6], as far as possible. The usage of 2to3 will make it
> easier to make changes to the repository, especially for those still
> focusing on 2.x. It is to be expected that some benchmarks cannot be
> ported due to dependencies which are not available on Python 3.x.
> Those will be ignored by this project to be ported at a later time,
> when the necessary requirements are met.
>
> Start of Program (May 24)
> ======================
>
> Before the coding, milestones 2 and 3, can begin it is necessary to
> agree upon a set of benchmarks, everyone is happy with, as described.
>
> Midterm Evaluation (July 12)
> =======================
>
> During the midterm I want to finish the second milestone and before
> the evaluation I want to start in the third milestone.
>
> Final Evaluation (Aug 16)
> =====================
>
> In this period the benchmark suite will be ported. If everything works
> out perfectly I will even have some time left, if there are problems I
> have a buffer here.
>
> Probably Asked Questions
> ======================
>
> Why not use one of the existing benchmark suites for porting?
>
> The effort would be wasted if there were no good base to build upon;
> creating a new benchmark suite based upon the existing ones ensures
> that there is.
>
> Why not use Git/Bazaar/...?
>
> Mercurial is used by CPython, PyPy and is fairly well known and used
> in the Python community. This ensures easy accessibility for everyone.
>
> What will happen with the Repository after GSoC/How will access to the
> repository be handled?
>
> I propose to give administrative rights to one or two representatives
> of each project. Those will provide other developers with write
> access.
>
> Communication
> =============
>
> Communication of the progress will be done via Twitter[7] and my
> blog[8], if desired I can also send an email with the contents of the
> blog post to the mailing lists of the implementations. Furthermore I
> am usually quick to answer via IRC (DasIch on freenode), Twitter or
> E-Mail(dasdasich at gmail.com) if anyone has any questions.
>
> Contact to the mentor can be established via the means mentioned above
> or via Skype.
>
> About Me
> ========
>
> My name is Daniel Neuhäuser, I am 19 years old and currently a student
> at the Bergstadt-Gymnasium Lüdenscheid[9]. I started programming (with
> Python) about 4 years ago and became a member of the Pocoo Team[10]
> after successfully participating in the Google Summer of Code last
> year, during which I ported Sphinx[11] to Python 3.x and implemented
> an algorithm to diff abstract syntax trees to preserve comments and
> translated strings which has been used by the other GSoC projects
> targeting Sphinx.
>
> .. [1]: https://bitbucket.org/pypy/benchmarks/src
> .. [2]: http://code.google.com/p/unladen-swallow/
> .. [3]: http://hg.python.org/benchmarks/file/tip/performance
> .. [4]:
> http://hg.python.org/benchmarks/file/62e754c57a7f/performance/README
> .. [5]: http://bitbucket.org/
> .. [6]: http://docs.python.org/library/2to3.html
> .. [7]: http://twitter.com/#!/DasIch
> .. [8]: http://dasdasich.blogspot.com/
> .. [9]: http://bergstadt-gymnasium.de/
> .. [10]: http://www.pocoo.org/team/#daniel-neuhauser
> .. [11]: http://sphinx.pocoo.org/
>
> P.S.: I would like to get in touch with the IronPython developers as
> well; unfortunately I was not able to find a mailing list or IRC
> channel. Is there anybody who can point me in the right direction?
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/scopatz%40gmail.com
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110407/bf80fa63/attachment.html>

From fuzzyman at voidspace.org.uk  Thu Apr  7 22:12:20 2011
From: fuzzyman at voidspace.org.uk (Michael Foord)
Date: Thu, 07 Apr 2011 21:12:20 +0100
Subject: [Python-Dev] Test cases not garbage collected after run
In-Reply-To: <BANLkTineVvCOJfbgL5-UqiGauSvQ0YYoOQ@mail.gmail.com>
References: <BANLkTimKhJVdNHYfCh9Hu5NLkFo1ie8x5A@mail.gmail.com>	<4D9DEB19.10307@voidspace.org.uk>
	<BANLkTineVvCOJfbgL5-UqiGauSvQ0YYoOQ@mail.gmail.com>
Message-ID: <4D9E1AA4.4020607@voidspace.org.uk>

On 07/04/2011 20:18, Robert Collins wrote:
> On Fri, Apr 8, 2011 at 4:49 AM, Michael Foord<fuzzyman at voidspace.org.uk>  wrote:
>> You mean that the test run keeps the test instances alive for the whole test
>> run so instance attributes are also kept alive. How would you solve this -
>> by having calling a TestSuite (which is how a test run is executed) remove
>> members from themselves after each test execution? (Any failure tracebacks
>> etc stored by the TestResult would also have to not keep the test alive.)
>>
>> My only concern would be backwards compatibility due to the change in
>> behaviour.
> An alternative is in TestCase.run() / TestCase.__call__(), make a copy
> and immediately delegate to it; that leaves the original untouched,
> permitting run-in-a-loop style helpers to still work.
>
> Testtools did something to address this problem, but I forget what it
> was offhand.
>
That doesn't sound like a general solution as not everything is copyable 
and I don't think we should make that a requirement of tests.

The proposed "fix" is to make test suite runs destructive, either 
replacing TestCase instances with None or pop'ing tests after they are 
run (the latter being what twisted Trial does). run-in-a-loop helpers 
could still repeatedly iterate over suites, just not call the suite.
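
As a rough sketch (simplified, and not an actual patch - the real
TestSuite.run also handles class and module fixtures, and this relies on
the private _tests list), a destructive suite might look like this:

import unittest

class DestructiveTestSuite(unittest.TestSuite):
    def run(self, result):
        # After each test has run, drop the suite's reference to it so
        # the instance (and anything stored on it in setUp) can be
        # garbage collected.
        for index, test in enumerate(self._tests):
            if result.shouldStop:
                break
            test(result)
            self._tests[index] = None
        return result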

All the best,

Michael

> -Rob


-- 
http://www.voidspace.org.uk/

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing http://www.sqlite.org/different.html


From jnoller at gmail.com  Thu Apr  7 22:28:44 2011
From: jnoller at gmail.com (Jesse Noller)
Date: Thu, 7 Apr 2011 16:28:44 -0400
Subject: [Python-Dev] [GSoC] Developing a benchmark suite (for Python
	3.x)
In-Reply-To: <BANLkTi=hPxW998svwE3D=3K-gBShO89SXA@mail.gmail.com>
References: <BANLkTim80hX2_K393XugVfOfU1oCvyboNA@mail.gmail.com>
	<BANLkTi=hPxW998svwE3D=3K-gBShO89SXA@mail.gmail.com>
Message-ID: <BANLkTinxo3ObyVr8qY-td0Vz160wzWzcKQ@mail.gmail.com>

On Thu, Apr 7, 2011 at 3:54 PM, Anthony Scopatz <scopatz at gmail.com> wrote:
> Hi Daniel,
> Thanks for putting this together. ?I am a huge supporter of benchmarking
> efforts. ?My brief comment is below.
>
> On Wed, Apr 6, 2011 at 11:52 AM, DasIch <dasdasich at googlemail.com> wrote:
>>
>> 1. Definition of the benchmark suite. This will entail contacting
>> developers of Python implementations (CPython, PyPy, IronPython and
>> Jython), via discussion on the appropriate mailing lists. This might
>> be achievable as part of this proposal.
>>
>
> If you are reaching out to other projects at this stage, I think you should
> also be in touch with the Cython people ?(even if its 'implementation'
> sits on top of CPython).
> As a scientist/engineer what I care about is how Cython benchmarks to
> CPython. ?I believe that they have some ideas on benchmarking and have
> also explored this space. ?Their inclusion would be helpful to me thinking
> this GSoC?successful at the end of the day (summer).
> Thanks for your consideration.
> Be Well
> Anthony

Right now, we are talking about building "speed.python.org" to test
the speed of python interpreters, over time, and alongside one another
- cython *is not* an interpreter.

Cython is out of scope for this.

From janssen at parc.com  Thu Apr  7 22:31:04 2011
From: janssen at parc.com (Bill Janssen)
Date: Thu, 7 Apr 2011 13:31:04 PDT
Subject: [Python-Dev] funky buildbot problems again...
Message-ID: <64081.1302208264@parc.com>

My Intel Snow Leopard 2 build slave has gone into outer-space again.

When I look at it, I see buildslave taking up most of a CPU (80%), and
nothing much else going on.  The twistd log says:

[... much omitted ...]
2011-04-04 08:35:47-0700 [-] sending app-level keepalive
2011-04-04 08:45:47-0700 [-] sending app-level keepalive
2011-04-04 08:55:47-0700 [-] sending app-level keepalive
2011-04-04 09:03:15-0700 [Broker,client] lost remote
2011-04-04 09:03:15-0700 [Broker,client] lost remote
2011-04-04 09:03:15-0700 [Broker,client] lost remote
2011-04-04 09:03:15-0700 [Broker,client] lost remote
2011-04-04 09:03:15-0700 [Broker,client] lost remote
2011-04-04 09:03:15-0700 [Broker,client] Lost connection to dinsdale.python.org:9020
2011-04-04 09:03:15-0700 [Broker,client] <twisted.internet.tcp.Connector instance at 0x101629ab8> will retry in 3 seconds
2011-04-04 09:03:15-0700 [Broker,client] Stopping factory <buildslave.bot.BotFactory instance at 0x1016299e0>
2011-04-04 09:03:18-0700 [-] Starting factory <buildslave.bot.BotFactory instance at 0x1016299e0>
2011-04-04 09:03:18-0700 [-] Connecting to dinsdale.python.org:9020
2011-04-04 09:03:18-0700 [Uninitialized] Connection to dinsdale.python.org:9020 failed: Connection Refused
2011-04-04 09:03:18-0700 [Uninitialized] <twisted.internet.tcp.Connector instance at 0x101629ab8> will retry in 8 seconds
2011-04-04 09:03:18-0700 [Uninitialized] Stopping factory <buildslave.bot.BotFactory instance at 0x1016299e0>
2011-04-04 09:03:27-0700 [-] Starting factory <buildslave.bot.BotFactory instance at 0x1016299e0>
2011-04-04 09:03:27-0700 [-] Connecting to dinsdale.python.org:9020

So it's been spinning its wheels for 3 days.

Sure looks like the connection attempt is failing, for some reason.

I'm using the stock Twisted that comes with Snow Leopard -- tried to
upgrade it but apparently can't.

On my OS X 10.4 buildslave, I see a similar but more successful sequence:

2011-04-04 08:56:06-0700 [-] sending app-level keepalive
2011-04-04 09:04:39-0700 [Broker,client] lost remote
2011-04-04 09:04:39-0700 [Broker,client] lost remote
2011-04-04 09:04:39-0700 [Broker,client] lost remote
2011-04-04 09:04:39-0700 [Broker,client] lost remote
2011-04-04 09:04:39-0700 [Broker,client] lost remote
2011-04-04 09:04:39-0700 [Broker,client] <twisted.internet.tcp.Connector instance at 0x10352d8> will retry in 3 seconds
2011-04-04 09:04:39-0700 [Broker,client] Stopping factory <buildslave.bot.BotFactory instance at 0x133bd78>
2011-04-04 09:04:42-0700 [-] Starting factory <buildslave.bot.BotFactory instance at 0x133bd78>
2011-04-04 09:04:43-0700 [Uninitialized] <twisted.internet.tcp.Connector instance at 0x10352d8> will retry in 10 seconds
2011-04-04 09:04:43-0700 [Uninitialized] Stopping factory <buildslave.bot.BotFactory instance at 0x133bd78>
2011-04-04 09:04:53-0700 [-] Starting factory <buildslave.bot.BotFactory instance at 0x133bd78>
2011-04-04 09:04:57-0700 [Broker,client] message from master: attached

Bill

From fuzzyman at voidspace.org.uk  Thu Apr  7 22:35:07 2011
From: fuzzyman at voidspace.org.uk (Michael Foord)
Date: Thu, 07 Apr 2011 21:35:07 +0100
Subject: [Python-Dev] [GSoC] Developing a benchmark suite (for Python
 3.x)
In-Reply-To: <BANLkTim80hX2_K393XugVfOfU1oCvyboNA@mail.gmail.com>
References: <BANLkTim80hX2_K393XugVfOfU1oCvyboNA@mail.gmail.com>
Message-ID: <4D9E1FFB.6030508@voidspace.org.uk>

On 06/04/2011 17:52, DasIch wrote:
> Hello Guys,
> I would like to present my proposal for the Google Summer of Code,
> concerning the idea of porting the benchmarks to Python 3.x for
> speed.pypy.org. I think I have successfully integrated the feedback I
> got from prior discussions on the topic and I would like to hear your
> opinion.
> [snip...]
> P.S.: I would like to get in touch with the IronPython developers as
> well, unfortunately I was not able to find a mailing list or IRC
> channel is there anybody how can send me in the right direction?

This is the IronPython mailing list:

     http://lists.ironpython.com/listinfo.cgi/users-ironpython.com

All the best,

Michael Foord


> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk


-- 
http://www.voidspace.org.uk/

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing http://www.sqlite.org/different.html


From fuzzyman at voidspace.org.uk  Thu Apr  7 22:36:36 2011
From: fuzzyman at voidspace.org.uk (Michael Foord)
Date: Thu, 07 Apr 2011 21:36:36 +0100
Subject: [Python-Dev] funky buildbot problems again...
In-Reply-To: <64081.1302208264@parc.com>
References: <64081.1302208264@parc.com>
Message-ID: <4D9E2054.3080409@voidspace.org.uk>

On 07/04/2011 21:31, Bill Janssen wrote:
> My Intel Snow Leopard 2 build slave has gone into outer-space again.
> [snip...]
> So it's been spinning its wheels for 3 days.
>
> Sure looks like the connection attempt is failing, for some reason.
>
> I'm using the stock Twisted that comes with Snow Leopard -- tried to
> upgrade it but apparently can't.
>

You certainly shouldn't update the Twisted on your system Python. Can't 
you install Python 2.6 (from python.org) separately and install Twisted 
into that?

Michael

> On my OS X 10.4 buildslave, I see a similar but more successful sequence:
>
> 2011-04-04 08:56:06-0700 [-] sending app-level keepalive
> 2011-04-04 09:04:39-0700 [Broker,client] lost remote
> 2011-04-04 09:04:39-0700 [Broker,client] lost remote
> 2011-04-04 09:04:39-0700 [Broker,client] lost remote
> 2011-04-04 09:04:39-0700 [Broker,client] lost remote
> 2011-04-04 09:04:39-0700 [Broker,client] lost remote
> 2011-04-04 09:04:39-0700 [Broker,client]<twisted.internet.tcp.Connector instance at 0x10352d8>  will retry in 3 seconds
> 2011-04-04 09:04:39-0700 [Broker,client] Stopping factory<buildslave.bot.BotFactory instance at 0x133bd78>
> 2011-04-04 09:04:42-0700 [-] Starting factory<buildslave.bot.BotFactory instance at 0x133bd78>
> 2011-04-04 09:04:43-0700 [Uninitialized]<twisted.internet.tcp.Connector instance at 0x10352d8>  will retry in 10 seconds
> 2011-04-04 09:04:43-0700 [Uninitialized] Stopping factory<buildslave.bot.BotFactory instance at 0x133bd78>
> 2011-04-04 09:04:53-0700 [-] Starting factory<buildslave.bot.BotFactory instance at 0x133bd78>
> 2011-04-04 09:04:57-0700 [Broker,client] message from master: attached
>
> Bill
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk


-- 
http://www.voidspace.org.uk/

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing http://www.sqlite.org/different.html


From tseaver at palladion.com  Thu Apr  7 23:32:24 2011
From: tseaver at palladion.com (Tres Seaver)
Date: Thu, 07 Apr 2011 17:32:24 -0400
Subject: [Python-Dev] [GSoC] Developing a benchmark suite (for Python
	3.x)
In-Reply-To: <BANLkTinxo3ObyVr8qY-td0Vz160wzWzcKQ@mail.gmail.com>
References: <BANLkTim80hX2_K393XugVfOfU1oCvyboNA@mail.gmail.com>	<BANLkTi=hPxW998svwE3D=3K-gBShO89SXA@mail.gmail.com>
	<BANLkTinxo3ObyVr8qY-td0Vz160wzWzcKQ@mail.gmail.com>
Message-ID: <inlah7$6o0$1@dough.gmane.org>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On 04/07/2011 04:28 PM, Jesse Noller wrote:
> On Thu, Apr 7, 2011 at 3:54 PM, Anthony Scopatz <scopatz at gmail.com> wrote:
>> Hi Daniel,
>> Thanks for putting this together.  I am a huge supporter of benchmarking
>> efforts.  My brief comment is below.
>>
>> On Wed, Apr 6, 2011 at 11:52 AM, DasIch <dasdasich at googlemail.com> wrote:
>>>
>>> 1. Definition of the benchmark suite. This will entail contacting
>>> developers of Python implementations (CPython, PyPy, IronPython and
>>> Jython), via discussion on the appropriate mailing lists. This might
>>> be achievable as part of this proposal.
>>>
>>
>> If you are reaching out to other projects at this stage, I think you should
>> also be in touch with the Cython people  (even if its 'implementation'
>> sits on top of CPython).
>> As a scientist/engineer what I care about is how Cython benchmarks to
>> CPython.  I believe that they have some ideas on benchmarking and have
>> also explored this space.  Their inclusion would be helpful to me thinking
>> this GSoC successful at the end of the day (summer).
>> Thanks for your consideration.
>> Be Well
>> Anthony
> 
> Right now, we are talking about building "speed.python.org" to test
> the speed of python interpreters, over time, and alongside one another
> - cython *is not* an interpreter.
> 
> Cython is out of scope for this.

Why is it out of scope to use the benchmarks and test harness to answer
questions like "can we use Cython to provide optional optimizations for
the stdlib"?  I can certainly see value in havng an objective way to
compare the macro benchmark performance of a Cython-optimized CPython
vs. a vanilla CPython, as well as vs. PyPY, Jython, or IronPython.


Tres.
- -- 
===================================================================
Tres Seaver          +1 540-429-0999          tseaver at palladion.com
Palladion Software   "Excellence by Design"    http://palladion.com
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.10 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iEYEARECAAYFAk2eLWcACgkQ+gerLs4ltQ4R7wCgmam/W+3JzJRgxtehnnfbE54S
RxcAn0ooO2kpw84kRvmTP5dCAWir9g3i
=3mL7
-----END PGP SIGNATURE-----


From solipsis at pitrou.net  Thu Apr  7 23:41:10 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Thu, 7 Apr 2011 23:41:10 +0200
Subject: [Python-Dev] [GSoC] Developing a benchmark suite (for Python
	3.x)
References: <BANLkTim80hX2_K393XugVfOfU1oCvyboNA@mail.gmail.com>
	<BANLkTi=hPxW998svwE3D=3K-gBShO89SXA@mail.gmail.com>
	<BANLkTinxo3ObyVr8qY-td0Vz160wzWzcKQ@mail.gmail.com>
	<inlah7$6o0$1@dough.gmane.org>
Message-ID: <20110407234110.38702683@pitrou.net>

On Thu, 07 Apr 2011 17:32:24 -0400
Tres Seaver <tseaver at palladion.com> wrote:
> > 
> > Right now, we are talking about building "speed.python.org" to test
> > the speed of python interpreters, over time, and alongside one another
> > - cython *is not* an interpreter.
> > 
> > Cython is out of scope for this.
> 
> Why is it out of scope to use the benchmarks and test harness to answer
> questions like "can we use Cython to provide optional optimizations for
> the stdlib"?  I can certainly see value in havng an objective way to
> compare the macro benchmark performance of a Cython-optimized CPython
> vs. a vanilla CPython, as well as vs. PyPY, Jython, or IronPython.

Agreed. Assuming someone wants to take care of the Cython side of
things, I don't think there's any reason to exclude it under the
dubious reason that it's "not an interpreter".
(would you exclude Psyco, if it was still alive?)

Regards

Antoine.



From benjamin at python.org  Thu Apr  7 23:56:52 2011
From: benjamin at python.org (Benjamin Peterson)
Date: Thu, 7 Apr 2011 16:56:52 -0500
Subject: [Python-Dev] Policy for making changes to the AST
In-Reply-To: <BANLkTinJvzxpG92uhcfwxT+nLsbYaOubhg@mail.gmail.com>
References: <AANLkTimuKcuK3jN-vzLU9QPiXpOEPypUp4WQTzTogoVN@mail.gmail.com>
	<4D9819EC.7040507@v.loewis.de>
	<BANLkTim5u6sTFSFt_VC=jbTcvPgMxcG5nw@mail.gmail.com>
	<AANLkTi=937PcWXN9-AWOwm4neK5Lk4nod=cKumr4xpjx@mail.gmail.com>
	<BANLkTinJvzxpG92uhcfwxT+nLsbYaOubhg@mail.gmail.com>
Message-ID: <BANLkTinFHRQMHkcXaHtrFAyqfyrUm9Hxkg@mail.gmail.com>

2011/4/7 Maciej Fijalkowski <fijall at gmail.com>:
>> AFAIK the AST is
>> CPython-specific so should be treated with the same attitude as
>> changes to the bytecode. That means, do it conservatively, since there
>> *are* people who like to write tools that manipulate or analyze this,
>> and while they know they're doing something CPython and
>> version-specific, they should not be broken by bugfix releases, since
>> the people who *use* their code probably have no idea of the deep
>> magic they're depending on.
>
> PyPy implements exactly the same AST. I think Jython also does,
> although I'm not that sure. There were already issues with say
> subclassing ast nodes were pypy was incompatible from CPython. That
> said, it's completely fine from PyPy's perspective to change AST
> between major releases.

Speaking as the author of PyPy's AST implementation, there are even
some changes I'd like that would make it easier!



-- 
Regards,
Benjamin

From janssen at parc.com  Fri Apr  8 00:18:29 2011
From: janssen at parc.com (Bill Janssen)
Date: Thu, 7 Apr 2011 15:18:29 PDT
Subject: [Python-Dev] funky buildbot problems again...
In-Reply-To: <4D9E2054.3080409@voidspace.org.uk>
References: <64081.1302208264@parc.com> <4D9E2054.3080409@voidspace.org.uk>
Message-ID: <66954.1302214709@parc.com>

Michael Foord <fuzzyman at voidspace.org.uk> wrote:

> On 07/04/2011 21:31, Bill Janssen wrote:
> > My Intel Snow Leopard 2 build slave has gone into outer-space again.
> > [snip...]
> > So it's been spinning its wheels for 3 days.
> >
> > Sure looks like the connection attempt is failing, for some reason.
> >
> > I'm using the stock Twisted that comes with Snow Leopard -- tried to
> > upgrade it but apparently can't.
> >
> 
> You certainly shouldn't update the Twisted on your system
> Python. Can't you install Python 2.6 (from python.org) separately and
> install Twisted into that?

Apparently not.  That's what I tried first -- install Python 2.7, and
then the latest Twisted.

Bill

From fwierzbicki at gmail.com  Fri Apr  8 00:36:57 2011
From: fwierzbicki at gmail.com (fwierzbicki at gmail.com)
Date: Thu, 7 Apr 2011 15:36:57 -0700
Subject: [Python-Dev] Policy for making changes to the AST
In-Reply-To: <BANLkTi=9CvWRFNbOC0MgqCmt6j=Sx9htgA@mail.gmail.com>
References: <AANLkTimuKcuK3jN-vzLU9QPiXpOEPypUp4WQTzTogoVN@mail.gmail.com>
	<BANLkTin5C+gGxo8gtsh=QRnb-EVBAQEvZw@mail.gmail.com>
	<4D9A0210.4000406@voidspace.org.uk>
	<BANLkTimRGz8vDHG63TG9qEg_gm7g8b9RYQ@mail.gmail.com>
	<CFB1DD2E-20FC-43C6-8A71-3500FC0E5E29@twistedmatrix.com>
	<BANLkTinfJq8w5HyBMfONLQKxPkS36FLd8w@mail.gmail.com>
	<4D9AE5BD.1030407@v.loewis.de>
	<BANLkTi=9CvWRFNbOC0MgqCmt6j=Sx9htgA@mail.gmail.com>
Message-ID: <BANLkTinvfzrAbjQyqs0MExT8RuRksrjCfA@mail.gmail.com>

On Tue, Apr 5, 2011 at 6:37 AM, Nick Coghlan <ncoghlan at gmail.com> wrote:
> 1. Making "docstring" an attribute of the Function node rather than
> leaving it embedded as the first statement in the suite (this avoids
> issues where AST-based constant folding could potentially corrupt the
> docstring)
> 2. Collapsing Num, Str, Bytes, Ellipsis into a single Literal node
> type (the handling of those nodes is the same in a lot of cases)
> 3. Since they're keywords now, pick up True, False, None at the
> parsing stage and turn them into instances of the Literal node type,
> allowing the current Name-based special casing to be removed.
All of these sound like useful changes to me - I wouldn't want them
blocked on Jython's account. We'll just implement them when we catch
up to this version as far as I'm concerned.
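
For context, the sort of third-party code affected by such changes is
anything that dispatches on the concrete node classes, e.g. this toy
example (purely illustrative, not from any real tool):

import ast

class NameCollector(ast.NodeVisitor):
    # NodeVisitor dispatches on node class names via visit_<ClassName>,
    # so collapsing or renaming node types is a visible change for code
    # like this.
    def __init__(self):
        self.names = []

    def visit_Name(self, node):
        self.names.append(node.id)
        self.generic_visit(node)

tree = ast.parse('result = value + other')
collector = NameCollector()
collector.visit(tree)
print(collector.names)   # ['result', 'value', 'other']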

-Frank

From nad at acm.org  Fri Apr  8 00:42:26 2011
From: nad at acm.org (Ned Deily)
Date: Thu, 07 Apr 2011 15:42:26 -0700
Subject: [Python-Dev] funky buildbot problems again...
References: <64081.1302208264@parc.com> <4D9E2054.3080409@voidspace.org.uk>
Message-ID: <nad-FBA277.15422607042011@news.gmane.org>

In article <4D9E2054.3080409 at voidspace.org.uk>,
 Michael Foord <fuzzyman at voidspace.org.uk> wrote:
> On 07/04/2011 21:31, Bill Janssen wrote:
> > My Intel Snow Leopard 2 build slave has gone into outer-space again.
> > [snip...]
> > So it's been spinning its wheels for 3 days.
> >
> > Sure looks like the connection attempt is failing, for some reason.
> >
> > I'm using the stock Twisted that comes with Snow Leopard -- tried to
> > upgrade it but apparently can't.
> You certainly shouldn't update the Twisted on your system Python. Can't 
> you install Python 2.6 (from python.org) separately and install Twisted 
> into that?

+1

That should have no impact that I can think of on any buildbot testing 
as python.org framework builds are entirely self-contained.

-- 
 Ned Deily,
 nad at acm.org


From fuzzyman at voidspace.org.uk  Fri Apr  8 01:11:17 2011
From: fuzzyman at voidspace.org.uk (Michael Foord)
Date: Fri, 08 Apr 2011 00:11:17 +0100
Subject: [Python-Dev] [GSoC] Developing a benchmark suite (for Python
 3.x)
In-Reply-To: <20110407234110.38702683@pitrou.net>
References: <BANLkTim80hX2_K393XugVfOfU1oCvyboNA@mail.gmail.com>	<BANLkTi=hPxW998svwE3D=3K-gBShO89SXA@mail.gmail.com>	<BANLkTinxo3ObyVr8qY-td0Vz160wzWzcKQ@mail.gmail.com>	<inlah7$6o0$1@dough.gmane.org>
	<20110407234110.38702683@pitrou.net>
Message-ID: <4D9E4495.4060302@voidspace.org.uk>

On 07/04/2011 22:41, Antoine Pitrou wrote:
> On Thu, 07 Apr 2011 17:32:24 -0400
> Tres Seaver<tseaver at palladion.com>  wrote:
>>> Right now, we are talking about building "speed.python.org" to test
>>> the speed of python interpreters, over time, and alongside one another
>>> - cython *is not* an interpreter.
>>>
>>> Cython is out of scope for this.
>> Why is it out of scope to use the benchmarks and test harness to answer
>> questions like "can we use Cython to provide optional optimizations for
>> the stdlib"?  I can certainly see value in havng an objective way to
>> compare the macro benchmark performance of a Cython-optimized CPython
>> vs. a vanilla CPython, as well as vs. PyPY, Jython, or IronPython.
> Agreed. Assuming someone wants to take care of the Cython side of
> things, I don't think there's any reason to exclude it under the
> dubious reason that it's "not an interpreter".
> (would you exclude Psyco, if it was still alive?)
>

Well, sure - but within the scope of a GSOC project limiting it to "core 
python" seems like a more realistic goal.

Adding cython later shouldn't be an issue if someone is willing to do 
the work.

All the best,

Michael Foord

> Regards
>
> Antoine.
>
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk


-- 
http://www.voidspace.org.uk/

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing http://www.sqlite.org/different.html


From exarkun at twistedmatrix.com  Fri Apr  8 01:17:03 2011
From: exarkun at twistedmatrix.com (exarkun at twistedmatrix.com)
Date: Thu, 07 Apr 2011 23:17:03 -0000
Subject: [Python-Dev] funky buildbot problems again...
In-Reply-To: <64081.1302208264@parc.com>
References: <64081.1302208264@parc.com>
Message-ID: <20110407231703.1992.33179214.divmod.xquotient.282@localhost.localdomain>

On 08:31 pm, janssen at parc.com wrote:
>My Intel Snow Leopard 2 build slave has gone into outer-space again.
>
>When I look at it, I see buildslave taking up most of a CPU (80%), and
>nothing much else going on.  The twistd log says:
>
>[... much omitted ...]
>2011-04-04 08:35:47-0700 [-] sending app-level keepalive
>2011-04-04 08:45:47-0700 [-] sending app-level keepalive
>2011-04-04 08:55:47-0700 [-] sending app-level keepalive
>2011-04-04 09:03:15-0700 [Broker,client] lost remote
>2011-04-04 09:03:15-0700 [Broker,client] lost remote
>2011-04-04 09:03:15-0700 [Broker,client] lost remote
>2011-04-04 09:03:15-0700 [Broker,client] lost remote
>2011-04-04 09:03:15-0700 [Broker,client] lost remote
>2011-04-04 09:03:15-0700 [Broker,client] Lost connection to 
>dinsdale.python.org:9020
>2011-04-04 09:03:15-0700 [Broker,client] 
><twisted.internet.tcp.Connector instance at 0x101629ab8> will retry in 
>3 seconds
>2011-04-04 09:03:15-0700 [Broker,client] Stopping factory 
><buildslave.bot.BotFactory instance at 0x1016299e0>
>2011-04-04 09:03:18-0700 [-] Starting factory 
><buildslave.bot.BotFactory instance at 0x1016299e0>
>2011-04-04 09:03:18-0700 [-] Connecting to dinsdale.python.org:9020
>2011-04-04 09:03:18-0700 [Uninitialized] Connection to 
>dinsdale.python.org:9020 failed: Connection Refused
>2011-04-04 09:03:18-0700 [Uninitialized] 
><twisted.internet.tcp.Connector instance at 0x101629ab8> will retry in 
>8 seconds
>2011-04-04 09:03:18-0700 [Uninitialized] Stopping factory 
><buildslave.bot.BotFactory instance at 0x1016299e0>
>2011-04-04 09:03:27-0700 [-] Starting factory 
><buildslave.bot.BotFactory instance at 0x1016299e0>
>2011-04-04 09:03:27-0700 [-] Connecting to dinsdale.python.org:9020
>
>So it's been spinning its wheels for 3 days.

Does this mean that the "2011-04-04 09:03:27-0700 [-] Connecting to 
dinsdale.python.org:9020" message in the logs is the last one you see 
until you restart the slave?

Or does it mean that the logs go on and on for three days with these 
"Connecting to dinsdale...." / "Connection Refused" / "... will retry in 
N seconds" cycles, thousands and thousands of times?

What does the buildmaster's info page for this slave say when the slave 
is in this state?  In particular, what does it say about 
"connects/hour"?

Jean-Paul

From scopatz at gmail.com  Fri Apr  8 01:36:53 2011
From: scopatz at gmail.com (Anthony Scopatz)
Date: Thu, 7 Apr 2011 18:36:53 -0500
Subject: [Python-Dev] [GSoC] Developing a benchmark suite (for Python
	3.x)
In-Reply-To: <4D9E4495.4060302@voidspace.org.uk>
References: <BANLkTim80hX2_K393XugVfOfU1oCvyboNA@mail.gmail.com>
	<BANLkTi=hPxW998svwE3D=3K-gBShO89SXA@mail.gmail.com>
	<BANLkTinxo3ObyVr8qY-td0Vz160wzWzcKQ@mail.gmail.com>
	<inlah7$6o0$1@dough.gmane.org> <20110407234110.38702683@pitrou.net>
	<4D9E4495.4060302@voidspace.org.uk>
Message-ID: <BANLkTimUnXqh3MnDNqe0iKUiV+ejJ5RJ2A@mail.gmail.com>

On Thu, Apr 7, 2011 at 6:11 PM, Michael Foord <fuzzyman at voidspace.org.uk>wrote:

> On 07/04/2011 22:41, Antoine Pitrou wrote:
>
>> On Thu, 07 Apr 2011 17:32:24 -0400
>> Tres Seaver<tseaver at palladion.com>  wrote:
>>
>>> Right now, we are talking about building "speed.python.org" to test
>>>> the speed of python interpreters, over time, and alongside one another
>>>> - cython *is not* an interpreter.
>>>>
>>>> Cython is out of scope for this.
>>>>
>>> Why is it out of scope to use the benchmarks and test harness to answer
>>> questions like "can we use Cython to provide optional optimizations for
>>> the stdlib"?  I can certainly see value in havng an objective way to
>>> compare the macro benchmark performance of a Cython-optimized CPython
>>> vs. a vanilla CPython, as well as vs. PyPY, Jython, or IronPython.
>>>
>> Agreed. Assuming someone wants to take care of the Cython side of
>> things, I don't think there's any reason to exclude it under the
>> dubious reason that it's "not an interpreter".
>> (would you exclude Psyco, if it was still alive?)
>>
>>
> Well, sure - but within the scope of a GSOC project limiting it to "core
> python" seems like a more realistic goal.
>
> Adding cython later shouldn't be an issue if someone is willing to do the
> work.


Jesse, I understand that we are talking about the benchmarks on
speed.pypy.org.  The current suite, and correct me if I
am wrong, is completely written in pure python so that any of the
'interpreters' may run them.

My point, which I stand by, was that during the initial phase (where
benchmarks are defined) the Cython crowd
should have a voice.  This should have an enriching effect on the whole
benchmarking task since they have
thought about this issue in a way that is largely orthogonal to the methods
PyPy developed.  I think it
would be a mistake to leave Cython out of the scoping study.

I actually agree with Michael.  I think the onus of getting the benchmarks
working on every platform falls on
that interpreter's community.

The benchmarking framework that is being developed as part of GSoC should be
agile enough to add and
drop projects over time and be able to mark certain tests as 'known
failures', etc.

I don't think I am asking anything unreasonable here, especially since at
the end of the day the purview of
projects like PyPy and Cython ("make Python faster") is the same.

Be Well
Anthony


>
>
> All the best,
>
> Michael Foord
>
>  Regards
>>
>> Antoine.
>>
>>
>> _______________________________________________
>> Python-Dev mailing list
>> Python-Dev at python.org
>> http://mail.python.org/mailman/listinfo/python-dev
>> Unsubscribe:
>> http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk
>>
>
>
> --
> http://www.voidspace.org.uk/
>
> May you do good and not evil
> May you find forgiveness for yourself and forgive others
> May you share freely, never taking more than you give.
> -- the sqlite blessing http://www.sqlite.org/different.html
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/scopatz%40gmail.com
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110407/fac291a8/attachment-0001.html>

From fuzzyman at voidspace.org.uk  Fri Apr  8 01:52:17 2011
From: fuzzyman at voidspace.org.uk (Michael Foord)
Date: Fri, 08 Apr 2011 00:52:17 +0100
Subject: [Python-Dev] [GSoC] Developing a benchmark suite (for Python
 3.x)
In-Reply-To: <BANLkTimUnXqh3MnDNqe0iKUiV+ejJ5RJ2A@mail.gmail.com>
References: <BANLkTim80hX2_K393XugVfOfU1oCvyboNA@mail.gmail.com>
	<BANLkTi=hPxW998svwE3D=3K-gBShO89SXA@mail.gmail.com>
	<BANLkTinxo3ObyVr8qY-td0Vz160wzWzcKQ@mail.gmail.com>
	<inlah7$6o0$1@dough.gmane.org> <20110407234110.38702683@pitrou.net>
	<4D9E4495.4060302@voidspace.org.uk>
	<BANLkTimUnXqh3MnDNqe0iKUiV+ejJ5RJ2A@mail.gmail.com>
Message-ID: <4D9E4E31.6000506@voidspace.org.uk>

On 08/04/2011 00:36, Anthony Scopatz wrote:
>
>
> On Thu, Apr 7, 2011 at 6:11 PM, Michael Foord 
> <fuzzyman at voidspace.org.uk <mailto:fuzzyman at voidspace.org.uk>> wrote:
>
>     On 07/04/2011 22:41, Antoine Pitrou wrote:
>
>         On Thu, 07 Apr 2011 17:32:24 -0400
>         Tres Seaver<tseaver at palladion.com
>         <mailto:tseaver at palladion.com>>  wrote:
>
>                 Right now, we are talking about building
>                 "speed.python.org <http://speed.python.org>" to test
>                 the speed of python interpreters, over time, and
>                 alongside one another
>                 - cython *is not* an interpreter.
>
>                 Cython is out of scope for this.
>
>             Why is it out of scope to use the benchmarks and test
>             harness to answer
>             questions like "can we use Cython to provide optional
>             optimizations for
>             the stdlib"?  I can certainly see value in havng an
>             objective way to
>             compare the macro benchmark performance of a
>             Cython-optimized CPython
>             vs. a vanilla CPython, as well as vs. PyPY, Jython, or
>             IronPython.
>
>         Agreed. Assuming someone wants to take care of the Cython side of
>         things, I don't think there's any reason to exclude it under the
>         dubious reason that it's "not an interpreter".
>         (would you exclude Psyco, if it was still alive?)
>
>
>     Well, sure - but within the scope of a GSOC project limiting it to
>     "core python" seems like a more realistic goal.
>
>     Adding cython later shouldn't be an issue if someone is willing to
>     do the work.
>
>
> Jesse, I understand that we are talking about the benchmarks on 
> speed.pypy.org <http://speed.pypy.org>.  The current suite, and 
> correct me if I
> am wrong, is completely written in pure python so that any of the 
> 'interpreters' may run them.
>
> My point, which I stand by, was that during the initial phase (where 
> benchmarks are defined) that the Cython crowd
> should have a voice.  This should have an enriching effect on the 
> whole benchmarking task since they have
> thought about this issue in a way that is largely orthogonal to the 
> methods PyPy developed.  I think it
> would be a mistake to leave Cython out of the scoping study.
>

Personally I think the GSoC project should just take the PyPy suite and 
run with that - bikeshedding about what benchmarks to include is going 
to make it hard to make progress. We can have fun with that discussion 
once we have the infrastructure and *some* good benchmarks in place (and 
the PyPy ones are good ones).

So I'm still with Jesse on this one. If there is any "discussion phase" 
as part of the GSoC project it should be very strictly bounded by time.

All the best,

Michael

> I actually agree with Micheal.  I think the onus of getting the 
> benchmarks working on every platform is the
> onus of that interpreter's community.
>
> The benchmarking framework that is being developed as part of GSoC 
> should be agile enough to add and
> drop projects over time and be able to make certain tests as 'known 
> failures', etc.
>
> I don't think I am asking anything unreasonable here.  Especially, 
> since at the end of the day the purview of
> projects like PyPy and Cython ("Make Python Faster") is the same.
>
> Be Well
> Anthony
>
>
>
>     All the best,
>
>     Michael Foord
>
>         Regards
>
>         Antoine.
>
>
>         _______________________________________________
>         Python-Dev mailing list
>         Python-Dev at python.org <mailto:Python-Dev at python.org>
>         http://mail.python.org/mailman/listinfo/python-dev
>         Unsubscribe:
>         http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk
>
>
>
>     -- 
>     http://www.voidspace.org.uk/
>
>     May you do good and not evil
>     May you find forgiveness for yourself and forgive others
>     May you share freely, never taking more than you give.
>     -- the sqlite blessing http://www.sqlite.org/different.html
>
>     _______________________________________________
>     Python-Dev mailing list
>     Python-Dev at python.org <mailto:Python-Dev at python.org>
>     http://mail.python.org/mailman/listinfo/python-dev
>     Unsubscribe:
>     http://mail.python.org/mailman/options/python-dev/scopatz%40gmail.com
>
>


-- 
http://www.voidspace.org.uk/

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing http://www.sqlite.org/different.html

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110408/5dac88ab/attachment.html>

From janssen at parc.com  Fri Apr  8 02:07:12 2011
From: janssen at parc.com (Bill Janssen)
Date: Thu, 7 Apr 2011 17:07:12 PDT
Subject: [Python-Dev] funky buildbot problems again...
In-Reply-To: <20110407231703.1992.33179214.divmod.xquotient.282@localhost.localdomain>
References: <64081.1302208264@parc.com>
	<20110407231703.1992.33179214.divmod.xquotient.282@localhost.localdomain>
Message-ID: <69698.1302221232@parc.com>

exarkun at twistedmatrix.com wrote:

> On 08:31 pm, janssen at parc.com wrote:
> >My Intel Snow Leopard 2 build slave has gone into outer-space again.
> >
> >When I look at it, I see buildslave taking up most of a CPU (80%), and
> >nothing much else going on.  The twistd log says:
> >
> >[... much omitted ...]
> >2011-04-04 08:35:47-0700 [-] sending app-level keepalive
> >2011-04-04 08:45:47-0700 [-] sending app-level keepalive
> >2011-04-04 08:55:47-0700 [-] sending app-level keepalive
> >2011-04-04 09:03:15-0700 [Broker,client] lost remote
> >2011-04-04 09:03:15-0700 [Broker,client] lost remote
> >2011-04-04 09:03:15-0700 [Broker,client] lost remote
> >2011-04-04 09:03:15-0700 [Broker,client] lost remote
> >2011-04-04 09:03:15-0700 [Broker,client] lost remote
> > 2011-04-04 09:03:15-0700 [Broker,client] Lost connection to
> > dinsdale.python.org:9020
> > 2011-04-04 09:03:15-0700 [Broker,client]
> > <twisted.internet.tcp.Connector instance at 0x101629ab8> will retry
> > in 3 seconds
> > 2011-04-04 09:03:15-0700 [Broker,client] Stopping factory
> > <buildslave.bot.BotFactory instance at 0x1016299e0>
> > 2011-04-04 09:03:18-0700 [-] Starting factory
> > <buildslave.bot.BotFactory instance at 0x1016299e0>
> >2011-04-04 09:03:18-0700 [-] Connecting to dinsdale.python.org:9020
> > 2011-04-04 09:03:18-0700 [Uninitialized] Connection to
> > dinsdale.python.org:9020 failed: Connection Refused
> > 2011-04-04 09:03:18-0700 [Uninitialized]
> > <twisted.internet.tcp.Connector instance at 0x101629ab8> will retry
> > in 8 seconds
> > 2011-04-04 09:03:18-0700 [Uninitialized] Stopping factory
> > <buildslave.bot.BotFactory instance at 0x1016299e0>
> > 2011-04-04 09:03:27-0700 [-] Starting factory
> > <buildslave.bot.BotFactory instance at 0x1016299e0>
> >2011-04-04 09:03:27-0700 [-] Connecting to dinsdale.python.org:9020
> >
> >So it's been spinning its wheels for 3 days.
> 
> Does this mean that the "2011-04-04 09:03:27-0700 [-] Connecting to
> dinsdale.python.org:9020" message in the logs is the last one you see
> until you restart the slave?

Yes, that's the last line in the file.

> Or does it mean that the logs go on and on for three days with these
> "Connecting to dinsdale...." / "Connection Refused" / "... will retry
> in N seconds" cycles, thousands and thousands of times?

Well, it's doing something, chewing up cycles, but there's only one
"Connecting" line at the end of the log file.

> What does the buildmaster's info page for this slave say when the
> slave is in this state?  In particular, what does it say about
> "connects/hour"?

Ah, good question.  Too bad I restarted the slave after I sent out my
info.  Is there some way to recover that from earlier?  If not, it will
undoubtedly fail again in a few days.

Bill

From scopatz at gmail.com  Fri Apr  8 02:43:02 2011
From: scopatz at gmail.com (Anthony Scopatz)
Date: Thu, 7 Apr 2011 19:43:02 -0500
Subject: [Python-Dev] [GSoC] Developing a benchmark suite (for Python
	3.x)
In-Reply-To: <4D9E4E31.6000506@voidspace.org.uk>
References: <BANLkTim80hX2_K393XugVfOfU1oCvyboNA@mail.gmail.com>
	<BANLkTi=hPxW998svwE3D=3K-gBShO89SXA@mail.gmail.com>
	<BANLkTinxo3ObyVr8qY-td0Vz160wzWzcKQ@mail.gmail.com>
	<inlah7$6o0$1@dough.gmane.org> <20110407234110.38702683@pitrou.net>
	<4D9E4495.4060302@voidspace.org.uk>
	<BANLkTimUnXqh3MnDNqe0iKUiV+ejJ5RJ2A@mail.gmail.com>
	<4D9E4E31.6000506@voidspace.org.uk>
Message-ID: <BANLkTi=K9oXLwBY+boNKoLhLzkx_0pi3og@mail.gmail.com>

On Thu, Apr 7, 2011 at 6:52 PM, Michael Foord <fuzzyman at voidspace.org.uk>wrote:

>
>
*some* good benchmarks in place (and the pypy ones are good ones).
>

Agreed. The PyPy ones are good.


>
> So I'm still with Jesse on this one. If there is any "discussion phase" as
> part of the Gsoc project it should be very strictly bounded by time.
>

I was simply going with what the abstract said.  I am fine with discussion
needing to be timely (a week?).  But it seems, from what you are saying -
just to be clear - that "Point (2) Implementation" is also non-existent, as
the PyPy benchmarks already exist.  If the point of the GSoC is to port the
PyPy benchmarks to Python 3, under "Point (3) Porting", might I suggest a
slight revision of the proposal ;)?

Be Well
Anthony


>
> All the best,
>
> Michael
>
>
>  I actually agree with Micheal.  I think the onus of getting the
> benchmarks working on every platform is the
> onus of that interpreter's community.
>
>  The benchmarking framework that is being developed as part of GSoC should
> be agile enough to add and
> drop projects over time and be able to make certain tests as 'known
> failures', etc.
>
>  I don't think I am asking anything unreasonable here.  Especially, since
> at the end of the day the purview of
> projects like PyPy and Cython ("Make Python Faster") is the same.
>
>  Be Well
> Anthony
>
>
>>
>>
>> All the best,
>>
>> Michael Foord
>>
>>   Regards
>>>
>>> Antoine.
>>>
>>>
>>> _______________________________________________
>>> Python-Dev mailing list
>>> Python-Dev at python.org
>>> http://mail.python.org/mailman/listinfo/python-dev
>>>  Unsubscribe:
>>> http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk
>>>
>>
>>
>> --
>> http://www.voidspace.org.uk/
>>
>> May you do good and not evil
>> May you find forgiveness for yourself and forgive others
>> May you share freely, never taking more than you give.
>> -- the sqlite blessing http://www.sqlite.org/different.html
>>
>> _______________________________________________
>>   Python-Dev mailing list
>> Python-Dev at python.org
>> http://mail.python.org/mailman/listinfo/python-dev
>> Unsubscribe:
>> http://mail.python.org/mailman/options/python-dev/scopatz%40gmail.com
>>
>
>
>
> -- http://www.voidspace.org.uk/
>
> May you do good and not evil
> May you find forgiveness for yourself and forgive others
> May you share freely, never taking more than you give.
> -- the sqlite blessing http://www.sqlite.org/different.html
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110407/6bc134c6/attachment.html>

From exarkun at twistedmatrix.com  Fri Apr  8 03:01:17 2011
From: exarkun at twistedmatrix.com (exarkun at twistedmatrix.com)
Date: Fri, 08 Apr 2011 01:01:17 -0000
Subject: [Python-Dev] funky buildbot problems again...
In-Reply-To: <69698.1302221232@parc.com>
References: <64081.1302208264@parc.com>
	<20110407231703.1992.33179214.divmod.xquotient.282@localhost.localdomain>
	<69698.1302221232@parc.com>
Message-ID: <20110408010117.1992.1304003430.divmod.xquotient.288@localhost.localdomain>

On 12:07 am, janssen at parc.com wrote:
>exarkun at twistedmatrix.com wrote:
>>On 08:31 pm, janssen at parc.com wrote:
>> >My Intel Snow Leopard 2 build slave has gone into outer-space again.
>> >
>> >When I look at it, I see buildslave taking up most of a CPU (80%), 
>>and
>> >nothing much else going on.  The twistd log says:
>> >
>> >[... much omitted ...]
>> >2011-04-04 08:35:47-0700 [-] sending app-level keepalive
>> >2011-04-04 08:45:47-0700 [-] sending app-level keepalive
>> >2011-04-04 08:55:47-0700 [-] sending app-level keepalive
>> >2011-04-04 09:03:15-0700 [Broker,client] lost remote
>> >2011-04-04 09:03:15-0700 [Broker,client] lost remote
>> >2011-04-04 09:03:15-0700 [Broker,client] lost remote
>> >2011-04-04 09:03:15-0700 [Broker,client] lost remote
>> >2011-04-04 09:03:15-0700 [Broker,client] lost remote
>> > 2011-04-04 09:03:15-0700 [Broker,client] Lost connection to
>> > dinsdale.python.org:9020
>> > 2011-04-04 09:03:15-0700 [Broker,client]
>> > <twisted.internet.tcp.Connector instance at 0x101629ab8> will retry
>> > in 3 seconds
>> > 2011-04-04 09:03:15-0700 [Broker,client] Stopping factory
>> > <buildslave.bot.BotFactory instance at 0x1016299e0>
>> > 2011-04-04 09:03:18-0700 [-] Starting factory
>> > <buildslave.bot.BotFactory instance at 0x1016299e0>
>> >2011-04-04 09:03:18-0700 [-] Connecting to dinsdale.python.org:9020
>> > 2011-04-04 09:03:18-0700 [Uninitialized] Connection to
>> > dinsdale.python.org:9020 failed: Connection Refused
>> > 2011-04-04 09:03:18-0700 [Uninitialized]
>> > <twisted.internet.tcp.Connector instance at 0x101629ab8> will retry
>> > in 8 seconds
>> > 2011-04-04 09:03:18-0700 [Uninitialized] Stopping factory
>> > <buildslave.bot.BotFactory instance at 0x1016299e0>
>> > 2011-04-04 09:03:27-0700 [-] Starting factory
>> > <buildslave.bot.BotFactory instance at 0x1016299e0>
>> >2011-04-04 09:03:27-0700 [-] Connecting to dinsdale.python.org:9020
>> >
>> >So it's been spinning its wheels for 3 days.
>>
>>Does this mean that the "2011-04-04 09:03:27-0700 [-] Connecting to
>>dinsdale.python.org:9020" message in the logs is the last one you see
>>until you restart the slave?
>
>Yes, that's the last line in the file.
>>Or does it mean that the logs go on and on for three days with these
>>"Connecting to dinsdale...." / "Connection Refused" / "... will retry
>>in N seconds" cycles, thousands and thousands of times?
>
>Well, it's doing something, chewing up cycles, but there's only one
>"Connecting" line at the end of the log file.

That's very interesting.  It may be worth doing some gdb or dtrace 
investigation next time it gets into this state.
>>What does the buildmaster's info page for this slave say when the
>>slave is in this state?  In particular, what does it say about
>>"connects/hour"?
>
>Ah, good question.  Too bad I restarted the slave after I sent out my
>info.  Is there some way to recover that from earlier?  If not, it will
>undoubtedly fail again in a few days.

If the master logs are available, that would provide some information. 
Otherwise, I think waiting for it to happen again is the thing to do.

Since there were no other messages in the log file, I expect the 
connects/hour value will be low - perhaps 0.

Jean-Paul

From eltoder at gmail.com  Fri Apr  8 03:02:12 2011
From: eltoder at gmail.com (Eugene Toder)
Date: Thu, 7 Apr 2011 21:02:12 -0400
Subject: [Python-Dev] Code highlighting in tracker
In-Reply-To: <BANLkTikcz8o7cz+X5NEaeFjf-Dpyq512nw@mail.gmail.com>
References: <BANLkTinaP0cAO-tinA1n=HKNABsoMp4_9Q@mail.gmail.com>
	<BANLkTi=bGnpANFE9D42AeNCLEdM__qKbyQ@mail.gmail.com>
	<BANLkTikcz8o7cz+X5NEaeFjf-Dpyq512nw@mail.gmail.com>
Message-ID: <BANLkTi=4dpzy47Hs34UKyMRLF+MfV=Ns5g@mail.gmail.com>

> Because tracker is ugly.

Is this an unbiased opinion? :)

Eugene

From robertc at robertcollins.net  Fri Apr  8 03:10:04 2011
From: robertc at robertcollins.net (Robert Collins)
Date: Fri, 8 Apr 2011 13:10:04 +1200
Subject: [Python-Dev] Test cases not garbage collected after run
In-Reply-To: <4D9E1AA4.4020607@voidspace.org.uk>
References: <BANLkTimKhJVdNHYfCh9Hu5NLkFo1ie8x5A@mail.gmail.com>
	<4D9DEB19.10307@voidspace.org.uk>
	<BANLkTineVvCOJfbgL5-UqiGauSvQ0YYoOQ@mail.gmail.com>
	<4D9E1AA4.4020607@voidspace.org.uk>
Message-ID: <BANLkTimo7dT7grkLaFukJd7YoQxr-QD1hA@mail.gmail.com>

On Fri, Apr 8, 2011 at 8:12 AM, Michael Foord <fuzzyman at voidspace.org.uk> wrote:
> On 07/04/2011 20:18, Robert Collins wrote:
>>
>> On Fri, Apr 8, 2011 at 4:49 AM, Michael Foord<fuzzyman at voidspace.org.uk>
>> ?wrote:
>>>
>>> You mean that the test run keeps the test instances alive for the whole
>>> test
>>> run so instance attributes are also kept alive. How would you solve this
>>> -
>>> by having calling a TestSuite (which is how a test run is executed)
>>> remove
>>> members from themselves after each test execution? (Any failure
>>> tracebacks
>>> etc stored by the TestResult would also have to not keep the test alive.)
>>>
>>> My only concern would be backwards compatibility due to the change in
>>> behaviour.
>>
>> An alternative is in TestCase.run() / TestCase.__call__(), make a copy
>> and immediately delegate to it; that leaves the original untouched,
>> permitting run-in-a-loop style helpers to still work.
>>
>> Testtools did something to address this problem, but I forget what it
>> was offhand.
>>
> That doesn't sound like a general solution as not everything is copyable and
> I don't think we should make that a requirement of tests.
>
> The proposed "fix" is to make test suite runs destructive, either replacing
> TestCase instances with None or pop'ing tests after they are run (the latter
> being what twisted Trial does). run-in-a-loop helpers could still repeatedly
> iterate over suites, just not call the suite.

That's quite expensive - repeating discovery etc. from scratch. If you
don't repeat discovery then you're assuming copyability. What I
suggested didn't /require/ copying - it delegates that to the test; an
uncopyable test would simply not do this.

-Rob

From jnoller at gmail.com  Fri Apr  8 03:29:34 2011
From: jnoller at gmail.com (Jesse Noller)
Date: Thu, 7 Apr 2011 21:29:34 -0400
Subject: [Python-Dev] [GSoC] Developing a benchmark suite (for Python
	3.x)
In-Reply-To: <4D9E4E31.6000506@voidspace.org.uk>
References: <BANLkTim80hX2_K393XugVfOfU1oCvyboNA@mail.gmail.com>
	<BANLkTi=hPxW998svwE3D=3K-gBShO89SXA@mail.gmail.com>
	<BANLkTinxo3ObyVr8qY-td0Vz160wzWzcKQ@mail.gmail.com>
	<inlah7$6o0$1@dough.gmane.org> <20110407234110.38702683@pitrou.net>
	<4D9E4495.4060302@voidspace.org.uk>
	<BANLkTimUnXqh3MnDNqe0iKUiV+ejJ5RJ2A@mail.gmail.com>
	<4D9E4E31.6000506@voidspace.org.uk>
Message-ID: <BANLkTimZqog3gxZhKqOuDVDeHeE71vMntg@mail.gmail.com>

On Thu, Apr 7, 2011 at 7:52 PM, Michael Foord <fuzzyman at voidspace.org.uk> wrote:
> On 08/04/2011 00:36, Anthony Scopatz wrote:
>
> On Thu, Apr 7, 2011 at 6:11 PM, Michael Foord <fuzzyman at voidspace.org.uk>
> wrote:
>>
>> On 07/04/2011 22:41, Antoine Pitrou wrote:
>>>
>>> On Thu, 07 Apr 2011 17:32:24 -0400
>>> Tres Seaver <tseaver at palladion.com> wrote:
>>>>>
>>>>> Right now, we are talking about building "speed.python.org" to test
>>>>> the speed of python interpreters, over time, and alongside one another
>>>>> - cython *is not* an interpreter.
>>>>>
>>>>> Cython is out of scope for this.
>>>>
>>>> Why is it out of scope to use the benchmarks and test harness to answer
>>>> questions like "can we use Cython to provide optional optimizations for
>>>> the stdlib"? ?I can certainly see value in havng an objective way to
>>>> compare the macro benchmark performance of a Cython-optimized CPython
>>>> vs. a vanilla CPython, as well as vs. PyPY, Jython, or IronPython.
>>>
>>> Agreed. Assuming someone wants to take care of the Cython side of
>>> things, I don't think there's any reason to exclude it under the
>>> dubious reason that it's "not an interpreter".
>>> (would you exclude Psyco, if it was still alive?)
>>>
>>
>> Well, sure - but within the scope of a GSOC project limiting it to "core
>> python" seems like a more realistic goal.
>>
>> Adding cython later shouldn't be an issue if someone is willing to do the
>> work.
>
> Jesse, I understand that we are talking about the benchmarks on
> speed.pypy.org.  The current suite, and correct me if I
> am wrong, is completely written in pure python so that any of the
> 'interpreters' may run them.
> My point, which I stand by, was that during the initial phase (where
> benchmarks are defined) that the Cython crowd
> should have a voice.  This should have an enriching effect on the whole
> benchmarking task since they have
> thought about this issue in a way that is largely orthogonal to the methods
> PyPy developed.  I think it
> would be a mistake to leave Cython out of the scoping study.
>
> Personally I think the Gsoc project should just take the pypy suite and run
> with that - bikeshedding about what benchmarks to include is going to make
> it hard to make progress. We can have fun with that discussion once we have
> the infrastructure and *some* good benchmarks in place (and the pypy ones
> are good ones).
>
> So I'm still with Jesse on this one. If there is any "discussion phase" as
> part of the Gsoc project it should be very strictly bounded by time.
>

What Michael said: My goal is to get speed.pypy.org ported to be
able to be used by $N interpreters, for $Y sets of performance
numbers. I'm trying to constrain the problem, and the initial
deployment so we don't spend the next year meandering about. It should
be sufficient to port the benchmarks from speed.pypy.org, and any
deltas from http://hg.python.org/benchmarks/ to Python 3 and the
framework that runs the tests to start.

I don't care if we eventually run cython, psyco, parrot, etc. But the
focus at the language summit, and the continued focus of me getting
the hardware via the PSF to host this on performance/speed.python.org
is tightly focused on the pypy, ironpython, jython and cpython
interpreters.

Let's just get our basics done first before we go all crazy with adding stuff :)

jesse

From eltoder at gmail.com  Fri Apr  8 03:43:07 2011
From: eltoder at gmail.com (Eugene Toder)
Date: Thu, 7 Apr 2011 21:43:07 -0400
Subject: [Python-Dev] abstractmethod doesn't work in classes
Message-ID: <BANLkTi=NXjojuqTPJ7kxHXPyFaYWdxLctQ@mail.gmail.com>

Hello,

I've found that abstractmethod and similar decorators "don't work" in
classes, inherited from built-in types other than object.
For example:

>>> import abc
>>> class MyBase(metaclass=abc.ABCMeta):
	@abc.abstractmethod
	def foo(): pass

>>> MyBase()
Traceback (most recent call last):
  File "<pyshell#8>", line 1, in <module>
    MyBase()
TypeError: Can't instantiate abstract class MyBase with abstract methods foo

So far so good, but:

>>> class MyList(list, MyBase):
	pass

>>> MyList()
[]
>>> MyList.__abstractmethods__
frozenset({'foo'})

This is unexpected, since MyList still doesn't implement foo.
Should this be considered a bug? I don't see this in documentation.
The underlying reason is that __abstractmethods__ is checked in
object_new, but built-in types typically call tp_alloc directly, thus
skipping the check.
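
A possible workaround (just a sketch, not a documented API) is to move
the check into the metaclass's __call__, which the built-in tp_new path
can't bypass:

    import abc

    class StrictABCMeta(abc.ABCMeta):
        """Sketch: re-check abstractness on every instantiation, so the
        check isn't skipped when a built-in base bypasses object_new."""
        def __call__(cls, *args, **kwargs):
            if getattr(cls, '__abstractmethods__', None):
                raise TypeError(
                    "Can't instantiate abstract class %s with abstract "
                    "methods %s" % (cls.__name__,
                                    ', '.join(sorted(cls.__abstractmethods__))))
            return super().__call__(*args, **kwargs)

    class MyBase(metaclass=StrictABCMeta):
        @abc.abstractmethod
        def foo(self): pass

    class MyList(list, MyBase):
        pass

    # MyList() now raises TypeError instead of silently returning []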

Eugene

From scopatz at gmail.com  Fri Apr  8 04:53:33 2011
From: scopatz at gmail.com (Anthony Scopatz)
Date: Thu, 7 Apr 2011 21:53:33 -0500
Subject: [Python-Dev] [GSoC] Developing a benchmark suite (for Python
	3.x)
In-Reply-To: <BANLkTimZqog3gxZhKqOuDVDeHeE71vMntg@mail.gmail.com>
References: <BANLkTim80hX2_K393XugVfOfU1oCvyboNA@mail.gmail.com>
	<BANLkTi=hPxW998svwE3D=3K-gBShO89SXA@mail.gmail.com>
	<BANLkTinxo3ObyVr8qY-td0Vz160wzWzcKQ@mail.gmail.com>
	<inlah7$6o0$1@dough.gmane.org> <20110407234110.38702683@pitrou.net>
	<4D9E4495.4060302@voidspace.org.uk>
	<BANLkTimUnXqh3MnDNqe0iKUiV+ejJ5RJ2A@mail.gmail.com>
	<4D9E4E31.6000506@voidspace.org.uk>
	<BANLkTimZqog3gxZhKqOuDVDeHeE71vMntg@mail.gmail.com>
Message-ID: <BANLkTik=qVFGTUaWPW3gecAHXnd_AQTohg@mail.gmail.com>

On Thu, Apr 7, 2011 at 8:29 PM, Jesse Noller <jnoller at gmail.com> wrote:

> On Thu, Apr 7, 2011 at 7:52 PM, Michael Foord <fuzzyman at voidspace.org.uk>
> wrote:
> > On 08/04/2011 00:36, Anthony Scopatz wrote:
> >
> > On Thu, Apr 7, 2011 at 6:11 PM, Michael Foord <fuzzyman at voidspace.org.uk
> >
> > wrote:
> >>
> >> On 07/04/2011 22:41, Antoine Pitrou wrote:
> >>>
> >>> On Thu, 07 Apr 2011 17:32:24 -0400
> >>> Tres Seaver<tseaver at palladion.com>  wrote:
> >>>>>
> >>>>> Right now, we are talking about building "speed.python.org" to test
> >>>>> the speed of python interpreters, over time, and alongside one
> another
> >>>>> - cython *is not* an interpreter.
> >>>>>
> >>>>> Cython is out of scope for this.
> >>>>
> >>>> Why is it out of scope to use the benchmarks and test harness to
> answer
> >>>> questions like "can we use Cython to provide optional optimizations
> for
> >>>> the stdlib"?  I can certainly see value in having an objective way to
> >>>> compare the macro benchmark performance of a Cython-optimized CPython
> >>>> vs. a vanilla CPython, as well as vs. PyPY, Jython, or IronPython.
> >>>
> >>> Agreed. Assuming someone wants to take care of the Cython side of
> >>> things, I don't think there's any reason to exclude it under the
> >>> dubious reason that it's "not an interpreter".
> >>> (would you exclude Psyco, if it was still alive?)
> >>>
> >>
> >> Well, sure - but within the scope of a GSOC project limiting it to "core
> >> python" seems like a more realistic goal.
> >>
> >> Adding cython later shouldn't be an issue if someone is willing to do
> the
> >> work.
> >
> > Jesse, I understand that we are talking about the benchmarks on
> > speed.pypy.org.  The current suite, and correct me if I
> > am wrong, is completely written in pure python so that any of the
> > 'interpreters' may run them.
> > My point, which I stand by, was that during the initial phase (where
> > benchmarks are defined) that the Cython crowd
> > should have a voice.  This should have an enriching effect on the whole
> > benchmarking task since they have
> > thought about this issue in a way that is largely orthogonal to the
> methods
> > PyPy developed.  I think it
> > would be a mistake to leave Cython out of the scoping study.
> >
> > Personally I think the Gsoc project should just take the pypy suite and
> run
> > with that - bikeshedding about what benchmarks to include is going to
> make
> > it hard to make progress. We can have fun with that discussion once we
> have
> > the infrastructure and *some* good benchmarks in place (and the pypy ones
> > are good ones).
> >
> > So I'm still with Jesse on this one. If there is any "discussion phase"
> as
> > part of the Gsoc project it should be very strictly bounded by time.
> >
>
> What Michael said: My goal is to get speed.pypy.org ported to be
> able to be used by $N interpreters, for $Y sets of performance
> numbers. I'm trying to constrain the problem, and the initial
> deployment so we don't spend the next year meandering about. It should
> be sufficient to port the benchmarks from speed.pypy.org, and any
> deltas from http://hg.python.org/benchmarks/ to Python 3 and the
> framework that runs the tests to start.
>
> I don't care if we eventually run cython, psyco, parrot, etc. But the
> focus at the language summit, and the continued focus of me getting
> the hardware via the PSF to host this on performance/speed.python.org
> is tightly focused on the pypy, ironpython, jython and cpython
> interpreters.
>
>

> Let's just get our basics done first before we go all crazy with adding
> stuff :)
>
>
Ahh gotcha, I think I misunderstood the scope in the short term ;).

Be Well
Anthony



>  jesse
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110407/89a82e18/attachment.html>

From urban.dani+py at gmail.com  Fri Apr  8 07:15:19 2011
From: urban.dani+py at gmail.com (Daniel Urban)
Date: Fri, 8 Apr 2011 07:15:19 +0200
Subject: [Python-Dev] abstractmethod doesn't work in classes
In-Reply-To: <BANLkTi=NXjojuqTPJ7kxHXPyFaYWdxLctQ@mail.gmail.com>
References: <BANLkTi=NXjojuqTPJ7kxHXPyFaYWdxLctQ@mail.gmail.com>
Message-ID: <BANLkTinnkFCSP7XN1yfdNCCQLjCq+eDPXA@mail.gmail.com>

> I've found that abstractmethod and similar decorators "don't work" in
> classes, inherited from built-in types other than object.

http://bugs.python.org/issue5996

From fijall at gmail.com  Fri Apr  8 08:16:09 2011
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Fri, 8 Apr 2011 08:16:09 +0200
Subject: [Python-Dev] [GSoC] Developing a benchmark suite (for Python
	3.x)
In-Reply-To: <BANLkTimZqog3gxZhKqOuDVDeHeE71vMntg@mail.gmail.com>
References: <BANLkTim80hX2_K393XugVfOfU1oCvyboNA@mail.gmail.com>
	<BANLkTi=hPxW998svwE3D=3K-gBShO89SXA@mail.gmail.com>
	<BANLkTinxo3ObyVr8qY-td0Vz160wzWzcKQ@mail.gmail.com>
	<inlah7$6o0$1@dough.gmane.org> <20110407234110.38702683@pitrou.net>
	<4D9E4495.4060302@voidspace.org.uk>
	<BANLkTimUnXqh3MnDNqe0iKUiV+ejJ5RJ2A@mail.gmail.com>
	<4D9E4E31.6000506@voidspace.org.uk>
	<BANLkTimZqog3gxZhKqOuDVDeHeE71vMntg@mail.gmail.com>
Message-ID: <BANLkTinBrUnzsc+c0mqfpCgDxxbipnBfNA@mail.gmail.com>

On Fri, Apr 8, 2011 at 3:29 AM, Jesse Noller <jnoller at gmail.com> wrote:
> On Thu, Apr 7, 2011 at 7:52 PM, Michael Foord <fuzzyman at voidspace.org.uk> wrote:
>> On 08/04/2011 00:36, Anthony Scopatz wrote:
>>
>> On Thu, Apr 7, 2011 at 6:11 PM, Michael Foord <fuzzyman at voidspace.org.uk>
>> wrote:
>>>
>>> On 07/04/2011 22:41, Antoine Pitrou wrote:
>>>>
>>>> On Thu, 07 Apr 2011 17:32:24 -0400
>>>> Tres Seaver <tseaver at palladion.com> wrote:
>>>>>>
>>>>>> Right now, we are talking about building "speed.python.org" to test
>>>>>> the speed of python interpreters, over time, and alongside one another
>>>>>> - cython *is not* an interpreter.
>>>>>>
>>>>>> Cython is out of scope for this.
>>>>>
>>>>> Why is it out of scope to use the benchmarks and test harness to answer
>>>>> questions like "can we use Cython to provide optional optimizations for
>>>>> the stdlib"?  I can certainly see value in having an objective way to
>>>>> compare the macro benchmark performance of a Cython-optimized CPython
>>>>> vs. a vanilla CPython, as well as vs. PyPY, Jython, or IronPython.
>>>>
>>>> Agreed. Assuming someone wants to take care of the Cython side of
>>>> things, I don't think there's any reason to exclude it under the
>>>> dubious reason that it's "not an interpreter".
>>>> (would you exclude Psyco, if it was still alive?)
>>>>
>>>
>>> Well, sure - but within the scope of a GSOC project limiting it to "core
>>> python" seems like a more realistic goal.
>>>
>>> Adding cython later shouldn't be an issue if someone is willing to do the
>>> work.
>>
>> Jesse, I understand that we are talking about the benchmarks on
>> speed.pypy.org.  The current suite, and correct me if I
>> am wrong, is completely written in pure python so that any of the
>> 'interpreters' may run them.
>> My point, which I stand by, was that during the initial phase (where
>> benchmarks are defined) that the Cython crowd
>> should have a voice.  This should have an enriching effect on the whole
>> benchmarking task since they have
>> thought about this issue in a way that is largely orthogonal to the methods
>> PyPy developed.  I think it
>> would be a mistake to leave Cython out of the scoping study.
>>
>> Personally I think the Gsoc project should just take the pypy suite and run
>> with that - bikeshedding about what benchmarks to include is going to make
>> it hard to make progress. We can have fun with that discussion once we have
>> the infrastructure and *some* good benchmarks in place (and the pypy ones
>> are good ones).
>>
>> So I'm still with Jesse on this one. If there is any "discussion phase" as
>> part of the Gsoc project it should be very strictly bounded by time.
>>
>
> What Michael said: My goal is to get speed.pypy.org ported to be
> able to be used by $N interpreters, for $Y sets of performance
> numbers. I'm trying to constrain the problem, and the initial
> deployment so we don't spend the next year meandering about. It should
> be sufficient to port the benchmarks from speed.pypy.org, and any
> deltas from http://hg.python.org/benchmarks/ to Python 3 and the
> framework that runs the tests to start.
>
> I don't care if we eventually run cython, psyco, parrot, etc. But the
> focus at the language summit, and the continued focus of me getting
> the hardware via the PSF to host this on performance/speed.python.org
> is tightly focused on the pypy, ironpython, jython and cpython
> interpreters.
>
> Let's just get our basics done first before we go all crazy with adding stuff :)
>
> jesse

Hi.

Spending significant effort to make those benchmarks run on Cython is
definitely out of scope. If Cython can, say, compile Twisted, we can
run it as well (or skip the few benchmarks that aren't compilable for
some time). If Cython won't run the major benchmarks (the ones using
large libraries), or if the Cython people complain all the time about
adding static type analysis (this won't happen), then the answer is
that Cython is not Python enough.

The second part is porting to Python 3. I would like to postpone this
part (I already chatted with DasIch about it) until the libraries are
ready. As of the 8th of April 2011, none of the interesting benchmarks
would run on, or be easy to port to, Python 3. By skipping all the
interesting ones (each of them requiring a large library), the outcome
would be a set of benchmarks that not many people care about and that
is not very interesting.

My proposal for steering this project would be to first get the
infrastructure running: improve the backend that runs the benchmarks
and make sure we build all the interpreters. Also polish codespeed so
it looks good for everyone. This is already *a lot* of work, even
though it might not look like it.

Cheers,
fijal

From stefan_ml at behnel.de  Fri Apr  8 11:22:43 2011
From: stefan_ml at behnel.de (Stefan Behnel)
Date: Fri, 08 Apr 2011 11:22:43 +0200
Subject: [Python-Dev] [GSoC] Developing a benchmark suite (for Python
	3.x)
In-Reply-To: <BANLkTinxo3ObyVr8qY-td0Vz160wzWzcKQ@mail.gmail.com>
References: <BANLkTim80hX2_K393XugVfOfU1oCvyboNA@mail.gmail.com>	<BANLkTi=hPxW998svwE3D=3K-gBShO89SXA@mail.gmail.com>
	<BANLkTinxo3ObyVr8qY-td0Vz160wzWzcKQ@mail.gmail.com>
Message-ID: <inmk53$aak$1@dough.gmane.org>

Jesse Noller, 07.04.2011 22:28:
> On Thu, Apr 7, 2011 at 3:54 PM, Anthony Scopatz wrote:
>> Hi Daniel,
>> Thanks for putting this together.  I am a huge supporter of benchmarking
>> efforts.  My brief comment is below.
>>
>> On Wed, Apr 6, 2011 at 11:52 AM, DasIch wrote:
>>>
>>> 1. Definition of the benchmark suite. This will entail contacting
>>> developers of Python implementations (CPython, PyPy, IronPython and
>>> Jython), via discussion on the appropriate mailing lists. This might
>>> be achievable as part of this proposal.
>>>
>>
>> If you are reaching out to other projects at this stage, I think you should
>> also be in touch with the Cython people  (even if its 'implementation'
>> sits on top of CPython).
>> As a scientist/engineer what I care about is how Cython benchmarks to
>> CPython.  I believe that they have some ideas on benchmarking and have
>> also explored this space.  Their inclusion would be helpful to me thinking
>> this GSoC successful at the end of the day (summer).
>> Thanks for your consideration.
>> Be Well
>> Anthony
>
> Right now, we are talking about building "speed.python.org" to test
> the speed of python interpreters, over time, and alongside one another
> - cython *is not* an interpreter.

Would you also want to exclude Psyco then? It clearly does not qualify as a 
Python interpreter.


> Cython is out of scope for this.

Why? It should be easy to integrate Cython using pyximport. Basically, all 
you have to do is register the pyximport module as an import hook. Cython 
will then try to compile the imported Python modules and fall back to the 
normal .py file import if the compilation fails for some reason.

So, once CPython is up and running in the benchmark test, adding Cython 
should be as easy as copying the configuration, installing Cython and 
adding two lines to site.py.
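
For reference, the two lines would presumably be something like this
(a sketch; worth double-checking against the pyximport docs):

    # assumed addition to site.py, with Cython installed
    import pyximport
    pyximport.install(pyimport=True)  # compile imported .py modules,
                                      # falling back to normal import on failure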

Obviously, we'd have to integrate a build of the latest Cython development 
sources as well, but it's not like installing a distutils enabled Python 
package from sources is so hard that it pushes Cython out of scope for this 
GSoC.

Stefan


From fijall at gmail.com  Fri Apr  8 11:41:05 2011
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Fri, 8 Apr 2011 11:41:05 +0200
Subject: [Python-Dev] [GSoC] Developing a benchmark suite (for Python
	3.x)
In-Reply-To: <inmk53$aak$1@dough.gmane.org>
References: <BANLkTim80hX2_K393XugVfOfU1oCvyboNA@mail.gmail.com>
	<BANLkTi=hPxW998svwE3D=3K-gBShO89SXA@mail.gmail.com>
	<BANLkTinxo3ObyVr8qY-td0Vz160wzWzcKQ@mail.gmail.com>
	<inmk53$aak$1@dough.gmane.org>
Message-ID: <BANLkTimOrV62y0N0vzX1ZsN9a_UmbbAS7A@mail.gmail.com>

On Fri, Apr 8, 2011 at 11:22 AM, Stefan Behnel <stefan_ml at behnel.de> wrote:
> Jesse Noller, 07.04.2011 22:28:
>>
>> On Thu, Apr 7, 2011 at 3:54 PM, Anthony Scopatz wrote:
>>>
>>> Hi Daniel,
>>> Thanks for putting this together.  I am a huge supporter of benchmarking
>>> efforts.  My brief comment is below.
>>>
>>> On Wed, Apr 6, 2011 at 11:52 AM, DasIch wrote:
>>>>
>>>> 1. Definition of the benchmark suite. This will entail contacting
>>>> developers of Python implementations (CPython, PyPy, IronPython and
>>>> Jython), via discussion on the appropriate mailing lists. This might
>>>> be achievable as part of this proposal.
>>>>
>>>
>>> If you are reaching out to other projects at this stage, I think you
>>> should
>>> also be in touch with the Cython people (even if its 'implementation'
>>> sits on top of CPython).
>>> As a scientist/engineer what I care about is how Cython benchmarks to
>>> CPython.  I believe that they have some ideas on benchmarking and have
>>> also explored this space.  Their inclusion would be helpful to me
>>> thinking
>>> this GSoC successful at the end of the day (summer).
>>> Thanks for your consideration.
>>> Be Well
>>> Anthony
>>
>> Right now, we are talking about building "speed.python.org" to test
>> the speed of python interpreters, over time, and alongside one another
>> - cython *is not* an interpreter.
>
> Would you also want to exclude Psyco then? It clearly does not qualify as a
> Python interpreter.
>

Why not? it does run those benchmarks just fine.

>
>> Cython is out of scope for this.
>
> Why? It should be easy to integrate Cython using pyximport. Basically, all
> you have to do is register the pyximport module as an import hook. Cython
> will then try to compile the imported Python modules and fall back to the
> normal .py file import if the compilation fails for some reason.

then it's fine to include it. we can even include it now in
speed.pypy.org that way. would it compile django?

>
> So, once CPython is up and running in the benchmark test, adding Cython
> should be as easy as copying the configuration, installing Cython and adding
> two lines to site.py.

can you provide a simple command line tool for that? I want
essentially to run ./cython-importing-stuff some-file.py

>
> Obviously, we'd have to integrate a build of the latest Cython development
> sources as well, but it's not like installing a distutils enabled Python
> package from sources is so hard that it pushes Cython out of scope for this
> GSoC.

no, that's fine. My main concern is - will cython run those
benchmarks? And will you complain if we don't provide custom cython
hacks? (like providing extra type information)

>
> Stefan
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/fijall%40gmail.com
>

From stefan_ml at behnel.de  Fri Apr  8 12:18:51 2011
From: stefan_ml at behnel.de (Stefan Behnel)
Date: Fri, 08 Apr 2011 12:18:51 +0200
Subject: [Python-Dev] [GSoC] Developing a benchmark suite (for Python
	3.x)
In-Reply-To: <BANLkTimOrV62y0N0vzX1ZsN9a_UmbbAS7A@mail.gmail.com>
References: <BANLkTim80hX2_K393XugVfOfU1oCvyboNA@mail.gmail.com>	<BANLkTi=hPxW998svwE3D=3K-gBShO89SXA@mail.gmail.com>	<BANLkTinxo3ObyVr8qY-td0Vz160wzWzcKQ@mail.gmail.com>	<inmk53$aak$1@dough.gmane.org>
	<BANLkTimOrV62y0N0vzX1ZsN9a_UmbbAS7A@mail.gmail.com>
Message-ID: <inmneb$sf1$1@dough.gmane.org>

Maciej Fijalkowski, 08.04.2011 11:41:
> On Fri, Apr 8, 2011 at 11:22 AM, Stefan Behnel<stefan_ml at behnel.de>  wrote:
>> Jesse Noller, 07.04.2011 22:28:
>>>
>>> On Thu, Apr 7, 2011 at 3:54 PM, Anthony Scopatz wrote:
>>>>
>>>> Hi Daniel,
>>>> Thanks for putting this together.  I am a huge supporter of benchmarking
>>>> efforts.  My brief comment is below.
>>>>
>>>> On Wed, Apr 6, 2011 at 11:52 AM, DasIch wrote:
>>>>>
>>>>> 1. Definition of the benchmark suite. This will entail contacting
>>>>> developers of Python implementations (CPython, PyPy, IronPython and
>>>>> Jython), via discussion on the appropriate mailing lists. This might
>>>>> be achievable as part of this proposal.
>>>>>
>>>>
>>>> If you are reaching out to other projects at this stage, I think you
>>>> should
>>>> also be in touch with the Cython people  (even if its 'implementation'
>>>> sits on top of CPython).
>>>> As a scientist/engineer what I care about is how Cython benchmarks to
>>>> CPython.  I believe that they have some ideas on benchmarking and have
>>>> also explored this space.  Their inclusion would be helpful to me
>>>> thinking
>>>> this GSoC successful at the end of the day (summer).
>>>> Thanks for your consideration.
>>>> Be Well
>>>> Anthony
>>>
>>> Right now, we are talking about building "speed.python.org" to test
>>> the speed of python interpreters, over time, and alongside one another
>>> - cython *is not* an interpreter.
>>
>> Would you also want to exclude Psyco then? It clearly does not qualify as a
>> Python interpreter.
>
> Why not? it does run those benchmarks just fine.

Sure.


>>> Cython is out of scope for this.
>>
>> Why? It should be easy to integrate Cython using pyximport. Basically, all
>> you have to do is register the pyximport module as an import hook. Cython
>> will then try to compile the imported Python modules and fall back to the
>> normal .py file import if the compilation fails for some reason.
>
> then it's fine to include it. we can even include it now in
> speed.pypy.org that way. would it compile django?

Never tried. Likely not completely, but surely some major parts of it. 
That's the beauty of it - it just falls back to CPython. :) If we're lucky, 
it will manage to compile some performance critical parts without 
modifications. In any case, it'll be trying to compile each module.


>> So, once CPython is up and running in the benchmark test, adding Cython
>> should be as easy as copying the configuration, installing Cython and adding
>> two lines to site.py.
>
> can you provide a simple command line tool for that? I want
> essentially to run ./cython-importing-stuff some-file.py

You can try

     python -c 'import pyximport; \
                pyximport.install(pyimport=True); \
                exec("somefile.py")'

You may want to configure the output directory for the binary modules, 
though, see

https://github.com/cython/cython/blob/master/pyximport/pyximport.py#L343

Please also take care to provide suitable gcc CFLAGS, e.g. "-O3 
-march=native" etc.


>> Obviously, we'd have to integrate a build of the latest Cython development
>> sources as well, but it's not like installing a distutils enabled Python
>> package from sources is so hard that it pushes Cython out of scope for this
>> GSoC.
>
> no, that's fine. My main concern is - will cython run those
> benchmarks?

In the worst case, they will run at CPython speed with uncompiled modules.


> and will you complain if we don't provide a custom cython
> hacks? (like providing extra type information)

I don't consider providing extra type information a hack. Remember that 
they are only used for additional speed-ups in cases where the author is 
smarter than the compiler. It will work just fine without them.

Stefan


From fuzzyman at voidspace.org.uk  Fri Apr  8 12:38:53 2011
From: fuzzyman at voidspace.org.uk (Michael Foord)
Date: Fri, 08 Apr 2011 11:38:53 +0100
Subject: [Python-Dev] Test cases not garbage collected after run
In-Reply-To: <BANLkTimo7dT7grkLaFukJd7YoQxr-QD1hA@mail.gmail.com>
References: <BANLkTimKhJVdNHYfCh9Hu5NLkFo1ie8x5A@mail.gmail.com>	<4D9DEB19.10307@voidspace.org.uk>	<BANLkTineVvCOJfbgL5-UqiGauSvQ0YYoOQ@mail.gmail.com>	<4D9E1AA4.4020607@voidspace.org.uk>
	<BANLkTimo7dT7grkLaFukJd7YoQxr-QD1hA@mail.gmail.com>
Message-ID: <4D9EE5BD.5030700@voidspace.org.uk>

On 08/04/2011 02:10, Robert Collins wrote:
> On Fri, Apr 8, 2011 at 8:12 AM, Michael Foord<fuzzyman at voidspace.org.uk>  wrote:
>> On 07/04/2011 20:18, Robert Collins wrote:
>>> On Fri, Apr 8, 2011 at 4:49 AM, Michael Foord<fuzzyman at voidspace.org.uk>
>>>   wrote:
>>>> You mean that the test run keeps the test instances alive for the whole
>>>> test
>>>> run so instance attributes are also kept alive. How would you solve this
>>>> -
>>>> by having calling a TestSuite (which is how a test run is executed)
>>>> remove
>>>> members from themselves after each test execution? (Any failure
>>>> tracebacks
>>>> etc stored by the TestResult would also have to not keep the test alive.)
>>>>
>>>> My only concern would be backwards compatibility due to the change in
>>>> behaviour.
>>> An alternative is in TestCase.run() / TestCase.__call__(), make a copy
>>> and immediately delegate to it; that leaves the original untouched,
>>> permitting run-in-a-loop style helpers to still work.
>>>
>>> Testtools did something to address this problem, but I forget what it
>>> was offhand.
>>>
>> That doesn't sound like a general solution as not everything is copyable and
>> I don't think we should make that a requirement of tests.
>>
>> The proposed "fix" is to make test suite runs destructive, either replacing
>> TestCase instances with None or pop'ing tests after they are run (the latter
>> being what twisted Trial does). run-in-a-loop helpers could still repeatedly
>> iterate over suites, just not call the suite.
> That's quite expensive - repeating discovery etc. from scratch.

Nope, just executing the tests by iterating over the suite and calling 
them individually - no need to repeat discovery. With the fix in place 
executing tests by calling the suite would be destructive, but iterating 
over the suite wouldn't be destructive - so the contained tests can 
still be executed repeatedly by copying and executing.
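
To make that concrete, a minimal sketch (not the actual unittest or
trial code) of what a destructive suite run could look like - each test
is dropped from the suite before it runs, so the suite no longer keeps
it alive afterwards, while plain iteration stays non-destructive:

    import unittest

    class DestructiveSuite(unittest.TestSuite):
        def run(self, result):
            # pop each test so the suite drops its reference after running it
            while self._tests:
                if result.shouldStop:
                    break
                test = self._tests.pop(0)
                test(result)
            return result

Run-in-a-loop helpers would then iterate over list(suite) themselves
rather than calling the suite.
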
>   If you
> don't repeat discovery then you're assuming copyability.
Well, individual test frameworks are free to assume what they want. My 
point is that frameworks that wish to do that would still be able to do 
this, but they'd have to iterate over the suite themselves rather than 
calling it directly.
> What I
> suggested didn't /require/ copying - it delegates it to the test, an
> uncopyable test would simply not do this.
>
Ok, so you're not suggesting tests copy themselves by default? In which 
case I don't see that you're offering a fix for the problem. (Or at 
least not a built-in one.)

All the best,

Michael Foord

> -Rob


-- 
http://www.voidspace.org.uk/

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing http://www.sqlite.org/different.html


From fuzzyman at voidspace.org.uk  Fri Apr  8 13:26:40 2011
From: fuzzyman at voidspace.org.uk (Michael Foord)
Date: Fri, 08 Apr 2011 12:26:40 +0100
Subject: [Python-Dev] [GSoC] Developing a benchmark suite (for Python
 3.x)
In-Reply-To: <inmneb$sf1$1@dough.gmane.org>
References: <BANLkTim80hX2_K393XugVfOfU1oCvyboNA@mail.gmail.com>	<BANLkTi=hPxW998svwE3D=3K-gBShO89SXA@mail.gmail.com>	<BANLkTinxo3ObyVr8qY-td0Vz160wzWzcKQ@mail.gmail.com>	<inmk53$aak$1@dough.gmane.org>	<BANLkTimOrV62y0N0vzX1ZsN9a_UmbbAS7A@mail.gmail.com>
	<inmneb$sf1$1@dough.gmane.org>
Message-ID: <4D9EF0F0.4010305@voidspace.org.uk>

On 08/04/2011 11:18, Stefan Behnel wrote:
> Maciej Fijalkowski, 08.04.2011 11:41:
>> On Fri, Apr 8, 2011 at 11:22 AM, Stefan Behnel<stefan_ml at behnel.de>  
>> wrote:
>>> [snip...]
>>> So, once CPython is up and running in the benchmark test, adding Cython
>>> should be as easy as copying the configuration, installing Cython 
>>> and adding
>>> two lines to site.py.
>>
>> can you provide a simple command line tool for that? I want
>> essentially to run ./cython-importing-stuff some-file.py
>
> You can try
>
>     python -c 'import pyximport; \
>                pyximport.install(pyimport=True); \
>                exec("somefile.py")'
>
> You may want to configure the output directory for the binary modules, 
> though, see
>
> https://github.com/cython/cython/blob/master/pyximport/pyximport.py#L343
>
> Please also take care to provide suitable gcc CFLAGS, e.g. "-O3 
> -march=native" etc.
>
>

If this works it is great. I don't think doing this work should be part 
of the gsoc proposal. Considering it as a use case could be included in 
the infrastructure work though.

All the best,

Michael Foord

>>> Obviously, we'd have to integrate a build of the latest Cython 
>>> development
>>> sources as well, but it's not like installing a distutils enabled 
>>> Python
>>> package from sources is so hard that it pushes Cython out of scope 
>>> for this
>>> GSoC.
>>
>> no, that's fine. My main concern is - will cython run those
>> benchmarks?
>
> In the worst case, they will run at CPython speed with uncompiled 
> modules.
>
>
>> and will you complain if we don't provide a custom cython
>> hacks? (like providing extra type information)
>
> I don't consider providing extra type information a hack. Remember 
> that they are only used for additional speed-ups in cases where the 
> author is smarter than the compiler. It will work just fine without them.
>
> Stefan
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: 
> http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk


-- 
http://www.voidspace.org.uk/

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing http://www.sqlite.org/different.html


From fuzzyman at voidspace.org.uk  Fri Apr  8 13:30:07 2011
From: fuzzyman at voidspace.org.uk (Michael Foord)
Date: Fri, 08 Apr 2011 12:30:07 +0100
Subject: [Python-Dev] Code highlighting in tracker
In-Reply-To: <BANLkTi=4dpzy47Hs34UKyMRLF+MfV=Ns5g@mail.gmail.com>
References: <BANLkTinaP0cAO-tinA1n=HKNABsoMp4_9Q@mail.gmail.com>	<BANLkTi=bGnpANFE9D42AeNCLEdM__qKbyQ@mail.gmail.com>	<BANLkTikcz8o7cz+X5NEaeFjf-Dpyq512nw@mail.gmail.com>
	<BANLkTi=4dpzy47Hs34UKyMRLF+MfV=Ns5g@mail.gmail.com>
Message-ID: <4D9EF1BF.6070905@voidspace.org.uk>

On 08/04/2011 02:02, Eugene Toder wrote:
>> Because tracker is ugly.
> Is this an unbiased opinion? :)
Having Python code syntax highlighted would definitely be *nicer*, and 
wouldn't *necessarily* mean switching to a custom markup format for all 
submissions (we could probably get 90% of the way there with 
heuristics). Of course as always someone would have to do the work...

On the other hand switching to *permitting* restructured-text 
submissions for tracker comments, with syntax highlighting for literal 
blocks (::), would be nice. :-)
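
For example, a comment could then embed highlighted code with nothing
more exotic than a literal block:

    Here's the snippet that triggers the bug::

        def broken():
            return 1 / 0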

All the best,

Michael Foord



> Eugene
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk


-- 
http://www.voidspace.org.uk/

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing http://www.sqlite.org/different.html


From ncoghlan at gmail.com  Fri Apr  8 13:29:25 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Fri, 8 Apr 2011 21:29:25 +1000
Subject: [Python-Dev] AST Transformation Hooks for Domain Specific Languages
Message-ID: <BANLkTi=NcHXn0USiQkMMADHy2ZNLgDD3Gw@mail.gmail.com>

A few odds and ends from recent discussions finally clicked into
something potentially interesting earlier this evening. Or possibly
just something insane. I'm not quite decided on that point as yet (but
leaning towards the latter).

Anyway, without further ado, I present:

AST Transformation Hooks for Domain Specific Languages
======================================================

Consider:

# In some other module
ast.register_dsl("dsl.sql", dsl.sql.TransformAST)

# In a module using that DSL
import dsl.sql
def lookup_address(name : dsl.sql.char, dob : dsl.sql.date) from dsl.sql:
    select address
    from people
    where name = {name} and dob = {dob}


Suppose that the standard AST for the latter looked something like:

    DSL(syntax="dsl.sql",
        name='lookup_address',
        args=arguments(
            args=[arg(arg='name',
                      annotation=<Normal AST for "dsl.sql.char">),
                  arg(arg='dob',
                      annotation=<Normal AST for "dsl.sql.date">)],
            vararg=None, varargannotation=None,
            kwonlyargs=[], kwarg=None, kwargannotation=None,
            defaults=[], kw_defaults=[]),
        body=[Expr(value=Str(s='select address\nfrom people\nwhere
name = {name} and dob = {dob}'))],
        decorator_list=[],
        returns=None)

(For those not familiar with the AST, the above is actually just the
existing Function node with a "syntax" attribute added)

At *compile* time (note, *not* function definition time), the
registered AST transformation hook would be invoked and would replace
that DSL node with "standard" AST nodes.

For example, depending on the design of the DSL and its support code,
the above example might be equivalent to:

    @dsl.sql.escape_and_validate_args
    def lookup_address(name: dsl.sql.char, dob: dsl.sql.date):
       args = dict(name=name, dob=dob)
       query = "select address\nfrom people\nwhere name = {name} and
dob = {dob}"
       return dsl.sql.cursor(query, args)


As a simpler example, consider something like:

    def f() from all_nonlocal:
        x += 1
        y -= 2

That was translated at compile time into:

    def f():
        nonlocal x, y
        x += 1
        y -= 2

My first pass at a rough protocol for the AST transformers suggests
they would only need two methods:

  get_cookie() - Magic cookie to add to PYC files containing instances
of the DSL (allows recompilation to be forced if the DSL is updated)
  transform_AST(node) - a DSL() node is passed in, expected to return
an AST containing no DSL nodes (SyntaxError if one is found)

Attempts to use an unregistered DSL would trigger SyntaxError
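
To make the protocol concrete, here's a rough sketch (using only
today's ast module; the DSL node type, register_dsl() and the
"def ... from ..." syntax are of course part of the proposal, not
existing CPython) of a transformer for the all_nonlocal example above,
assuming the DSL body is parsed as ordinary Python statements:

    import ast

    class AllNonlocalTransformer:
        def get_cookie(self):
            # any value that changes when the DSL implementation changes
            return "all_nonlocal-v1"

        def transform_AST(self, node):
            # names augmented in the body become an explicit nonlocal declaration
            names = sorted({stmt.target.id for stmt in node.body
                            if isinstance(stmt, ast.AugAssign)
                            and isinstance(stmt.target, ast.Name)})
            new_body = [ast.Nonlocal(names=names)] + list(node.body)
            return ast.FunctionDef(name=node.name, args=node.args,
                                   body=new_body, decorator_list=[],
                                   returns=None)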

So there you are, that's the crazy idea. The stoning of the heretic
may now commence :)

Where this idea came from was the various discussions about "make
statement" style constructs and a conversation I had with Eric Snow at
Pycon about function definition time really being *too late* to do
anything particularly interesting that couldn't already be handled
better in other ways. Some tricks Dave Malcolm had done to support
Python level manipulation of the AST during compilation also played a
big part, as did Eugene Toder's efforts to add an AST optimisation
step to the compilation process.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From ncoghlan at gmail.com  Fri Apr  8 13:31:18 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Fri, 8 Apr 2011 21:31:18 +1000
Subject: [Python-Dev] Ack, wrong list
Message-ID: <BANLkTim5rrV057Rg4BOF-wy5MqOdBSmrfQ@mail.gmail.com>

Sorry, my last mail was meant to go to python-ideas, not python-dev
(and the gmail/mailman disagreement means I can't easily reply to it).

Reply to the version on python-ideas please, not the version on here.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From fijall at gmail.com  Fri Apr  8 13:37:21 2011
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Fri, 8 Apr 2011 13:37:21 +0200
Subject: [Python-Dev] [GSoC] Developing a benchmark suite (for Python
	3.x)
In-Reply-To: <inmneb$sf1$1@dough.gmane.org>
References: <BANLkTim80hX2_K393XugVfOfU1oCvyboNA@mail.gmail.com>
	<BANLkTi=hPxW998svwE3D=3K-gBShO89SXA@mail.gmail.com>
	<BANLkTinxo3ObyVr8qY-td0Vz160wzWzcKQ@mail.gmail.com>
	<inmk53$aak$1@dough.gmane.org>
	<BANLkTimOrV62y0N0vzX1ZsN9a_UmbbAS7A@mail.gmail.com>
	<inmneb$sf1$1@dough.gmane.org>
Message-ID: <BANLkTi=S+3V3C+HAisu4OppYG30mbth_5A@mail.gmail.com>

On Fri, Apr 8, 2011 at 12:18 PM, Stefan Behnel <stefan_ml at behnel.de> wrote:
> Maciej Fijalkowski, 08.04.2011 11:41:
>>
>> On Fri, Apr 8, 2011 at 11:22 AM, Stefan Behnel<stefan_ml at behnel.de>
>> wrote:
>>>
>>> Jesse Noller, 07.04.2011 22:28:
>>>>
>>>> On Thu, Apr 7, 2011 at 3:54 PM, Anthony Scopatz wrote:
>>>>>
>>>>> Hi Daniel,
>>>>> Thanks for putting this together.  I am a huge supporter of
>>>>> benchmarking
>>>>> efforts.  My brief comment is below.
>>>>>
>>>>> On Wed, Apr 6, 2011 at 11:52 AM, DasIch wrote:
>>>>>>
>>>>>> 1. Definition of the benchmark suite. This will entail contacting
>>>>>> developers of Python implementations (CPython, PyPy, IronPython and
>>>>>> Jython), via discussion on the appropriate mailing lists. This might
>>>>>> be achievable as part of this proposal.
>>>>>>
>>>>>
>>>>> If you are reaching out to other projects at this stage, I think you
>>>>> should
>>>>> also be in touch with the Cython people (even if its 'implementation'
>>>>> sits on top of CPython).
>>>>> As a scientist/engineer what I care about is how Cython benchmarks to
>>>>> CPython.  I believe that they have some ideas on benchmarking and have
>>>>> also explored this space.  Their inclusion would be helpful to me
>>>>> thinking
>>>>> this GSoC successful at the end of the day (summer).
>>>>> Thanks for your consideration.
>>>>> Be Well
>>>>> Anthony
>>>>
>>>> Right now, we are talking about building "speed.python.org" to test
>>>> the speed of python interpreters, over time, and alongside one another
>>>> - cython *is not* an interpreter.
>>>
>>> Would you also want to exclude Psyco then? It clearly does not qualify as
>>> a
>>> Python interpreter.
>>
>> Why not? it does run those benchmarks just fine.
>
> Sure.
>
>
>>>> Cython is out of scope for this.
>>>
>>> Why? It should be easy to integrate Cython using pyximport. Basically,
>>> all
>>> you have to do is register the pyximport module as an import hook. Cython
>>> will then try to compile the imported Python modules and fall back to the
>>> normal .py file import if the compilation fails for some reason.
>>
>> then it's fine to include it. we can even include it now in
>> speed.pypy.org that way. would it compile django?
>
> Never tried. Likely not completely, but surely some major parts of it.
> That's the beauty of it - it just falls back to CPython. :) If we're lucky,
> it will manage to compile some performance critical parts without
> modifications. In any case, it'll be trying to compile each module.
>

Ok, sure let's try.

>
>>> So, once CPython is up and running in the benchmark test, adding Cython
>>> should be as easy as copying the configuration, installing Cython and
>>> adding
>>> two lines to site.py.
>>
>> can you provide a simple command line tool for that? I want
>> essentially to run ./cython-importing-stuff some-file.py
>
> You can try
>
>     python -c 'import pyximport; \
>                pyximport.install(pyimport=True); \
>                exec("somefile.py")'

I think you meant execfile. Also, how do I make sure that somefile.py
is also compiled?

>
> You may want to configure the output directory for the binary modules,
> though, see
>
> https://github.com/cython/cython/blob/master/pyximport/pyximport.py#L343
>
> Please also take care to provide suitable gcc CFLAGS, e.g. "-O3
> -march=native" etc.
>
>
>>> Obviously, we'd have to integrate a build of the latest Cython
>>> development
>>> sources as well, but it's not like installing a distutils enabled Python
>>> package from sources is so hard that it pushes Cython out of scope for
>>> this
>>> GSoC.
>>
>> no, that's fine. My main concern is - will cython run those
>> benchmarks?
>
> In the worst case, they will run at CPython speed with uncompiled modules.

ok, fine.

>
>
>> and will you complain if we don't provide a custom cython
>> hacks? (like providing extra type information)
>
> I don't consider providing extra type information a hack. Remember that they
> are only used for additional speed-ups in cases where the author is smarter
> than the compiler. It will work just fine without them.

We can agree to disagree on this one.

>
> Stefan
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/fijall%40gmail.com
>

From stefan_ml at behnel.de  Fri Apr  8 14:28:12 2011
From: stefan_ml at behnel.de (Stefan Behnel)
Date: Fri, 08 Apr 2011 14:28:12 +0200
Subject: [Python-Dev] [GSoC] Developing a benchmark suite (for Python
	3.x)
In-Reply-To: <BANLkTi=S+3V3C+HAisu4OppYG30mbth_5A@mail.gmail.com>
References: <BANLkTim80hX2_K393XugVfOfU1oCvyboNA@mail.gmail.com>	<BANLkTi=hPxW998svwE3D=3K-gBShO89SXA@mail.gmail.com>	<BANLkTinxo3ObyVr8qY-td0Vz160wzWzcKQ@mail.gmail.com>	<inmk53$aak$1@dough.gmane.org>	<BANLkTimOrV62y0N0vzX1ZsN9a_UmbbAS7A@mail.gmail.com>	<inmneb$sf1$1@dough.gmane.org>
	<BANLkTi=S+3V3C+HAisu4OppYG30mbth_5A@mail.gmail.com>
Message-ID: <inmv0s$8io$1@dough.gmane.org>

Maciej Fijalkowski, 08.04.2011 13:37:
> On Fri, Apr 8, 2011 at 12:18 PM, Stefan Behnel wrote:
>>>> So, once CPython is up and running in the benchmark test, adding Cython
>>>> should be as easy as copying the configuration, installing Cython and
>>>> adding
>>>> two lines to site.py.
>>>
>>> can you provide a simple command line tool for that? I want
>>> essentially to run ./cython-importing-stuff some-file.py
>>
>> You can try
>>
>>     python -c 'import pyximport; \
>>                pyximport.install(pyimport=True); \
>>                exec("somefile.py")'
>
> I think you meant execfile.

Ah, yes. Untested. ;)


> Also, how do I make sure that somefile.py
> is also compiled?

It's not getting compiled because it's not getting imported. Maybe we 
should discuss the exact setup for speed.pypy.org in private e-mail.

Stefan


From tseaver at palladion.com  Fri Apr  8 14:51:04 2011
From: tseaver at palladion.com (Tres Seaver)
Date: Fri, 08 Apr 2011 08:51:04 -0400
Subject: [Python-Dev] [GSoC] Developing a benchmark suite (for Python
	3.x)
In-Reply-To: <4D9E4E31.6000506@voidspace.org.uk>
References: <BANLkTim80hX2_K393XugVfOfU1oCvyboNA@mail.gmail.com>	<BANLkTi=hPxW998svwE3D=3K-gBShO89SXA@mail.gmail.com>	<BANLkTinxo3ObyVr8qY-td0Vz160wzWzcKQ@mail.gmail.com>	<inlah7$6o0$1@dough.gmane.org>
	<20110407234110.38702683@pitrou.net>	<4D9E4495.4060302@voidspace.org.uk>	<BANLkTimUnXqh3MnDNqe0iKUiV+ejJ5RJ2A@mail.gmail.com>
	<4D9E4E31.6000506@voidspace.org.uk>
Message-ID: <inn0bn$ciq$1@dough.gmane.org>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On 04/07/2011 07:52 PM, Michael Foord wrote:

> Personally I think the Gsoc project should just take the pypy suite and 
> run with that - bikeshedding about what benchmarks to include is going 
> to make it hard to make progress. We can have fun with that discussion 
> once we have the infrastructure and *some* good benchmarks in place (and 
> the pypy ones are good ones).
> 
> So I'm still with Jesse on this one. If there is any "discussion phase" 
> as part of the Gsoc project it should be very strictly bounded by time.

Somehow I missed seeing '[GSoC]' in the subject line (the blizzard of
notification messages to the various GSoC specific lists must've
snow-blinded me :).  I'm fine with leaving Cython out-of-scope for the
GSoC effort, just not for perf.python.org as a whole.


Tres.
- -- 
===================================================================
Tres Seaver          +1 540-429-0999          tseaver at palladion.com
Palladion Software   "Excellence by Design"    http://palladion.com
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.10 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iEYEARECAAYFAk2fBLgACgkQ+gerLs4ltQ67jACgozHfglhw7QQQH42hdwXy4VLX
fXQAn33X/rq71BdZxmfsGn0swdeseHxJ
=Ttvg
-----END PGP SIGNATURE-----


From jnoller at gmail.com  Fri Apr  8 15:53:08 2011
From: jnoller at gmail.com (Jesse Noller)
Date: Fri, 8 Apr 2011 09:53:08 -0400
Subject: [Python-Dev] [GSoC] Developing a benchmark suite (for Python
	3.x)
In-Reply-To: <inn0bn$ciq$1@dough.gmane.org>
References: <BANLkTim80hX2_K393XugVfOfU1oCvyboNA@mail.gmail.com>
	<BANLkTi=hPxW998svwE3D=3K-gBShO89SXA@mail.gmail.com>
	<BANLkTinxo3ObyVr8qY-td0Vz160wzWzcKQ@mail.gmail.com>
	<inlah7$6o0$1@dough.gmane.org> <20110407234110.38702683@pitrou.net>
	<4D9E4495.4060302@voidspace.org.uk>
	<BANLkTimUnXqh3MnDNqe0iKUiV+ejJ5RJ2A@mail.gmail.com>
	<4D9E4E31.6000506@voidspace.org.uk> <inn0bn$ciq$1@dough.gmane.org>
Message-ID: <BANLkTikYwOr-sYcRR+gKC_DVCxmDih1Dtw@mail.gmail.com>

On Fri, Apr 8, 2011 at 8:51 AM, Tres Seaver <tseaver at palladion.com> wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
> On 04/07/2011 07:52 PM, Michael Foord wrote:
>
>> Personally I think the Gsoc project should just take the pypy suite and
>> run with that - bikeshedding about what benchmarks to include is going
>> to make it hard to make progress. We can have fun with that discussion
>> once we have the infrastructure and *some* good benchmarks in place (and
>> the pypy ones are good ones).
>>
>> So I'm still with Jesse on this one. If there is any "discussion phase"
>> as part of the Gsoc project it should be very strictly bounded by time.
>
> Somehow I missed seeing '[GSoC]' in the subject line (the blizzard of
> notification messages to the various GSoC specific lists must've
> snow-blinded me :).  I'm fine with leaving Cython out-of-scope for the
> GSoC effort, just not for perf.python.org as a whole.

We don't need a massive outstanding todo list for perf.python.org - we
need to get the current speed.pypy.org stuff made more generic for the
purposes we're aiming for and to get the hardware (on my plate) first.

Then we can talk about expanding it. I'm just begging that we not add
a bunch of stuff to a todo list for something that doesn't exist right
now.

jesse

From fuzzyman at voidspace.org.uk  Fri Apr  8 15:57:54 2011
From: fuzzyman at voidspace.org.uk (Michael Foord)
Date: Fri, 8 Apr 2011 14:57:54 +0100
Subject: [Python-Dev] AST Transformation Hooks for Domain Specific
	Languages
In-Reply-To: <BANLkTi=NcHXn0USiQkMMADHy2ZNLgDD3Gw@mail.gmail.com>
References: <BANLkTi=NcHXn0USiQkMMADHy2ZNLgDD3Gw@mail.gmail.com>
Message-ID: <BANLkTim80dn_fVmjeN9x_Mbx_NW_rk7CLA@mail.gmail.com>

On 8 April 2011 12:29, Nick Coghlan <ncoghlan at gmail.com> wrote:

> A few odds and ends from recent discussions finally clicked into
> something potentially interesting earlier this evening. Or possibly
> just something insane. I'm not quite decided on that point as yet (but
> leaning towards the latter).
>
>
The essence of the proposal is to allow arbitrary syntax within "standard
python files". I don't think it stands much of a chance in core.

It would be an awesome tool for experimenting with new syntax and dsls
though. :-)

Michael


> Anyway, without further ado, I present:
>
> AST Transformation Hooks for Domain Specific Languages
> ======================================================
>
> Consider:
>
> # In some other module
> ast.register_dsl("dsl.sql", dsl.sql.TransformAST)
>
> # In a module using that DSL
> import dsl.sql
> def lookup_address(name : dsl.sql.char, dob : dsl.sql.date) from dsl.sql:
>    select address
>    from people
>    where name = {name} and dob = {dob}
>
>
> Suppose that the standard AST for the latter looked something like:
>
>    DSL(syntax="dsl.sql",
>        name='lookup_address',
>        args=arguments(
>            args=[arg(arg='name',
>                      annotation=<Normal AST for "dsl.sql.char">),
>                  arg(arg='dob',
>                      annotation=<Normal AST for "dsl.sql.date">)],
>            vararg=None, varargannotation=None,
>            kwonlyargs=[], kwarg=None, kwargannotation=None,
>            defaults=[], kw_defaults=[]),
>        body=[Expr(value=Str(s='select address\nfrom people\nwhere
> name = {name} and dob = {dob}'))],
>        decorator_list=[],
>        returns=None)
>
> (For those not familiar with the AST, the above is actually just the
> existing Function node with a "syntax" attribute added)
>
> At *compile* time (note, *not* function definition time), the
> registered AST transformation hook would be invoked and would replace
> that DSL node with "standard" AST nodes.
>
> For example, depending on the design of the DSL and its support code,
> the above example might be equivalent to:
>
>    @dsl.sql.escape_and_validate_args
>    def lookup_address(name: dsl.sql.char, dob: dsl.sql.date):
>       args = dict(name=name, dob=dob)
>       query = "select address\nfrom people\nwhere name = {name} and
> dob = {dob}"
>       return dsl.sql.cursor(query, args)
>
>
> As a simpler example, consider something like:
>
>    def f() from all_nonlocal:
>        x += 1
>        y -= 2
>
> That was translated at compile time into:
>
>    def f():
>        nonlocal x, y
>        x += 1
>        y -= 2
>
> My first pass at a rough protocol for the AST transformers suggests
> they would only need two methods:
>
>  get_cookie() - Magic cookie to add to PYC files containing instances
> of the DSL (allows recompilation to be forced if the DSL is updated)
>  transform_AST(node) - a DSL() node is passed in, expected to return
> an AST containing no DSL nodes (SyntaxError if one is found)
>
> Attempts to use an unregistered DSL would trigger SyntaxError
>
> So there you are, that's the crazy idea. The stoning of the heretic
> may now commence :)
>
> Where this idea came from was the various discussions about "make
> statement" style constructs and a conversation I had with Eric Snow at
> Pycon about function definition time really being *too late* to do
> anything particularly interesting that couldn't already be handled
> better in other ways. Some tricks Dave Malcolm had done to support
> Python level manipulation of the AST during compilation also played a
> big part, as did Eugene Toder's efforts to add an AST optimisation
> step to the compilation process.
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk
>



-- 

http://www.voidspace.org.uk/

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing http://www.sqlite.org/different.html
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110408/4e6b3521/attachment.html>

From scopatz at gmail.com  Fri Apr  8 17:32:05 2011
From: scopatz at gmail.com (Anthony Scopatz)
Date: Fri, 8 Apr 2011 10:32:05 -0500
Subject: [Python-Dev] [GSoC] Developing a benchmark suite (for Python
	3.x)
In-Reply-To: <BANLkTi=S+3V3C+HAisu4OppYG30mbth_5A@mail.gmail.com>
References: <BANLkTim80hX2_K393XugVfOfU1oCvyboNA@mail.gmail.com>
	<BANLkTi=hPxW998svwE3D=3K-gBShO89SXA@mail.gmail.com>
	<BANLkTinxo3ObyVr8qY-td0Vz160wzWzcKQ@mail.gmail.com>
	<inmk53$aak$1@dough.gmane.org>
	<BANLkTimOrV62y0N0vzX1ZsN9a_UmbbAS7A@mail.gmail.com>
	<inmneb$sf1$1@dough.gmane.org>
	<BANLkTi=S+3V3C+HAisu4OppYG30mbth_5A@mail.gmail.com>
Message-ID: <BANLkTinyoUq1jd1V2B6rJdbsZJjNf99wdA@mail.gmail.com>

>
>
> >
> >
> >> and will you complain if we don't provide a custom cython
> >> hacks? (like providing extra type information)
> >
> > I don't consider providing extra type information a hack. Remember that
> they
> > are only used for additional speed-ups in cases where the author is
> smarter
> > than the compiler. It will work just fine without them.
>
> We can agree to disagree on this one.
>
>
The way to think about this is really that Cython is its own (creole)
language which has major intersections with the Python language.
The goal of the Cython project is to have Python be a strict subset of
Cython.  Therefore constructions such as type declarations are really
self-consistent Cython.

Because it aims to be a superset of Python, Cython is more like a two-stage
Python compiler (compile to C, then compile to assembly) than
an interpreter.  For the purposes of benchmarking, the distinction between
compiler and interpreter is, as someone said above, 'dubious'.

You wouldn't want to add all of the type info or do anything in Cython that
is *not* in Python here.  That would defeat the purpose of benchmarking
where you absolutely have to compare apples to apples.

That said, despite the abstract, it seems that points 1 and 2 won't actually
be present in this GSoC.  We are not defining benchmarks.  This
project is more about porting to Python 3.

Thus, I agree with Jesse, and we shouldn't heap on more TODOs than already
exist. As people
have mentioned here, it will be easy to add Cython support once the system
is up and running.

Be Well
Anthony


> >
> > Stefan
> >
> > _______________________________________________
> > Python-Dev mailing list
> > Python-Dev at python.org
> > http://mail.python.org/mailman/listinfo/python-dev
> > Unsubscribe:
> > http://mail.python.org/mailman/options/python-dev/fijall%40gmail.com
> >
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/scopatz%40gmail.com
>

From tjreedy at udel.edu  Fri Apr  8 18:00:24 2011
From: tjreedy at udel.edu (Terry Reedy)
Date: Fri, 08 Apr 2011 12:00:24 -0400
Subject: [Python-Dev] [GSoC] Developing a benchmark suite (for Python
	3.x)
In-Reply-To: <BANLkTinyoUq1jd1V2B6rJdbsZJjNf99wdA@mail.gmail.com>
References: <BANLkTim80hX2_K393XugVfOfU1oCvyboNA@mail.gmail.com>	<BANLkTi=hPxW998svwE3D=3K-gBShO89SXA@mail.gmail.com>	<BANLkTinxo3ObyVr8qY-td0Vz160wzWzcKQ@mail.gmail.com>	<inmk53$aak$1@dough.gmane.org>	<BANLkTimOrV62y0N0vzX1ZsN9a_UmbbAS7A@mail.gmail.com>	<inmneb$sf1$1@dough.gmane.org>	<BANLkTi=S+3V3C+HAisu4OppYG30mbth_5A@mail.gmail.com>
	<BANLkTinyoUq1jd1V2B6rJdbsZJjNf99wdA@mail.gmail.com>
Message-ID: <innben$oqt$1@dough.gmane.org>

On 4/8/2011 11:32 AM, Anthony Scopatz wrote:

> an interpreter.  For the purposes of benchmarking, the distinction
> between compiler and interpreter, as some one said above, 'dubious'.

I agree. We should be comparing 'Python execution systems'. My
impression is that some of what Cython does in terms of code analysis is
similar to what PyPy does, perhaps in the jit phase. So comparing PyPy and
CPython+Cython on standard Python code is a fair and interesting comparison.

> You wouldn't want to add all of the type info or do anything in Cython
> that is *not* in Python here.  That would defeat the purpose of benchmarking
> where you absolutely have to compare apples to apples.

If Cython people want to modify benchmarks to show what speedup one can 
get with what effort, that is a separate issue.

-- 
Terry Jan Reedy


From status at bugs.python.org  Fri Apr  8 18:07:20 2011
From: status at bugs.python.org (Python tracker)
Date: Fri,  8 Apr 2011 18:07:20 +0200 (CEST)
Subject: [Python-Dev] Summary of Python tracker Issues
Message-ID: <20110408160720.1C94C1CC04@psf.upfronthosting.co.za>


ACTIVITY SUMMARY (2011-04-01 - 2011-04-08)
Python tracker at http://bugs.python.org/

To view or respond to any of the issues listed below, click on the issue.
Do NOT respond to this message.

Issue counts and deltas:
  open    2741 ( +8)
  closed 20845 (+58)
  total  23586 (+66)

Open issues with patches: 1183 


Issues opened (41)
==================

#5673: Add timeout option to subprocess.Popen
http://bugs.python.org/issue5673  reopened by haypo

#11277: test_zlib.test_big_buffer crashes under BSD (Mac OS X and Free
http://bugs.python.org/issue11277  reopened by haypo

#11492: email.header.Header doesn't fold headers correctly
http://bugs.python.org/issue11492  reopened by kitterma

#11740: difflib html diff takes extremely long
http://bugs.python.org/issue11740  opened by mkorourk at adobe.com

#11743: Rewrite PipeConnection and Connection in pure Python
http://bugs.python.org/issue11743  opened by pitrou

#11747: unified_diff function product incorrect range information
http://bugs.python.org/issue11747  opened by jan.koprowski

#11748: test_ftplib failure in test for source_address
http://bugs.python.org/issue11748  opened by pitrou

#11750: Mutualize win32 functions
http://bugs.python.org/issue11750  opened by pitrou

#11751: Increase distutils.filelist test coverage
http://bugs.python.org/issue11751  opened by jlove

#11754: Changed test to check calculated constants in test_string.py
http://bugs.python.org/issue11754  opened by Lynne.Qu

#11757: test_subprocess.test_communicate_timeout_large_ouput failure o
http://bugs.python.org/issue11757  opened by pitrou

#11758: increase xml.dom.minidom test coverage
http://bugs.python.org/issue11758  opened by mdorn

#11762: Ast doc: warning and version number
http://bugs.python.org/issue11762  opened by terry.reedy

#11763: assertEqual memory issues with large text inputs
http://bugs.python.org/issue11763  opened by michael.foord

#11764: inspect.getattr_static code execution w/ class body as non dic
http://bugs.python.org/issue11764  opened by michael.foord

#11767: Maildir iterator leaks file descriptors by default
http://bugs.python.org/issue11767  opened by moyix

#11768: test_signals() of test_threadsignals failure on Mac OS X
http://bugs.python.org/issue11768  opened by haypo

#11769: test_notify() of test_threading hang on "x86 XP-4 3.x":
http://bugs.python.org/issue11769  opened by haypo

#11770: inspect.dir_static
http://bugs.python.org/issue11770  opened by michael.foord

#11772: email header wrapping edge case failure
http://bugs.python.org/issue11772  opened by r.david.murray

#11776: types.MethodType() params and usage is not documented
http://bugs.python.org/issue11776  opened by techtonik

#11779: test_mmap timeout (30 min) on "AMD64 Snow Leopard 3.x" buildbo
http://bugs.python.org/issue11779  opened by haypo

#11780: email.encoders are broken
http://bugs.python.org/issue11780  opened by sdaoden

#11781: test/test_email directory does not get installed by 'make inst
http://bugs.python.org/issue11781  opened by sdaoden

#11782: email.generator.Generator.flatten() fails
http://bugs.python.org/issue11782  opened by sdaoden

#11783: email parseaddr and formataddr should be IDNA aware
http://bugs.python.org/issue11783  opened by r.david.murray

#11784: multiprocessing.Process.join: timeout argument doesn't specify
http://bugs.python.org/issue11784  opened by pyfex

#11785: email subpackages documentation problems
http://bugs.python.org/issue11785  opened by ysj.ray

#11786: ConfigParser.[Raw]ConfigParser optionxform()
http://bugs.python.org/issue11786  opened by Adam.Groszer

#11787: File handle leak in TarFile lib
http://bugs.python.org/issue11787  opened by shahpr

#11789: Extend upon metaclass/type class documentation, here: zope.int
http://bugs.python.org/issue11789  opened by carsten.klein

#11790: transient failure in test_multiprocessing.WithProcessesTestCon
http://bugs.python.org/issue11790  opened by pitrou

#11792: asyncore module print to stdout
http://bugs.python.org/issue11792  opened by kaplun

#11795: Better core dev guidelines for committing submitted patches
http://bugs.python.org/issue11795  opened by ncoghlan

#11796: list and generator expressions in a class definition fail if e
http://bugs.python.org/issue11796  opened by mjs0

#11797: 2to3 does not correct "reload"
http://bugs.python.org/issue11797  opened by tebeka

#11798: Test cases not garbage collected after run
http://bugs.python.org/issue11798  opened by fabioz

#11799: urllib HTTP authentication behavior with unrecognized auth met
http://bugs.python.org/issue11799  opened by ubershmekel

#11800: regrtest --timeout: apply the timeout on a function, not on th
http://bugs.python.org/issue11800  opened by haypo

#11802: filecmp.cmp needs a documented way to clear cache
http://bugs.python.org/issue11802  opened by lopgok

#11804: expat parser not xml 1.1 (breaks xmlrpclib)
http://bugs.python.org/issue11804  opened by xrg



Most recent 15 issues with no replies (15)
==========================================

#11804: expat parser not xml 1.1 (breaks xmlrpclib)
http://bugs.python.org/issue11804

#11796: list and generator expressions in a class definition fail if e
http://bugs.python.org/issue11796

#11790: transient failure in test_multiprocessing.WithProcessesTestCon
http://bugs.python.org/issue11790

#11784: multiprocessing.Process.join: timeout argument doesn't specify
http://bugs.python.org/issue11784

#11783: email parseaddr and formataddr should be IDNA aware
http://bugs.python.org/issue11783

#11782: email.generator.Generator.flatten() fails
http://bugs.python.org/issue11782

#11781: test/test_email directory does not get installed by 'make inst
http://bugs.python.org/issue11781

#11780: email.encoders are broken
http://bugs.python.org/issue11780

#11776: types.MethodType() params and usage is not documented
http://bugs.python.org/issue11776

#11772: email header wrapping edge case failure
http://bugs.python.org/issue11772

#11769: test_notify() of test_threading hang on "x86 XP-4 3.x":
http://bugs.python.org/issue11769

#11758: increase xml.dom.minidom test coverage
http://bugs.python.org/issue11758

#11736: windows installers ssl module / openssl broken for some sites
http://bugs.python.org/issue11736

#11719: test_msilib skip unexpected on non-Windows platforms
http://bugs.python.org/issue11719

#11708: argparse: suggestion for formatting optional positional args
http://bugs.python.org/issue11708



Most recent 15 issues waiting for review (15)
=============================================

#11800: regrtest --timeout: apply the timeout on a function, not on th
http://bugs.python.org/issue11800

#11799: urllib HTTP authentication behavior with unrecognized auth met
http://bugs.python.org/issue11799

#11797: 2to3 does not correct "reload"
http://bugs.python.org/issue11797

#11785: email subpackages documentation problems
http://bugs.python.org/issue11785

#11784: multiprocessing.Process.join: timeout argument doesn't specify
http://bugs.python.org/issue11784

#11781: test/test_email directory does not get installed by 'make inst
http://bugs.python.org/issue11781

#11780: email.encoders are broken
http://bugs.python.org/issue11780

#11772: email header wrapping edge case failure
http://bugs.python.org/issue11772

#11767: Maildir iterator leaks file descriptors by default
http://bugs.python.org/issue11767

#11763: assertEqual memory issues with large text inputs
http://bugs.python.org/issue11763

#11762: Ast doc: warning and version number
http://bugs.python.org/issue11762

#11758: increase xml.dom.minidom test coverage
http://bugs.python.org/issue11758

#11757: test_subprocess.test_communicate_timeout_large_ouput failure o
http://bugs.python.org/issue11757

#11754: Changed test to check calculated constants in test_string.py
http://bugs.python.org/issue11754

#11751: Increase distutils.filelist test coverage
http://bugs.python.org/issue11751



Top 10 most discussed issues (10)
=================================

#11734: Add half-float (16-bit) support to struct module
http://bugs.python.org/issue11734  17 msgs

#2736: datetime needs an "epoch" method
http://bugs.python.org/issue2736  14 msgs

#10977: Concrete object C API needs abstract path for subclasses of bu
http://bugs.python.org/issue10977  12 msgs

#11800: regrtest --timeout: apply the timeout on a function, not on th
http://bugs.python.org/issue11800  11 msgs

#11798: Test cases not garbage collected after run
http://bugs.python.org/issue11798  10 msgs

#11767: Maildir iterator leaks file descriptors by default
http://bugs.python.org/issue11767   9 msgs

#11492: email.header.Header doesn't fold headers correctly
http://bugs.python.org/issue11492   8 msgs

#11757: test_subprocess.test_communicate_timeout_large_ouput failure o
http://bugs.python.org/issue11757   8 msgs

#6040: bdist_msi does not deal with pre-release version
http://bugs.python.org/issue6040   7 msgs

#11277: test_zlib.test_big_buffer crashes under BSD (Mac OS X and Free
http://bugs.python.org/issue11277   7 msgs



Issues closed (58)
==================

#4112: Subprocess: Popen'ed children hang due to open pipes
http://bugs.python.org/issue4112  closed by rosslagerwall

#5863: bz2.BZ2File should accept other file-like objects. (issue42740
http://bugs.python.org/issue5863  closed by pitrou

#7108: test_commands.py failing on OS X 10.5.7 due to '@' in ls outpu
http://bugs.python.org/issue7108  closed by ned.deily

#7311: Bug on regexp of HTMLParser
http://bugs.python.org/issue7311  closed by ezio.melotti

#8252: add a metadata section in setup.cfg
http://bugs.python.org/issue8252  closed by eric.araujo

#8253: add a resource+files section in setup.cfg
http://bugs.python.org/issue8253  closed by eric.araujo

#9319: imp.find_module('test/badsyntax_pep3120') causes segfault
http://bugs.python.org/issue9319  closed by haypo

#9347: Calling argparse add_argument with a sequence as 'type' causes
http://bugs.python.org/issue9347  closed by bethard

#9861: subprocess module changed exposed attributes
http://bugs.python.org/issue9861  closed by rosslagerwall

#10023: test_lib2to3 leaks under 3.1
http://bugs.python.org/issue10023  closed by sandro.tosi

#10339: test_lib2to3 leaks
http://bugs.python.org/issue10339  closed by sandro.tosi

#10762: strftime('%f') segfault
http://bugs.python.org/issue10762  closed by orsenthil

#10785: parser: store the filename as an unicode object
http://bugs.python.org/issue10785  closed by haypo

#10791: Wrapping TextIOWrapper around gzip files
http://bugs.python.org/issue10791  closed by pitrou

#10963: "subprocess" can raise OSError (EPIPE) when communicating with
http://bugs.python.org/issue10963  closed by rosslagerwall

#11282: 3.3 unittest document not kept consist with code
http://bugs.python.org/issue11282  closed by ezio.melotti

#11576: timedelta subtraction glitch on big timedelta values
http://bugs.python.org/issue11576  closed by belopolsky

#11597: Can't get ConfigParser.write to write unicode strings
http://bugs.python.org/issue11597  closed by r.david.murray

#11605: EMail generator.flatten() disintegrates over non-ascii multipa
http://bugs.python.org/issue11605  closed by r.david.murray

#11661: test_collections.TestNamedTuple.test_source failing on many bu
http://bugs.python.org/issue11661  closed by rhettinger

#11665: Regexp findall freezes
http://bugs.python.org/issue11665  closed by haypo

#11674: list(obj), tuple(obj) swallow TypeError (in _PyObject_LengthHi
http://bugs.python.org/issue11674  closed by rhettinger

#11688: SQLite trace callback
http://bugs.python.org/issue11688  closed by pitrou

#11707: Create C version of functools.cmp_to_key()
http://bugs.python.org/issue11707  closed by rhettinger

#11715: Building Python on multiarch Debian and Ubuntu
http://bugs.python.org/issue11715  closed by barry

#11730: Setting Invalid sys.stdin in interactive mode =>  loop forever
http://bugs.python.org/issue11730  closed by haypo

#11733: Implement a `Counter.elements_count` method
http://bugs.python.org/issue11733  closed by rhettinger

#11738: ThreadSignals.test_signals() of test_threadsignals hangs on PP
http://bugs.python.org/issue11738  closed by haypo

#11739: Python doesn't have a 2011 april fools joke
http://bugs.python.org/issue11739  closed by benjamin.peterson

#11741: shutil2.copy fails with destination filenames
http://bugs.python.org/issue11741  closed by ezio.melotti

#11742: Possible bug in Python/import.c
http://bugs.python.org/issue11742  closed by brett.cannon

#11744: re.LOCALE doesn't reflect locale.setlocale(...)
http://bugs.python.org/issue11744  closed by r.david.murray

#11745: idlelib/PyShell.py: incorrect module name reported in error me
http://bugs.python.org/issue11745  closed by ezio.melotti

#11746: ssl library load_cert_chain cannot use elliptic curve type pri
http://bugs.python.org/issue11746  closed by pitrou

#11749: test_socket failure
http://bugs.python.org/issue11749  closed by pitrou

#11752: Gungor Basa wants to stay in touch on LinkedIn
http://bugs.python.org/issue11752  closed by ezio.melotti

#11753: test_sendall_interrupted() of test_socket hangs on FreeBSD
http://bugs.python.org/issue11753  closed by haypo

#11755: test_itimer_real() of test_signal hang on FreeBSD
http://bugs.python.org/issue11755  closed by haypo

#11756: bytes.hex()
http://bugs.python.org/issue11756  closed by benjamin.peterson

#11759: assert for exception parameters
http://bugs.python.org/issue11759  closed by r.david.murray

#11760: Bus error in test_big_buffer() of test_zlib on "AMD64 Snow Leo
http://bugs.python.org/issue11760  closed by haypo

#11761: fragile tests in test_gc
http://bugs.python.org/issue11761  closed by pitrou

#11765: test_faulthandler failure
http://bugs.python.org/issue11765  closed by haypo

#11766: test_multiprocessing failure (test_pool_worker_lifetime)
http://bugs.python.org/issue11766  closed by pitrou

#11771: hashlib object cannot be pickled
http://bugs.python.org/issue11771  closed by haypo

#11773: Unicode compared using "is" results in abnormal behavior
http://bugs.python.org/issue11773  closed by ezio.melotti

#11774: Issue tracker sends notification mails twice...
http://bugs.python.org/issue11774  closed by brian.curtin

#11775: `bool(Counter({'a': 0})) is True`
http://bugs.python.org/issue11775  closed by rhettinger

#11777: Executor.map does not submit futures until iter.next() is call
http://bugs.python.org/issue11777  closed by bquinlan

#11778: __subclasscheck__ :  class P(M): __metaclass__=M causes maximu
http://bugs.python.org/issue11778  closed by benjamin.peterson

#11788: Decorator class with optional arguments and a __call__ method 
http://bugs.python.org/issue11788  closed by benjamin.peterson

#11791: python -m doctest has a -v flag that it ignores
http://bugs.python.org/issue11791  closed by Devin Jeanpierre

#11793: raw strings
http://bugs.python.org/issue11793  closed by amaury.forgeotdarc

#11794: Backport new logging docs to 2.7
http://bugs.python.org/issue11794  closed by vinay.sajip

#11801: difference in comparison behavior between 32 bit and 64 bit re
http://bugs.python.org/issue11801  closed by ezio.melotti

#11803: Memory leak in sub-interpreters
http://bugs.python.org/issue11803  closed by jcea

#963906: Unicode email address helper
http://bugs.python.org/issue963906  closed by r.david.murray

#1690608: email.utils.formataddr() should be rfc2047 aware
http://bugs.python.org/issue1690608  closed by r.david.murray

From merwok at netwok.org  Fri Apr  8 18:10:35 2011
From: merwok at netwok.org (=?UTF-8?Q?=C3=89ric_Araujo?=)
Date: Fri, 08 Apr 2011 18:10:35 +0200
Subject: [Python-Dev] [Python-checkins] cpython (3.1): Issue 11715:
 Build extension modules on multiarch Debian and Ubuntu by
In-Reply-To: <E1Q7ZRw-0006mm-EQ@dinsdale.python.org>
References: <E1Q7ZRw-0006mm-EQ@dinsdale.python.org>
Message-ID: <80047896a9eea9ed48949df5e5d08524@netwok.org>

 Hi,

> http://hg.python.org/cpython/rev/7582a78f573b
> branch:      3.1
> user:        Barry Warsaw <barry at python.org>
> summary:
>   Issue 11715: Build extension modules on multiarch Debian and Ubuntu 
> by
> extending search paths to include multiarch directories.
>
> diff --git a/setup.py b/setup.py

> +        if not os.path.exists(self.build_temp):
> +            os.makedirs(self.build_temp)

 Isn't there a possible raise condition here?  I think it's recommended
 to follow EAFP for mkdir and makedirs.

> +        ret = os.system(
> +            'dpkg-architecture -qDEB_HOST_MULTIARCH > %s 2> /dev/null' %
> +            tmpfile)
> +        try:
> +            if ret >> 8 == 0:
> +                with open(tmpfile) as fp:
> +                    multiarch_path_component = fp.readline().strip()
> +                add_dir_to_list(self.compiler.library_dirs,
> +                                '/usr/lib/' + multiarch_path_component)
> +                add_dir_to_list(self.compiler.include_dirs,
> +                                '/usr/include/' + multiarch_path_component)
> +        finally:
> +            os.unlink(tmpfile)

 Is there a benefit in creating and reading a file rather than capturing
 stdout?
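
 For comparison, capturing stdout could look roughly like this (a sketch
 only; the surrounding setup.py context and error handling are assumed,
 not part of the committed change):

     import subprocess
     # Ask dpkg-architecture directly and read its stdout, no tempfile
     proc = subprocess.Popen(['dpkg-architecture', '-qDEB_HOST_MULTIARCH'],
                             stdout=subprocess.PIPE, stderr=subprocess.PIPE)
     out, _ = proc.communicate()
     if proc.returncode == 0:
         multiarch_path_component = out.decode('ascii').strip()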

 Regards

From jon.riehl at gmail.com  Fri Apr  8 19:14:21 2011
From: jon.riehl at gmail.com (Jon Riehl)
Date: Fri, 8 Apr 2011 12:14:21 -0500
Subject: [Python-Dev] AST Transformation Hooks for Domain Specific
	Languages
In-Reply-To: <BANLkTi=NcHXn0USiQkMMADHy2ZNLgDD3Gw@mail.gmail.com>
References: <BANLkTi=NcHXn0USiQkMMADHy2ZNLgDD3Gw@mail.gmail.com>
Message-ID: <BANLkTim+PJiPHjWrSE-55PczBiBjMR5uWA@mail.gmail.com>

Hi Nick, all,

Just for the record, I would point to Mython (mython.org) as an
existing provider of this capability.  I've already added an AST node
called "Quote" that functions like your DSL node, along with well
defined lexical, concrete syntax, and compile-time properties.

I have a mostly functioning front end for 2.X that does these
expansions (MyFront), and I'm waiting for a stable Mercurial migration
(I've been only lightly lurking on python-dev, so if this already
exists, someone should ping me) so I can publish a 3.X branch that
will get rid of a lot of the code I have to maintain by just building
on top of CPython (CMython? *smirk*).

It looks like you have some ideas about import semantics and managing
compile-time dependencies.  I would invite further elaboration on the
mython-dev Google group.  I currently have two different mechanisms
implemented, one via import hooks and the other by forced global
recompilation, but neither of these satisfies, because you are imposing
a compile-time concept onto a thoroughly dynamic language.

...and yes, compile-time metaprogramming is insane.

Regards,
-Jon

http://mython.org/ - Make Python yours.

On Fri, Apr 8, 2011 at 6:29 AM, Nick Coghlan <ncoghlan at gmail.com> wrote:
> A few odds and ends from recent discussions finally clicked into
> something potentially interesting earlier this evening. Or possibly
> just something insane. I'm not quite decided on that point as yet (but
> leaning towards the latter).
>
> Anyway, without further ado, I present:
>
> AST Transformation Hooks for Domain Specific Languages
> ======================================================
>
> Consider:
>
> # In some other module
> ast.register_dsl("dsl.sql", dsl.sql.TransformAST)
>
> # In a module using that DSL
> import dsl.sql
> def lookup_address(name : dsl.sql.char, dob : dsl.sql.date) from dsl.sql:
>     select address
>     from people
>     where name = {name} and dob = {dob}
>
>
> Suppose that the standard AST for the latter looked something like:
>
>     DSL(syntax="dsl.sql",
>         name='lookup_address',
>         args=arguments(
>             args=[arg(arg='name',
>                       annotation=<Normal AST for "dsl.sql.char">),
>                   arg(arg='dob',
>                       annotation=<Normal AST for "dsl.sql.date">)],
>             vararg=None, varargannotation=None,
>             kwonlyargs=[], kwarg=None, kwargannotation=None,
>             defaults=[], kw_defaults=[]),
>         body=[Expr(value=Str(s='select address\nfrom people\nwhere
> name = {name} and dob = {dob}'))],
>         decorator_list=[],
>         returns=None)
>
> (For those not familiar with the AST, the above is actually just the
> existing Function node with a "syntax" attribute added)
>
> At *compile* time (note, *not* function definition time), the
> registered AST transformation hook would be invoked and would replace
> that DSL node with "standard" AST nodes.
>
> For example, depending on the design of the DSL and its support code,
> the above example might be equivalent to:
>
>     @dsl.sql.escape_and_validate_args
>     def lookup_address(name: dsl.sql.char, dob: dsl.sql.date):
>         args = dict(name=name, dob=dob)
>         query = "select address\nfrom people\nwhere name = {name} and
> dob = {dob}"
>         return dsl.sql.cursor(query, args)
>
>
> As a simpler example, consider something like:
>
>     def f() from all_nonlocal:
>         x += 1
>         y -= 2
>
> That was translated at compile time into:
>
>     def f():
>         nonlocal x, y
>         x += 1
>         y -= 2
>
> My first pass at a rough protocol for the AST transformers suggests
> they would only need two methods:
>
>  get_cookie() - Magic cookie to add to PYC files containing instances
> of the DSL (allows recompilation to be forced if the DSL is updated)
>  transform_AST(node) - a DSL() node is passed in, expected to return
> an AST containing no DSL nodes (SyntaxError if one is found)
>
> Attempts to use an unregistered DSL would trigger SyntaxError
>
> So there you are, that's the crazy idea. The stoning of the heretic
> may now commence :)
>
> Where this idea came from was the various discussions about "make
> statement" style constructs and a conversation I had with Eric Snow at
> Pycon about function definition time really being *too late* to do
> anything particularly interesting that couldn't already be handled
> better in other ways. Some tricks Dave Malcolm had done to support
> Python level manipulation of the AST during compilation also played a
> big part, as did Eugene Toder's efforts to add an AST optimisation
> step to the compilation process.
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/jon.riehl%40gmail.com
>

From dmalcolm at redhat.com  Fri Apr  8 18:50:50 2011
From: dmalcolm at redhat.com (David Malcolm)
Date: Fri, 08 Apr 2011 12:50:50 -0400
Subject: [Python-Dev] AST Transformation Hooks for Domain
	Specific	Languages
In-Reply-To: <BANLkTi=NcHXn0USiQkMMADHy2ZNLgDD3Gw@mail.gmail.com>
References: <BANLkTi=NcHXn0USiQkMMADHy2ZNLgDD3Gw@mail.gmail.com>
Message-ID: <1302281450.3336.47.camel@radiator.bos.redhat.com>

On Fri, 2011-04-08 at 21:29 +1000, Nick Coghlan wrote:
> A few odds and ends from recent discussions finally clicked into
> something potentially interesting earlier this evening. Or possibly
> just something insane. I'm not quite decided on that point as yet (but
> leaning towards the latter).

I too am leaning towards the latter (I'm afraid my first thought was to
check the date on the email); as Michael said, I too don't think it
stands much of a chance in core.

> Anyway, without further ado, I present:
> 
> AST Transformation Hooks for Domain Specific Languages
> ======================================================

This reminds me a lot of Mython:
  http://mython.org/
If you haven't seen it, it's well worth a look.

My favourite use case for this kind of thing is having the ability to
embed shell pipelines into Python code, by transforming bash-style
syntax into subprocess calls (it's almost possible to do all this in
regular Python by overloading the | and > operators, but not quite).
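
(A rough sketch of that overloading trick, just to illustrate the idea; the
Cmd class and its run() method are made up for this example, not an existing
API:)

    import subprocess

    class Cmd:
        """One stage of a shell-style pipeline."""
        def __init__(self, *argv):
            self.argv = argv
            self.upstream = None

        def __or__(self, other):
            # "a | b" wires a's stdout into b's stdin
            other.upstream = self
            return other

        def _popen(self, stdout):
            stdin = None
            if self.upstream is not None:
                stdin = self.upstream._popen(subprocess.PIPE).stdout
            return subprocess.Popen(self.argv, stdin=stdin, stdout=stdout)

        def run(self):
            out, _ = self._popen(subprocess.PIPE).communicate()
            return out

    # e.g. print((Cmd('ls', '-l') | Cmd('grep', 'py')).run())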

> Consider:
> 
> # In some other module
> ast.register_dsl("dsl.sql", dsl.sql.TransformAST)

Where is this registered?   Do you have to import this "other module"
before importing the module using "dsl.sql" ?   It sounds like this is
global state for the interpreter.

> # In a module using that DSL

How is this usage expressed?  via the following line?

> import dsl.sql

I see the "import dsl.sql" here, but surely you have to somehow process
the "import" in order to handle the rest of the parsing.

This is reminiscent of the "from __future__ " specialcasing in the
parser.  But from my understanding of CPython's Python/future.c, you
already have an AST at that point (mod_ty, from Python/compile.c).
There seems to be a chicken-and-egg problem with this proposal.

Though another syntax might read:

  from __dsl__ import sql

to perhaps emphasize that something magical is about to happen.

[...snip example of usage of a DSL, and the AST it gets parsed to...]

Where and how would the bytes of the file using the DSL get converted to
an in-memory tree representation?

IIRC, manipulating AST nodes in CPython requires some care: the parser
has its own allocator (PyArena), and the entities it allocates have a
shared lifetime that ends when PyArena_Free occurs.
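
(At the Python level, of course, the ast module already lets you do this
kind of rewriting before compilation; a toy sketch, not tied to the DSL
proposal:)

    import ast

    class SwapAddForMult(ast.NodeTransformer):
        def visit_BinOp(self, node):
            self.generic_visit(node)
            if isinstance(node.op, ast.Add):
                node.op = ast.Mult()
            return node

    tree = ast.parse("result = 2 + 3")
    tree = ast.fix_missing_locations(SwapAddForMult().visit(tree))
    ns = {}
    exec(compile(tree, "<demo>", "exec"), ns)
    print(ns["result"])   # 6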

> So there you are, that's the crazy idea. The stoning of the heretic
> may now commence :)

Or, less violently, take it to python-ideas?  (though I'm not subscribed
there, fwiw, make of that what you will)

One "exciting" aspect of this is that if someone changes the DSL file,
the meaning of all of your code changes from under you.  This may or may
not be a sane approach to software development :)

(I also worry what this means e.g. for people writing text editors,
syntax highlighters, etc; insert usual Alan Perlis quote about syntactic
sugar causing cancer of the semicolon)

Also, insert usual comments about the need to think about how
non-CPython implementations of Python would go about implementing such
ideas.

> Where this idea came from was the various discussions about "make
> statement" style constructs and a conversation I had with Eric Snow at
> Pycon about function definition time really being *too late* to do
> anything particularly interesting that couldn't already be handled
> better in other ways. Some tricks Dave Malcolm had done to support
> Python level manipulation of the AST during compilation also played a
> big part, as did Eugene Toder's efforts to add an AST optimisation
> step to the compilation process.

Like I said earlier, have a look at Mython

Hope this is helpful
Dave


From ericsnowcurrently at gmail.com  Fri Apr  8 19:34:30 2011
From: ericsnowcurrently at gmail.com (Eric Snow)
Date: Fri, 8 Apr 2011 11:34:30 -0600
Subject: [Python-Dev] AST Transformation Hooks for Domain Specific
	Languages
In-Reply-To: <1302281450.3336.47.camel@radiator.bos.redhat.com>
References: <BANLkTi=NcHXn0USiQkMMADHy2ZNLgDD3Gw@mail.gmail.com>
	<1302281450.3336.47.camel@radiator.bos.redhat.com>
Message-ID: <BANLkTi=yx5U3R=aYEHn8J0qUw8cwX_Ns+w@mail.gmail.com>

On Fri, Apr 8, 2011 at 10:50 AM, David Malcolm <dmalcolm at redhat.com> wrote:

> On Fri, 2011-04-08 at 21:29 +1000, Nick Coghlan wrote:
> > A few odds and ends from recent discussions finally clicked into
> > something potentially interesting earlier this evening. Or possibly
> > just something insane. I'm not quite decided on that point as yet (but
> > leaning towards the latter).
>
> I too am leaning towards the latter (I'm afraid my first thought was to
> check the date on the email); as Michael said, I too don't think it
> stands much of a chance in core.
>
> > Anyway, without further ado, I present:
> >
> > AST Transformation Hooks for Domain Specific Languages
> > ======================================================
>
> This reminds me a lot of Mython:
>  http://mython.org/
> If you haven't seen it, it's well worth a look.
>
> My favourite use case for this kind of thing is having the ability to
> embed shell pipelines into Python code, by transforming bash-style
> syntax into subprocess calls (it's almost possible to do all this in
> regular Python by overloading the | and > operators, but not quite).
>
> > Consider:
> >
> > # In some other module
> > ast.register_dsl("dsl.sql", dsl.sql.TransformAST)
>
> Where is this registered?   Do you have to import this "other module"
> before importing the module using "dsl.sql" ?   It sounds like this is
> global state for the interpreter.
>
> > # In a module using that DSL
>
> How is this usage expressed?  via the following line?
>
> > import dsl.sql
>
> I see the "import dsl.sql" here, but surely you have to somehow process
> the "import" in order to handle the rest of the parsing.
>
> This is reminiscent of the "from __future__ " specialcasing in the
> parser.  But from my understanding of CPython's Python/future.c, you
> already have an AST at that point (mod_ty, from Python/compile.c).
> There seems to be a chicken-and-egg problem with this proposal.
>
> Though another syntax might read:
>
>  from __dsl__ import sql
>
> to perhaps emphasize that something magical is about to happen.
>
> [...snip example of usage of a DSL, and the AST it gets parsed to...]
>
> Where and how would the bytes of the file using the DSL get converted to
> an in-memory tree representation?
>
> IIRC, manipulating AST nodes in CPython requires some care: the parser
> has its own allocator (PyArena), and the entities it allocates have a
> shared lifetime that ends when PyArena_Free occurs.
>
> > So there you are, that's the crazy idea. The stoning of the heretic
> > may now commence :)
>
> Or, less violently, take it to python-ideas?  (though I'm not subscribed
> there, fwiw, make of that what you will)
>
> One "exciting" aspect of this is that if someone changes the DSL file,
> the meaning of all of your code changes from under you.  This may or may
> not be a sane approach to software development :)
>
> (I also worry what this means e.g. for people writing text editors,
> syntax highlighters, etc; insert usual Alan Perlis quote about syntactic
> sugar causing cancer of the semicolon)
>
> Also, insert usual comments about the need to think about how
> non-CPython implementations of Python would go about implementing such
> ideas.
>
> > Where this idea came from was the various discussions about "make
> > statement" style constructs and a conversation I had with Eric Snow at
> > Pycon about function definition time really being *too late* to do
> > anything particularly interesting that couldn't already be handled
> > better in other ways. Some tricks Dave Malcolm had done to support
> > Python level manipulation of the AST during compilation also played a
> > big part, as did Eugene Toder's efforts to add an AST optimisation
> > step to the compilation process.
>
> Like I said earlier, have a look at Mython
>
> Hope this is helpful
> Dave
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/ericsnowcurrently%40gmail.com
>

Someone brought up some of the same stuff in the python-ideas thread and
Nick responded there, particularly about the import question.

From solipsis at pitrou.net  Fri Apr  8 19:40:06 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Fri, 8 Apr 2011 19:40:06 +0200
Subject: [Python-Dev] [Python-checkins] cpython (3.1): Issue 11715:
 Build extension modules on multiarch Debian and Ubuntu by
References: <E1Q7ZRw-0006mm-EQ@dinsdale.python.org>
	<80047896a9eea9ed48949df5e5d08524@netwok.org>
Message-ID: <20110408194006.04b33cd7@pitrou.net>

On Fri, 08 Apr 2011 18:10:35 +0200
Éric Araujo <merwok at netwok.org> wrote:
>  Hi,
> 
> > http://hg.python.org/cpython/rev/7582a78f573b
> > branch:      3.1
> > user:        Barry Warsaw <barry at python.org>
> > summary:
> >   Issue 11715: Build extension modules on multiarch Debian and Ubuntu 
> > by
> > extending search paths to include multiarch directories.
> >
> > diff --git a/setup.py b/setup.py
> 
> > +        if not os.path.exists(self.build_temp):
> > +            os.makedirs(self.build_temp)
> 
>  Isn't there a possible raise condition here?  I think it's recommended
>  to follow EAFP for mkdir and makedirs.

Since this is setup.py, I don't think we care.
(I assume you meant "race condition", not "raise condition")

Regards

Antoine.



From dasdasich at googlemail.com  Fri Apr  8 20:21:12 2011
From: dasdasich at googlemail.com (DasIch)
Date: Fri, 8 Apr 2011 20:21:12 +0200
Subject: [Python-Dev] [GSoC] Developing a benchmark suite (for Python
	3.x)
In-Reply-To: <BANLkTim80hX2_K393XugVfOfU1oCvyboNA@mail.gmail.com>
References: <BANLkTim80hX2_K393XugVfOfU1oCvyboNA@mail.gmail.com>
Message-ID: <BANLkTikQPomGLL-b4NeB91BZgRFGY6pVDQ@mail.gmail.com>

I talked to Fijal about my project last night; the result is that
the project, as is, is basically not that interesting, because the means
to execute the benchmarks on multiple interpreters are currently
missing.

Another point we talked about was that porting the benchmarks would
not be very useful as the interesting ones all have dependencies which
have not (yet) been ported to Python 3.x.

The first point, execution on multiple interpreters, has to be solved
or this project is pretty much pointless; therefore I've changed my
proposal to include just that. However, the proposal still includes
porting the benchmarks, although this is planned to happen after the
development of an application able to run the benchmarks on multiple
interpreters.

The reason for this is that even though the portable benchmarks might
not prove to be that interesting, the basic machinery for porting with
2to3 would be there, making it easier to port benchmarks in the
future as the dependencies become available under Python 3.x. However,
I plan to do that after implementing the previously mentioned
application, giving the application higher priority.

This way, should I not be able to complete all my goals, it is
unlikely that anything but the porting will suffer and the project
would still produce useful results during the GSoC.

Anyway here is the current, updated, proposal:

Abstract
=======

As of now there are several benchmark suites used by Python
implementations: PyPy uses the benchmarks[1] developed for the Unladen
Swallow[2] project as well as several other benchmarks they
implemented on their own, while CPython[3] uses the Unladen Swallow
benchmarks and several "crap benchmarks used for historical
reasons"[4].

This makes comparisons unnecessarily hard and causes confusion. As a
solution to this problem I propose merging the existing benchmarks -
at least those considered worth having - into a single benchmark suite
which can be shared by all implementations and ported to Python 3.x.

Another problem reported by Maciej Fijalkowski is that currently the
way benchmarks are executed by PyPy is more or less a hack. Work will
have to be done to allow execution of the benchmarks on different
interpreters and their most recent versions (from their respective
repositories). The application for this should also be able to upload
the results to a codespeed instance such as http://speed.pypy.org.

Milestones
=========
The project can be divided into several milestones:

1. Definition of the benchmark suite. This will entail contacting
developers of Python implementations (CPython, PyPy, IronPython and
Jython), via discussion on the appropriate mailing lists. This might
be achievable as part of this proposal.
2. Merging the benchmarks. Based on the prior agreed upon definition,
the benchmarks will be merged into a single suite.
3. Implementing a system to run the benchmarks. In order to execute
the benchmarks it will be necessary to have a configurable application
which downloads the interpreters from their repositories, builds them
and executes the benchmarks with them.
4. Porting the suite to Python 3.x. The suite will be ported to 3.x
using 2to3[5], as far as possible. The usage of 2to3 will make it
easier to make changes to the repository, especially for those still
focusing on 2.x. It is to be expected that some benchmarks cannot be
ported due to dependencies which are not available on Python 3.x.
Those will be ignored by this project to be ported at a later time,
when the necessary requirements are met.

Start of Program (May 24)
======================

Before the coding (milestones 2 and 3) can begin, it is necessary to
agree upon a set of benchmarks everyone is happy with, as described above.

Midterm Evaluation (July 12)
=======================

During the midterm I want to merge the benchmarks and implement a way
to execute them.

Final Evaluation (Aug 16)
=====================

In this period the benchmark suite will be ported. If everything works
out perfectly I will even have some time left; if there are problems, I
have a buffer here.

Implementation of the Benchmark Runner
==================================

In order to run the benchmarks I propose a simple application which
can be configured to download multiple interpreters, to build them and
execute the benchmarks. The configuration could be similar to tox[6],
downloads of the interpreters could be handled using anyvc[7].
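
A minimal sketch of what the runner's core loop could look like (the
interpreter paths and benchmark names below are placeholders; in practice
they would come from the configuration):

    import subprocess, time

    INTERPRETERS = {'cpython': '/usr/local/bin/python3',
                    'pypy': '/usr/local/bin/pypy'}
    BENCHMARKS = ['bm_nbody.py', 'bm_json.py']

    def run_all():
        # Run every benchmark under every configured interpreter
        results = {}
        for impl, binary in INTERPRETERS.items():
            for bench in BENCHMARKS:
                start = time.time()
                subprocess.check_call([binary, bench])
                results[impl, bench] = time.time() - start
        return results

    # The resulting timings could then be uploaded to a codespeed instance.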

For a site such as http://speed.pypy.org, a cronjob, buildbot, or
whatever else is preferred could be set up to execute the application
regularly.

Repository Handling
================

The code for the project will be developed in a Mercurial[8]
repository hosted on Bitbucket[9], both PyPy and CPython use Mercurial
and most people in the Python community should be able to use it.

Probably Asked Questions
======================

Why not use one of the existing benchmark suites for porting?

The effort will be wasted if there is no good base to build upon;
creating a new benchmark suite based upon the existing ones ensures
that there is.

Why not use Git/Bazaar/...?

Mercurial is used by CPython, PyPy and is fairly well known and used
in the Python community. This ensures easy accessibility for everyone.

What will happen with the Repository after GSoC/How will access to the
repository be handled?

I propose to give administrative rights to one or two representatives
of each project. Those will provide other developers with write
access.

Communication
=============

Communication of the progress will be done via Twitter[10] and my
blog[11], if desired I can also send an email with the contents of the
blog post to the mailing lists of the implementations. Furthermore I
am usually quick to answer via IRC(DasIch on freenode), Twitter or
E-Mail(dasdasich at gmail.com) if anyone has any questions.

Contact to the mentor can be established via the means mentioned above
or via Skype.

About Me
========
My name is Daniel Neuhäuser, I am 19 years old and currently a student
at the Bergstadt-Gymnasium Lüdenscheid[12]. I started programming
(with Python) about 4 years ago and became a member of the Pocoo
Team[13] after successfully participating in the Google Summer of Code
last year, during which I ported Sphinx[14] to Python 3.x and
implemented an algorithm to diff abstract syntax trees to preserve
comments and translated strings which has been used by the other GSoC
projects targeting Sphinx.


.. [1]: https://bitbucket.org/pypy/benchmarks/src
.. [2]: http://code.google.com/p/unladen-swallow/
.. [3]: http://hg.python.org/benchmarks/file/tip/performance
.. [4]: http://hg.python.org/benchmarks/file/62e754c57a7f/performance/README
.. [5]: http://docs.python.org/library/2to3.html
.. [6]: http://codespeak.net/tox/
.. [7]: http://anyvc.readthedocs.org/en/latest/?redir
.. [8]: http://mercurial.selenic.com/
.. [9]: https://bitbucket.org/
.. [10]: http://twitter.com/#!/DasIch
.. [11]: http://dasdasich.blogspot.com/
.. [12]: http://bergstadt-gymnasium.de/
.. [13]: http://www.pocoo.org/team/#daniel-neuhauser
.. [14]: http://sphinx.pocoo.org/

From tjreedy at udel.edu  Fri Apr  8 23:06:22 2011
From: tjreedy at udel.edu (Terry Reedy)
Date: Fri, 08 Apr 2011 17:06:22 -0400
Subject: [Python-Dev] AST Transformation Hooks for Domain Specific
	Languages
In-Reply-To: <BANLkTim+PJiPHjWrSE-55PczBiBjMR5uWA@mail.gmail.com>
References: <BANLkTi=NcHXn0USiQkMMADHy2ZNLgDD3Gw@mail.gmail.com>
	<BANLkTim+PJiPHjWrSE-55PczBiBjMR5uWA@mail.gmail.com>
Message-ID: <4D9F78CE.4090600@udel.edu>

On 4/8/2011 1:14 PM, Jon Riehl wrote:

> I have a mostly functioning front end for 2.X that does these
> expansions (MyFront), and I'm waiting for a stable Mercurial migration

Done and in use for over a month. http://hg.python.org/

Further discussion of this idea is on the python-ideas list.
(The posting to pydev was an accident.)

-- 
Terry Jan Reedy


From ncoghlan at gmail.com  Sat Apr  9 02:22:35 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sat, 9 Apr 2011 10:22:35 +1000
Subject: [Python-Dev] [Python-checkins] cpython (3.1): Issue 11715:
 Build extension modules on multiarch Debian and Ubuntu by
In-Reply-To: <20110408194006.04b33cd7@pitrou.net>
References: <E1Q7ZRw-0006mm-EQ@dinsdale.python.org>
	<80047896a9eea9ed48949df5e5d08524@netwok.org>
	<20110408194006.04b33cd7@pitrou.net>
Message-ID: <BANLkTiksHa8-w9mzCtem7WnSmefezNLVrg@mail.gmail.com>

On Sat, Apr 9, 2011 at 3:40 AM, Antoine Pitrou <solipsis at pitrou.net> wrote:
>>  Isn't there a possible raise condition here?  I think it's recommended
>>  to follow EAFP for mkdir and makedirs.
>
> Since this is setup.py, I don't think we care.
> (I assume you meant "race condition", not "raise condition")

Indeed, the pre-check is OK here due to the fact that we control
"build_temp", so other processes shouldn't be creating a directory
with the same name.
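
For the record, the EAFP spelling Éric alludes to would look something
like this (illustrative only, not what was committed):

    import errno, os
    try:
        os.makedirs(self.build_temp)
    except OSError as e:
        # Only swallow "already exists"; re-raise anything else
        if e.errno != errno.EEXIST:
            raise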

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From merwok at netwok.org  Sat Apr  9 18:23:34 2011
From: merwok at netwok.org (=?UTF-8?Q?=C3=89ric_Araujo?=)
Date: Sat, 09 Apr 2011 18:23:34 +0200
Subject: [Python-Dev] PEP 396, Module Version Numbers
In-Reply-To: <4D9CE1E8.9000203@g.nevcal.com>
References: "\"<20110405145213.29f706aa@neurotica.wooz.org>"
	<4D9B7A1F.3070106@g.nevcal.com>"
	<BANLkTi==iQ1jwgUG7XFejwnapPreViKfLw@mail.gmail.com>
	<4D9CE1E8.9000203@g.nevcal.com>
Message-ID: <c53b77e7bf942c8c0007c7096c9b6ce6@netwok.org>

 Hi,

 Le 06/04/2011 23:58, Glenn Linderman a écrit :
> On 4/6/2011 7:26 AM, Nick Coghlan wrote:
>> On Wed, Apr 6, 2011 at 6:22 AM, Glenn Linderman <v+python at g.nevcal.com> wrote:
>>> With more standardization of versions, should the version module be
>>> promoted to stdlib directly?
>> When Tarek lands "packaging" (i.e. what distutils2 becomes in the
>> Python 3.3 stdlib), the standardised version handling will come with
>> it.
>
> I thought that might be part of the answer :)  But that, and below, seem
> to indicate that use of "packaging" suddenly becomes a requirement for
> all modules that want to include versions.  The packaging of "version"
> inside a version of "packaging" implies more dependencies on a larger
> body of code for a simple function.
 Not really.  Some parts of distutils2/packaging are standalone modules
 that are deliberately exposed publicly for third parties to use:
 version, metadata, markers, etc.  packaging.version is just the full
 name of the module implementing PEP 386.  (The first implementation,
 called verlib, has not been explicitly ended, nor the references in the
 PEP updated.)

> So, no support for single .py file modules, then?
 From packaging's viewpoint, a project (something with a name and a
 version) is a directory with a setup.cfg file.  The directory can
 contain zero or more Python modules, Python packages, extension modules
 or data files.

> Caveat: I'm not 100% clear on when/how any of "distutils", 
> "setuptools",
> or "packaging" are invoked

 FTR: setuptools is a monkey-patching set of extensions to distutils;
 packaging is a full replacement of distutils.  packaging does not 
 depend
 on distutils nor setuptools; it is a fork of distutils with some ideas
 and code taken from setuptools.

 Regards

From merwok at netwok.org  Sat Apr  9 18:23:48 2011
From: merwok at netwok.org (=?UTF-8?Q?=C3=89ric_Araujo?=)
Date: Sat, 09 Apr 2011 18:23:48 +0200
Subject: [Python-Dev] PEP 396, Module Version Numbers
In-Reply-To: <4D9D9BC3.7040101@voidspace.org.uk>
References: "\"<20110405145213.29f706aa@neurotica.wooz.org>"
	<4D9B7A1F.3070106@g.nevcal.com>"
	<BANLkTi==iQ1jwgUG7XFejwnapPreViKfLw@mail.gmail.com>
	<4D9D9BC3.7040101@voidspace.org.uk>
Message-ID: <2e96c2afb77a3e4f17f8934bc653697a@netwok.org>

 Hi,

 Le 07/04/2011 13:10, Michael Foord a écrit :
>>> On 4/5/2011 11:52 AM, Barry Warsaw wrote:
>>>      __version__ = pkgutil.get_distribution('elle').metadata['version']
> I really dislike this way of specifying the version. For a start it is
> really ugly.
>
> More importantly it means the version information is *only* available if
> the package has been installed by "packaging", and so isn't available
> for the parts of my pre-build process like building the documentation
> (which import the version number to put into the docs).
>
> Currently all my packages have the canonical version number information
> in the package itself using:
>
>      __version__ = '1.2.3'
>
> Anything that needs the version number, including setup.py for upload to
> pypi, has one place to look for it and it doesn't depend on any other
> tools or processes. If switching to "packaging" prevents me from doing
> this then it will inhibit me using "packaging".

 This is similar to my own comment on distutils-sig:

> One of the main complaints against setuptools is that having to change
> your application code because of the packaging tool used was not a good
> idea.  Having to use require instead of import or resource_whatever
> instead of open (or get_data, the most sadly underused function in the
> stdlib) because you use setuptools instead of distutils was a bad thing.
>
> As stated in the PEP, having a __version__ attribute in the module is
> common, so my opinion is that making the packaging tool use that info is
> the Right Thing™, and having the dependency in the reverse sense is
> wrong.  I don't see a problem with having harmless duplication in the
> *installed* system, once in  elle.__version__ and once in the pkgutil
> metadata database.

 Barry?s reply:

> I'm including this section because at Pycon, some people did express an
> interest in deriving the version number in this direction.  I wanted to
> capture what that might look like.  Since this is an informational PEP, I
> think it makes sense to include alternative approaches, but I tend to agree
> with you that it will be much more common to define module.__version__ and
> derive the metadata from that.

 IOW, you can define the version only once, either in your source file 
 or
 in the setup.cfg file, and the PEP describes how to get that info from
 the other place.  My personal opinion is that the approach using
 pkgutil.get_distribution should be much less prominent than the one
 putting the version in the Python code.
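
 (For illustration, deriving the metadata from the code could be as simple
 as this sketch; the file name elle.py is just the example name from the
 thread, and the helper itself is hypothetical:)

     import re

     def read_version(path='elle.py'):
         # Pull the string literal assigned to __version__ out of the source
         with open(path) as f:
             match = re.search(r"^__version__\s*=\s*['\"]([^'\"]+)['\"]",
                               f.read(), re.M)
         if match is None:
             raise RuntimeError('no __version__ found in %s' % path)
         return match.group(1)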

 Regards

From v+python at g.nevcal.com  Sat Apr  9 20:21:28 2011
From: v+python at g.nevcal.com (Glenn Linderman)
Date: Sat, 09 Apr 2011 11:21:28 -0700
Subject: [Python-Dev] PEP 396, Module Version Numbers
In-Reply-To: <c53b77e7bf942c8c0007c7096c9b6ce6@netwok.org>
References: "\"<20110405145213.29f706aa@neurotica.wooz.org>"
	<4D9B7A1F.3070106@g.nevcal.com>"
	<BANLkTi==iQ1jwgUG7XFejwnapPreViKfLw@mail.gmail.com>
	<4D9CE1E8.9000203@g.nevcal.com>
	<c53b77e7bf942c8c0007c7096c9b6ce6@netwok.org>
Message-ID: <4DA0A3A8.2000702@g.nevcal.com>

On 4/9/2011 9:23 AM, Éric Araujo wrote:
> Hi,
>
> Le 06/04/2011 23:58, Glenn Linderman a écrit :
>> On 4/6/2011 7:26 AM, Nick Coghlan wrote:
>>> On Wed, Apr 6, 2011 at 6:22 AM, Glenn Linderman <v+python at g.nevcal.com> wrote:
>>>> With more standardization of versions, should the version module be
>>>> promoted to stdlib directly?
>>> When Tarek lands "packaging" (i.e. what distutils2 becomes in the
>>> Python 3.3 stdlib), the standardised version handling will come with
>>> it.
>>
>> I thought that might be part of the answer :)  But that, and below, seem
>> to indicate that use of "packaging" suddenly becomes a requirement for
>> all modules that want to include versions.  The packaging of "version"
>> inside a version of "packaging" implies more dependencies on a larger
>> body of code for a simple function.
> Not really.  Some parts of distutils2/packaging are standalone modules
> that are deliberately exposed publicly for third parties to use:
> version, metadata, markers, etc.  packaging.version is just the full
> name of the module implementing PEP 386.  (The first implementation,
> called verlib, has not been explicitly ended, nor the references in the
> PEP updated.)

Glad for the clarification here.  As long as packaging.version doesn't 
have dependencies on large amounts of code to be loaded from the rest of 
packaging, then I have no serious objection to that structure.

Although, writing "packaging.version" just now, it seems like the result 
could be perceived as the "packaging version" which might be distinct 
from the "real version" or "actual version" or "code version".  That's 
just a terminology thing that could be overcome with adequate documentation.

Then there is the question Nick raised about distributions that want to 
cut back on space, and provide alternate mechanisms than Python's 
packaging module to distribute code... producing the perception that 
they could avoid including packaging in their smallest distribution (but 
would probably package a packaging package using their packaging 
mechanism).  If that dropped packaging.version, it could be problematic 
for doing version checking in applications.  So distributions might have 
to split apart the packaging package.

Finally, if there are other competing historical packaging mechanisms, 
or new ones develop in the future, packaging packaging.version 
separately from the rest of packaging might avoid having different 
versions of version: when it is included, a replacement system might think 
it needs to also replace that component for completeness.

So I still favor a top-level version module that isn't included in 
packaging, and implementation that doesn't depend on packaging.  But 
only at +0.


>> So, no support for single .py file modules, then?
> From packaging?s viewpoint, a project (something with a name and a
> version) is a directory with a setup.cfg file.  The directory can
> contain zero or more Python modules, Python packages, extension modules
> or data files.

 From packaging's viewpoint, there is nothing wrong with that.  But from 
a non-packaging viewpoint, a user of other distribution mechanisms 
wouldn't necessarily want to or need to create a setup.cfg file, nor be 
forced to obtain __version__ by calling a packaging method.  Not that the 
PEP as written currently requires that.

>> Caveat: I'm not 100% clear on when/how any of "distutils", "setuptools",
>> or "packaging" are invoked
>
> FTR: setuptools is a monkey-patching set of extensions to distutils:
> packaging is a full replacement of distutils.  packaging does not depend
> on distutils nor setuptools; it is a fork of distutils with some ideas
> and code taken from setuptools.

Thanks, but that part I actually knew from recent discussion on this 
list.  What I'm not clear on is whether modules packaged by any of 
distutils, setuptools, or packaging, because of being packaged by them, 
wind up including  0%, 10%, 90%, or 100% of the code from distutils, 
setuptools, or packaging at runtime.  My favored percentage would be 0%, 
as I believe a packaging system should do its stuff at the time of 
making, distributing, and installing code, but should get out of the way 
of the runtime code, but I would find maybe 10% acceptable, if it was 
only one or two files to be searched for and included, to achieve 
significant benefits to the packaged code.

From ben+python at benfinney.id.au  Sun Apr 10 00:52:24 2011
From: ben+python at benfinney.id.au (Ben Finney)
Date: Sun, 10 Apr 2011 08:52:24 +1000
Subject: [Python-Dev] PEP 396, Module Version Numbers
References: <20110405145213.29f706aa@neurotica.wooz.org>
Message-ID: <874o676nrr.fsf@benfinney.id.au>

Howdy Barry,

Nitpick: Please call these “version strings”. A version string is hardly
ever just one number, and not in the general case anyway.


I'd like to suggest another user story:

Barry Warsaw <barry at python.org> writes:

> User Stories
> ============

    Emily maintains a package consisting of programs and modules in
    several languages that inter-operate; several are Python, but some
    are Unix shell, Perl, and there are some C modules. Emily decides
    the simplest API for all these modules to get the package version
    string is a single text file named ``version`` at the root of the
    project tree. All the programs and modules, including the
    ``setup.py`` file, simply read the contents of ``version`` to get
    the version string.

This is an often-overlooked case, I think. The unspoken assumption is
often that ``setup.py`` is a suitable place for the overall version
string, but this is not the case when that string must be read by
non-Python programs.
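
A sketch of what Emily's Python side might look like (the file name
``version`` and its location are assumptions of the story, not a proposal):

    import os

    def read_version_file(root=os.path.dirname(os.path.abspath(__file__))):
        # The shell, Perl and C parts just read the same one-line file.
        with open(os.path.join(root, 'version')) as f:
            return f.read().strip()

    __version__ = read_version_file()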

-- 
 \      “If you can't beat them, arrange to have them beaten.” —George |
  `\                                                            Carlin |
_o__)                                                                  |
Ben Finney

From debatem1 at gmail.com  Sun Apr 10 02:17:48 2011
From: debatem1 at gmail.com (geremy condra)
Date: Sat, 9 Apr 2011 17:17:48 -0700
Subject: [Python-Dev] Developer wish list
Message-ID: <BANLkTinj_hAxDP7U7cJUEr7jzc7cj-urag@mail.gmail.com>

Earlier this year there was some discussion[0] about putting up a page
on the wiki where developers could list the feature proposals they
most wanted and most hated for the benefit of those posting to
python-ideas. It's taken me a while to get around to it, but I've put
up a skeleton for the page at [1] and would love it if some of you
guys would take a look, let me know what you like/don't like, and
maybe even post a few of your pet projects or pet peeves.

Thanks for your time and effort,
Geremy Condra

[0]: http://mail.python.org/pipermail/python-ideas/2011-March/009230.html
[1]: http://wiki.python.org/moin/wishlist

From pje at telecommunity.com  Sun Apr 10 06:02:52 2011
From: pje at telecommunity.com (P.J. Eby)
Date: Sun, 10 Apr 2011 00:02:52 -0400
Subject: [Python-Dev] PEP 396, Module Version Numbers
In-Reply-To: <874o676nrr.fsf@benfinney.id.au>
References: <20110405145213.29f706aa@neurotica.wooz.org>
	<874o676nrr.fsf@benfinney.id.au>
Message-ID: <20110410040255.749CB3A4077@sparrow.telecommunity.com>

At 08:52 AM 4/10/2011 +1000, Ben Finney wrote:
>This is an often-overlooked case, I think. The unspoken assumption is
>often that ``setup.py`` is a suitable place for the overall version
>string, but this is not the case when that string must be read by
>non-Python programs.

If you haven't used the distutils a lot, you might not realize that 
you can do this:

$ python setup.py --version
0.6c12

(The --name option also works, and they can be used together -- the 
answers will be on two separate lines.)


From exarkun at twistedmatrix.com  Sun Apr 10 17:24:06 2011
From: exarkun at twistedmatrix.com (exarkun at twistedmatrix.com)
Date: Sun, 10 Apr 2011 15:24:06 -0000
Subject: [Python-Dev] PEP 396, Module Version Numbers
In-Reply-To: <20110410040255.749CB3A4077@sparrow.telecommunity.com>
References: <20110405145213.29f706aa@neurotica.wooz.org>
	<874o676nrr.fsf@benfinney.id.au>
	<20110410040255.749CB3A4077@sparrow.telecommunity.com>
Message-ID: <20110410152406.1992.322424191.divmod.xquotient.371@localhost.localdomain>

On 04:02 am, pje at telecommunity.com wrote:
>At 08:52 AM 4/10/2011 +1000, Ben Finney wrote:
>>This is an often-overlooked case, I think. The unspoken assumption is
>>often that ``setup.py`` is a suitable place for the overall version
>>string, but this is not the case when that string must be read by
>>non-Python programs.
>
>If you haven't used the distutils a lot, you might not realize that you 
>can do this:
>
>$ python setup.py --version
>0.6c12
>
>(The --name option also works, and they can be used together -- the 
>answers will be on two separate lines.)

This only works as long as setup.py is around - which it typically no 
longer is after installation is complete.

And though it's common and acceptable enough to launch a child process 
in a shell script in order to get some piece of information, it isn't as 
pleasant in a Python program.  Can you get this version information out 
of setup.py without running a child process and without monkey-patching 
sys.argv and sys.stdout?

Jean-Paul

From pje at telecommunity.com  Sun Apr 10 19:42:56 2011
From: pje at telecommunity.com (P.J. Eby)
Date: Sun, 10 Apr 2011 13:42:56 -0400
Subject: [Python-Dev] PEP 396, Module Version Numbers
In-Reply-To: <20110410152406.1992.322424191.divmod.xquotient.371@localho
	st.localdomain>
References: <20110405145213.29f706aa@neurotica.wooz.org>
	<874o676nrr.fsf@benfinney.id.au>
	<20110410040255.749CB3A4077@sparrow.telecommunity.com>
	<20110410152406.1992.322424191.divmod.xquotient.371@localhost.localdomain>
Message-ID: <20110410174302.F03003A4080@sparrow.telecommunity.com>

At 03:24 PM 4/10/2011 +0000, exarkun at twistedmatrix.com wrote:
>On 04:02 am, pje at telecommunity.com wrote:
>>At 08:52 AM 4/10/2011 +1000, Ben Finney wrote:
>>>This is an often-overlooked case, I think. The unspoken assumption is
>>>often that ``setup.py`` is a suitable place for the overall version
>>>string, but this is not the case when that string must be read by
>>>non-Python programs.
>>
>>If you haven't used the distutils a lot, you might not realize that 
>>you can do this:
>>
>>$ python setup.py --version
>>0.6c12
>>
>>(The --name option also works, and they can be used together -- the 
>>answers will be on two separate lines.)
>
>This only works as long as setup.py is around - which it typically 
>no longer is after installation is complete.
>
>And though it's common and acceptable enough to launch a child 
>process in a shell script in order to get some piece of information, 
>it isn't as pleasant in a Python program.  Can you get this version 
>information out of setup.py without running a child process and 
>without monkey-patching sys.argv and sys.stdout?

I was replying to the part above about "setup.py ...  must be read by 
non-Python programs".

In other words, I thought the question was, "given a 
not-yet-installed source package, how can we find the version number 
without writing Python code".  Your question is a bit different.  ;-)

As it happens, if you have a source distribution of a package, you 
can expect to find a PKG-INFO file that contains version info anyway, 
generated from the source file.  This is true for both distutils and 
setuptools-built source distributions.  (It is not the case, alas, 
for simple revision control checkouts.)
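
Since PKG-INFO is just a block of RFC 822-style headers, pulling the version
out of an unpacked sdist doesn't even need distutils.  A rough sketch (the
directory name is made up):

    from email.parser import Parser

    # PKG-INFO holds the metadata written by sdist, including the Version
    # field taken from the version argument passed to setup().
    with open('SomeProject-1.0/PKG-INFO') as infile:
        metadata = Parser().parse(infile)
    print(metadata['Version'])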

Anyway, I was merely addressing the technical question of how to get 
information from the tools that already exist, rather than advocating 
any solutions.

And, along that same line, monkeypatching sys.argv and sys.stdout 
aren't technically necessary for you to get the information from a 
setup script, but a sandbox to keep the setup script from trying to 
do any installation steps is probably a good idea.  (Some people have 
written setup scripts that actually copy files or do other things 
before they even call setup().  Nasty -- and one of the reasons that 
easy_install has a sandboxing facility.)
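
That said, distutils can hand you the metadata in-process if you accept the
caveat above.  A sketch (it assumes a setup.py in the current directory, and
it *does* execute the script's module-level code):

    from distutils.core import run_setup

    # Parse setup.py, but stop once the Distribution object has been
    # populated with the setup() keyword arguments; no commands are run.
    dist = run_setup('setup.py', stop_after='init')
    print(dist.metadata.version)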


From pjenvey at underboss.org  Sun Apr 10 21:41:23 2011
From: pjenvey at underboss.org (Philip Jenvey)
Date: Sun, 10 Apr 2011 12:41:23 -0700
Subject: [Python-Dev] Hosting the Jython hg repo
Message-ID: <A75E64A6-247D-41A8-B6D8-3CAA96D94616@underboss.org>

There's been some chatter in the past about moving some of Jython's infrastructure from SF.net to python.org.

We're in the process of finishing the conversion of Jython's subversion repo to mercurial. Can we host our new repo on http://hg.python.org? To whom should I speak to about setting this up?

The one question that comes to mind is how will repo write permissions be handled/shared between all our repos? With the recent policy of granting Jython committers cpython commit access if they want it, sharing the permissions wouldn't be a problem. In turn we like the idea of reciprocating commit rights to Jython back to cpython committers.

--
Philip Jenvey

From martin at v.loewis.de  Sun Apr 10 21:58:40 2011
From: martin at v.loewis.de ("Martin v. Löwis")
Date: Sun, 10 Apr 2011 21:58:40 +0200
Subject: [Python-Dev] Hosting the Jython hg repo
In-Reply-To: <A75E64A6-247D-41A8-B6D8-3CAA96D94616@underboss.org>
References: <A75E64A6-247D-41A8-B6D8-3CAA96D94616@underboss.org>
Message-ID: <4DA20BF0.4020604@v.loewis.de>

> We're in the process of finishing the conversion of Jython's
> subversion repo to mercurial. Can we host our new repo on
> http://hg.python.org? To whom should I speak to about setting this
> up?

Georg Brandl and Antoine Pitrou are managing the Mercurial repositories.

> The one question that comes to mind is how will repo write
> permissions be handled/shared between all our repos? With the recent
> policy of granting Jython committers cpython commit access if they
> want it, sharing the permissions wouldn't be a problem. In turn we
> like the idea of reciprocating commit rights to Jython back to
> cpython committers.

In the past, granting permissions generously wasn't a problem as long
as users were aware which repositories they are allowed to commit to.
There isn't a true push log at this point (IIUC), but at least for
cpython, we can audit what changes have been pushed by what user through
the commit emails.
As it is always possible to revert undesired changes, and to revoke
privileges that are abused, there is no reason to technically enforce
access control.

IOW, cpython committers just shouldn't push to Jython's repository,
and vice versa, except for a good reason.

Ultimately, it's up to Georg and Antoine to decide whether they want
to accept the load. One option would be to grant a Jython developer
control to account management - preferably a single person, who would
then also approve/apply changes to the hooks.

Regards,
Martin

From g.brandl at gmx.net  Sun Apr 10 22:15:04 2011
From: g.brandl at gmx.net (Georg Brandl)
Date: Sun, 10 Apr 2011 22:15:04 +0200
Subject: [Python-Dev] Hosting the Jython hg repo
In-Reply-To: <4DA20BF0.4020604@v.loewis.de>
References: <A75E64A6-247D-41A8-B6D8-3CAA96D94616@underboss.org>
	<4DA20BF0.4020604@v.loewis.de>
Message-ID: <int34a$roc$1@dough.gmane.org>

On 10.04.2011 21:58, "Martin v. Löwis" wrote:
>> We're in the process of finishing the conversion of Jython's
>> subversion repo to mercurial. Can we host our new repo on
>> http://hg.python.org? To whom should I speak to about setting this
>> up?
> 
> Georg Brandl and Antoine Pitrou are managing the Mercurial repositories.
> 
>> The one question that comes to mind is how will repo write
>> permissions be handled/shared between all our repos? With the recent
>> policy of granting Jython committers cpython commit access if they
>> want it, sharing the permissions wouldn't be a problem. In turn we
>> like the idea of reciprocating commit rights to Jython back to
>> cpython committers.

At the moment, any core developer can push to any repo on hg.python.org.
I would very much like to keep it that way, it makes administration
much easier.  If you're okay with it, then I'm glad to set you and all
Jython committers up at hg.python.org.

> In the past, granting permissions generously wasn't a problem as long
> as users were aware which repositories they are allowed to commit to.
> There isn't a true push log at this point (IIUC), but at least for
> cpython, we can audit what changes have been pushed by what user through
> the commit emails.
> As it is always possible to revert undesired changes, and to revoke
> privileges that are abused, there is no reason to technically enforce
> access control.
> 
> IOW, cpython committers just shouldn't push to Jython's repository,
> and vice versa, except for a good reason.

And I believe that will work.

Georg


From pjenvey at underboss.org  Sun Apr 10 22:37:03 2011
From: pjenvey at underboss.org (Philip Jenvey)
Date: Sun, 10 Apr 2011 13:37:03 -0700
Subject: [Python-Dev] Hosting the Jython hg repo
In-Reply-To: <4DA20BF0.4020604@v.loewis.de>
References: <A75E64A6-247D-41A8-B6D8-3CAA96D94616@underboss.org>
	<4DA20BF0.4020604@v.loewis.de>
Message-ID: <71815E41-019E-4D2F-8985-7EF014EB1380@underboss.org>


On Apr 10, 2011, at 12:58 PM, Martin v. Löwis wrote:

>> The one question that comes to mind is how will repo write
>> permissions be handled/shared between all our repos? With the recent
>> policy of granting Jython committers cpython commit access if they
>> want it, sharing the permissions wouldn't be a problem. In turn we
>> like the idea of reciprocating commit rights to Jython back to
>> cpython committers.
> 
> In the past, granting permissions generously wasn't a problem as long
> as users were aware which repositories they are allowed to commit to.
> There isn't a true push log at this point (IIUC), but at least for
> cpython, we can audit what changes have been pushed by what user through
> the commit emails.
> As it is always possible to revert undesired changes, and to revoke
> privileges that are abused, there is no reason to technically enforce
> access control.
> 
> IOW, cpython committers just shouldn't push to Jython's repository,
> and vice versa, except for a good reason.
> 
> Ultimately, it's up to Georg and Antoine to decide whether they want
> to accept the load. One option would be to grant a Jython developer
> control to account management - preferably a single person, who would
> then also approve/apply changes to the hooks.

That could be me. If this would mean creating an account for me on python.org so I could handle the majority of the maintenance instead of Georg & Antoine, I'd be up for that.

--
Philip Jenvey


From solipsis at pitrou.net  Sun Apr 10 23:44:23 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sun, 10 Apr 2011 23:44:23 +0200
Subject: [Python-Dev] Hosting the Jython hg repo
References: <A75E64A6-247D-41A8-B6D8-3CAA96D94616@underboss.org>
	<4DA20BF0.4020604@v.loewis.de>
Message-ID: <20110410234423.3d5e98c6@pitrou.net>

On Sun, 10 Apr 2011 21:58:40 +0200
"Martin v. L?wis" <martin at v.loewis.de> wrote:
> 
> Ultimately, it's up to Georg and Antoine to decide whether they want
> to accept the load.

I don't want to maintain the Jython repo myself, but if Georg or Philip
agrees to do it, that's fine.

> One option would be to grant a Jython developer
> control to account management - preferably a single person, who would
> then also approve/apply changes to the hooks.

+1.

Regards

Antoine.



From ben+python at benfinney.id.au  Mon Apr 11 01:51:36 2011
From: ben+python at benfinney.id.au (Ben Finney)
Date: Mon, 11 Apr 2011 09:51:36 +1000
Subject: [Python-Dev] PEP 396, Module Version Numbers
References: <20110405145213.29f706aa@neurotica.wooz.org>
	<874o676nrr.fsf@benfinney.id.au>
	<20110410040255.749CB3A4077@sparrow.telecommunity.com>
	<20110410152406.1992.322424191.divmod.xquotient.371@localhost.localdomain>
	<20110410174302.F03003A4080@sparrow.telecommunity.com>
Message-ID: <87k4f164xj.fsf@benfinney.id.au>

"P.J. Eby" <pje at telecommunity.com> writes:

> At 03:24 PM 4/10/2011 +0000, exarkun at twistedmatrix.com wrote:
> >On 04:02 am, pje at telecommunity.com wrote:
> >>At 08:52 AM 4/10/2011 +1000, Ben Finney wrote:
> >>>This is an often-overlooked case, I think. The unspoken assumption is
> >>>often that ``setup.py`` is a suitable place for the overall version
> >>>string, but this is not the case when that string must be read by
> >>>non-Python programs.

[...]

> Anyway, I was merely addressing the technical question of how to get
> information from the tools that already exist, rather than advocating
> any solutions.

Thanks for the reply; that capability wasn't really evident to me (like
just about everything in Setuptools).

> I was replying to the part above about "setup.py ...  must be read by
> non-Python programs".
>
> In other words, I thought the question was, "given a not-yet-installed
> source package, how can we find the version number without writing
> Python code".

No, that's not the intention of that use case; the non-Python programs
will obviously continue to need access to the package version string
even after the package is installed.

That's why the fictional Emily has decided to keep the version string in
a plain-text ``version`` file for that purpose.

-- 
 \     "We are all agreed that your theory is crazy. The question that |
  `\      divides us is whether it is crazy enough to have a chance of |
_o__)           being correct." --Niels Bohr (to Wolfgang Pauli), 1958 |
Ben Finney


From stephen_yeng at n-pinokyo.com  Mon Apr 11 02:00:14 2011
From: stephen_yeng at n-pinokyo.com (Stephen Yeng)
Date: Mon, 11 Apr 2011 08:00:14 +0800
Subject: [Python-Dev] Make test failed issues for phyton 3.2 on centos5.5
Message-ID: <BANLkTikm53ADCKCR7rDM=hudXtQ2VrZKgQ@mail.gmail.com>

Hello Python team,
I am new to installing Python on CentOS 5.5.
I hope you can help with the issues below from running "make test".

5 tests failed:
    test_argparse test_distutils test_httpservers test_import
    test_zipfile
31 tests skipped:
    test_bz2 test_codecmaps_cn test_codecmaps_hk test_codecmaps_jp
    test_codecmaps_kr test_codecmaps_tw test_curses test_dbm_gnu
    test_dbm_ndbm test_gdb test_gzip test_kqueue test_ossaudiodev
    test_readline test_smtpnet test_socketserver test_sqlite test_ssl
    test_startfile test_tcl test_timeout test_tk test_ttk_guionly
    test_ttk_textonly test_urllib2net test_urllibnet test_winreg
    test_winsound test_xmlrpc_net test_zipfile64 test_zlib
11 skips unexpected on linux2:
    test_bz2 test_dbm_gnu test_dbm_ndbm test_gzip test_readline
    test_ssl test_tcl test_tk test_ttk_guionly test_ttk_textonly
    test_zlib


I will post the shortest failed test ('test_zip') below, and if you allow me
to post the full log of the 5 failed tests I will do it.

== CPython 3.2 (r32:88445, Apr 10 2011, 11:18:27) [GCC 4.1.2 20080704 (Red
Hat 4.1.2-50)]
==   Linux-2.6.18-238.5.1.el5-i686-athlon-with-redhat-5.6-Final
little-endian
==   /tmp/Python-3.2/build/test_python_6187
Testing with flags: sys.flags(debug=0, division_warning=0, inspect=0,
interactive=0, optimize=0, dont_write_bytecode=0, no_user_site=0, no_site=0,
ignore_environment=0, verbose=0, bytes_warning=0, quiet=0)
[1/1] test_zipfiles
test test_zipfiles crashed -- <class 'ImportError'>: No module named
test_zipfiles
Traceback (most recent call last):
  File "/tmp/Python-3.2/Lib/test/regrtest.py", line 962, in runtest_inner
    the_package = __import__(abstest, globals(), locals(), [])
ImportError: No module named test_zipfiles
1 test failed:
    test_zipfiles

How should I fix the 5 failed tests above? Please help me with that, thank
you.

-- 
If you have any other question about your web portal please contact me. At
N-Pinokyo we value our customers and will be more than happy to assist you
with any other matter related to our service.

Regards,
Stephen Yeng

From vstinner at edenwall.com  Mon Apr 11 10:14:29 2011
From: vstinner at edenwall.com (Victor Stinner)
Date: Mon, 11 Apr 2011 10:14:29 +0200
Subject: [Python-Dev] Make test failed issues for phyton 3.2 on centos5.5
In-Reply-To: <BANLkTikm53ADCKCR7rDM=hudXtQ2VrZKgQ@mail.gmail.com>
References: <BANLkTikm53ADCKCR7rDM=hudXtQ2VrZKgQ@mail.gmail.com>
Message-ID: <201104111014.29933.vstinner@edenwall.com>

> [1/1] test_zipfiles
> test test_zipfiles crashed -- <class 'ImportError'>: No module named
> test_zipfiles

It means that you don't have a module named test_zipfiles. Retry with 
"test_zipfile" :-)

You may open an issue (including details) for your failures.

Victor

From fijall at gmail.com  Mon Apr 11 11:39:39 2011
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Mon, 11 Apr 2011 11:39:39 +0200
Subject: [Python-Dev] [GSoC] Developing a benchmark suite (for Python
	3.x)
In-Reply-To: <inmk53$aak$1@dough.gmane.org>
References: <BANLkTim80hX2_K393XugVfOfU1oCvyboNA@mail.gmail.com>
	<BANLkTi=hPxW998svwE3D=3K-gBShO89SXA@mail.gmail.com>
	<BANLkTinxo3ObyVr8qY-td0Vz160wzWzcKQ@mail.gmail.com>
	<inmk53$aak$1@dough.gmane.org>
Message-ID: <BANLkTin9Z08OjLDMMmphDO1joXHnZYnMcw@mail.gmail.com>

On Fri, Apr 8, 2011 at 11:22 AM, Stefan Behnel <stefan_ml at behnel.de> wrote:
> Jesse Noller, 07.04.2011 22:28:
>>
>> On Thu, Apr 7, 2011 at 3:54 PM, Anthony Scopatz wrote:
>>>
>>> Hi Daniel,
>>> Thanks for putting this together.  I am a huge supporter of benchmarking
>>> efforts.  My brief comment is below.
>>>
>>> On Wed, Apr 6, 2011 at 11:52 AM, DasIch wrote:
>>>>
>>>> 1. Definition of the benchmark suite. This will entail contacting
>>>> developers of Python implementations (CPython, PyPy, IronPython and
>>>> Jython), via discussion on the appropriate mailing lists. This might
>>>> be achievable as part of this proposal.
>>>>
>>>
>>> If you are reaching out to other projects at this stage, I think you
>>> should
>>> also be in touch with the Cython people (even if its 'implementation'
>>> sits on top of CPython).
>>> As a scientist/engineer what I care about is how Cython benchmarks to
>>> CPython.  I believe that they have some ideas on benchmarking and have
>>> also explored this space.  Their inclusion would be helpful to me
>>> thinking
>>> this GSoC successful at the end of the day (summer).
>>> Thanks for your consideration.
>>> Be Well
>>> Anthony
>>
>> Right now, we are talking about building "speed.python.org" to test
>> the speed of python interpreters, over time, and alongside one another
>> - cython *is not* an interpreter.
>
> Would you also want to exclude Psyco then? It clearly does not qualify as a
> Python interpreter.
>

Just to clarify - the crucial word here is Python and not the
interpreter. I don't care myself if it's an interpreter or a compiler,
I do care if it can pass the python test suite (modulo things that are
known to be implementation details and agreed upon).

How far is Cython from passing the full test suite? Are there known
incompatibilities that would be considered wontfix?

Cheers,
fijal

From stefan_ml at behnel.de  Mon Apr 11 12:43:08 2011
From: stefan_ml at behnel.de (Stefan Behnel)
Date: Mon, 11 Apr 2011 12:43:08 +0200
Subject: [Python-Dev] [GSoC] Developing a benchmark suite (for Python
	3.x)
In-Reply-To: <BANLkTin9Z08OjLDMMmphDO1joXHnZYnMcw@mail.gmail.com>
References: <BANLkTim80hX2_K393XugVfOfU1oCvyboNA@mail.gmail.com>	<BANLkTi=hPxW998svwE3D=3K-gBShO89SXA@mail.gmail.com>	<BANLkTinxo3ObyVr8qY-td0Vz160wzWzcKQ@mail.gmail.com>	<inmk53$aak$1@dough.gmane.org>
	<BANLkTin9Z08OjLDMMmphDO1joXHnZYnMcw@mail.gmail.com>
Message-ID: <inulvt$79k$1@dough.gmane.org>

Maciej Fijalkowski, 11.04.2011 11:39:
> On Fri, Apr 8, 2011 at 11:22 AM, Stefan Behnel<stefan_ml at behnel.de>  wrote:
>> Jesse Noller, 07.04.2011 22:28:
>>>
>>> On Thu, Apr 7, 2011 at 3:54 PM, Anthony Scopatz wrote:
>>>>
>>>> Hi Daniel,
>>>> Thanks for putting this together.  I am a huge supporter of benchmarking
>>>> efforts.  My brief comment is below.
>>>>
>>>> On Wed, Apr 6, 2011 at 11:52 AM, DasIch wrote:
>>>>>
>>>>> 1. Definition of the benchmark suite. This will entail contacting
>>>>> developers of Python implementations (CPython, PyPy, IronPython and
>>>>> Jython), via discussion on the appropriate mailing lists. This might
>>>>> be achievable as part of this proposal.
>>>>>
>>>>
>>>> If you are reaching out to other projects at this stage, I think you
>>>> should
>>>> also be in touch with the Cython people  (even if its 'implementation'
>>>> sits on top of CPython).
>>>> As a scientist/engineer what I care about is how Cython benchmarks to
>>>> CPython.  I believe that they have some ideas on benchmarking and have
>>>> also explored this space.  Their inclusion would be helpful to me
>>>> thinking
>>>> this GSoC successful at the end of the day (summer).
>>>> Thanks for your consideration.
>>>> Be Well
>>>> Anthony
>>>
>>> Right now, we are talking about building "speed.python.org" to test
>>> the speed of python interpreters, over time, and alongside one another
>>> - cython *is not* an interpreter.
>>
>> Would you also want to exclude Psyco then? It clearly does not qualify as a
>> Python interpreter.
>
> Just to clarify - the crucial word here is Python and not the
> interpreter.

Psyco is also not a Python implementation. It doesn't work without CPython, 
just like Cython. But I doubt that anyone would seriously argue for 
excluding Psyco from a Python speed comparison. That was my point here.


> I don't care myself if it's an interpreter or a compiler,
> I do care if it can pass the python test suite (modulo things that are
> known to be implementation details and agreed upon).
>
> How far is Cython from passing the full test suite?

According to our CI server, we currently have 255 failing tests out of 7094 
in Python 2.7.

https://sage.math.washington.edu:8091/hudson/view/cython-devel/job/cython-devel-tests-pyregr-py27-c/

This is not completely accurate as a) it only includes compiling the test 
module itself, and e.g. not the stdlib modules that are being tested, and 
b) the total number of tests we see depends on how many test modules we can 
compile in order to import and run the contained tests. It also doesn't 
mean that we have >200 compatibility problems; the majority of failures 
tends to be caused by just a handful of bugs.

Another measure is that Cython can currently compile some 160 modules out 
of a bit less than 200 in Django (almost all failures due to one bug about 
incompatibilities between PyCFunction and Python functions) and an 
(untested!) 1219 out of 1538 modules in the stdlib. We haven't put that 
together yet in order to actually test the compiled stdlib modules. That'll 
come.


> Are there known incompatibilities that would be considered wontfix?

There are known incompatibilities that are considered bugs. There are no 
"wontfix" bugs when it comes to Python compatibility. But there are 
obviously developer priorities when it comes to fixing bugs. Cython is a 
lot more than just a Python compiler (such as a programming language that 
keeps people from writing C code), so there are also bugs and feature 
requests apart from Python semantics that we consider more important to 
fix. It's not like all bugs on CPython's bug tracker would get closed 
within a day or so.

Stefan


From fijall at gmail.com  Mon Apr 11 13:00:05 2011
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Mon, 11 Apr 2011 13:00:05 +0200
Subject: [Python-Dev] [GSoC] Developing a benchmark suite (for Python
	3.x)
In-Reply-To: <inulvt$79k$1@dough.gmane.org>
References: <BANLkTim80hX2_K393XugVfOfU1oCvyboNA@mail.gmail.com>
	<BANLkTi=hPxW998svwE3D=3K-gBShO89SXA@mail.gmail.com>
	<BANLkTinxo3ObyVr8qY-td0Vz160wzWzcKQ@mail.gmail.com>
	<inmk53$aak$1@dough.gmane.org>
	<BANLkTin9Z08OjLDMMmphDO1joXHnZYnMcw@mail.gmail.com>
	<inulvt$79k$1@dough.gmane.org>
Message-ID: <BANLkTikF6tMC9GaupbScJqAyVMYPEuG8_A@mail.gmail.com>

On Mon, Apr 11, 2011 at 12:43 PM, Stefan Behnel <stefan_ml at behnel.de> wrote:
> Maciej Fijalkowski, 11.04.2011 11:39:
>>
>> On Fri, Apr 8, 2011 at 11:22 AM, Stefan Behnel<stefan_ml at behnel.de>
>> wrote:
>>>
>>> Jesse Noller, 07.04.2011 22:28:
>>>>
>>>> On Thu, Apr 7, 2011 at 3:54 PM, Anthony Scopatz wrote:
>>>>>
>>>>> Hi Daniel,
>>>>> Thanks for putting this together.  I am a huge supporter of
>>>>> benchmarking
>>>>> efforts.  My brief comment is below.
>>>>>
>>>>> On Wed, Apr 6, 2011 at 11:52 AM, DasIch wrote:
>>>>>>
>>>>>> 1. Definition of the benchmark suite. This will entail contacting
>>>>>> developers of Python implementations (CPython, PyPy, IronPython and
>>>>>> Jython), via discussion on the appropriate mailing lists. This might
>>>>>> be achievable as part of this proposal.
>>>>>>
>>>>>
>>>>> If you are reaching out to other projects at this stage, I think you
>>>>> should
>>>>> also be in touch with the Cython people (even if its 'implementation'
>>>>> sits on top of CPython).
>>>>> As a scientist/engineer what I care about is how Cython benchmarks to
>>>>> CPython.  I believe that they have some ideas on benchmarking and have
>>>>> also explored this space.  Their inclusion would be helpful to me
>>>>> thinking
>>>>> this GSoC successful at the end of the day (summer).
>>>>> Thanks for your consideration.
>>>>> Be Well
>>>>> Anthony
>>>>
>>>> Right now, we are talking about building "speed.python.org" to test
>>>> the speed of python interpreters, over time, and alongside one another
>>>> - cython *is not* an interpreter.
>>>
>>> Would you also want to exclude Psyco then? It clearly does not qualify as
>>> a
>>> Python interpreter.
>>
>> Just to clarify - the crucial word here is Python and not the
>> interpreter.
>
> Psyco is also not a Python implementation. It doesn't work without CPython,
> just like Cython. But I doubt that anyone would seriously argue for
> excluding Psyco from a Python speed comparison. That was my point here.
>
>
>> I don't care myself if it's an interpreter or a compiler,
>> I do care if it can pass the python test suite (modulo things that are
>> known to be implementation details and agreed upon).
>>
>> How far is Cython from passing the full test suite?
>
> According to our CI server, we currently have 255 failing tests out of 7094
> in Python 2.7.
>
> https://sage.math.washington.edu:8091/hudson/view/cython-devel/job/cython-devel-tests-pyregr-py27-c/
>
> This is not completely accurate as a) it only includes compiling the test
> module, and e.g. not the stdlib modules that are being tested, and b) the
> total number of tests we see depends on how many test modules we can compile
> in order to import and run the contained tests. It also doesn't mean that we
> have >200 compatibility problems, the majority of failures tends to be
> because of just a hand full of bugs.
>
> Another measure is that Cython can currently compile some 160 modules out of
> a bit less than 200 in Django (almost all failures due to one bug about
> incompatibilities between PyCFunction and Python functions) and an
> (untested!) 1219 out of 1538 modules in the stdlib. We haven't put that
> together yet in order to actually test the compiled stdlib modules. That'll
> come.
>
>
>> Are there known incompatibilities that would be considered wontfix?
>
> There are known incompatibilities that are considered bugs. There are no
> "wontfix" bugs when it comes to Python compatibility. But there are
> obviously developer priorities when it comes to fixing bugs. Cython is a lot
> more than just a Python compiler (such as a programming language that keeps
> people from writing C code), so there are also bugs and feature requests
> apart from Python semantics that we consider more important to fix. It's not
> like all bugs on CPython's bug tracker would get closed within a day or so.

Sure, that was more of a question "do you consider cython
compatibility an issue?". I'm sure there are bugs.

>
> Stefan
>

From tseaver at palladion.com  Mon Apr 11 13:58:26 2011
From: tseaver at palladion.com (Tres Seaver)
Date: Mon, 11 Apr 2011 07:58:26 -0400
Subject: [Python-Dev] Make test failed issues for phyton 3.2 on centos5.5
In-Reply-To: <BANLkTikm53ADCKCR7rDM=hudXtQ2VrZKgQ@mail.gmail.com>
References: <BANLkTikm53ADCKCR7rDM=hudXtQ2VrZKgQ@mail.gmail.com>
Message-ID: <inuqd0$1hp$1@dough.gmane.org>


On 04/10/2011 08:00 PM, Stephen Yeng wrote:

> 11 skips unexpected on linux2:
>     test_bz2 test_dbm_gnu test_dbm_ndbm test_gzip test_readline
>     test_ssl test_tcl test_tk test_ttk_guionly test_ttk_textonly
>     test_zlib

Looks like you are missing a bunch of development headers on the system
(at the time Python's 'configure' was run).  E.g., on a Debian system,

 $ sudo apt-get install zlib1g-dev libbz2-dev libreadline-dev # etc



Tres.
-- 
===================================================================
Tres Seaver          +1 540-429-0999          tseaver at palladion.com
Palladion Software   "Excellence by Design"    http://palladion.com


From stephen_yeng at n-pinokyo.com  Mon Apr 11 16:43:03 2011
From: stephen_yeng at n-pinokyo.com (Stephen Yeng)
Date: Mon, 11 Apr 2011 22:43:03 +0800
Subject: [Python-Dev] Make test failed issues for phyton 3.2 on centos5.5
In-Reply-To: <201104111014.29933.vstinner@edenwall.com>
References: <BANLkTikm53ADCKCR7rDM=hudXtQ2VrZKgQ@mail.gmail.com>
	<201104111014.29933.vstinner@edenwall.com>
Message-ID: <BANLkTi=bwdZkH7ZPe8UfPrnV_5f=sDCh5Q@mail.gmail.com>

Hello,
Thanks for the reply.
This is one of the failing tests; I hope you can help so I can fix the
remaining 4 errors. :)
----------------------------------------------------------------------
Ran 90 tests in 9.191s

FAILED (errors=1, skipped=25)
test test_zipfile failed -- Traceback (most recent call last):
  File "/tmp/Python-3.2/Lib/test/test_zipfile.py", line 497, in
test_unicode_filenames
    zipfp.open(name).close()
  File "/tmp/Python-3.2/Lib/zipfile.py", line 978, in open
    close_fileobj=not self._filePassed)
  File "/tmp/Python-3.2/Lib/zipfile.py", line 487, in __init__
    self._decompressor = zlib.decompressobj(-15)
AttributeError: 'NoneType' object has no attribute 'decompressobj'

1 test failed:
    test_zipfile


On Mon, Apr 11, 2011 at 4:14 PM, Victor Stinner <vstinner at edenwall.com>wrote:

> > [1/1] test_zipfiles
> > test test_zipfiles crashed -- <class 'ImportError'>: No module named
> > test_zipfiles
>
> It means that you don't have a module named test_zipfiles. Retry with
> "test_zipfile" :-)
>
> You may open an issue (including details) for your failures.
>
> Victor
>



-- 
If you have any other question about your web portal please contact me. At
N-Pinokyo we value our customers and will be more than happy to assist you
with any other matter related to our service.

Regards,
Stephen Yeng

From solipsis at pitrou.net  Mon Apr 11 16:58:11 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Mon, 11 Apr 2011 16:58:11 +0200
Subject: [Python-Dev] Make test failed issues for phyton 3.2 on centos5.5
References: <BANLkTikm53ADCKCR7rDM=hudXtQ2VrZKgQ@mail.gmail.com>
	<201104111014.29933.vstinner@edenwall.com>
	<BANLkTi=bwdZkH7ZPe8UfPrnV_5f=sDCh5Q@mail.gmail.com>
Message-ID: <20110411165811.5d4afcf0@pitrou.net>


Hello,

On Mon, 11 Apr 2011 22:43:03 +0800
Stephen Yeng <stephen_yeng at n-pinokyo.com> wrote:
> Hello,
> Thanks for the reply.
> This the once of the test I fail, hope you can help so I can fix the rest 4
> errors. :)

Please open an issue for each of these failures on
http://bugs.python.org
Bug reports on the mailing-list typically get lost.

Regards

Antoine.



From dmalcolm at redhat.com  Mon Apr 11 20:04:38 2011
From: dmalcolm at redhat.com (David Malcolm)
Date: Mon, 11 Apr 2011 14:04:38 -0400
Subject: [Python-Dev] Make test failed issues for phyton 3.2 on	centos5.5
In-Reply-To: <inuqd0$1hp$1@dough.gmane.org>
References: <BANLkTikm53ADCKCR7rDM=hudXtQ2VrZKgQ@mail.gmail.com>
	<inuqd0$1hp$1@dough.gmane.org>
Message-ID: <1302545078.2881.14.camel@radiator.bos.redhat.com>

On Mon, 2011-04-11 at 07:58 -0400, Tres Seaver wrote:
> On 04/10/2011 08:00 PM, Stephen Yeng wrote:
> 
> > 11 skips unexpected on linux2:
> >     test_bz2 test_dbm_gnu test_dbm_ndbm test_gzip test_readline
> >     test_ssl test_tcl test_tk test_ttk_guionly test_ttk_textonly
> >     test_zlib
> 
> Looks like you are missing a bunch of development headers on the system
> (at the time Python's 'configure' was run).  E.g., on a Debian system,
> 
>  $ sudo apt-get install zlib1g-dev libbz-dev libreadline-dev # etc

On RHEL 5 (and therefore presumably CentOS), the corresponding command
looks something like this:

sudo yum install \
  readline-devel openssl-devel gmp-devel \
  ncurses-devel gdbm-devel zlib-devel expat-devel \
  libGL-devel tk tix gcc-c++ libX11-devel glibc-devel \
  bzip2 tar findutils pkgconfig tcl-devel tk-devel \
  tix-devel bzip2-devel sqlite-devel \
  db4-devel \
  libffi-devel

You'll want to rerun "configure" after installing these dependencies.

FWIW neither the devguide nor
  http://docs.python.org/using/unix.html#building-python
seems to have a handy guide to how to install all useful build-time deps
on various distros.

I added something similar for PyPy here:
http://codespeak.net/pypy/dist/pypy/doc/getting-started-python.html#translating-the-pypy-python-interpreter
at the PyCon sprint.

Hope this is helpful
Dave


From lukas.lueg at googlemail.com  Mon Apr 11 23:41:42 2011
From: lukas.lueg at googlemail.com (Lukas Lueg)
Date: Mon, 11 Apr 2011 23:41:42 +0200
Subject: [Python-Dev] Pass possibly imcompatible options to distutil's
	ccompiler
Message-ID: <BANLkTin1Lw-icaWEFSuTa+OUg5nKV37inw@mail.gmail.com>

Hi,

I'm the maintainer of Pyrit (http://pyrit.googlecode.com) and recently
checked in some code that uses the AES-NI intrinsics found in GCC
4.4+. I'm looking for a way to build the Python extension using
distutils in a sane way and could not get an answer from the
distutils people about that.

To enable the intrinsics, one must pass '-maes' and '-mpclmul' as
command-line arguments to gcc, e.g. through extra_compile_args. This is
not always safe to do, as older versions of GCC do not support these
options and cc fails with an error. Such platforms are not
uncommon, e.g. XCode 3.2 on MacOS is shipped with gcc 4.2. I fail to
see how to determine in advance which compiler distutils will use and
what version that compiler has. Therefore I see two options:
- Try to build a small pseudo-extension with the flags enabled, watch
for exceptions and only enable the extra_compile_args on the real
extension if the build succeeds
- Override the build_ext-command with another class and override
build_extension. Try to build the extension and, if a CompilerError is
thrown, remove '-maes' and '-mpclmul' from extra_compile_args. Try
again and re-raise possible CompilerErrors now.

The first option seems rather bogus so I'm currently going with the
second option. After all, this leaves me with the best chance of
enabling the AES-NI-code on compatible machines (no false-negatives
with some kind of auto-detection) and not having people being unable
to compile it at all (false-positives, resulting in final compiler
errors). The downside is that visible error messages are printed to
stderr from the first call to build_ext.build_extension if AES-NI is
actually not supported.
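
For the record, a stripped-down sketch of that second approach looks roughly
like this (the class name is just for illustration):

    from distutils.command.build_ext import build_ext
    from distutils.errors import CCompilerError, CompileError

    AESNI_FLAGS = ['-maes', '-mpclmul']

    class build_ext_aesni(build_ext):
        """Retry a failed build without the AES-NI flags."""

        def build_extension(self, ext):
            try:
                build_ext.build_extension(self, ext)
            except (CCompilerError, CompileError):
                # Older compilers (e.g. gcc < 4.4) reject these options;
                # drop them and let any remaining error propagate.
                ext.extra_compile_args = [
                    flag for flag in ext.extra_compile_args
                    if flag not in AESNI_FLAGS]
                build_ext.build_extension(self, ext)

wired into setup() via cmdclass={'build_ext': build_ext_aesni}.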

Any other ideas on how to solve this in a better way?


Best regards
Lukas

From ncoghlan at gmail.com  Tue Apr 12 01:32:24 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 12 Apr 2011 09:32:24 +1000
Subject: [Python-Dev] Pass possibly imcompatible options to distutil's
	ccompiler
In-Reply-To: <BANLkTin1Lw-icaWEFSuTa+OUg5nKV37inw@mail.gmail.com>
References: <BANLkTin1Lw-icaWEFSuTa+OUg5nKV37inw@mail.gmail.com>
Message-ID: <BANLkTi=1h+E6ofr1Rs5Q=pZhXEeJ+nRP1w@mail.gmail.com>

On Tue, Apr 12, 2011 at 7:41 AM, Lukas Lueg <lukas.lueg at googlemail.com> wrote:
> Any other ideas on how to solve this in a better way?

Have you tried with distutils2? If it can't help you, it should really
be looked into before the packaging API is locked for 3.3.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From lukas.lueg at googlemail.com  Tue Apr 12 08:58:45 2011
From: lukas.lueg at googlemail.com (Lukas Lueg)
Date: Tue, 12 Apr 2011 08:58:45 +0200
Subject: [Python-Dev] Pass possibly imcompatible options to distutil's
	ccompiler
In-Reply-To: <BANLkTi=1h+E6ofr1Rs5Q=pZhXEeJ+nRP1w@mail.gmail.com>
References: <BANLkTin1Lw-icaWEFSuTa+OUg5nKV37inw@mail.gmail.com>
	<BANLkTi=1h+E6ofr1Rs5Q=pZhXEeJ+nRP1w@mail.gmail.com>
Message-ID: <BANLkTinv0emCe5mMw0gt43jADQ17FsnWfw@mail.gmail.com>

Distutils2 is not really an option right now, as it is not available on
major Linux distributions, FreeBSD or MacOS X.

2011/4/12 Nick Coghlan <ncoghlan at gmail.com>:
> On Tue, Apr 12, 2011 at 7:41 AM, Lukas Lueg <lukas.lueg at googlemail.com> wrote:
>> Any other ideas on how to solve this in a better way?
>
> Have you tried with distutils2? If it can't help you, it should really
> be looked into before the packaging API is locked for 3.3.
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
>

From dsalvetti at trapeze.com  Tue Apr 12 17:15:24 2011
From: dsalvetti at trapeze.com (Djoume Salvetti)
Date: Tue, 12 Apr 2011 11:15:24 -0400
Subject: [Python-Dev] Bug? Can't rebind local variables after calling
	pdb.set_trace()
Message-ID: <BANLkTi=ax+KiiT=FFhtLJ4pM3m0Ji-hhvg@mail.gmail.com>

Hi there,


When calling pdb.set_trace() from within a function, it seems to be
impossible to rebind any local variables:


http://paste.pound-python.org/show/5150/


I couldn't find anything in the documentation about this; should I report a
bug?

-- 
Djoume Salvetti
Director of Development

T:416.601.1999 x 249
www.trapeze.com     twitter: trapeze
175 Bloor St. E., South Tower, Suite 900
Toronto, ON M4W 3R8

From alexander.belopolsky at gmail.com  Tue Apr 12 18:17:37 2011
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Tue, 12 Apr 2011 12:17:37 -0400
Subject: [Python-Dev] Bug? Can't rebind local variables after calling
	pdb.set_trace()
In-Reply-To: <BANLkTi=ax+KiiT=FFhtLJ4pM3m0Ji-hhvg@mail.gmail.com>
References: <BANLkTi=ax+KiiT=FFhtLJ4pM3m0Ji-hhvg@mail.gmail.com>
Message-ID: <BANLkTin-AuHVyVATUu9xx+XaHvde17A6yQ@mail.gmail.com>

On Tue, Apr 12, 2011 at 11:15 AM, Djoume Salvetti <dsalvetti at trapeze.com> wrote:
..
> When calling pdb.set_trace() from within a function, it seems to be impossible to rebind any local variables:
>

Works for me (using latest HG clone):

$ cat test.py
gv = 1

def f():
    lv = 1
    import pdb; pdb.set_trace()

if __name__ == '__main__':
    f()
$ ./python.exe test.py
--Return--
> /Users/sasha/Work/python-hg/py3k/test.py(5)f()->None
-> import pdb; pdb.set_trace()
(Pdb) lv = 2
(Pdb) print lv
2


> http://paste.pound-python.org/show/5150/

Please don't use paste services when posting on python-dev.  Postings
to this list are archived much longer than links to paste services
remain valid.

>
> I couldn't find anything in the documentation about this, should I report a bug?

If you find specific versions that are affected by this bug, please
report it at bugs.python.org.

From merwok at netwok.org  Tue Apr 12 18:35:50 2011
From: merwok at netwok.org (Éric Araujo)
Date: Tue, 12 Apr 2011 18:35:50 +0200
Subject: [Python-Dev] Pass possibly imcompatible options to distutil's
 ccompiler
In-Reply-To: <BANLkTin1Lw-icaWEFSuTa+OUg5nKV37inw@mail.gmail.com>
References: <BANLkTin1Lw-icaWEFSuTa+OUg5nKV37inw@mail.gmail.com>
Message-ID: <39569ee8bad3f66d6792f14d8e739872@netwok.org>

 Hi,

> I'm the maintainer of Pyrit (http://pyrit.googlecode.com) and 
> recently
> checked in some code that uses the AES-NI intrinsics found in GCC
> 4.4+. I'm looking for a way how to build the python-extension using
> distutils in a sane way and could not get an answer from the
> distutils-people about that.

 Could you tell where and when you asked?  If it was on distutils-sig and
 nobody replied, maybe people were just busy and you could try again.

 Regards

From tjreedy at udel.edu  Tue Apr 12 18:48:39 2011
From: tjreedy at udel.edu (Terry Reedy)
Date: Tue, 12 Apr 2011 12:48:39 -0400
Subject: [Python-Dev] Bug? Can't rebind local variables after calling
	pdb.set_trace()
In-Reply-To: <BANLkTin-AuHVyVATUu9xx+XaHvde17A6yQ@mail.gmail.com>
References: <BANLkTi=ax+KiiT=FFhtLJ4pM3m0Ji-hhvg@mail.gmail.com>
	<BANLkTin-AuHVyVATUu9xx+XaHvde17A6yQ@mail.gmail.com>
Message-ID: <io1vp6$24s$1@dough.gmane.org>

On 4/12/2011 12:17 PM, Alexander Belopolsky wrote:

> If you find specific versions that are affected by this bug, please
> report it at bugs.python.org.

If Py version >= 2.7 and != 3.0.


-- 
Terry Jan Reedy


From dsalvetti at trapeze.com  Tue Apr 12 19:01:04 2011
From: dsalvetti at trapeze.com (Djoume Salvetti)
Date: Tue, 12 Apr 2011 13:01:04 -0400
Subject: [Python-Dev] Bug? Can't rebind local variables after calling
	pdb.set_trace()
In-Reply-To: <BANLkTin-AuHVyVATUu9xx+XaHvde17A6yQ@mail.gmail.com>
References: <BANLkTi=ax+KiiT=FFhtLJ4pM3m0Ji-hhvg@mail.gmail.com>
	<BANLkTin-AuHVyVATUu9xx+XaHvde17A6yQ@mail.gmail.com>
Message-ID: <BANLkTikDDfx4FAnxS1kSS2qH2LjBamvBpg@mail.gmail.com>

Thank you and sorry about the pastebin.

I can reproduce it on python 2.5.2 and python 2.6.6 but not on python 3.1.2
(all in ubuntu). I'll open a bug.

-- 
Djoume Salvetti
Director of Development

T:416.601.1999 x 249
www.trapeze.com     twitter: trapeze
175 Bloor St. E., South Tower, Suite 900
Toronto, ON M4W 3R8

From fuzzyman at voidspace.org.uk  Tue Apr 12 19:08:42 2011
From: fuzzyman at voidspace.org.uk (Michael Foord)
Date: Tue, 12 Apr 2011 18:08:42 +0100
Subject: [Python-Dev] Bug? Can't rebind local variables after calling
 pdb.set_trace()
In-Reply-To: <BANLkTikDDfx4FAnxS1kSS2qH2LjBamvBpg@mail.gmail.com>
References: <BANLkTi=ax+KiiT=FFhtLJ4pM3m0Ji-hhvg@mail.gmail.com>	<BANLkTin-AuHVyVATUu9xx+XaHvde17A6yQ@mail.gmail.com>
	<BANLkTikDDfx4FAnxS1kSS2qH2LjBamvBpg@mail.gmail.com>
Message-ID: <4DA4871A.2030103@voidspace.org.uk>

On 12/04/2011 18:01, Djoume Salvetti wrote:
> Thank you and sorry about the pastebin.
>
> I can reproduce it on python 2.5.2 and python 2.6.6 but not on python 
> 3.1.2 (all in ubuntu). I'll open a bug.

Both Python 2.5 and 2.6 are in "security fix only" mode I'm afraid, so 
won't receive fixes for issues like this.

All the best,

Michael Foord
>
> -- 
> Djoume Salvetti
> Director of Development
>
> T:416.601.1999 x 249
> www.trapeze.com <http://www.trapeze.com>     twitter: trapeze
> 175 Bloor St. E., South Tower, Suite 900
> Toronto, ON M4W 3R8
>
>


-- 
http://www.voidspace.org.uk/

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing http://www.sqlite.org/different.html


From guido at python.org  Tue Apr 12 19:22:40 2011
From: guido at python.org (Guido van Rossum)
Date: Tue, 12 Apr 2011 10:22:40 -0700
Subject: [Python-Dev] Bug? Can't rebind local variables after calling
	pdb.set_trace()
In-Reply-To: <BANLkTikDDfx4FAnxS1kSS2qH2LjBamvBpg@mail.gmail.com>
References: <BANLkTi=ax+KiiT=FFhtLJ4pM3m0Ji-hhvg@mail.gmail.com>
	<BANLkTin-AuHVyVATUu9xx+XaHvde17A6yQ@mail.gmail.com>
	<BANLkTikDDfx4FAnxS1kSS2qH2LjBamvBpg@mail.gmail.com>
Message-ID: <BANLkTimPNET1B8Wg7WxRXSszvzx+DgkuOg@mail.gmail.com>

On Tue, Apr 12, 2011 at 10:01 AM, Djoume Salvetti <dsalvetti at trapeze.com> wrote:
> Thank you and sorry about the pastebin.
> I can reproduce it on python 2.5.2 and python 2.6.6 but not on python 3.1.2
> (all in ubuntu). I'll open a bug.

Looking at the pastebin you are using !lv = 2. Why the !? Without it,
it works fine:

Python 2.5.5+ (release25-maint:86106, Dec  9 2010, 10:25:54)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> def f(x):
...  import pdb; pdb.set_trace()
...  return x
...
>>> f(1)
> <stdin>(3)f()
(Pdb) x
1
(Pdb) x = 2
(Pdb) c
2
>>>

-- 
--Guido van Rossum (python.org/~guido)

From guido at python.org  Tue Apr 12 20:10:02 2011
From: guido at python.org (Guido van Rossum)
Date: Tue, 12 Apr 2011 11:10:02 -0700
Subject: [Python-Dev] Bug? Can't rebind local variables after calling
	pdb.set_trace()
In-Reply-To: <BANLkTin6nkyDv6Z-6DoSLGyOJY9uSXjo1Q@mail.gmail.com>
References: <BANLkTi=ax+KiiT=FFhtLJ4pM3m0Ji-hhvg@mail.gmail.com>
	<BANLkTin-AuHVyVATUu9xx+XaHvde17A6yQ@mail.gmail.com>
	<BANLkTikDDfx4FAnxS1kSS2qH2LjBamvBpg@mail.gmail.com>
	<BANLkTimPNET1B8Wg7WxRXSszvzx+DgkuOg@mail.gmail.com>
	<BANLkTin6nkyDv6Z-6DoSLGyOJY9uSXjo1Q@mail.gmail.com>
Message-ID: <BANLkTinFevNKHRwaburp+036r9LgsMT+4w@mail.gmail.com>

On Tue, Apr 12, 2011 at 11:01 AM, Djoume Salvetti <dsalvetti at trapeze.com> wrote:
> On Tue, Apr 12, 2011 at 1:22 PM, Guido van Rossum <guido at python.org> wrote:
>>
>> Looking at the pastebin you are using !lv = 2. Why the !? Without it,
>> it works fine:
>>
>
>
> I just wanted to make sure I was executing a python statement and not a pdb
> alias.
> I re-tested without the exclamation mark and still have the same issue:
>  -> import pdb; pdb.set_trace()
> (Pdb) list
>   1     gv = 1
>   2
>   3     def f():
>   4         lv = 1
>   5  ->     import pdb; pdb.set_trace()
>   6
>   7     if __name__ == '__main__':
>   8         f()
> [EOF]
> (Pdb) lv
> 1
> (Pdb) lv = 2
> (Pdb) lv
> 1
> (Pdb)

Interesting. You'll find that if you let the function continue, lv is
actually set to 2. Why pdb prints 1 I don't know. It might be
interesting to find out why that is, although since it's fixed in
Python 2.7 and Python 3, perhaps observing the changes in pdb.py (or
other related code) between Python 2.6 and 2.7 might be the quickest
way to find out.

-- 
--Guido van Rossum (python.org/~guido)

From barry at python.org  Tue Apr 12 20:28:25 2011
From: barry at python.org (Barry Warsaw)
Date: Tue, 12 Apr 2011 14:28:25 -0400
Subject: [Python-Dev] PEP 396, Module Version Numbers
In-Reply-To: <4D9B7A1F.3070106@g.nevcal.com>
References: <20110405145213.29f706aa@neurotica.wooz.org>
	<4D9B7A1F.3070106@g.nevcal.com>
Message-ID: <20110412142825.2600ac4a@neurotica.wooz.org>

On Apr 05, 2011, at 01:22 PM, Glenn Linderman wrote:

>On 4/5/2011 11:52 AM, Barry Warsaw wrote:
>>      DEFAULT_VERSION_RE = re.compile(r'(?P<version>\d+\.\d(?:\.\d+)?)')
>>      __version__ = pkgutil.get_distribution('elle').metadata['version']
>
>The RE as given won't match alpha, beta, rc, dev, and post suffixes that are
>discussed in PEP 386.

It really wasn't intended to.  I'm torn about even including this code sample
in the PEP.  I'm highly tempted to rip this out and hand-wave over the
implementation of get_version().  It's not a critical part of the PEP and
might just be distracting.

>Are there issues for finding and loading multiple versions of the same
>module?

Out of scope for this PEP I think.

>Should it be possible to determine a version before loading a module?  If
>yes, the version module would have to be able to find and parse version strings
>in any of the many places this PEP suggests they could be... so that would be
>somewhat complex, but the complexity shouldn't be used to change the
>answer... but if the answer is yes, it might encourage fewer variant cases to
>be supported for acceptable version definition locations for this PEP.

I think the answer can be "yes", but only through distutils2/packaging APIs.
If there's no metadata for a module available, then I don't have a problem
saying the version information can't be determined without importing it.

-Barry


From alexander.belopolsky at gmail.com  Tue Apr 12 20:35:54 2011
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Tue, 12 Apr 2011 14:35:54 -0400
Subject: [Python-Dev] Hg question
Message-ID: <BANLkTi=hzMJqXCX-f_a0QYkofAiOjJMEpA@mail.gmail.com>

I was preparing a commit to the 3.2 and default branches and mistakenly
used the -m instead of the -l commit option.  As a result, I have

$ hg out
comparing with ssh://hg at hg.python.org/cpython
searching for changes
changeset:   69272:0bf1354fab6b
branch:      3.2
parent:      69268:bfc586c558ed
user:        Alexander Belopolsky <alexander.belopolsky at gmail.com>
date:        Tue Apr 12 14:00:43 2011 -0400
summary:     m.txt

changeset:   69273:516ed700ce22
tag:         tip
parent:      69270:c26d015cbde8
parent:      69272:0bf1354fab6b
user:        Alexander Belopolsky <alexander.belopolsky at gmail.com>
date:        Tue Apr 12 14:02:22 2011 -0400
summary:     m.txt


I would like to replace m.txt in the summary with the content of the
file m.txt.  I tried to use the recipe [1], but qimport fails:

$ hg qimport -r 69272:tip
abort: cannot import merge revision 69273

[1] http://stackoverflow.com/questions/623052/how-to-edit-incorrect-commit-message-in-mercurial

PS: This scenario seems to be a usability regression compared to SVN.
SVN would actually warn me if I tried to use -m with a file name
instead of a message, and editing the commit log in SVN is fairly
straightforward.

From barry at python.org  Tue Apr 12 20:40:24 2011
From: barry at python.org (Barry Warsaw)
Date: Tue, 12 Apr 2011 14:40:24 -0400
Subject: [Python-Dev] PEP 396, Module Version Numbers
In-Reply-To: <874o676nrr.fsf@benfinney.id.au>
References: <20110405145213.29f706aa@neurotica.wooz.org>
	<874o676nrr.fsf@benfinney.id.au>
Message-ID: <20110412144024.11afb3fd@neurotica.wooz.org>

On Apr 10, 2011, at 08:52 AM, Ben Finney wrote:

>Nitpick: Please call these "version strings". A version string is hardly
>ever just one number, and not in the general case anyway.

The PEP title does say version *numbers* (plural), and that seems more general
than using 'strings' here.

>    Emily maintains a package consisting of programs and modules in
>    several languages that inter-operate; several are Python, but some
>    are Unix shell, Perl, and there are some C modules. Emily decides
>    the simplest API for all these modules to get the package version
>    string is a single text file named ``version`` at the root of the
>    project tree. All the programs and modules, including the
>    ``setup.py`` file, simply read the contents of ``version`` to get
>    the version string.
>
>This is an often-overlooked case, I think. The unspoken assumption is
>often that ``setup.py`` is a suitable place for the overall version
>string, but this is not the case when that string must be read by
>non-Python programs.

I'm not certain that the additional story informs any recommendations made by
the PEP.  In the case where the version number is kept in some external file,
then you'd likely see something like this in setup.py:

setup(version=open('version.txt').read())

or this in foo/__init__.py:

__version__ = open('version.txt').read()

The details aren't that important, but the fact that the version is kept
in an external file doesn't change any of the recommendations the PEP is
already making.

Cheers,
-Barry

From dickinsm at gmail.com  Tue Apr 12 20:44:04 2011
From: dickinsm at gmail.com (Mark Dickinson)
Date: Tue, 12 Apr 2011 19:44:04 +0100
Subject: [Python-Dev] Bug? Can't rebind local variables after calling
	pdb.set_trace()
In-Reply-To: <BANLkTikDDfx4FAnxS1kSS2qH2LjBamvBpg@mail.gmail.com>
References: <BANLkTi=ax+KiiT=FFhtLJ4pM3m0Ji-hhvg@mail.gmail.com>
	<BANLkTin-AuHVyVATUu9xx+XaHvde17A6yQ@mail.gmail.com>
	<BANLkTikDDfx4FAnxS1kSS2qH2LjBamvBpg@mail.gmail.com>
Message-ID: <BANLkTikqdXVhf6EFcF+B0wrCQ-of60iwCQ@mail.gmail.com>

On Tue, Apr 12, 2011 at 6:01 PM, Djoume Salvetti <dsalvetti at trapeze.com> wrote:

> Thank you and sorry about the pastebin.
> I can reproduce it on python 2.5.2 and python 2.6.6 but not on python 3.1.2
> (all in ubuntu). I'll open a bug.

Is http://bugs.python.org/issue5215 the same issue?

Mark

From barry at python.org  Tue Apr 12 20:46:38 2011
From: barry at python.org (Barry Warsaw)
Date: Tue, 12 Apr 2011 14:46:38 -0400
Subject: [Python-Dev] PEP 396, Module Version Numbers
In-Reply-To: <BANLkTi==iQ1jwgUG7XFejwnapPreViKfLw@mail.gmail.com>
References: <20110405145213.29f706aa@neurotica.wooz.org>
	<4D9B7A1F.3070106@g.nevcal.com>
	<BANLkTi==iQ1jwgUG7XFejwnapPreViKfLw@mail.gmail.com>
Message-ID: <20110412144638.0d624126@neurotica.wooz.org>

On Apr 07, 2011, at 12:26 AM, Nick Coghlan wrote:

>> On 4/5/2011 11:52 AM, Barry Warsaw wrote:
>>
>>     DEFAULT_VERSION_RE = re.compile(r'(?P<version>\d+\.\d(?:\.\d+)?)')
>>
>>     __version__ = pkgutil.get_distribution('elle').metadata['version']
>>
>> The RE as given won't match alpha, beta, rc, dev, and post suffixes that are
>discussed in PEP 386.
>
>Indeed, I really don't like the RE suggestion - better to tell people
>to just move the version info into the static config file and use
>pkgutil to make it available as shown. That solves the build time vs
>install time problem as well.

I'm actually going to remove the regexp example from the PEP.  It's
distracting, incorrect, and unnecessary (give that `packaging` will have such
an API).

>Yep, this is why the version information should be in the setup.cfg
>file, and hence available via pkgutil without loading the module
>first.

If the version information is in the setup.cfg, then the question is, what's
the code look like to get that stuffed into a module's __version__ attribute?
If it's not the pkgutil ugliness, what is it?  And does it work whether you're
in, say, the source tree of your uninstalled module, or in a Python where the
package was installed via the OS?

-Barry
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 836 bytes
Desc: not available
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110412/9f902162/attachment.pgp>

From barry at python.org  Tue Apr 12 20:49:52 2011
From: barry at python.org (Barry Warsaw)
Date: Tue, 12 Apr 2011 14:49:52 -0400
Subject: [Python-Dev] PEP 396, Module Version Numbers
In-Reply-To: <4D9D9BC3.7040101@voidspace.org.uk>
References: <20110405145213.29f706aa@neurotica.wooz.org>
	<4D9B7A1F.3070106@g.nevcal.com>
	<BANLkTi==iQ1jwgUG7XFejwnapPreViKfLw@mail.gmail.com>
	<4D9D9BC3.7040101@voidspace.org.uk>
Message-ID: <20110412144952.4228d858@neurotica.wooz.org>

On Apr 07, 2011, at 12:10 PM, Michael Foord wrote:

>>> On 4/5/2011 11:52 AM, Barry Warsaw wrote:
>>>
>>>      DEFAULT_VERSION_RE = re.compile(r'(?P<version>\d+\.\d(?:\.\d+)?)')
>>>
>>>      __version__ = pkgutil.get_distribution('elle').metadata['version']
>>>
>
>I really dislike this way of specifying the version. For a start it is really
>ugly.

Agreed!  There should be a higher level API for this, e.g.:

__version__ = pkgutil.get_version('elle')

>More importantly it means the version information is *only* available if the
>package has been installed by "packaging", and so isn't available for the
>parts of my pre-build process like building the documentation (which import
>the version number to put into the docs).

That would have to be an important feature of .get_version() I think.
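
Just to sketch the shape of the thing (neither get_version() nor the
get_distribution() API it leans on exists yet, so this is purely
illustrative):

import pkgutil

def get_version(name):
    # Prefer the installed metadata, but fall back to the module's own
    # __version__ when the project isn't installed, e.g. when building
    # the docs from a source checkout.
    try:
        return pkgutil.get_distribution(name).metadata['version']
    except Exception:
        return __import__(name).__version__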

>Currently all my packages have the canonical version number information in
>the package itself using:
>
>     __version__ = '1.2.3'
>
>Anything that needs the version number, including setup.py for upload to
>pypi, has one place to look for it and it doesn't depend on any other tools
>or processes. If switching to "packaging" prevents me from doing this then it
>will inhibit me using "packaging".

It definitely shouldn't prevent this.  I personally do the same thing, and it
seems the least bad way of doing it.  I think the clear version string
assigned to __version__ is the best recommendation (though not the only one)
that the PEP makes.

-Barry
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 836 bytes
Desc: not available
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110412/430fc949/attachment.pgp>

From barry at python.org  Tue Apr 12 21:08:16 2011
From: barry at python.org (Barry Warsaw)
Date: Tue, 12 Apr 2011 15:08:16 -0400
Subject: [Python-Dev] PEP 396, Module Version Numbers
In-Reply-To: <2e96c2afb77a3e4f17f8934bc653697a@netwok.org>
References: <20110405145213.29f706aa@neurotica.wooz.org>
	<4D9B7A1F.3070106@g.nevcal.com>
	<BANLkTi==iQ1jwgUG7XFejwnapPreViKfLw@mail.gmail.com>
	<4D9D9BC3.7040101@voidspace.org.uk>
	<2e96c2afb77a3e4f17f8934bc653697a@netwok.org>
Message-ID: <20110412150816.45c36937@neurotica.wooz.org>

On Apr 09, 2011, at 06:23 PM, Éric Araujo wrote:

>> One of the main complaints against setuptools is that having to change
>> your application code because of the packaging tool used was not a good
>> idea.  Having to use require instead of import or resource_whatever
>> instead of open (or get_data, the most sadly underused function in the
>> stdlib) because you use setuptools instead of distutils was a bad thing.
>>
>> As stated in the PEP, having a __version__ attribute in the module is
>> common, so my opinion is that making the packaging tool use that info is
>> the Right Thing™, and having the dependency in the reverse sense is
>> wrong.  I don't see a problem with having harmless duplication in the
>> *installed* system, once in elle.__version__ and once in the pkgutil
>> metadata database.
>
> Barry's reply:
>
>> I'm including this section because at Pycon, some people did express an
>> interest in deriving the version number in this direction.  I wanted to
>> capture what that might look like.  Since this is an informational PEP, I
>> think it makes sense to include alternative approaches, but I tend to agree
>> with you that it will be much more common to define module.__version__ and
>> derive the metadata from that.
>
> IOW, you can define the version only once, either in your source file  or
> in the setup.cfg file, and the PEP describes how to get that info from
> the other place.  My personal opinion is that the approach using
> pkgutil.get_distribution should be much less prominent than the one
> putting the version in the Python code.

It is already though, right?  To me anyway the PEP does emphasize setting
__version__, but I'm open to specific suggestions.

-Barry
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 836 bytes
Desc: not available
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110412/25e1641b/attachment.pgp>

From barry at python.org  Tue Apr 12 21:13:06 2011
From: barry at python.org (Barry Warsaw)
Date: Tue, 12 Apr 2011 15:13:06 -0400
Subject: [Python-Dev] PEP 396, Module Version Numbers
In-Reply-To: <BANLkTi=OWrd06dj8CRCO_B7c9XnKWQZbUw@mail.gmail.com>
References: <20110405145213.29f706aa@neurotica.wooz.org>
	<4D9B7A1F.3070106@g.nevcal.com>
	<BANLkTi==iQ1jwgUG7XFejwnapPreViKfLw@mail.gmail.com>
	<4D9D9BC3.7040101@voidspace.org.uk>
	<BANLkTi=OWrd06dj8CRCO_B7c9XnKWQZbUw@mail.gmail.com>
Message-ID: <20110412151306.017d39c7@neurotica.wooz.org>

On Apr 07, 2011, at 09:59 PM, Nick Coghlan wrote:

>It sounds like part of the PEP needs another trip through
>distutils-sig. An informational PEP really shouldn't be advocating
>standard library changes, but it would make sense for this point of
>view to inform any updates to PEP 386 (the version handling
>standardisation PEP).

I'm certainly open to any suggestions from distutils-sigsters, though I'm not
sure the PEP needs to be discussed there exclusively at this point.

>As I see it, there appear to be two main requests:
>1. Normalised version parsing and comparison should be available even
>if packaging itself is not installed (e.g. as part of pkgutil)
>2. packaging should support extraction of the version metadata from
>the source files when bundling a package for distribution
>
>On point 2, rather than requiring that it be explicitly requested, I
>would suggest the following semantics for determining the version when
>bundling a package ready for distribution:
>
>- if present in the metadata, use that
>- if not present in the metadata, look for __version__ in the module
>source code (or the __init__ source code for an actual package)
>- otherwise warn the developer that no version information has been
>provided so it is automatically being set to "0.0.0a0"

I like that.  Given the recommendations in PEP 396, I think it's more in scope
of the distutils-sig, and the various related PEPs to define the details of
how that would work.  I'd be happy to update the Deriving section of PEP 396
with any such recommendations.  That section isn't meant to be definitive or
even all-encompassing.  It's just meant to give some examples of how you could
do things in your own modules.
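
Roughly, I'd read the lookup order you describe as something like this
(pure illustration, all names invented):

import re
import warnings

def derived_version(metadata_version, source_path):
    # setup.cfg metadata wins; then a __version__ assignment found in the
    # source; otherwise warn and fall back to the default.
    if metadata_version:
        return metadata_version
    with open(source_path) as fp:
        match = re.search(r"^__version__\s*=\s*['\"]([^'\"]+)['\"]",
                          fp.read(), re.MULTILINE)
    if match:
        return match.group(1)
    warnings.warn("no version information provided; using '0.0.0a0'")
    return '0.0.0a0'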

-Barry
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 836 bytes
Desc: not available
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110412/7bad3103/attachment.pgp>

From sjoerd at acm.org  Tue Apr 12 21:20:26 2011
From: sjoerd at acm.org (Sjoerd Mullender)
Date: Tue, 12 Apr 2011 21:20:26 +0200
Subject: [Python-Dev] Hg question
In-Reply-To: <BANLkTi=hzMJqXCX-f_a0QYkofAiOjJMEpA@mail.gmail.com>
References: <BANLkTi=hzMJqXCX-f_a0QYkofAiOjJMEpA@mail.gmail.com>
Message-ID: <4DA4A5FA.6020700@acm.org>

On 2011-04-12 20:35, Alexander Belopolsky wrote:
> I was preparing a commit to 3.2 and default branches and mistakenly
> used -m instead of -l commit option.  As a result, I have
> 
> $ hg out
> comparing with ssh://hg at hg.python.org/cpython
> searching for changes
> changeset:   69272:0bf1354fab6b
> branch:      3.2
> parent:      69268:bfc586c558ed
> user:        Alexander Belopolsky <alexander.belopolsky at gmail.com>
> date:        Tue Apr 12 14:00:43 2011 -0400
> summary:     m.txt
> 
> changeset:   69273:516ed700ce22
> tag:         tip
> parent:      69270:c26d015cbde8
> parent:      69272:0bf1354fab6b
> user:        Alexander Belopolsky <alexander.belopolsky at gmail.com>
> date:        Tue Apr 12 14:02:22 2011 -0400
> summary:     m.txt
> 
> 
> I would like to replace m.txt in the summary with the content of the
> file m.txt.  I tried to use the recipe [1], but qimport fails:
> 
> $ hg qimport -r 69272:tip
> abort: cannot import merge revision 69273
> 
> [1] http://stackoverflow.com/questions/623052/how-to-edit-incorrect-commit-message-in-mercurial
> 
> PS: This scenario seems to be a usability regression compared to SVN.
> SVN would actually warn me if I tried to use -m with a file name
> instead of a message and editing the commit log in SVN is fairly
> straightforward.

If you didn't push the changes to any other clone, you can hg strip
these changesets and do it again, correctly.  strip is part of the
rebase extension.

You cannot edit history that has already been shared with other clones.
If you did, it would just come back at the next pull.


-- 
Sjoerd Mullender

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 371 bytes
Desc: OpenPGP digital signature
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110412/3f1449cf/attachment.pgp>

From barry at python.org  Tue Apr 12 21:32:32 2011
From: barry at python.org (Barry Warsaw)
Date: Tue, 12 Apr 2011 15:32:32 -0400
Subject: [Python-Dev] PEP 396, Module Version Numbers
In-Reply-To: <4D9DA53B.9070805@voidspace.org.uk>
References: <20110405145213.29f706aa@neurotica.wooz.org>
	<4D9B7A1F.3070106@g.nevcal.com>
	<BANLkTi==iQ1jwgUG7XFejwnapPreViKfLw@mail.gmail.com>
	<4D9D9BC3.7040101@voidspace.org.uk>
	<4D9DA53B.9070805@voidspace.org.uk>
Message-ID: <20110412153232.38b97c10@neurotica.wooz.org>

On Apr 07, 2011, at 12:51 PM, Michael Foord wrote:

>So I don't think recommending
>"pkgutil.get_distribution('elle').metadata['version']" as a way for packages
>to provide version information is good advice.

I want to make it clear that this section of the PEP is intended only to
provide some choices and examples, not to be definitive.  I've added this
text:

    This could be done in any number of ways, a few of which are outlined
    below.  These are included for illustrative purposes only and are not
    intended to be definitive, complete, or all-encompassing.  Other
    approaches are possible, and some included below may have limitations
    that prevent their use in some situations.

Does that help?
-Barry
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 836 bytes
Desc: not available
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110412/eca2c019/attachment.pgp>

From barry at python.org  Tue Apr 12 21:39:27 2011
From: barry at python.org (Barry Warsaw)
Date: Tue, 12 Apr 2011 15:39:27 -0400
Subject: [Python-Dev] PEP 396, Module Version Numbers
In-Reply-To: <BANLkTindwJsPf=tGsve6WD-6Yj-6u7MrcA@mail.gmail.com>
References: <20110405145213.29f706aa@neurotica.wooz.org>
	<4D9B7A1F.3070106@g.nevcal.com>
	<BANLkTi==iQ1jwgUG7XFejwnapPreViKfLw@mail.gmail.com>
	<4D9CE1E8.9000203@g.nevcal.com>
	<BANLkTindwJsPf=tGsve6WD-6Yj-6u7MrcA@mail.gmail.com>
Message-ID: <20110412153927.5d9057c7@neurotica.wooz.org>

On Apr 07, 2011, at 02:08 PM, Nick Coghlan wrote:

>(Also, tsk, tsk, Barry for including Standards track proposals in an
>Informational PEP!)

Is that really illegal? :)

>P.S. A nice coincidental progression: PEP 376, 386 and 396 are all
>related to versioning and package metadata

time-machine-ly y'rs,
-Barry
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 836 bytes
Desc: not available
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110412/28f5f346/attachment.pgp>

From barry at python.org  Tue Apr 12 21:47:37 2011
From: barry at python.org (Barry Warsaw)
Date: Tue, 12 Apr 2011 15:47:37 -0400
Subject: [Python-Dev] PEP 396, Module Version Numbers
In-Reply-To: <4D9D43D5.40603@g.nevcal.com>
References: <20110405145213.29f706aa@neurotica.wooz.org>
	<4D9B7A1F.3070106@g.nevcal.com>
	<BANLkTi==iQ1jwgUG7XFejwnapPreViKfLw@mail.gmail.com>
	<4D9CE1E8.9000203@g.nevcal.com>
	<BANLkTindwJsPf=tGsve6WD-6Yj-6u7MrcA@mail.gmail.com>
	<4D9D43D5.40603@g.nevcal.com>
Message-ID: <20110412154737.621e0642@neurotica.wooz.org>

On Apr 06, 2011, at 09:55 PM, Glenn Linderman wrote:

>The PEP doesn't mention PyPI, and at present none of the modules there use
>"packaging" :) So it wasn't obvious to me that the PEP applies only to PyPI,
>and I have used modules that were not available from PyPI yet were still
>distributed and packaged somehow (not using "packaging" clearly).

The core of the PEP does not require packaging or PyPI.  The Specification
section is the most important part of the PEP.  Yes, that does mention
parse_version() from PEP 386, and the Version metadata field from PEP 345, but
I think those cross-references are fine, because it's just referring to the
information contained there.

-Barry
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 836 bytes
Desc: not available
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110412/a2edbad8/attachment.pgp>

From barry at python.org  Tue Apr 12 21:56:32 2011
From: barry at python.org (Barry Warsaw)
Date: Tue, 12 Apr 2011 15:56:32 -0400
Subject: [Python-Dev] PEP 396, Module Version Numbers
In-Reply-To: <BANLkTimD06e01Rzsz_k=A39Ak=f8n-iKEA@mail.gmail.com>
References: <20110405145213.29f706aa@neurotica.wooz.org>
	<4D9B7A1F.3070106@g.nevcal.com>
	<BANLkTi==iQ1jwgUG7XFejwnapPreViKfLw@mail.gmail.com>
	<4D9CE1E8.9000203@g.nevcal.com>
	<BANLkTindwJsPf=tGsve6WD-6Yj-6u7MrcA@mail.gmail.com>
	<4D9D43D5.40603@g.nevcal.com>
	<BANLkTimD06e01Rzsz_k=A39Ak=f8n-iKEA@mail.gmail.com>
Message-ID: <20110412155632.6ecce755@neurotica.wooz.org>

On Apr 07, 2011, at 04:53 PM, Nick Coghlan wrote:

>What I would like to see the PEP say is that if you don't *have* a
>setup.cfg file, then go ahead and embed the version directly in your
>Python source file. If you *do* have one, then put the version there
>and retrieve it with "pkgutil" if you want to provide a __version__
>attribute.

I'm not convinced there's consensus on that, i.e. that the version string
should go in setup.cfg if it exists.  It doesn't help that the pkgutil API for
doing that is pretty ugly.

>Barry is welcome to make a feature request to allow that dependency to
>go the other way, with the packaging system reading the version number
>out of the source file, but such a suggestion doesn't belong in an
>Informational PEP. If such a feature is ever accepted, then the
>recommendation in the PEP could be updated.

Note that there's really no reason why packaging has to grow a method to do
this.  It would be a convenience, but not a requirement.  For example, I have
my own helper function (something like the now elided get_version() code) that
digs version strings out of files for my own packages just fine.  True, it
doesn't handle the full normalized version specification, but I don't care
because my version numbers will never look that complex.  If yours does, and
you don't want to rely on the pkgutil API, or you need it to work even when
your module isn't installed, well, write your own code!
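
To give an idea of the shape (this isn't my actual helper, just an
illustration of the approach):

import re

def version_from_file(path):
    # Scan a source file for a simple __version__ = '...' assignment
    # without importing it.  Deliberately naive: computed versions and
    # fancier quoting aren't handled.
    with open(path) as fp:
        for line in fp:
            match = re.match(r"__version__\s*=\s*['\"]([^'\"]+)['\"]", line)
            if match:
                return match.group(1)
    raise ValueError('no __version__ found in %s' % path)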

The Deriving section of the PEP is not the most important part of it, and is
not making specific recommendations.  If it's not clear that it's only
providing examples, or it's distracting, then maybe it's better off being
removed, cut down or rewritten.

Cheers,
-Barry
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 836 bytes
Desc: not available
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110412/1d314c05/attachment.pgp>

From barry at python.org  Tue Apr 12 22:01:41 2011
From: barry at python.org (Barry Warsaw)
Date: Tue, 12 Apr 2011 16:01:41 -0400
Subject: [Python-Dev] PEP 396, Module Version Numbers
In-Reply-To: <4D9C2C88.8020604@arbash-meinel.com>
References: <20110405145213.29f706aa@neurotica.wooz.org>
	<4D9C2C88.8020604@arbash-meinel.com>
Message-ID: <20110412160141.65875b3a@neurotica.wooz.org>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

On Apr 06, 2011, at 11:04 AM, John Arbash Meinel wrote:

>> In other words, parse_version will return a tuple for each version string,
>> that is compatible with StrictVersion but also accept arbitrary version and
>> deal with them so they can be compared:
>> 
>>>>> from pkg_resources import parse_version as V
>>>>> V('1.2')
>> ('00000001', '00000002', '*final')
>>>>> V('1.2b2')
>> ('00000001', '00000002', '*b', '00000002', '*final')
>>>>> V('FunkyVersion')
>> ('*funkyversion', '*final')
>
>bzrlib has certainly used 'version_info' as a tuple indication such as:
>
>version_info = (2, 4, 0, 'dev', 2)
>
>and
>
>version_info = (2, 4, 0, 'beta', 1)
>
>and
>
>version_info = (2, 3, 1, 'final', 0)
>
>etc.
>
>This is mapping what we could sort out from Python's "sys.version_info".

It's probably worth specifying the __version_info__ tuple in more detail in
either PEP 386 or 396.  I think more detail should go *somewhere*, and it
feels like it could go in either PEP.  Maybe Tarek can chime in on that.
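
Purely for illustration, the kind of correspondence I have in mind is
something like:

__version__ = '1.2.1b2'
__version_info__ = (1, 2, 1, 'beta', 2)

but the exact shape of that tuple is precisely the detail that needs
nailing down.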

>The *really* nice bit is that you can do:
>
>if sys.version_info >= (2, 6):
>  # do stuff for python 2.6(.0) and beyond
>
>Doing that as:
>
>if sys.version_info >= ('000000002', '000000006'):
>
>is pretty ugly.

I personally often do tests against sys.hexversion, which is a little less
ugly (maybe ;).

if sys.hexversion >= 0x20600f0:
    # 2.6 or 2.7

- -Barry
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.11 (GNU/Linux)

iQIcBAEBCAAGBQJNpK+lAAoJEBJutWOnSwa/+pYP/R5x6siiCRpGGUjwQti7BZRg
fcddP1kUV/FU6fQFE44ZrXki0d7kx5SyNCTQTphtnBlUiqAvtjYK6oKon9xNqsq/
5KocKrbJZ/h806H0irzElzRXs3M+OcymC2ZQwvR1hqzMrdFRGRQMmanR0yz3LB1x
S2mF+TR2zEdMX4Ace6+Y5Vx4NYHTELapMOfamDgtft+lE+c8w6U7aZX/Gyzgagsd
yXDd33LI4/dRIENA/9NYycm05txebWpbEQsLLytczZqfLX7uXqOe5HTvO2g9CcmD
Yi8AT3ypERAHp+cLED7ICJkD3MY9AMlJBum7wgFjrKvwiJ7tu9x/9nTP3jhE6SaV
oDcyo2qLoCYbBBL+83bsRYK0AEBZAz4fsfJ/2A+a7vIjFrAFsRab7qiLrC9Pg1N+
DC4aFakRkrRBOoLoXnfYmTDq3zqvny4RzsbwP/eD/A13YfquLr8ECL6TFa3WOpNz
cmB6+h6O7AcMHlblON+Cf3PfHcPQC1h9atrkrjOBeG9m5812HcO/sC8lMWk0pUpa
D8OozOJI3ISQvw/rDEFMYKauc7eUIp/2hR4N7NBqBBo28TL38sRvQsQ6UoV0C/aF
3cjAhSHG5g2zsZZADUbyAel5h6MKoyoHkbA11yOHS3RXmE8XASa0zckRYwUwfGtC
M8PDejFisplJhDj2rGSa
=fpk4
-----END PGP SIGNATURE-----

From regebro at gmail.com  Tue Apr 12 22:05:57 2011
From: regebro at gmail.com (Lennart Regebro)
Date: Tue, 12 Apr 2011 22:05:57 +0200
Subject: [Python-Dev] Bug? Can't rebind local variables after calling
	pdb.set_trace()
In-Reply-To: <BANLkTi=ax+KiiT=FFhtLJ4pM3m0Ji-hhvg@mail.gmail.com>
References: <BANLkTi=ax+KiiT=FFhtLJ4pM3m0Ji-hhvg@mail.gmail.com>
Message-ID: <BANLkTik6KgXG5tRxQ-8zUR+z0U5KrbUAng@mail.gmail.com>

Hasn't it always been like that? I tried with Python 2.3 now and it's
the same. I have no memory of that actually changing an existing
variable in any version of Python I've used. More testing shows
that this works:

-> print "lv is ", lv
(Pdb) lv=2
(Pdb) c
lv is  2

While this seems to "reset" it:

-> print "lv is ", lv
(Pdb) lv=2
(Pdb) lv
1
(Pdb) c
lv is  1

This is the same from Python 2.3 to 2.6. I thought it was just a lack
of a feature, that it was for some reason really hard to change the
value of an existing variable from the debugger. I thought that for ten
years. It never occurred to me to change the variable and type c
without first checking that the variable had changed... :-)

It is however fixed in 2.7.

-> print "lv is ", lv
(Pdb) lv=2
(Pdb) lv
2
(Pdb) c
lv is  2


But this bug/lack of feature has been there as long as I can remember. :-)

//Lennart

From barry at python.org  Tue Apr 12 22:14:09 2011
From: barry at python.org (Barry Warsaw)
Date: Tue, 12 Apr 2011 16:14:09 -0400
Subject: [Python-Dev] PEP 396, Module Version Numbers
In-Reply-To: <20110407161143.GA9851@unaka.lan>
References: <20110405145213.29f706aa@neurotica.wooz.org>
	<4D9C2C88.8020604@arbash-meinel.com>
	<20110407161143.GA9851@unaka.lan>
Message-ID: <20110412161409.660aea25@neurotica.wooz.org>

On Apr 07, 2011, at 09:13 AM, Toshio Kuratomi wrote:

>Barry -- I think we want to talk about NormalizedVersion.from_parts() rather
>than parse_version().

See my previous follow up.  It probably makes sense to be explicit in one
PEP or the other, but...

>So you can't escape needing a function to compare versions.
>(NormalizedVersion does this by letting you compare NormalizedVersions
>together).  Barry if this is correct, maybe __version_info__ is useless and
>I shouldn't have brought it up at pycon?

...yikes!  You might be right about that.  Unless there are any counter
arguments, I think I'll have to remove it from PEP 396.

(Makes me like hexversion even more :).

-Barry
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 836 bytes
Desc: not available
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110412/75ab3696/attachment-0001.pgp>

From lac at openend.se  Tue Apr 12 22:14:28 2011
From: lac at openend.se (Laura Creighton)
Date: Tue, 12 Apr 2011 22:14:28 +0200
Subject: [Python-Dev] PEP 396, Module Version Numbers
In-Reply-To: Message from Barry Warsaw <barry@python.org> of "Tue,
	12 Apr 2011 15:56:32 EDT." <20110412155632.6ecce755@neurotica.wooz.org>
References: <20110405145213.29f706aa@neurotica.wooz.org>
	<4D9B7A1F.3070106@g.nevcal.com>
	<BANLkTi==iQ1jwgUG7XFejwnapPreViKfLw@mail.gmail.com>
	<4D9CE1E8.9000203@g.nevcal.com>
	<BANLkTindwJsPf=tGsve6WD-6Yj-6u7MrcA@mail.gmail.com>
	<4D9D43D5.40603@g.nevcal.com>
	<BANLkTimD06e01Rzsz_k=A39Ak=f8n-iKEA@mail.gmail.com>
	<20110412155632.6ecce755@neurotica.wooz.org> 
Message-ID: <201104122014.p3CKESnt027678@theraft.openend.se>

In a message of Tue, 12 Apr 2011 15:56:32 EDT, Barry Warsaw writes:
<snip>
>The Deriving section of the PEP is not the most important part of it, and
> is
>not making specific recommendations.  If it's not clear that it's only
>providing examples, or it's distracting, then maybe it's better off being
>removed, cut down or rewritten.

To me, at any rate, it read as a pretty important part.  But the version I
just read from http://www.python.org/dev/peps/pep-0396/ still has the
re in it as well.

Laura

From barry at python.org  Tue Apr 12 22:18:31 2011
From: barry at python.org (Barry Warsaw)
Date: Tue, 12 Apr 2011 16:18:31 -0400
Subject: [Python-Dev] PEP 396, Module Version Numbers
In-Reply-To: <201104122014.p3CKESnt027678@theraft.openend.se>
References: <20110405145213.29f706aa@neurotica.wooz.org>
	<4D9B7A1F.3070106@g.nevcal.com>
	<BANLkTi==iQ1jwgUG7XFejwnapPreViKfLw@mail.gmail.com>
	<4D9CE1E8.9000203@g.nevcal.com>
	<BANLkTindwJsPf=tGsve6WD-6Yj-6u7MrcA@mail.gmail.com>
	<4D9D43D5.40603@g.nevcal.com>
	<BANLkTimD06e01Rzsz_k=A39Ak=f8n-iKEA@mail.gmail.com>
	<20110412155632.6ecce755@neurotica.wooz.org>
	<201104122014.p3CKESnt027678@theraft.openend.se>
Message-ID: <20110412161831.3fa25028@neurotica.wooz.org>

On Apr 12, 2011, at 10:14 PM, Laura Creighton wrote:

>In a message of Tue, 12 Apr 2011 15:56:32 EDT, Barry Warsaw writes:
><snip>
>>The Deriving section of the PEP is not the most important part of it, and
>> is
>>not making specific recommendations.  If it's not clear that it's only
>>providing examples, or it's distracting, then maybe it's better off being
>>removed, cut down or rewritten.
>
>To me, at any rate, it read as a pretty  important part.  But the version I
>just read from http://www.python.org/dev/peps/pep-0396/ still has the
>re in it as well.

Yep, I haven't committed or pushed the change yet.

-Barry

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 836 bytes
Desc: not available
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110412/106dfe04/attachment.pgp>

From martin at v.loewis.de  Tue Apr 12 22:20:15 2011
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Tue, 12 Apr 2011 22:20:15 +0200
Subject: [Python-Dev] Hg question
In-Reply-To: <BANLkTi=hzMJqXCX-f_a0QYkofAiOjJMEpA@mail.gmail.com>
References: <BANLkTi=hzMJqXCX-f_a0QYkofAiOjJMEpA@mail.gmail.com>
Message-ID: <4DA4B3FF.8090808@v.loewis.de>

> I would like to replace m.txt in the summary with the content of the
> file m.txt.  I tried to use the recipe [1], but qimport fails:

I'd use "hg export":

hg export -r 69272:tip > ../patch
Edit patch to update commit message
cd ..
rm -rf this_clone
hg clone clean_cpython this_clone
cd this_clone
hg import ../patch

HTH,
Martin

From guido at python.org  Tue Apr 12 23:02:09 2011
From: guido at python.org (Guido van Rossum)
Date: Tue, 12 Apr 2011 14:02:09 -0700
Subject: [Python-Dev] Bug? Can't rebind local variables after calling
	pdb.set_trace()
In-Reply-To: <BANLkTik6KgXG5tRxQ-8zUR+z0U5KrbUAng@mail.gmail.com>
References: <BANLkTi=ax+KiiT=FFhtLJ4pM3m0Ji-hhvg@mail.gmail.com>
	<BANLkTik6KgXG5tRxQ-8zUR+z0U5KrbUAng@mail.gmail.com>
Message-ID: <BANLkTi=-eCmuhCtEZ4YBRNHbvVf4V+oCDw@mail.gmail.com>

On Tue, Apr 12, 2011 at 1:05 PM, Lennart Regebro <regebro at gmail.com> wrote:
> Hasn't it always been like that? I tried with Python 2.3 now and it's
> the same. I have no memory of that actually changing an existing
> variable in any version of Python I've used. More testing turns out
> that this works:
>
> -> print "lv is ", lv
> (Pdb) lv=2
> (Pdb) c
> lv is  2
>
> While this seem to "reset" is:
>
> -> print "lv is ", lv
> (Pdb) lv=2
> (Pdb) lv
> 1
> (Pdb) c
> lv is  1
>
> This is the same from Python 2.3 to 2.6. I thought is just was a lack
> of feature, that there for some reason was really hard to change the
> value of an existing variable from the debugger. I though that for ten
> years. It never occurred to me to change the variable and type c
> without first checking that the variable had changed... :-)
>
> It is however fixed in 2.7.
>
> -> print "lv is ", lv
> (Pdb) lv=2
> (Pdb) lv
> 2
> (Pdb) c
> lv is  2
>
>
> But this bug/lack of feature has been there as long as I can remember. :-)

I swear it was my intention that assigning to locals would work, and I
was surprised to learn that it didn't. I'm glad it's fixed in 2.7
though... :-)

-- 
--Guido van Rossum (python.org/~guido)

From victor.stinner at haypocalc.com  Tue Apr 12 23:08:13 2011
From: victor.stinner at haypocalc.com (Victor Stinner)
Date: Tue, 12 Apr 2011 23:08:13 +0200
Subject: [Python-Dev] Hg question
In-Reply-To: <BANLkTi=hzMJqXCX-f_a0QYkofAiOjJMEpA@mail.gmail.com>
References: <BANLkTi=hzMJqXCX-f_a0QYkofAiOjJMEpA@mail.gmail.com>
Message-ID: <1302642493.19078.3.camel@marge>

On Tuesday 12 April 2011 at 14:35 -0400, Alexander Belopolsky wrote:
> I was preparing a commit to 3.2 and default branches and mistakenly
> used -m instead of -l commit option.  As a result, I have
> 
> $ hg out
> comparing with ssh://hg at hg.python.org/cpython
> searching for changes
> changeset:   69272:0bf1354fab6b
> branch:      3.2
> parent:      69268:bfc586c558ed
> user:        Alexander Belopolsky <alexander.belopolsky at gmail.com>
> date:        Tue Apr 12 14:00:43 2011 -0400
> summary:     m.txt
> 
> changeset:   69273:516ed700ce22
> tag:         tip
> parent:      69270:c26d015cbde8
> parent:      69272:0bf1354fab6b
> user:        Alexander Belopolsky <alexander.belopolsky at gmail.com>
> date:        Tue Apr 12 14:02:22 2011 -0400
> summary:     m.txt
> 
> 
> I would like to replace m.txt in the summary with the content of the
> file m.txt.

I don't know if it is the "right" solution, but I would use hg strip
+histedit. Something like:

$ hg strip 516ed700ce22 # remove commit in the default branch
$ hg update 3.2
$ hg histedit 0bf1354fab6b
<don't touch code>
$ hg ci -l m.txt
$ hg update default
$ hg merge 3.2

WARNING: it is easy to lose work using strip and histedit, so first
make sure that you have a copy of your commits. Use hg log -p, hg
export, clone the whole repository, etc.

Victor


From brett at python.org  Wed Apr 13 00:07:16 2011
From: brett at python.org (Brett Cannon)
Date: Tue, 12 Apr 2011 15:07:16 -0700
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
	Compatibiilty Requirements
In-Reply-To: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
Message-ID: <BANLkTin5A_tJ+gCxXuA2xVWhMAghBb1dqw@mail.gmail.com>

Here is the next draft of the PEP. I changed the semantics requirement to
state that 100% branch coverage is required for any Python code that is
being replaced by accelerated C code instead of the broad "must be
semantically equivalent". Also tweaked wording here and there to make
certain things more obvious.

----------------------------------

PEP: 399
Title: Pure Python/C Accelerator Module Compatibility Requirements
Version: $Revision: 88219 $
Last-Modified: $Date: 2011-01-27 13:47:00 -0800 (Thu, 27 Jan 2011) $
Author: Brett Cannon <brett at python.org>
Status: Draft
Type: Informational
Content-Type: text/x-rst
Created: 04-Apr-2011
Python-Version: 3.3
Post-History: 04-Apr-2011, 12-Apr-2011

Abstract
========

The Python standard library under CPython contains various instances
of modules implemented in both pure Python and C (either entirely or
partially). This PEP requires that in these instances the
C code *must* pass the test suite used for the pure Python code
so as to act as much like a drop-in replacement as possible
(C- and VM-specific tests are exempt). It is also required that new
C-based modules lacking a pure Python equivalent implementation get
special permission to be added to the standard library.


Rationale
=========

Python has grown beyond the CPython virtual machine (VM). IronPython_,
Jython_, and PyPy_ are all currently viable alternatives to the
CPython VM. This VM ecosystem that has sprung up around the Python
programming language has led to Python being used in many different
areas where CPython cannot be used, e.g., Jython allowing Python to be
used in Java applications.

A problem all of the VMs other than CPython face is handling modules
from the standard library that are implemented (to some extent) in C.
Since they do not typically support the entire `C API of Python`_ they
are unable to use the code used to create the module. Oftentimes this
leads these other VMs to re-implement the modules either in pure
Python or in the programming language used to implement the VM
(e.g., in C# for IronPython). This duplication of effort between
CPython, PyPy, Jython, and IronPython is extremely unfortunate as
implementing a module *at least* in pure Python would help mitigate
this duplicate effort.

The purpose of this PEP is to minimize this duplicate effort by
mandating that all new modules added to Python's standard library
*must* have a pure Python implementation *unless* special dispensation
is given. This makes sure that a module in the stdlib is available to
all VMs and not just to CPython (pre-existing modules that do not meet
this requirement are exempt, although there is nothing preventing
someone from adding in a pure Python implementation retroactively).

Re-implementing parts (or all) of a module in C (in the case
of CPython) is still allowed for performance reasons, but any such
accelerated code must pass the same test suite (sans VM- or C-specific
tests) to verify semantics and prevent divergence. To accomplish this,
the test suite for the module must have 100% branch coverage of the
pure Python implementation before the acceleration code may be added.
This is to prevent users from accidentally relying
on semantics that are specific to the C code and are not reflected in
the pure Python implementation that other VMs rely upon. For example,
in CPython 3.2.0, ``heapq.heappop()`` does an explicit type
check in its accelerated C code while the Python code uses duck
typing::

    from test.support import import_fresh_module

    c_heapq = import_fresh_module('heapq', fresh=['_heapq'])
    py_heapq = import_fresh_module('heapq', blocked=['_heapq'])


    class Spam:
        """Tester class which defines no other magic methods but
        __len__()."""
        def __len__(self):
            return 0


    try:
        c_heapq.heappop(Spam())
    except TypeError:
        # Explicit type check failure: "heap argument must be a list"
        pass

    try:
        py_heapq.heappop(Spam())
    except AttributeError:
        # Duck typing failure: "'Spam' object has no attribute 'pop'"
        pass

This kind of divergence is a problem for users as they unwittingly
write code that is CPython-specific. This is also an issue for other
VM teams as they have to deal with bug reports from users thinking
that they incorrectly implemented the module when in fact it was
caused by an untested case.


Details
=======

Starting in Python 3.3, any modules added to the standard library must
have a pure Python implementation. This rule can only be ignored if
the Python development team grants a special exemption for the module.
Typically the exemption will be granted only when a module wraps a
specific C-based library (e.g., sqlite3_). In granting an exemption it
will be recognized that the module will be considered exclusive to
CPython and not part of Python's standard library that other VMs are
expected to support. Usage of ``ctypes`` to provide an
API for a C library will continue to be frowned upon as ``ctypes``
lacks compiler guarantees that C code typically relies upon to prevent
certain errors from occurring (e.g., API changes).

Even though a pure Python implementation is mandated by this PEP, it
does not preclude the use of a companion acceleration module. If an
acceleration module is provided it is to be named the same as the
module it is accelerating with an underscore attached as a prefix,
e.g., ``_warnings`` for ``warnings``. The common pattern to access
the accelerated code from the pure Python implementation is to import
it with an ``import *``, e.g., ``from _warnings import *``. This is
typically done at the end of the module to allow it to overwrite
specific Python objects with their accelerated equivalents. This kind
of import can also be done before the end of the module when needed,
e.g., an accelerated base class is provided but is then subclassed by
Python code. This PEP does not mandate that pre-existing modules in
the stdlib that lack a pure Python equivalent gain such a module. But
if people do volunteer to provide and maintain a pure Python
equivalent (e.g., the PyPy team volunteering their pure Python
implementation of the ``csv`` module and maintaining it) then such
code will be accepted.
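
For instance, the pure Python ``heapq`` module does roughly the
following at the point where the accelerated code is pulled in::

    # Pure Python definitions of heappush(), heappop(), etc. above ...

    # If available, use the C implementation.
    try:
        from _heapq import *
    except ImportError:
        pass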

This requirement does not apply to modules already existing as only C
code in the standard library. It is acceptable to retroactively add a
pure Python implementation of a module implemented entirely in C, but
in those instances the C version is considered the reference
implementation in terms of expected semantics.

Any new accelerated code must act as a drop-in replacement as close
to the pure Python implementation as reasonable. Technical details of
the VM providing the accelerated code are allowed to differ as
necessary, e.g., a class being a ``type`` when implemented in C. To
verify that the Python and equivalent C code operate as similarly as
possible, both code bases must be tested using the same tests which
apply to the pure Python code (tests specific to the C code or any VM
do not fall under this requirement). To make sure that the test
suite is thorough enough to cover all relevant semantics, the tests
must have 100% branch coverage for the Python code being replaced by
C code. This will make sure that the new acceleration code operates
as much like a drop-in replacement for the Python code as
possible. Testing should still be done for issues that come up when
working with C code even if it is not explicitly required to meet the
coverage requirement, e.g., tests should be aware that C code typically
has special paths for things such as built-in types, subclasses of
built-in types, etc.

Acting as a drop-in replacement also dictates that no public API be
provided in accelerated code that does not exist in the pure Python
code.  Without this requirement people could accidentally come to rely
on a detail in the accelerated code which is not made available to
other VMs that use the pure Python implementation. To help verify
that the contract of semantic equivalence is being met, a module must
be tested both with and without its accelerated code as thoroughly as
possible.

As an example, to write tests which exercise both the pure Python and
C accelerated versions of a module, a basic idiom can be followed::

    import collections.abc
    from test.support import import_fresh_module, run_unittest
    import unittest

    c_heapq = import_fresh_module('heapq', fresh=['_heapq'])
    py_heapq = import_fresh_module('heapq', blocked=['_heapq'])


    class ExampleTest(unittest.TestCase):

        def test_heappop_exc_for_non_MutableSequence(self):
            # Raise TypeError when heap is not a
            # collections.abc.MutableSequence.
            class Spam:
                """Test class lacking many ABC-required methods
                (e.g., pop())."""
                def __len__(self):
                    return 0

            heap = Spam()
            self.assertFalse(isinstance(heap,
                                collections.abc.MutableSequence))
            with self.assertRaises(TypeError):
                self.heapq.heappop(heap)


    class AcceleratedExampleTest(ExampleTest):

        """Test using the accelerated code."""

        heapq = c_heapq


    class PyExampleTest(ExampleTest):

        """Test with just the pure Python code."""

        heapq = py_heapq


    def test_main():
        run_unittest(AcceleratedExampleTest, PyExampleTest)


    if __name__ == '__main__':
        test_main()


If this test were to provide 100% branch coverage for
``heapq.heappop()`` in the pure Python implementation then the
accelerated C code would be allowed to be added to CPython's standard
library. If it did not, then the test suite would need to be updated
until 100% branch coverage was provided before the accelerated C code
could be added.


Copyright
=========

This document has been placed in the public domain.


.. _IronPython: http://ironpython.net/
.. _Jython: http://www.jython.org/
.. _PyPy: http://pypy.org/
.. _C API of Python: http://docs.python.org/py3k/c-api/index.html
.. _sqlite3: http://docs.python.org/py3k/library/sqlite3.html
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110412/87337fae/attachment.html>

From solipsis at pitrou.net  Wed Apr 13 00:31:36 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Wed, 13 Apr 2011 00:31:36 +0200
Subject: [Python-Dev] peps: Update PEP 399 to include comments from
	python-dev.
References: <E1Q9lcn-0006zA-Q7@dinsdale.python.org>
Message-ID: <20110413003136.3adb6af8@pitrou.net>

On Tue, 12 Apr 2011 23:59:53 +0200
brett.cannon <python-checkins at python.org> wrote:
> Technical details of
> +the VM providing the accelerated code are allowed to differ as
> +necessary, e.g., a class being a ``type`` when implemented in C.

I don't understand what this means ("a class being a ``type`` when
implemented in C").

> +If this test were to provide 100% branch coverage for
> +``heapq.heappop()`` in the pure Python implementation then the
> +accelerated C code would be allowed to be added to CPython's standard
> +library. If it did not, then the test suite would need to be updated
> +until 100% branch coverage was provided before the accelerated C code
> +could be added.

I really think that's too strong a requirement. We don't want to
paralyze development until the stdlib gets 100% coverage in the test
suite.

Regards

Antoine.



From benjamin at python.org  Wed Apr 13 00:34:42 2011
From: benjamin at python.org (Benjamin Peterson)
Date: Tue, 12 Apr 2011 17:34:42 -0500
Subject: [Python-Dev] peps: Update PEP 399 to include comments from
	python-dev.
In-Reply-To: <20110413003136.3adb6af8@pitrou.net>
References: <E1Q9lcn-0006zA-Q7@dinsdale.python.org>
	<20110413003136.3adb6af8@pitrou.net>
Message-ID: <BANLkTi=uLk6RQi0UW4PKeXu8vEfOnrscYQ@mail.gmail.com>

2011/4/12 Antoine Pitrou <solipsis at pitrou.net>:
> On Tue, 12 Apr 2011 23:59:53 +0200
> brett.cannon <python-checkins at python.org> wrote:
>> Technical details of
>> +the VM providing the accelerated code are allowed to differ as
>> +necessary, e.g., a class being a ``type`` when implemented in C.
>
> I don't understand what this means ("a class being a ``type`` when
> implemented in C").
>
>> +If this test were to provide 100% branch coverage for
>> +``heapq.heappop()`` in the pure Python implementation then the
>> +accelerated C code would be allowed to be added to CPython's standard
>> +library. If it did not, then the test suite would need to be updated
>> +until 100% branch coverage was provided before the accelerated C code
>> +could be added.
>
> I really think that's a too strong requirement. We don't want to
> paralyze development until the stdlib gets 100% coverage in the tests
> suite.

Presumably this only applies to new code, though, which I would hope
would have comprehensive test coverage regardless of this PEP.



-- 
Regards,
Benjamin

From solipsis at pitrou.net  Wed Apr 13 00:38:32 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Wed, 13 Apr 2011 00:38:32 +0200
Subject: [Python-Dev] peps: Update PEP 399 to include comments from
	python-dev.
In-Reply-To: <BANLkTi=uLk6RQi0UW4PKeXu8vEfOnrscYQ@mail.gmail.com>
References: <E1Q9lcn-0006zA-Q7@dinsdale.python.org>
	<20110413003136.3adb6af8@pitrou.net>
	<BANLkTi=uLk6RQi0UW4PKeXu8vEfOnrscYQ@mail.gmail.com>
Message-ID: <20110413003832.3c672db4@pitrou.net>

On Tue, 12 Apr 2011 17:34:42 -0500
Benjamin Peterson <benjamin at python.org> wrote:
> 2011/4/12 Antoine Pitrou <solipsis at pitrou.net>:
> > On Tue, 12 Apr 2011 23:59:53 +0200
> > brett.cannon <python-checkins at python.org> wrote:
> >> Technical details of
> >> +the VM providing the accelerated code are allowed to differ as
> >> +necessary, e.g., a class being a ``type`` when implemented in C.
> >
> > I don't understand what this means ("a class being a ``type`` when
> > implemented in C").
> >
> >> +If this test were to provide 100% branch coverage for
> >> +``heapq.heappop()`` in the pure Python implementation then the
> >> +accelerated C code would be allowed to be added to CPython's standard
> >> +library. If it did not, then the test suite would need to be updated
> >> +until 100% branch coverage was provided before the accelerated C code
> >> +could be added.
> >
> > I really think that's a too strong requirement. We don't want to
> > paralyze development until the stdlib gets 100% coverage in the tests
> > suite.
> 
> Presumably this only applies to new code, though, which I would hope
> would have comprehensive test coverage regardless of this PEP.

True, but comprehensive test coverage is not the same as a formal
requirement of 100% coverage.

Regards

Antoine.

From solipsis at pitrou.net  Wed Apr 13 01:43:15 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Wed, 13 Apr 2011 01:43:15 +0200
Subject: [Python-Dev] cpython: Fix 64-bit safety issue in BZ2Compressor
 and BZ2Decompressor.
References: <E1Q9kmK-0002RJ-NW@dinsdale.python.org>
Message-ID: <20110413014315.5ac9d738@pitrou.net>

On Tue, 12 Apr 2011 23:05:40 +0200
nadeem.vawda <python-checkins at python.org> wrote:
> http://hg.python.org/cpython/rev/0010cc5f22d4
> changeset:   69275:0010cc5f22d4
> user:        Nadeem Vawda <nadeem.vawda at gmail.com>
> date:        Tue Apr 12 23:02:42 2011 +0200
> summary:
>   Fix 64-bit safety issue in BZ2Compressor and BZ2Decompressor.
> 
> files:
>   Lib/test/test_bz2.py |  36 +++++++++++++++++++++++++++++++-
>   Modules/_bz2module.c |  33 +++++++++++++++++++++--------
>   2 files changed, 59 insertions(+), 10 deletions(-)

Can you add a Misc/NEWS entry?

Thank you

Antoine.



From solipsis at pitrou.net  Wed Apr 13 01:43:39 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Wed, 13 Apr 2011 01:43:39 +0200
Subject: [Python-Dev] cpython (3.1): Fix Issue11703 - urllib2.geturl()
 does not return	correct url when the original
References: <E1Q9myK-0004rL-Te@dinsdale.python.org>
Message-ID: <20110413014339.19e045a6@pitrou.net>

On Wed, 13 Apr 2011 01:26:12 +0200
senthil.kumaran <python-checkins at python.org> wrote:
> http://hg.python.org/cpython/rev/3f240a1cd245
> changeset:   69284:3f240a1cd245
> branch:      3.1
> parent:      69277:707078ca0a77
> user:        Senthil Kumaran <orsenthil at gmail.com>
> date:        Wed Apr 13 07:01:19 2011 +0800
> summary:
>   Fix Issue11703 - urllib2.geturl() does not return correct url when the original url contains #fragment. Patch Contribution by Santoso Wijaya.
> 
> files:
>   Lib/test/test_urllib.py     |  10 ++++++++++
>   Lib/test/test_urllib2.py    |  15 ++++++++++++++-
>   Lib/test/test_urllib2net.py |   2 +-
>   Lib/urllib/request.py       |   9 ++++++---
>   4 files changed, 31 insertions(+), 5 deletions(-)

Can you add a Misc/NEWS entry?

Thank you

Antoine.



From tseaver at palladion.com  Wed Apr 13 01:50:34 2011
From: tseaver at palladion.com (Tres Seaver)
Date: Tue, 12 Apr 2011 19:50:34 -0400
Subject: [Python-Dev] peps: Update PEP 399 to include comments from
	python-dev.
In-Reply-To: <20110413003136.3adb6af8@pitrou.net>
References: <E1Q9lcn-0006zA-Q7@dinsdale.python.org>
	<20110413003136.3adb6af8@pitrou.net>
Message-ID: <io2oga$jb8$1@dough.gmane.org>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On 04/12/2011 06:31 PM, Antoine Pitrou wrote:
> On Tue, 12 Apr 2011 23:59:53 +0200
> brett.cannon <python-checkins at python.org> wrote:
>> Technical details of
>> +the VM providing the accelerated code are allowed to differ as
>> +necessary, e.g., a class being a ``type`` when implemented in C.
> 
> I don't understand what this means ("a class being a ``type`` when
> implemented in C").
> 
>> +If this test were to provide 100% branch coverage for
>> +``heapq.heappop()`` in the pure Python implementation then the
>> +accelerated C code would be allowed to be added to CPython's standard
>> +library. If it did not, then the test suite would need to be updated
>> +until 100% branch coverage was provided before the accelerated C code
>> +could be added.
> 
> I really think that's a too strong requirement. We don't want to
> paralyze development until the stdlib gets 100% coverage in the tests
> suite.

Anybody who is either a) providing a C accelerator for an existing
stdlib module which doesn't yet have 100% coverage, or b) writing a new
module proposed for the stdlib without providing 100% coverage needs to
be prepared to defend that practice against the presumption stated by
the PEP.  We can grandfather existing code already in the stdlib, but
anybody making substantial changes to such code without first providing
coverage ought to be challenged.

Trying to accelerate existing code which doesn't have the coverage is
insane:  how can you know that the accelerator doesn't subtly change the
semantics without tests?


Tres.
- -- 
===================================================================
Tres Seaver          +1 540-429-0999          tseaver at palladion.com
Palladion Software   "Excellence by Design"    http://palladion.com
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.10 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iEYEARECAAYFAk2k5UoACgkQ+gerLs4ltQ5QMgCfda1S5DKbWfrJpy8bp8in0qyr
oisAn01TP7TT41Mj8q3+rusJ+vccNhcS
=s0Q6
-----END PGP SIGNATURE-----


From solipsis at pitrou.net  Wed Apr 13 02:07:17 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Wed, 13 Apr 2011 02:07:17 +0200
Subject: [Python-Dev] peps: Update PEP 399 to include comments from
	python-dev.
References: <E1Q9lcn-0006zA-Q7@dinsdale.python.org>
	<20110413003136.3adb6af8@pitrou.net> <io2oga$jb8$1@dough.gmane.org>
Message-ID: <20110413020717.421a5695@pitrou.net>

On Tue, 12 Apr 2011 19:50:34 -0400
Tres Seaver <tseaver at palladion.com> wrote:
> Trying to accelerate existing code which doesn't have the coverage is
> insane:  how can you know that the accelerator doesn't subtly change the
> semantics without tests?

Well, why do you think tests guarantee that the semantics are the same?
Tests are not a magic bullet. "Covering" a code path doesn't ensure
that every possible behaviour is accounted for.

And if you think that is "insane", you should probably wipe most of the
software you are using on your computer, because most existing software
doesn't have 100% test coverage.

Regards

Antoine.



From rdmurray at bitdance.com  Wed Apr 13 02:49:45 2011
From: rdmurray at bitdance.com (R. David Murray)
Date: Tue, 12 Apr 2011 20:49:45 -0400
Subject: [Python-Dev] Bug? Can't rebind local variables after calling
	pdb.set_trace()
In-Reply-To: <BANLkTik6KgXG5tRxQ-8zUR+z0U5KrbUAng@mail.gmail.com>
References: <BANLkTi=ax+KiiT=FFhtLJ4pM3m0Ji-hhvg@mail.gmail.com>
	<BANLkTik6KgXG5tRxQ-8zUR+z0U5KrbUAng@mail.gmail.com>
Message-ID: <20110413005006.3E66B2500D0@mailhost.webabinitio.net>

On Tue, 12 Apr 2011 22:05:57 +0200, Lennart Regebro <regebro at gmail.com> wrote:
> This is the same from Python 2.3 to 2.6. I thought is just was a lack
> of feature, that there for some reason was really hard to change the
> value of an existing variable from the debugger. I though that for ten
> years. It never occurred to me to change the variable and type c
> without first checking that the variable had changed... :-)
> 
> It is however fixed in 2.7.

For the curious:

http://bugs.python.org/issue5215

--
R. David Murray           http://www.bitdance.com

From foom at fuhm.net  Wed Apr 13 04:34:40 2011
From: foom at fuhm.net (James Y Knight)
Date: Tue, 12 Apr 2011 22:34:40 -0400
Subject: [Python-Dev] peps: Update PEP 399 to include comments from
	python-dev.
In-Reply-To: <io2oga$jb8$1@dough.gmane.org>
References: <E1Q9lcn-0006zA-Q7@dinsdale.python.org>
	<20110413003136.3adb6af8@pitrou.net> <io2oga$jb8$1@dough.gmane.org>
Message-ID: <C364A74C-4264-480C-A868-05D1F259D3CF@fuhm.net>

On Apr 12, 2011, at 7:50 PM, Tres Seaver wrote:
> Trying to accelerate existing code which doesn't have the coverage is
> insane:  how can you know that the accelerator doesn't subtly change the
> semantics without tests?

But even if you do have 100% Python source code branch coverage, that's not nearly enough. There are thousands of branches inside the Python interpreter, which you also need to have full coverage on to *really* ensure that the behavior of the code does not subtly change.

Good luck with that.

James

From cournape at gmail.com  Wed Apr 13 05:25:59 2011
From: cournape at gmail.com (David Cournapeau)
Date: Wed, 13 Apr 2011 12:25:59 +0900
Subject: [Python-Dev] Pass possibly imcompatible options to distutil's
	ccompiler
In-Reply-To: <BANLkTi=1h+E6ofr1Rs5Q=pZhXEeJ+nRP1w@mail.gmail.com>
References: <BANLkTin1Lw-icaWEFSuTa+OUg5nKV37inw@mail.gmail.com>
	<BANLkTi=1h+E6ofr1Rs5Q=pZhXEeJ+nRP1w@mail.gmail.com>
Message-ID: <BANLkTikc3JZ3VqHjc_oHDzrZHNsyYrO86w@mail.gmail.com>

On Tue, Apr 12, 2011 at 8:32 AM, Nick Coghlan <ncoghlan at gmail.com> wrote:
> On Tue, Apr 12, 2011 at 7:41 AM, Lukas Lueg <lukas.lueg at googlemail.com> wrote:
>> Any other ideas on how to solve this in a better way?
>
> Have you tried with distutils2? If it can't help you, it should really
> be looked into before the packaging API is locked for 3.3.

distutils2 is almost identical to distutils as far as compilation goes,
so I am not sure why it would help the OP.

@Lukas: if you want to check for compiler flag support, the best way
to do it in distutils is to use the config support: look in particular
in the try_compile/try_link methods. The schema is basically:

# code may refer to e.g. a trivial extension source code
try_compile(code) # check that the current option set is sane
for each additional flag you are interested:
   save compiler option
   add the additional flag
   if try_compile(code) == 0:
       restore compiler option
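
Concretely, an untested sketch of that loop using the plain CCompiler
API (flag list and file names made up) could look like:

import os
import tempfile
from distutils.ccompiler import new_compiler
from distutils.errors import CompileError
from distutils.sysconfig import customize_compiler

def supported_flags(candidates):
    # Compile a trivial file once per candidate flag and keep the flags
    # that don't make the compiler error out.  Best-effort only: some
    # compilers merely warn about unknown flags.
    compiler = new_compiler()
    customize_compiler(compiler)
    tmpdir = tempfile.mkdtemp()
    src = os.path.join(tmpdir, 'flagcheck.c')
    with open(src, 'w') as fp:
        fp.write('int main(void) { return 0; }\n')
    ok = []
    for flag in candidates:
        try:
            compiler.compile([src], output_dir=tmpdir, extra_postargs=[flag])
        except CompileError:
            continue
        ok.append(flag)
    return ok

# e.g. supported_flags(['-fvisibility=hidden', '-Wall'])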

cheers,

David

From stefan_ml at behnel.de  Wed Apr 13 06:28:58 2011
From: stefan_ml at behnel.de (Stefan Behnel)
Date: Wed, 13 Apr 2011 06:28:58 +0200
Subject: [Python-Dev] peps: Update PEP 399 to include comments from
	python-dev.
In-Reply-To: <20110413020717.421a5695@pitrou.net>
References: <E1Q9lcn-0006zA-Q7@dinsdale.python.org>	<20110413003136.3adb6af8@pitrou.net>
	<io2oga$jb8$1@dough.gmane.org> <20110413020717.421a5695@pitrou.net>
Message-ID: <io38qa$p6k$1@dough.gmane.org>

Antoine Pitrou, 13.04.2011 02:07:
> On Tue, 12 Apr 2011 19:50:34 -0400
> Tres Seaver wrote:
>> Trying to accelerate existing code which doesn't have the coverage is
>> insane:  how can you know that the accelerator doesn't subtly change the
>> semantics without tests?
>
> Well, why do you think tests guarantee that the semantics are the same?
> Tests are not a magic bullet. "Covering" a code path doesn't ensure
> that every possible behaviour is accounted for.

This is particularly true when it comes to input types. There are different 
protocols out there that people use in their code, iteration vs. item 
access being only the most famous ones, inheritance vs. wrapping being 
another issue. Duck-typed Python code may work with a lot more input types 
than C code, even with 100% test coverage. This has been partly mentioned 
in the PEP, but not as clearly in the context of test coverage. Tests can 
only catch issues with the input they use themselves, not with all input 
the code will encounter in the wild.

However, I think we are really discussing a theoretical issue here. All the 
PEP is trying to achieve is to raise the bar for C code in the stdlib, for 
exactly the reason that it can easily introduce subtle semantic differences 
in comparison to generic Python code.

I think it would help to point out in the PEP that code that fails to touch 
the theoretical 100% test coverage bar is not automatically excluded from 
integration, but needs solid reasoning, review and testing in the wild in 
order to be considered an equivalent alternative implementation. But then 
again, this should actually be required anyway, even for code with an 
exceedingly high test coverage.

Stefan


From g.brandl at gmx.net  Wed Apr 13 08:54:00 2011
From: g.brandl at gmx.net (Georg Brandl)
Date: Wed, 13 Apr 2011 08:54:00 +0200
Subject: [Python-Dev] peps: Update PEP 399 to include comments from
	python-dev.
In-Reply-To: <20110413020717.421a5695@pitrou.net>
References: <E1Q9lcn-0006zA-Q7@dinsdale.python.org>
	<20110413003136.3adb6af8@pitrou.net> <io2oga$jb8$1@dough.gmane.org>
	<20110413020717.421a5695@pitrou.net>
Message-ID: <io3hab$tan$1@dough.gmane.org>

On 13.04.2011 02:07, Antoine Pitrou wrote:
> On Tue, 12 Apr 2011 19:50:34 -0400
> Tres Seaver <tseaver at palladion.com> wrote:
>> Trying to accelerate existing code which doesn't have the coverage is
>> insane:  how can you know that the accelerator doesn't subtly change the
>> semantics without tests?
> 
> Well, why do you think tests guarantee that the semantics are the same?
> Tests are not a magic bullet. "Covering" a code path doesn't ensure
> that every possible behaviour is accounted for.

def foo(a, b):
    if condition(a):
        bar = b
    do_something_with(bar)

This has 100% coverage if "condition" is usually true :)

Georg


From ncoghlan at gmail.com  Wed Apr 13 08:59:27 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 13 Apr 2011 16:59:27 +1000
Subject: [Python-Dev] Hg question
In-Reply-To: <BANLkTi=hzMJqXCX-f_a0QYkofAiOjJMEpA@mail.gmail.com>
References: <BANLkTi=hzMJqXCX-f_a0QYkofAiOjJMEpA@mail.gmail.com>
Message-ID: <BANLkTinVLFdnz9UzQ+YZ74QxdEtLu+AhMQ@mail.gmail.com>

On Wed, Apr 13, 2011 at 4:35 AM, Alexander Belopolsky
<alexander.belopolsky at gmail.com> wrote:
> I was preparing a commit to 3.2 and default branches and mistakenly
> used -m instead of -l commit option. As a result, I have

If you had caught the change before merging to default, then "hg
rollback" would have done the trick, but since there are *two* commits
you want to alter, it seems like one of the hg strip approaches
others have suggested will be necessary.

Agreed on the usability annoyances arising from mixing the desire for
a relatively clean history in the main repository with hg's near total
intolerance for mistakes, though :P

Cheers,
Nick.


-- 
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From stefan_ml at behnel.de  Wed Apr 13 09:06:30 2011
From: stefan_ml at behnel.de (Stefan Behnel)
Date: Wed, 13 Apr 2011 09:06:30 +0200
Subject: [Python-Dev] peps: Update PEP 399 to include comments from
	python-dev.
In-Reply-To: <io3hab$tan$1@dough.gmane.org>
References: <E1Q9lcn-0006zA-Q7@dinsdale.python.org>	<20110413003136.3adb6af8@pitrou.net>
	<io2oga$jb8$1@dough.gmane.org>	<20110413020717.421a5695@pitrou.net>
	<io3hab$tan$1@dough.gmane.org>
Message-ID: <io3i1m$19a$1@dough.gmane.org>

Georg Brandl, 13.04.2011 08:54:
> On 13.04.2011 02:07, Antoine Pitrou wrote:
>> On Tue, 12 Apr 2011 19:50:34 -0400
>> Tres Seaver wrote:
>>> Trying to accelerate existing code which doesn't have the coverage is
>>> insane:  how can you know that the accelerator doesn't subtly change the
>>> semantics without tests?
>>
>> Well, why do you think tests guarantee that the semantics are the same?
>> Tests are not a magic bullet. "Covering" a code path doesn't ensure
>> that every possible behaviour is accounted for.
>
> def foo(a, b):
>      if condition(a):
>          bar = b
>      do_something_with(bar)
>
> This has 100% coverage if "condition" is usually true :)

I understand that you are joking. However, the PEP mentions *branch* 
coverage as the 100% goal, which would imply that the above issue gets caught.

Stefan


From dsalvetti at trapeze.com  Tue Apr 12 20:01:46 2011
From: dsalvetti at trapeze.com (Djoume Salvetti)
Date: Tue, 12 Apr 2011 14:01:46 -0400
Subject: [Python-Dev] Bug? Can't rebind local variables after calling
	pdb.set_trace()
In-Reply-To: <BANLkTimPNET1B8Wg7WxRXSszvzx+DgkuOg@mail.gmail.com>
References: <BANLkTi=ax+KiiT=FFhtLJ4pM3m0Ji-hhvg@mail.gmail.com>
	<BANLkTin-AuHVyVATUu9xx+XaHvde17A6yQ@mail.gmail.com>
	<BANLkTikDDfx4FAnxS1kSS2qH2LjBamvBpg@mail.gmail.com>
	<BANLkTimPNET1B8Wg7WxRXSszvzx+DgkuOg@mail.gmail.com>
Message-ID: <BANLkTin6nkyDv6Z-6DoSLGyOJY9uSXjo1Q@mail.gmail.com>

On Tue, Apr 12, 2011 at 1:22 PM, Guido van Rossum <guido at python.org> wrote:
>
> Looking at the pastebin you are using !lv = 2. Why the !? Without it,
> it works fine:
>
>

I just wanted to make sure I was executing a python statement and not a pdb
alias.
I re-tested without the exclamation mark and still have the same issue:

 -> import pdb; pdb.set_trace()
(Pdb) list
  1     gv = 1
  2
  3     def f():
  4         lv = 1
  5  ->     import pdb; pdb.set_trace()
  6
  7     if __name__ == '__main__':
  8         f()
[EOF]
(Pdb) lv
1
(Pdb) lv = 2
(Pdb) lv
1
(Pdb)


-- 
Djoume Salvetti
Director of Development

T:416.601.1999 x 249
www.trapeze.com     twitter: trapeze
175 Bloor St. E., South Tower, Suite 900
Toronto, ON M4W 3R8
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110412/d1d6040b/attachment.html>

From orsenthil at gmail.com  Wed Apr 13 04:33:37 2011
From: orsenthil at gmail.com (Senthil Kumaran)
Date: Wed, 13 Apr 2011 10:33:37 +0800
Subject: [Python-Dev] cpython (3.1): Fix Issue11703 - urllib2.geturl()
 does not return	correct url when the original
In-Reply-To: <20110413014339.19e045a6@pitrou.net>
References: <E1Q9myK-0004rL-Te@dinsdale.python.org>
	<20110413014339.19e045a6@pitrou.net>
Message-ID: <20110413023336.GB16932@kevin>

On Wed, Apr 13, 2011 at 01:43:39AM +0200, Antoine Pitrou wrote:
> Can you add a Misc/NEWS entry?

Added. Thanks for noticing this.

-- 
Senthil

From solipsis at pitrou.net  Wed Apr 13 13:52:25 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Wed, 13 Apr 2011 13:52:25 +0200
Subject: [Python-Dev] peps: Update PEP 399 to include comments from
	python-dev.
References: <E1Q9lcn-0006zA-Q7@dinsdale.python.org>
	<20110413003136.3adb6af8@pitrou.net> <io2oga$jb8$1@dough.gmane.org>
	<20110413020717.421a5695@pitrou.net> <io38qa$p6k$1@dough.gmane.org>
Message-ID: <20110413135225.03d110cd@pitrou.net>

On Wed, 13 Apr 2011 06:28:58 +0200
Stefan Behnel <stefan_ml at behnel.de> wrote:
> 
> However, I think we are really discussing a theoretical issue here. All the 
> PEP is trying to achieve is to raise the bar for C code in the stdlib, for 
> exactly the reason that it can easily introduce subtle semantic differences 
> in comparison to generic Python code.

True. But then we're much better off without a formal requirement that
some people will start trying to enforce because they don't understand
that its metric is pointless.

> I think it would help to point out in the PEP that code that fails to touch 
> the theoretical 100% test coverage bar is not automatically excluded from 
> integration, but needs solid reasoning, review and testing in the wild in 
> order to be considered an equivalent alternative implementation.
> But then 
> again, this should actually be required anyway, even for code with an 
> exceedingly high test coverage.

I'm not sure what kind of "testing in the wild" you refer to. If you
mean that it should have e.g. been published on the Cheeseshop, I don't
think that's a useful requirement for an accelerator module.

Regards

Antoine.



From orsenthil at gmail.com  Wed Apr 13 14:52:03 2011
From: orsenthil at gmail.com (Senthil Kumaran)
Date: Wed, 13 Apr 2011 20:52:03 +0800
Subject: [Python-Dev] Mentor for Py3 benchmarking
In-Reply-To: <BANLkTinbZbdXqFs2nzgkmdC97uUg13W9zw@mail.gmail.com>
References: <BANLkTinbZbdXqFs2nzgkmdC97uUg13W9zw@mail.gmail.com>
Message-ID: <20110413125203.GB3200@kevin>

Hi Arc,

I think you should forward this to python-dev. (CCed)
There was a discussion on this over there, so someone should
definitely be interested.

On Tue, Apr 12, 2011 at 11:33:55AM -0400, Arc Riley wrote:
> We have a number of students who proposed to port PyPy's benchmarking suite to
> Python3 to run on speed.python.org, we don't have a mentor for these at the
> moment.
> 
> Would anyone here (pref previous GSoC mentor/student) like to take one of these
> on?
> 
> We have until Monday (4/18) to evaluate students, get patches/blogs/etc taken
> care of, and all mentors assigned. If there are people here who want to mentor,
> talk to either Tarek (for packaging) or Martin v. Löwis (for python-core). If
> you're an existing python-dev contributor we could especially use your help.

-- 
Senthil


From tjreedy at udel.edu  Wed Apr 13 17:36:54 2011
From: tjreedy at udel.edu (Terry Reedy)
Date: Wed, 13 Apr 2011 11:36:54 -0400
Subject: [Python-Dev] peps: Update PEP 399 to include comments from
	python-dev.
In-Reply-To: <20110413135225.03d110cd@pitrou.net>
References: <E1Q9lcn-0006zA-Q7@dinsdale.python.org>	<20110413003136.3adb6af8@pitrou.net>
	<io2oga$jb8$1@dough.gmane.org>	<20110413020717.421a5695@pitrou.net>
	<io38qa$p6k$1@dough.gmane.org> <20110413135225.03d110cd@pitrou.net>
Message-ID: <io4fum$lkm$1@dough.gmane.org>

On 4/13/2011 7:52 AM, Antoine Pitrou wrote:
> On Wed, 13 Apr 2011 06:28:58 +0200
> Stefan Behnel<stefan_ml at behnel.de>  wrote:

>> I think it would help to point out in the PEP that code that fails to touch
>> the theoretical 100% test coverage bar is not automatically excluded from
>> integration, but needs solid reasoning, review and testing in the wild in
>> order to be considered an equivalent alternative implementation.
>> But then
>> again, this should actually be required anyway, even for code with an
>> exceedingly high test coverage.
>
> I'm not sure what kind of "testing in the wild" you refer to. If you
> mean that it should have e.g. been published on the Cheeseshop, I don't
> think that's an useful requirement for an accelerator module.

The real testing in the wild will come after the accelerator is 
released. Is there any easy way for users to avoid the accelerator, to 
see if it is the source of a problem, short of editing the import in the 
.py file? test.support appears to jump through some hoops to do so.
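
For reference, the hoops look roughly like this (a sketch based on
test.support.import_fresh_module, using heapq/_heapq as the example
pair; whether that is convenient enough for ordinary users is another
question):

from test.support import import_fresh_module

# Pure Python implementation, with the C accelerator blocked.
py_heapq = import_fresh_module('heapq', blocked=['_heapq'])

# Implementation that is allowed to use the accelerator if present.
c_heapq = import_fresh_module('heapq', fresh=['_heapq'])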

-- 
Terry Jan Reedy


From raymond.hettinger at gmail.com  Wed Apr 13 18:46:38 2011
From: raymond.hettinger at gmail.com (Raymond Hettinger)
Date: Wed, 13 Apr 2011 09:46:38 -0700
Subject: [Python-Dev] peps: Update PEP 399 to include comments from
	python-dev.
In-Reply-To: <20110413135225.03d110cd@pitrou.net>
References: <E1Q9lcn-0006zA-Q7@dinsdale.python.org>
	<20110413003136.3adb6af8@pitrou.net> <io2oga$jb8$1@dough.gmane.org>
	<20110413020717.421a5695@pitrou.net> <io38qa$p6k$1@dough.gmane.org>
	<20110413135225.03d110cd@pitrou.net>
Message-ID: <36A9BA73-06C5-426F-9EAB-832A21550866@gmail.com>


On Apr 13, 2011, at 4:52 AM, Antoine Pitrou wrote:

> On Wed, 13 Apr 2011 06:28:58 +0200
> Stefan Behnel <stefan_ml at behnel.de> wrote:
>> 
>> However, I think we are really discussing a theoretical issue here. All the 
>> PEP is trying to achieve is to raise the bar for C code in the stdlib, for 
>> exactly the reason that it can easily introduce subtle semantic differences 
>> in comparison to generic Python code.
> 
> True. But then we're much better without a formal requirement that
> some people will start trying to require because they don't understand
> that its metric is pointless.

I concur.

For the most part, anyone converting from C-to-Python or Python-to-C
is already doing their best to make the two as similar as they can.
The PEP falls short because its coverage metric conflates 
the published API with its implementation details.  
The PEP seems confused about the role of white box testing
versus black box testing.
Nor does the PEP provide useful guidance to anyone working 
on a  non-trivial conversion such as decimal, OrderedDict, or threading.

If the underlying theme is "nothing written in C is good for PyPy",
perhaps the PEP should address whether we want any new
C modules at all.  That would be better than setting a formal
requirement that doesn't address any of the realities of building
two versions of a module and keeping them in sync.


Raymond

From rdmurray at bitdance.com  Wed Apr 13 19:00:40 2011
From: rdmurray at bitdance.com (R. David Murray)
Date: Wed, 13 Apr 2011 13:00:40 -0400
Subject: [Python-Dev] peps: Update PEP 399 to include comments from
	python-dev.
In-Reply-To: <io38qa$p6k$1@dough.gmane.org>
References: <E1Q9lcn-0006zA-Q7@dinsdale.python.org>
	<20110413003136.3adb6af8@pitrou.net> <io2oga$jb8$1@dough.gmane.org>
	<20110413020717.421a5695@pitrou.net> <io38qa$p6k$1@dough.gmane.org>
Message-ID: <20110413170110.798722500D4@mailhost.webabinitio.net>

Antoine Pitrou, 13.04.2011 02:07:
> On Tue, 12 Apr 2011 19:50:34 -0400
> Tres Seaver wrote:
>> Trying to accelerate existing code which doesn't have the coverage is
>> insane:  how can you know that the accelerator doesn't subtly change the
>> semantics without tests?
>
> Well, why do you think tests guarantee that the semantics are the same?
> Tests are not a magic bullet. "Covering" a code path doesn't ensure
> that every possible behaviour is accounted for.

When I suggested we add 100% branch coverage as a recommendation or
requirement to the PEP, I pointed out that it was a place to *start*.
Nobody is saying it guarantees the semantics are the same; that was the
whole point of replacing the statement about semantics with the statement
about test coverage.  When we find places where the two versions don't
match, we will have to (a) decide the compatibility issue[*] and (b) add
tests that enshrine the decision.

As Tres said, if I were *writing* an accelerator, I'd want to start
with 100% branch coverage just to have as good as practical a check on
my implementation as I could.  I'd also try to think of additional tests.

I'm doing this in the email package (increasing test coverage to 100% before rewriting
algorithms) even though I'm not doing C accelerators.  It just seems
like the sensible thing to do.  (You may think I'm really crazy, since
some of the tests needed to get to 100% branch coverage will be testing
lines of code that I'm removing... but those tests represent particular
edge cases and I want to know that those edge cases continue to pass
after I change the code.)

[*] Maybe the PEP needs to talk about the basis on which those decisions
will be made:  maintaining compatibility across Python implementations.
In other words, a CPython C accelerator can be viewed as *breaking
compatibility with standard Python* if it doesn't implement the documented
interface of the Python version of the module.  (My apologies if this
is in fact already discussed, I didn't reread the PEP to check.)  The
idea is to use the test suite as the check for this, adding tests as
necessary.

--
R. David Murray           http://www.bitdance.com

From rdmurray at bitdance.com  Wed Apr 13 19:04:26 2011
From: rdmurray at bitdance.com (R. David Murray)
Date: Wed, 13 Apr 2011 13:04:26 -0400
Subject: [Python-Dev] Pass possibly imcompatible options to distutil's
	ccompiler
In-Reply-To: <BANLkTikc3JZ3VqHjc_oHDzrZHNsyYrO86w@mail.gmail.com>
References: <BANLkTin1Lw-icaWEFSuTa+OUg5nKV37inw@mail.gmail.com>
	<BANLkTi=1h+E6ofr1Rs5Q=pZhXEeJ+nRP1w@mail.gmail.com>
	<BANLkTikc3JZ3VqHjc_oHDzrZHNsyYrO86w@mail.gmail.com>
Message-ID: <20110413170446.830FA2500D4@mailhost.webabinitio.net>

On Wed, 13 Apr 2011 12:25:59 +0900, David Cournapeau <cournape at gmail.com> wrote:
> On Tue, Apr 12, 2011 at 8:32 AM, Nick Coghlan <ncoghlan at gmail.com> wrote:
> > On Tue, Apr 12, 2011 at 7:41 AM, Lukas Lueg <lukas.lueg at googlemail.com> wrote:
> >> Any other ideas on how to solve this in a better way?
> >
> > Have you tried with distutils2? If it can't help you, it should really
> > be looked into before the packaging API is locked for 3.3.
> 
> distutils2 is almost identical to distutils as far as compilation goes,
> so I am not sure why it would help the OP.

As I read it, Nick's thought wasn't that distutils2 would help the OP,
but that the OP could help Distutils2 and the community by taking his
use case to the developers and making sure that that use case is
supported before the release of packaging in 3.3.

--
R. David Murray           http://www.bitdance.com

From g.brandl at gmx.net  Wed Apr 13 20:21:45 2011
From: g.brandl at gmx.net (Georg Brandl)
Date: Wed, 13 Apr 2011 20:21:45 +0200
Subject: [Python-Dev] cpython: Fix 64-bit safety issue in BZ2Compressor
	and BZ2Decompressor.
In-Reply-To: <E1Q9kmK-0002RJ-NW@dinsdale.python.org>
References: <E1Q9kmK-0002RJ-NW@dinsdale.python.org>
Message-ID: <io4pk1$krk$1@dough.gmane.org>

On 12.04.2011 23:05, nadeem.vawda wrote:
> http://hg.python.org/cpython/rev/0010cc5f22d4
> changeset:   69275:0010cc5f22d4
> user:        Nadeem Vawda <nadeem.vawda at gmail.com>
> date:        Tue Apr 12 23:02:42 2011 +0200
> summary:
>   Fix 64-bit safety issue in BZ2Compressor and BZ2Decompressor.
> 
> files:
>   Lib/test/test_bz2.py |  36 +++++++++++++++++++++++++++++++-
>   Modules/_bz2module.c |  33 +++++++++++++++++++++--------
>   2 files changed, 59 insertions(+), 10 deletions(-)
> 
> 
> diff --git a/Lib/test/test_bz2.py b/Lib/test/test_bz2.py
> --- a/Lib/test/test_bz2.py
> +++ b/Lib/test/test_bz2.py

Hi,

for reviewing and "hg log" purposes it would be good to include a bit
more information in the commit message, like where the safety issue
was and what its potential implications were.

(Of course that is different if there is an issue on the tracker
to refer to; the problem is usually explained clearly there.)

Georg


From g.brandl at gmx.net  Wed Apr 13 20:24:32 2011
From: g.brandl at gmx.net (Georg Brandl)
Date: Wed, 13 Apr 2011 20:24:32 +0200
Subject: [Python-Dev] cpython (merge default -> default): merge from
	push conflict.
In-Reply-To: <E1Q9p7U-00076A-M9@dinsdale.python.org>
References: <E1Q9p7U-00076A-M9@dinsdale.python.org>
Message-ID: <io4pp2$ldr$1@dough.gmane.org>

On 13.04.2011 03:43, senthil.kumaran wrote:
> http://hg.python.org/cpython/rev/a4d1a3e0f7bd
> changeset:   69306:a4d1a3e0f7bd
> parent:      69305:35b16d49c0b1
> parent:      69299:c8d075051e88
> user:        Senthil Kumaran <orsenthil at gmail.com>
> date:        Wed Apr 13 09:38:51 2011 +0800
> summary:
>   merge from push conflict.

Hi,

this message is not quite correct -- there is no conflict involved.
You're just merging two heads on the same branch in order to have
only one head in the master repo.

Georg


From nadeem.vawda at gmail.com  Wed Apr 13 20:45:44 2011
From: nadeem.vawda at gmail.com (Nadeem Vawda)
Date: Wed, 13 Apr 2011 20:45:44 +0200
Subject: [Python-Dev] cpython: Fix 64-bit safety issue in BZ2Compressor
 and BZ2Decompressor.
In-Reply-To: <io4pk1$krk$1@dough.gmane.org>
References: <E1Q9kmK-0002RJ-NW@dinsdale.python.org>
	<io4pk1$krk$1@dough.gmane.org>
Message-ID: <BANLkTikCNkkDz-5wiLgDV05=gAmFb7pZpA@mail.gmail.com>

Thanks for the feedback. I'll be sure to include more information in my
future commit messages.

Nadeem

From gzlist at googlemail.com  Thu Apr 14 01:23:57 2011
From: gzlist at googlemail.com (Martin (gzlist))
Date: Thu, 14 Apr 2011 00:23:57 +0100
Subject: [Python-Dev] Test cases not garbage collected after run
In-Reply-To: <4D9E1AA4.4020607@voidspace.org.uk>
References: <BANLkTimKhJVdNHYfCh9Hu5NLkFo1ie8x5A@mail.gmail.com>
	<4D9DEB19.10307@voidspace.org.uk>
	<BANLkTineVvCOJfbgL5-UqiGauSvQ0YYoOQ@mail.gmail.com>
	<4D9E1AA4.4020607@voidspace.org.uk>
Message-ID: <BANLkTimd=JpjsbhQe1NkCNs2fL9nZ9T3mg@mail.gmail.com>

On 07/04/2011, Michael Foord <fuzzyman at voidspace.org.uk> wrote:
> On 07/04/2011 20:18, Robert Collins wrote:
>>
>> Testtools did something to address this problem, but I forget what it
>> was offhand.

Some issues were worked around, but I don't remember any comprehensive solution.

> The proposed "fix" is to make test suite runs destructive, either
> replacing TestCase instances with None or pop'ing tests after they are
> run (the latter being what twisted Trial does). run-in-a-loop helpers
> could still repeatedly iterate over suites, just not call the suite.

Just pop-ing is unlikely to be sufficient in practice. The Bazaar test
suite (which uses testtools nowadays) has code that pops during the
run, but still keeps every case alive for the duration. That trebles
the runtime on my memory-constrained box unless I add a hack that
clears the __dict__ of every testcase after it's run.
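
The hack is roughly along these lines (a simplified sketch rather than
the actual Bazaar code; a real version would probably skip failed tests
so their state stays inspectable):

import unittest

class DictClearingResult(unittest.TestResult):
    # Drop per-test state once a test has finished, so the instance no
    # longer pins fixtures and other large attributes in memory even if
    # the suite keeps a reference to the TestCase object itself.
    def stopTest(self, test):
        super().stopTest(test)
        test.__dict__.clear()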

Martin

From orsenthil at gmail.com  Thu Apr 14 01:58:56 2011
From: orsenthil at gmail.com (Senthil Kumaran)
Date: Thu, 14 Apr 2011 07:58:56 +0800
Subject: [Python-Dev] cpython (merge default -> default): merge from
 push conflict.
In-Reply-To: <io4pp2$ldr$1@dough.gmane.org>
References: <E1Q9p7U-00076A-M9@dinsdale.python.org>
	<io4pp2$ldr$1@dough.gmane.org>
Message-ID: <20110413235856.GA2581@kevin>

On Wed, Apr 13, 2011 at 08:24:32PM +0200, Georg Brandl wrote:
> > summary:
> >   merge from push conflict.
> 
> this message is not quite correct -- there is no conflict involved.
> You're just merging two heads on the same branch in order to have
> only one head in the master repo.

Okay, got it. I should have just said "merge". (I probably typed "push
conflict" because the push was not allowed, as someone had already pushed
to the repo in quick succession.) It is just a merge.


Thanks,
Senthil

From ncoghlan at gmail.com  Thu Apr 14 05:05:42 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 14 Apr 2011 13:05:42 +1000
Subject: [Python-Dev] Pass possibly imcompatible options to distutil's
	ccompiler
In-Reply-To: <20110413170446.830FA2500D4@mailhost.webabinitio.net>
References: <BANLkTin1Lw-icaWEFSuTa+OUg5nKV37inw@mail.gmail.com>
	<BANLkTi=1h+E6ofr1Rs5Q=pZhXEeJ+nRP1w@mail.gmail.com>
	<BANLkTikc3JZ3VqHjc_oHDzrZHNsyYrO86w@mail.gmail.com>
	<20110413170446.830FA2500D4@mailhost.webabinitio.net>
Message-ID: <BANLkTinpQzGpRgfmkfJ0pu6BgVQJ0QSOqg@mail.gmail.com>

On Thu, Apr 14, 2011 at 3:04 AM, R. David Murray <rdmurray at bitdance.com> wrote:
> As I read it, Nick's thought wasn't that distutils2 would help the OP,
> but that the OP could help Distutils2 and the community by taking his
> use case to the developers and making sure that that use case is
> supported before the release of packaging in 3.3.

A little of both, actually. (I don't know either distutils or
distutils2 well enough to know precisely what the latter handles
better; I just know that it is designed to be easier to extend without
being as fragile as custom commands in distutils.)

Cheers,
Nick.

-- 
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From john at arbash-meinel.com  Thu Apr 14 13:12:00 2011
From: john at arbash-meinel.com (John Arbash Meinel)
Date: Thu, 14 Apr 2011 13:12:00 +0200
Subject: [Python-Dev] Test cases not garbage collected after run
In-Reply-To: <BANLkTimd=JpjsbhQe1NkCNs2fL9nZ9T3mg@mail.gmail.com>
References: <BANLkTimKhJVdNHYfCh9Hu5NLkFo1ie8x5A@mail.gmail.com>	<4D9DEB19.10307@voidspace.org.uk>	<BANLkTineVvCOJfbgL5-UqiGauSvQ0YYoOQ@mail.gmail.com>	<4D9E1AA4.4020607@voidspace.org.uk>
	<BANLkTimd=JpjsbhQe1NkCNs2fL9nZ9T3mg@mail.gmail.com>
Message-ID: <4DA6D680.9030707@arbash-meinel.com>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On 4/14/2011 1:23 AM, Martin (gzlist) wrote:
> On 07/04/2011, Michael Foord <fuzzyman at voidspace.org.uk> wrote:
>> On 07/04/2011 20:18, Robert Collins wrote:
>>>
>>> Testtools did something to address this problem, but I forget what it
>>> was offhand.
> 
> Some issues were worked around, but I don't remember any comprehensive solution.
> 
>> The proposed "fix" is to make test suite runs destructive, either
>> replacing TestCase instances with None or pop'ing tests after they are
>> run (the latter being what twisted Trial does). run-in-a-loop helpers
>> could still repeatedly iterate over suites, just not call the suite.
> 
> Just pop-ing is unlikely to be sufficient in practice. The Bazaar test
> suite (which uses testtools nowadays) has code that pops during the
> run, but still keeps every case alive for the duration. That trebles
> the runtime on my memory-constrained box unless I add a hack that
> clears the __dict__ of every testcase after it's run.
> 
> Martin

I think we would be ok with merging the __dict__ clearing as long as it
doesn't do it for failed tests, etc.

John
=:->

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.9 (Cygwin)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iEYEARECAAYFAk2m1oAACgkQJdeBCYSNAAPHmwCfQSNW8Pk7V7qx6Jl/gYthFVxE
p0cAn0XRvRR+Rqb+yiJnaVEzUOBdwOpf
=19YJ
-----END PGP SIGNATURE-----

From fuzzyman at voidspace.org.uk  Thu Apr 14 13:34:55 2011
From: fuzzyman at voidspace.org.uk (Michael Foord)
Date: Thu, 14 Apr 2011 12:34:55 +0100
Subject: [Python-Dev] Test cases not garbage collected after run
In-Reply-To: <BANLkTimd=JpjsbhQe1NkCNs2fL9nZ9T3mg@mail.gmail.com>
References: <BANLkTimKhJVdNHYfCh9Hu5NLkFo1ie8x5A@mail.gmail.com>	<4D9DEB19.10307@voidspace.org.uk>	<BANLkTineVvCOJfbgL5-UqiGauSvQ0YYoOQ@mail.gmail.com>	<4D9E1AA4.4020607@voidspace.org.uk>
	<BANLkTimd=JpjsbhQe1NkCNs2fL9nZ9T3mg@mail.gmail.com>
Message-ID: <4DA6DBDF.6000202@voidspace.org.uk>

On 14/04/2011 00:23, Martin (gzlist) wrote:
> On 07/04/2011, Michael Foord<fuzzyman at voidspace.org.uk>  wrote:
>> On 07/04/2011 20:18, Robert Collins wrote:
>>> Testtools did something to address this problem, but I forget what it
>>> was offhand.
> Some issues were worked around, but I don't remember any comprehensive solution.
>
>> The proposed "fix" is to make test suite runs destructive, either
>> replacing TestCase instances with None or pop'ing tests after they are
>> run (the latter being what twisted Trial does). run-in-a-loop helpers
>> could still repeatedly iterate over suites, just not call the suite.
> Just pop-ing is unlikely to be sufficient in practice. The Bazaar test
> suite (which uses testtools nowadays) has code that pops during the
> run, but still keeps every case alive for the duration. That trebles
> the runtime on my memory-constrained box unless I add a hack that
> clears the __dict__ of every testcase after it's run.
I'd be interested to know what is keeping the tests alive even when the 
test suite isn't. As far as I know there is nothing else in unittest 
that would do that.

It's either a general problem that unittest can fix, or it is a problem 
*caused* by the bazaar test suite and should be fixed there. Bazaar does 
some funky stuff copying tests to run them with different backends, so 
it is possible that this is the cause of the problem (and it isn't a 
general problem).

All the best,

Michael Foord
> Martin


-- 
http://www.voidspace.org.uk/

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing http://www.sqlite.org/different.html


From ricardokirkner at gmail.com  Thu Apr 14 15:09:50 2011
From: ricardokirkner at gmail.com (Ricardo Kirkner)
Date: Thu, 14 Apr 2011 10:09:50 -0300
Subject: [Python-Dev] python and super
Message-ID: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>

Hi all,

I recently stumbled upon an issue with a class in the mro chain not
calling super, therefore breaking the chain (ie, further base classes
along the chain didn't get called).
I understand it is currently a requirement that all classes that are
part of the mro chain behave and always call super. My question is,
shouldn't/wouldn't it be better,
if python took ownership of that part, and ensured all classes get
called, even if some class misbehaved?
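
For the record, a minimal sketch of the kind of breakage I mean (the
classes are made up for illustration):

class A:
    def __init__(self):
        print('A')
        super().__init__()

class B:
    def __init__(self):
        print('B')  # does not call super(), so the chain stops here

class C:
    def __init__(self):
        print('C')
        super().__init__()

class D(A, B, C):
    def __init__(self):
        print('D')
        super().__init__()

D()  # prints D, A, B; C.__init__ is never reached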

For example, if using a stack-like structure, pushing super calls and
popping until the stack was empty, couldn't this restriction be
removed?

Thanks,
Ricardo

From benjamin at python.org  Thu Apr 14 15:15:10 2011
From: benjamin at python.org (Benjamin Peterson)
Date: Thu, 14 Apr 2011 08:15:10 -0500
Subject: [Python-Dev] python and super
In-Reply-To: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>
References: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>
Message-ID: <BANLkTinoyS84jpdUZnfib9TcNP_F3PTtMg@mail.gmail.com>

2011/4/14 Ricardo Kirkner <ricardokirkner at gmail.com>:
> Hi all,
>
> I recently stumbled upon an issue with a class in the mro chain not
> calling super, therefore breaking the chain (ie, further base classes
> along the chain didn't get called).
> I understand it is currently a requirement that all classes that are
> part of the mro chain behave and always call super. My question is,
> shouldn't/wouldn't it be better,
> if python took ownership of that part, and ensured all classes get
> called, even if some class misbehaved?
>
> For example, if using a stack-like structure, pushing super calls and
> popping until the stack was empty, couldn't this restriction be
> removed?

No. See line 2 of the Zen of Python.



-- 
Regards,
Benjamin

From solipsis at pitrou.net  Thu Apr 14 15:23:38 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Thu, 14 Apr 2011 15:23:38 +0200
Subject: [Python-Dev] python and super
References: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>
	<BANLkTinoyS84jpdUZnfib9TcNP_F3PTtMg@mail.gmail.com>
Message-ID: <20110414152338.4d5133db@pitrou.net>

On Thu, 14 Apr 2011 08:15:10 -0500
Benjamin Peterson <benjamin at python.org> wrote:
> 2011/4/14 Ricardo Kirkner <ricardokirkner at gmail.com>:
> > Hi all,
> >
> > I recently stumbled upon an issue with a class in the mro chain not
> > calling super, therefore breaking the chain (ie, further base classes
> > along the chain didn't get called).
> > I understand it is currently a requirement that all classes that are
> > part of the mro chain behave and always call super. My question is,
> > shouldn't/wouldn't it be better,
> > if python took ownership of that part, and ensured all classes get
> > called, even if some class misbehaved?
> >
> > For example, if using a stack-like structure, pushing super calls and
> > popping until the stack was empty, couldn't this restriction be
> > removed?
> 
> No. See line 2 of the Zen of Python.

You could have quoted it explicitly :)
FWIW, line 2 is:
    Explicit is better than implicit.

Regards

Antoine.



From g.rodola at gmail.com  Thu Apr 14 15:36:56 2011
From: g.rodola at gmail.com (=?ISO-8859-1?Q?Giampaolo_Rodol=E0?=)
Date: Thu, 14 Apr 2011 15:36:56 +0200
Subject: [Python-Dev] python and super
In-Reply-To: <20110414152338.4d5133db@pitrou.net>
References: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>
	<BANLkTinoyS84jpdUZnfib9TcNP_F3PTtMg@mail.gmail.com>
	<20110414152338.4d5133db@pitrou.net>
Message-ID: <BANLkTinw5F9FvQsPz19x8Tj2kZKEzqGiSA@mail.gmail.com>

:-)

2011/4/14 Antoine Pitrou <solipsis at pitrou.net>

> On Thu, 14 Apr 2011 08:15:10 -0500
> Benjamin Peterson <benjamin at python.org> wrote:
> > 2011/4/14 Ricardo Kirkner <ricardokirkner at gmail.com>:
> > > Hi all,
> > >
> > > I recently stumbled upon an issue with a class in the mro chain not
> > > calling super, therefore breaking the chain (ie, further base classes
> > > along the chain didn't get called).
> > > I understand it is currently a requirement that all classes that are
> > > part of the mro chain behave and always call super. My question is,
> > > shouldn't/wouldn't it be better,
> > > if python took ownership of that part, and ensured all classes get
> > > called, even if some class misbehaved?
> > >
> > > For example, if using a stack-like structure, pushing super calls and
> > > popping until the stack was empty, couldn't this restriction be
> > > removed?
> >
> > No. See line 2 of the Zen of Python.
>
> You could have quoted it explicitly :)
> FWIW, line 2 is:
>    Explicit is better than implicit.
>
> Regards
>
> Antoine.
>
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/g.rodola%40gmail.com
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110414/e0d1fbbc/attachment.html>

From steve at pearwood.info  Thu Apr 14 15:43:52 2011
From: steve at pearwood.info (Steven D'Aprano)
Date: Thu, 14 Apr 2011 23:43:52 +1000
Subject: [Python-Dev] python and super
In-Reply-To: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>
References: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>
Message-ID: <4DA6FA18.708@pearwood.info>

Ricardo Kirkner wrote:
> Hi all,
> 
> I recently stumbled upon an issue with a class in the mro chain not
> calling super, therefore breaking the chain (ie, further base classes
> along the chain didn't get called).
> I understand it is currently a requirement that all classes that are
> part of the mro chain behave and always call super. My question is,
> shouldn't/wouldn't it be better,
> if python took ownership of that part, and ensured all classes get
> called, even if some class misbehaved?

Consider the difference between extending the method and replacing it. 
(I've always known that as "overloading" and "overriding", but the 
terminology varies.) If Python automagically always called super(), how 
would you replace a method?

For that matter, at which point would you automagically call super()? At 
the start of the overloaded method, before the subclass code runs? At 
the end, after the subclass code? Somewhere in the middle?

class Spam(Ham):
     def method(self):
         # Overload method.
         super().method()  # at the start of the method?
         do_stuff()
         super().method()  # in the middle of the method?
         do_more_stuff()
         super().method()  # or at the end of the overloaded method?


What arguments should be passed? What do you do with the result?

If you can think of a way for Python to automagically tell when to call 
super(), what arguments to pass to it, and what to do with the result, 
your crystal ball is better than mine.


-- 
Steven


From ronaldoussoren at mac.com  Thu Apr 14 16:18:22 2011
From: ronaldoussoren at mac.com (Ronald Oussoren)
Date: Thu, 14 Apr 2011 16:18:22 +0200
Subject: [Python-Dev] python and super
In-Reply-To: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>
References: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>
Message-ID: <70EF2C52-1D92-4351-884E-52AF76BAAC6D@mac.com>


On 14 Apr, 2011, at 15:09, Ricardo Kirkner wrote:

> Hi all,
> 
> I recently stumbled upon an issue with a class in the mro chain not
> calling super, therefore breaking the chain (ie, further base classes
> along the chain didn't get called).
> I understand it is currently a requirement that all classes that are
> part of the mro chain behave and always call super. My question is,
> shouldn't/wouldn't it be better,
> if python took ownership of that part, and ensured all classes get
> called, even if some class misbehaved?

Not calling a method on super isn't  necessarily misbehavior.  It would be odd to not call super in __init__, but for other methods not calling the superclass implementation is fairly common.

Ronald


From fuzzyman at voidspace.org.uk  Thu Apr 14 16:55:06 2011
From: fuzzyman at voidspace.org.uk (Michael Foord)
Date: Thu, 14 Apr 2011 15:55:06 +0100
Subject: [Python-Dev] python and super
In-Reply-To: <70EF2C52-1D92-4351-884E-52AF76BAAC6D@mac.com>
References: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>
	<70EF2C52-1D92-4351-884E-52AF76BAAC6D@mac.com>
Message-ID: <4DA70ACA.4070204@voidspace.org.uk>

On 14/04/2011 15:18, Ronald Oussoren wrote:
> On 14 Apr, 2011, at 15:09, Ricardo Kirkner wrote:
>
>> Hi all,
>>
>> I recently stumbled upon an issue with a class in the mro chain not
>> calling super, therefore breaking the chain (ie, further base classes
>> along the chain didn't get called).
>> I understand it is currently a requirement that all classes that are
>> part of the mro chain behave and always call super. My question is,
>> shouldn't/wouldn't it be better,
>> if python took ownership of that part, and ensured all classes get
>> called, even if some class misbehaved?
> Not calling a method on super isn't  necessarily misbehavior.  It would be odd to not call super in __init__, but for other methods not calling the superclass implementation is fairly common.
>

Right, but where you have an inheritance chain in which all the classes 
call super except one, you can get breakage. This is a problem 
where you want to use multiple inheritance but a parent class of *one* 
of the classes doesn't call super. Not only do that class's own parents 
not get called - the chain stops, so other methods (in 
another branch of the inheritance tree) also don't get called. And if 
the base classes are not all under your control there may be no fix - 
except possibly monkey patching.

Ricardo isn't suggesting that Python should always call super for you, 
but when you *start* the chain by calling super then Python could ensure 
that all the methods are called for you. If an individual method doesn't 
call super then a theoretical implementation could skip the parents' 
methods (unless another child calls super).

All the best,

Michael Foord



> Ronald
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk


-- 
http://www.voidspace.org.uk/

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing http://www.sqlite.org/different.html


From lac at openend.se  Thu Apr 14 17:02:20 2011
From: lac at openend.se (Laura Creighton)
Date: Thu, 14 Apr 2011 17:02:20 +0200
Subject: [Python-Dev] python and super
In-Reply-To: Message from Michael Foord <fuzzyman@voidspace.org.uk> 
	of "Thu, 14 Apr 2011 15:55:06 BST." <4DA70ACA.4070204@voidspace.org.uk> 
References: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>
	<70EF2C52-1D92-4351-884E-52AF76BAAC6D@mac.com>
	<4DA70ACA.4070204@voidspace.org.uk> 
Message-ID: <201104141502.p3EF2KuK005728@theraft.openend.se>

I think that if you add this, people will start relying on it.

Laura


From fuzzyman at voidspace.org.uk  Thu Apr 14 17:04:21 2011
From: fuzzyman at voidspace.org.uk (Michael Foord)
Date: Thu, 14 Apr 2011 16:04:21 +0100
Subject: [Python-Dev] python and super
In-Reply-To: <201104141502.p3EF2KuK005728@theraft.openend.se>
References: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>
	<70EF2C52-1D92-4351-884E-52AF76BAAC6D@mac.com>
	<4DA70ACA.4070204@voidspace.org.uk>
	<201104141502.p3EF2KuK005728@theraft.openend.se>
Message-ID: <4DA70CF5.6010702@voidspace.org.uk>

On 14/04/2011 16:02, Laura Creighton wrote:
> I think that if you add this, people will start relying on it.
>

And the specific problem with that would be?

Michael

> Laura


-- 
http://www.voidspace.org.uk/

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing http://www.sqlite.org/different.html


From pje at telecommunity.com  Thu Apr 14 17:34:52 2011
From: pje at telecommunity.com (P.J. Eby)
Date: Thu, 14 Apr 2011 11:34:52 -0400
Subject: [Python-Dev] python and super
In-Reply-To: <4DA70ACA.4070204@voidspace.org.uk>
References: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>
	<70EF2C52-1D92-4351-884E-52AF76BAAC6D@mac.com>
	<4DA70ACA.4070204@voidspace.org.uk>
Message-ID: <20110414153503.F125B3A4063@sparrow.telecommunity.com>

At 03:55 PM 4/14/2011 +0100, Michael Foord wrote:
>Ricardo isn't suggesting that Python should always call super for 
>you, but when you *start* the chain by calling super then Python 
>could ensure that all the methods are called for you. If an 
>individual method doesn't call super then a theoretical 
>implementation could skip the parents
>methods (unless another child calls super).

That would break classes that deliberately don't call super.  I can 
think of examples in my own code that would break, especially in 
__init__() cases.

It's perfectly sensible and useful for there to be classes that 
intentionally fail to call super(), and yet have a subclass that 
wants to use super().  So, this change would expose an internal 
implementation detail of a class to its subclasses, and make "fragile 
base class" problems worse.  (i.e., where an internal change to a 
base class breaks a previously-working subclass).


From fuzzyman at voidspace.org.uk  Thu Apr 14 17:37:38 2011
From: fuzzyman at voidspace.org.uk (Michael Foord)
Date: Thu, 14 Apr 2011 16:37:38 +0100
Subject: [Python-Dev] python and super
In-Reply-To: <20110414153503.F125B3A4063@sparrow.telecommunity.com>
References: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>
	<70EF2C52-1D92-4351-884E-52AF76BAAC6D@mac.com>
	<4DA70ACA.4070204@voidspace.org.uk>
	<20110414153503.F125B3A4063@sparrow.telecommunity.com>
Message-ID: <4DA714C2.7000006@voidspace.org.uk>

On 14/04/2011 16:34, P.J. Eby wrote:
> At 03:55 PM 4/14/2011 +0100, Michael Foord wrote:
>> Ricardo isn't suggesting that Python should always call super for 
>> you, but when you *start* the chain by calling super then Python 
>> could ensure that all the methods are called for you. If an 
>> individual method doesn't call super then a theoretical 
>> implementation could skip the parents
>> methods (unless another child calls super).
>
> That would break classes that deliberately don't call super.  I can 
> think of examples in my own code that would break, especially in 
> __init__() cases.
>
> It's perfectly sensible and useful for there to be classes that 
> intentionally fail to call super(), and yet have a subclass that wants 
> to use super().  So, this change would expose an internal 
> implementation detail of a class to its subclasses, and make "fragile 
> base class" problems worse.  (i.e., where an internal change to a base 
> class breaks a previously-working subclass).
It shouldn't do. What I was suggesting is that a method not calling 
super shouldn't stop a *sibling* method being called, but could still 
prevent the *parent* method being called.

Michael

-- 
http://www.voidspace.org.uk/

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing http://www.sqlite.org/different.html


From ricardokirkner at gmail.com  Thu Apr 14 17:48:29 2011
From: ricardokirkner at gmail.com (Ricardo Kirkner)
Date: Thu, 14 Apr 2011 12:48:29 -0300
Subject: [Python-Dev] python and super
In-Reply-To: <4DA714C2.7000006@voidspace.org.uk>
References: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>
	<70EF2C52-1D92-4351-884E-52AF76BAAC6D@mac.com>
	<4DA70ACA.4070204@voidspace.org.uk>
	<20110414153503.F125B3A4063@sparrow.telecommunity.com>
	<4DA714C2.7000006@voidspace.org.uk>
Message-ID: <BANLkTinOkFy49DqAFjt5N1D-UWxVg+E+Gw@mail.gmail.com>

Exactly what Michael said. Stopping the chain going upwards is one
thing. Stopping it going sideways is another.

On Thu, Apr 14, 2011 at 12:37 PM, Michael Foord
<fuzzyman at voidspace.org.uk> wrote:
> On 14/04/2011 16:34, P.J. Eby wrote:
>>
>> At 03:55 PM 4/14/2011 +0100, Michael Foord wrote:
>>>
>>> Ricardo isn't suggesting that Python should always call super for you,
>>> but when you *start* the chain by calling super then Python could ensure
>>> that all the methods are called for you. If an individual method doesn't
>>> call super then a theoretical implementation could skip the parents
>>> methods (unless another child calls super).
>>
>> That would break classes that deliberately don't call super. I can think
>> of examples in my own code that would break, especially in __init__() cases.
>>
>> It's perfectly sensible and useful for there to be classes that
>> intentionally fail to call super(), and yet have a subclass that wants to
>> use super(). So, this change would expose an internal implementation detail
>> of a class to its subclasses, and make "fragile base class" problems worse.
>> (i.e., where an internal change to a base class breaks a previously-working
>> subclass).
>
> It shouldn't do. What I was suggesting is that a method not calling super
> shouldn't stop a *sibling* method being called, but could still prevent the
> *parent* method being called.
>
> Michael
>
> --
> http://www.voidspace.org.uk/
>
> May you do good and not evil
> May you find forgiveness for yourself and forgive others
> May you share freely, never taking more than you give.
> -- the sqlite blessing http://www.sqlite.org/different.html
>
>

From urban.dani+py at gmail.com  Thu Apr 14 17:56:20 2011
From: urban.dani+py at gmail.com (Daniel Urban)
Date: Thu, 14 Apr 2011 17:56:20 +0200
Subject: [Python-Dev] python and super
In-Reply-To: <70EF2C52-1D92-4351-884E-52AF76BAAC6D@mac.com>
References: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>
	<70EF2C52-1D92-4351-884E-52AF76BAAC6D@mac.com>
Message-ID: <BANLkTi=qnc8cySPVs-d4wf-iQM4QxN+ifQ@mail.gmail.com>

On Thu, Apr 14, 2011 at 16:18, Ronald Oussoren <ronaldoussoren at mac.com> wrote:
> It would be odd to not call super in __init__, but for other methods not calling the superclass implementation is fairly common.

Yes, it is odd that, for example, list.__init__ doesn't call super :-)
(http://bugs.python.org/issue8733)

Daniel

From raymond.hettinger at gmail.com  Thu Apr 14 18:02:13 2011
From: raymond.hettinger at gmail.com (Raymond Hettinger)
Date: Thu, 14 Apr 2011 09:02:13 -0700
Subject: [Python-Dev] python and super
In-Reply-To: <20110414153503.F125B3A4063@sparrow.telecommunity.com>
References: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>
	<70EF2C52-1D92-4351-884E-52AF76BAAC6D@mac.com>
	<4DA70ACA.4070204@voidspace.org.uk>
	<20110414153503.F125B3A4063@sparrow.telecommunity.com>
Message-ID: <825E6CD5-8673-463B-92AE-59677C327C0A@gmail.com>


On Apr 14, 2011, at 8:34 AM, P.J. Eby wrote:

> At 03:55 PM 4/14/2011 +0100, Michael Foord wrote:
>> Ricardo isn't suggesting that Python should always call super for you, but when you *start* the chain by calling super then Python could ensure that all the methods are called for you. If an individual method doesn't call super then a theoretical implementation could skip the parents
>> methods (unless another child calls super).
> 
> That would break classes that deliberately don't call super.  I can think of examples in my own code that would break, especially in __init__() cases.
> 
> It's perfectly sensible and useful for there to be classes that intentionally fail to call super(), and yet have a subclass that wants to use super().  So, this change would expose an internal implementation detail of a class to its subclasses, and make "fragile base class" problems worse.  (i.e., where an internal change to a base class breaks a previously-working subclass).

I agree.  Better for someone to submit a recipe for a variant of super and see if there is any uptake.


Raymond


From fuzzyman at voidspace.org.uk  Thu Apr 14 18:10:11 2011
From: fuzzyman at voidspace.org.uk (Michael Foord)
Date: Thu, 14 Apr 2011 17:10:11 +0100
Subject: [Python-Dev] python and super
In-Reply-To: <825E6CD5-8673-463B-92AE-59677C327C0A@gmail.com>
References: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>
	<70EF2C52-1D92-4351-884E-52AF76BAAC6D@mac.com>
	<4DA70ACA.4070204@voidspace.org.uk>
	<20110414153503.F125B3A4063@sparrow.telecommunity.com>
	<825E6CD5-8673-463B-92AE-59677C327C0A@gmail.com>
Message-ID: <4DA71C63.3030809@voidspace.org.uk>

On 14/04/2011 17:02, Raymond Hettinger wrote:
> On Apr 14, 2011, at 8:34 AM, P.J. Eby wrote:
>
>> At 03:55 PM 4/14/2011 +0100, Michael Foord wrote:
>>> Ricardo isn't suggesting that Python should always call super for you, but when you *start* the chain by calling super then Python could ensure that all the methods are called for you. If an individual method doesn't call super then a theoretical implementation could skip the parents
>>> methods (unless another child calls super).
>> That would break classes that deliberately don't call super.  I can think of examples in my own code that would break, especially in __init__() cases.
>>
>> It's perfectly sensible and useful for there to be classes that intentionally fail to call super(), and yet have a subclass that wants to use super().  So, this change would expose an internal implementation detail of a class to its subclasses, and make "fragile base class" problems worse.  (i.e., where an internal change to a base class breaks a previously-working subclass).
> I agree.  Better for someone to submit a recipe for a variant of super and see if there is any uptake.

In Python 3 super is treated specially by the compiler, so an 
alternative implementation that behaves similarly to the built-in one 
modulo this change is not possible.
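
A quick illustration of that compiler magic, with made-up classes: the
zero-argument form only works because the compiler notices the bare
name "super" in the method body and injects a __class__ closure cell.

class Base:
    def greet(self):
        return 'base'

class Child(Base):
    def greet(self):
        # The bare name 'super' makes the compiler add a __class__ cell
        # to this function; zero-argument super() relies on it.
        return super().greet() + ' via child'

print(Child().greet())                           # base via child
print(Child.greet.__closure__[0].cell_contents)  # <class '__main__.Child'>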

Two use cases for the suggested alternative behaviour have been 
presented. What is the use case for a method wanting to prevent its 
*sibling* methods in a multiple inheritance situation from being called?

I believe the use case Phillip (and others) have presented is for 
methods preventing their *parent* methods being called.

All the best,

Michael Foord

>
> Raymond
>


-- 
http://www.voidspace.org.uk/

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing http://www.sqlite.org/different.html


From ronaldoussoren at mac.com  Thu Apr 14 18:59:57 2011
From: ronaldoussoren at mac.com (Ronald Oussoren)
Date: Thu, 14 Apr 2011 18:59:57 +0200
Subject: [Python-Dev] python and super
In-Reply-To: <4DA71C63.3030809@voidspace.org.uk>
References: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>
	<70EF2C52-1D92-4351-884E-52AF76BAAC6D@mac.com>
	<4DA70ACA.4070204@voidspace.org.uk>
	<20110414153503.F125B3A4063@sparrow.telecommunity.com>
	<825E6CD5-8673-463B-92AE-59677C327C0A@gmail.com>
	<4DA71C63.3030809@voidspace.org.uk>
Message-ID: <8A4A58EF-70F7-4F2F-8564-AE8611713986@mac.com>


On 14 Apr, 2011, at 18:10, Michael Foord wrote:

> On 14/04/2011 17:02, Raymond Hettinger wrote:
>> On Apr 14, 2011, at 8:34 AM, P.J. Eby wrote:
>> 
>>> At 03:55 PM 4/14/2011 +0100, Michael Foord wrote:
>>>> Ricardo isn't suggesting that Python should always call super for you, but when you *start* the chain by calling super then Python could ensure that all the methods are called for you. If an individual method doesn't call super then a theoretical implementation could skip the parents
>>>> methods (unless another child calls super).
>>> That would break classes that deliberately don't call super.  I can think of examples in my own code that would break, especially in __init__() cases.
>>> 
>>> It's perfectly sensible and useful for there to be classes that intentionally fail to call super(), and yet have a subclass that wants to use super().  So, this change would expose an internal implementation detail of a class to its subclasses, and make "fragile base class" problems worse.  (i.e., where an internal change to a base class breaks a previously-working subclass).
>> I agree.  Better for someone to submit a recipe for a variant of super and see if there is any uptake.
> 
> In Python 3 super is treated specially by the compiler, so an alternative implementation that behaves similarly to the built-in one modulo this change is not possible.
> 
> Two use cases for the suggested alternative behaviour have been presented. What is the use case for a method not wanting to prevent its *sibling* methods in a multiple inheritance situation being called?
> 
> I believe the use case Phillip (and others) have presented is for methods preventing their *parent* methods being called.

What would the semantics be of a super that intentionally calls all siblings? In particular, what is the return value of such a call? The implementation can't know how to combine the implementations in the inheritance chain and should refuse the temptation to guess.

Ronald




From glyph at twistedmatrix.com  Thu Apr 14 20:16:57 2011
From: glyph at twistedmatrix.com (Glyph Lefkowitz)
Date: Thu, 14 Apr 2011 14:16:57 -0400
Subject: [Python-Dev] python and super
In-Reply-To: <8A4A58EF-70F7-4F2F-8564-AE8611713986@mac.com>
References: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>
	<70EF2C52-1D92-4351-884E-52AF76BAAC6D@mac.com>
	<4DA70ACA.4070204@voidspace.org.uk>
	<20110414153503.F125B3A4063@sparrow.telecommunity.com>
	<825E6CD5-8673-463B-92AE-59677C327C0A@gmail.com>
	<4DA71C63.3030809@voidspace.org.uk>
	<8A4A58EF-70F7-4F2F-8564-AE8611713986@mac.com>
Message-ID: <41DC4006-9621-4403-A1E6-319B87B4EBF6@twistedmatrix.com>

On Apr 14, 2011, at 12:59 PM, Ronald Oussoren wrote:

> What would the semantics be of a super that (...)

I think it's long past time that this moved to python-ideas, if you don't mind.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110414/d225865c/attachment.html>

From brett at python.org  Thu Apr 14 20:53:56 2011
From: brett at python.org (Brett Cannon)
Date: Thu, 14 Apr 2011 11:53:56 -0700
Subject: [Python-Dev] [Python-checkins] cpython (merge 3.2 -> default):
	merge from 3.2.
In-Reply-To: <E1QAEzE-0005M1-D0@dinsdale.python.org>
References: <E1QAEzE-0005M1-D0@dinsdale.python.org>
Message-ID: <BANLkTi=ZEUYGq1if_SdzR+N-JHbd2H-r1Q@mail.gmail.com>

I think you have the wrong issue #; that one has to do with string
exceptions.

On Wed, Apr 13, 2011 at 22:21, senthil.kumaran
<python-checkins at python.org> wrote:

> http://hg.python.org/cpython/rev/7563f10275a2
> changeset:   69350:7563f10275a2
> parent:      69344:1f767f834e67
> parent:      69349:37d1b749eebb
> user:        Senthil Kumaran <orsenthil at gmail.com>
> date:        Thu Apr 14 13:20:41 2011 +0800
> summary:
>  merge from 3.2.
>
> Fix closes Issue1147.
>
> files:
>  Lib/nturl2path.py       |   5 ++++-
>  Lib/test/test_urllib.py |  18 ++++++++++++++++++
>  Misc/NEWS               |   3 +++
>  3 files changed, 25 insertions(+), 1 deletions(-)
>
>
> diff --git a/Lib/nturl2path.py b/Lib/nturl2path.py
> --- a/Lib/nturl2path.py
> +++ b/Lib/nturl2path.py
> @@ -27,9 +27,12 @@
>     drive = comp[0][-1].upper()
>     components = comp[1].split('/')
>     path = drive + ':'
> -    for  comp in components:
> +    for comp in components:
>         if comp:
>             path = path + '\\' + urllib.parse.unquote(comp)
> +    # Issue #11474 - handing url such as |c/|
> +    if path.endswith(':') and url.endswith('/'):
> +        path += '\\'
>     return path
>
>  def pathname2url(p):
> diff --git a/Lib/test/test_urllib.py b/Lib/test/test_urllib.py
> --- a/Lib/test/test_urllib.py
> +++ b/Lib/test/test_urllib.py
> @@ -9,6 +9,7 @@
>  import unittest
>  from test import support
>  import os
> +import sys
>  import tempfile
>
>  def hexescape(char):
> @@ -1021,6 +1022,23 @@
>                          "url2pathname() failed; %s != %s" %
>                          (expect, result))
>
> +    @unittest.skipUnless(sys.platform == 'win32',
> +                         'test specific to the urllib.url2path function.')
> +    def test_ntpath(self):
> +        given = ('/C:/', '///C:/', '/C|//')
> +        expect = 'C:\\'
> +        for url in given:
> +            result = urllib.request.url2pathname(url)
> +            self.assertEqual(expect, result,
> +                             'urllib.request..url2pathname() failed; %s !=
> %s' %
> +                             (expect, result))
> +        given = '///C|/path'
> +        expect = 'C:\\path'
> +        result = urllib.request.url2pathname(given)
> +        self.assertEqual(expect, result,
> +                         'urllib.request.url2pathname() failed; %s != %s'
> %
> +                         (expect, result))
> +
>  class Utility_Tests(unittest.TestCase):
>     """Testcase to test the various utility functions in the urllib."""
>
> diff --git a/Misc/NEWS b/Misc/NEWS
> --- a/Misc/NEWS
> +++ b/Misc/NEWS
> @@ -103,6 +103,9 @@
>  Library
>  -------
>
> +- Issue #11474: Fix the bug with url2pathname() handling of '/C|/' on
> Windows.
> +  Patch by Santoso Wijaya.
> +
>  - Issue #11684: complete email.parser bytes API by adding
> BytesHeaderParser.
>
>  - The bz2 module now handles 4GiB+ input buffers correctly.
>
> --
> Repository URL: http://hg.python.org/cpython
>
> _______________________________________________
> Python-checkins mailing list
> Python-checkins at python.org
> http://mail.python.org/mailman/listinfo/python-checkins
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110414/51186587/attachment.html>

From sandro.tosi at gmail.com  Thu Apr 14 21:22:27 2011
From: sandro.tosi at gmail.com (Sandro Tosi)
Date: Thu, 14 Apr 2011 21:22:27 +0200
Subject: [Python-Dev] Status of json (simplejson) in cpython
Message-ID: <BANLkTind0dSsoesD+tZ-EAqZjpr2m-CuCw@mail.gmail.com>

Hi all,
it all started with issue10019.

The version of json we have in cpython is simplejson 2.0.9, heavily
patched (partly because it was converted to py3k and partly through the
normal flow of issues/bugfixes), while upstream has already released
2.1.3.

Their two roads have diverged a lot, and since this blocks any further
update of cpython's json from upstream, I'd like to close the gap.

This isn't exactly an easy task, and this email is more of a
brainstorming on the ways we have to achieve the goal: being able to
upgrade json to 2.1.3.

Luckily, upstream is receptive to patches, so part of the job is to
forward the patches written for cpython that are not already in the
upstream code.

But how am I going to do this? Let's do a brain-dump:

- the history goes back to changeset f686aced02a3 (May 2009, wow) when
2.0.9 was merged on trunk
- I can navigate from that changeset up to tip, examine the diffs, and
see if they apply to 2.1.3, and prepare a set of patches to be forwarded
- part of those diffs is about the py3k conversion, which probably needs
to be extended to other parts of the upstream code not currently in
cpython. For those "new" code parts, do you have some guides about
porting a project to py3k? It would be my first time, and other than
building it and running it with python3 I don't know what to do :)
- once (and if :) I reach the point where I have all the relevant
patches applied on 2.1.3, what's the next step?
-- take 2.1.3 + patches, copy it over Lib/json + tests + Modules, and
see what breaks?
-- what about the docs? (luckily I just noticed they're already in the
upstream repo, so that's another thing to sync)
- what are we going to do in the long run? How can we ensure we'll have
a healthy collaboration with upstream, e.g. in case a bug is reported
(and later on fixed) in cpython? Is there a policy for projects present
in cpython and also maintained elsewhere?

Finally: do you have any suggestions that might help this task succeed?
Advice on the steps above, tips about the merge, anything like that.

Thanks a lot for your time,
-- 
Sandro Tosi (aka morph, morpheus, matrixhasu)
My website: http://matrixhasu.altervista.org/
Me at Debian: http://wiki.debian.org/SandroTosi

From solipsis at pitrou.net  Thu Apr 14 22:46:19 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Thu, 14 Apr 2011 22:46:19 +0200
Subject: [Python-Dev] Status of json (simplejson) in cpython
References: <BANLkTind0dSsoesD+tZ-EAqZjpr2m-CuCw@mail.gmail.com>
Message-ID: <20110414224619.18930f91@pitrou.net>

On Thu, 14 Apr 2011 21:22:27 +0200
Sandro Tosi <sandro.tosi at gmail.com> wrote:
> 
> But how am I going to do this? let's do a brain-dump:

IMHO, you should compute the diff between 2.0.9 and 2.1.3 and try to
apply it to the CPython source tree (you'll probably have to change the
file paths).

> - what are we going to do in the long run? how can we assure we'll be
> having a healthy collaboration with upstream?

Tricky question... :/

Regards

Antoine.



From martin at v.loewis.de  Thu Apr 14 23:04:09 2011
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Thu, 14 Apr 2011 23:04:09 +0200
Subject: [Python-Dev] Status of json (simplejson) in cpython
In-Reply-To: <BANLkTind0dSsoesD+tZ-EAqZjpr2m-CuCw@mail.gmail.com>
References: <BANLkTind0dSsoesD+tZ-EAqZjpr2m-CuCw@mail.gmail.com>
Message-ID: <4DA76149.2000603@v.loewis.de>

> - what are we going to do in the long run? how can we assure we'll be
> having a healthy collaboration with upstream? f.e. in case a bug is
> reported (and later on fixed) in cpython? is there a policy for
> projects present in cpython and also maintained elsewhere?
> 
> At the end: do you have some suggestions that might this task be
> successful? advice on the steps above, tips about the merge, something
> like this.

I think it would be useful if the porting were done over from scratch,
in a way that allows upstream to provide 2.x and 3.x out of a single
code base, and to get this port merged into upstream.

If there are bug fixes that we made to the json algorithms proper, these
would have to be identified and redone, or simply ignored (hoping that
somebody will re-report them if the issue persists).

A necessary prerequisite is that we have a dedicated maintainer of
the json package.

Regards,
Martin

From raymond.hettinger at gmail.com  Thu Apr 14 23:29:10 2011
From: raymond.hettinger at gmail.com (Raymond Hettinger)
Date: Thu, 14 Apr 2011 14:29:10 -0700
Subject: [Python-Dev] Status of json (simplejson) in cpython
In-Reply-To: <BANLkTind0dSsoesD+tZ-EAqZjpr2m-CuCw@mail.gmail.com>
References: <BANLkTind0dSsoesD+tZ-EAqZjpr2m-CuCw@mail.gmail.com>
Message-ID: <52676033-A4C9-43BD-8B34-18A487B9B2E4@gmail.com>


On Apr 14, 2011, at 12:22 PM, Sandro Tosi wrote:

> The version we have in cpython of json is simplejson 2.0.9 highly
> patched (either because it was converted to py3k, and because of the
> normal flow of issues/bugfixes) while upstream have already released
> 2.1.13 .
> 
> Their 2 roads had diverged a lot, and since this blocks any further
> update of cpython's json from upstream, I'd like to close this gap.

Are you proposing updates to the Python 3.3 json module
to include newer features like use_decimal and changing
the indent argument from an integer to a string?


> - what are we going to do in the long run? 

If Bob shows no interest in Python 3, then
the code bases will probably continue to diverge.

Since the JSON spec is set in stone, the changes
will mostly be about API (indentation, object conversion, etc)
and optimization.  I presume the core parsing logic won't
be changing much.


Raymond




From list-sink at trainedmonkeystudios.org  Thu Apr 14 23:56:50 2011
From: list-sink at trainedmonkeystudios.org (Terrence Cole)
Date: Thu, 14 Apr 2011 14:56:50 -0700
Subject: [Python-Dev] python and super
In-Reply-To: <4DA71C63.3030809@voidspace.org.uk>
References: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>
	<70EF2C52-1D92-4351-884E-52AF76BAAC6D@mac.com>
	<4DA70ACA.4070204@voidspace.org.uk>
	<20110414153503.F125B3A4063@sparrow.telecommunity.com>
	<825E6CD5-8673-463B-92AE-59677C327C0A@gmail.com>
	<4DA71C63.3030809@voidspace.org.uk>
Message-ID: <1302818210.5819.8.camel@localhost>

On Thu, 2011-04-14 at 17:10 +0100, Michael Foord wrote:
> On 14/04/2011 17:02, Raymond Hettinger wrote:
> > On Apr 14, 2011, at 8:34 AM, P.J. Eby wrote:
> >
> >> At 03:55 PM 4/14/2011 +0100, Michael Foord wrote:
> >>> Ricardo isn't suggesting that Python should always call super for you, but when you *start* the chain by calling super then Python could ensure that all the methods are called for you. If an individual method doesn't call super then a theoretical implementation could skip the parents
> >>> methods (unless another child calls super).
> >> That would break classes that deliberately don't call super.  I can think of examples in my own code that would break, especially in __init__() cases.
> >>
> >> It's perfectly sensible and useful for there to be classes that intentionally fail to call super(), and yet have a subclass that wants to use super().  So, this change would expose an internal implementation detail of a class to its subclasses, and make "fragile base class" problems worse.  (i.e., where an internal change to a base class breaks a previously-working subclass).
> > I agree.  Better for someone to submit a recipe for a variant of super and see if there is any uptake.
> 
> In Python 3 super is treated specially by the compiler, so an 
> alternative implementation that behaves similarly to the built-in one 
> modulo this change is not possible.

I know that super does some astonishing *runtime* hackery with co_code
when you don't pass arguments, but I thought that was all that was
needed.  What does the compiler have to do specially for super that
would prevent somebody from implementing something like it?
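
(For context, a minimal sketch of the compile-time support in question,
assuming CPython 3.x: any method body that mentions super or __class__
gets an implicit __class__ closure cell, and zero-argument super() reads
that cell together with the first positional argument.)

class Base:
    def hello(self):
        return 'Base'

class Child(Base):
    def hello(self):
        # __class__ is a closure cell created by the compiler; it is
        # always Child, the class this method was defined in, even when
        # the method runs on an instance of a subclass.
        print(__class__)
        return super().hello()      # same as super(__class__, self).hello()

print(Child().hello())              # prints the class, then 'Base'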

> Two use cases for the suggested alternative behaviour have been 
> presented. What is the use case for a method not wanting to prevent its 
> *sibling* methods in a multiple inheritance situation being called?
> 
> I believe the use case Phillip (and others) have presented is for 
> methods preventing their *parent* methods being called.
>
> All the best,
> 
> Michael Foord
> 
> >
> > Raymond
> >
> 
> 



From pjenvey at underboss.org  Fri Apr 15 00:25:58 2011
From: pjenvey at underboss.org (Philip Jenvey)
Date: Thu, 14 Apr 2011 15:25:58 -0700
Subject: [Python-Dev] Hosting the Jython hg repo
In-Reply-To: <20110410234423.3d5e98c6@pitrou.net>
References: <A75E64A6-247D-41A8-B6D8-3CAA96D94616@underboss.org>
	<4DA20BF0.4020604@v.loewis.de> <20110410234423.3d5e98c6@pitrou.net>
Message-ID: <DCEBB677-9364-482E-8C1C-0781DD0F6CA5@underboss.org>


On Apr 10, 2011, at 2:44 PM, Antoine Pitrou wrote:

> On Sun, 10 Apr 2011 21:58:40 +0200
> "Martin v. L?wis" <martin at v.loewis.de> wrote:
>> 
>> Ultimately, it's up to Georg and Antoine to decide whether they want
>> to accept the load.
> 
> I don't want to maintain the Jython repo myself but if Georg or Philip
> accepts to do it it's fine.
> 
>> One option would be to grant a Jython developer
>> control to account management - preferably a single person, who would
>> then also approve/apply changes to the hooks.
> 
> +1.

Let's go ahead with this option then. Can someone please grant me said access?

--
Philip Jenvey


From ricardokirkner at gmail.com  Fri Apr 15 00:32:58 2011
From: ricardokirkner at gmail.com (Ricardo Kirkner)
Date: Thu, 14 Apr 2011 19:32:58 -0300
Subject: [Python-Dev] python and super
In-Reply-To: <8A4A58EF-70F7-4F2F-8564-AE8611713986@mac.com>
References: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>
	<70EF2C52-1D92-4351-884E-52AF76BAAC6D@mac.com>
	<4DA70ACA.4070204@voidspace.org.uk>
	<20110414153503.F125B3A4063@sparrow.telecommunity.com>
	<825E6CD5-8673-463B-92AE-59677C327C0A@gmail.com>
	<4DA71C63.3030809@voidspace.org.uk>
	<8A4A58EF-70F7-4F2F-8564-AE8611713986@mac.com>
Message-ID: <BANLkTincGnYrc48jzQfNrc_HFidohJXvwA@mail.gmail.com>

>
> What would the semantics be of a super that intentionally calls all siblings? In particular what is the return value of such a call? The implementation can't know how to combine the implementations in the inheritance chain and should refuse the temptation to guess.

I'll give you the example I came upon:

I have a TestCase class, which inherits from both Django's TestCase
and from some custom TestCases that act as mixin classes. So I have
something like

class MyTestCase(TestCase, Mixin1, Mixin2):
   ...

now django's TestCase class inherits from unittest2.TestCase, which we
found was not calling super. Even if this is a bug and should be fixed
in unittest2, this is an example where I, as a consumer of django,
shouldn't have to be worried about how django's TestCase class is
implemented. Since I explicitly base off 3 classes, I expected all 3
classes to be initialized, and I expect the setUp method to be called
on all of them.

If I'm assuming/expecting unreasonable things, please enlighten me.
Otherwise, there you have a real-world use case for when you'd want
the sibling classes to be called even if one class breaks the mro
chain (in this case TestCase).

Thanks,
Ricardo

From ethan at stoneleaf.us  Fri Apr 15 01:00:34 2011
From: ethan at stoneleaf.us (Ethan Furman)
Date: Thu, 14 Apr 2011 16:00:34 -0700
Subject: [Python-Dev] python and super
In-Reply-To: <BANLkTincGnYrc48jzQfNrc_HFidohJXvwA@mail.gmail.com>
References: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>	<70EF2C52-1D92-4351-884E-52AF76BAAC6D@mac.com>	<4DA70ACA.4070204@voidspace.org.uk>	<20110414153503.F125B3A4063@sparrow.telecommunity.com>	<825E6CD5-8673-463B-92AE-59677C327C0A@gmail.com>	<4DA71C63.3030809@voidspace.org.uk>	<8A4A58EF-70F7-4F2F-8564-AE8611713986@mac.com>
	<BANLkTincGnYrc48jzQfNrc_HFidohJXvwA@mail.gmail.com>
Message-ID: <4DA77C92.80007@stoneleaf.us>

Ricardo Kirkner wrote:
>> What would the semantics be of a super that intentionally calls all
 >> siblings? In particular what is the return value of such a call?
 >> The implementation can't know how to combine the implementations
 >> in the inheritance chain and should refuse the temptation to guess.
> 
> I'll give you the example I came upon:
> 
> I have a TestCase class, which inherits from both Django's TestCase
> and from some custom TestCases that act as mixin classes. So I have
> something like
> 
> class MyTestCase(TestCase, Mixin1, Mixin2):
>    ...
> 
> now django's TestCase class inherits from unittest2.TestCase, which we
> found was not calling super. Even if this is a bug and should be fixed
> in unittest2, this is an example where I, as a consumer of django,
> shouldn't have to be worried about how django's TestCase class is
> implemented. Since I explicitly base off 3 classes, I expected all 3
> classes to be initialized, and I expect the setUp method to be called
> on all of them.
> 
> If I'm assuming/expecting unreasonable things, please enlighten me.
> Otherwise, there you have a real-world use case for when you'd want
> the sibling classes to be called even if one class breaks the mro
> chain (in this case TestCase).

How does python tell your use-case from, say, this:

class Mixin3(unittest2.TestCase):
     "stuff happens"

class MyTestCase(TestCase, Mixin1, Mixin2, Mixin3):
     ...

Here we have django's TestCase that does *not* want to call 
unittest2.TestCase (assuming that's not a bug), but it gets called 
anyway because the Mixin3 sibling has it as a base class.  So does this 
mean that TestCase and Mixin3 just don't play well together?

Maybe composition instead of inheritance is the answer (in this case, 
anyway ;).

~Ethan~

From ben+python at benfinney.id.au  Fri Apr 15 01:19:36 2011
From: ben+python at benfinney.id.au (Ben Finney)
Date: Fri, 15 Apr 2011 09:19:36 +1000
Subject: [Python-Dev] Adding test case methods to TestCase subclasses (was:
	python and super)
References: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>
	<70EF2C52-1D92-4351-884E-52AF76BAAC6D@mac.com>
	<4DA70ACA.4070204@voidspace.org.uk>
	<20110414153503.F125B3A4063@sparrow.telecommunity.com>
	<825E6CD5-8673-463B-92AE-59677C327C0A@gmail.com>
	<4DA71C63.3030809@voidspace.org.uk>
	<8A4A58EF-70F7-4F2F-8564-AE8611713986@mac.com>
	<BANLkTincGnYrc48jzQfNrc_HFidohJXvwA@mail.gmail.com>
	<4DA77C92.80007@stoneleaf.us>
Message-ID: <87zkns2zg7.fsf_-_@benfinney.id.au>

Ethan Furman <ethan at stoneleaf.us> writes:

> Here we have django's TestCase that does *not* want to call
> unittest2.TestCase (assuming that's not a bug), but it gets called
> anyway because the Mixin3 sibling has it as a base class.  So does
> this mean that TestCase and Mixin3 just don't play well together?
>
> Maybe composition instead of inheritance is the answer (in this case,
> anyway ;).

TestCase subclasses is a multiple-inheritance use case that I share. The
mix-ins add test cases (methods named 'test_' on the mix-in class) to
the TestCase subclass. I would prefer not to use multiple inheritance
for this if it can be achieved in a better way.

How can composition add test cases detectable by Python 2's 'unittest'
and Python 3's 'unittest2'?
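
(One possible composition-style answer, sketched with hypothetical
names: a class decorator copies shared test functions onto the TestCase
subclass, so unittest's usual name-based discovery still finds them,
without multiple inheritance.)

import unittest

def check_frobnication(self):
    # shared check, written as a plain function (hypothetical example)
    self.assertTrue(True)

def add_shared_tests(cls):
    # copy the shared checks onto the class under 'test_*' names
    cls.test_frobnication = check_frobnication
    return cls

@add_shared_tests
class MyTests(unittest.TestCase):
    def test_local(self):
        self.assertEqual(1 + 1, 2)

if __name__ == '__main__':
    unittest.main()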

-- 
 \         "The userbase for strong cryptography declines by half with |
  `\      every additional keystroke or mouseclick required to make it |
_o__)                                             work." -Carl Ellison |
Ben Finney


From raymond.hettinger at gmail.com  Fri Apr 15 01:39:01 2011
From: raymond.hettinger at gmail.com (Raymond Hettinger)
Date: Thu, 14 Apr 2011 16:39:01 -0700
Subject: [Python-Dev] python and super
In-Reply-To: <BANLkTincGnYrc48jzQfNrc_HFidohJXvwA@mail.gmail.com>
References: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>
	<70EF2C52-1D92-4351-884E-52AF76BAAC6D@mac.com>
	<4DA70ACA.4070204@voidspace.org.uk>
	<20110414153503.F125B3A4063@sparrow.telecommunity.com>
	<825E6CD5-8673-463B-92AE-59677C327C0A@gmail.com>
	<4DA71C63.3030809@voidspace.org.uk>
	<8A4A58EF-70F7-4F2F-8564-AE8611713986@mac.com>
	<BANLkTincGnYrc48jzQfNrc_HFidohJXvwA@mail.gmail.com>
Message-ID: <188F4538-E8C6-4BDA-BE65-5131052D9449@gmail.com>


On Apr 14, 2011, at 3:32 PM, Ricardo Kirkner wrote:

>> 
>> What would the semantics be of a super that intentionally calls all siblings? In particular what is the return value of such a call? The implementation can't know how to combine the implementations in the inheritance chain and should refuse the temptation to guess.
> 
> I'll give you the example I came upon:
> 
> I have a TestCase class, which inherits from both Django's TestCase
> and from some custom TestCases that act as mixin classes. So I have
> something like
> 
> class MyTestCase(TestCase, Mixin1, Mixin2):
>   ...
> 
> now django's TestCase class inherits from unittest2.TestCase, which we
> found was not calling super. Even if this is a bug and should be fixed
> in unittest2, this is an example where I, as a consumer of django,
> shouldn't have to be worried about how django's TestCase class is
> implemented. Since I explicitly base off 3 classes, I expected all 3
> classes to be initialized, and I expect the setUp method to be called
> on all of them.
> 
> If I'm assuming/expecting unreasonable things, please enlighten me.

For cooperative-multiple-inheritance to work, the classes
need to cooperate by having been designed to work together
in a series of cooperative super calls.

If an external non-cooperative class needs to be used, then
it should be wrapped in a class that makes an explicit
__init__ call to the external class and then calls super().__init__()
to continue the forwarding.
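
(To illustrate the cooperative pattern described above, a minimal sketch
with hypothetical class names; each __init__ consumes its own keyword
arguments and forwards the rest with super().)

class SaveMixin:
    def __init__(self, **kwargs):
        super().__init__(**kwargs)      # keep the chain going
        print('SaveMixin ready')

class Widget:
    def __init__(self, colour='red', **kwargs):
        super().__init__(**kwargs)      # forwards an empty dict to object
        self.colour = colour

class SavableWidget(SaveMixin, Widget):
    pass

SavableWidget(colour='blue')            # every __init__ in the MRO runs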


Raymond

From tjreedy at udel.edu  Fri Apr 15 02:00:11 2011
From: tjreedy at udel.edu (Terry Reedy)
Date: Thu, 14 Apr 2011 20:00:11 -0400
Subject: [Python-Dev] cpython (merge 3.2 -> default): merge from 3.2.
In-Reply-To: <BANLkTi=ZEUYGq1if_SdzR+N-JHbd2H-r1Q@mail.gmail.com>
References: <E1QAEzE-0005M1-D0@dinsdale.python.org>
	<BANLkTi=ZEUYGq1if_SdzR+N-JHbd2H-r1Q@mail.gmail.com>
Message-ID: <io81q9$fu5$1@dough.gmane.org>

On 4/14/2011 2:53 PM, Brett Cannon wrote:
> I think you have the wrong issue #; that one has to do with string
> exceptions.

>     Fix closes Issue1147.

Right, wrong issue. Log should be corrected if it has not been.

>     +- Issue #11474: Fix the bug with url2pathname() handling of '/C|/'
>     on Windows.

Correct one. Senthil has already closed this manually.

-- 
Terry Jan Reedy


From greg.ewing at canterbury.ac.nz  Fri Apr 15 02:37:04 2011
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Fri, 15 Apr 2011 12:37:04 +1200
Subject: [Python-Dev] python and super
In-Reply-To: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>
References: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>
Message-ID: <4DA79330.7050401@canterbury.ac.nz>

Ricardo Kirkner wrote:
> My question is,
> shouldn't/wouldn't it be better,
> if python took ownership of that part, and ensured all classes get
> called, even if some class misbehaved?

I don't think so. If a class isn't designed to be part of
a super chain, there are likely to be other issues that
can't be fixed as simply as this.

-- 
Greg

From greg.ewing at canterbury.ac.nz  Fri Apr 15 02:58:14 2011
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Fri, 15 Apr 2011 12:58:14 +1200
Subject: [Python-Dev] python and super
In-Reply-To: <20110414153503.F125B3A4063@sparrow.telecommunity.com>
References: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>
	<70EF2C52-1D92-4351-884E-52AF76BAAC6D@mac.com>
	<4DA70ACA.4070204@voidspace.org.uk>
	<20110414153503.F125B3A4063@sparrow.telecommunity.com>
Message-ID: <4DA79826.7090904@canterbury.ac.nz>

P.J. Eby wrote:

> It's perfectly sensible and useful for there to be classes that 
> intentionally fail to call super(), and yet have a subclass that wants 
> to use super().

One such case is where someone is using super() in a
single-inheritance environment as a way of not having to
write the base class name explicitly into calls to base
methods. (I wouldn't recommend using super() that way
myself, but some people do.) In that situation, any failure
to call super() is almost certainly deliberate.
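
(A minimal sketch of that style, with made-up class names:)

class Base:
    def close(self):
        print('Base.close')

class Child(Base):
    def close(self):
        super().close()     # used only to avoid repeating the name 'Base'
        print('Child.close')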

-- 
Greg

From greg.ewing at canterbury.ac.nz  Fri Apr 15 03:02:46 2011
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Fri, 15 Apr 2011 13:02:46 +1200
Subject: [Python-Dev] python and super
In-Reply-To: <4DA714C2.7000006@voidspace.org.uk>
References: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>
	<70EF2C52-1D92-4351-884E-52AF76BAAC6D@mac.com>
	<4DA70ACA.4070204@voidspace.org.uk>
	<20110414153503.F125B3A4063@sparrow.telecommunity.com>
	<4DA714C2.7000006@voidspace.org.uk>
Message-ID: <4DA79936.1010300@canterbury.ac.nz>

Michael Foord wrote:
> What I was suggesting is that a method not calling 
> super shouldn't stop a *sibling* method being called, but could still 
> prevent the *parent* method being called.

There isn't necessarily a clear distinction between parents
and siblings.

class A:
   ...

class B(A):
   ...

class C(A, B):
   ...

In C, is A a parent of B or a sibling of B?

-- 
Greg

> 
> Michael
> 


From steve at pearwood.info  Fri Apr 15 03:23:52 2011
From: steve at pearwood.info (Steven D'Aprano)
Date: Fri, 15 Apr 2011 11:23:52 +1000
Subject: [Python-Dev] python and super
In-Reply-To: <BANLkTincGnYrc48jzQfNrc_HFidohJXvwA@mail.gmail.com>
References: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>	<70EF2C52-1D92-4351-884E-52AF76BAAC6D@mac.com>	<4DA70ACA.4070204@voidspace.org.uk>	<20110414153503.F125B3A4063@sparrow.telecommunity.com>	<825E6CD5-8673-463B-92AE-59677C327C0A@gmail.com>	<4DA71C63.3030809@voidspace.org.uk>	<8A4A58EF-70F7-4F2F-8564-AE8611713986@mac.com>
	<BANLkTincGnYrc48jzQfNrc_HFidohJXvwA@mail.gmail.com>
Message-ID: <4DA79E28.2060406@pearwood.info>

Ricardo Kirkner wrote:

> I have a TestCase class, which inherits from both Django's TestCase
> and from some custom TestCases that act as mixin classes. So I have
> something like
> 
> class MyTestCase(TestCase, Mixin1, Mixin2):
>    ...
> 
> now django's TestCase class inherits from unittest2.TestCase, which we
> found was not calling super. Even if this is a bug and should be fixed
> in unittest2, this is an example where I, as a consumer of django,
> shouldn't have to be worried about how django's TestCase class is
> implemented. Since I explicitly base off 3 classes, I expected all 3
> classes to be initialized, and I expect the setUp method to be called
> on all of them.
> 
> If I'm assuming/expecting unreasonable things, please enlighten me.

If we treat django's failure to use super as a bug, you want the Python 
language to work around that bug so that:

"I, as a consumer of django, shouldn't have to be worried about bugs in 
django". (For at least one class of bug.)

If we *don't* treat django's failure to use super as a bug, but as a 
deliberate design choice, then you are trying to do something which 
django doesn't support. Possibly *deliberately* doesn't support. You 
want the Python language to add that support so that:

"I, as a consumer of django, shouldn't have to be worried about whether 
django supports what I want to do or not".

Either way you look at it, I think it's extremely unreasonable to expect 
the language to work around bugs in third-party applications, or to add 
features to them that the third-party developers either didn't consider 
or don't want.

Multiple inheritance is tricky enough to get right without adding "Do 
What I Mean" black magic to it. I'd rather work around bugs in 
third-party classes than try to deal with Python actively subverting the 
code I read and write by mysteriously calling superclass methods where 
there is no call to a superclass method.




-- 
Steven

From orsenthil at gmail.com  Fri Apr 15 03:31:54 2011
From: orsenthil at gmail.com (Senthil Kumaran)
Date: Fri, 15 Apr 2011 09:31:54 +0800
Subject: [Python-Dev] cpython (merge 3.2 -> default): merge from 3.2.
In-Reply-To: <io81q9$fu5$1@dough.gmane.org>
References: <E1QAEzE-0005M1-D0@dinsdale.python.org>
	<BANLkTi=ZEUYGq1if_SdzR+N-JHbd2H-r1Q@mail.gmail.com>
	<io81q9$fu5$1@dough.gmane.org>
Message-ID: <20110415013153.GA2511@kevin>

On Thu, Apr 14, 2011 at 08:00:11PM -0400, Terry Reedy wrote:
> On 4/14/2011 2:53 PM, Brett Cannon wrote:
> >I think you have the wrong issue #; that one has to do with string
> >exceptions.
> 
> >    Fix closes Issue1147.
> 
> Right, wrong issue. Log should be corrected if it has not been.
> 
> >    +- Issue #11474: Fix the bug with url2pathname() handling of '/C|/'
> >    on Windows.

Yes, I copy-pasted the issue number and seem to have missed the last
digit. I wondered why it was not closed automatically and had to close
it manually; now I realize the reason.

I shall correct the logs.

Thanks,
Senthil

From greg.ewing at canterbury.ac.nz  Fri Apr 15 03:39:09 2011
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Fri, 15 Apr 2011 13:39:09 +1200
Subject: [Python-Dev] python and super
In-Reply-To: <188F4538-E8C6-4BDA-BE65-5131052D9449@gmail.com>
References: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>
	<70EF2C52-1D92-4351-884E-52AF76BAAC6D@mac.com>
	<4DA70ACA.4070204@voidspace.org.uk>
	<20110414153503.F125B3A4063@sparrow.telecommunity.com>
	<825E6CD5-8673-463B-92AE-59677C327C0A@gmail.com>
	<4DA71C63.3030809@voidspace.org.uk>
	<8A4A58EF-70F7-4F2F-8564-AE8611713986@mac.com>
	<BANLkTincGnYrc48jzQfNrc_HFidohJXvwA@mail.gmail.com>
	<188F4538-E8C6-4BDA-BE65-5131052D9449@gmail.com>
Message-ID: <4DA7A1BD.1040408@canterbury.ac.nz>

Raymond Hettinger wrote:

> If an external non-cooperative class needs to be used, then
> it should be wrapped in a class that makes an explicit
> __init__ call to the external class and then calls super().__init__()
> to continue the forwarding.

I don't think it's as simple as that. Isn't that super() call
going to call the __init__() method that you just explicitly
called *again*?

Seems like you would at least need to use super(BaseClass)...
to skip the one you just called. But it's not immediately
obvious to me that this won't ever skip other classes that
you *do* want to call.
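
(A small sketch of the wrapping pattern under discussion, with
hypothetical class names, showing where the explicit class argument
matters:)

class External:                     # third-party class, never calls super()
    def __init__(self):
        print('External.__init__')

class Wrapper(External):
    def __init__(self):
        External.__init__(self)     # explicit call to the external class
        # A plain super().__init__() here would resolve to External again
        # (the next class after Wrapper in the MRO) and call it twice;
        # starting the lookup *after* External skips the class just called,
        # though it also skips anything else that lands between Wrapper
        # and External in a larger MRO.
        super(External, self).__init__()

class Other:
    def __init__(self):
        print('Other.__init__')
        super().__init__()

class App(Wrapper, Other):   # MRO: App, Wrapper, External, Other, object
    pass

App()       # prints External.__init__ once, then Other.__init__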

-- 
Greg

From rdmurray at bitdance.com  Fri Apr 15 03:45:37 2011
From: rdmurray at bitdance.com (R. David Murray)
Date: Thu, 14 Apr 2011 21:45:37 -0400
Subject: [Python-Dev] python and super
In-Reply-To: <4DA79826.7090904@canterbury.ac.nz>
References: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>
	<70EF2C52-1D92-4351-884E-52AF76BAAC6D@mac.com>
	<4DA70ACA.4070204@voidspace.org.uk>
	<20110414153503.F125B3A4063@sparrow.telecommunity.com>
	<4DA79826.7090904@canterbury.ac.nz>
Message-ID: <20110415014558.5189A2500D6@mailhost.webabinitio.net>

On Fri, 15 Apr 2011 12:58:14 +1200, Greg Ewing <greg.ewing at canterbury.ac.nz> wrote:
> P.J. Eby wrote:
> 
> > It's perfectly sensible and useful for there to be classes that 
> > intentionally fail to call super(), and yet have a subclass that wants 
> > to use super().
> 
> One such case is where someone is using super() in a
> single-inheritance environment as a way of not having to
> write the base class name explicitly into calls to base
> methods. (I wouldn't recommend using super() that way
> myself, but some people do.) In that situation, any failure
> to call super() is almost certainly deliberate.

Why not?  It seems more useful than using it for chaining,
especially given the compiler hack in Python3.

--
R. David Murray           http://www.bitdance.com

From me at gustavonarea.net  Fri Apr 15 10:35:06 2011
From: me at gustavonarea.net (Gustavo Narea)
Date: Fri, 15 Apr 2011 09:35:06 +0100
Subject: [Python-Dev] Releases for recent security vulnerability
Message-ID: <BANLkTi=FtJ_oZe-pKnNNANFTDehWrx-J2A@mail.gmail.com>

Hi all,

How come a description of how to exploit a security vulnerability
comes before a release for said vulnerability? I'm talking about this:
http://blog.python.org/2011/04/urllib-security-vulnerability-fixed.html

My understanding is that the whole point of asking people not to
report security vulnerabilities publicly was to allow time to release a
fix.

If developers haven't had enough time to release the fix, that's fine.
But I can't think of a sensible reason why it should be announced
first.

Cheers,

 - Gustavo.

From orsenthil at gmail.com  Fri Apr 15 11:07:17 2011
From: orsenthil at gmail.com (Senthil Kumaran)
Date: Fri, 15 Apr 2011 17:07:17 +0800
Subject: [Python-Dev] Releases for recent security vulnerability
In-Reply-To: <BANLkTi=FtJ_oZe-pKnNNANFTDehWrx-J2A@mail.gmail.com>
References: <BANLkTi=FtJ_oZe-pKnNNANFTDehWrx-J2A@mail.gmail.com>
Message-ID: <20110415090717.GA9367@kevin>

On Fri, Apr 15, 2011 at 09:35:06AM +0100, Gustavo Narea wrote:
> 
> How come a description of how to exploit a security vulnerability
> comes before a release for said vulnerability? I'm talking about this:
> http://blog.python.org/2011/04/urllib-security-vulnerability-fixed.html
> 
> My understanding is that the whole point of asking people not to
> report security vulnerability publicly was to allow time to release a
> fix.

Yes, I agree with you. I am surprised that it made it to the blog and is
attracting more attention (via responses/retweets) than it is worth.

FWIW, if we analyze the technical details more carefully:
urllib/urllib2 as a library could have followed a redirect to a file://
URL, but it is a library, not a web server, and the person who wrote
the server could catch the redirection and handle it at a higher level
too. That makes it less drastic than it appears in the post.
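
(For illustration, a minimal sketch of that kind of higher-level
handling, assuming Python 3's urllib.request; the handler name is made
up.)

import urllib.error
import urllib.request
from urllib.parse import urlparse

class SafeRedirectHandler(urllib.request.HTTPRedirectHandler):
    # Only follow redirects whose target is plain http or https.
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        if urlparse(newurl).scheme not in ('http', 'https'):
            raise urllib.error.HTTPError(
                newurl, code, 'refused non-HTTP redirect', headers, fp)
        return super().redirect_request(req, fp, code, msg, headers, newurl)

opener = urllib.request.build_opener(SafeRedirectHandler())
# opener.open(...) now refuses redirects to file:// and other schemes.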

Anyways it was an issue and it is fixed.

-- 
Senthil

<calc> Knghtbrd: irc doesn't compile c code very well ;)

From greg.ewing at canterbury.ac.nz  Fri Apr 15 12:21:03 2011
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Fri, 15 Apr 2011 22:21:03 +1200
Subject: [Python-Dev] python and super
In-Reply-To: <20110415014558.5189A2500D6@mailhost.webabinitio.net>
References: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>
	<70EF2C52-1D92-4351-884E-52AF76BAAC6D@mac.com>
	<4DA70ACA.4070204@voidspace.org.uk>
	<20110414153503.F125B3A4063@sparrow.telecommunity.com>
	<4DA79826.7090904@canterbury.ac.nz>
	<20110415014558.5189A2500D6@mailhost.webabinitio.net>
Message-ID: <4DA81C0F.80100@canterbury.ac.nz>

R. David Murray wrote:

> Why not?  It seems more useful than using it for chaining,
> especially given the compiler hack in Python3.

Because it's prone to doing the wrong thing if the class
using it is ever involved in multiple inheritance. If
you're expecting the call to go to a particular class,
it's safer to explicitly name that class.

-- 
Greg

From jcea at jcea.es  Fri Apr 15 13:34:59 2011
From: jcea at jcea.es (Jesus Cea)
Date: Fri, 15 Apr 2011 13:34:59 +0200
Subject: [Python-Dev] http://docs.python.org/py3k/ pointing to 2.7
Message-ID: <4DA82D63.5050803@jcea.es>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

http://docs.python.org/py3k/ takes you to 2.7, by default.

Should we update it to point to 3.2? If the point is to promote Python 3...

I would point it to 3.2, with a big "access the documentation for legacy
2.7" link (besides the small left-column link). What do you think?

- -- 
Jesus Cea Avion                         _/_/      _/_/_/        _/_/_/
jcea at jcea.es - http://www.jcea.es/     _/_/    _/_/  _/_/    _/_/  _/_/
jabber / xmpp:jcea at jabber.org         _/_/    _/_/          _/_/_/_/_/
.                              _/_/  _/_/    _/_/          _/_/  _/_/
"Things are not so easy"      _/_/  _/_/    _/_/  _/_/    _/_/  _/_/
"My name is Dump, Core Dump"   _/_/_/        _/_/_/      _/_/  _/_/
"El amor es poner tu felicidad en la felicidad de otro" - Leibniz
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.10 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iQCVAwUBTagtY5lgi5GaxT1NAQKCigP/YwAMrqyQglgJa85rXBQrFVdvHPcNASab
Fw7PWTMdg7Hxof/cXn9gsdiR3fGqVrRv9G5V64hxi6WN5aVbXDyAMJzxsCEAtPfW
PkRDvdZKsKD1xgxLKIZo1gUCY80Xqrts+kXRJKGtA/TeXzmqhhknHQyHW0oiYK3t
jL9qRKGxVO0=
=gFw7
-----END PGP SIGNATURE-----

From solipsis at pitrou.net  Fri Apr 15 13:46:40 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Fri, 15 Apr 2011 13:46:40 +0200
Subject: [Python-Dev] http://docs.python.org/py3k/ pointing to 2.7
References: <4DA82D63.5050803@jcea.es>
Message-ID: <20110415134640.59a253f2@pitrou.net>

On Fri, 15 Apr 2011 13:34:59 +0200
Jesus Cea <jcea at jcea.es> wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
> 
> http://docs.python.org/py3k/ takes you to 2.7, by default.

Really? Perhaps it has already been fixed, but I read "Python v3.2
documentation" on that page.

Antoine.



From brian.curtin at gmail.com  Fri Apr 15 14:30:54 2011
From: brian.curtin at gmail.com (Brian Curtin)
Date: Fri, 15 Apr 2011 07:30:54 -0500
Subject: [Python-Dev] Releases for recent security vulnerability
In-Reply-To: <BANLkTi=FtJ_oZe-pKnNNANFTDehWrx-J2A@mail.gmail.com>
References: <BANLkTi=FtJ_oZe-pKnNNANFTDehWrx-J2A@mail.gmail.com>
Message-ID: <BANLkTikzFbgAWfn2AEWtPbqmDtSR6HN2Rw@mail.gmail.com>

On Apr 15, 2011 3:46 AM, "Gustavo Narea" <me at gustavonarea.net> wrote:
>
> Hi all,
>
> How come a description of how to exploit a security vulnerability
> comes before a release for said vulnerability? I'm talking about this:
> http://blog.python.org/2011/04/urllib-security-vulnerability-fixed.html
>
> My understanding is that the whole point of asking people not to
> report security vulnerability publicly was to allow time to release a
> fix.

To me, the fix *was* released. Sure, no fancy installers were generated yet,
but people who are susceptible to this issue 1) now know about it, and 2)
have a way to patch their system *if needed*.

If that's wrong, I apologize for writing the post too early. On top of that,
it seems I didn't get all of the details right either, so apologies on that
as well.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110415/dd411cf9/attachment.html>

From jnoller at gmail.com  Fri Apr 15 14:36:16 2011
From: jnoller at gmail.com (Jesse Noller)
Date: Fri, 15 Apr 2011 08:36:16 -0400
Subject: [Python-Dev] Releases for recent security vulnerability
In-Reply-To: <BANLkTikzFbgAWfn2AEWtPbqmDtSR6HN2Rw@mail.gmail.com>
References: <BANLkTi=FtJ_oZe-pKnNNANFTDehWrx-J2A@mail.gmail.com>
	<BANLkTikzFbgAWfn2AEWtPbqmDtSR6HN2Rw@mail.gmail.com>
Message-ID: <BANLkTikJAyHJVffSCf24KfAbFPuOKzoP2g@mail.gmail.com>

On Fri, Apr 15, 2011 at 8:30 AM, Brian Curtin <brian.curtin at gmail.com> wrote:
>
> On Apr 15, 2011 3:46 AM, "Gustavo Narea" <me at gustavonarea.net> wrote:
>>
>> Hi all,
>>
>> How come a description of how to exploit a security vulnerability
>> comes before a release for said vulnerability? I'm talking about this:
>> http://blog.python.org/2011/04/urllib-security-vulnerability-fixed.html
>>
>> My understanding is that the whole point of asking people not to
>> report security vulnerability publicly was to allow time to release a
>> fix.
>
> To me, the fix *was* released. Sure, no fancy installers were generated yet,
> but people who are susceptible to this issue 1) now know about it, and 2)
> have a way to patch their system *if needed*.
>
> If that's wrong, I apologize for writing the post too early. On top of that,
> it seems I didn't get all of the details right either, so apologies on that
> as well.

The code is open source: anyone watching the commits list knows that
this issue was fixed. It's better to keep it in the public eye, so
people know *something was fixed and they should patch*, than to rely
on people *not* watching these channels.

Assume the bad guys already knew about the exploit: We have to spread
the knowledge of the fix as far and as wide as we can so that people
even know there is an issue, and that it was fixed. This applies to
users and *vendors* as well.

A blog post is good communication to our users. I have to side with
Brian on this one.

jesse

From solipsis at pitrou.net  Fri Apr 15 14:59:40 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Fri, 15 Apr 2011 14:59:40 +0200
Subject: [Python-Dev] Releases for recent security vulnerability
References: <BANLkTi=FtJ_oZe-pKnNNANFTDehWrx-J2A@mail.gmail.com>
	<BANLkTikzFbgAWfn2AEWtPbqmDtSR6HN2Rw@mail.gmail.com>
	<BANLkTikJAyHJVffSCf24KfAbFPuOKzoP2g@mail.gmail.com>
Message-ID: <20110415145940.56f2aadf@pitrou.net>

On Fri, 15 Apr 2011 08:36:16 -0400
Jesse Noller <jnoller at gmail.com> wrote:
> On Fri, Apr 15, 2011 at 8:30 AM, Brian Curtin <brian.curtin at gmail.com> wrote:
> >
> > On Apr 15, 2011 3:46 AM, "Gustavo Narea" <me at gustavonarea.net> wrote:
> >>
> >> Hi all,
> >>
> >> How come a description of how to exploit a security vulnerability
> >> comes before a release for said vulnerability? I'm talking about this:
> >> http://blog.python.org/2011/04/urllib-security-vulnerability-fixed.html
> >>
> >> My understanding is that the whole point of asking people not to
> >> report security vulnerability publicly was to allow time to release a
> >> fix.
> >
> > To me, the fix *was* released. Sure, no fancy installers were generated yet,
> > but people who are susceptible to this issue 1) now know about it, and 2)
> > have a way to patch their system *if needed*.
> >
> > If that's wrong, I apologize for writing the post too early. On top of that,
> > it seems I didn't get all of the details right either, so apologies on that
> > as well.
> 
> The code is open source: Anyone watching the commits/list know that
> this issue was fixed. It's better to keep it in the public's eyes, so
> they know *something was fixed and they should patch* than to rely on
> people *not* watching these channels.
> 
> Assume the bad guys already knew about the exploit: We have to spread
> the knowledge of the fix as far and as wide as we can so that people
> even know there is an issue, and that it was fixed. This applies to
> users and *vendors* as well.

True. However, many open source projects make a habit of cutting a
release when a hole is discovered and fixed. It depends on how seriously
they (and their users) take security. Of course, there are different
kinds of security issues, more or less severe. I don't know how severe
the above issue is.

Relying on a vendor distribution (such as a Linux distro, or
ActiveState) is hopefully enough to get these security updates in time
without patching anything by hand. I don't think many people compile
Python for production use, but many do use our Windows installers.

Regards

Antoine.



From fuzzyman at voidspace.org.uk  Fri Apr 15 15:30:11 2011
From: fuzzyman at voidspace.org.uk (Michael Foord)
Date: Fri, 15 Apr 2011 14:30:11 +0100
Subject: [Python-Dev] python and super
In-Reply-To: <4DA79936.1010300@canterbury.ac.nz>
References: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>	<70EF2C52-1D92-4351-884E-52AF76BAAC6D@mac.com>	<4DA70ACA.4070204@voidspace.org.uk>	<20110414153503.F125B3A4063@sparrow.telecommunity.com>	<4DA714C2.7000006@voidspace.org.uk>
	<4DA79936.1010300@canterbury.ac.nz>
Message-ID: <4DA84863.80307@voidspace.org.uk>

On 15/04/2011 02:02, Greg Ewing wrote:
> Michael Foord wrote:
>> What I was suggesting is that a method not calling super shouldn't 
>> stop a *sibling* method being called, but could still prevent the 
>> *parent* method being called.
>
> There isn't necessarily a clear distinction between parents
> and siblings.
>
> class A:
>   ...
>
> class B(A):
>   ...
>
> class C(A, B):
>   ...
>
> In C, is A a parent of B or a sibling of B?
>
For a super call in C, B is a sibling to A. For a super call in B, A is 
a parent.

With the semantics I was suggesting, if C calls super but A doesn't,
then B would still get called.

All the best,

Michael

-- 
http://www.voidspace.org.uk/

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing http://www.sqlite.org/different.html


From fuzzyman at voidspace.org.uk  Fri Apr 15 15:53:26 2011
From: fuzzyman at voidspace.org.uk (Michael Foord)
Date: Fri, 15 Apr 2011 14:53:26 +0100
Subject: [Python-Dev] python and super
In-Reply-To: <4DA79E28.2060406@pearwood.info>
References: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>	<70EF2C52-1D92-4351-884E-52AF76BAAC6D@mac.com>	<4DA70ACA.4070204@voidspace.org.uk>	<20110414153503.F125B3A4063@sparrow.telecommunity.com>	<825E6CD5-8673-463B-92AE-59677C327C0A@gmail.com>	<4DA71C63.3030809@voidspace.org.uk>	<8A4A58EF-70F7-4F2F-8564-AE8611713986@mac.com>	<BANLkTincGnYrc48jzQfNrc_HFidohJXvwA@mail.gmail.com>
	<4DA79E28.2060406@pearwood.info>
Message-ID: <4DA84DD6.20608@voidspace.org.uk>

On 15/04/2011 02:23, Steven D'Aprano wrote:
> Ricardo Kirkner wrote:
>
>> I have a TestCase class, which inherits from both Django's TestCase
>> and from some custom TestCases that act as mixin classes. So I have
>> something like
>>
>> class MyTestCase(TestCase, Mixin1, Mixin2):
>>    ...
>>
>> now django's TestCase class inherits from unittest2.TestCase, which we
>> found was not calling super. Even if this is a bug and should be fixed
>> in unittest2, this is an example where I, as a consumer of django,
>> shouldn't have to be worried about how django's TestCase class is
>> implemented. Since I explicitly base off 3 classes, I expected all 3
>> classes to be initialized, and I expect the setUp method to be called
>> on all of them.
>>
>> If I'm assuming/expecting unreasonable things, please enlighten me.
>
> If we treat django's failure to use super as a bug, you want the 
> Python language to work-around that bug so that:

What you say (that this particular circumstance could be treated as a
bug in django) is true; however, consider the "recently" introduced
problem caused by object.__init__ not taking arguments. This makes it
impossible to use super correctly in various circumstances.

     http://freshfoo.com/blog/object__init__takes_no_parameters

Given the following classes (Python 3):

class A:
     def __init__(self, a):
         print ('A')

class B:
     def __init__(self, a):
         print ('B')

class C(B):
     def __init__(self, a):
         print ('C')
         super().__init__(a)

It is impossible to inherit from both C and A and have all parent 
__init__ methods called correctly. Changing the semantics of super as 
described would fix this problem.

For:

class D(C, A):
     def __init__(self, a):
         super().__init__(a)

D(1)

This is printed:
C
B

(A __init__ is not called).

For this:

class D(A, C):
     def __init__(self, a):
         super().__init__(a)

D(1)

The following is printed:
A

(B and C __init__ methods are not called.)

All the best,

Michael Foord


>
> "I, as a consumer of django, shouldn't have to be worried about bugs 
> in django". (For at least one class of bug.)
>
> If we *don't* treat django's failure to use super as a bug, but as a 
> deliberate design choice, then you are trying to do something which 
> django doesn't support. Possibly *deliberately* doesn't support. You 
> want the Python language to add that support so that:
>
> "I, as a consumer of django, shouldn't have to be worried about 
> whether django supports what I want to do or not".
>
> Either way you look at it, I think it's extremely unreasonable to 
> expect the language to work-around bugs in third-party applications, 
> or to add features to them that the third-party developers either 
> didn't consider or don't want.
>
> Multiple inheritance is tricky enough to get right without adding "Do 
> What I Mean" black magic to it. I'd rather work around bugs in 
> third-party classes than try to deal with Python actively subverting 
> the code I read and write by mysteriously calling superclass methods 
> where there is no call to a superclass method.
>
>
>
>


-- 
http://www.voidspace.org.uk/

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing http://www.sqlite.org/different.html


From fdrake at acm.org  Fri Apr 15 15:54:53 2011
From: fdrake at acm.org (Fred Drake)
Date: Fri, 15 Apr 2011 09:54:53 -0400
Subject: [Python-Dev] Releases for recent security vulnerability
In-Reply-To: <20110415145940.56f2aadf@pitrou.net>
References: <BANLkTi=FtJ_oZe-pKnNNANFTDehWrx-J2A@mail.gmail.com>
	<BANLkTikzFbgAWfn2AEWtPbqmDtSR6HN2Rw@mail.gmail.com>
	<BANLkTikJAyHJVffSCf24KfAbFPuOKzoP2g@mail.gmail.com>
	<20110415145940.56f2aadf@pitrou.net>
Message-ID: <BANLkTik_jXOTB2vKrgEwJn0-B=pm5HDYJA@mail.gmail.com>

On Fri, Apr 15, 2011 at 8:59 AM, Antoine Pitrou <solipsis at pitrou.net> wrote:
> Relying on a vendor distribution (such as a Linux distro, or
> ActiveState) is hopefully enough to get these security updates in time
> without patching anything by hand. I don't think many people compile
> Python for production use, but many do use our Windows installers.

Antoine,

I actually expect many companies build their own Python for production
use; relying on the system Python has long been considered a stability
vulnerability by many of us.  This is especially the case for large
deployments, where machines are less likely to receive updates quickly.

I'd strongly recommend making sure releases are available for download quickly
in cases like this, even if (in any particular case) we think a vulnerability is
unlikely to affect many users.  Whenever we think something like that, we're
always wrong.


  -Fred

-- 
Fred L. Drake, Jr.    <fdrake at acm.org>
"Give me the luxuries of life and I will willingly do without the necessities."
   --Frank Lloyd Wright

From victor.stinner at haypocalc.com  Fri Apr 15 15:47:22 2011
From: victor.stinner at haypocalc.com (Victor Stinner)
Date: Fri, 15 Apr 2011 15:47:22 +0200
Subject: [Python-Dev] http://docs.python.org/py3k/ pointing to 2.7
In-Reply-To: <4DA82D63.5050803@jcea.es>
References: <4DA82D63.5050803@jcea.es>
Message-ID: <1302875242.18033.13.camel@marge>

On Friday 15 April 2011 at 13:34 +0200, Jesus Cea wrote:
> http://docs.python.org/py3k/ takes you to 2.7, by default.
> 
> Should we update it to point to 3.2?. If the point is to promote Python 3...
> 
> I would point it to 3.2, with a big "access to documentation to legacy
> 2.7" (beside the small left column link). What do you think?.

http://docs.python.org/ points to 2.7, yes. I'm already reading the 3.3
doc to develop with Python 2.5: I prefer the most recent doc, and the
API is usually exactly the same. So I vote +1 to make 3.3 the default
doc. Anyway, Python 2 is a dead language!


I don't like the current URL prefixes for indicating the version:
"py3k/" for 3.2 (the latest stable Python 3 version), no prefix for 2.7,
"release/2.6.6/" for 2.6, "dev/py3k/" for 3.3, etc.

Can't we keep it simple as:
 - "2.6/" for 2.6
 - "2.7/" for 2.7
 - "3.1/" for 3.1
 - "3.2/" for 3.2
 - "3.3/" for 3.3
 - "2.x/" (or maybe just "2/"?) as a redirection to 2.7
 - "3.x/" (or maybe just "3/"?) as a redirection to 3.3

http://docs.python.org/ could be a redirection (to
http://docs.python.org/3.x/) instead of serving the doc directly. That
way it would be intuitive to replace 2 with 3 (or 3 with 2) in the URL.


The http://www.python.org/doc/versions/ page uses other URLS:
http://www.python.org/doc/<version>/ which are redirections to
http://docs.python.org/release/<version>/. For example:
http://www.python.org/doc/3.2/ is a redirection to
http://docs.python.org/release/3.2/ (which gives the same content as
http://docs.python.org/py3k/ !).


On the left of http://docs.python.org/, you have links to other versions
of the doc: 2.6, 3.2, 3.3... but not 3.1. The 3.3 doc has links to 2.7
and 3.2 (not 2.6 or 3.1). If there is a rule for choosing the links on
the left, I don't understand it.

Victor


From jnoller at gmail.com  Fri Apr 15 16:04:53 2011
From: jnoller at gmail.com (Jesse Noller)
Date: Fri, 15 Apr 2011 10:04:53 -0400
Subject: [Python-Dev] Releases for recent security vulnerability
In-Reply-To: <20110415145940.56f2aadf@pitrou.net>
References: <BANLkTi=FtJ_oZe-pKnNNANFTDehWrx-J2A@mail.gmail.com>
	<BANLkTikzFbgAWfn2AEWtPbqmDtSR6HN2Rw@mail.gmail.com>
	<BANLkTikJAyHJVffSCf24KfAbFPuOKzoP2g@mail.gmail.com>
	<20110415145940.56f2aadf@pitrou.net>
Message-ID: <BANLkTina-op8bcAHtQKqksyqrC=9tN4Bcg@mail.gmail.com>

On Fri, Apr 15, 2011 at 8:59 AM, Antoine Pitrou <solipsis at pitrou.net> wrote:
> On Fri, 15 Apr 2011 08:36:16 -0400
> Jesse Noller <jnoller at gmail.com> wrote:
>> On Fri, Apr 15, 2011 at 8:30 AM, Brian Curtin <brian.curtin at gmail.com> wrote:
>> >
>> > On Apr 15, 2011 3:46 AM, "Gustavo Narea" <me at gustavonarea.net> wrote:
>> >>
>> >> Hi all,
>> >>
>> >> How come a description of how to exploit a security vulnerability
>> >> comes before a release for said vulnerability? I'm talking about this:
>> >> http://blog.python.org/2011/04/urllib-security-vulnerability-fixed.html
>> >>
>> >> My understanding is that the whole point of asking people not to
>> >> report security vulnerability publicly was to allow time to release a
>> >> fix.
>> >
>> > To me, the fix *was* released. Sure, no fancy installers were generated yet,
>> > but people who are susceptible to this issue 1) now know about it, and 2)
>> > have a way to patch their system *if needed*.
>> >
>> > If that's wrong, I apologize for writing the post too early. On top of that,
>> > it seems I didn't get all of the details right either, so apologies on that
>> > as well.
>>
>> The code is open source: Anyone watching the commits/list know that
>> this issue was fixed. It's better to keep it in the public's eyes, so
>> they know *something was fixed and they should patch* than to rely on
>> people *not* watching these channels.
>>
>> Assume the bad guys already knew about the exploit: We have to spread
>> the knowledge of the fix as far and as wide as we can so that people
>> even know there is an issue, and that it was fixed. This applies to
>> users and *vendors* as well.
>
> True. However, many open source projects take the habit of cutting a
> release when a hole is discovered and fixed. It depends how seriously
> they (and their users) take security. Of course, there are different
> kinds of security issues, more or less severe. I don't know how severe
> the above issue is.
>
> Relying on a vendor distribution (such as a Linux distro, or
> ActiveState) is hopefully enough to get these security updates in time
> without patching anything by hand. I don't think many people compile
> Python for production use, but many do use our Windows installers.
>
> Regards
>
> Antoine.
>

Agreed; but all I'm defending is the post describing what was fixed and
how. Hiding it until we get around to eventually cutting a release
doesn't make the fix, or the vulnerability, go away. We need to issue a
release *quickly* - and we need to notify all of our consumers quickly.

jesse

From marks at dcs.gla.ac.uk  Fri Apr 15 16:10:09 2011
From: marks at dcs.gla.ac.uk (Mark Shannon)
Date: Fri, 15 Apr 2011 15:10:09 +0100
Subject: [Python-Dev] python and super
In-Reply-To: <4DA84863.80307@voidspace.org.uk>
References: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>	<70EF2C52-1D92-4351-884E-52AF76BAAC6D@mac.com>	<4DA70ACA.4070204@voidspace.org.uk>	<20110414153503.F125B3A4063@sparrow.telecommunity.com>	<4DA714C2.7000006@voidspace.org.uk>	<4DA79936.1010300@canterbury.ac.nz>
	<4DA84863.80307@voidspace.org.uk>
Message-ID: <4DA851C1.5050005@dcs.gla.ac.uk>

Michael Foord wrote:
> On 15/04/2011 02:02, Greg Ewing wrote:
>> Michael Foord wrote:
>>> What I was suggesting is that a method not calling super shouldn't 
>>> stop a *sibling* method being called, but could still prevent the 
>>> *parent* method being called.
>> There isn't necessarily a clear distinction between parents
>> and siblings.
>>
>> class A:
>>   ...
>>
>> class B(A):
>>   ...
>>
>> class C(A, B):
>>   ...
>>
>> In C, is A a parent of B or a sibling of B?
>>
It's neither, as C can't exist:

>>> class A: pass
...
>>> class B(A): pass
...
>>> class C(A,B):pass
...
Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
TypeError: Cannot create a consistent method resolution
order (MRO) for bases B, A

> For a super call in C, B is a sibling to A. For a super call in B, A is 
> a parent.
> 
> With the semantics I was suggesting if C calls super, but A doesn't then 
> B would still get called.
> 

A class cannot precede any of its sub-classes in an MRO,
see http://en.wikipedia.org/wiki/C3_linearization

If A is a "parent" (super-class) of B, then B must precede A in any MRO 
that contains them both.
"Siblings", in the context of a single MRO  are thus classes between 
which there is no sub-class/super-class relation.

Mark.

From carl at oddbird.net  Fri Apr 15 17:18:11 2011
From: carl at oddbird.net (Carl Meyer)
Date: Fri, 15 Apr 2011 10:18:11 -0500
Subject: [Python-Dev] python and super
In-Reply-To: <4DA84DD6.20608@voidspace.org.uk>
References: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>	<70EF2C52-1D92-4351-884E-52AF76BAAC6D@mac.com>	<4DA70ACA.4070204@voidspace.org.uk>	<20110414153503.F125B3A4063@sparrow.telecommunity.com>	<825E6CD5-8673-463B-92AE-59677C327C0A@gmail.com>	<4DA71C63.3030809@voidspace.org.uk>	<8A4A58EF-70F7-4F2F-8564-AE8611713986@mac.com>	<BANLkTincGnYrc48jzQfNrc_HFidohJXvwA@mail.gmail.com>	<4DA79E28.2060406@pearwood.info>
	<4DA84DD6.20608@voidspace.org.uk>
Message-ID: <4DA861B3.9010506@oddbird.net>



On 04/15/2011 08:53 AM, Michael Foord wrote:
>> If we treat django's failure to use super as a bug, you want the
>> Python language to work-around that bug so that:
> 
> What you say (that this particular circumstance could be treated as a
> bug in django) is true, 

Just as a side note: if there is a bug demonstrated here, it is in
unittest2, not Django. Django's TestCase subclasses don't even override
__init__ or setUp, so there is no opportunity for them to call or fail
to call super() in either case.

If you re-read Ricardo's original presentation of the case, he correctly
noted that it is unittest2's TestCase which does not call super() and
thus prevents cooperative multiple inheritance. I'm not sure who in this
thread first mis-read his post and called it a possible bug in Django,
but it was a mis-reading which now appears to be self-propagating ;-)

Carl

From fuzzyman at voidspace.org.uk  Fri Apr 15 18:02:56 2011
From: fuzzyman at voidspace.org.uk (Michael Foord)
Date: Fri, 15 Apr 2011 17:02:56 +0100
Subject: [Python-Dev] python and super
In-Reply-To: <4DA861B3.9010506@oddbird.net>
References: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>	<70EF2C52-1D92-4351-884E-52AF76BAAC6D@mac.com>	<4DA70ACA.4070204@voidspace.org.uk>	<20110414153503.F125B3A4063@sparrow.telecommunity.com>	<825E6CD5-8673-463B-92AE-59677C327C0A@gmail.com>	<4DA71C63.3030809@voidspace.org.uk>	<8A4A58EF-70F7-4F2F-8564-AE8611713986@mac.com>	<BANLkTincGnYrc48jzQfNrc_HFidohJXvwA@mail.gmail.com>	<4DA79E28.2060406@pearwood.info>	<4DA84DD6.20608@voidspace.org.uk>
	<4DA861B3.9010506@oddbird.net>
Message-ID: <4DA86C30.3010902@voidspace.org.uk>

On 15/04/2011 16:18, Carl Meyer wrote:
>
> On 04/15/2011 08:53 AM, Michael Foord wrote:
>>> If we treat django's failure to use super as a bug, you want the
>>> Python language to work-around that bug so that:
>> What you say (that this particular circumstance could be treated as a
>> bug in django) is true,
> Just as a side note: if there is a bug demonstrated here, it is in
> unittest2, not Django. Django's TestCase subclasses don't even override
> __init__ or setUp, so there is no opportunity for them to call or fail
> to call super() in either case.
>
> If you re-read Ricardo's original presentation of the case, he correctly
> noted that it is unittest2's TestCase which does not call super() and
> thus prevents cooperative multiple inheritance. I'm not sure who in this
> thread first mis-read his post and called it a possible bug in Django,
> but it was a mis-reading which now appears to be self-propagating ;-)
>
Well yes, but it is also a bug in the copy of unittest2 embedded in 
django - so whilst it can be fixed in unittest2 (simply deleting the 
setUp and tearDown methods which do nothing but override 
unittest.TestCase.setUp and tearDown) it *also* needs to be fixed in 
django.

This particular issue does illustrate the problem well though - the 
methods in unittest2 don't call up to their parent class (which is fine 
because those methods are empty), but in not calling up also they 
prevent sibling methods being called in a multiple inheritance situation.

So for those who have been saying that not wanting to call up to parents 
is a valid use case, yes I quite agree.  But you have to be aware that 
because of the semantics of super, not calling up to your parents 
basically prevents those methods being used in the presence of multiple 
inheritance.

All the best,

Michael Foord

> Carl
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk


-- 
http://www.voidspace.org.uk/

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing http://www.sqlite.org/different.html


From status at bugs.python.org  Fri Apr 15 18:07:20 2011
From: status at bugs.python.org (Python tracker)
Date: Fri, 15 Apr 2011 18:07:20 +0200 (CEST)
Subject: [Python-Dev] Summary of Python tracker Issues
Message-ID: <20110415160720.D9D7A1CB59@psf.upfronthosting.co.za>


ACTIVITY SUMMARY (2011-04-08 - 2011-04-15)
Python tracker at http://bugs.python.org/

To view or respond to any of the issues listed below, click on the issue.
Do NOT respond to this message.

Issues counts and deltas:
  open    2734 ( -7)
  closed 20899 (+54)
  total  23633 (+47)

Open issues with patches: 1185 


Issues opened (31)
==================

#9670: Exceed Recursion Limit in Thread
http://bugs.python.org/issue9670  reopened by ned.deily

#11803: Memory leak in sub-interpreters
http://bugs.python.org/issue11803  reopened by swapnil

#11805: package_data only allows one glob per-package
http://bugs.python.org/issue11805  opened by erik.bray

#11807: Documentation of add_subparsers lacks information about parame
http://bugs.python.org/issue11807  opened by gruszczy

#11811: ssl.get_server_certificate() does not work for IPv6 addresses
http://bugs.python.org/issue11811  opened by pwouters

#11812: transient test_telnetlib failure
http://bugs.python.org/issue11812  opened by pitrou

#11813: inspect.getattr_static doesn't get module attributes
http://bugs.python.org/issue11813  opened by Trundle

#11816: Refactor the dis module to provide better building blocks for 
http://bugs.python.org/issue11816  opened by eltoder

#11820: idle3 shell os.system swallows shell command output
http://bugs.python.org/issue11820  opened by Thekent

#11822: Improve disassembly to show embedded code objects
http://bugs.python.org/issue11822  opened by rhettinger

#11823: disassembly needs argument counts on calls with keyword args
http://bugs.python.org/issue11823  opened by rhettinger

#11824: freeze.py broken due to ABI flags
http://bugs.python.org/issue11824  opened by Trundle

#11826: Leak in atexitmodule
http://bugs.python.org/issue11826  opened by skrah

#11827: mention of list2cmdline() in docs of subprocess.Popen
http://bugs.python.org/issue11827  opened by eli.bendersky

#11828: startswith and endswith don't accept None as slice index
http://bugs.python.org/issue11828  opened by hkBst

#11829: inspect.getattr_static code execution with meta-metaclasses
http://bugs.python.org/issue11829  opened by Trundle

#11831: "python -w" causes "no Python documentation found" error when 
http://bugs.python.org/issue11831  opened by susam

#11832: Add option to pause regrtest to attach a debugger
http://bugs.python.org/issue11832  opened by brian.curtin

#11834: wrong module installation dir on Windows
http://bugs.python.org/issue11834  opened by techtonik

#11835: python (x64) ctypes incorrectly pass structures parameter
http://bugs.python.org/issue11835  opened by Abraham.Soedjito

#11836: multiprocessing.queues.SimpleQueue is undocumented
http://bugs.python.org/issue11836  opened by pitrou

#11838: IDLE: make interactive code runnable.
http://bugs.python.org/issue11838  opened by terry.reedy

#11839: argparse: unexpected behavior of default for FileType('w')
http://bugs.python.org/issue11839  opened by Paolo.Elvati

#11841: Bug in the verson comparison
http://bugs.python.org/issue11841  opened by tarek

#11842: slice.indices with negative step and default stop
http://bugs.python.org/issue11842  opened by durban

#11844: Update json to upstream simplejson latest release
http://bugs.python.org/issue11844  opened by sandro.tosi

#11846: Implementation question for (-5) - 256 caching, and doc update
http://bugs.python.org/issue11846  opened by antlong

#11847: OSError importing antigravity module
http://bugs.python.org/issue11847  opened by ackounts

#11849: ElementTree memory leak
http://bugs.python.org/issue11849  opened by kaifeng

#11851: Flushing the standard input causes an error
http://bugs.python.org/issue11851  opened by pasma10 at concepts.nl

#11837: smtplib._quote_periods triggers spurious type error in re.sub
http://bugs.python.org/issue11837  opened by axel



Most recent 15 issues with no replies (15)
==========================================

#11839: argparse: unexpected behavior of default for FileType('w')
http://bugs.python.org/issue11839

#11838: IDLE: make interactive code runnable.
http://bugs.python.org/issue11838

#11837: smtplib._quote_periods triggers spurious type error in re.sub
http://bugs.python.org/issue11837

#11836: multiprocessing.queues.SimpleQueue is undocumented
http://bugs.python.org/issue11836

#11829: inspect.getattr_static code execution with meta-metaclasses
http://bugs.python.org/issue11829

#11826: Leak in atexitmodule
http://bugs.python.org/issue11826

#11824: freeze.py broken due to ABI flags
http://bugs.python.org/issue11824

#11813: inspect.getattr_static doesn't get module attributes
http://bugs.python.org/issue11813

#11812: transient test_telnetlib failure
http://bugs.python.org/issue11812

#11804: expat parser not xml 1.1 (breaks xmlrpclib)
http://bugs.python.org/issue11804

#11784: multiprocessing.Process.join: timeout argument doesn't specify
http://bugs.python.org/issue11784

#11781: test/test_email directory does not get installed by 'make inst
http://bugs.python.org/issue11781

#11780: email.encoders are broken
http://bugs.python.org/issue11780

#11769: test_notify() of test_threading hang on "x86 XP-4 3.x":
http://bugs.python.org/issue11769

#11758: increase xml.dom.minidom test coverage
http://bugs.python.org/issue11758



Most recent 15 issues waiting for review (15)
=============================================

#11841: Bug in the verson comparison
http://bugs.python.org/issue11841

#11835: python (x64) ctypes incorrectly pass structures parameter
http://bugs.python.org/issue11835

#11832: Add option to pause regrtest to attach a debugger
http://bugs.python.org/issue11832

#11831: "python -w" causes "no Python documentation found" error when 
http://bugs.python.org/issue11831

#11829: inspect.getattr_static code execution with meta-metaclasses
http://bugs.python.org/issue11829

#11828: startswith and endswith don't accept None as slice index
http://bugs.python.org/issue11828

#11827: mention of list2cmdline() in docs of subprocess.Popen
http://bugs.python.org/issue11827

#11826: Leak in atexitmodule
http://bugs.python.org/issue11826

#11824: freeze.py broken due to ABI flags
http://bugs.python.org/issue11824

#11823: disassembly needs argument counts on calls with keyword args
http://bugs.python.org/issue11823

#11816: Refactor the dis module to provide better building blocks for 
http://bugs.python.org/issue11816

#11813: inspect.getattr_static doesn't get module attributes
http://bugs.python.org/issue11813

#11807: Documentation of add_subparsers lacks information about parame
http://bugs.python.org/issue11807

#11802: filecmp.cmp needs a documented way to clear cache
http://bugs.python.org/issue11802

#11800: regrtest --timeout: apply the timeout on a function, not on th
http://bugs.python.org/issue11800



Top 10 most discussed issues (10)
=================================

#11783: email parseaddr and formataddr should be IDNA aware
http://bugs.python.org/issue11783  15 msgs

#11828: startswith and endswith don't accept None as slice index
http://bugs.python.org/issue11828  14 msgs

#11827: mention of list2cmdline() in docs of subprocess.Popen
http://bugs.python.org/issue11827  11 msgs

#11816: Refactor the dis module to provide better building blocks for 
http://bugs.python.org/issue11816  10 msgs

#8326: Cannot import name SemLock on Ubuntu
http://bugs.python.org/issue8326   8 msgs

#11802: filecmp.cmp needs a documented way to clear cache
http://bugs.python.org/issue11802   8 msgs

#10496: "import site failed" when Python can't find home directory (sy
http://bugs.python.org/issue10496   7 msgs

#11277: test_zlib.test_big_buffer crashes under BSD (Mac OS X and Free
http://bugs.python.org/issue11277   7 msgs

#9544: xdrlib.Packer().pack_fstring throws a TypeError when called wi
http://bugs.python.org/issue9544   6 msgs

#11776: types.MethodType() params and usage is not documented
http://bugs.python.org/issue11776   6 msgs



Issues closed (50)
==================

#2650: re.escape should not escape underscore
http://bugs.python.org/issue2650  closed by ezio.melotti

#3056: Simplify the Integral ABC
http://bugs.python.org/issue3056  closed by rhettinger

#4783: document that json.load/dump can't be used twice on the same
http://bugs.python.org/issue4783  closed by ezio.melotti

#4877: xml.parsers.expat ParseFile() causes segmentation fault when p
http://bugs.python.org/issue4877  closed by ezio.melotti

#5057: Unicode-width dependent optimization leads to non-portable pyc
http://bugs.python.org/issue5057  closed by ezio.melotti

#8428: buildbot: test_multiprocessing timeout (test_notify_all? test_
http://bugs.python.org/issue8428  closed by pitrou

#8429: buildbot: test_subprocess timeout
http://bugs.python.org/issue8429  closed by haypo

#8431: buildbot: hung on ARM Debian
http://bugs.python.org/issue8431  closed by haypo

#8448: buildbot: test_subprocess failure (test_no_leaking, Broken pip
http://bugs.python.org/issue8448  closed by haypo

#8776: Bytes version of sys.argv
http://bugs.python.org/issue8776  closed by haypo

#9233: json.load failure when C optimizations aren't built
http://bugs.python.org/issue9233  closed by ezio.melotti

#9904: Cosmetic issues that may warrant a fix in symtable.h/c
http://bugs.python.org/issue9904  closed by eli.bendersky

#10019: json.dumps with indent = 0 not adding newlines
http://bugs.python.org/issue10019  closed by r.david.murray

#10121: test_multiprocessing stuck in test_make_pool if run in a loop
http://bugs.python.org/issue10121  closed by sandro.tosi

#11186: pydoc: HTMLDoc.index() doesn't support PEP 383
http://bugs.python.org/issue11186  closed by haypo

#11369: Add caching for the isEnabledFor() computation
http://bugs.python.org/issue11369  closed by vinay.sajip

#11388: Implement MutableSequence.clear()
http://bugs.python.org/issue11388  closed by eli.bendersky

#11402: _PyUnicode_Init leaks a little memory once
http://bugs.python.org/issue11402  closed by skrah

#11467: urlparse.urlsplit() regression for paths consisting of digits
http://bugs.python.org/issue11467  closed by orsenthil

#11474: url2pathname() handling of '/C|/' on Windows
http://bugs.python.org/issue11474  closed by orsenthil

#11506: b'' += gives SystemError instead of SyntaxError
http://bugs.python.org/issue11506  closed by python-dev

#11593: Add encoding parameter to logging.basicConfig
http://bugs.python.org/issue11593  closed by vinay.sajip

#11650: Faulty RESTART/EINTR handling in Parser/myreadline.c
http://bugs.python.org/issue11650  closed by sdaoden

#11652: urlib{, 2} returns a pair of integers as the content-length va
http://bugs.python.org/issue11652  closed by orsenthil

#11684: Add email.parser.BytesHeaderParser
http://bugs.python.org/issue11684  closed by r.david.murray

#11703: Bug in python >= 2.7 with urllib2 fragment
http://bugs.python.org/issue11703  closed by orsenthil

#11718: Teach IDLE's open-module command to find packages
http://bugs.python.org/issue11718  closed by rhettinger

#11719: test_msilib skip unexpected on non-Windows platforms
http://bugs.python.org/issue11719  closed by rosslagerwall

#11740: difflib html diff takes extremely long
http://bugs.python.org/issue11740  closed by benjamin.peterson

#11747: unified_diff function product incorrect range information
http://bugs.python.org/issue11747  closed by rhettinger

#11772: email header wrapping edge case failure
http://bugs.python.org/issue11772  closed by r.david.murray

#11782: email.generator.Generator.flatten() fails
http://bugs.python.org/issue11782  closed by r.david.murray

#11806: Missing 2 hyphens in the docs
http://bugs.python.org/issue11806  closed by rhettinger

#11808: $MACOSX_DEPLOYMENT_TARGET mismatch ... during configure
http://bugs.python.org/issue11808  closed by ned.deily

#11809: Rietveld Code Review Tool can't handle well-known controls
http://bugs.python.org/issue11809  closed by georg.brandl

#11810: _socket fails to build on OpenIndiana
http://bugs.python.org/issue11810  closed by pitrou

#11814: possible typo in multiprocessing.Pool._terminate
http://bugs.python.org/issue11814  closed by pitrou

#11815: Simplifications in concurrent.futures
http://bugs.python.org/issue11815  closed by pitrou

#11817: berkeley db 5.1 support
http://bugs.python.org/issue11817  closed by r.david.murray

#11818: tempfile.TemporaryFile example in docs doesnt work
http://bugs.python.org/issue11818  closed by rosslagerwall

#11819: '-m unittest' should not pretend it works on Python 2.5/2.6
http://bugs.python.org/issue11819  closed by georg.brandl

#11821: smtplib should provide a means to validate a remote server ssl
http://bugs.python.org/issue11821  closed by pitrou

#11825: faulthandler: failure without threads
http://bugs.python.org/issue11825  closed by haypo

#11830: "import decimal" fails in Turkish locale
http://bugs.python.org/issue11830  closed by rhettinger

#11833: ord() doesn't show complete UNICODE
http://bugs.python.org/issue11833  closed by ezio.melotti

#11840: Improvements to c-api/unicode documentation
http://bugs.python.org/issue11840  closed by ezio.melotti

#11843: distutils doc: duplicate line in table
http://bugs.python.org/issue11843  closed by ezio.melotti

#11845: Refcounting error in compute_slice_indices in rangeobject.c
http://bugs.python.org/issue11845  closed by ezio.melotti

#11848: Comment for random.betavariate is intriguing and incomplete
http://bugs.python.org/issue11848  closed by ezio.melotti

#11850: mktime - OverflowError: mktime argument out of range - on very
http://bugs.python.org/issue11850  closed by haypo

From bob at redivi.com  Fri Apr 15 18:31:02 2011
From: bob at redivi.com (Bob Ippolito)
Date: Fri, 15 Apr 2011 09:31:02 -0700
Subject: [Python-Dev] Status of json (simplejson) in cpython
In-Reply-To: <52676033-A4C9-43BD-8B34-18A487B9B2E4@gmail.com>
References: <BANLkTind0dSsoesD+tZ-EAqZjpr2m-CuCw@mail.gmail.com>
	<52676033-A4C9-43BD-8B34-18A487B9B2E4@gmail.com>
Message-ID: <BANLkTi=Cpjp7aBnXZ8oUf7M_uQma+evpoQ@mail.gmail.com>

On Thu, Apr 14, 2011 at 2:29 PM, Raymond Hettinger
<raymond.hettinger at gmail.com> wrote:
>
> On Apr 14, 2011, at 12:22 PM, Sandro Tosi wrote:
>
>> The version we have in cpython of json is simplejson 2.0.9 highly
>> patched (either because it was converted to py3k, and because of the
>> normal flow of issues/bugfixes) while upstream have already released
>> 2.1.13 .
>>
>> Their 2 roads had diverged a lot, and since this blocks any further
>> update of cpython's json from upstream, I'd like to close this gap.
>
> Are you proposing updates to the Python 3.3 json module
> to include newer features like use_decimal and changing
> the indent argument from an integer to a string?

https://github.com/simplejson/simplejson/blob/master/CHANGES.txt

>> - what are we going to do in the long run?
>
> If Bob shows no interest in Python 3, then
> the code bases will probably continue to diverge.

I don't have any real interest in Python 3, but if someone contributes
the code to make simplejson work in Python 3 I'm willing to apply the
patches and run the tests against any future changes. The porting work to
make it suitable for the standard library at that point should be
something that can be automated since it will be moving some files
around and changing the string simplejson to json in a whole bunch of
places.
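
Purely as a sketch of the mechanical part of that (the paths and the
naive in-place rewrite below are assumptions, not how an actual merge
would be done):

import os, shutil

def vendor(src='simplejson', dst='Lib/json'):
    # Copy the upstream package and rename the package string throughout.
    shutil.rmtree(dst, ignore_errors=True)
    shutil.copytree(src, dst)
    for dirpath, dirnames, filenames in os.walk(dst):
        for name in filenames:
            if name.endswith('.py'):
                path = os.path.join(dirpath, name)
                with open(path) as f:
                    text = f.read()
                with open(path, 'w') as f:
                    f.write(text.replace('simplejson', 'json'))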

> Since the JSON spec is set in stone, the changes
> will mostly be about API (indentation, object conversion, etc)
> and optimization. ?I presume the core parsing logic won't
> be changing much.

Actually the core parsing logic is very different (and MUCH faster),
which is why the merge is tricky. There's the potential for it to
change more in the future, and there's definitely more room for
optimization; probably not in the pure Python parser, but in the C one.

-bob

From gzlist at googlemail.com  Fri Apr 15 18:49:17 2011
From: gzlist at googlemail.com (Martin (gzlist))
Date: Fri, 15 Apr 2011 17:49:17 +0100
Subject: [Python-Dev] Test cases not garbage collected after run
In-Reply-To: <4DA6DBDF.6000202@voidspace.org.uk>
References: <BANLkTimKhJVdNHYfCh9Hu5NLkFo1ie8x5A@mail.gmail.com>
	<4D9DEB19.10307@voidspace.org.uk>
	<BANLkTineVvCOJfbgL5-UqiGauSvQ0YYoOQ@mail.gmail.com>
	<4D9E1AA4.4020607@voidspace.org.uk>
	<BANLkTimd=JpjsbhQe1NkCNs2fL9nZ9T3mg@mail.gmail.com>
	<4DA6DBDF.6000202@voidspace.org.uk>
Message-ID: <BANLkTi=nwqYS468F7VN+XBH_dZzKt6iFtA@mail.gmail.com>

On 14/04/2011, Michael Foord <fuzzyman at voidspace.org.uk> wrote:
> I'd be interested to know what is keeping the tests alive even when the
> test suite isn't. As far as I know there is nothing else in unittest
> that would do that.

The main cause is some handy code for collecting and filtering tests
by name, which unintentionally keeps alive a list outside the
TestSuite instance.

There's also the problem of reference cycles involving exc_info, bound
methods, and so on that make the lifetimes of test cases
unpredictable. That's mostly a problem for systems with a very limited
allotment of certain resources such as socket handles. However it also
makes ensuring the code doesn't regress back to leaking-the-universe
more complicated as tests may still survive past the pop.
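
For what it's worth, a minimal sketch of the usual mitigation (this is
not the actual Bazaar or unittest code, and it reaches into the
internal _tests list): the suite drops its reference to each case as
soon as it has run, so the case becomes collectable once nothing else
holds it.

import unittest

class ReferenceDroppingSuite(unittest.TestSuite):
    def run(self, result):
        for index, test in enumerate(self._tests):
            if result.shouldStop:
                break
            test(result)
            # Drop the suite's reference; unless something else (exc_info
            # cycles, a stray list) still holds the case, it can now be
            # garbage collected.
            self._tests[index] = None
        return result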

> It's either a general problem that unittest can fix, or it is a problem
> *caused* by the bazaar test suite and should be fixed there. Bazaar does
> some funky stuff copying tests to run them with different backends, so
> it is possible that this is the cause of the problem (and it isn't a
> general problem).

The fact that it's easy to accidentally keep objects alive is a general
problem. If every project that writes its own little test loader
risks reverting to immortal cases, that's not really progress. The
Bazaar example is a warning because the intention was the same as
yours, but it ended up being a behaviour regression that went unnoticed
by most of the developers while crippling others. And as John
mentioned, the fix hasn't yet landed, mostly because the hack is good
enough for me and the right thing is too complicated.

Martin

From merwok at netwok.org  Fri Apr 15 19:32:18 2011
From: merwok at netwok.org (=?UTF-8?Q?=C3=89ric_Araujo?=)
Date: Fri, 15 Apr 2011 19:32:18 +0200
Subject: [Python-Dev] Adding test case methods to TestCase subclasses
In-Reply-To: <87zkns2zg7.fsf_-_@benfinney.id.au>
References: "\"<BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>	<70EF2C52-1D92-4351-884E-52AF76BAAC6D@mac.com>	<4DA70ACA.4070204@voidspace.org.uk>	<20110414153503.F125B3A4063@sparrow.telecommunity.com>	<825E6CD5-8673-463B-92AE-59677C327C0A@gmail.com>	<4DA71C63.3030809@voidspace.org.uk>	<8A4A58EF-70F7-4F2F-8564-AE8611713986@mac.com>"
	<BANLkTincGnYrc48jzQfNrc_HFidohJXvwA@mail.gmail.com>"
	<4DA77C92.80007@stoneleaf.us> <87zkns2zg7.fsf_-_@benfinney.id.au>
Message-ID: <bc744dc75f0eda9f59f264c77cd732f3@netwok.org>

 Hi,

 I just wanted to clear a slight misunderstanding:

> How can composition add test cases detectable by Python 2's
> ‘unittest’
> and Python 3's ‘unittest2’?

 The package shipped in the stdlib is named unittest in all Python
 versions.  The codebase that has seen a lot of improvements thanks to
 Michael Foord is in 2.7 and 3.2 (some bits already in 3.1, I think).
 The standalone release of that improved codebase is called unittest2.

 Cheers

From ncoghlan at gmail.com  Fri Apr 15 20:07:43 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sat, 16 Apr 2011 04:07:43 +1000
Subject: [Python-Dev] python and super
In-Reply-To: <4DA84863.80307@voidspace.org.uk>
References: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>
	<70EF2C52-1D92-4351-884E-52AF76BAAC6D@mac.com>
	<4DA70ACA.4070204@voidspace.org.uk>
	<20110414153503.F125B3A4063@sparrow.telecommunity.com>
	<4DA714C2.7000006@voidspace.org.uk>
	<4DA79936.1010300@canterbury.ac.nz>
	<4DA84863.80307@voidspace.org.uk>
Message-ID: <BANLkTi=9=8z0Ezk8YWH04XHLULO3A_L9yQ@mail.gmail.com>

On Fri, Apr 15, 2011 at 11:30 PM, Michael Foord
<fuzzyman at voidspace.org.uk> wrote:
> On 15/04/2011 02:02, Greg Ewing wrote:
>> There isn't necessarily a clear distinction between parents
>> and siblings.
>>
>> class A:
>> ?...
>>
>> class B(A):
>> ?...
>>
>> class C(A, B):
>> ?...
>>
>> In C, is A a parent of B or a sibling of B?

As has been pointed out elsewhere in the thread, that definition of C
isn't allowed :)

>>> class C(A, B): pass
...
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: Cannot create a consistent method resolution
order (MRO) for bases A, B

Once you turn the order of definition around (class C(B, A)) it
becomes clear that A remains B's parent regardless of the existence of
C:

>>> C.__mro__
(<class '__main__.C'>, <class '__main__.B'>, <class '__main__.A'>,
<class 'object'>)

The whole discussion of trying to distinguish parents from siblings
when invoking super() *doesn't make sense*. The entire *point* of the
multiple inheritance handling is to linearise the type hierarchy into
a method resolution order that consists of a single chain of classes
that are called in sequence (with any class in the chain allowed to
terminate the sequence at any time).

Cooperative super() calls are exactly that: cooperative. Just as
cooperative threading breaks down if one task doesn't play by the
rules, such is also the case with cooperative super calls.

There are two ways to handle this:

- Option 1 is to tailor your inheritance hierarchy such that any
"non-cooperative" classes always appear on the right-most end of the
MRO (e.g. as "A" and "object" do in the example above). This can be
tricky, but is doable if there is just the one recalcitrant class
causing problems (e.g. I wouldn't be surprised to hear that a simple
rearrangement to "class MyTestCase(Mixin1, Mixin2, TestCase)"
sufficiently rearranged the "MyTestCase" MRO to make this problem go
away).

- Option 2 is to do as Raymond suggests: noncooperative classes are
incorporated via "has-a" composition (potentially as a proxy object)
rather than "is-a" inheritance. For any methods which require
cooperative calls, the cooperative wrapper provides that behaviour,
while delegating the heavy lifting to the underlying object.

Essentially, any cooperative hierarchy requires a base class that
defines the rules of cooperation and provides "no-op" termination
methods for any cooperative calls. Non-cooperative classes must either
be parents of that base class, or else they must be wrapped as
described in Option 2.
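
A minimal sketch of what that looks like in practice (illustrative
names only; none of this is from unittest or django):

class CooperativeBase:
    def setup(self):
        pass                        # no-op terminator for the chain

class Mixin1(CooperativeBase):
    def setup(self):
        print('Mixin1.setup')
        super().setup()

class Legacy:                       # non-cooperative: never calls super()
    def setup(self):
        print('Legacy.setup')

class Combined(Mixin1, CooperativeBase):
    def __init__(self):
        self._legacy = Legacy()     # has-a rather than is-a (Option 2)
    def setup(self):
        print('Combined.setup')
        super().setup()             # runs the cooperative chain
        self._legacy.setup()        # explicit delegation to the wrapped object

Combined().setup()   # Combined.setup, Mixin1.setup, Legacy.setup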

Cheers,
Nick.

-- 
Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia

From ethan at stoneleaf.us  Fri Apr 15 21:40:26 2011
From: ethan at stoneleaf.us (Ethan Furman)
Date: Fri, 15 Apr 2011 12:40:26 -0700
Subject: [Python-Dev] Adding test case methods to TestCase subclasses
In-Reply-To: <87zkns2zg7.fsf_-_@benfinney.id.au>
References: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>	<70EF2C52-1D92-4351-884E-52AF76BAAC6D@mac.com>	<4DA70ACA.4070204@voidspace.org.uk>	<20110414153503.F125B3A4063@sparrow.telecommunity.com>	<825E6CD5-8673-463B-92AE-59677C327C0A@gmail.com>	<4DA71C63.3030809@voidspace.org.uk>	<8A4A58EF-70F7-4F2F-8564-AE8611713986@mac.com>	<BANLkTincGnYrc48jzQfNrc_HFidohJXvwA@mail.gmail.com>	<4DA77C92.80007@stoneleaf.us>
	<87zkns2zg7.fsf_-_@benfinney.id.au>
Message-ID: <4DA89F2A.1070809@stoneleaf.us>

Ben Finney wrote:
> Ethan Furman <ethan at stoneleaf.us> writes:
> 
>> Here we have django's TestCase that does *not* want to call
>> unittest2.TestCase (assuming that's not a bug), but it gets called
>> anyway because the Mixin3 sibling has it as a base class.  So does
>> this mean that TestCase and Mixin3 just don't play well together?
>>
>> Maybe composition instead of inheritance is the answer (in this case,
>> anyway ;).
> 
> TestCase subclasses is a multiple-inheritance use case that I share. The
> mix-ins add test cases (methods named ‘test_’ on the mix-in class) to
> the TestCase subclass. I would prefer not to use multiple inheritance
> for this if it can be achieved in a better way.
> 
> How can composition add test cases detectable by Python's ‘unittest’?

Metaclasses, if that's an option...

8<-------------------------------------------------------------
import unittest
from composite import Composite  # python 3 only

class Spam():
     def test_spam_01(self):
         print('testing spam_01')
     def test_spam_02(self):
         print('testing spam_02')

class Eggs():
     def test_eggs_01(self):
         print('testing eggs_01')
     def test_eggs_02(self):
         print('testing eggs_02')

class TestAll(
         unittest.TestCase,
         metaclass=Composite,
         parts=(Spam, Eggs)):
     def setUp(self):
         print('Setting up...')
     def tearDown(self):
         print('Tearing down...')
     def test_something(self):
         print('testing something')

if __name__ == '__main__':
     unittest.main()
8<-------------------------------------------------------------

or a class decorator

8<-------------------------------------------------------------
class Compose(object):  # python 3 only
     def __init__(self, *parts):
         self.parts = parts
     def __call__(self, func):
         for part in self.parts:
             for attr in dir(part):
                 if attr[:2] == attr[-2:] == '__':
                     continue
                 setattr(func, attr, getattr(part, attr))
         return func

import unittest

class Spam():
     def test_spam_01(self):
         print('testing spam_01')
     def test_spam_02(self):
         print('testing spam_02')

class Eggs():
     def test_eggs_01(self):
         print('testing eggs_01')
     def test_eggs_02(self):
         print('testing eggs_02')

@Compose(Spam, Eggs)
class TestAll(unittest.TestCase):
     def setUp(self):
         print('Setting up...')
     def tearDown(self):
         print('Tearing down...')
     def test_something(self):
         print('testing something')

if __name__ == '__main__':
     unittest.main()
8<-------------------------------------------------------------

The decorator, as written, doesn't work on py2, and doesn't do any error 
checking (so overwrites methods in the final class) -- but I'm sure it 
could be spiffed up.

~Ethan~

From solipsis at pitrou.net  Fri Apr 15 22:44:37 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Fri, 15 Apr 2011 22:44:37 +0200
Subject: [Python-Dev] Status of json (simplejson) in cpython
References: <BANLkTind0dSsoesD+tZ-EAqZjpr2m-CuCw@mail.gmail.com>
	<52676033-A4C9-43BD-8B34-18A487B9B2E4@gmail.com>
	<BANLkTi=Cpjp7aBnXZ8oUf7M_uQma+evpoQ@mail.gmail.com>
Message-ID: <20110415224437.18fe63cb@pitrou.net>

> 
> > Since the JSON spec is set in stone, the changes
> > will mostly be about API (indentation, object conversion, etc)
> > and optimization. ?I presume the core parsing logic won't
> > be changing much.
> 
> Actually the core parsing logic is very different (and MUCH faster),

Are you talking about the Python logic or the C logic?



From bob at redivi.com  Fri Apr 15 23:18:04 2011
From: bob at redivi.com (Bob Ippolito)
Date: Fri, 15 Apr 2011 14:18:04 -0700
Subject: [Python-Dev] Status of json (simplejson) in cpython
In-Reply-To: <20110415224437.18fe63cb@pitrou.net>
References: <BANLkTind0dSsoesD+tZ-EAqZjpr2m-CuCw@mail.gmail.com>
	<52676033-A4C9-43BD-8B34-18A487B9B2E4@gmail.com>
	<BANLkTi=Cpjp7aBnXZ8oUf7M_uQma+evpoQ@mail.gmail.com>
	<20110415224437.18fe63cb@pitrou.net>
Message-ID: <BANLkTimODuaPj8B-cH+8UTr+BfEEev0RrQ@mail.gmail.com>

On Friday, April 15, 2011, Antoine Pitrou <solipsis at pitrou.net> wrote:
>>
>> > Since the JSON spec is set in stone, the changes
>> > will mostly be about API (indentation, object conversion, etc)
>> > and optimization. ?I presume the core parsing logic won't
>> > be changing much.
>>
>> Actually the core parsing logic is very different (and MUCH faster),
>
> Are you talking about the Python logic or the C logic?

Both, actually. IIRC simplejson in pure python typically beats json
with its C extension.

-bob

From solipsis at pitrou.net  Fri Apr 15 23:20:53 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Fri, 15 Apr 2011 23:20:53 +0200
Subject: [Python-Dev] Status of json (simplejson) in cpython
In-Reply-To: <BANLkTimODuaPj8B-cH+8UTr+BfEEev0RrQ@mail.gmail.com>
References: <BANLkTind0dSsoesD+tZ-EAqZjpr2m-CuCw@mail.gmail.com>
	<52676033-A4C9-43BD-8B34-18A487B9B2E4@gmail.com>
	<BANLkTi=Cpjp7aBnXZ8oUf7M_uQma+evpoQ@mail.gmail.com>
	<20110415224437.18fe63cb@pitrou.net>
	<BANLkTimODuaPj8B-cH+8UTr+BfEEev0RrQ@mail.gmail.com>
Message-ID: <1302902453.3723.1.camel@localhost.localdomain>

On Friday 15 April 2011 at 14:18 -0700, Bob Ippolito wrote:
> On Friday, April 15, 2011, Antoine Pitrou <solipsis at pitrou.net> wrote:
> >>
> >> > Since the JSON spec is set in stone, the changes
> >> > will mostly be about API (indentation, object conversion, etc)
> >> > and optimization.  I presume the core parsing logic won't
> >> > be changing much.
> >>
> >> Actually the core parsing logic is very different (and MUCH faster),
> >
> > Are you talking about the Python logic or the C logic?
> 
> Both, actually. IIRC simplejson in pure python typically beats json
> with it's C extension.

Really? It would be nice to see some concrete benchmarks against both
repo tips.

Regards

Antoine.



From ben+python at benfinney.id.au  Fri Apr 15 23:25:28 2011
From: ben+python at benfinney.id.au (Ben Finney)
Date: Sat, 16 Apr 2011 07:25:28 +1000
Subject: [Python-Dev] Adding test case methods to TestCase subclasses
References: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>
	<70EF2C52-1D92-4351-884E-52AF76BAAC6D@mac.com>
	<4DA70ACA.4070204@voidspace.org.uk>
	<20110414153503.F125B3A4063@sparrow.telecommunity.com>
	<825E6CD5-8673-463B-92AE-59677C327C0A@gmail.com>
	<4DA71C63.3030809@voidspace.org.uk>
	<8A4A58EF-70F7-4F2F-8564-AE8611713986@mac.com>
	<BANLkTincGnYrc48jzQfNrc_HFidohJXvwA@mail.gmail.com>
	<4DA77C92.80007@stoneleaf.us> <87zkns2zg7.fsf_-_@benfinney.id.au>
	<4DA89F2A.1070809@stoneleaf.us>
Message-ID: <87ipuf2omv.fsf@benfinney.id.au>

Ethan Furman <ethan at stoneleaf.us> writes:

> Ben Finney wrote:
> > TestCase subclasses is a multiple-inheritance use case that I share.
> > The mix-ins add test cases (methods named ?test_? on the mix-in
> > class) to the TestCase subclass. I would prefer not to use multiple
> > inheritance for this if it can be achieved in a better way.
> >
> > How can composition add test cases detectable by Python's ?unittest??
>
> Metaclasses, if's that an option...
[?]
> or a class decorator
[?]

Both interesting, thank you. But Python 3 isn't an option for several
projects where I'd like to use this.

-- 
 \     ?What is needed is not the will to believe but the will to find |
  `\       out, which is the exact opposite.? ?Bertrand Russell, _Free |
_o__)                           Thought and Official Propaganda_, 1928 |
Ben Finney


From bob at redivi.com  Fri Apr 15 23:27:04 2011
From: bob at redivi.com (Bob Ippolito)
Date: Fri, 15 Apr 2011 14:27:04 -0700
Subject: [Python-Dev] Status of json (simplejson) in cpython
In-Reply-To: <1302902453.3723.1.camel@localhost.localdomain>
References: <BANLkTind0dSsoesD+tZ-EAqZjpr2m-CuCw@mail.gmail.com>
	<52676033-A4C9-43BD-8B34-18A487B9B2E4@gmail.com>
	<BANLkTi=Cpjp7aBnXZ8oUf7M_uQma+evpoQ@mail.gmail.com>
	<20110415224437.18fe63cb@pitrou.net>
	<BANLkTimODuaPj8B-cH+8UTr+BfEEev0RrQ@mail.gmail.com>
	<1302902453.3723.1.camel@localhost.localdomain>
Message-ID: <BANLkTimD-GtPg1h09Q0gRP-jxzjZuiLW-w@mail.gmail.com>

On Fri, Apr 15, 2011 at 2:20 PM, Antoine Pitrou <solipsis at pitrou.net> wrote:
> On Friday 15 April 2011 at 14:18 -0700, Bob Ippolito wrote:
>> On Friday, April 15, 2011, Antoine Pitrou <solipsis at pitrou.net> wrote:
>> >>
>> >> > Since the JSON spec is set in stone, the changes
>> >> > will mostly be about API (indentation, object conversion, etc)
>> >> > and optimization. ?I presume the core parsing logic won't
>> >> > be changing much.
>> >>
>> >> Actually the core parsing logic is very different (and MUCH faster),
>> >
>> > Are you talking about the Python logic or the C logic?
>>
>> Both, actually. IIRC simplejson in pure python typically beats json
>> with it's C extension.
>
> Really? It would be nice to see some concrete benchmarks against both
> repo tips.

Maybe in a few weeks or months when I have time to finish up the
benchmarks that I was working on... but it should be pretty easy for
anyone to show that the version in CPython is very slow (and uses a
lot more memory) in comparison to simplejson.

-bob

From ethan at stoneleaf.us  Sat Apr 16 00:56:14 2011
From: ethan at stoneleaf.us (Ethan Furman)
Date: Fri, 15 Apr 2011 15:56:14 -0700
Subject: [Python-Dev] Adding test case methods to TestCase subclasses
In-Reply-To: <87ipuf2omv.fsf@benfinney.id.au>
References: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>	<70EF2C52-1D92-4351-884E-52AF76BAAC6D@mac.com>	<4DA70ACA.4070204@voidspace.org.uk>	<20110414153503.F125B3A4063@sparrow.telecommunity.com>	<825E6CD5-8673-463B-92AE-59677C327C0A@gmail.com>	<4DA71C63.3030809@voidspace.org.uk>	<8A4A58EF-70F7-4F2F-8564-AE8611713986@mac.com>	<BANLkTincGnYrc48jzQfNrc_HFidohJXvwA@mail.gmail.com>	<4DA77C92.80007@stoneleaf.us>
	<87zkns2zg7.fsf_-_@benfinney.id.au>	<4DA89F2A.1070809@stoneleaf.us>
	<87ipuf2omv.fsf@benfinney.id.au>
Message-ID: <4DA8CD0E.8060208@stoneleaf.us>

Ben Finney wrote:
> Ethan Furman <ethan at stoneleaf.us> writes:
 >> Ben Finney wrote:
 >>>
>>> How can composition add test cases detectable by Python's ?unittest??
 >>
>> Metaclasses, if's that an option...
> [?]
>> or a class decorator
> [?]
> 
> Both interesting, thank you. But Python 3 isn't an option for several
> projects where I'd like to use this.
> 

Well, I'm sure there's a way to do it -- alas, I lack the time to find 
it either in the docs, archives, or by experimentation.


What I did find is that if you have your functions in modules, instead 
of in classes, it works fine in Python 2.6+.

8<---spam.py-------------------------------------------------------
def test_spam_01(self):
     print('testing spam_01')
def test_spam_02(self):
     print('testing spam_02')
8<-----------------------------------------------------------------

8<---eggs.py-------------------------------------------------------
def test_eggs_01(self):
     print('testing eggs_01')
def test_eggs_02(self):
     print('testing eggs_02')
8<-----------------------------------------------------------------

8<---test_compose.py-----------------------------------------------
import unittest
import spam, eggs   # the test-function modules defined above

class Compose(object):  # 2.6-2.7, functions must be in modules
     def __init__(self, *parts):
         self.parts = parts
     def __call__(self, func):
         for part in self.parts:
             for attr in dir(part):
                 if attr[:2] == attr[-2:] == '__':
                     continue
                 if getattr(func, attr, None):
                     raise AttributeError(
                         "%s already exists in %s" % (attr, func))
                 setattr(func, attr, getattr(part, attr))
         return func

@Compose(spam, eggs)
class TestAll(unittest.TestCase):
     def setUp(self):
         print('Setting up...')
     def tearDown(self):
         print('Tearing down...')
     def test_something(self):
         print('testing something')

if __name__ == '__main__':
     unittest.main()
8<---test_compose.py-----------------------------------------------

Compose now has rudimentary error checking, and if you can live with your 
extras living in their own .py files, this might work for you.

~Ethan~

From ethan at stoneleaf.us  Sat Apr 16 01:00:45 2011
From: ethan at stoneleaf.us (Ethan Furman)
Date: Fri, 15 Apr 2011 16:00:45 -0700
Subject: [Python-Dev] Adding test case methods to TestCase subclasses
In-Reply-To: <4DA8CD0E.8060208@stoneleaf.us>
References: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>	<70EF2C52-1D92-4351-884E-52AF76BAAC6D@mac.com>	<4DA70ACA.4070204@voidspace.org.uk>	<20110414153503.F125B3A4063@sparrow.telecommunity.com>	<825E6CD5-8673-463B-92AE-59677C327C0A@gmail.com>	<4DA71C63.3030809@voidspace.org.uk>	<8A4A58EF-70F7-4F2F-8564-AE8611713986@mac.com>	<BANLkTincGnYrc48jzQfNrc_HFidohJXvwA@mail.gmail.com>	<4DA77C92.80007@stoneleaf.us>	<87zkns2zg7.fsf_-_@benfinney.id.au>	<4DA89F2A.1070809@stoneleaf.us>	<87ipuf2omv.fsf@benfinney.id.au>
	<4DA8CD0E.8060208@stoneleaf.us>
Message-ID: <4DA8CE1D.6010504@stoneleaf.us>

Ethan Furman wrote:
> Ben Finney wrote:
>> Ethan Furman <ethan at stoneleaf.us> writes:
> 8<---test_compose.py-----------------------------------------------
> import unittest

ack!  There should be an 'import new' here as well.  :/

From solipsis at pitrou.net  Sat Apr 16 01:12:29 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sat, 16 Apr 2011 01:12:29 +0200
Subject: [Python-Dev] Status of json (simplejson) in cpython
In-Reply-To: <BANLkTimD-GtPg1h09Q0gRP-jxzjZuiLW-w@mail.gmail.com>
References: <BANLkTind0dSsoesD+tZ-EAqZjpr2m-CuCw@mail.gmail.com>
	<52676033-A4C9-43BD-8B34-18A487B9B2E4@gmail.com>
	<BANLkTi=Cpjp7aBnXZ8oUf7M_uQma+evpoQ@mail.gmail.com>
	<20110415224437.18fe63cb@pitrou.net>
	<BANLkTimODuaPj8B-cH+8UTr+BfEEev0RrQ@mail.gmail.com>
	<1302902453.3723.1.camel@localhost.localdomain>
	<BANLkTimD-GtPg1h09Q0gRP-jxzjZuiLW-w@mail.gmail.com>
Message-ID: <20110416011229.14fdde20@pitrou.net>

On Fri, 15 Apr 2011 14:27:04 -0700
Bob Ippolito <bob at redivi.com> wrote:
> On Fri, Apr 15, 2011 at 2:20 PM, Antoine Pitrou <solipsis at pitrou.net> wrote:
> > On Friday 15 April 2011 at 14:18 -0700, Bob Ippolito wrote:
> >> On Friday, April 15, 2011, Antoine Pitrou <solipsis at pitrou.net> wrote:
> >> >>
> >> >> > Since the JSON spec is set in stone, the changes
> >> >> > will mostly be about API (indentation, object conversion, etc)
> >> >> > and optimization. ?I presume the core parsing logic won't
> >> >> > be changing much.
> >> >>
> >> >> Actually the core parsing logic is very different (and MUCH faster),
> >> >
> >> > Are you talking about the Python logic or the C logic?
> >>
> >> Both, actually. IIRC simplejson in pure python typically beats json
> >> with it's C extension.
> >
> > Really? It would be nice to see some concrete benchmarks against both
> > repo tips.
> 
> Maybe in a few weeks or months when I have time to finish up the
> benchmarks that I was working on... but it should be pretty easy for
> anyone to show that the version in CPython is very slow (and uses a
> lot more memory) in comparison to simplejson.

Well, here's a crude microbenchmark. I'm comparing 2.6+simplejson 2.1.3
to 3.3+json, so I'm avoiding integers:

* json.dumps:

$ python -m timeit -s "from simplejson import dumps, loads; \
    d = dict((str(i), str(i)) for i in range(1000))" \
   "dumps(d)"

- 2.6+simplejson: 372 usec per loop
- 3.2+json: 352 usec per loop

* json.loads:

$ python -m timeit -s "from simplejson import dumps, loads; \
    d = dict((str(i), str(i)) for i in range(1000)); s = dumps(d)" \
    "loads(s)"

- 2.6+simplejson: 224 usec per loop
- 3.2+json: 233 usec per loop


The runtimes look quite similar.

Antoine.

From greg.ewing at canterbury.ac.nz  Sat Apr 16 01:38:36 2011
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Sat, 16 Apr 2011 11:38:36 +1200
Subject: [Python-Dev] python and super
In-Reply-To: <4DA84DD6.20608@voidspace.org.uk>
References: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>
	<70EF2C52-1D92-4351-884E-52AF76BAAC6D@mac.com>
	<4DA70ACA.4070204@voidspace.org.uk>
	<20110414153503.F125B3A4063@sparrow.telecommunity.com>
	<825E6CD5-8673-463B-92AE-59677C327C0A@gmail.com>
	<4DA71C63.3030809@voidspace.org.uk>
	<8A4A58EF-70F7-4F2F-8564-AE8611713986@mac.com>
	<BANLkTincGnYrc48jzQfNrc_HFidohJXvwA@mail.gmail.com>
	<4DA79E28.2060406@pearwood.info> <4DA84DD6.20608@voidspace.org.uk>
Message-ID: <4DA8D6FC.9060707@canterbury.ac.nz>

Michael Foord wrote:

> consider the "recently" introduced problem caused by object.__init__
 > not taking arguments. This makes it impossible to use super correctly
 > in various circumstances.
 >
 > ...
 >
> It is impossible to inherit from both C and A and have all parent 
> __init__ methods called correctly. Changing the semantics of super as 
> described would fix this problem.

I don't see how, because auto-super-calling would eventually
end up trying to call object.__init__ with arguments and fail.

You might think to "fix" this by making a special case of
object.__init__ and refraining from calling it. But the same
problem arises in a more general way whenever some class in
the mix has a method with the right name but the wrong
signature, which is likely to happen if you try to mix
classes that weren't designed to be mixed together.
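
A tiny, made-up illustration of that wall, using an explicit super()
chain rather than the proposed auto-call:

class A:
    def __init__(self, x):
        self.x = x
        super().__init__(x)   # next (and last) stop is object.__init__

A(1)   # raises TypeError: object.__init__ does not accept the argument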

-- 
Greg

From greg.ewing at canterbury.ac.nz  Sat Apr 16 01:48:54 2011
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Sat, 16 Apr 2011 11:48:54 +1200
Subject: [Python-Dev] python and super
In-Reply-To: <4DA851C1.5050005@dcs.gla.ac.uk>
References: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>
	<70EF2C52-1D92-4351-884E-52AF76BAAC6D@mac.com>
	<4DA70ACA.4070204@voidspace.org.uk>
	<20110414153503.F125B3A4063@sparrow.telecommunity.com>
	<4DA714C2.7000006@voidspace.org.uk> <4DA79936.1010300@canterbury.ac.nz>
	<4DA84863.80307@voidspace.org.uk> <4DA851C1.5050005@dcs.gla.ac.uk>
Message-ID: <4DA8D966.20600@canterbury.ac.nz>

Mark Shannon wrote:

>>>> class A: pass
>>>> class B(A): pass
>>>> class C(A,B):pass
> 
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
> TypeError: Cannot create a consistent method resolution
> order (MRO) for bases B, A

All right, but this is okay:

class C(B, A): pass

 > Michael Foord wrote:
 >
>> For a super call in C, B is a sibling to A. For a super call in B, A 
>> is a parent.
>>
>> With the semantics I was suggesting if C calls super, but A doesn't 
>> then B would still get called.

which is contradicted by:

> "Siblings", in the context of a single MRO  are thus classes between 
> which there is no sub-class/super-class relation.

So I maintain that the situation is far from clear. :-)

-- 
Greg

From greg.ewing at canterbury.ac.nz  Sat Apr 16 01:56:08 2011
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Sat, 16 Apr 2011 11:56:08 +1200
Subject: [Python-Dev] python and super
In-Reply-To: <4DA86C30.3010902@voidspace.org.uk>
References: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>
	<70EF2C52-1D92-4351-884E-52AF76BAAC6D@mac.com>
	<4DA70ACA.4070204@voidspace.org.uk>
	<20110414153503.F125B3A4063@sparrow.telecommunity.com>
	<825E6CD5-8673-463B-92AE-59677C327C0A@gmail.com>
	<4DA71C63.3030809@voidspace.org.uk>
	<8A4A58EF-70F7-4F2F-8564-AE8611713986@mac.com>
	<BANLkTincGnYrc48jzQfNrc_HFidohJXvwA@mail.gmail.com>
	<4DA79E28.2060406@pearwood.info> <4DA84DD6.20608@voidspace.org.uk>
	<4DA861B3.9010506@oddbird.net> <4DA86C30.3010902@voidspace.org.uk>
Message-ID: <4DA8DB18.2070301@canterbury.ac.nz>

Michael Foord wrote:
> But you have to be aware that 
> because of the semantics of super, not calling up to your parents 
> basically prevents those methods being used in the presence of multiple 
> inheritance.

No, it prevents them being used in the presence of super().
Multiple inheritance is still possible the old-fashioned way
using explicit upcalls as long as the classes are sufficiently
independent.

If they're *not* sufficiently independent, and haven't been
specifically designed to cooperate with each other, attempting
to make them cooperate automatically is as likely to do harm
as good.
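
A minimal sketch of that old-fashioned style, with made-up classes
that were never written with super() in mind:

class A(object):
    def close(self):
        print('A.close')

class B(object):
    def close(self):
        print('B.close')

class C(A, B):
    def close(self):
        A.close(self)   # explicit upcalls; no super(), no cooperation needed
        B.close(self)

C().close()             # A.close, then B.close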

-- 
Greg

From bob at redivi.com  Sat Apr 16 02:03:55 2011
From: bob at redivi.com (Bob Ippolito)
Date: Fri, 15 Apr 2011 17:03:55 -0700
Subject: [Python-Dev] Status of json (simplejson) in cpython
In-Reply-To: <20110416011229.14fdde20@pitrou.net>
References: <BANLkTind0dSsoesD+tZ-EAqZjpr2m-CuCw@mail.gmail.com>
	<52676033-A4C9-43BD-8B34-18A487B9B2E4@gmail.com>
	<BANLkTi=Cpjp7aBnXZ8oUf7M_uQma+evpoQ@mail.gmail.com>
	<20110415224437.18fe63cb@pitrou.net>
	<BANLkTimODuaPj8B-cH+8UTr+BfEEev0RrQ@mail.gmail.com>
	<1302902453.3723.1.camel@localhost.localdomain>
	<BANLkTimD-GtPg1h09Q0gRP-jxzjZuiLW-w@mail.gmail.com>
	<20110416011229.14fdde20@pitrou.net>
Message-ID: <BANLkTi=mH-YAFpMDCeZVfaJ=vh=_52EOhg@mail.gmail.com>

On Fri, Apr 15, 2011 at 4:12 PM, Antoine Pitrou <solipsis at pitrou.net> wrote:
> On Fri, 15 Apr 2011 14:27:04 -0700
> Bob Ippolito <bob at redivi.com> wrote:
>> On Fri, Apr 15, 2011 at 2:20 PM, Antoine Pitrou <solipsis at pitrou.net> wrote:
>> > On Friday 15 April 2011 at 14:18 -0700, Bob Ippolito wrote:
>> >> On Friday, April 15, 2011, Antoine Pitrou <solipsis at pitrou.net> wrote:
>> >> >>
>> >> >> > Since the JSON spec is set in stone, the changes
>> >> >> > will mostly be about API (indentation, object conversion, etc)
>> >> >> > and optimization. ?I presume the core parsing logic won't
>> >> >> > be changing much.
>> >> >>
>> >> >> Actually the core parsing logic is very different (and MUCH faster),
>> >> >
>> >> > Are you talking about the Python logic or the C logic?
>> >>
>> >> Both, actually. IIRC simplejson in pure python typically beats json
>> >> with it's C extension.
>> >
>> > Really? It would be nice to see some concrete benchmarks against both
>> > repo tips.
>>
>> Maybe in a few weeks or months when I have time to finish up the
>> benchmarks that I was working on... but it should be pretty easy for
>> anyone to show that the version in CPython is very slow (and uses a
>> lot more memory) in comparison to simplejson.
>
> Well, here's a crude microbenchmark. I'm comparing 2.6+simplejson 2.1.3
> to 3.3+json, so I'm avoiding integers:
>
> * json.dumps:
>
> $ python -m timeit -s "from simplejson import dumps, loads; \
> ? ?d = dict((str(i), str(i)) for i in range(1000))" \
> ? "dumps(d)"
>
> - 2.6+simplejson: 372 usec per loop
> - 3.2+json: 352 usec per loop
>
> * json.loads:
>
> $ python -m timeit -s "from simplejson import dumps, loads; \
> ? ?d = dict((str(i), str(i)) for i in range(1000)); s = dumps(d)" \
> ? ?"loads(s)"
>
> - 2.6+simplejson: 224 usec per loop
> - 3.2+json: 233 usec per loop
>
>
> The runtimes look quite similar.

That's the problem with trivial benchmarks. With more typical data
(for us, anyway) you should see very different results.

-bob

From matt at vazor.com  Sat Apr 16 02:41:03 2011
From: matt at vazor.com (Matt Billenstein)
Date: Sat, 16 Apr 2011 00:41:03 +0000
Subject: [Python-Dev] Status of json (simplejson) in cpython
Message-ID: <4+paau3tcbbkacaeaayadylc5gexxg7frucnoa3nna353qvownngklidxrptdfdc7q4h7aznkfyteyphnohqijsuswycqxpcvjita5ir3iwbkumvc2zzhbtm3kldwd66ab3kvr6wi=+338714@messaging-master.com>

On Fri, Apr 15, 2011 at 05:03:55PM -0700, Bob Ippolito wrote:
> On Fri, Apr 15, 2011 at 4:12 PM, Antoine Pitrou <solipsis at pitrou.net> wrote:
> > On Fri, 15 Apr 2011 14:27:04 -0700
> > Bob Ippolito <bob at redivi.com> wrote:
> >> On Fri, Apr 15, 2011 at 2:20 PM, Antoine Pitrou <solipsis at pitrou.net> wrote:
> >
> > Well, here's a crude microbenchmark. I'm comparing 2.6+simplejson 2.1.3
> > to 3.3+json, so I'm avoiding integers:
> >
> > * json.dumps:
> >
> > $ python -m timeit -s "from simplejson import dumps, loads; \
> > ?? ??d = dict((str(i), str(i)) for i in range(1000))" \
> > ?? "dumps(d)"
> >
> > - 2.6+simplejson: 372 usec per loop
> > - 3.2+json: 352 usec per loop
> >
> > * json.loads:
> >
> > $ python -m timeit -s "from simplejson import dumps, loads; \
> > ?? ??d = dict((str(i), str(i)) for i in range(1000)); s = dumps(d)" \
> > ?? ??"loads(s)"
> >
> > - 2.6+simplejson: 224 usec per loop
> > - 3.2+json: 233 usec per loop
> >
> >
> > The runtimes look quite similar.
> 
> That's the problem with trivial benchmarks. With more typical data
> (for us, anyway) you should see very different results.

Slightly less crude benchmark showing simplejson is quite a bit faster:

http://pastebin.com/g1WqUPwm

250ms vs 5.5s encoding and decoding an 11KB json object 1000 times...

m

-- 
Matt Billenstein
matt at vazor.com
http://www.vazor.com/

From ethan at stoneleaf.us  Sat Apr 16 05:44:53 2011
From: ethan at stoneleaf.us (Ethan Furman)
Date: Fri, 15 Apr 2011 20:44:53 -0700
Subject: [Python-Dev] Adding test case methods to TestCase subclasses
In-Reply-To: <87ipuf2omv.fsf@benfinney.id.au>
References: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>	<70EF2C52-1D92-4351-884E-52AF76BAAC6D@mac.com>	<4DA70ACA.4070204@voidspace.org.uk>	<20110414153503.F125B3A4063@sparrow.telecommunity.com>	<825E6CD5-8673-463B-92AE-59677C327C0A@gmail.com>	<4DA71C63.3030809@voidspace.org.uk>	<8A4A58EF-70F7-4F2F-8564-AE8611713986@mac.com>	<BANLkTincGnYrc48jzQfNrc_HFidohJXvwA@mail.gmail.com>	<4DA77C92.80007@stoneleaf.us>
	<87zkns2zg7.fsf_-_@benfinney.id.au>	<4DA89F2A.1070809@stoneleaf.us>
	<87ipuf2omv.fsf@benfinney.id.au>
Message-ID: <4DA910B5.20406@stoneleaf.us>

Ben Finney wrote:
> Both interesting, thank you. But Python 3 isn't an option for several
> projects where I'd like to use this.

Okay, I took some time to try and figure this out (have I mentioned how 
much I love Python 3's clean-up?), and I have something -- lightly 
tested with methods, properties, and attributes, with the objects being 
kept in classes instead of modules.

Posted to ActiveState.
http://code.activestate.com/recipes/577658-composition-of-classes-instead-of-multiple-inherit

Hope this helps!

~Ethan~

From g.brandl at gmx.net  Sat Apr 16 12:48:18 2011
From: g.brandl at gmx.net (Georg Brandl)
Date: Sat, 16 Apr 2011 12:48:18 +0200
Subject: [Python-Dev] http://docs.python.org/py3k/ pointing to 2.7
In-Reply-To: <1302875242.18033.13.camel@marge>
References: <4DA82D63.5050803@jcea.es> <1302875242.18033.13.camel@marge>
Message-ID: <iobf4i$acn$1@dough.gmane.org>

On 15.04.2011 15:47, Victor Stinner wrote:
> On Friday 15 April 2011 at 13:34 +0200, Jesus Cea wrote:
>> http://docs.python.org/py3k/ takes you to 2.7, by default.
>> 
>> Should we update it to point to 3.2?. If the point is to promote Python 3...
>> 
>> I would point it to 3.2, with a big "access to documentation to legacy
>> 2.7" (beside the small left column link). What do you think?.
> 
> http://docs.python.org/ points to 2.7 yes. I'm already reading 3.3 doc
> to develop with Python 2.5: I prefer the most recent doc, and the API is
> usually exactly the same. So I vote +1 to make 3.3 the default doc.
> Anyway, Python 2 is a dead language!

That is exactly the kind of comment that leads to FUD outside of python-dev.
Of course Python 2 is not dead as long as 2.7 is maintained.

As long as we're getting complaints from users that the examples in the
2.7 docs don't work with Python 2.5, there is no way making docs.python.org
show docs for 3.2 is the better choice.

> I don't like URL prefixes to indicate the version: "py3k/" for 3.2 (last
> Python 3 stable version), no prefix for 2.7, "release/2.6.6/" for 2.6,
> "dev/py3k/" for 3.3, etc.
> 
> Can't we keep it simple as:
>  - "2.6/" for 2.6
>  - "2.7/" for 2.7
>  - "3.1/" for 3.1
>  - "3.2/" for 3.2
>  - "3.3/" for 3.3

That we have.

>  - "2.x/" (or maybe just "2/"?) as a redirection to 2.7
>  - "3.x/" (or maybe just "3/"?) as a redirection to 3.3

That we could introduce, if it's really needed.  (We'd need to keep
the old URLs as well as redirections of course, introducing even
more ways to spell things.)

> http://docs.python.org/ may be a redirection (to
> http://docs.python.org/3.x/) instead of directly the doc. So it would be
> intuitive to replace 2 by 3 (or 3 by 2) in the URL.
> 
> 
> The http://www.python.org/doc/versions/ page uses other URLS:
> http://www.python.org/doc/<version>/

These are legacy URLs and are redirected, as you note.

> which are redirections to
> http://docs.python.org/release/<version>/. For example:
> http://www.python.org/doc/3.2/ is a redirection to
> http://docs.python.org/release/3.2/ (which give the same content than
> http://docs.python.org/py3k/ !).

No, they don't.  /release/version docs are those *released with* a specific
version, i.e. frozen at the point of the release. /py3k/ (and /version/)
docs are refreshed daily from the respective branch.

> On the left of http://docs.python.org/, you have links to other versions
> of the doc: 2.6, 3.2, 3.3... but not 3.1. 3.3 doc have links to 2.7 and
> 3.2 (not 2.6 or 3.1). If there is a rule to choose links on the left, I
> don't understand this rule.

3.1 is not really interesting for 2.7 users since 3.2 is out, while 2.6 is
not really interesting for 3.2 users.

Georg


From vinay_sajip at yahoo.co.uk  Sat Apr 16 11:50:25 2011
From: vinay_sajip at yahoo.co.uk (Vinay Sajip)
Date: Sat, 16 Apr 2011 09:50:25 +0000 (UTC)
Subject: [Python-Dev] Status of json (simplejson) in cpython
References: <BANLkTind0dSsoesD+tZ-EAqZjpr2m-CuCw@mail.gmail.com>
Message-ID: <loom.20110416T113016-741@post.gmane.org>

Sandro Tosi <sandro.tosi <at> gmail.com> writes:

> The version we have in cpython of json is simplejson 2.0.9 highly
> patched (either because it was converted to py3k, and because of the
> normal flow of issues/bugfixes) while upstream have already released
> 2.1.13 .

I think you mean 2.1.3?
 
> Their 2 roads had diverged a lot, and since this blocks any further
> update of cpython's json from upstream, I'd like to close this gap.
> 
> This isn't exactly an easy task, and this email is more about a
> brainstorming on the ways we have to achieve the goal: being able to
> upgrade json to 2.1.13.
> 
> Luckily, upstream is receptive for patches, so part of the job is to
> forward patches written for cpython not already in the upstream code.
> 
> But how am I going to do this? let's do a brain-dump:
> 
> - the history goes back at changeset f686aced02a3 (May 2009, wow) when
> 2.0.9 was merged on trunk
> - I can navigate from that CS up to tip, and examine the diffs and see
> if they apply to 2.1.3 and prepare a set of patches to be forwarded
> - part of those diffs is about py3k conversion, that probably needs to
> be extended to other part of the upstream code not currently in
> cpython. For those "new" code parts, do you have some guides about
> porting a project to py3k? it would be my first time and other than
> building it and running it with python3 i don't know what to do :)
> - once (and if :) I reach the point where I've all the relevant
> patches applied on 2.1.3 what's the next step?

If it is generally considered desirable to maintain some synchrony between
simplejson and stdlib json, then since Bob has stated that he has no interest in
Python 3, it may be better to:

1. Convert the simplejson codebase so that it runs on both Python 2 and 3
(without running 2to3 on it). Once this is done, if upstream accepts these
changes, ongoing maintenance will be fairly simple for upstream, and changes
only really need to consider exception and string/byte literal syntax, for the
most part.
2. Merge this new simplejson with stdlib json for 3.3.

I looked at step 1 a few weeks ago and have made some progress with it. I've
just forked simplejson on Github and posted my changes to my fork:

https://github.com/vsajip/simplejson

All 136 tests pass on Python 2.7 (just as a control/sanity check), and on Python
3.2, there are 4 failures and 12 errors - see complete results at

https://gist.github.com/923019

I haven't looked at the C extension yet, just the Python code. I believe most of
the test failures will be down to string literals in the tests which should be
bytes, e.g. test_unicode.py:test_ensure_ascii_false_bytestring_encoding.
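
As an aside for anyone who hasn't tried the single-codebase approach, a
small made-up example (nothing from simplejson itself) of the idioms it
leans on: b'' literals, "except ... as" and print() used as a call are
all accepted by 2.6 onwards and by 3.x.

def decode_payload(raw):
    if not isinstance(raw, bytes):        # bytes is an alias for str on 2.x
        raise TypeError('expected bytes, got %r' % type(raw))
    try:
        return raw.decode('utf-8')
    except UnicodeDecodeError as exc:     # works on 2.6+ and 3.x
        raise ValueError('payload is not UTF-8: %s' % exc)

print(decode_payload(b'{"key": "value"}'))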

So, it looks quite encouraging, and if you think my approach has merit, please
take a look at my fork, and give feedback/join in!

Note that I used the same approach when porting pip/virtualenv to Python 3,
which seems to have gone quite smoothly :-)

Regards,


Vinay Sajip


From solipsis at pitrou.net  Sat Apr 16 13:30:13 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sat, 16 Apr 2011 13:30:13 +0200
Subject: [Python-Dev] Status of json (simplejson) in cpython
References: <4+paau3tcbbkacaeaayadylc5gexxg7frucnoa3nna353qvownngklidxrptdfdc7q4h7aznkfyteyphnohqijsuswycqxpcvjita5ir3iwbkumvc2zzhbtm3kldwd66ab3kvr6wi=+338714@messaging-master.com>
Message-ID: <20110416133013.5eb1f284@pitrou.net>

On Sat, 16 Apr 2011 00:41:03 +0000
Matt Billenstein <matt at vazor.com> wrote:
> 
> Slightly less crude benchmark showing simplejson is quite a bit faster:
> 
> http://pastebin.com/g1WqUPwm
> 
> 250ms vs 5.5s encoding and decoding an 11KB json object 1000 times...

This doesn't have much value if you don't say which version of Python
you ran json with. You should use 3.2, otherwise you might miss some
optimizations.
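
For anyone wanting to reproduce something along those lines, a rough sketch of
such a benchmark might look like this (the payload below is invented; it is not
the original 11KB document):

import json
import timeit

payload = {"users": [{"id": i, "name": "user%d" % i, "active": bool(i % 2)}
                     for i in range(200)]}

def roundtrip():
    json.loads(json.dumps(payload))

print("%.3fs for 1000 encode/decode round-trips"
      % timeit.timeit(roundtrip, number=1000))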

Regards

Antoine.



From me at gustavonarea.net  Sat Apr 16 13:45:42 2011
From: me at gustavonarea.net (Gustavo Narea)
Date: Sat, 16 Apr 2011 12:45:42 +0100
Subject: [Python-Dev] Releases for recent security vulnerability
In-Reply-To: <BANLkTikzFbgAWfn2AEWtPbqmDtSR6HN2Rw@mail.gmail.com>
References: <BANLkTi=FtJ_oZe-pKnNNANFTDehWrx-J2A@mail.gmail.com>
	<BANLkTikzFbgAWfn2AEWtPbqmDtSR6HN2Rw@mail.gmail.com>
Message-ID: <4DA98166.2010604@gustavonarea.net>

Hello,

On 15/04/11 13:30, Brian Curtin wrote:
> To me, the fix *was* released.

No, it wasn't. It was *committed* to the repository.

> Sure, no fancy installers were generated yet, but people who are
> susceptible to this issue 1) now know about it, and 2) have a way to
> patch their system *if needed*.

Well, that's a long shot. I doubt the people/organizations affected are
all aware. And I doubt they are all capable of patching their system or
getting a patched Python from a trusted party.

Three weeks after this security vulnerability was *publicly* reported on
bugs.python.org, and two days after it was semi-officially announced,
I'm still waiting for security updates for my Ubuntu and Debian systems!

I reckon if this had been handled differently (i.e., making new releases
and communicating it via the relevant channels [1]), we wouldn't have
the situation we have right now.

May I suggest that you adopt a policy for handling security issues like
Django's?
http://docs.djangoproject.com/en/1.3/internals/contributing/#reporting-security-issues

Cheers,

[1] For example,
<http://mail.python.org/mailman/listinfo/python-announce-list>,
<http://www.python.org/news/>, <http://www.python.org/news/security/>.

-- 
Gustavo Narea <xri://=Gustavo>.
| Tech blog: =Gustavo/(+blog)/tech  ~  About me: =Gustavo/about |


From vinay_sajip at yahoo.co.uk  Sat Apr 16 16:05:18 2011
From: vinay_sajip at yahoo.co.uk (Vinay Sajip)
Date: Sat, 16 Apr 2011 14:05:18 +0000 (UTC)
Subject: [Python-Dev] Status of json (simplejson) in cpython
References: <BANLkTind0dSsoesD+tZ-EAqZjpr2m-CuCw@mail.gmail.com>
Message-ID: <loom.20110416T155214-636@post.gmane.org>

Sandro Tosi <sandro.tosi <at> gmail.com> writes:

> Luckily, upstream is receptive for patches, so part of the job is to
> forward patches written for cpython not already in the upstream code.

Further to my earlier response to your post, I should mention that my fork of
simplejson at

https://github.com/vsajip/simplejson/

passes all 136 tests for Python 2.7 and 3.2 (I haven't been able to test with 3.3a0
yet). No tests were skipped, though adjustments were made for binary/string
literals and for one case where sorting was applied to incompatible types in the
tests.

Test output is at https://gist.github.com/923019

Bob - If you're reading this, what would you say to having a look at my fork,
and commenting on the feasibility of merging my changes back into your master? The
changes are fairly easy to understand, all tests pass, and it's a 2.x/3.x single
codebase, so maintenance should be easier than with multiple codebases.

Admittedly I haven't looked at the C code yet, but that's next on my list.

Regards,

Vinay Sajip


From solipsis at pitrou.net  Sat Apr 16 16:19:31 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sat, 16 Apr 2011 16:19:31 +0200
Subject: [Python-Dev] Status of json (simplejson) in cpython
References: <BANLkTind0dSsoesD+tZ-EAqZjpr2m-CuCw@mail.gmail.com>
	<loom.20110416T113016-741@post.gmane.org>
Message-ID: <20110416161931.089d2014@pitrou.net>


Hello Vinay,

On Sat, 16 Apr 2011 09:50:25 +0000 (UTC)
Vinay Sajip <vinay_sajip at yahoo.co.uk> wrote:
> 
> If it is generally considered desirable to maintain some synchrony between
> simplejson and stdlib json, then since Bob has stated that he has no interest in
> Python 3, it may be better to:
> 
> 1. Convert the simplejson codebase so that it runs on both Python 2 and 3
> (without running 2to3 on it). Once this is done, if upstream accepts these
> changes, ongoing maintenance will be fairly simple for upstream, and changes
> only really need to consider exception and string/byte literal syntax, for the
> most part.
> 2. Merge this new simplejson with stdlib json for 3.3.

What you're proposing doesn't address the question of who is going to
do the ongoing maintenance. Bob apparently isn't interested in
maintaining stdlib code, and python-dev members aren't interested in
maintaining simplejson (assuming it would be at all possible). Since
both groups of people want to work on separate codebases, I don't see
how sharing a single codebase would be possible.

Regards

Antoine.



From ncoghlan at gmail.com  Sat Apr 16 16:23:42 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 17 Apr 2011 00:23:42 +1000
Subject: [Python-Dev] Releases for recent security vulnerability
In-Reply-To: <4DA98166.2010604@gustavonarea.net>
References: <BANLkTi=FtJ_oZe-pKnNNANFTDehWrx-J2A@mail.gmail.com>
	<BANLkTikzFbgAWfn2AEWtPbqmDtSR6HN2Rw@mail.gmail.com>
	<4DA98166.2010604@gustavonarea.net>
Message-ID: <BANLkTint4fVi=Joy+ythOAOOzKL_VzTHPg@mail.gmail.com>

On Sat, Apr 16, 2011 at 9:45 PM, Gustavo Narea <me at gustavonarea.net> wrote:
> I reckon if this had been handled differently (i.e., making new releases
> and communicating it via the relevant channels [1]), we wouldn't have
> the situation we have right now.

Nope, we would have a situation where the security team were still
attempting to coordinate with the release managers to cut new source
releases and new binary releases, and not even releasing the source
level patches that *will* allow many, many people to fix the problem
on their own.

I don't agree that such a situation would be better than the status
quo (i.e. where both the problem and *how to fix it yourself* are
public knowledge).

The *exact* patches for all affected versions of Python are readily
available by checking the changesets linked from
http://bugs.python.org/issue11662#msg132517

> May I suggest that you adopt a policy for handling security issues like
> Django's?
> http://docs.djangoproject.com/en/1.3/internals/contributing/#reporting-security-issues

When the list of people potentially using the software is "anyone
running Linux or Mac OS X and an awful lot of people running Windows
or an embedded device", private pre-announcements simply aren't a
practical reality. Neither is "stopping all other development" when
most of the core development team aren't on the security at python.org
list and don't even know a security issue exists until it is announced
publicly. Take those two impractical steps out of the process, and
what you have *is* the python.org procedure for dealing with security
issues.

And when official python.org releases require coordination of
volunteers scattered around the planet, there is a harsh trade-off to
be made when it comes to deciding how long to wait before publishing
the information people need in order to fix the issue themselves.

Bumping the priority of the next round of python.org releases should
definitely be on the agenda, but the "rapid response" side of things
needs to come from the OS vendors with paid release engineers. Dealing
with security issues on behalf of their end users is one of the key
reasons they're getting paid for free software in the first place.

It may be worth asking the OS vendors whether or not they have
representatives that receive the security at python.org notifications,
and if not, why they haven't approached python-dev about receiving
such notifications.

> Cheers,
>
> [1] For example,
> <http://mail.python.org/mailman/listinfo/python-announce-list>,
> <http://www.python.org/news/>, <http://www.python.org/news/security/>.

Agreed that an announcement should be made on those locations, with a
list of links to the exact changesets for each affected version.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From dirkjan at ochtman.nl  Sat Apr 16 16:42:38 2011
From: dirkjan at ochtman.nl (Dirkjan Ochtman)
Date: Sat, 16 Apr 2011 16:42:38 +0200
Subject: [Python-Dev] Status of json (simplejson) in cpython
In-Reply-To: <20110416161931.089d2014@pitrou.net>
References: <BANLkTind0dSsoesD+tZ-EAqZjpr2m-CuCw@mail.gmail.com>
	<loom.20110416T113016-741@post.gmane.org>
	<20110416161931.089d2014@pitrou.net>
Message-ID: <BANLkTimatfEDwk_wKjw8yY99bPH-VnUSCQ@mail.gmail.com>

On Sat, Apr 16, 2011 at 16:19, Antoine Pitrou <solipsis at pitrou.net> wrote:
> What you're proposing doesn't address the question of who is going to
> do the ongoing maintenance. Bob apparently isn't interested in
> maintaining stdlib code, and python-dev members aren't interested in
> maintaining simplejson (assuming it would be at all possible). Since
> both groups of people want to work on separate codebases, I don't see
> how sharing a single codebase would be possible.

From reading this thread, it seems to me like the proposal is that Bob
maintains a simplejson for both 2.x and 3.x and that the current
stdlib json is replaced by a (trivially changed) version of
simplejson.

Cheers,

Dirkjan

From solipsis at pitrou.net  Sat Apr 16 16:52:08 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sat, 16 Apr 2011 16:52:08 +0200
Subject: [Python-Dev] Status of json (simplejson) in cpython
In-Reply-To: <BANLkTimatfEDwk_wKjw8yY99bPH-VnUSCQ@mail.gmail.com>
References: <BANLkTind0dSsoesD+tZ-EAqZjpr2m-CuCw@mail.gmail.com>
	<loom.20110416T113016-741@post.gmane.org>
	<20110416161931.089d2014@pitrou.net>
	<BANLkTimatfEDwk_wKjw8yY99bPH-VnUSCQ@mail.gmail.com>
Message-ID: <1302965528.3490.26.camel@localhost.localdomain>

On Saturday 16 April 2011 at 16:42 +0200, Dirkjan Ochtman wrote:
> On Sat, Apr 16, 2011 at 16:19, Antoine Pitrou <solipsis at pitrou.net> wrote:
> > What you're proposing doesn't address the question of who is going to
> > do the ongoing maintenance. Bob apparently isn't interested in
> > maintaining stdlib code, and python-dev members aren't interested in
> > maintaining simplejson (assuming it would be at all possible). Since
> > both groups of people want to work on separate codebases, I don't see
> > how sharing a single codebase would be possible.
> 
> From reading this thread, it seems to me like the proposal is that Bob
> maintains a simplejson for both 2.x and 3.x and that the current
> stdlib json is replaced by a (trivially changed) version of
> simplejson.

The thing is, we want to bring our own changes to the json module and
its tests (and have already done so, although some have been backported
to simplejson).

Regards

Antoine.



From michael at python.org  Sat Apr 16 16:50:29 2011
From: michael at python.org (Michael Foord)
Date: Sat, 16 Apr 2011 15:50:29 +0100
Subject: [Python-Dev] Python Language Summit at EuroPython: 19th June
Message-ID: <4DA9ACB5.6030505@python.org>

Hello all,

This is an invite to all core-python developers, and developers of 
alternative implementations, to attend the Python Language Summit at 
EuroPython. The summit will be on June 19th and EuroPython this year 
will be held in the beautiful city of Florence, Italy.

     http://ep2011.europython.eu/

If you are not a core-Python developer but would like to attend then 
please email me privately and I will let you know if spaces are 
available. If you are a core developer, or you have received a direct 
invitation, then please respond by private email to let me know if you 
are able to attend. A maybe is fine, you can always change your mind 
later. Attending for only part of the day is fine.

We expect the summit to run from 10am - 4pm with appropriate breaks.

Like previous language summits it is an opportunity to discuss topics 
like, Python 3 adoption, PEPs and changes for Python 3.3, the future of 
Python 2.7, documentation, package index, web site, etc.

If you have topics you'd like to discuss at the language summit please 
let me know.

Volunteers for taking notes at the language summit, for posting to 
Python-dev and the Python Insiders blog after the event, would be much 
appreciated.

All the best,

Michael Foord

N.B. Due to my impending doom (oops, I mean impending fatherhood) I am 
not yet 100% certain I will be able to attend. If I can't I will arrange 
for someone else to chair.

-- 
http://www.voidspace.org.uk/

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing http://www.sqlite.org/different.html


From catch-all at masklinn.net  Sat Apr 16 17:07:23 2011
From: catch-all at masklinn.net (Xavier Morel)
Date: Sat, 16 Apr 2011 17:07:23 +0200
Subject: [Python-Dev] Status of json (simplejson) in cpython
In-Reply-To: <1302965528.3490.26.camel@localhost.localdomain>
References: <BANLkTind0dSsoesD+tZ-EAqZjpr2m-CuCw@mail.gmail.com>
	<loom.20110416T113016-741@post.gmane.org>
	<20110416161931.089d2014@pitrou.net>
	<BANLkTimatfEDwk_wKjw8yY99bPH-VnUSCQ@mail.gmail.com>
	<1302965528.3490.26.camel@localhost.localdomain>
Message-ID: <033B3B0C-441D-4B06-8B44-99168D0BF91B@masklinn.net>

On 2011-04-16, at 16:52 , Antoine Pitrou wrote:
> On Saturday 16 April 2011 at 16:42 +0200, Dirkjan Ochtman wrote:
>> On Sat, Apr 16, 2011 at 16:19, Antoine Pitrou <solipsis at pitrou.net> wrote:
>>> What you're proposing doesn't address the question of who is going to
>>> do the ongoing maintenance. Bob apparently isn't interested in
>>> maintaining stdlib code, and python-dev members aren't interested in
>>> maintaining simplejson (assuming it would be at all possible). Since
>>> both groups of people want to work on separate codebases, I don't see
>>> how sharing a single codebase would be possible.
>> 
>> From reading this thread, it seems to me like the proposal is that Bob
>> maintains a simplejson for both 2.x and 3.x and that the current
>> stdlib json is replaced by a (trivially changed) version of
>> simplejson.
> 
> The thing is, we want to bring our own changes to the json module and
> its tests (and have already done so, although some have been backported
> to simplejson).

Depending on what those changes are, would it not be possible to apply the vast majority of them to simplejson itself?

Furthermore, now that Python uses Mercurial, it should be possible (or even easy) to use a versioned queue (via MQ) for the trivial adaptation and for the temporary alterations (things which will likely be merged back into simplejson but are not yet, stuff like that), should it not?

From solipsis at pitrou.net  Sat Apr 16 17:25:54 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sat, 16 Apr 2011 17:25:54 +0200
Subject: [Python-Dev] Status of json (simplejson) in cpython
In-Reply-To: <033B3B0C-441D-4B06-8B44-99168D0BF91B@masklinn.net>
References: <BANLkTind0dSsoesD+tZ-EAqZjpr2m-CuCw@mail.gmail.com>
	<loom.20110416T113016-741@post.gmane.org>
	<20110416161931.089d2014@pitrou.net>
	<BANLkTimatfEDwk_wKjw8yY99bPH-VnUSCQ@mail.gmail.com>
	<1302965528.3490.26.camel@localhost.localdomain>
	<033B3B0C-441D-4B06-8B44-99168D0BF91B@masklinn.net>
Message-ID: <1302967554.3490.41.camel@localhost.localdomain>

On Saturday 16 April 2011 at 17:07 +0200, Xavier Morel wrote:
> On 2011-04-16, at 16:52 , Antoine Pitrou wrote:
> > On Saturday 16 April 2011 at 16:42 +0200, Dirkjan Ochtman wrote:
> >> On Sat, Apr 16, 2011 at 16:19, Antoine Pitrou <solipsis at pitrou.net> wrote:
> >>> What you're proposing doesn't address the question of who is going to
> >>> do the ongoing maintenance. Bob apparently isn't interested in
> >>> maintaining stdlib code, and python-dev members aren't interested in
> >>> maintaining simplejson (assuming it would be at all possible). Since
> >>> both groups of people want to work on separate codebases, I don't see
> >>> how sharing a single codebase would be possible.
> >> 
> >> From reading this thread, it seems to me like the proposal is that Bob
> >> maintains a simplejson for both 2.x and 3.x and that the current
> >> stdlib json is replaced by a (trivially changed) version of
> >> simplejson.
> > 
> > The thing is, we want to bring our own changes to the json module and
> > its tests (and have already done so, although some have been backported
> > to simplejson).
> 
> Depending on what those changes are, would it not be possible to apply
> the vast majority of them to simplejson itself?

Sure, but the thing is, I don't *think* we are interested in backporting
stuff to simplejson much more than Bob is interested in porting stuff to
the json module.

I've contributed a couple of patches myself after they were integrated
to CPython (they are part of the performance improvements Bob is talking
about), but that was exceptional. Backporting a patch to another project
with a different directory structure, a slightly different code, etc. is
tedious and not very rewarding for us Python core developers, while we
could do other work on our limited free time.

Also, some types of work would be tedious to backport, for example if we
refactor the tests to test both the C and Python implementations.

> Furthermore, now that python uses Mercurial, it should be possible (or
> even easy) to use a versioned queue (via MQ) for the trivial
> adaptation, and the temporary alterations (things which will likely be
> merged back into simplejson but are not yet, stuff like that) should
> it not?

Perhaps, perhaps not. That would require someone motivated to put it in
place, ensure that it doesn't get in the way, document it, etc.
Honestly, I don't think maintaining a single stdlib module should
require such an amount of logistics.

Regards

Antoine.



From stefan_ml at behnel.de  Sat Apr 16 18:04:53 2011
From: stefan_ml at behnel.de (Stefan Behnel)
Date: Sat, 16 Apr 2011 18:04:53 +0200
Subject: [Python-Dev] Status of json (simplejson) in cpython
In-Reply-To: <20110416161931.089d2014@pitrou.net>
References: <BANLkTind0dSsoesD+tZ-EAqZjpr2m-CuCw@mail.gmail.com>	<loom.20110416T113016-741@post.gmane.org>
	<20110416161931.089d2014@pitrou.net>
Message-ID: <iocen5$sq5$1@dough.gmane.org>

Antoine Pitrou, 16.04.2011 16:19:
> On Sat, 16 Apr 2011 09:50:25 +0000 (UTC)
> Vinay Sajip wrote:
>>
>> If it is generally considered desirable to maintain some synchrony between
>> simplejson and stdlib json, then since Bob has stated that he has no interest in
>> Python 3, it may be better to:
>>
>> 1. Convert the simplejson codebase so that it runs on both Python 2 and 3
>> (without running 2to3 on it). Once this is done, if upstream accepts these
>> changes, ongoing maintenance will be fairly simple for upstream, and changes
>> only really need to consider exception and string/byte literal syntax, for the
>> most part.
>> 2. Merge this new simplejson with stdlib json for 3.3.
>
> What you're proposing doesn't address the question of who is going to
> do the ongoing maintenance. Bob apparently isn't interested in
> maintaining stdlib code, and python-dev members aren't interested in
> maintaining simplejson (assuming it would be at all possible). Since
> both groups of people want to work on separate codebases, I don't see
> how sharing a single codebase would be possible.

Well, if that is not possible, then the CPython devs will have a hard time 
maintaining the json accelerator module in the long run. I quickly skimmed
through the GitHub version of simplejson, and it truly is a complicated
piece of code. Not in the sense that the code is incomprehensible; it's
actually fairly straightforward string processing code, but it's so
extremely optimised and tailored and has so much code duplicated for the 
bytes and unicode types (apparently following the copy+paste+adapt pattern) 
that it will be pretty hard to adapt to future changes of CPython, 
especially the upcoming PEP 393 implementation. Maintaining this is clearly 
no fun.

Stefan


From bob at redivi.com  Sat Apr 16 18:11:15 2011
From: bob at redivi.com (Bob Ippolito)
Date: Sat, 16 Apr 2011 09:11:15 -0700
Subject: [Python-Dev] Status of json (simplejson) in cpython
In-Reply-To: <1302967554.3490.41.camel@localhost.localdomain>
References: <BANLkTind0dSsoesD+tZ-EAqZjpr2m-CuCw@mail.gmail.com>
	<loom.20110416T113016-741@post.gmane.org>
	<20110416161931.089d2014@pitrou.net>
	<BANLkTimatfEDwk_wKjw8yY99bPH-VnUSCQ@mail.gmail.com>
	<1302965528.3490.26.camel@localhost.localdomain>
	<033B3B0C-441D-4B06-8B44-99168D0BF91B@masklinn.net>
	<1302967554.3490.41.camel@localhost.localdomain>
Message-ID: <BANLkTim9tV4VkFmWnL48TqG4G253jDNBww@mail.gmail.com>

On Saturday, April 16, 2011, Antoine Pitrou <solipsis at pitrou.net> wrote:
> On Saturday 16 April 2011 at 17:07 +0200, Xavier Morel wrote:
>> On 2011-04-16, at 16:52 , Antoine Pitrou wrote:
>> > On Saturday 16 April 2011 at 16:42 +0200, Dirkjan Ochtman wrote:
>> >> On Sat, Apr 16, 2011 at 16:19, Antoine Pitrou <solipsis at pitrou.net> wrote:
>> >>> What you're proposing doesn't address the question of who is going to
>> >>> do the ongoing maintenance. Bob apparently isn't interested in
>> >>> maintaining stdlib code, and python-dev members aren't interested in
>> >>> maintaining simplejson (assuming it would be at all possible). Since
>> >>> both groups of people want to work on separate codebases, I don't see
>> >>> how sharing a single codebase would be possible.
>> >>
>> >> From reading this thread, it seems to me like the proposal is that Bob
>> >> maintains a simplejson for both 2.x and 3.x and that the current
>> >> stdlib json is replaced by a (trivially changed) version of
>> >> simplejson.
>> >
>> > The thing is, we want to bring our own changes to the json module and
>> > its tests (and have already done so, although some have been backported
>> > to simplejson).
>>
>> Depending on what those changes are, would it not be possible to apply
>> the vast majority of them to simplejson itself?
>
> Sure, but the thing is, I don't *think* we are interested in backporting
> stuff to simplejson much more than Bob is interested in porting stuff to
> the json module.

I've backported every useful patch (for 2.x) I noticed from json to
simplejson. Would be happy to apply any that I missed if anyone can
point these out.

> I've contributed a couple of patches myself after they were integrated
> to CPython (they are part of the performance improvements Bob is talking
> about), but that was exceptional. Backporting a patch to another project
> with a different directory structure, a slightly different code, etc. is
> tedious and not very rewarding for us Python core developers, while we
> could do other work on our limited free time.

That's exactly why I am not interested in stdlib maintenance myself; I
only use 2.x and that's frozen... so I can't maintain the version we
would actually use.

> Also, some types of work would be tedious to backport, for example if we
> refactor the tests to test both the C and Python implementations.

simplejson's test suite has tested both for quite some time.

>> Furthermore, now that python uses Mercurial, it should be possible (or
>> even easy) to use a versioned queue (via MQ) for the trivial
>> adaptation, and the temporary alterations (things which will likely be
>> merged back into simplejson but are not yet, stuff like that) should
>> it not?
>
> Perhaps, perhaps not. That would require someone motivated to put it in
> place, ensure that it doesn't get in the way, document it, etc.
> Honestly, I don't think maintaining a single stdlib module should
> require such an amount of logistics.

It certainly shouldn't, especially because neither of them changes very fast.

-bob

From solipsis at pitrou.net  Sat Apr 16 18:37:25 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sat, 16 Apr 2011 18:37:25 +0200
Subject: [Python-Dev] Status of json (simplejson) in cpython
References: <BANLkTind0dSsoesD+tZ-EAqZjpr2m-CuCw@mail.gmail.com>
	<loom.20110416T113016-741@post.gmane.org>
	<20110416161931.089d2014@pitrou.net> <iocen5$sq5$1@dough.gmane.org>
Message-ID: <20110416183725.0bb4a2d0@pitrou.net>

On Sat, 16 Apr 2011 18:04:53 +0200
Stefan Behnel <stefan_ml at behnel.de> wrote:
> 
> Well, if that is not possible, then the CPython devs will have a hard time 
> maintaining the json accelerator module in the long run. I quickly skipped 
> through the github version in simplejson, and it truly is some complicated 
> piece of code. Not in the sense that the code is ununderstandable, it's 
> actually fairly straight forward string processing code, but it's so 
> extremely optimised and tailored and has so much code duplicated for the 
> bytes and unicode types (apparently following the copy+paste+adapt pattern) 
> that it will be pretty hard to adapt to future changes of CPython, 
> especially the upcoming PEP 393 implementation.

Well, first, the Python 3 version doesn't have the duplicated code
since it doesn't accept bytes input. Second, it's not that complicated,
and we have already brought improvements to it, meaning we know the
code ("we" is at least Raymond and I). For example, see
http://bugs.python.org/issue11856 for a pending patch.

> Maintaining this is clearly no fun.

No more than any optimized piece of C code, but no less either.
It's actually quite straightforward compared to other classes such as
TextIOWrapper.

PEP 393 will be a challenge for significant chunks of the interpreter
and extension modules; it's not a json-specific issue.

Regards

Antoine.



From python-dev at masklinn.net  Sat Apr 16 18:45:34 2011
From: python-dev at masklinn.net (Xavier Morel)
Date: Sat, 16 Apr 2011 18:45:34 +0200
Subject: [Python-Dev] Status of json (simplejson) in cpython
In-Reply-To: <1302967554.3490.41.camel@localhost.localdomain>
References: <BANLkTind0dSsoesD+tZ-EAqZjpr2m-CuCw@mail.gmail.com>
	<loom.20110416T113016-741@post.gmane.org>
	<20110416161931.089d2014@pitrou.net>
	<BANLkTimatfEDwk_wKjw8yY99bPH-VnUSCQ@mail.gmail.com>
	<1302965528.3490.26.camel@localhost.localdomain>
	<033B3B0C-441D-4B06-8B44-99168D0BF91B@masklinn.net>
	<1302967554.3490.41.camel@localhost.localdomain>
Message-ID: <D1A7101A-B60A-4652-B60E-9FF1281535DF@masklinn.net>

On 2011-04-16, at 17:25 , Antoine Pitrou wrote:
> On Saturday 16 April 2011 at 17:07 +0200, Xavier Morel wrote:
>> On 2011-04-16, at 16:52 , Antoine Pitrou wrote:
>>> On Saturday 16 April 2011 at 16:42 +0200, Dirkjan Ochtman wrote:
>>>> On Sat, Apr 16, 2011 at 16:19, Antoine Pitrou <solipsis at pitrou.net> wrote:
>>>>> What you're proposing doesn't address the question of who is going to
>>>>> do the ongoing maintenance. Bob apparently isn't interested in
>>>>> maintaining stdlib code, and python-dev members aren't interested in
>>>>> maintaining simplejson (assuming it would be at all possible). Since
>>>>> both groups of people want to work on separate codebases, I don't see
>>>>> how sharing a single codebase would be possible.
>>>> 
>>>> From reading this thread, it seems to me like the proposal is that Bob
>>>> maintains a simplejson for both 2.x and 3.x and that the current
>>>> stdlib json is replaced by a (trivially changed) version of
>>>> simplejson.
>>> 
>>> The thing is, we want to bring our own changes to the json module and
>>> its tests (and have already done so, although some have been backported
>>> to simplejson).
>> 
>> Depending on what those changes are, would it not be possible to apply
>> the vast majority of them to simplejson itself?
> 
> Sure, but the thing is, I don't *think* we are interested in backporting
> stuff to simplejson much more than Bob is interested in porting stuff to
> the json module.
I was mostly thinking it could work the other way around, really: simplejson seems to move slightly faster than the stdlib's json (though it's not a high-churn module either these days), so improvements (from Python and third parties alike) could be applied there first and then forward-ported, rather than the other way around.

> I've contributed a couple of patches myself after they were integrated
> to CPython (they are part of the performance improvements Bob is talking
> about), but that was exceptional. Backporting a patch to another project
> with a different directory structure, a slightly different code, etc. is
> tedious and not very rewarding for us Python core developers, while we
> could do other work on our limited free time.
Sure, I can understand that, but wouldn't it be easier if the two versions were kept in better sync (mostly removing the "slightly different code" part)?

>> Furthermore, now that python uses Mercurial, it should be possible (or
>> even easy) to use a versioned queue (via MQ) for the trivial
>> adaptation, and the temporary alterations (things which will likely be
>> merged back into simplejson but are not yet, stuff like that) should
>> it not?
> Perhaps, perhaps not. That would require someone motivated to put it in
> place, ensure that it doesn't get in the way, document it, etc.
> Honestly, I don't think maintaining a single stdlib module should
> require such an amount of logistics.

I don't think Mercurial queues really amount to much logistics: they take a bit of learning, but fundamentally they're not much work, and they make synchronization with upstream packages much easier. That would (I believe) benefit both projects and, ultimately, language users, by avoiding overly large differences (in both API/features and performance).

I'm thinking of a relation along the lines of Michael Foord's unittest2 (except maybe inverted, in that unittest2 is a backport of the next version's unittest).

From vinay_sajip at yahoo.co.uk  Sat Apr 16 18:47:49 2011
From: vinay_sajip at yahoo.co.uk (Vinay Sajip)
Date: Sat, 16 Apr 2011 16:47:49 +0000 (UTC)
Subject: [Python-Dev] Status of json (simplejson) in cpython
References: <BANLkTind0dSsoesD+tZ-EAqZjpr2m-CuCw@mail.gmail.com>
	<loom.20110416T113016-741@post.gmane.org>
	<20110416161931.089d2014@pitrou.net>
Message-ID: <loom.20110416T181907-569@post.gmane.org>

Hi Antoine,

Antoine Pitrou <solipsis <at> pitrou.net> writes:

> What you're proposing doesn't address the question of who is going to
> do the ongoing maintenance.

I agree, my suggestion is orthogonal to the question of who maintains stdlib
json. But if the json module is languishing in comparison to simplejson, then
bringing the code bases closer together may be worthwhile. I've just been
experimenting with the feasibility of getting simplejson running on Python
3.x, and at present I have it working in the sense of all tests passing on
3.2. 

Bob has said he isn't interested in Python 3, but he has said that "if
someone contributes the code to make simplejson work in Python 3 I'm willing
to apply the patches [and] run the tests against any future changes."

I take this to mean that Bob is undertaking to keep the codebase working in
both 2.x and 3.x in the future (though I'm sure he'll correct me if I've got it
wrong). 

I'm also assuming Bob will be receptive to patches which are functional
improvements added in stdlib json in 3.x, as his comments seem to indicate
that this is the case.

ISTM that for some library maintainers who are invested in 2.x and who don't
have the time or inclination to manage separate 2.x and 3.x codebases, a
common codebase is the way to go. This certainly seems to be the case for pip
and virtualenv, which we recently got running under Python 3 using a common
codebase approach. Certainly, the amount of work required for ongoing
maintenance can be much less, and only a little discipline is needed when
adding new code.

Bob made a comment in passing that simplejson (Python) is about as fast as
stdlib json (C extension), on 2.x. That may or may not prove to be the case on
3.x, but at least it is now possible to run simplejson on 3.x (Python only, so
far) to make a comparison.
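
A minimal, hypothetical way to run such a comparison (assuming simplejson is
installed; whether it uses its C speedups depends on how it was built, and the
document used here is made up):

import timeit

setup = """
import json, simplejson
doc = {"values": list(range(1000)), "text": "x" * 1000}
payload = json.dumps(doc)
"""

for mod in ("json", "simplejson"):
    t = timeit.timeit("%s.loads(payload)" % mod, setup=setup, number=1000)
    print("%-10s %.3fs for 1000 loads" % (mod, t))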

It may be that no-one is willing or able to serve as an effective maintainer
of stdlib json, but assuming that Bob will continue to maintain and improve
simplejson and if an automatic mechanism for converting from a 3.x-compatible
simplejson to json can be made to work, that could be a way forward.

It's obviously early days to see how things will pan out, but it seems worth
exploring the avenue a little further, if Bob is amenable to this approach.

Regards,

Vinay


From solipsis at pitrou.net  Sat Apr 16 19:14:06 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sat, 16 Apr 2011 19:14:06 +0200
Subject: [Python-Dev] Status of json (simplejson) in cpython
In-Reply-To: <D1A7101A-B60A-4652-B60E-9FF1281535DF@masklinn.net>
References: <BANLkTind0dSsoesD+tZ-EAqZjpr2m-CuCw@mail.gmail.com>
	<loom.20110416T113016-741@post.gmane.org>
	<20110416161931.089d2014@pitrou.net>
	<BANLkTimatfEDwk_wKjw8yY99bPH-VnUSCQ@mail.gmail.com>
	<1302965528.3490.26.camel@localhost.localdomain>
	<033B3B0C-441D-4B06-8B44-99168D0BF91B@masklinn.net>
	<1302967554.3490.41.camel@localhost.localdomain>
	<D1A7101A-B60A-4652-B60E-9FF1281535DF@masklinn.net>
Message-ID: <1302974046.3490.57.camel@localhost.localdomain>


> > I've contributed a couple of patches myself after they were integrated
> > to CPython (they are part of the performance improvements Bob is talking
> > about), but that was exceptional. Backporting a patch to another project
> > with a different directory structure, a slightly different code, etc. is
> > tedious and not very rewarding for us Python core developers, while we
> > could do other work on our limited free time.
> Sure, I can understand that, but wouldn't it be easier if the two
> versions were kept in better sync (mostly removing the "slightly
> different code" part)?

You are assuming that we intend to backport all our json patches to
simplejson. I can't speak for other people, but I'm personally not
interested in doing that work (even if you find an "easier" scheme than
the current one).

Also, as Raymond said, it's not much of an issue if json and simplejson
diverge. Bob said he had no interest in porting simplejson to 3.x, while
we don't have any interest in making non-bugfix changes to 2.x json. As
long as basic functionality is identical and compliance to the spec is
ensured, I think most common uses are covered by both libraries.

So, unless you manage to find a scheme where porting patches is almost
zero-cost (for either us or Bob), I don't think it will happen.

> I'm thinking of a relation along the lines of Michael Foord's
> unittest2 (except maybe inverted, in that unittest2 is a backport of a
> next version's unittest)

Well, the big difference here is that Michael maintains both the stdlib
version and the standalone project, meaning he's committed to avoid any
divergence between the two codebases.

Regards

Antoine.



From solipsis at pitrou.net  Sat Apr 16 19:27:36 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sat, 16 Apr 2011 19:27:36 +0200
Subject: [Python-Dev] Status of json (simplejson) in cpython
References: <BANLkTind0dSsoesD+tZ-EAqZjpr2m-CuCw@mail.gmail.com>
	<loom.20110416T113016-741@post.gmane.org>
	<20110416161931.089d2014@pitrou.net>
	<loom.20110416T181907-569@post.gmane.org>
Message-ID: <20110416192736.1f9dd279@pitrou.net>

On Sat, 16 Apr 2011 16:47:49 +0000 (UTC)
Vinay Sajip <vinay_sajip at yahoo.co.uk> wrote:
> 
> > What you're proposing doesn't address the question of who is going to
> > do the ongoing maintenance.
> 
> I agree, my suggestion is orthogonal to the question of who maintains stdlib
> json.

No, that's not what I'm talking about. The json module *is* maintained
(take a look at "hg log"), even though it may be less active than
simplejson (but simplejson doesn't receive many changes either).

I am talking about maintenance of the "shared codebase" you are talking
about. Mandating a single codebase between two different languages
(Python 2 and Python 3) and two different libraries (json and
simplejson) comes at a high maintenance cost, and it's not obvious in
your proposal who will bear that cost in the long run (you?). It is not
a one-time cost, but an ongoing one.

> Bob has said he isn't interested in Python 3, but he has said that "if
> someone contributes the code to make simplejson work in Python 3 I'm willing
> to apply the patches run the tests against any future changes."

I can't speak for Bob, but this assumes the patches are not invasive
and don't degrade performance. It's not obvious that will be the case.

> Bob made a comment in passing that simplejson (Python) is about as fast as
> stdlib json (C extension), on 2.x.

I think Bob tested with an outdated version of the stdlib json module
(2.6 or 2.7, perhaps). In my latest measurements, the 3.2 json C module
is as fast as the C simplejson module, the only difference being in
parsing of numbers, which is addressed in
http://bugs.python.org/issue11856

> That may or may not prove to be the case on
> 3.x, but at least it is now possible to run simplejson on 3.x (Python only, so
> far) to make a comparison.

Feel free to share your numbers.

Regards

Antoine.



From martin at v.loewis.de  Sat Apr 16 20:40:18 2011
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Sat, 16 Apr 2011 20:40:18 +0200
Subject: [Python-Dev] Status of json (simplejson) in cpython
In-Reply-To: <loom.20110416T181907-569@post.gmane.org>
References: <BANLkTind0dSsoesD+tZ-EAqZjpr2m-CuCw@mail.gmail.com>	<loom.20110416T113016-741@post.gmane.org>	<20110416161931.089d2014@pitrou.net>
	<loom.20110416T181907-569@post.gmane.org>
Message-ID: <4DA9E292.20805@v.loewis.de>

> I agree, my suggestion is orthogonal to the question of who maintains stdlib
> json. But if the json module is languishing in comparison to simplejson, then
> bringing the code bases closer together may be worthwhile.

Right: *if* the module is languishing. But it's not. It just diverges.

> It may be that no-one is willing or able to serve as an effective maintainer
> of stdlib json, but assuming that Bob will continue to maintain and improve
> simplejson

Does it actually need improvement?

Regards,
Martin

From vinay_sajip at yahoo.co.uk  Sat Apr 16 21:13:58 2011
From: vinay_sajip at yahoo.co.uk (Vinay Sajip)
Date: Sat, 16 Apr 2011 19:13:58 +0000 (UTC)
Subject: [Python-Dev] Status of json (simplejson) in cpython
References: <BANLkTind0dSsoesD+tZ-EAqZjpr2m-CuCw@mail.gmail.com>	<loom.20110416T113016-741@post.gmane.org>	<20110416161931.089d2014@pitrou.net>
	<loom.20110416T181907-569@post.gmane.org>
	<4DA9E292.20805@v.loewis.de>
Message-ID: <loom.20110416T211116-728@post.gmane.org>

Martin v. Löwis <martin <at> v.loewis.de> writes:

> Does it actually need improvement?

I can't actually say, but I assume it keeps changing for the better - albeit
slowly. I wasn't thinking of specific improvements, just the idea of continuous
improvement in general...

Regards,

Vinay Sajip




From brett at python.org  Sat Apr 16 22:57:09 2011
From: brett at python.org (Brett Cannon)
Date: Sat, 16 Apr 2011 13:57:09 -0700
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
	Compatibiilty Requirements
In-Reply-To: <BANLkTin5A_tJ+gCxXuA2xVWhMAghBb1dqw@mail.gmail.com>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<BANLkTin5A_tJ+gCxXuA2xVWhMAghBb1dqw@mail.gmail.com>
Message-ID: <BANLkTimVysOh9BdB1vNKHg5LHR3jq3RFyw@mail.gmail.com>

In the grand python-dev tradition of "silence means acceptance", I consider
this PEP finalized and implicitly accepted.

On Tue, Apr 12, 2011 at 15:07, Brett Cannon <brett at python.org> wrote:

> Here is the next draft of the PEP. I changed the semantics requirement to
> state that 100% branch coverage is required for any Python code that is
> being replaced by accelerated C code instead of the broad "must be
> semantically equivalent". Also tweaked wording here and there to make
> certain things more obvious.
>
> ----------------------------------
>
> PEP: 399
> Title: Pure Python/C Accelerator Module Compatibility Requirements
>
> Version: $Revision: 88219 $
> Last-Modified: $Date: 2011-01-27 13:47:00 -0800 (Thu, 27 Jan 2011) $
> Author: Brett Cannon <brett at python.org>
> Status: Draft
> Type: Informational
> Content-Type: text/x-rst
> Created: 04-Apr-2011
> Python-Version: 3.3
> Post-History: 04-Apr-2011, 12-Apr-2011
>
>
> Abstract
> ========
>
> The Python standard library under CPython contains various instances
> of modules implemented in both pure Python and C (either entirely or
> partially). This PEP requires that in these instances the
> C code *must* pass the test suite used for the pure Python code
> so as to act as a drop-in replacement as much as possible
> (C- and VM-specific tests are exempt). It is also required that new
> C-based modules lacking a pure Python equivalent implementation get
> special permission to be added to the standard library.
>
>
> Rationale
> =========
>
> Python has grown beyond the CPython virtual machine (VM). IronPython_,
> Jython_, and PyPy_ are all currently viable alternatives to the
> CPython VM. This VM ecosystem that has sprung up around the Python
> programming language has led to Python being used in many different
> areas where CPython cannot be used, e.g., Jython allowing Python to be
> used in Java applications.
>
> A problem all of the VMs other than CPython face is handling modules
> from the standard library that are implemented (to some extent) in C.
>
> Since they do not typically support the entire `C API of Python`_ they
> are unable to use the code used to create the module. Often times this
> leads these other VMs to either re-implement the modules in pure
> Python or in the programming language used to implement the VM
> (e.g., in C# for IronPython). This duplication of effort between
> CPython, PyPy, Jython, and IronPython is extremely unfortunate as
> implementing a module *at least* in pure Python would help mitigate
> this duplicate effort.
>
> The purpose of this PEP is to minimize this duplicate effort by
> mandating that all new modules added to Python's standard library
> *must* have a pure Python implementation _unless_ special dispensation
> is given. This makes sure that a module in the stdlib is available to
> all VMs and not just to CPython (pre-existing modules that do not meet
> this requirement are exempt, although there is nothing preventing
> someone from adding in a pure Python implementation retroactively).
>
>
> Re-implementing parts (or all) of a module in C (in the case
> of CPython) is still allowed for performance reasons, but any such
> accelerated code must pass the same test suite (sans VM- or C-specific
> tests) to verify semantics and prevent divergence. To accomplish this,
> the test suite for the module must have 100% branch coverage of the
> pure Python implementation before the acceleration code may be added.
>
> This is to prevent users from accidentally relying
> on semantics that are specific to the C code and are not reflected in
> the pure Python implementation that other VMs rely upon. For example,
> in CPython 3.2.0, ``heapq.heappop()`` does an explicit type
> check in its accelerated C code while the Python code uses duck
> typing::
>
>
>     from test.support import import_fresh_module
>
>     c_heapq = import_fresh_module('heapq', fresh=['_heapq'])
>     py_heapq = import_fresh_module('heapq', blocked=['_heapq'])
>
>
>     class Spam:
>         """Tester class which defines no other magic methods but
>         __len__()."""
>         def __len__(self):
>             return 0
>
>
>     try:
>         c_heapq.heappop(Spam())
>     except TypeError:
>         # Explicit type check failure: "heap argument must be a list"
>         pass
>
>     try:
>         py_heapq.heappop(Spam())
>     except AttributeError:
>         # Duck typing failure: "'Spam' object has no attribute 'pop'"
>         pass
>
> This kind of divergence is a problem for users as they unwittingly
> write code that is CPython-specific. This is also an issue for other
> VM teams as they have to deal with bug reports from users thinking
> that they incorrectly implemented the module when in fact it was
> caused by an untested case.
>
>
> Details
> =======
>
> Starting in Python 3.3, any modules added to the standard library must
> have a pure Python implementation. This rule can only be ignored if
> the Python development team grants a special exemption for the module.
> Typically the exemption will be granted only when a module wraps a
> specific C-based library (e.g., sqlite3_). In granting an exemption it
> will be recognized that the module will be considered exclusive to
> CPython and not part of Python's standard library that other VMs are
> expected to support. Usage of ``ctypes`` to provide an
> API for a C library will continue to be frowned upon as ``ctypes``
> lacks compiler guarantees that C code typically relies upon to prevent
> certain errors from occurring (e.g., API changes).
>
> Even though a pure Python implementation is mandated by this PEP, it
> does not preclude the use of a companion acceleration module. If an
> acceleration module is provided it is to be named the same as the
> module it is accelerating with an underscore attached as a prefix,
> e.g., ``_warnings`` for ``warnings``. The common pattern to access
> the accelerated code from the pure Python implementation is to import
> it with an ``import *``, e.g., ``from _warnings import *``. This is
> typically done at the end of the module to allow it to overwrite
> specific Python objects with their accelerated equivalents. This kind
> of import can also be done before the end of the module when needed,
> e.g., an accelerated base class is provided but is then subclassed by
> Python code. This PEP does not mandate that pre-existing modules in
> the stdlib that lack a pure Python equivalent gain such a module. But
> if people do volunteer to provide and maintain a pure Python
> equivalent (e.g., the PyPy team volunteering their pure Python
> implementation of the ``csv`` module and maintaining it) then such
> code will be accepted.
>
> This requirement does not apply to modules already existing as only C
> code in the standard library. It is acceptable to retroactively add a
> pure Python implementation of a module implemented entirely in C, but
> in those instances the C version is considered the reference
> implementation in terms of expected semantics.
>
> Any new accelerated code must act as a drop-in replacement as close
> to the pure Python implementation as reasonable. Technical details of
> the VM providing the accelerated code are allowed to differ as
> necessary, e.g., a class being a ``type`` when implemented in C. To
> verify that the Python and equivalent C code operate as similarly as
> possible, both code bases must be tested using the same tests which
> apply to the pure Python code (tests specific to the C code or any VM
> do not fall under this requirement). To make sure that the test
> suite is thorough enough to cover all relevant semantics, the tests
> must have 100% branch coverage for the Python code being replaced by
> C code. This will make sure that the new acceleration code will
> operate as much like a drop-in replacement for the Python code as
> possible. Testing should still be done for issues that come up when
> working with C code even if it is not explicitly required to meet the
> coverage requirement, e.g., tests should be aware that C code typically
> has special paths for things such as built-in types, subclasses of
> built-in types, etc.
>
> Acting as a drop-in replacement also dictates that no public API be
> provided in accelerated code that does not exist in the pure Python
> code. Without this requirement people could accidentally come to rely
> on a detail in the accelerated code which is not made available to
> other VMs that use the pure Python implementation. To help verify
> that the contract of semantic equivalence is being met, a module must
> be tested both with and without its accelerated code as thoroughly as
> possible.
>
> As an example, to write tests which exercise both the pure Python and
> C accelerated versions of a module, a basic idiom can be followed::
>
>
>     import collections.abc
>     from test.support import import_fresh_module, run_unittest
>     import unittest
>
>     c_heapq = import_fresh_module('heapq', fresh=['_heapq'])
>     py_heapq = import_fresh_module('heapq', blocked=['_heapq'])
>
>
>     class ExampleTest(unittest.TestCase):
>
>         def test_heappop_exc_for_non_MutableSequence(self):
>             # Raise TypeError when heap is not a
>             # collections.abc.MutableSequence.
>             class Spam:
>                 """Test class lacking many ABC-required methods
>                 (e.g., pop())."""
>                 def __len__(self):
>                     return 0
>
>             heap = Spam()
>             self.assertFalse(isinstance(heap,
>                                 collections.abc.MutableSequence))
>             with self.assertRaises(TypeError):
>                 self.heapq.heappop(heap)
>
>
>     class AcceleratedExampleTest(ExampleTest):
>
>         """Test using the accelerated code."""
>
>
>         heapq = c_heapq
>
>
>     class PyExampleTest(ExampleTest):
>
>         """Test with just the pure Python code."""
>
>         heapq = py_heapq
>
>
>     def test_main():
>         run_unittest(AcceleratedExampleTest, PyExampleTest)
>
>
>     if __name__ == '__main__':
>         test_main()
>
>
> If this test were to provide 100% branch coverage for
> ``heapq.heappop()`` in the pure Python implementation then the
> accelerated C code would be allowed to be added to CPython's standard
> library. If it did not, then the test suite would need to be updated
> until 100% branch coverage was provided before the accelerated C code
> could be added.
>
>
>
> Copyright
> =========
>
> This document has been placed in the public domain.
>
>
> .. _IronPython: http://ironpython.net/
> .. _Jython: http://www.jython.org/
> .. _PyPy: http://pypy.org/
> .. _C API of Python: http://docs.python.org/py3k/c-api/index.html
> .. _sqlite3: http://docs.python.org/py3k/library/sqlite3.html
>
>
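
To make the accelerator-module pattern described in the PEP concrete, here is
a minimal sketch (the module names spam/_spam are hypothetical, not from the
PEP):

# Lib/spam.py -- pure Python implementation, fully covered by the tests.

def ham(x):
    """Pure Python version of the function."""
    return x * 2

# At the very end of the module, optionally replace names with accelerated
# equivalents if the C companion module is available.
try:
    from _spam import *    # hypothetical C implementation of ham()
except ImportError:
    pass
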
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110416/28c1e5f7/attachment-0001.html>

From stefan at bytereef.org  Sat Apr 16 23:23:52 2011
From: stefan at bytereef.org (Stefan Krah)
Date: Sat, 16 Apr 2011 23:23:52 +0200
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator
	Module	Compatibiilty Requirements
In-Reply-To: <BANLkTimVysOh9BdB1vNKHg5LHR3jq3RFyw@mail.gmail.com>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<BANLkTin5A_tJ+gCxXuA2xVWhMAghBb1dqw@mail.gmail.com>
	<BANLkTimVysOh9BdB1vNKHg5LHR3jq3RFyw@mail.gmail.com>
Message-ID: <20110416212352.GA19573@sleipnir.bytereef.org>

Brett Cannon <brett at python.org> wrote:
> In the grand python-dev tradition of "silence means acceptance", I consider
> this PEP finalized and implicitly accepted.

I did not really see an answer to these concerns:

http://mail.python.org/pipermail/python-dev/2011-April/110672.html
http://mail.python.org/pipermail/python-dev/2011-April/110675.html



Stefan Krah



From martin at v.loewis.de  Sat Apr 16 23:28:20 2011
From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=)
Date: Sat, 16 Apr 2011 23:28:20 +0200
Subject: [Python-Dev] Status of json (simplejson) in cpython
In-Reply-To: <loom.20110416T211116-728@post.gmane.org>
References: <BANLkTind0dSsoesD+tZ-EAqZjpr2m-CuCw@mail.gmail.com>	<loom.20110416T113016-741@post.gmane.org>	<20110416161931.089d2014@pitrou.net>	<loom.20110416T181907-569@post.gmane.org>	<4DA9E292.20805@v.loewis.de>
	<loom.20110416T211116-728@post.gmane.org>
Message-ID: <4DAA09F4.3000001@v.loewis.de>

On 16.04.2011 21:13, Vinay Sajip wrote:
> Martin v. Löwis <martin <at> v.loewis.de> writes:
> 
>> Does it actually need improvement?
> 
> I can't actually say, but I assume it keeps changing for the better - albeit
> slowly. I wasn't thinking of specific improvements, just the idea of continuous
> improvement in general...

Hmm. I cannot believe in the notion of "continuous improvement"; I'd
guess that it is rather "continuous change".

I can see three possible areas of improvement:
1. Bugs: if there are any, they should clearly be fixed. However, JSON
   is a simple format, so the implementation should be able to converge
   to something fairly correct quickly.
2. Performance: there is always room for performance improvements.
   However, I strongly recommend to not bother unless a severe
   bottleneck can be demonstrated.
3. API changes: people apparently want JSON to be more flexible wrt.
   Python types that are not directly supported in JSON. I'd rather take
   a conservative approach here, involving a lot of people before adding
   an API feature or even an incompatibility.

Regards,
Martin

From brett at python.org  Sat Apr 16 23:45:52 2011
From: brett at python.org (Brett Cannon)
Date: Sat, 16 Apr 2011 14:45:52 -0700
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
 Compatibiilty Requirements
In-Reply-To: <20110416212352.GA19573@sleipnir.bytereef.org>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<BANLkTin5A_tJ+gCxXuA2xVWhMAghBb1dqw@mail.gmail.com>
	<BANLkTimVysOh9BdB1vNKHg5LHR3jq3RFyw@mail.gmail.com>
	<20110416212352.GA19573@sleipnir.bytereef.org>
Message-ID: <BANLkTinaXQv+A_G-LrbPpiHj6WHK8TVT-A@mail.gmail.com>

On Sat, Apr 16, 2011 at 14:23, Stefan Krah <stefan at bytereef.org> wrote:

> Brett Cannon <brett at python.org> wrote:
> > In the grand python-dev tradition of "silence means acceptance", I
> consider
> > this PEP finalized and implicitly accepted.
>
> I did not really see an answer to these concerns:
>
> http://mail.python.org/pipermail/python-dev/2011-April/110672.html
>

Antoine doesn't seem sold on the 100% branch coverage requirement and views it
as pointless. I disagree. =)

As for the exception Stefan is saying may be granted, that is not in the PEP
so I consider it unimportant. If we really feel the desire to grant an
exception we can (since we can break any of our own rules that we
collectively choose to), but I'm assuming we won't.


> http://mail.python.org/pipermail/python-dev/2011-April/110675.html
>

Raymond thinks that having a testing requirement conflates having
implementations match vs. APIs. Well, as we all know, the stdlib ends up
having its implementation details relied upon constantly by people whether
they mean to or not,  so making sure that this is properly tested helps deal
with this known reality.

This is a damned-if-you-do-damned-if-you-don't situation. The first draft of
this PEP said "semantically equivalent w/ divergence where technically
required", but I got pushback for being too wishy-washy w/ a lack of concrete
details. So I introduced a concrete metric that some are accusing of being
inaccurate for the goals of the PEP. I'm screwed or I'm screwed. =) So I am
choosing to go with the one that has a side benefit of also increasing test
coverage.

Now if people would actually support simply not accepting any more C modules
into the Python stdlib (this does not apply to CPython's stdlib), then I'm
all for that. I only went with the "accelerator modules are okay" route to
help get acceptance for the PEP. But if people are willing to go down a more
stringent route and say that any module which uses new C code is considered
CPython-specific and thus any acceptance of such modules will be damn hard
to accomplish as it will marginalize the value of the code, that's fine by
me.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110416/f8998e0f/attachment.html>

From solipsis at pitrou.net  Sat Apr 16 23:54:58 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sat, 16 Apr 2011 23:54:58 +0200
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
 Compatibiilty Requirements
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<BANLkTin5A_tJ+gCxXuA2xVWhMAghBb1dqw@mail.gmail.com>
	<BANLkTimVysOh9BdB1vNKHg5LHR3jq3RFyw@mail.gmail.com>
	<20110416212352.GA19573@sleipnir.bytereef.org>
	<BANLkTinaXQv+A_G-LrbPpiHj6WHK8TVT-A@mail.gmail.com>
Message-ID: <20110416235458.675ee5a7@pitrou.net>

On Sat, 16 Apr 2011 14:45:52 -0700
Brett Cannon <brett at python.org> wrote:
> On Sat, Apr 16, 2011 at 14:23, Stefan Krah <stefan at bytereef.org> wrote:
> 
> > Brett Cannon <brett at python.org> wrote:
> > > In the grand python-dev tradition of "silence means acceptance", I
> > consider
> > > this PEP finalized and implicitly accepted.
> >
> > I did not really see an answer to these concerns:
> >
> > http://mail.python.org/pipermail/python-dev/2011-April/110672.html
> >
> 
> Antoine does not seem sold on the 100% branch coverage requirement and views
> it as pointless.

Not really. I think this is an unreasonable requirement because of the
reasons I've stated in my previous messages :)
If you rephrase it to remove the "100% coverage" requirement and
replace it by something like "comprehensive coverage", then I'm ok.

> Now if people would actually support simply not accepting any more C modules
> into the Python stdlib (this does not apply to CPython's stdlib), then I'm
> all for that.

Hmm, what's the difference between "the Python stdlib" and "CPython's
stdlib"?

I'm also not sure how you would enforce that anyway. If it means
using ctypes to interface with system C libraries, I'm -10 on it :)

Regards

Antoine.



From vinay_sajip at yahoo.co.uk  Sat Apr 16 23:55:51 2011
From: vinay_sajip at yahoo.co.uk (Vinay Sajip)
Date: Sat, 16 Apr 2011 21:55:51 +0000 (UTC)
Subject: [Python-Dev] Status of json (simplejson) in cpython
References: <BANLkTind0dSsoesD+tZ-EAqZjpr2m-CuCw@mail.gmail.com>	<loom.20110416T113016-741@post.gmane.org>	<20110416161931.089d2014@pitrou.net>	<loom.20110416T181907-569@post.gmane.org>	<4DA9E292.20805@v.loewis.de>
	<loom.20110416T211116-728@post.gmane.org>
	<4DAA09F4.3000001@v.loewis.de>
Message-ID: <loom.20110416T235249-532@post.gmane.org>

Martin v. Löwis <martin <at> v.loewis.de> writes:

> I can see three possible areas of improvement:
> 1. Bugs: if there are any, they should clearly be fixed. However, JSON
>    is a simple format, so the implementation should be able to converge
>    to something fairly correct quickly.
> 2. Performance: there is always room for performance improvements.
>    However, I strongly recommend to not bother unless a severe
>    bottleneck can be demonstrated.
> 3. API changes: people apparently want JSON to be more flexible wrt.
>    Python types that are not directly supported in JSON. I'd rather take
>    a conservative approach here, involving a lot of people before adding
>    an API feature or even an incompatibility.

I agree with all these points, though I was only thinking of Nos. 1 and 2. Over
a longer timeframe, improvements may also come with changes in the spec
(unlikely in the short and medium term, but you never know in the long term).

Regards,

Vinay Sajip


From ericsnowcurrently at gmail.com  Sun Apr 17 00:22:48 2011
From: ericsnowcurrently at gmail.com (Eric Snow)
Date: Sat, 16 Apr 2011 16:22:48 -0600
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
 Compatibiilty Requirements
In-Reply-To: <20110416235458.675ee5a7@pitrou.net>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<BANLkTin5A_tJ+gCxXuA2xVWhMAghBb1dqw@mail.gmail.com>
	<BANLkTimVysOh9BdB1vNKHg5LHR3jq3RFyw@mail.gmail.com>
	<20110416212352.GA19573@sleipnir.bytereef.org>
	<BANLkTinaXQv+A_G-LrbPpiHj6WHK8TVT-A@mail.gmail.com>
	<20110416235458.675ee5a7@pitrou.net>
Message-ID: <BANLkTimo=v=tJXvkNPmEQSQOmYw0vdaAtg@mail.gmail.com>

On Sat, Apr 16, 2011 at 3:54 PM, Antoine Pitrou <solipsis at pitrou.net> wrote:

>
> Hmm, what's the difference between "the Python stdlib" and "CPython's
> stdlib"?
>
> I'm also not sure how you would enforce that anyway. If it means
> using ctypes to interface with system C libraries, I'm -10 on it :)
>
>
Sounds like Brett is talking about the distinction apparently discussed at
the language summit ("Standalone Standard Library"):

http://blog.python.org/2011/03/2011-language-summit-report.html

-eric


> Regards
>
> Antoine.
>
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/ericsnowcurrently%40gmail.com
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110416/24d21c42/attachment.html>

From fuzzyman at voidspace.org.uk  Sun Apr 17 00:48:45 2011
From: fuzzyman at voidspace.org.uk (Michael Foord)
Date: Sat, 16 Apr 2011 23:48:45 +0100
Subject: [Python-Dev] Status of json (simplejson) in cpython
In-Reply-To: <4DAA09F4.3000001@v.loewis.de>
References: <BANLkTind0dSsoesD+tZ-EAqZjpr2m-CuCw@mail.gmail.com>	<loom.20110416T113016-741@post.gmane.org>	<20110416161931.089d2014@pitrou.net>	<loom.20110416T181907-569@post.gmane.org>	<4DA9E292.20805@v.loewis.de>	<loom.20110416T211116-728@post.gmane.org>
	<4DAA09F4.3000001@v.loewis.de>
Message-ID: <4DAA1CCD.60805@voidspace.org.uk>

On 16/04/2011 22:28, "Martin v. Löwis" wrote:
> On 16.04.2011 21:13, Vinay Sajip wrote:
>> Martin v. Löwis<martin<at>  v.loewis.de>  writes:
>>
>>> Does it actually need improvement?
>> I can't actually say, but I assume it keeps changing for the better - albeit
>> slowly. I wasn't thinking of specific improvements, just the idea of continuous
>> improvement in general...
> Hmm. I cannot believe in the notion of "continuous improvement"; I'd
> guess that it is rather "continuous change".
>
> I can see three possible areas of improvement:
> 1. Bugs: if there are any, they should clearly be fixed. However, JSON
>     is a simple format, so the implementation should be able to converge
>     to something fairly correct quickly.
> 2. Performance: there is always room for performance improvements.
>     However, I strongly recommend to not bother unless a severe
>     bottleneck can be demonstrated.
Well, there was a 5x speedup demonstrated comparing simplejson to the 
standard library json module. That sounds *very much* worth pursuing (and 
crazy not to pursue). I've had json serialisation be the bottleneck in 
web applications generating several megabytes of json for some requests.

All the best,

Michael Foord
> 3. API changes: people apparently want JSON to be more flexible wrt.
>     Python types that are not directly supported in JSON. I'd rather take
>     a conservative approach here, involving a lot of people before adding
>     an API feature or even an incompatibility.
>
> Regards,
> Martin
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk


-- 
http://www.voidspace.org.uk/

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing http://www.sqlite.org/different.html


From steve at pearwood.info  Sun Apr 17 01:03:36 2011
From: steve at pearwood.info (Steven D'Aprano)
Date: Sun, 17 Apr 2011 09:03:36 +1000
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator
 Module	Compatibiilty Requirements
In-Reply-To: <BANLkTimVysOh9BdB1vNKHg5LHR3jq3RFyw@mail.gmail.com>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>	<BANLkTin5A_tJ+gCxXuA2xVWhMAghBb1dqw@mail.gmail.com>
	<BANLkTimVysOh9BdB1vNKHg5LHR3jq3RFyw@mail.gmail.com>
Message-ID: <4DAA2048.4090207@pearwood.info>

Brett Cannon wrote:
> In the grand python-dev tradition of "silence means acceptance", I consider
> this PEP finalized and implicitly accepted.

How long does that silence have to last?

I didn't notice a definition of what counts as "100% branch coverage". 
Apologies if I merely failed to notice it, but I think it should be 
explicitly defined.

Presumably it means that any time you have an explicit branch 
(if...elif...else, try...except...else, for...else, etc.) you need a 
test that goes down each branch. But it isn't clear to me whether it's 
sufficient to test each branch in isolation, or whether you need to test 
all combinations.

That is, if you have five branches, A or B, C or D, E or F, G or H, I or 
J, within a single code unit (function? something else?), is it 
sufficient to have at least one test that goes down each of A...J, or do 
you need to explicitly test each of:

A-C-E-G-I
A-C-E-G-J
A-C-E-H-I
A-C-E-H-J
A-C-F-G-I
...
B-D-F-H-J

(10 tests versus 32 tests).

If the latter, this could become impractical *very* fast. But if not, I 
don't see how we can claim 100% coverage when there are code paths which 
are never tested.

At the very least, I think you need to explicitly define what you mean 
by "100% branch coverage". Possibly this will assist in the disagreement 
between you and Antoine re "100%" versus "comprehensive" coverage.



-- 
Steven

From solipsis at pitrou.net  Sun Apr 17 01:16:34 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sun, 17 Apr 2011 01:16:34 +0200
Subject: [Python-Dev] Status of json (simplejson) in cpython
References: <BANLkTind0dSsoesD+tZ-EAqZjpr2m-CuCw@mail.gmail.com>
	<loom.20110416T113016-741@post.gmane.org>
	<20110416161931.089d2014@pitrou.net>
	<loom.20110416T181907-569@post.gmane.org>
	<4DA9E292.20805@v.loewis.de>
	<loom.20110416T211116-728@post.gmane.org>
	<4DAA09F4.3000001@v.loewis.de> <4DAA1CCD.60805@voidspace.org.uk>
Message-ID: <20110417011634.142092b2@pitrou.net>

On Sat, 16 Apr 2011 23:48:45 +0100
Michael Foord <fuzzyman at voidspace.org.uk> wrote:

> On 16/04/2011 22:28, "Martin v. Löwis" wrote:
> > On 16.04.2011 21:13, Vinay Sajip wrote:
> >> Martin v. Löwis<martin<at>  v.loewis.de>  writes:
> >>
> >>> Does it actually need improvement?
> >> I can't actually say, but I assume it keeps changing for the better - albeit
> >> slowly. I wasn't thinking of specific improvements, just the idea of continuous
> >> improvement in general...
> > Hmm. I cannot believe in the notion of "continuous improvement"; I'd
> > guess that it is rather "continuous change".
> >
> > I can see three possible areas of improvement:
> > 1. Bugs: if there are any, they should clearly be fixed. However, JSON
> >     is a simple format, so the implementation should be able to converge
> >     to something fairly correct quickly.
> > 2. Performance: there is always room for performance improvements.
> >     However, I strongly recommend to not bother unless a severe
> >     bottleneck can be demonstrated.
> Well, there was a 5x speedup demonstrated comparing simplejson to the 
> standard library json module.

No.



From matt at vazor.com  Sun Apr 17 00:47:29 2011
From: matt at vazor.com (Matt Billenstein)
Date: Sat, 16 Apr 2011 22:47:29 +0000
Subject: [Python-Dev] Status of json (simplejson) in cpython
Message-ID: <4+paau3tclbkacaeaa2abyla42kez3oglkglau7kaf3w7fluhovwpncmystvht6sswtpoj7ikuj42gzmxtcu5vdyqq5uceswtdd6zfpck3s4fgwskem3kbnuljq6ypzayx27gb6yy=+465659@messaging-master.com>

On Sat, Apr 16, 2011 at 01:30:13PM +0200, Antoine Pitrou wrote:
> On Sat, 16 Apr 2011 00:41:03 +0000
> Matt Billenstein <matt at vazor.com> wrote:
> > 
> > Slightly less crude benchmark showing simplejson is quite a bit faster:
> > 
> > http://pastebin.com/g1WqUPwm
> > 
> > 250ms vs 5.5s encoding and decoding an 11KB json object 1000 times...
> 
> This doesn't have much value if you don't say which version of Python
> you ran json with. You should use 3.2, otherwise you might miss some
> optimizations.

Yes, that was 2.6.5 -- 3.2 native json is comparable to simplejson here taking
about 330ms...
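
For anyone who wants to reproduce this kind of comparison, here is a rough
sketch of the timing loop (not the pastebin script itself; the payload below
is just a stand-in for the 11KB object, and simplejson may not be installed):

import json
import timeit

try:
    import simplejson
except ImportError:
    simplejson = None

# Stand-in payload; the real test used an ~11KB JSON document.
payload = {"key%d" % i: list(range(20)) for i in range(50)}

def roundtrip(mod):
    mod.loads(mod.dumps(payload))

print("stdlib json:", timeit.timeit(lambda: roundtrip(json), number=1000))
if simplejson is not None:
    print("simplejson:", timeit.timeit(lambda: roundtrip(simplejson), number=1000))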

m

-- 
Matt Billenstein
matt at vazor.com
http://www.vazor.com/

From steve at pearwood.info  Sun Apr 17 03:48:07 2011
From: steve at pearwood.info (Steven D'Aprano)
Date: Sun, 17 Apr 2011 11:48:07 +1000
Subject: [Python-Dev] python and super
In-Reply-To: <4DA84DD6.20608@voidspace.org.uk>
References: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>	<70EF2C52-1D92-4351-884E-52AF76BAAC6D@mac.com>	<4DA70ACA.4070204@voidspace.org.uk>	<20110414153503.F125B3A4063@sparrow.telecommunity.com>	<825E6CD5-8673-463B-92AE-59677C327C0A@gmail.com>	<4DA71C63.3030809@voidspace.org.uk>	<8A4A58EF-70F7-4F2F-8564-AE8611713986@mac.com>	<BANLkTincGnYrc48jzQfNrc_HFidohJXvwA@mail.gmail.com>
	<4DA79E28.2060406@pearwood.info> <4DA84DD6.20608@voidspace.org.uk>
Message-ID: <4DAA46D7.1020500@pearwood.info>

Michael Foord wrote:
> On 15/04/2011 02:23, Steven D'Aprano wrote:
[...]
>> If we treat django's failure to use super as a bug, you want the 
>> Python language to work-around that bug so that:
> 
> What you say (that this particular circumstance could be treated as a 
> bug in django) is true, however consider the "recently" introduced 
> problem caused by object.__init__ not taking arguments. This makes it 
> impossible to use super correctly in various circumstances.
[...]
> It is impossible to inherit from both C and A and have all parent 
> __init__ methods called correctly. Changing the semantics of super as 
> described would fix this problem.

So you say. I don't have an an opinion on whether or not you are 
technically correct, but adding DWIM black-magic to super scares me. It 
scares me even if it were guaranteed to *only* apply to __init__, but if 
it applied to arbitrary methods, it frankly terrifies me.

If it were limited to only apply to __init__, there would be a constant 
stream of requests that we loosen the restriction and "make super just 
work" for all methods, despite the dangers of DWIM code.




-- 
Steven


From raymond.hettinger at gmail.com  Sun Apr 17 04:19:32 2011
From: raymond.hettinger at gmail.com (Raymond Hettinger)
Date: Sat, 16 Apr 2011 19:19:32 -0700
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
	Compatibiilty Requirements
In-Reply-To: <BANLkTinaXQv+A_G-LrbPpiHj6WHK8TVT-A@mail.gmail.com>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<BANLkTin5A_tJ+gCxXuA2xVWhMAghBb1dqw@mail.gmail.com>
	<BANLkTimVysOh9BdB1vNKHg5LHR3jq3RFyw@mail.gmail.com>
	<20110416212352.GA19573@sleipnir.bytereef.org>
	<BANLkTinaXQv+A_G-LrbPpiHj6WHK8TVT-A@mail.gmail.com>
Message-ID: <4862031C-A420-41A5-82B0-713262407802@gmail.com>


On Apr 16, 2011, at 2:45 PM, Brett Cannon wrote:

> 
> 
> On Sat, Apr 16, 2011 at 14:23, Stefan Krah <stefan at bytereef.org> wrote:
> Brett Cannon <brett at python.org> wrote:
> > In the grand python-dev tradition of "silence means acceptance", I consider
> > this PEP finalized and implicitly accepted.

I haven't seen any responses that said, yes this is a well thought-out proposal that will actually benefit any of the various implementations.

Almost none of the concerns that have been raised have been addressed.  Does the PEP only apply to purely algorithmic modules such as heapq or does it apply to anything written in C (like an xz compressor, for example)?  Does testing every branch in a given implementation now guarantee every implementation detail or do we only promise the published API (historically, we've *always* done the latter)?  Is there going to be any guidance on the commonly encountered semantic differences between C modules and their Python counterparts (thread-safety, argument handling, tracebacks, all possible exceptions, monkey-patchable pure Python classes versus hard-wired C types, etc.)?

The PEP seems to be predicated on a notion that anything written in C is bad and that all testing is good.  AFAICT, it doesn't provide any practical advice to someone pursuing a non-trivial project (such as decimal or threading).  The PEP mostly seems to be about discouraging any further work in C.  If that's the case, it should just come out and say it rather than tangentially introducing ambiguous testing requirements that don't make a lot of sense.

The PEP also makes some unsupported claims about saving labor.  My understanding is that IronPython and Jython tend to re-implement modules using native constructs.  Even with PyPy, the usual pure Python idioms aren't necessarily what is best for PyPy, so I expect some rewriting there also.  It seems the lion's share of the work in making other implementations has to do with interpreter details and whatnot -- I would be surprised if the likes of bisect or heapq took even one-tenth of one percent of the total development time for any of the other implementations.


> 
> I did not really see an answer to these concerns:
> 
> http://mail.python.org/pipermail/python-dev/2011-April/110672.html
> 
> Antoine does not seem sold on the 100% branch coverage requirement and views it as pointless. I disagree. =)
> 
> As for the exception Stefan is saying may be granted, that is not in the PEP so I consider it unimportant. If we really feel the desire to grant an exception we can (since we can break any of our own rules that we collectively choose to), but I'm assuming we won't.
>  
> http://mail.python.org/pipermail/python-dev/2011-April/110675.html
> 
> Raymond thinks that having a testing requirement conflates matching implementations vs. matching APIs.

That is not an accurate restatement of my post.

> Well, as we all know, the stdlib ends up having its implementation details relied upon constantly by people whether they mean to or not,  so making sure that this is properly tested helps deal with this known reality.

If you're saying that all implementation details (including internal branching logic) are now guaranteed behaviors, then I think this PEP has completely lost its way.  I don't know of any implementors asking for this.


> This is a damned-if-you-do-damned-if-you-don't situation. The first draft of this PEP said to be "semantically equivalent w/ divergence where technically required", but I got pushback from being too wishy-washy w/ lack of concrete details. So I introduce a concrete metric that some are accusing of being inaccurate for the goals of the PEP. I'm screwed or I'm screwed. =) So I am choosing to go with the one that has a side benefit of also increasing test coverage.

Maybe that is just an indication that the proposal isn't mature yet.   To me, it doesn't seem well thought out and isn't realistic.  


> Now if people would actually support simply not accepting any more C modules into the Python stdlib (this does not apply to CPython's stdlib), then I'm all for that.
> I only went with the "accelerator modules are okay" route to help get acceptance for the PEP. But if people are willing to go down a more stringent route and say that any module which uses new C code is considered CPython-specific and thus any acceptance of such modules will be damn hard to accomplish as it will marginalize the value of the code, that's fine by me.

Is that what people want?   For example, do we want to accept a C version of decimal?  Without it, the decimal module is unusable for people with high volumes of data.  Do we want things like an xz compressor to be written in pure Python, and only in Python?  I don't think this benefits our users.

I'm not really clear what it is you're trying to get at.  For PyPy, IronPython, and Jython to succeed, does the CPython project need to come to a halt?  I don't think many people here really believe that to be the case.


Raymond





-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110416/94cf62d8/attachment.html>

From exarkun at twistedmatrix.com  Sun Apr 17 04:20:38 2011
From: exarkun at twistedmatrix.com (exarkun at twistedmatrix.com)
Date: Sun, 17 Apr 2011 02:20:38 -0000
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator
	Module	Compatibiilty Requirements
In-Reply-To: <4DAA2048.4090207@pearwood.info>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<BANLkTin5A_tJ+gCxXuA2xVWhMAghBb1dqw@mail.gmail.com>
	<BANLkTimVysOh9BdB1vNKHg5LHR3jq3RFyw@mail.gmail.com>
	<4DAA2048.4090207@pearwood.info>
Message-ID: <20110417022038.1992.1778105560.divmod.xquotient.540@localhost.localdomain>

On 16 Apr, 11:03 pm, steve at pearwood.info wrote:
>Brett Cannon wrote:
>>In the grand python-dev tradition of "silence means acceptance", I 
>>consider
>>this PEP finalized and implicitly accepted.
>
>How long does that silence have to last?
>
>I didn't notice a definition of what counts as "100% branch coverage". 
>Apologies if I merely failed to notice it, but I think it should be 
>explicitly defined.
>
>Presumably it means that any time you have an explicit branch 
>(if...elif...else, try...except...else, for...else, etc.) you need a 
>test that goes down each branch. But it isn't clear to me whether it's 
>sufficient to test each branch in isolation, or whether you need to 
>test all combinations.
>
>That is, if you have five branches, A or B, C or D, E or F, G or H, I 
>or J, within a single code unit (function? something else?), is it 
>sufficient to have at least one test that goes down each of A...J, or 
>do you need to explicitly test each of:
>
>A-C-E-G-I
>A-C-E-G-J
>A-C-E-H-I
>A-C-E-H-J
>A-C-F-G-I
>...
>B-D-F-H-J
>
>(10 tests versus 32 tests).
>
>If the latter, this could become impractical *very* fast. But if not, I 
>don't see how we can claim 100% coverage when there are code paths 
>which are never tested.

The most commonly used definition of branch coverage is that each 
outcome of each individual branch is executed, not that all possible 
combinations of all branches in a unit are executed.  I haven't heard 
anyone in this thread propose the latter, only the former.

"100% coverage" by itself is certainly ambiguous.
>
>At the very least, I think you need to explicitly define what you mean 
>by "100% branch coverage". Possibly this will assist in the 
>disagreement between you and Antoine re "100%" versus "comprehensive" 
>coverage.

I suspect that everyone who has said "branch coverage" in this thread 
has intended the definition given above (and I encourage anyone who meant 
something else to clarify their position).

Jean-Paul

From brian.curtin at gmail.com  Sun Apr 17 04:32:48 2011
From: brian.curtin at gmail.com (Brian Curtin)
Date: Sat, 16 Apr 2011 21:32:48 -0500
Subject: [Python-Dev] Releases for recent security vulnerability
In-Reply-To: <4DA98166.2010604@gustavonarea.net>
References: <BANLkTi=FtJ_oZe-pKnNNANFTDehWrx-J2A@mail.gmail.com>
	<BANLkTikzFbgAWfn2AEWtPbqmDtSR6HN2Rw@mail.gmail.com>
	<4DA98166.2010604@gustavonarea.net>
Message-ID: <BANLkTincrv+EOb651TXA-9w6Jwv5B86Fxw@mail.gmail.com>

On Sat, Apr 16, 2011 at 06:45, Gustavo Narea <me at gustavonarea.net> wrote:

> Hello,
>
> On 15/04/11 13:30, Brian Curtin wrote:
> > To me, the fix *was* released.
>
> No, it wasn't. It was *committed* to the repository.
>

Yep, and that's enough for me. If you have a vulnerable system, you can now
patch it with an accepted fix.


>
> > Sure, no fancy installers were generated yet, but people who are
> > susceptible to this issue 1) now know about it, and 2) have a way to
> > patch their system *if needed*.
>
> Well, that's a long shot. I doubt the people/organizations affected are
> all aware.


Hence why this blog exists and why this post was made...

> And I doubt they are all capable of patching their system or
> getting a patched Python from a trusted party.
>

Maybe that's where the post fell short. Should I have added a section with
an example of how to apply the patch to an example system like 2.6?


> Three weeks after this security vulnerability was *publicly* reported on
> bugs.python.org, and two days after it was semi-officially announced,
> I'm still waiting for security updates for my Ubuntu and Debian systems!
>
> I reckon if this had been handled differently (i.e., making new releases
> and communicating it via the relevant channels [1]), we wouldn't have
> the situation we have right now.


I don't really think there's a "situation" here, and I fail to see how the
development blog isn't one of the relevant channels.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110416/bb0b3412/attachment.html>

From dreamingforward at gmail.com  Sun Apr 17 04:38:56 2011
From: dreamingforward at gmail.com (Mark Janssen)
Date: Sat, 16 Apr 2011 20:38:56 -0600
Subject: [Python-Dev] python and super
In-Reply-To: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>
References: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>
Message-ID: <BANLkTi=xLocxCQGAQZ3Xy3z6sXsxZnt-cA@mail.gmail.com>

On Thu, Apr 14, 2011 at 7:09 AM, Ricardo Kirkner
<ricardokirkner at gmail.com> wrote:
> I recently stumbled upon an issue with a class in the mro chain not
> calling super, therefore breaking the chain (ie, further base classes
> along the chain didn't get called).
> I understand it is currently a requirement that all classes that are
> part of the mro chain behave and always call super. My question is,
> shouldn't/wouldn't it be better,
> if python took ownership of that part, and ensured all classes get
> called, even if some class misbehaved?

I get annoyed by this issue as well, in various forms.

It seems like such a discussion would have been resolved by now in the
multitude of OOP languages, but I have to say it is quite strange to
me that there is no distinction made between IS-A relationship and
HAS-A relationships with regard to the issue of Inheritance.  Python,
confusingly makes no syntactic distinction, and its semantic
distinction (through MRO and programmer conventions) seems quite
suboptimal and "special-cased".  --No fault of anyone's, perhaps it is
indeed an unresolved issue within Computer Science.

It should be clear that IS-A inheritance is really trying to say (or
should be) that the following set/class (of methods and attributes) is
a *super-set* of its "parent" (--See how the OO lexicon is already
confused and mixing metaphors?).  In this case, manually calling
super() is not only completely redundant but adds various confusions.

With regard to inheritence, I too would like to see automatic calls to
super classes in every case were there is a complete sClearly there is
utility in the notion of a set-theoretic containment


DISCARDING::  the points are moot and need finer granularity that only
the pangaia model can fix.

From dreamingforward at gmail.com  Sun Apr 17 04:41:08 2011
From: dreamingforward at gmail.com (Mark Janssen)
Date: Sat, 16 Apr 2011 20:41:08 -0600
Subject: [Python-Dev] python and super
In-Reply-To: <BANLkTi=xLocxCQGAQZ3Xy3z6sXsxZnt-cA@mail.gmail.com>
References: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>
	<BANLkTi=xLocxCQGAQZ3Xy3z6sXsxZnt-cA@mail.gmail.com>
Message-ID: <BANLkTimAf-jc5rApq1jWkZ92ETSr+PwY2w@mail.gmail.com>

Argh!  Sorry list.  I meant to discard the post that was just sent.

Please accept my humblest apologies...

Mark

From rdmurray at bitdance.com  Sun Apr 17 07:32:15 2011
From: rdmurray at bitdance.com (R. David Murray)
Date: Sun, 17 Apr 2011 01:32:15 -0400
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
	Compatibiilty Requirements
In-Reply-To: <4862031C-A420-41A5-82B0-713262407802@gmail.com>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<BANLkTin5A_tJ+gCxXuA2xVWhMAghBb1dqw@mail.gmail.com>
	<BANLkTimVysOh9BdB1vNKHg5LHR3jq3RFyw@mail.gmail.com>
	<20110416212352.GA19573@sleipnir.bytereef.org>
	<BANLkTinaXQv+A_G-LrbPpiHj6WHK8TVT-A@mail.gmail.com>
	<4862031C-A420-41A5-82B0-713262407802@gmail.com>
Message-ID: <20110417053245.42D262500D7@mailhost.webabinitio.net>

On Sat, 16 Apr 2011 19:19:32 -0700, Raymond Hettinger <raymond.hettinger at gmail.com> wrote:
> On Apr 16, 2011, at 2:45 PM, Brett Cannon wrote:
> 
> >
> >
> > On Sat, Apr 16, 2011 at 14:23, Stefan Krah <stefan at bytereef.org> wrote:
> > Brett Cannon <brett at python.org> wrote:
> > > In the grand python-dev tradition of "silence means acceptance", I consider
> > > this PEP finalized and implicitly accepted.
> 
> I haven't seen any responses that said, yes this is a well thought-out proposal
> that will actually benefit any of the various implementations.

In that case it may well be that the silence is because the other
implementations think the PEP is OK.  They certainly voted in favor of
the broad outline of it at the language summit.  Perhaps representatives
will speak up, or perhaps Brett will need to poll them proactively.

> Almost none of the concerns that have been raised have been addressed.  Does the
> PEP only apply to purely algorithmic modules such as heapq or does it apply to
> anything written in C (like an xz compressor, for example)?  Does testing

Anything (new) written in C that can also be written in Python (and
usually is first, to at least prototype it).  If an XZ compressor is a
wrapper around an external library, that would be a different story.

> every branch in a given implementation now guarantee every implementation detail
> or do we only promise the published API (historically, we've *always* done the
> latter)?

As Brett said, people do come to depend on the details of the
implementation.  But IMO the PEP should be clarified to say that the
tests we are talking about should be tests *of the published API*.
That is, blackbox tests, not whitebox tests.

> Is there going to be any guidance on the commonly encountered semantic
> differences between C modules and their Python counterparts (thread-safety,
> argument handling, tracebacks, all possible exceptions, monkey-patchable pure
> python classes versus hard-wired C types etc)?

Presumably we will need to develop such guidance.

> The PEP seems to be predicated on a notion that anything written in C is bad and
> that all testing is good.  AFAICT, it doesn't provide any practical advice to
> someone pursuing a non-trivial project (such as decimal or threading).  The PEP

Decimal already has a Python implementation with a very comprehensive
test suite (no, I don't know if it has 100% coverage).  My understanding
is that Stefan's code passes the Python test suite.  So I'm not sure
what the issue is, there.  Stefan?

Threading is an existing module, so it doesn't seem to me that the PEP
particularly applies to it.

> The PEP also makes some unsupported claims about saving labor.  My understanding
> is the IronPython and Jython tend to re-implement modules using native
> constructs.  Even with PyPy, the usual pure python idioms aren't necessarily
> what is best for PyPy, so I expect some rewriting there also.  It seems the
> lion's share of the work in making other implementations has to do with
> interpreter details and whatnot -- I would be surprised if the likes of bisect
> or heapq took even one-tenth of one percent of the total development time for
> any of the other implementations.

That's an orthogonal issue.  Having *working* Python implementations of as
much of the stdlib as practical makes it easier to spin up a new Python
language implementation:  once you get the language working, you've got
all the bits of the stdlib that have Python versions.  *Then* you can
implement accelerators (and if you are CPython, you do that in C...)

> If you're saying that all implementation details (including internal branching
> logic) are now guaranteed behaviors, then I think this PEP has completely lost
> its way.  I don't know of any implementors asking for this.

I don't think the PEP is asking this either (or if it is I agree it
shouldn't be).  The way to get full branch coverage (and yes Exarkun is
right, this is about individual branches; see coverage.py --branch) is
to provide test cases that exercise the published API such that those
branches are taken.  If you can't do that, then what is that branch
of the Python code for?  If you can do that, how is the test case
testing an implementation detail?  It is testing the behavior of the
API.  The 100% branch coverage metric is just a measurable way to
improve test coverage.  As I've said before, it does not guarantee that all
important (API) test cases are covered, but it is one way to improve
that coverage that has a measure attached, and measures are helpful.

I personally have no problem with the 100% coverage being made a
recommendation in the PEP rather than a requirement.  It sounds like
that might be acceptable to Antoine.  Actually, I would also be fine with
saying "comprehensive" instead, with a note that 100% branch coverage is
a good way to head toward that goal, since a comprehensive test suite
should contain more tests than the minimum set needed to get to 100%
branch coverage.

A relevant story:  to achieve 100% branch coverage in one of the email
modules I had to resort to one test that used the API in a way for
which the behavior of the API is *not* documented, and one white
box test.  I marked both of these as to their nature, and would not
expect a theoretical email C accelerator to pass either of those tests.
For the one that requires a white box test, that code path will probably
eventually go away; for the undocumented API use, it will get documented
and the test adjusted accordingly...and writing that test revealed the
need for said documentation.

Perhaps we need a @python_implementation_detail skip decorator?
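
A rough sketch of what such a decorator could look like, using the
hypothetical name above (this is just an illustration, not an existing
helper):

import platform
import unittest

def python_implementation_detail(test_item):
    """Skip a test unless we are running on CPython, i.e. mark it as
    exercising a CPython implementation detail rather than the API."""
    return unittest.skipUnless(
        platform.python_implementation() == 'CPython',
        'CPython implementation detail')(test_item)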

> Is that what people want?   For example, do we want to accept a C version of
> decimal?  Without it, the decimal module is unusable for people with high
> volumes of data.  Do we want things like an xz compressor to be written in pure
> python and only in Python?  I don't think this benefits our users.
> 
> I'm not really clear what it is you're trying to get at.  For PyPy, IronPython,
> and Jython to succeed, does the CPython project need to come to a halt?  I don't
> think many people here really believe that to be the case.

No, I don't think any of these things are aims.  But if/once the Python
stdlib is a separate repo, then in *that* repo you'd only have pure
Python modules, with the CPython-specific C accelerators living in the
CPython repo.  (Yes, there are still quite a few details to work out
about how this would work!  We aren't ready to do it yet; this PEP is
just trying to pave the way.)

--
R. David Murray           http://www.bitdance.com

From stefan_ml at behnel.de  Sun Apr 17 08:22:20 2011
From: stefan_ml at behnel.de (Stefan Behnel)
Date: Sun, 17 Apr 2011 08:22:20 +0200
Subject: [Python-Dev] Status of json (simplejson) in cpython
In-Reply-To: <4+paau3tclbkacaeaa2abyla42kez3oglkglau7kaf3w7fluhovwpncmystvht6sswtpoj7ikuj42gzmxtcu5vdyqq5uceswtdd6zfpck3s4fgwskem3kbnuljq6ypzayx27gb6yy=+465659@messaging-master.com>
References: <4+paau3tclbkacaeaa2abyla42kez3oglkglau7kaf3w7fluhovwpncmystvht6sswtpoj7ikuj42gzmxtcu5vdyqq5uceswtdd6zfpck3s4fgwskem3kbnuljq6ypzayx27gb6yy=+465659@messaging-master.com>
Message-ID: <ioe0us$dg5$1@dough.gmane.org>

Matt Billenstein, 17.04.2011 00:47:
> On Sat, Apr 16, 2011 at 01:30:13PM +0200, Antoine Pitrou wrote:
>> On Sat, 16 Apr 2011 00:41:03 +0000
>> Matt Billenstein wrote:
>>>
>>> Slightly less crude benchmark showing simplejson is quite a bit faster:
>>>
>>> http://pastebin.com/g1WqUPwm
>>>
>>> 250ms vs 5.5s encoding and decoding an 11KB json object 1000 times...
>>
>> This doesn't have much value if you don't say which version of Python
>> you ran json with. You should use 3.2, otherwise you might miss some
>> optimizations.
>
> Yes, that was 2.6.5 -- 3.2 native json is comparable to simplejson here taking
> about 330ms...

 From the POV of CPython 3.2, is "native" Python or C?

Stefan


From martin at v.loewis.de  Sun Apr 17 08:28:56 2011
From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=)
Date: Sun, 17 Apr 2011 08:28:56 +0200
Subject: [Python-Dev] Status of json (simplejson) in cpython
In-Reply-To: <4DAA1CCD.60805@voidspace.org.uk>
References: <BANLkTind0dSsoesD+tZ-EAqZjpr2m-CuCw@mail.gmail.com>	<loom.20110416T113016-741@post.gmane.org>	<20110416161931.089d2014@pitrou.net>	<loom.20110416T181907-569@post.gmane.org>	<4DA9E292.20805@v.loewis.de>	<loom.20110416T211116-728@post.gmane.org>	<4DAA09F4.3000001@v.loewis.de>
	<4DAA1CCD.60805@voidspace.org.uk>
Message-ID: <4DAA88A8.3080507@v.loewis.de>

> Well, there was a 5x speedup demonstrated comparing simplejson to the
> standard library json module.

Can you kindly point to that demonstration?

> That sounds *very much* worth pursuing (and
> crazy not to pursue). I've had json serialisation be the bottleneck in
> web applications generating several megabytes of json for some requests.

Hmm. I'd claim that the web application that needs to generate several
megabytes of json for something should be redesigned. I also wonder
whether the bottleneck was the *generation*, the transmission, or
the processing of the data on the receiving end.

Regards,
Martin

From tjreedy at udel.edu  Sun Apr 17 08:32:57 2011
From: tjreedy at udel.edu (Terry Reedy)
Date: Sun, 17 Apr 2011 02:32:57 -0400
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
	Compatibiilty Requirements
In-Reply-To: <20110417053245.42D262500D7@mailhost.webabinitio.net>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>	<BANLkTin5A_tJ+gCxXuA2xVWhMAghBb1dqw@mail.gmail.com>	<BANLkTimVysOh9BdB1vNKHg5LHR3jq3RFyw@mail.gmail.com>	<20110416212352.GA19573@sleipnir.bytereef.org>	<BANLkTinaXQv+A_G-LrbPpiHj6WHK8TVT-A@mail.gmail.com>	<4862031C-A420-41A5-82B0-713262407802@gmail.com>
	<20110417053245.42D262500D7@mailhost.webabinitio.net>
Message-ID: <ioe1in$fqr$1@dough.gmane.org>

On 4/17/2011 1:32 AM, R. David Murray wrote:

> As Brett said, people do come to depend on the details of the
> implementation.  But IMO the PEP should be clarified to say that the
> tests we are talking about should be tests *of the published API*.
> That is, blackbox tests, not whitebox tests.

I think 100% *branch* coverage is barking up the wrong tree.
Better to say comprehensive *api* coverage. Bugs on the tracker 
generally come from not having that. (I am not saying 'all' to allow for 
bugs that happen from weird interactions or corner cases in spite of 
what could reasonably be called comprehensive.)

> I don't think the PEP is asking this either (or if it is I agree it
> shouldn't be).  The way to get full branch coverage (and yes Exarkun is
> right, this is about individual branches; see coverage.py --branch) is
> to provide test cases that exercise the published API such that those
> branches are taken.  If you can't do that, then what is that branch
> of the Python code for?  If you can do that, how is the test case
> testing an implementation detail?  It is testing the behavior of the
> API.

Right.

-- 
Terry Jan Reedy


From stefan_ml at behnel.de  Sun Apr 17 09:21:32 2011
From: stefan_ml at behnel.de (Stefan Behnel)
Date: Sun, 17 Apr 2011 09:21:32 +0200
Subject: [Python-Dev] Status of json (simplejson) in cpython
In-Reply-To: <20110416192736.1f9dd279@pitrou.net>
References: <BANLkTind0dSsoesD+tZ-EAqZjpr2m-CuCw@mail.gmail.com>	<loom.20110416T113016-741@post.gmane.org>	<20110416161931.089d2014@pitrou.net>	<loom.20110416T181907-569@post.gmane.org>
	<20110416192736.1f9dd279@pitrou.net>
Message-ID: <ioe4dt$qo5$1@dough.gmane.org>

Antoine Pitrou, 16.04.2011 19:27:
> On Sat, 16 Apr 2011 16:47:49 +0000 (UTC)
> Vinay Sajip wrote:
>> Bob made a comment in passing that simplejson (Python) is about as fast as
>> stdlib json (C extension), on 2.x.
>
> I think Bob tested with an outdated version of the stdlib json module
> (2.6 or 2.7, perhaps). In my latest measurements, the 3.2 json C module
> is as fast as the C simplejson module, the only difference being in
> parsing of numbers, which is addressed in
> http://bugs.python.org/issue11856

Ok, but then, what's the purpose of having the old Python implementation in 
the stdlib? The other Python implementations certainly won't be happy with 
something that is way slower (algorithmically!) than the current version of 
the non-stdlib implementation. The fact that the CPython json maintainers 
are happy with the performance of the C implementation does not mean that 
the performance of the pure Python implementation can be ignored now.

Note: I don't personally care about this question since Cython does not 
suffer from this issue anyway. This is just the general question about the 
relation of the C module and the Python module in the stdlib. Functional 
compatibility is not necessarily enough.

Stefan


From raymond.hettinger at gmail.com  Sun Apr 17 09:30:22 2011
From: raymond.hettinger at gmail.com (Raymond Hettinger)
Date: Sun, 17 Apr 2011 00:30:22 -0700
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
	Compatibiilty Requirements
In-Reply-To: <20110417053245.42D262500D7@mailhost.webabinitio.net>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<BANLkTin5A_tJ+gCxXuA2xVWhMAghBb1dqw@mail.gmail.com>
	<BANLkTimVysOh9BdB1vNKHg5LHR3jq3RFyw@mail.gmail.com>
	<20110416212352.GA19573@sleipnir.bytereef.org>
	<BANLkTinaXQv+A_G-LrbPpiHj6WHK8TVT-A@mail.gmail.com>
	<4862031C-A420-41A5-82B0-713262407802@gmail.com>
	<20110417053245.42D262500D7@mailhost.webabinitio.net>
Message-ID: <4F1A17A7-CA6E-42BD-A856-15DD92EAEE76@gmail.com>


>>>> In the grand python-dev tradition of "silence means acceptance", I consider
>>>> this PEP finalized and implicitly accepted.
>> 
>> I haven't seen any responses that said, yes this is a well thought-out proposal
>> that will actually benefit any of the various implementations.
> 
> In that case it may well be that the silence is because the other
> implementations think the PEP is OK.  They certainly voted in favor of
> the broad outline of it at the language summit.  

Sounds like it was implicitly accepted even before it was written or any of the details were discussed.  

The big picture of "let's do something to make life easier for other implementations" is a worthy goal.  What that something should be is still a bit ambiguous.


>> every branch in a given implementation now guarantee every implementation detail
>> or do we only promise the published API (historically, we've *always* done the
>> latter)?
> 
> As Brett said, people do come to depend on the details of the
> implementation.  But IMO the PEP should be clarified to say that the
> tests we are talking about should be tests *of the published API*.
> That is, blackbox tests, not whitebox tests.

+1 That's an excellent suggestion.  Without that change, it seems like the PEP is overreaching.


>> Is there going to be any guidance on the commonly encountered semantic
>> differences between C modules and their Python counterparts (thread-safety,
>> argument handling, tracebacks, all possible exceptions, monkey-patchable pure
>> python classes versus hard-wired C types etc)?
> 
> Presumably we will need to develop such guidance.

+1 That would be very helpful.  Right now, the PEP doesn't address any of the commonly encountered differences.


> I personally have no problem with the 100% coverage being made a
> recommendation in the PEP rather than a requirement.  It sounds like
> that might be acceptable to Antoine.  Actually, I would also be fine with
> saying "comprehensive" instead, with a note that 100% branch coverage is
> a good way to head toward that goal, since a comprehensive test suite
> should contain more tests than the minimum set needed to get to 100%
> branch coverage.

+1 better test coverage is always a good thing (IMO).


Raymond

From matt at vazor.com  Sun Apr 17 09:31:46 2011
From: matt at vazor.com (Matt Billenstein)
Date: Sun, 17 Apr 2011 07:31:46 +0000
Subject: [Python-Dev] Status of json (simplejson) in cpython
Message-ID: <4+paau3tclbkacaeaa2abylq4yla43oglkglau7kaf3w7fluhovwpmnwirtvht6sswtpoj7ikuj42gzmxtcu5vdyqqdusgtdl5zbpck3s5fkwckeljqmdeo23rak67zayx3nsb6vq=+647455@messaging-master.com>

On Sun, Apr 17, 2011 at 08:22:20AM +0200, Stefan Behnel wrote:
> Matt Billenstein, 17.04.2011 00:47:
> >On Sat, Apr 16, 2011 at 01:30:13PM +0200, Antoine Pitrou wrote:
> >>On Sat, 16 Apr 2011 00:41:03 +0000
> >>Matt Billenstein wrote:
> >>>
> >>>Slightly less crude benchmark showing simplejson is quite a bit faster:
> >>>
> >>>http://pastebin.com/g1WqUPwm
> >>>
> >>>250ms vs 5.5s encoding and decoding an 11KB json object 1000 times...
> >>
> >>This doesn't have much value if you don't say which version of Python
> >>you ran json with. You should use 3.2, otherwise you might miss some
> >>optimizations.
> >
> >Yes, that was 2.6.5 -- 3.2 native json is comparable to simplejson here taking
> >about 330ms...
> 
> From the POV of CPython 3.2, is "native" Python or C?

"Native" as in the version that ships with 3.2.

And actually I think my test with 2.6.5 wasn't using the C extension for some
reason so that 5.5s number isn't right -- a fresh build of 2.7.1 gives me a
runtime of around 350ms.

m

-- 
Matt Billenstein
matt at vazor.com
http://www.vazor.com/

From stefan at bytereef.org  Sun Apr 17 12:14:51 2011
From: stefan at bytereef.org (Stefan Krah)
Date: Sun, 17 Apr 2011 12:14:51 +0200
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator
	Module	Compatibility Requirements
In-Reply-To: <BANLkTimVysOh9BdB1vNKHg5LHR3jq3RFyw@mail.gmail.com>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<BANLkTin5A_tJ+gCxXuA2xVWhMAghBb1dqw@mail.gmail.com>
	<BANLkTimVysOh9BdB1vNKHg5LHR3jq3RFyw@mail.gmail.com>
Message-ID: <20110417101451.GA23490@sleipnir.bytereef.org>

Brett Cannon <brett at python.org> wrote:
>     Since they do not typically support the entire `C API of Python`_ they
>     are unable to use the code used to create the module. Often times this
>     leads these other VMs to either re-implement the modules in pure
>     Python or in the programming language used to implement the VM
>     (e.g., in C# for IronPython). This duplication of effort between
>     CPython, PyPy, Jython, and IronPython is extremely unfortunate as
>     implementing a module *at least* in pure Python would help mitigate
>     this duplicate effort.
> 
>     The purpose of this PEP is to minimize this duplicate effort by
>     mandating that all new modules added to Python's standard library
>     *must* have a pure Python implementation _unless_ special dispensation
>     is given. This makes sure that a module in the stdlib is available to
>     all VMs and not just to CPython (pre-existing modules that do not meet
>     this requirement are exempt, although there is nothing preventing
>     someone from adding in a pure Python implementation retroactively).

I'm not sure that I understand the duplication of effort: If there
is a C module without a Python implementation in the stdlib, then
the PyPy, Jython, and IronPython developers are free to cooperate
and implement a single Python version. I would not consider this
a duplication of effort.

If, on the other hand, they choose to provide three individual
implementations in C#, Java and (?), then that is their own choice
and surely not the fault of the C module developer.


By contrast, this PEP puts a great burden on the developers of
new C modules. If this PEP is accepted, it is the C module developers
who will have to do duplicate work.

In my view, the PEP should have a clause that *active* participation
of PyPy, Jython, and IronPython developers is expected if they want
pure compatible Python versions to exist.



>     Re-implementing parts (or all) of a module in C (in the case
>     of CPython) is still allowed for performance reasons, but any such
>     accelerated code must pass the same test suite (sans VM- or C-specific
>     tests) to verify semantics and prevent divergence. To accomplish this,
>     the test suite for the module must have 100% branch coverage of the
>     pure Python implementation before the acceleration code may be added.

Raymond has pointed out that the PEP seems to discourage C modules. This
is one of the examples. Since implementing C modules takes a lot of time,
I'd appreciate knowing whether they are just tolerated or actually welcome.



>     As an example, to write tests which exercise both the pure Python and
>     C accelerated versions of a module, a basic idiom can be followed::
[cut]
> 
>             heap = Spam()
>             self.assertFalse(isinstance(heap,
>                                 collections.abc.MutableSequence))
>             with self.assertRaises(TypeError):
>                 self.heapq.heappop(heap)
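
(For reference, the idiom elided above is roughly the following simplified
sketch, built on test.support.import_fresh_module; the test body here is
only an illustration, not the PEP's exact text:)

import unittest
from test.support import import_fresh_module

py_heapq = import_fresh_module('heapq', blocked=['_heapq'])
c_heapq = import_fresh_module('heapq', fresh=['_heapq'])

class ExampleTest:
    # 'heapq' is set to the pure Python or the accelerated module by
    # the concrete subclasses below, so every test runs against both.
    def test_heappush_heappop(self):
        heap = []
        for value in (3, 1, 2):
            self.heapq.heappush(heap, value)
        self.assertEqual(self.heapq.heappop(heap), 1)

class PyExampleTest(ExampleTest, unittest.TestCase):
    heapq = py_heapq

@unittest.skipUnless(c_heapq, 'requires the C _heapq module')
class CExampleTest(ExampleTest, unittest.TestCase):
    heapq = c_heapq

if __name__ == '__main__':
    unittest.main()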

If all possible exceptions must match, then in the case of decimal the
PEP should give permission to change the published API of an existing
Python module (in this case decimal.py). Otherwise, I see no way of
accomplishing this goal.


It is possible to give many frivolous examples:

>>> from decimal import *
>>> 
>>> class C():
...     def __init__(self):
...         self.traps = 'invalid'
... 
>>> # No exception
... setcontext(C())
>>> 


>>> from cdecimal import *
>>> class C():
...     def __init__(self):
...         self.traps = 'invalid'
... 
>>> setcontext(C())
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: argument must be a context.
>>> 


In the case of duck typing, the only solution I see is to lock down the
types in decimal.py, thus changing the API. This is one of the things that
should be decided *before* the PEP is accepted.
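
Concretely, "locking down the types" would mean something like the following
sketch for the pure Python module (illustrative only, not the actual
decimal.py code; the storage dict is a stand-in for the real thread-local
logic):

from decimal import Context

_current = {}  # stand-in for the real thread-local context storage

def setcontext(context):
    # Reject duck-typed objects up front so that decimal.py raises the
    # same TypeError as cdecimal does in the example above.
    if not isinstance(context, Context):
        raise TypeError("argument must be a context.")
    _current['context'] = context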



Stefan Krah



From vinay_sajip at yahoo.co.uk  Sun Apr 17 12:33:26 2011
From: vinay_sajip at yahoo.co.uk (Vinay Sajip)
Date: Sun, 17 Apr 2011 10:33:26 +0000 (UTC)
Subject: [Python-Dev] Status of json (simplejson) in cpython
References: <BANLkTind0dSsoesD+tZ-EAqZjpr2m-CuCw@mail.gmail.com>
	<loom.20110416T113016-741@post.gmane.org>
	<20110416161931.089d2014@pitrou.net>
	<loom.20110416T181907-569@post.gmane.org>
	<20110416192736.1f9dd279@pitrou.net>
Message-ID: <loom.20110417T122545-907@post.gmane.org>

Antoine Pitrou <solipsis <at> pitrou.net> writes:

> Feel free to share your numbers.

I've now got my fork working on Python 3.2 with speedups. According to a
non-scientific simple test:

Python 2.7
==========
Python version: 2.7.1+ (r271:86832, Apr 11 2011, 18:05:24) 
[GCC 4.5.2]
11.21484375 KiB read
Timing simplejson:
0.271898984909
Timing stdlib json:
0.338716030121

Python 3.2
==========
Python version: 3.2 (r32:88445, Mar 25 2011, 19:28:28) 
[GCC 4.5.2]
11.21484375 KiB read
Timing simplejson:
0.3150200843811035
Timing stdlib json:
0.32146596908569336

Based on this test script:

https://gist.github.com/923927

and the simplejson version here:

https://github.com/vsajip/simplejson/

Regards,

Vinay Sajip



From vinay_sajip at yahoo.co.uk  Sun Apr 17 12:46:41 2011
From: vinay_sajip at yahoo.co.uk (Vinay Sajip)
Date: Sun, 17 Apr 2011 10:46:41 +0000 (UTC)
Subject: [Python-Dev] Status of json (simplejson) in cpython
References: <BANLkTind0dSsoesD+tZ-EAqZjpr2m-CuCw@mail.gmail.com>	<loom.20110416T113016-741@post.gmane.org>
	<20110416161931.089d2014@pitrou.net> <iocen5$sq5$1@dough.gmane.org>
Message-ID: <loom.20110417T123902-831@post.gmane.org>

Stefan Behnel <stefan_ml <at> behnel.de> writes:

> Well, if that is not possible, then the CPython devs will have a hard time 
> maintaining the json accelerator module in the long run. I quickly skipped 
> through the github version in simplejson, and it truly is some complicated 
> piece of code. Not in the sense that the code is ununderstandable, it's 
> actually fairly straight forward string processing code, but it's so 
> extremely optimised and tailored and has so much code duplicated for the 
> bytes and unicode types (apparently following the copy+paste+adapt pattern) 
> that it will be pretty hard to adapt to future changes of CPython, 
> especially the upcoming PEP 393 implementation. Maintaining this is clearly 
> no fun.

Do we even need this complexity in Python 3.x? The speedup code for 2.x is
taking different, parallel paths for str and unicode types, either of which
might be legitimately passed into JSON APIs in 2.x code. However, in Python 3.x,
ISTM we should not be passing in bytes to JSON APIs. So there'd be no equivalent
parallel paths for bytes for 3.x speedup code to worry about.

Anyway, some simple numbers posted by me elsewhere on this thread show
simplejson to be only around 2% faster. Talk of a 5x speedup appears to be
comparing unaccelerated vs. accelerated code, in which case the comparison isn't
valid.

Of course, people might find other workloads which show bigger disparity in
performance, or might find something in my 3.x port of simplejson which
invalidates my finding of a 2% difference.

Regards,

Vinay Sajip


From stefan at bytereef.org  Sun Apr 17 12:53:02 2011
From: stefan at bytereef.org (Stefan Krah)
Date: Sun, 17 Apr 2011 12:53:02 +0200
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator
	Module	Compatibility Requirements
In-Reply-To: <BANLkTinaXQv+A_G-LrbPpiHj6WHK8TVT-A@mail.gmail.com>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<BANLkTin5A_tJ+gCxXuA2xVWhMAghBb1dqw@mail.gmail.com>
	<BANLkTimVysOh9BdB1vNKHg5LHR3jq3RFyw@mail.gmail.com>
	<20110416212352.GA19573@sleipnir.bytereef.org>
	<BANLkTinaXQv+A_G-LrbPpiHj6WHK8TVT-A@mail.gmail.com>
Message-ID: <20110417105302.GA23896@sleipnir.bytereef.org>

Brett Cannon <brett at python.org> wrote:
> Now if people would actually support simply not accepting any more C modules
> into the Python stdlib (this does not apply to CPython's stdlib), then I'm all
> for that. I only went with the "accelerator modules are okay" route to help get
> acceptance for the PEP. But if people are willing to go down a more stringent
> route and say that any module which uses new C code is considered
> CPython-specific and thus any acceptance of such modules will be damn hard to
> accomplish as it will marginalize the value of the code, that's fine by me.


Could you explain why C code marginalizes the value of the code? Most
people use CPython and they definitely want fast C modules. Also,
many people actually use CPython specifically for its C-API.

It has been suggested recently that wrapping the ICU library would be
desirable for Python. Should all such projects be discouraged because
they do not benefit PyPy, Jython and IronPython?


I find these projects very interesting and wish them well, but IMO the
reality is that CPython will continue to be the dominant player for
at least another 10 years.


Stefan Krah



From stefan at bytereef.org  Sun Apr 17 13:18:48 2011
From: stefan at bytereef.org (Stefan Krah)
Date: Sun, 17 Apr 2011 13:18:48 +0200
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator
	Module	Compatibility Requirements
In-Reply-To: <20110417053245.42D262500D7@mailhost.webabinitio.net>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<BANLkTin5A_tJ+gCxXuA2xVWhMAghBb1dqw@mail.gmail.com>
	<BANLkTimVysOh9BdB1vNKHg5LHR3jq3RFyw@mail.gmail.com>
	<20110416212352.GA19573@sleipnir.bytereef.org>
	<BANLkTinaXQv+A_G-LrbPpiHj6WHK8TVT-A@mail.gmail.com>
	<4862031C-A420-41A5-82B0-713262407802@gmail.com>
	<20110417053245.42D262500D7@mailhost.webabinitio.net>
Message-ID: <20110417111848.GA24101@sleipnir.bytereef.org>

R. David Murray <rdmurray at bitdance.com> wrote:
> > The PEP seems to be predicated on a notion that anything written in C is bad and
> > that all testing is good.  AFAICT, it doesn't provide any practical advice to
> > someone pursuing a non-trivial project (such as decimal or threading).  The PEP
> 
> Decimal already has a Python implementation with a very comprehensive
> test suite (no, I don't know if it has 100% coverage).  My understanding
> is that Stefan's code passes the Python test suite.  So I'm not sure
> what the issue is, there.  Stefan?

test_decimal.py does not have 100% coverage yet. cdecimal passes the tests,
but several decimal.py functions would have to perform type checking to
get identical exception behavior.

The current version of the joint unit tests is here:

http://hg.python.org/features/cdecimal/file/b00f8fa70126/Lib/test/decimal_tests.py


cdecimal specific behavior is guarded by HAVE_CDECIMAL, so it is
possible to grep for the differences.



As an aside, test_decimal.py constitutes at most 1% of the total tests.
The important tests (mathematical correctness and conformance to the
specification) are in two separate test suites, one of which runs
tests against decimal.py and the other against decNumber. These tests
can easily take a week to run, so they can't be part of the regression
tests.


Stefan Krah



From solipsis at pitrou.net  Sun Apr 17 13:46:14 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sun, 17 Apr 2011 13:46:14 +0200
Subject: [Python-Dev] Status of json (simplejson) in cpython
References: <BANLkTind0dSsoesD+tZ-EAqZjpr2m-CuCw@mail.gmail.com>
	<loom.20110416T113016-741@post.gmane.org>
	<20110416161931.089d2014@pitrou.net>
	<loom.20110416T181907-569@post.gmane.org>
	<20110416192736.1f9dd279@pitrou.net> <ioe4dt$qo5$1@dough.gmane.org>
Message-ID: <20110417134614.59d6af08@pitrou.net>

On Sun, 17 Apr 2011 09:21:32 +0200
Stefan Behnel <stefan_ml at behnel.de> wrote:
> Antoine Pitrou, 16.04.2011 19:27:
> > On Sat, 16 Apr 2011 16:47:49 +0000 (UTC)
> > Vinay Sajip wrote:
> >> Bob made a comment in passing that simplejson (Python) is about as fast as
> >> stdlib json (C extension), on 2.x.
> >
> > I think Bob tested with an outdated version of the stdlib json module
> > (2.6 or 2.7, perhaps). In my latest measurements, the 3.2 json C module
> > is as fast as the C simplejson module, the only difference being in
> > parsing of numbers, which is addressed in
> > http://bugs.python.org/issue11856
> 
> Ok, but then, what's the purpose of having the old Python implementation in 
> the stdlib? The other Python implementations certainly won't be happy with 
> something that is way slower (algorithmically!) than the current version of 
> the non-stdlib implementation.

Again, I don't think it's "way slower" since the code should be almost
identical (simplejson hasn't changed much in the last year). That's
assuming you measure performance on 3.2 or 3.3, not something older.

Besides, the primary selling point of the stdlib implementation is
that... it's the stdlib implementation. You have a json
serializer/deserializer by default without having to install any
third-party package. For most people that's probably sufficient; people
with specific needs *may* benefit from installing simplejson.

Also, the pure Python paths are still used if you customize some
parameters (I don't remember which ones exactly, you could take a look
at e.g. Lib/json/encoder.py if you are interested).
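
IIRC it is something like this (an untested sketch; exactly which
parameters disable the C paths is an implementation detail and may
change between versions):

import json

data = {"spam": [1, 2, 3], "eggs": None}

json.dumps(data)            # may use the C encoder if _json is built
json.dumps(data, indent=2)  # IIRC a non-None indent falls back to the
                            # pure-Python encoder in json/encoder.py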

Regards

Antoine.



From solipsis at pitrou.net  Sun Apr 17 13:48:56 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sun, 17 Apr 2011 13:48:56 +0200
Subject: [Python-Dev] Releases for recent security vulnerability
References: <BANLkTi=FtJ_oZe-pKnNNANFTDehWrx-J2A@mail.gmail.com>
	<BANLkTikzFbgAWfn2AEWtPbqmDtSR6HN2Rw@mail.gmail.com>
	<4DA98166.2010604@gustavonarea.net>
	<BANLkTincrv+EOb651TXA-9w6Jwv5B86Fxw@mail.gmail.com>
Message-ID: <20110417134856.4a3cc78b@pitrou.net>

On Sat, 16 Apr 2011 21:32:48 -0500
Brian Curtin <brian.curtin at gmail.com> wrote:
> > Three weeks after this security vulnerability was *publicly* reported on
> > bugs.python.org, and two days after it was semi-officially announced,
> > I'm still waiting for security updates for my Ubuntu and Debian systems!
> >
> > I reckon if this had been handled differently (i.e., making new releases
> > and communicating it via the relevant channels [1]), we wouldn't have
> > the situation we have right now.
> 
> 
> I don't really think there's a "situation" here, and I fail to see how the
> development blog isn't one of the relevant channels.

If we want to make official announcements (like releases or security
warnings), I don't think the blog is appropriate. A separate
announcement channel (mailing-list or newsgroup) would be better, where
people can subscribe knowing they will only get a couple of e-mails a
year.

Regards

Antoine.



From fdrake at acm.org  Sun Apr 17 14:30:33 2011
From: fdrake at acm.org (Fred Drake)
Date: Sun, 17 Apr 2011 08:30:33 -0400
Subject: [Python-Dev] Releases for recent security vulnerability
In-Reply-To: <20110417134856.4a3cc78b@pitrou.net>
References: <BANLkTi=FtJ_oZe-pKnNNANFTDehWrx-J2A@mail.gmail.com>
	<BANLkTikzFbgAWfn2AEWtPbqmDtSR6HN2Rw@mail.gmail.com>
	<4DA98166.2010604@gustavonarea.net>
	<BANLkTincrv+EOb651TXA-9w6Jwv5B86Fxw@mail.gmail.com>
	<20110417134856.4a3cc78b@pitrou.net>
Message-ID: <BANLkTimwCxS520TgXoKCEzAEewTjgv2hNQ@mail.gmail.com>

On Sun, Apr 17, 2011 at 7:48 AM, Antoine Pitrou <solipsis at pitrou.net> wrote:
> A separate announcement channel (mailing-list or newsgroup) would be better,
> where people can subscribe knowing they will only get a couple of e-mails a
> year.

Sounds like python-announce to me, with a matching entry on the front
of www.python.org.


  -Fred

-- 
Fred L. Drake, Jr.    <fdrake at acm.org>
"Give me the luxuries of life and I will willingly do without the necessities."
   --Frank Lloyd Wright

From solipsis at pitrou.net  Sun Apr 17 14:53:25 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sun, 17 Apr 2011 14:53:25 +0200
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
 Compatibility Requirements
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<BANLkTin5A_tJ+gCxXuA2xVWhMAghBb1dqw@mail.gmail.com>
	<BANLkTimVysOh9BdB1vNKHg5LHR3jq3RFyw@mail.gmail.com>
	<20110416212352.GA19573@sleipnir.bytereef.org>
	<BANLkTinaXQv+A_G-LrbPpiHj6WHK8TVT-A@mail.gmail.com>
	<4862031C-A420-41A5-82B0-713262407802@gmail.com>
	<20110417053245.42D262500D7@mailhost.webabinitio.net>
Message-ID: <20110417145325.5a9c9436@pitrou.net>

On Sun, 17 Apr 2011 01:32:15 -0400
"R. David Murray" <rdmurray at bitdance.com> wrote:
> 
> I personally have no problem with the 100% coverage being made a
> recommendation in the PEP rather than a requirement.  It sounds like
> that might be acceptable to Antoine.  Actually, I would also be fine with
> saying "comprehensive" instead, with a note that 100% branch coverage is
> a good way to head toward that goal, since a comprehensive test suite
> should contain more tests than the minimum set needed to get to 100%
> branch coverage.

If that's a recommendation then it's ok, although I would still prefer
we don't advocate such metrics. It's too easy for some people to get
obsessed about numeric measurements of "quality", leading them to
dubious workarounds and tricks (e.g. when using style-checking tools à
la pylint).

Regards

Antoine.



From jnoller at gmail.com  Sun Apr 17 15:30:17 2011
From: jnoller at gmail.com (Jesse Noller)
Date: Sun, 17 Apr 2011 09:30:17 -0400
Subject: [Python-Dev] Releases for recent security vulnerability
In-Reply-To: <20110417134856.4a3cc78b@pitrou.net>
References: <BANLkTi=FtJ_oZe-pKnNNANFTDehWrx-J2A@mail.gmail.com>
	<BANLkTikzFbgAWfn2AEWtPbqmDtSR6HN2Rw@mail.gmail.com>
	<4DA98166.2010604@gustavonarea.net>
	<BANLkTincrv+EOb651TXA-9w6Jwv5B86Fxw@mail.gmail.com>
	<20110417134856.4a3cc78b@pitrou.net>
Message-ID: <BANLkTin=XUvrcPK0err5mjbmZvkVkdkwUw@mail.gmail.com>

On Sun, Apr 17, 2011 at 7:48 AM, Antoine Pitrou <solipsis at pitrou.net> wrote:
> On Sat, 16 Apr 2011 21:32:48 -0500
> Brian Curtin <brian.curtin at gmail.com> wrote:
>> > Three weeks after this security vulnerability was *publicly* reported on
>> > bugs.python.org, and two days after it was semi-officially announced,
>> > I'm still waiting for security updates for my Ubuntu and Debian systems!
>> >
>> > I reckon if this had been handled differently (i.e., making new releases
>> > and communicating it via the relevant channels [1]), we wouldn't have
>> > the situation we have right now.
>>
>>
>> I don't really think there's a "situation" here, and I fail to see how the
>> development blog isn't one of the relevant channels.
>
> If we want to make official announcements (like releases or security
> warnings), I don't think the blog is appropriate. A separate
> announcement channel (mailing-list or newsgroup) would be better, where
> people can subscribe knowing they will only get a couple of e-mails a
> year.
>
> Regards
>
> Antoine.

And whose responsibility is it to email yet another mythical list? The
person posting the fix? The person who found and filed the CVE? The
release manager?

Brian *helped* us by raising awareness of the issue: At least now
there's a chance that one or more of the OS vendors *saw* that this
was an issue that was fixed.

From solipsis at pitrou.net  Sun Apr 17 15:42:49 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sun, 17 Apr 2011 15:42:49 +0200
Subject: [Python-Dev] Releases for recent security vulnerability
In-Reply-To: <BANLkTin=XUvrcPK0err5mjbmZvkVkdkwUw@mail.gmail.com>
References: <BANLkTi=FtJ_oZe-pKnNNANFTDehWrx-J2A@mail.gmail.com>
	<BANLkTikzFbgAWfn2AEWtPbqmDtSR6HN2Rw@mail.gmail.com>
	<4DA98166.2010604@gustavonarea.net>
	<BANLkTincrv+EOb651TXA-9w6Jwv5B86Fxw@mail.gmail.com>
	<20110417134856.4a3cc78b@pitrou.net>
	<BANLkTin=XUvrcPK0err5mjbmZvkVkdkwUw@mail.gmail.com>
Message-ID: <1303047769.3539.6.camel@localhost.localdomain>

On Sunday, 17 April 2011 at 09:30 -0400, Jesse Noller wrote:
> >
> > If we want to make official announcements (like releases or security
> > warnings), I don't think the blog is appropriate. A separate
> > announcement channel (mailing-list or newsgroup) would be better, where
> > people can subscribe knowing they will only get a couple of e-mails a
> > year.
> >
> > Regards
> >
> > Antoine.
> 
> And whose responsibility is it to email yet another mythical list? The
> person posting the fix? The person who found and filed the CVE? The
> release manager?

Well, whose responsibility is it to make blog posts about security
issues? If you can answer this question then the other question
shouldn't be any more difficult to answer ;)

I don't think the people who may be interested in security announcements
want to monitor a generic development blog, since Python is far from the
only piece of software they rely on. /I/ certainly wouldn't want to.

Also, I think Gustavo's whole point is that if we don't have a
well-defined, deterministic procedure for security announcements and
releases, then it's just as though we didn't care about security at all.
Saying "look, we mentioned this one on our development blog" isn't
really reassuring for the target group of people.

Regards

Antoine.



From stefan_ml at behnel.de  Sun Apr 17 15:50:06 2011
From: stefan_ml at behnel.de (Stefan Behnel)
Date: Sun, 17 Apr 2011 15:50:06 +0200
Subject: [Python-Dev] Status of json (simplejson) in cpython
In-Reply-To: <loom.20110417T122545-907@post.gmane.org>
References: <BANLkTind0dSsoesD+tZ-EAqZjpr2m-CuCw@mail.gmail.com>	<loom.20110416T113016-741@post.gmane.org>	<20110416161931.089d2014@pitrou.net>	<loom.20110416T181907-569@post.gmane.org>	<20110416192736.1f9dd279@pitrou.net>
	<loom.20110417T122545-907@post.gmane.org>
Message-ID: <ioer6e$te1$1@dough.gmane.org>

Vinay Sajip, 17.04.2011 12:33:
> Antoine Pitrou writes:
>> Feel free to share your numbers.
>
> I've now got my fork working on Python 3.2 with speedups. According to a
> non-scientific simple test:
>
> Python 2.7
> ==========
> Python version: 2.7.1+ (r271:86832, Apr 11 2011, 18:05:24)
> [GCC 4.5.2]
> 11.21484375 KiB read
> Timing simplejson:
> 0.271898984909
> Timing stdlib json:
> 0.338716030121
>
> Python 3.2
> ==========
> Python version: 3.2 (r32:88445, Mar 25 2011, 19:28:28)
> [GCC 4.5.2]
> 11.21484375 KiB read
> Timing simplejson:
> 0.3150200843811035
> Timing stdlib json:
> 0.32146596908569336
>
> Based on this test script:
>
> https://gist.github.com/923927
>
> and the simplejson version here:
>
> https://github.com/vsajip/simplejson/

Is this using the C accelerated version in both cases? What about the pure 
Python versions? Could you provide numbers for both?
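
(One rough way to check, relying on internal attribute names that are
implementation details and may differ between versions:)

import json.decoder, json.encoder

# These internal names are set to None when the _json C extension is
# missing, so treat this as a debugging aid rather than a supported API.
print("stdlib C scanner: %s" % (json.decoder.c_scanstring is not None))
print("stdlib C encoder: %s" % (json.encoder.c_make_encoder is not None))

# simplejson keeps a similar internal flag for its _speedups extension:
# import simplejson.encoder
# print("simplejson C encoder: %s" % (simplejson.encoder.c_make_encoder is not None))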

Stefan


From solipsis at pitrou.net  Sun Apr 17 15:50:49 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sun, 17 Apr 2011 15:50:49 +0200
Subject: [Python-Dev] Releases for recent security vulnerability
In-Reply-To: <BANLkTimwCxS520TgXoKCEzAEewTjgv2hNQ@mail.gmail.com>
References: <BANLkTi=FtJ_oZe-pKnNNANFTDehWrx-J2A@mail.gmail.com>
	<BANLkTikzFbgAWfn2AEWtPbqmDtSR6HN2Rw@mail.gmail.com>
	<4DA98166.2010604@gustavonarea.net>
	<BANLkTincrv+EOb651TXA-9w6Jwv5B86Fxw@mail.gmail.com>
	<20110417134856.4a3cc78b@pitrou.net>
	<BANLkTimwCxS520TgXoKCEzAEewTjgv2hNQ@mail.gmail.com>
Message-ID: <20110417155049.00577a00@pitrou.net>

On Sun, 17 Apr 2011 08:30:33 -0400
Fred Drake <fdrake at acm.org> wrote:
> On Sun, Apr 17, 2011 at 7:48 AM, Antoine Pitrou <solipsis at pitrou.net> wrote:
> > A separate announcement channel (mailing-list or newsgroup) would be better,
> > where people can subscribe knowing they will only get a couple of e-mails a
> > year.
> 
> Sounds like python-announce to me, with a matching entry on the front
> of www.python.org.

Looking at python-announce, it can receive an arbitrary number of
announcements from third-party projects, or even calls for papers for
random conferences. It's probably easy to (dis)miss an important message
in all the churn.

Regards

Antoine.

From p.f.moore at gmail.com  Sun Apr 17 15:53:56 2011
From: p.f.moore at gmail.com (Paul Moore)
Date: Sun, 17 Apr 2011 14:53:56 +0100
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
 Compatibility Requirements
In-Reply-To: <20110417053245.42D262500D7@mailhost.webabinitio.net>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<BANLkTin5A_tJ+gCxXuA2xVWhMAghBb1dqw@mail.gmail.com>
	<BANLkTimVysOh9BdB1vNKHg5LHR3jq3RFyw@mail.gmail.com>
	<20110416212352.GA19573@sleipnir.bytereef.org>
	<BANLkTinaXQv+A_G-LrbPpiHj6WHK8TVT-A@mail.gmail.com>
	<4862031C-A420-41A5-82B0-713262407802@gmail.com>
	<20110417053245.42D262500D7@mailhost.webabinitio.net>
Message-ID: <BANLkTi=DKF3miuQHMXoAuX-XN0JMeYeujA@mail.gmail.com>

On 17 April 2011 06:32, R. David Murray <rdmurray at bitdance.com> wrote:
> I don't think the PEP is asking this either (or if it is I agree it
> shouldn't be).  The way to get full branch coverage (and yes Exarkun is
> right, this is about individual branches; see coverage.py --branch)

One thing I'm definitely uncomfortable about is expressing the
requirement in a way that depends on a non-stdlib module
(coverage.py). Should coverage.py be added to the stdlib if we're
going to take test coverage as a measure? Hmm, maybe it goes without
saying, but does coverage.py work on Jython, IronPython, etc? (A quick
google search actually indicates that there might be some issues still
to be resolved...)
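
For reference, the kind of measurement the PEP seems to have in mind
would look roughly like this with coverage.py's Python API (a sketch
from memory - the branch=True flag and the report() signature are
assumptions that may not match every version):

import coverage

cov = coverage.coverage(branch=True)   # branch coverage, not just lines
cov.start()

from test import test_decimal         # exercise the pure-Python module
test_decimal.test_main()

cov.stop()
cov.report(show_missing=True)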

Paul.

From jnoller at gmail.com  Sun Apr 17 16:00:00 2011
From: jnoller at gmail.com (Jesse Noller)
Date: Sun, 17 Apr 2011 10:00:00 -0400
Subject: [Python-Dev] Releases for recent security vulnerability
In-Reply-To: <1303047769.3539.6.camel@localhost.localdomain>
References: <BANLkTi=FtJ_oZe-pKnNNANFTDehWrx-J2A@mail.gmail.com>
	<BANLkTikzFbgAWfn2AEWtPbqmDtSR6HN2Rw@mail.gmail.com>
	<4DA98166.2010604@gustavonarea.net>
	<BANLkTincrv+EOb651TXA-9w6Jwv5B86Fxw@mail.gmail.com>
	<20110417134856.4a3cc78b@pitrou.net>
	<BANLkTin=XUvrcPK0err5mjbmZvkVkdkwUw@mail.gmail.com>
	<1303047769.3539.6.camel@localhost.localdomain>
Message-ID: <BANLkTik_kVMpxp-TkrMM_epcBfvT5L3cDQ@mail.gmail.com>

On Sun, Apr 17, 2011 at 9:42 AM, Antoine Pitrou <solipsis at pitrou.net> wrote:
> On Sunday, 17 April 2011 at 09:30 -0400, Jesse Noller wrote:
>> >
>> > If we want to make official announcements (like releases or security
>> > warnings), I don't think the blog is appropriate. A separate
>> > announcement channel (mailing-list or newsgroup) would be better, where
>> > people can subscribe knowing they will only get a couple of e-mails a
>> > year.
>> >
>> > Regards
>> >
>> > Antoine.
>>
>> And whose responsibility is it to email yet another mythical list? The
>> person posting the fix? The person who found and filed the CVE? The
>> release manager?
>
> Well, whose responsibility is it to make blog posts about security
> issues? If you can answer this question then the other question
> shouldn't be any more difficult to answer ;)
>
> I don't think the people who may be interested in security announcements
> want to monitor a generic development blog, since Python is far from the
> only piece of software they rely on. /I/ certainly wouldn't want to.
>
> Also, I think Gustavo's whole point is that if we don't have a
> well-defined, deterministic procedure for security announcements and
> releases, then it's just as though we didn't care about security at all.
> Saying "look, we mentioned this one on our development blog" isn't
> really reassuring for the target group of people.
>
> Regards
>
> Antoine.

I'm not arguing against us having a well defined, deterministic
procedure! We need one, for sure - I'm just defending Brian's actions
as perfectly rational and reasonable. Without his post, that CVE would
have been published, publicly available on other sites (CVE tracking
sites, and hence on the radar for people looking to exploit it), and
no one would be the wiser.

At least it got *some* attention this way. Is it the right thing to do
moving forward? Probably not - but do we have the people/person
willing to head up defining the policy and procedure, and do we have
the needed contacts in the OS vendors/3rd party distributors to notify
them rapidly in the case of fixing something like this?

A lag of several weeks between fixing a security issue and a source-level
release from us that OS vendors can run with is, honestly, too slow.

jesse

From jacob at jacobian.org  Sun Apr 17 16:03:51 2011
From: jacob at jacobian.org (Jacob Kaplan-Moss)
Date: Sun, 17 Apr 2011 09:03:51 -0500
Subject: [Python-Dev] Releases for recent security vulnerability
In-Reply-To: <BANLkTint4fVi=Joy+ythOAOOzKL_VzTHPg@mail.gmail.com>
References: <BANLkTi=FtJ_oZe-pKnNNANFTDehWrx-J2A@mail.gmail.com>
	<BANLkTikzFbgAWfn2AEWtPbqmDtSR6HN2Rw@mail.gmail.com>
	<4DA98166.2010604@gustavonarea.net>
	<BANLkTint4fVi=Joy+ythOAOOzKL_VzTHPg@mail.gmail.com>
Message-ID: <BANLkTi=YLZu+40eAuXfBAyaORMxGCFsMRA@mail.gmail.com>

On Sat, Apr 16, 2011 at 9:23 AM, Nick Coghlan <ncoghlan at gmail.com> wrote:
> On Sat, Apr 16, 2011 at 9:45 PM, Gustavo Narea <me at gustavonarea.net> wrote:
>> May I suggest that you adopt a policy for handling security issues like
>> Django's?
>> http://docs.djangoproject.com/en/1.3/internals/contributing/#reporting-security-issues
>
> When the list of people potentially using the software is "anyone
> running Linux or Mac OS X and an awful lot of people running Windows
> or an embedded device", private pre-announcements simply aren't a
> practical reality. Neither is "stopping all other development" when
> most of the core development team aren't on the security at python.org
> list and don't even know a security issue exists until it is announced
> publicly. Take those two impractical steps out of the process, and
> what you have *is* the python.org procedure for dealing with security
> issues.

Just to fill in a bit of missing detail about our process since the
doc doesn't perfectly describe what happens:

* Our pre-announce list is *really* short. It consists of release
managers for various distributions that distribute packaged versions
of Django -- Ubuntu, Red Hat, and the like. Yes, it's a bit of
bookkeeping, but we feel it's really important to our users: not
everyone installs the Django package *we* put out, so we think it's
important to coordinate security releases with downstream distributors
so that users get a fixed version of Django regardless of how they're
installing Django in the first place.

* We don't really halt all development. I don't know why that's in
there, except maybe that it pre-dates there being more than a
couple-three committers. The point is just that we treat the security
issue as our most important issue at the moment and fix it as quickly
as possible.

I don't really have a point here as it pertains to python-dev, but I
thought it's important to clarify what Django *actually* does if it's
being discussed as a model.

Jacob

From rdmurray at bitdance.com  Sun Apr 17 16:54:03 2011
From: rdmurray at bitdance.com (R. David Murray)
Date: Sun, 17 Apr 2011 10:54:03 -0400
Subject: [Python-Dev] Releases for recent security vulnerability
In-Reply-To: <BANLkTin=XUvrcPK0err5mjbmZvkVkdkwUw@mail.gmail.com>
References: <BANLkTi=FtJ_oZe-pKnNNANFTDehWrx-J2A@mail.gmail.com>
	<BANLkTikzFbgAWfn2AEWtPbqmDtSR6HN2Rw@mail.gmail.com>
	<4DA98166.2010604@gustavonarea.net>
	<BANLkTincrv+EOb651TXA-9w6Jwv5B86Fxw@mail.gmail.com>
	<20110417134856.4a3cc78b@pitrou.net>
	<BANLkTin=XUvrcPK0err5mjbmZvkVkdkwUw@mail.gmail.com>
Message-ID: <20110417145433.4CF822500D7@mailhost.webabinitio.net>

On Sun, 17 Apr 2011 09:30:17 -0400, Jesse Noller <jnoller at gmail.com> wrote:
> On Sun, Apr 17, 2011 at 7:48 AM, Antoine Pitrou <solipsis at pitrou.net> wrote:
> > On Sat, 16 Apr 2011 21:32:48 -0500 Brian Curtin <brian.curtin at gmail.com> wrote:
> >> > Three weeks after this security vulnerability was *publicly* reported on
> >> > bugs.python.org, and two days after it was semi-officially announced,
> >> > I'm still waiting for security updates for my Ubuntu and Debian systems!
> >> >
> >> > I reckon if this had been handled differently (i.e., making new releases
> >> > and communicating it via the relevant channels [1]), we wouldn't have
> >> > the situation we have right now.
> >>
> >> I don't really think there's a "situation" here, and I fail to see how the
> >> development blog isn't one of the relevant channels.
> >
> > If we want to make official announcements (like releases or security
> > warnings), I don't think the blog is appropriate. A separate
> > announcement channel (mailing-list or newsgroup) would be better, where
> > people can subscribe knowing they will only get a couple of e-mails a
> > year.
> 
> And whose responsibility is it to email yet another mythical list? The
> person posting the fix? The person who found and filed the CVE? The
> release manager?
> 
> Brian *helped* us by raising awareness of the issue: At least now
> there's a chance that one or more of the OS vendors *saw* that this
> was an issue that was fixed.

The fact that Brian helped publicize it is not really relevant to
Antoine's point.  The *obvious* answer to your question about whose
responsibility it is is: *the security team*.  Brian's blog post would
then have been much more like he envisioned it when he wrote it, a peek
inside the process, rather than appearing to be the primary announcement
as many seem to be perceiving it.

That's how distributions, at least, handle this.  There's a mailing list for
security related announcements on which only the "security officer" or
"security team" posts announcements, and security related announcements
*only*.  Then the people responsible for security in any context
(a distribution, a security manager for a company, J Random User) can
subscribe to it and get *only* security announcements.  That allows them
to easily prioritize those announcements on receipt.

Python should have such a mailing list.

--
R. David Murray           http://www.bitdance.com

From ncoghlan at gmail.com  Sun Apr 17 17:02:09 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 18 Apr 2011 01:02:09 +1000
Subject: [Python-Dev] Releases for recent security vulnerability
In-Reply-To: <BANLkTi=YLZu+40eAuXfBAyaORMxGCFsMRA@mail.gmail.com>
References: <BANLkTi=FtJ_oZe-pKnNNANFTDehWrx-J2A@mail.gmail.com>
	<BANLkTikzFbgAWfn2AEWtPbqmDtSR6HN2Rw@mail.gmail.com>
	<4DA98166.2010604@gustavonarea.net>
	<BANLkTint4fVi=Joy+ythOAOOzKL_VzTHPg@mail.gmail.com>
	<BANLkTi=YLZu+40eAuXfBAyaORMxGCFsMRA@mail.gmail.com>
Message-ID: <BANLkTimsgPk8uMMnCxhavSmrBCJ1Yyd_mw@mail.gmail.com>

On Mon, Apr 18, 2011 at 12:03 AM, Jacob Kaplan-Moss <jacob at jacobian.org> wrote:
> Just to fill in a bit of missing detail about our process since the
> doc doesn't perfectly describe what happens:
>
> * Our pre-announce list is *really* short. It consists of release
> managers for various distributions that distribute packaged versions
> of Django -- Ubuntu, RedHat, and the like. Yes it's a bit of
> bookkeeping, but we feel it's really important to our users: not
> everyone installs the Django package *we* put out, so we think it's
> important to coordinate security releases with downstream distributors
> so that users get a fixed version of Django regardless of how they're
> installing Django in the first place.

I'd rather have Red Hat and Canonical reps *on* the
security at python.org list than on a separate pre-announce list.

> * We don't really halt all development. I don't know why that's in
> there, except maybe that it pre-dates there being more than a
> couple-three committers. The point is just that we treat the security
> issue as our most important issue at the moment and fix it as quickly
> as possible.

That makes a lot more sense.

> I don't really have a point here as it pertains to python-dev, but I
> thought it's important to clarify what Django *actually* does if it's
> being discussed as a model.

I'd personally like to see a couple of adjustments to
http://www.python.org/news/security/:

1. Identify a specific point-of-contact for the security list, for
security-related questions that aren't actually security issues (e.g.
how would a core developer go about asking to join the PSRT?)
2. Specifically state on the security page where vulnerabilities and
fixes will be announced and the information those announcements will
contain (as a reference for the PSRT when responding to an issue, and
also to inform others of the expected procedure)

The current page does a decent job of describing how to report a
security issue, but doesn't describe anything beyond that.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From fuzzyman at voidspace.org.uk  Sun Apr 17 17:13:10 2011
From: fuzzyman at voidspace.org.uk (Michael Foord)
Date: Sun, 17 Apr 2011 16:13:10 +0100
Subject: [Python-Dev] python and super
In-Reply-To: <4DAA46D7.1020500@pearwood.info>
References: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>
	<70EF2C52-1D92-4351-884E-52AF76BAAC6D@mac.com>
	<4DA70ACA.4070204@voidspace.org.uk>
	<20110414153503.F125B3A4063@sparrow.telecommunity.com>
	<825E6CD5-8673-463B-92AE-59677C327C0A@gmail.com>
	<4DA71C63.3030809@voidspace.org.uk>
	<8A4A58EF-70F7-4F2F-8564-AE8611713986@mac.com>
	<BANLkTincGnYrc48jzQfNrc_HFidohJXvwA@mail.gmail.com>
	<4DA79E28.2060406@pearwood.info> <4DA84DD6.20608@voidspace.org.uk>
	<4DAA46D7.1020500@pearwood.info>
Message-ID: <BANLkTi=8aoFMEMT+aj5HJ9XF6NHUvxTt_A@mail.gmail.com>

On 17 April 2011 02:48, Steven D'Aprano <steve at pearwood.info> wrote:

> Michael Foord wrote:
>
>> On 15/04/2011 02:23, Steven D'Aprano wrote:
>>
> [...]
>
>  If we treat django's failure to use super as a bug, you want the Python
>>> language to work-around that bug so that:
>>>
>>
>> What you say (that this particular circumstance could be treated as a bug
>> in django) is true, however consider the "recently" introduced problem
>> caused by object.__init__ not taking arguments. This makes it impossible to
>> use super correctly in various circumstances.
>>
> [...]
>
>  It is impossible to inherit from both C and A and have all parent __init__
>> methods called correctly. Changing the semantics of super as described would
>> fix this problem.
>>
>
> So you say. I don't have an an opinion on whether or not you are
> technically correct, but adding DWIM black-magic to super scares me.



Well, super is already pretty "magic" and what I'm suggesting is no more
magic than currently exists. I'm suggesting (but it won't happen - no-one
else is in favour :-) *extending* the existing algorithm in a predictable
and understandable way. The main advantage is that it allows methods to
express "don't call my parent class methods but don't halt the chain of
calling", which is currently not possible (so in that context I don't really
know what you mean by "DWIM black-magic"). I'm *not* suggesting full auto
calling.
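
To make the object.__init__ problem mentioned above concrete (a minimal
illustrative example, not the code from earlier in the thread):

class A(object):
    def __init__(self, a):
        self.a = a
        # Forwarding the argument is natural if another cooperative
        # parent might need it, but when the chain ends at object
        # this now blows up:
        super(A, self).__init__(a)

A(1)   # TypeError: object.__init__() takes no parameters

So a cooperative __init__ can't blindly forward its arguments; whether
object is next depends on the MRO of whatever subclass it ends up in.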

All the best,

Michael




> It scares me even if it were guaranteed to *only* apply to __init__, but if
> it applied to arbitrary methods, it frankly terrifies me.
>
> If it were limited to only apply to __init__, there would be a constant
> stream of requests that we loosen the restriction and "make super just work"
> for all methods, despite the dangers of DWIM code.
>
>
>
>
>
> --
> Steven
>



-- 

http://www.voidspace.org.uk/

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing http://www.sqlite.org/different.html
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110417/5473cc50/attachment.html>

From rdmurray at bitdance.com  Sun Apr 17 17:41:47 2011
From: rdmurray at bitdance.com (R. David Murray)
Date: Sun, 17 Apr 2011 11:41:47 -0400
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
	Compatibility Requirements
In-Reply-To: <20110417101451.GA23490@sleipnir.bytereef.org>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<BANLkTin5A_tJ+gCxXuA2xVWhMAghBb1dqw@mail.gmail.com>
	<BANLkTimVysOh9BdB1vNKHg5LHR3jq3RFyw@mail.gmail.com>
	<20110417101451.GA23490@sleipnir.bytereef.org>
Message-ID: <20110417154208.1C9AF2500CF@mailhost.webabinitio.net>

On Sun, 17 Apr 2011 12:14:51 +0200, Stefan Krah <stefan at bytereef.org> wrote:
> I'm not sure that I understand the duplication of effort: If there
> is a C module without a Python implementation in the stdlib, then
> the PyPy, Jython, and IronPython developers are free to cooperate
> and implement a single Python version. I would not consider this
> a duplication of effort.

Yes, that's exactly what we are trying to encourage.  If the Python
standard library is seen as common property of all Python implementations,
then this is *much* more likely to happen.

> If, on the other hand, they choose to provide three individual
> implementations in C#, Java and (?), then that is their own choice
> and surely not the fault of the C module developer.

Right.

> By contrast, this PEP puts a great burden on the developers of
> new C modules. If this PEP is accepted, it is the C module developers
> who will have to do duplicate work.

This is true only because of the current "blessed" position of CPython
in the Python ecosystem.  If a separate Python stdlib is the common
property of all Python implementations, then the same double burden
would apply to, say, an IronPython developer writing a module in C#
and wanting it included in the stdlib.

> In my view, the PEP should have a clause that *active* participation
> of PyPy, Jython, and IronPython developers is expected if they want
> pure compatible Python versions to exist.

> >     Re-implementing parts (or all) of a module in C (in the case
> >     of CPython) is still allowed for performance reasons, but any such
> >     accelerated code must pass the same test suite (sans VM- or C-specific
> >     tests) to verify semantics and prevent divergence. To accomplish this,
> >     the test suite for the module must have 100% branch coverage of the
> >     pure Python implementation before the acceleration code may be added.
> 
> Raymond has pointed out that the PEP seems to discourage C modules. This
> is one of the examples. Since implementing C modules takes a lot of time,
> I'd appreciate to know if they are just tolerated or actually welcome.

I believe they are welcome, but that they are a CPython implementation
detail, and the PEP is trying to make that distinction clear.

One can also imagine a C module getting accepted into the stdlib
because everybody agrees that (a) it can't be implemented in Python and
(b) every Python implementation should support it.  In that case only
the test suite will be part of the implementation-independent part of
the stdlib.  I do think that such modules (and we already have several)
should have a higher bar to cross to get in to the stdlib than modules
that have a pure Python implementation.

> If all possible exceptions must match, then in the case of decimal the
> PEP should give permission to change the published API of an existing
> Python module (in this case decimal.py). Otherwise, I see no way of
> accomplishing this goal.

This may well be what needs to be done, both for CPython and for other
implementations.  When we agree that some test covers something that is
an implementation detail, the tests should be so marked.  Making changes
to the API and tests to accommodate specific Python implementations
(including CPython) will be the right thing to do in some cases.
Obviously these will have to be considered on a case by case basis.
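
For instance (purely illustrative names, not an existing test), marking
a test as CPython-specific could look something like this:

import platform
import unittest

cpython_only = unittest.skipUnless(
    platform.python_implementation() == "CPython",
    "CPython implementation detail")

class ContextAPITests(unittest.TestCase):
    @cpython_only
    def test_accelerator_rejects_non_context(self):
        # Only meaningful where the C accelerator exists; the
        # implementation-independent semantics stay in unmarked tests.
        pass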

The Python stdlib and its tests are already the standard that other
implementations need to conform to.  The PEP is trying to lay out some
rules so that CPython has to conform on equal footing with the
other implementations.

> It is possible to give many frivolous examples:
> 
> >>> from decimal import *
> >>>
> 
> >>> class C():
> ...     def __init__(self):
> ...         self.traps = 'invalid'
> ...
> 
> >>> # No exception
> ... setcontext(C())
> >>> 
> 
> 
> 
> >>> from cdecimal import *
> >>> class C():
> ...     def __init__(self):
> ...         self.traps = 'invalid'
> ... 
> 
> >>> setcontext(C())
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
> TypeError: argument must be a context.
> >>>
>
> In the case of duck typing, the only solution I see is to lock down the
> types in decimal.py, thus changing the API. This is one of the things that
> should be decided *before* the PEP is accepted.

Here you perceive the burden we are currently placing on the other
implementations.  That's the world they live in *now*.  The PEP is asking
CPython to share this pain equally.

I agree that this is a concrete example that the PEP could address.
I myself don't know enough about decimal/cdecimal or the Python C API
to know why cdecimal can't duck type here, but it certainly sounds
like a good example to use to clarify the requirements being advocated
by the PEP.  I won't be surprised to find that the issues involved are
the same issues that an accelerator module for the other Python
implementations would face.

--
R. David Murray           http://www.bitdance.com

From rdmurray at bitdance.com  Sun Apr 17 17:51:41 2011
From: rdmurray at bitdance.com (R. David Murray)
Date: Sun, 17 Apr 2011 11:51:41 -0400
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
	Compatibility Requirements
In-Reply-To: <4F1A17A7-CA6E-42BD-A856-15DD92EAEE76@gmail.com>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<BANLkTin5A_tJ+gCxXuA2xVWhMAghBb1dqw@mail.gmail.com>
	<BANLkTimVysOh9BdB1vNKHg5LHR3jq3RFyw@mail.gmail.com>
	<20110416212352.GA19573@sleipnir.bytereef.org>
	<BANLkTinaXQv+A_G-LrbPpiHj6WHK8TVT-A@mail.gmail.com>
	<4862031C-A420-41A5-82B0-713262407802@gmail.com>
	<20110417053245.42D262500D7@mailhost.webabinitio.net>
	<4F1A17A7-CA6E-42BD-A856-15DD92EAEE76@gmail.com>
Message-ID: <20110417155202.DDE8E2500D1@mailhost.webabinitio.net>

On Sun, 17 Apr 2011 00:30:22 -0700, Raymond Hettinger <raymond.hettinger at gmail.com> wrote:
> >>>> In the grand python-dev tradition of "silence means acceptance", I consider
> >>>> this PEP finalized and implicitly accepted.
> >>
> >> I haven't seen any responses that said, yes this is a well thought-out proposal
> >> that will actually benefit any of the various implementations.
> >
> > In that case it may well be that the silence is because the other
> > implementations think the PEP is OK.  They certainly voted in favor of
> > the broad outline of it at the language summit.
> 
> Sounds like it was implicitly accepted even before it was written or any of the
> details were discussed.

No, just the principle that something along these lines would be good.
Any final decision of course requires the actual PEP to look at, which
was also acknowledged at the summit.  My point was that lack of comment
from the other implementations *might* indicate they liked how the PEP
turned out.  But it might also mean they aren't paying attention, which
would be bad...

> The big picture of "let's do something to make life easier for other
> implementations" is a worthy goal.  What that something should be is still a bit
> ambiguous.

As I said in another email, I think the something that should be
done is to put CPython on equal footing implementation-pain-wise and
lets-make-this-work-wise with the other implementations.  The end result
will be better test coverage and clearer APIs in the stdlib.

--
R. David Murray           http://www.bitdance.com

From fuzzyman at voidspace.org.uk  Sun Apr 17 18:05:03 2011
From: fuzzyman at voidspace.org.uk (Michael Foord)
Date: Sun, 17 Apr 2011 17:05:03 +0100
Subject: [Python-Dev] Status of json (simplejson) in cpython
In-Reply-To: <20110417011634.142092b2@pitrou.net>
References: <BANLkTind0dSsoesD+tZ-EAqZjpr2m-CuCw@mail.gmail.com>	<loom.20110416T113016-741@post.gmane.org>	<20110416161931.089d2014@pitrou.net>	<loom.20110416T181907-569@post.gmane.org>	<4DA9E292.20805@v.loewis.de>	<loom.20110416T211116-728@post.gmane.org>	<4DAA09F4.3000001@v.loewis.de>
	<4DAA1CCD.60805@voidspace.org.uk>
	<20110417011634.142092b2@pitrou.net>
Message-ID: <4DAB0FAF.3010700@voidspace.org.uk>

On 17/04/2011 00:16, Antoine Pitrou wrote:
> On Sat, 16 Apr 2011 23:48:45 +0100
> Michael Foord<fuzzyman at voidspace.org.uk>  wrote:
>
>> On 16/04/2011 22:28, "Martin v. Löwis" wrote:
>>> On 16.04.2011 21:13, Vinay Sajip wrote:
>>>> Martin v. Löwis <martin at v.loewis.de> writes:
>>>>
>>>>> Does it actually need improvement?
>>>> I can't actually say, but I assume it keeps changing for the better - albeit
>>>> slowly. I wasn't thinking of specific improvements, just the idea of continuous
>>>> improvement in general...
>>> Hmm. I cannot believe in the notion of "continuous improvement"; I'd
>>> guess that it is rather "continuous change".
>>>
>>> I can see three possible areas of improvement:
>>> 1. Bugs: if there are any, they should clearly be fixed. However, JSON
>>>      is a simple format, so the implementation should be able to converge
>>>      to something fairly correct quickly.
>>> 2. Performance: there is always room for performance improvements.
>>>      However, I strongly recommend to not bother unless a severe
>>>      bottleneck can be demonstrated.
>> Well, there was a 5x speedup demonstrated comparing simplejson to the
>> standard library json module.
> No.
>
Yes.



-- 
http://www.voidspace.org.uk/

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing http://www.sqlite.org/different.html


From fuzzyman at voidspace.org.uk  Sun Apr 17 18:09:17 2011
From: fuzzyman at voidspace.org.uk (Michael Foord)
Date: Sun, 17 Apr 2011 17:09:17 +0100
Subject: [Python-Dev] Status of json (simplejson) in cpython
In-Reply-To: <4DAA88A8.3080507@v.loewis.de>
References: <BANLkTind0dSsoesD+tZ-EAqZjpr2m-CuCw@mail.gmail.com>	<loom.20110416T113016-741@post.gmane.org>	<20110416161931.089d2014@pitrou.net>	<loom.20110416T181907-569@post.gmane.org>	<4DA9E292.20805@v.loewis.de>	<loom.20110416T211116-728@post.gmane.org>	<4DAA09F4.3000001@v.loewis.de>
	<4DAA1CCD.60805@voidspace.org.uk> <4DAA88A8.3080507@v.loewis.de>
Message-ID: <4DAB10AD.4010207@voidspace.org.uk>

On 17/04/2011 07:28, "Martin v. Löwis" wrote:
>> Well, there was a 5x speedup demonstrated comparing simplejson to the
>> standard library json module.
> Can you kindly point to that demonstration?
>
Hmm... according to a later email in this thread it is 350ms vs 250ms 
for an 11kb sample. That's a nice speedup but not a 5x one. Bob Ippolito 
did claim that simplejson was faster than json for real world workloads 
and I see no reason not to believe him. :-)

>> That sound like *very* worth pursuing (and
>> crazy not to pursue). I've had json serialisation be the bottleneck in
>> web applications generating several megabytes of json for some requests.
> Hmm. I'd claim that the web application that needs to generate several
> megabytes of json for something should be redesigned.

It was displaying (including sorting) large amounts of information in 
tables through a web UI. The customer wanted all the information 
available in the tables, so all the data needed to be sent. We did 
filtering on the server side where possible to minimize the data sent, 
but it was ~10mb for many of the queries. We also cached the data on the 
client and only updated as needed.

We could have "redesigned" the customer requirements I suppose...

>   I also wonder
> whether the bottleneck was the *generation*,
The bottleneck was generation. I benchmarked and optimised. (We were 
using simplejson but I trimmed down the data sent to the absolute 
minimum needed by the client app rather than merely serialising all the 
source data from the django model objects - I didn't optimise within 
simplejson itself...)

> the transmission, or
> the processing of the data on the receiving end.
>
Processing was done in IronPython in Silverlight using the .NET 
de-serialization APIs which were dramatically faster than the Python 
handling on the other side.

All the best,

Michael

> Regards,
> Martin


-- 
http://www.voidspace.org.uk/

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing http://www.sqlite.org/different.html


From fuzzyman at voidspace.org.uk  Sun Apr 17 18:32:31 2011
From: fuzzyman at voidspace.org.uk (Michael Foord)
Date: Sun, 17 Apr 2011 17:32:31 +0100
Subject: [Python-Dev] Status of json (simplejson) in cpython
In-Reply-To: <4DAB0FAF.3010700@voidspace.org.uk>
References: <BANLkTind0dSsoesD+tZ-EAqZjpr2m-CuCw@mail.gmail.com>	<loom.20110416T113016-741@post.gmane.org>	<20110416161931.089d2014@pitrou.net>	<loom.20110416T181907-569@post.gmane.org>	<4DA9E292.20805@v.loewis.de>	<loom.20110416T211116-728@post.gmane.org>	<4DAA09F4.3000001@v.loewis.de>	<4DAA1CCD.60805@voidspace.org.uk>	<20110417011634.142092b2@pitrou.net>
	<4DAB0FAF.3010700@voidspace.org.uk>
Message-ID: <4DAB161F.1060404@voidspace.org.uk>

On 17/04/2011 17:05, Michael Foord wrote:
> On 17/04/2011 00:16, Antoine Pitrou wrote:
>> On Sat, 16 Apr 2011 23:48:45 +0100
>> Michael Foord<fuzzyman at voidspace.org.uk>  wrote:
>>
>>> On 16/04/2011 22:28, "Martin v. Löwis" wrote:
>>>> On 16.04.2011 21:13, Vinay Sajip wrote:
>>>>> Martin v. Löwis <martin at v.loewis.de> writes:
>>>>>
>>>>>> Does it actually need improvement?
>>>>> I can't actually say, but I assume it keeps changing for the 
>>>>> better - albeit
>>>>> slowly. I wasn't thinking of specific improvements, just the idea 
>>>>> of continuous
>>>>> improvement in general...
>>>> Hmm. I cannot believe in the notion of "continuous improvement"; I'd
>>>> guess that it is rather "continuous change".
>>>>
>>>> I can see three possible areas of improvement:
>>>> 1. Bugs: if there are any, they should clearly be fixed. However, JSON
>>>>      is a simple format, so the implementation should be able to 
>>>> converge
>>>>      to something fairly correct quickly.
>>>> 2. Performance: there is always room for performance improvements.
>>>>      However, I strongly recommend to not bother unless a severe
>>>>      bottleneck can be demonstrated.
>>> Well, there was a 5x speedup demonstrated comparing simplejson to the
>>> standard library json module.
>> No.
>>
> Yes.

Well, maybe not. :-)

>
>
>


-- 
http://www.voidspace.org.uk/

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing http://www.sqlite.org/different.html


From stefan at bytereef.org  Sun Apr 17 19:17:11 2011
From: stefan at bytereef.org (Stefan Krah)
Date: Sun, 17 Apr 2011 19:17:11 +0200
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator
	Module	Compatibility Requirements
In-Reply-To: <20110417154208.1C9AF2500CF@mailhost.webabinitio.net>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<BANLkTin5A_tJ+gCxXuA2xVWhMAghBb1dqw@mail.gmail.com>
	<BANLkTimVysOh9BdB1vNKHg5LHR3jq3RFyw@mail.gmail.com>
	<20110417101451.GA23490@sleipnir.bytereef.org>
	<20110417154208.1C9AF2500CF@mailhost.webabinitio.net>
Message-ID: <20110417171711.GA26304@sleipnir.bytereef.org>

R. David Murray <rdmurray at bitdance.com> wrote:
[snip a lot]

Thank you, this cleared up many things.


> > In the case of duck typing, the only solution I see is to lock down the
> > types in decimal.py, thus changing the API. This is one of the things that
> > should be decided *before* the PEP is accepted.
> 
> Here you perceive the burden we are currently placing on the other
> implementations.  That's the world they live in *now*.  The PEP is asking
> CPython to share this pain equally.
> 
> I agree that this is a concrete example that the PEP could address.
> I myself don't know enough about decimal/cdecimal or the Python C API
> to know why cdecimal can't duck type here, but it certainly sounds
> like a good example to use to clarify the requirements being advocated
> by the PEP.  I won't be surprised to find that the issues involved are
> the same issues that an accelerator module for the other Python
> implementations would face.

The technical reason is that the context is a speed critical data structure,
so I'm doing some tricks to emulate the context flags and traps dictionaries.


But I actually prefer that the context is locked down. The context
settings are absolutely crucial for the correctness of the result.
Here is a mistake that I've made multiple times while trying something
out with decimal.py:

>>> from decimal import *
>>> c = getcontext()
# Meaning c.Emax and c.Emin:
>>> c.emax = 99
>>> c.emin = -99
# The operation silently uses the unchanged context:
>>> Decimal(2)**99999
Decimal('4.995010465071922539720163822E+30102')
>>> 


cdecimal raises an AttributeError:

>>> from cdecimal import *
>>> c = getcontext()
>>> c.emax = 99
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'cdecimal.Context' object has no attribute 'emax'
>>> 
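
For comparison, a minimal sketch (not the real decimal.Context, which
has many more attributes and different defaults) of how decimal.py could
reject unknown attributes so that the typo above fails loudly too:

class Context(object):
    # Illustrative subset of attributes only.
    __slots__ = ('prec', 'Emin', 'Emax', 'rounding', 'flags', 'traps')

    def __init__(self, prec=28, Emin=-999999, Emax=999999):
        self.prec, self.Emin, self.Emax = prec, Emin, Emax

c = Context()
c.emax = 99   # AttributeError: 'Context' object has no attribute 'emax'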


So, if one of the goals of the PEP is to clean up various APIs, I'm all
for it. My concern, though, is that the process will be very slow due to
lack of time and general reluctance to change APIs. And this is where
I see a potentially negative effect:

Is it worth stalling development over relatively minor issues? Will
these differences actually affect someone in practice? Will the
four Python implementations block each other?



Stefan Krah


From rdmurray at bitdance.com  Sun Apr 17 19:50:55 2011
From: rdmurray at bitdance.com (R. David Murray)
Date: Sun, 17 Apr 2011 13:50:55 -0400
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
	Compatibility Requirements
In-Reply-To: <20110417171711.GA26304@sleipnir.bytereef.org>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<BANLkTin5A_tJ+gCxXuA2xVWhMAghBb1dqw@mail.gmail.com>
	<BANLkTimVysOh9BdB1vNKHg5LHR3jq3RFyw@mail.gmail.com>
	<20110417101451.GA23490@sleipnir.bytereef.org>
	<20110417154208.1C9AF2500CF@mailhost.webabinitio.net>
	<20110417171711.GA26304@sleipnir.bytereef.org>
Message-ID: <20110417175116.1D7452500D7@mailhost.webabinitio.net>

On Sun, 17 Apr 2011 19:17:11 +0200, Stefan Krah <stefan at bytereef.org> wrote:
> R. David Murray <rdmurray at bitdance.com> wrote:
> [snip a lot]
> 
> Thank you, this cleared up many things.

Heh.  Keep in mind that this is my viewpoint.  I *think* Brett agrees
with me.  I'm sure he'll speak up if he doesn't.

> The technical reason is that the context is a speed critical data structure,
> so I'm doing some tricks to emulate the context flags and traps dictionaries.

[snip]

Thanks, your explanation seems to me to make a good case for making the
decimal.py implementation less permissive.

> So, if one of the goals of the PEP is to clean up various APIs, I'm all
> for it. My concern is though that the process will be very slow due to
> lack of time and general reluctance to change APIs. And this is where
> I see a potentially negative effect:

Well, the general reluctance to change APIs is certainly an issue.
But since you are advocating cdecimal changing the API *anyway*, if it
is going to go in to CPython this would have to be addressed regardless.
So I don't see that the PEP affects the speed of that part of the process
from CPython's point of view.

> Is it worth to stall development over relatively minor issues? Will
> these differences actually affect someone in practice? Will the
> four Python implementations block each other?

In my vision it wouldn't stall development in any place it shouldn't
be stalled by our normal backward compatibility rules.  It would be a
bug in the bug tracker saying "the API of module X has some undesirable
characteristics that get in the way of implementing accelerators, can
we change it?"  Again, I don't see this as changing what the current
procedure should be anyway, just clarifying it and making it more likely
that we will *notice* the changes and deal with them proactively rather
than finding out about them after the accelerator is in the field, having
introduced a backward-incompatible change unintentionally.  (Note: I'm
sure that we will still accidentally do this anyway, I'm just hoping to
reduce the frequency of such occurrences).

--
R. David Murray           http://www.bitdance.com

From stephen at xemacs.org  Sun Apr 17 20:32:34 2011
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Mon, 18 Apr 2011 03:32:34 +0900
Subject: [Python-Dev] Releases for recent security vulnerability
In-Reply-To: <4DA98166.2010604@gustavonarea.net>
References: <BANLkTi=FtJ_oZe-pKnNNANFTDehWrx-J2A@mail.gmail.com>
	<BANLkTikzFbgAWfn2AEWtPbqmDtSR6HN2Rw@mail.gmail.com>
	<4DA98166.2010604@gustavonarea.net>
Message-ID: <87wris7mpp.fsf@uwakimon.sk.tsukuba.ac.jp>

Gustavo Narea writes:

 > Well, that's a long shot. I doubt the people/organizations affected are
 > all aware.

That's really not Python's responsibility.  That's theirs.  Caveats:
Python should have a single place where security patches are announced
*first*, before developer blogs and the like.  Python's documentation
should make it clear at the most important entry points that the
appropriate place to report possible security issues is
security at python.org, not the tracker.  In particular, the tracker's
top page (the one you get from http://bugs.python.org/) should make
that clear.

Ironically, Brian's blog entry outlines a plausible security policy,
but a quick Google didn't find it elsewhere on site:python.org.  Oops,
a different search did find it -- under News/Security Advisories.

The tracker suggestion was submitted as
http://psf.upfronthosting.co.za/roundup/meta/issue393.

 > And I doubt they are all capable of patching their system or
 > getting a patched Python from a trusted party.

Then they shouldn't be running Python in a security context, should
they?  Seriously, if they want somebody else to take care of their
security issues for them, they should pay for it.  As in almost all areas
of life, security is at best worth what you pay for it.

 > Three weeks after this security vulnerability was *publicly* reported on
 > bugs.python.org,

Again, that's an issue with the reporter not knowing the policy, not
the policy itself, which is to report to security at python.org,
cf. Brian's post and the Security Advisory page.  The caveats above
apply, though.

 > and two days after it was semi-officially announced,
 > I'm still waiting for security updates for my Ubuntu and Debian systems!

Yeah, well, so much for depending on Ubuntu and Debian.  There are
reasons why people pay for RHEL.

 > I reckon if this had been handled differently (i.e., making new releases
 > and communicating it via the relevant channels [1]), we wouldn't have
 > the situation we have right now.

Of course not.  So what?  The question is "what is the best way to
reduce risks?"  *It is not possible to reduce all risks
simultaneously.*  What you are saying is "please keep things obscure
until I'm up to date."

It seems to be a consensus in the security community that most
security holes *are* sufficiently obscure that announcing the problem
before announcing a fix is likely to increase the likelihood of black
hats finding exploits and successfully executing them more than it
increases the likelihood that (3rd party) white hats will find and
contribute a fix.  So the question is whether to rely on obscurity
after the fix is devised.

Now, once there is a fix, there's always hysteresis in implementation,
as you note with respect to Ubuntu and Debian.  If you don't announce
the fix once available, you are increasing risk to conscientious,
patch-capable admins dramatically compared to the case where you give
them the patch.  I don't see why your Ubuntu/Debian systems should
take precedence over systems of those who prefer to rely on self-
built-and-maintained systems.  (In fact, since I generally fall into
the latter category, may I suggest it should be the other way around?
<wink />)

 > May I suggest that you adopt a policy for handling security issues like
 > Django's?
 > http://docs.djangoproject.com/en/1.3/internals/contributing/#reporting-security-issues

I'm -1 on that, except to the extent that it corresponds to existing
policy.  Eg,

    This will probably mean a new release of Django, but in some cases
    it may simply be patches against current releases.

seems to apply to the current case.  I really don't think the policy
Django claims is appropriate for Python for the following reasons,
among others:

    Halt all other development as long as is needed to develop a fix,
    including patches against the current and two previous releases.

is nonsense.  Perhaps what this means is that the "long-time, highly
trusted Django developers" on the security at django list *volunteer* to
put other Django work aside until the security hole is patched.  But
certainly the "hundreds of contributors around the world" won't stop
their work -- they won't know about the moratorium since they're not
on the security list.
In the case of Python, it's not even possible to stop commits without
closing the repo, as not all committers are on the security list.

Even for the security crew, in many cases of security problems, it's
something simple and readily fixed like a buffer overflow or a URL
traversal issue that just needs a simple filter on input before
executing a risky procedure.  So who does the fixing, reviewing, etc.
should be decided on a case-by-case basis based on the problem itself
and available expertise IMO.

And this

    Pre-notify everyone we know to be running the affected version(s)
    of Django.

seems positively counter-productive if they're serious about
"everyone".  Surely the black hats are running the affected versions
of Django to test their exploits!  So they get to find out they're
running out of time to execute while the people running affected
versions remain vulnerable until the release.


From stephen at xemacs.org  Sun Apr 17 21:02:35 2011
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Mon, 18 Apr 2011 04:02:35 +0900
Subject: [Python-Dev] Releases for recent security vulnerability
In-Reply-To: <BANLkTimsgPk8uMMnCxhavSmrBCJ1Yyd_mw@mail.gmail.com>
References: <BANLkTi=FtJ_oZe-pKnNNANFTDehWrx-J2A@mail.gmail.com>
	<BANLkTikzFbgAWfn2AEWtPbqmDtSR6HN2Rw@mail.gmail.com>
	<4DA98166.2010604@gustavonarea.net>
	<BANLkTint4fVi=Joy+ythOAOOzKL_VzTHPg@mail.gmail.com>
	<BANLkTi=YLZu+40eAuXfBAyaORMxGCFsMRA@mail.gmail.com>
	<BANLkTimsgPk8uMMnCxhavSmrBCJ1Yyd_mw@mail.gmail.com>
Message-ID: <87vcyc7lbo.fsf@uwakimon.sk.tsukuba.ac.jp>

Nick Coghlan writes:

 > I'd personally like to see a couple of adjustments to
 > http://www.python.org/news/security/:

For another thing, it needs to be more discoverable.

For yet another thing, it has two ancient entries on it.  Surely there
are more than that?


From nikolay.desh at gmail.com  Sun Apr 17 20:57:02 2011
From: nikolay.desh at gmail.com (Nikolay Zakharov)
Date: Sun, 17 Apr 2011 22:57:02 +0400
Subject: [Python-Dev] python and super
In-Reply-To: <4DA8D6FC.9060707@canterbury.ac.nz>
References: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>	<70EF2C52-1D92-4351-884E-52AF76BAAC6D@mac.com>	<4DA70ACA.4070204@voidspace.org.uk>	<20110414153503.F125B3A4063@sparrow.telecommunity.com>	<825E6CD5-8673-463B-92AE-59677C327C0A@gmail.com>	<4DA71C63.3030809@voidspace.org.uk>	<8A4A58EF-70F7-4F2F-8564-AE8611713986@mac.com>	<BANLkTincGnYrc48jzQfNrc_HFidohJXvwA@mail.gmail.com>	<4DA79E28.2060406@pearwood.info>
	<4DA84DD6.20608@voidspace.org.uk>
	<4DA8D6FC.9060707@canterbury.ac.nz>
Message-ID: <4DAB37FE.8090500@gmail.com>

16.04.2011 03:38, Greg Ewing wrote:
> Michael Foord wrote:
>
>> consider the "recently" introduced problem caused by object.__init__
>> not taking arguments. This makes it impossible to use super correctly
>> in various circumstances.
>>
>> ...
>>
>> It is impossible to inherit from both C and A and have all parent 
>> __init__ methods called correctly. Changing the semantics of super as 
>> described would fix this problem.
>
> I don't see how, because auto-super-calling would eventually
> end up trying to call object.__init__ with arguments and fail.
>
> You might think to "fix" this by making a special case of
> object.__init__ and refraining from calling it. But the same
> problem arises in a more general way whenever some class in
> the mix has a method with the right name but the wrong
> signature, which is likely to happen if you try to mix
> classes that weren't designed to be mixed together.
>
Michael's words are not about *auto-calling* but about *stopping the
prevention* of a parent's method call by a class that is unrelated to
that parent. In the example above, A is the stopper that prevents
B.__init__ from being called, and B is the stopper that prevents
A.__init__ from being called, yet A and B are completely unrelated to
each other.

object.__init__ would not be called anyway (in this example), but the
point is that nobody (at least not Michael or myself) is going to
*auto-call* object.__init__ with some automagically picked arguments.
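
For concreteness, here is a rough sketch of the kind of hierarchy being
discussed (the class names are illustrative, not Michael's original
example):

    class A:
        def __init__(self):
            super().__init__()      # cooperative, but knows nothing of x

    class B:
        def __init__(self, x):
            self.x = x
            super().__init__()

    class C(A, B):                  # MRO: C, A, B, object
        def __init__(self, x):
            super().__init__()      # reaches A, which cannot forward x to B

    C(1)   # fails: B.__init__ ends up being called without its argument,
           # and having C pass x instead breaks on A.__init__, so there is
           # no way to satisfy both A and B without A knowing about x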

--
Nikolay Zakharov

From solipsis at pitrou.net  Sun Apr 17 21:06:26 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sun, 17 Apr 2011 21:06:26 +0200
Subject: [Python-Dev] Status of json (simplejson) in cpython
References: <BANLkTind0dSsoesD+tZ-EAqZjpr2m-CuCw@mail.gmail.com>
	<loom.20110416T113016-741@post.gmane.org>
	<20110416161931.089d2014@pitrou.net>
	<loom.20110416T181907-569@post.gmane.org>
	<4DA9E292.20805@v.loewis.de>
	<loom.20110416T211116-728@post.gmane.org>
	<4DAA09F4.3000001@v.loewis.de> <4DAA1CCD.60805@voidspace.org.uk>
	<4DAA88A8.3080507@v.loewis.de> <4DAB10AD.4010207@voidspace.org.uk>
Message-ID: <20110417210626.7a611338@pitrou.net>

On Sun, 17 Apr 2011 17:09:17 +0100
Michael Foord <fuzzyman at voidspace.org.uk> wrote:
> On 17/04/2011 07:28, "Martin v. Löwis" wrote:
> >> Well, there was a 5x speedup demonstrated comparing simplejson to the
> >> standard library json module.
> > Can you kindly point to that demonstration?
> >
> Hmm... according to a later email in this thread it is 350ms vs 250ms 
> for an 11kb sample. That's a nice speedup but not a 5x one.

That speedup is actually because of a slowdown in py3k, which should be
solved with http://bugs.python.org/issue11856

Regards

Antoine.



From martin at v.loewis.de  Sun Apr 17 22:16:06 2011
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Sun, 17 Apr 2011 22:16:06 +0200
Subject: [Python-Dev] Status of json (simplejson) in cpython
In-Reply-To: <loom.20110417T123902-831@post.gmane.org>
References: <BANLkTind0dSsoesD+tZ-EAqZjpr2m-CuCw@mail.gmail.com>	<loom.20110416T113016-741@post.gmane.org>	<20110416161931.089d2014@pitrou.net>
	<iocen5$sq5$1@dough.gmane.org>
	<loom.20110417T123902-831@post.gmane.org>
Message-ID: <4DAB4A86.6090703@v.loewis.de>

> Of course, people might find other workloads which show bigger disparity in
> performance, or might find something in my 3.x port of simplejson which
> invalidates my finding of a 2% difference.

Thanks a lot for doing this research, by the way.

Regards,
Martin

From martin at v.loewis.de  Sun Apr 17 23:57:49 2011
From: martin at v.loewis.de (=?ISO-8859-15?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Sun, 17 Apr 2011 23:57:49 +0200
Subject: [Python-Dev] [ANN] Python 2.5.6 Release Candidate 1
Message-ID: <4DAB625D.7060203@v.loewis.de>

On behalf of the Python development team and the Python community, I'm
happy to announce the release candidate 1 of Python 2.5.6.

This is a source-only release that only includes security fixes. The
last full bug-fix release of Python 2.5 was Python 2.5.4. Users are
encouraged to upgrade to the latest release of Python 2.7 (which is
2.7.1 at this point).

This release fixes issues with the urllib, urllib2, SimpleHTTPServer,
and audioop modules. See the release notes at the website (also
available as Misc/NEWS in the source distribution) for details of bugs
fixed.

For more information on Python 2.5.6, including download links for
various platforms, release notes, and known issues, please see:

    http://www.python.org/2.5.6

Highlights of the previous major Python releases are available from
the Python 2.5 page, at

    http://www.python.org/2.5/highlights.html

Enjoy this release,
Martin

Martin v. Loewis
martin at v.loewis.de
Python Release Manager
(on behalf of the entire python-dev team)

From greg.ewing at canterbury.ac.nz  Mon Apr 18 00:51:28 2011
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Mon, 18 Apr 2011 10:51:28 +1200
Subject: [Python-Dev] python and super
In-Reply-To: <BANLkTi=xLocxCQGAQZ3Xy3z6sXsxZnt-cA@mail.gmail.com>
References: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>
	<BANLkTi=xLocxCQGAQZ3Xy3z6sXsxZnt-cA@mail.gmail.com>
Message-ID: <4DAB6EF0.9020409@canterbury.ac.nz>

Mark Janssen wrote:
> I have to say it is quite strange to
> me that there is no distinction made between IS-A relationships and
> HAS-A relationships with regard to the issue of Inheritance.

I'm not sure what you mean by that. Inheritance is (or
should be) used only for is-a relationships. Misusing it
for has-a relationships leads to problems.

> Python, confusingly makes no syntactic distinction,

Yes, it does, as long as you use composition instead of
inheritance for has-a relationships.

-- 
Greg

From vinay_sajip at yahoo.co.uk  Mon Apr 18 02:19:30 2011
From: vinay_sajip at yahoo.co.uk (Vinay Sajip)
Date: Mon, 18 Apr 2011 00:19:30 +0000 (UTC)
Subject: [Python-Dev] Status of json (simplejson) in cpython
References: <BANLkTind0dSsoesD+tZ-EAqZjpr2m-CuCw@mail.gmail.com>	<loom.20110416T113016-741@post.gmane.org>	<20110416161931.089d2014@pitrou.net>	<loom.20110416T181907-569@post.gmane.org>	<20110416192736.1f9dd279@pitrou.net>
	<loom.20110417T122545-907@post.gmane.org>
	<ioer6e$te1$1@dough.gmane.org>
Message-ID: <loom.20110418T020421-9@post.gmane.org>

Stefan Behnel <stefan_ml <at> behnel.de> writes:

> Is this using the C accelerated version in both cases? What about the pure 
> Python versions? Could you provide numbers for both?

What I posted earlier were C-accelerated timings. I'm not sure exactly how to
turn off the speedups for stdlib json. With some assumptions, as listed in this
script:

https://gist.github.com/924626

I get timings like this:

Python version: 3.2 (r32:88445, Mar 25 2011, 19:28:28) 
[GCC 4.5.2]
11.21484375 KiB read
Timing simplejson (with speedups):
0.31562185287475586
Timing stdlib json (with speedups):
0.31923389434814453
Timing simplejson (without speedups):
4.586531162261963
Timing stdlib json (without speedups):
2.5293829441070557

It's quite likely that I've failed to turn off the stdlib json speedups (though
I attempted to turn them off for both encoding and decoding), which would
explain the big disparity in the non-speedup case. Perhaps someone with more
familiarity with stdlib json speedup internals could take a look to see what
I've missed? I perhaps can't see the forest for the trees.

Regards,

Vinay Sajip


From ncoghlan at gmail.com  Mon Apr 18 04:58:54 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 18 Apr 2011 12:58:54 +1000
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
 Compatibility Requirements
In-Reply-To: <20110417175116.1D7452500D7@mailhost.webabinitio.net>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<BANLkTin5A_tJ+gCxXuA2xVWhMAghBb1dqw@mail.gmail.com>
	<BANLkTimVysOh9BdB1vNKHg5LHR3jq3RFyw@mail.gmail.com>
	<20110417101451.GA23490@sleipnir.bytereef.org>
	<20110417154208.1C9AF2500CF@mailhost.webabinitio.net>
	<20110417171711.GA26304@sleipnir.bytereef.org>
	<20110417175116.1D7452500D7@mailhost.webabinitio.net>
Message-ID: <BANLkTim33aoPzV+URD4HSHa9v6qQ10T7BA@mail.gmail.com>

On Mon, Apr 18, 2011 at 3:50 AM, R. David Murray <rdmurray at bitdance.com> wrote:
> Thanks, your explanation seems to me to make a good case for making the
> decimal.py implementation less permissive.

Indeed. Since the current handling of Context in decimal.py violates
"Errors should never pass silently, unless explicitly silenced", I
would personally support a proposal to lock down its __setattr__ to a
predefined set of attributes, have its __delattr__ always raise an
exception, and introduce a parent ABC that is used for an isinstance()
check in setcontext(). (The ABC could include an attribute check, so
that only objects that fail to provide all the appropriate methods and
attributes would raise the TypeError.)
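
A minimal sketch of that idea (not actual decimal.py code; the attribute
set and the AbstractContext name are just illustrative):

    from abc import ABCMeta

    class AbstractContext(metaclass=ABCMeta):
        """Parent ABC that setcontext() could use for an isinstance() check."""

    class Context(AbstractContext):
        _allowed = frozenset({'prec', 'rounding', 'Emin', 'Emax',
                              'capitals', 'flags', 'traps'})

        def __setattr__(self, name, value):
            # Errors should never pass silently: reject unknown attributes.
            if name not in self._allowed:
                raise AttributeError("invalid Context attribute: %r" % name)
            object.__setattr__(self, name, value)

        def __delattr__(self, name):
            raise AttributeError("Context attributes cannot be deleted")

    def setcontext(context):
        if not isinstance(context, AbstractContext):
            raise TypeError("expected a Context-like object")
        # ... install the context for the current thread ...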

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From ncoghlan at gmail.com  Mon Apr 18 05:07:29 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 18 Apr 2011 13:07:29 +1000
Subject: [Python-Dev] Status of json (simplejson) in cpython
In-Reply-To: <loom.20110418T020421-9@post.gmane.org>
References: <BANLkTind0dSsoesD+tZ-EAqZjpr2m-CuCw@mail.gmail.com>
	<loom.20110416T113016-741@post.gmane.org>
	<20110416161931.089d2014@pitrou.net>
	<loom.20110416T181907-569@post.gmane.org>
	<20110416192736.1f9dd279@pitrou.net>
	<loom.20110417T122545-907@post.gmane.org>
	<ioer6e$te1$1@dough.gmane.org>
	<loom.20110418T020421-9@post.gmane.org>
Message-ID: <BANLkTim7HzL-D5JkfOiwOnFoOZMYX5fMXw@mail.gmail.com>

On Mon, Apr 18, 2011 at 10:19 AM, Vinay Sajip <vinay_sajip at yahoo.co.uk> wrote:
> It's quite likely that I've failed to turn off the stdlib json speedups (though
> I attempted to turn them off for both encoding and decoding), which would
> explain the big disparity in the non-speedup case. Perhaps someone with more
> familiarity with stdlib json speedup internals could take a look to see what
> I've missed? I perhaps can't see the forest for the trees.

Consider trying:

import sys
sys.modules["_json"] = 0 # Block the C extension
import json

in a fresh interpreter.

(This is the same dance test.support.import_fresh_module() uses
internally to get unaccelerated modules for testing purposes)
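
For completeness, the helper can also be used directly (a sketch, reusing
the module names from the snippet above):

    from test.support import import_fresh_module

    # json with the _json accelerator blocked (pure Python code paths)
    py_json = import_fresh_module('json', blocked=['_json'])
    # json with the accelerator freshly imported
    c_json = import_fresh_module('json', fresh=['_json'])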

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From python.leojay at gmail.com  Mon Apr 18 05:15:36 2011
From: python.leojay at gmail.com (Leo Jay)
Date: Mon, 18 Apr 2011 11:15:36 +0800
Subject: [Python-Dev] [ANN] Python 2.5.6 Release Candidate 1
In-Reply-To: <4DAB625D.7060203@v.loewis.de>
References: <4DAB625D.7060203@v.loewis.de>
Message-ID: <BANLkTi=8o4m4btYv40D7P_aPdZtqWM4Qwg@mail.gmail.com>

Hi,

I think the release date of 2.5.6c1 should be 17-Apr-2011, instead of
17-Apr-2010
http://www.python.org/download/releases/2.5.6/NEWS.txt

On Mon, Apr 18, 2011 at 5:57 AM, "Martin v. Löwis" <martin at v.loewis.de> wrote:
>
> On behalf of the Python development team and the Python community, I'm
> happy to announce the release candidate 1 of Python 2.5.6.
>
> This is a source-only release that only includes security fixes. The
> last full bug-fix release of Python 2.5 was Python 2.5.4. Users are
> encouraged to upgrade to the latest release of Python 2.7 (which is
> 2.7.1 at this point).
>
> This release fixes issues with the urllib, urllib2, SimpleHTTPServer,
> and audioop modules. See the release notes at the website (also
> available as Misc/NEWS in the source distribution) for details of bugs
> fixed.
>
> For more information on Python 2.5.6, including download links for
> various platforms, release notes, and known issues, please see:
>
>    http://www.python.org/2.5.6
>
> Highlights of the previous major Python releases are available from
> the Python 2.5 page, at
>
>    http://www.python.org/2.5/highlights.html
>
> Enjoy this release,
> Martin
>
> Martin v. Loewis
> martin at v.loewis.de
> Python Release Manager
> (on behalf of the entire python-dev team)


--
Best Regards,
Leo Jay

From martin at v.loewis.de  Mon Apr 18 07:24:54 2011
From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=)
Date: Mon, 18 Apr 2011 07:24:54 +0200
Subject: [Python-Dev] [ANN] Python 2.5.6 Release Candidate 1
In-Reply-To: <BANLkTi=8o4m4btYv40D7P_aPdZtqWM4Qwg@mail.gmail.com>
References: <4DAB625D.7060203@v.loewis.de>
	<BANLkTi=8o4m4btYv40D7P_aPdZtqWM4Qwg@mail.gmail.com>
Message-ID: <4DABCB26.2050506@v.loewis.de>

> I think the release date of 2.5.6c1 should be 17-Apr-2011, instead of
> 17-Apr-2010
> http://www.python.org/download/releases/2.5.6/NEWS.txt

Thanks, fixed.

Martin

From fijall at gmail.com  Mon Apr 18 09:05:24 2011
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Mon, 18 Apr 2011 09:05:24 +0200
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
 Compatibiilty Requirements
In-Reply-To: <4862031C-A420-41A5-82B0-713262407802@gmail.com>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<BANLkTin5A_tJ+gCxXuA2xVWhMAghBb1dqw@mail.gmail.com>
	<BANLkTimVysOh9BdB1vNKHg5LHR3jq3RFyw@mail.gmail.com>
	<20110416212352.GA19573@sleipnir.bytereef.org>
	<BANLkTinaXQv+A_G-LrbPpiHj6WHK8TVT-A@mail.gmail.com>
	<4862031C-A420-41A5-82B0-713262407802@gmail.com>
Message-ID: <BANLkTim7Y2oKuUGGdE16RJRWaupW64Nr3w@mail.gmail.com>

On Sun, Apr 17, 2011 at 4:19 AM, Raymond Hettinger
<raymond.hettinger at gmail.com> wrote:
>
> On Apr 16, 2011, at 2:45 PM, Brett Cannon wrote:
>
>
> On Sat, Apr 16, 2011 at 14:23, Stefan Krah <stefan at bytereef.org> wrote:
>>
>> Brett Cannon <brett at python.org> wrote:
>> > In the grand python-dev tradition of "silence means acceptance", I
>> > consider
>> > this PEP finalized and implicitly accepted.
>
> I haven't seen any responses that said, yes this is a well thought-out
> proposal that will actually benefit any of the various implementations.
> Almost none of the concerns that have been raised have been addressed.  Does
> the PEP only apply to purely algorithmic modules such as heapq or does it
> apply to anything written in C (like an xz compressor, for example)?

My understanding is it does apply only to stuff that does not wrap an
external library.

> Does
> testing every branch in a given implementation now guarantee every
> implementation detail or do we only promise the published API (historically,
> we've *always* done the latter)?  Is there going to be any guidance on the
> commonly encountered semantic differences between C modules and their Python
> counterparts (thread-safety, argument handling, tracebacks, all possible
> exceptions, monkey-patchable pure python classes versus hard-wired C types
> etc)?
> The PEP seems to be predicated on a notion that anything written in C is bad
> and that all testing is good.

Sounds about right

> AFAICT, it doesn't provide any practical
> advice to someone pursuing a non-trivial project (such as decimal or
> threading).  The PEP mostly seems to be about discouraging any further work
> in C.  If that's the case, it should just come out and say it rather than
> tangentially introducing ambiguous testing requirements that don't make a
> lot of sense.
> The PEP also makes some unsupported claims about saving labor.  My
> understanding is that IronPython and Jython tend to re-implement modules
> using native constructs.  Even with PyPy, the usual pure python idioms
> aren't necessarily what is best for PyPy, so I expect some rewriting there
> also.

We try very hard to optimize for usual Python idioms. They're very
often much better than specific CPython hacks. Unless, that is, you count
things like rebinding a global into a default argument as a "pythonic
idiom". We had to rewrite places in the standard library which are
precisely not very pythonic.

> It seems the lion's share of the work in making other implementations
> has to do with interpreter details and whatnot -- I would be surprised if
> the likes of bisect or heapq took even one-tenth of one percent of the total
> development time for any of the other implementations.

You're wrong. We didn't even write _heapq and _bisect. That's actually
a lot of work, and PyPy's team is quite small *and* it has to do all
the other stuff as well. heapq and bisect were never a problem (except
one case in twisted), but other stuff where the C version diverged from
the Python version was a problem. Hell, we even wrote cPickle, which wraps
pickle and provides the correct interface! This is the kind of thing we
would rather not spend time on (and yes, it is time consuming).

>
>>
>> I did not really see an answer to these concerns:
>>
>> http://mail.python.org/pipermail/python-dev/2011-April/110672.html
>
> Antoine doesn't seem sold on the 100% branch coverage requirement and views it
> as pointless. I disagree. =)
>
> As for the exception Stefan is saying may be granted, that is not in the PEP
> so I consider it unimportant. If we really feel the desire to grant an
> exception we can (since we can break any of our own rules that we
> collectively choose to), but I'm assuming we won't.
>
>>
>> http://mail.python.org/pipermail/python-dev/2011-April/110675.html
>
> Raymond thinks that having a testing requirement conflates having
> implementations match vs. having APIs match.
>
> That is not an accurate restatement of my post.
>
> Well, as we all know, the stdlib ends up having its implementation details
> relied upon constantly by people whether they mean to or not, so making
> sure that this is properly tested helps deal with this known reality.
>
> If you're saying that all implementation details (including internal
> branching logic) are now guaranteed behaviors, then I think this PEP has
> completely lost its way.  I don't know of any implementors asking for this.
>
> This is a damned-if-you-do-damned-if-you-don't situation. The first draft of
> this PEP said to be "semantically equivalent w/ divergence where technically
> required", but I got pushback from being too wishy-washy w/ lack of concrete
> details. So I introduce a concrete metric that some are accusing of being
> inaccurate for the goals of the PEP. I'm screwed or I'm screwed. =) So I am
> choosing to go with the one that has a side benefit of also increasing test
> coverage.
>
> Maybe that is just an indication that the proposal isn't mature yet.  To
> me, it doesn't seem well thought out and isn't realistic.
>
> Now if people would actually support simply not accepting any more C modules
> into the Python stdlib (this does not apply to CPython's stdlib), then I'm
> all for that.
>
> I only went with the "accelerator modules are okay" route to help get
> acceptance for the PEP. But if people are willing to go down a more
> stringent route and say that any module which uses new C code is considered
> CPython-specific and thus any acceptance of such modules will be damn hard
> to accomplish as it will marginalize the value of the code, that's fine by
> me.
>
> Is that what people want?  For example, do we want to accept a C version of
> decimal?  Without it, the decimal module is unusable for people with high
> volumes of data.  Do we want things like an xz compressor to be written in
> pure Python and only in Python?  I don't think this benefits our users.
> I'm not really clear what it is you're trying to get at.  For PyPy,
> IronPython, and Jython to succeed, does the CPython project need to come to
> a halt?  I don't think many people here really believe that to be the case.
>
> Raymond
>
>
>
>
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/fijall%40gmail.com
>
>

From vinay_sajip at yahoo.co.uk  Mon Apr 18 10:33:23 2011
From: vinay_sajip at yahoo.co.uk (Vinay Sajip)
Date: Mon, 18 Apr 2011 08:33:23 +0000 (UTC)
Subject: [Python-Dev] Status of json (simplejson) in cpython
References: <BANLkTind0dSsoesD+tZ-EAqZjpr2m-CuCw@mail.gmail.com>
	<loom.20110416T113016-741@post.gmane.org>
	<20110416161931.089d2014@pitrou.net>
	<loom.20110416T181907-569@post.gmane.org>
	<20110416192736.1f9dd279@pitrou.net>
	<loom.20110417T122545-907@post.gmane.org>
	<ioer6e$te1$1@dough.gmane.org>
	<loom.20110418T020421-9@post.gmane.org>
	<BANLkTim7HzL-D5JkfOiwOnFoOZMYX5fMXw@mail.gmail.com>
Message-ID: <loom.20110418T102925-684@post.gmane.org>

Nick Coghlan <ncoghlan <at> gmail.com> writes:

> Consider trying:
> 
> import sys
> sys.modules["_json"] = 0 # Block the C extension
> import json
> 
> in a fresh interpreter.
> 

Thanks for the tip. The revised script at

https://gist.github.com/924626

shows more believable numbers vis-à-vis the no-speedups case. Interestingly this
morning, stdlib json wins in both cases, though undoubtedly YMMV.

---------------------------------------------------------------------------
(jst3)vinay at eta-natty:~/projects/scratch$ python time_json.py --no-speedups
Python version: 3.2 (r32:88445, Mar 25 2011, 19:28:28)
[GCC 4.5.2]
11.21484375 KiB read
Timing simplejson (without speedups):
4.585145950317383
Timing stdlib json (without speedups):
3.9949100017547607
(jst3)vinay at eta-natty:~/projects/scratch$ python time_json.py
Python version: 3.2 (r32:88445, Mar 25 2011, 19:28:28)
[GCC 4.5.2]
11.21484375 KiB read
Timing simplejson (with speedups):
0.3202629089355469
Timing stdlib json (with speedups):
0.3200039863586426
---------------------------------------------------------------------------

Regards,

Vinay Sajip


From p.f.moore at gmail.com  Mon Apr 18 10:36:20 2011
From: p.f.moore at gmail.com (Paul Moore)
Date: Mon, 18 Apr 2011 09:36:20 +0100
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
 Compatibiilty Requirements
In-Reply-To: <BANLkTim7Y2oKuUGGdE16RJRWaupW64Nr3w@mail.gmail.com>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<BANLkTin5A_tJ+gCxXuA2xVWhMAghBb1dqw@mail.gmail.com>
	<BANLkTimVysOh9BdB1vNKHg5LHR3jq3RFyw@mail.gmail.com>
	<20110416212352.GA19573@sleipnir.bytereef.org>
	<BANLkTinaXQv+A_G-LrbPpiHj6WHK8TVT-A@mail.gmail.com>
	<4862031C-A420-41A5-82B0-713262407802@gmail.com>
	<BANLkTim7Y2oKuUGGdE16RJRWaupW64Nr3w@mail.gmail.com>
Message-ID: <BANLkTik1C+1HDgZJCEj+61UFHbCeg1hQMA@mail.gmail.com>

On 18 April 2011 08:05, Maciej Fijalkowski <fijall at gmail.com> wrote:
> On Sun, Apr 17, 2011 at 4:19 AM, Raymond Hettinger
> <raymond.hettinger at gmail.com> wrote:

>> Almost none of the concerns that have been raised have been addressed.  Does
>> the PEP only apply to purely algorithmic modules such as heapq or does it
>> apply to anything written in C (like an xz compressor, for example)?
>
> My understanding is it does apply only to stuff that does not wrap an
> external library.

My understanding is that this is most people's understanding, so it
should be explicitly documented in the PEP.

It would also be worth asking: are there any other reasons for using C
code beyond wrapping external libraries and accelerating code that
could equally be written in Python? I can't think of any, myself, but
OTOH I wonder if the *degree* of acceleration is also relevant - some
things (compression algorithms, for example) just can't realistically
be coded in Python (CPython, at least).

>> The PEP seems to be predicated on a notion that anything written in C is bad
>> and that all testing is good.
>
> Sounds about right

I disagree. To me, a Python without libraries such as os, zlib,
zipfile, threading, etc wouldn't be much use (except in specialised
circumstances). OK, that means that alternative implementations need
to do extra work to implement equivalents in their own low-level
language, but so be it (sorry!)

This PEP has a flavour to me of the old "100% pure Java" ideal, where
Java coders expected everything to be reimplemented in Java, avoiding
any native code. I didn't like the idea then, and I don't have much
more love for it now in Python. (OK, I know this is an exaggeration of
the position the PEP is taking, but without more clarity in the PEP's
language, I honestly don't know how much of an exaggeration).

Maybe the PEP could go through the various C libraries in the stdlib
at the moment, and discuss how the PEP would address them? It would be
useful to see how much of an impact the PEP would have had if it had
been Python policy from the start...

Paul.

From solipsis at pitrou.net  Mon Apr 18 11:06:05 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Mon, 18 Apr 2011 11:06:05 +0200
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
 Compatibiilty Requirements
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<BANLkTin5A_tJ+gCxXuA2xVWhMAghBb1dqw@mail.gmail.com>
	<BANLkTimVysOh9BdB1vNKHg5LHR3jq3RFyw@mail.gmail.com>
	<20110416212352.GA19573@sleipnir.bytereef.org>
	<BANLkTinaXQv+A_G-LrbPpiHj6WHK8TVT-A@mail.gmail.com>
	<4862031C-A420-41A5-82B0-713262407802@gmail.com>
	<BANLkTim7Y2oKuUGGdE16RJRWaupW64Nr3w@mail.gmail.com>
	<BANLkTik1C+1HDgZJCEj+61UFHbCeg1hQMA@mail.gmail.com>
Message-ID: <20110418110605.2162043e@pitrou.net>

On Mon, 18 Apr 2011 09:36:20 +0100
Paul Moore <p.f.moore at gmail.com> wrote:
> On 18 April 2011 08:05, Maciej Fijalkowski <fijall at gmail.com> wrote:
> > On Sun, Apr 17, 2011 at 4:19 AM, Raymond Hettinger
> > <raymond.hettinger at gmail.com> wrote:
> 
> >> Almost none of the concerns that have been raised have been addressed.  Does
> >> the PEP only apply to purely algorithmic modules such as heapq or does it
> >> apply to anything written in C (like an xz compressor, for example)?
> >
> > My understanding is it does apply only to stuff that does not wrap an
> > external library.
> 
> My understanding is that this is most people's understanding, so it
> should be explicitly documented in the PEP.
> 
> It would also be worth asking: are there any other reasons for using C
> code beyond wrapping external libraries and accelerating code that
> could equally be written in Python?

faulthandler is an example. Very low-level tinkering with threads,
signal handlers and possibly corrupt memory simply can't be done
in Python.

Regards

Antoine.



From rdmurray at bitdance.com  Mon Apr 18 14:30:39 2011
From: rdmurray at bitdance.com (R. David Murray)
Date: Mon, 18 Apr 2011 08:30:39 -0400
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
	Compatibiilty Requirements
In-Reply-To: <BANLkTik1C+1HDgZJCEj+61UFHbCeg1hQMA@mail.gmail.com>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<BANLkTin5A_tJ+gCxXuA2xVWhMAghBb1dqw@mail.gmail.com>
	<BANLkTimVysOh9BdB1vNKHg5LHR3jq3RFyw@mail.gmail.com>
	<20110416212352.GA19573@sleipnir.bytereef.org>
	<BANLkTinaXQv+A_G-LrbPpiHj6WHK8TVT-A@mail.gmail.com>
	<4862031C-A420-41A5-82B0-713262407802@gmail.com>
	<BANLkTim7Y2oKuUGGdE16RJRWaupW64Nr3w@mail.gmail.com>
	<BANLkTik1C+1HDgZJCEj+61UFHbCeg1hQMA@mail.gmail.com>
Message-ID: <20110418123108.8DE972500DB@mailhost.webabinitio.net>

On Mon, 18 Apr 2011 09:36:20 +0100, Paul Moore <p.f.moore at gmail.com> wrote:
> On 18 April 2011 08:05, Maciej Fijalkowski <fijall at gmail.com> wrote:
> > On Sun, Apr 17, 2011 at 4:19 AM, Raymond Hettinger <raymond.hettinger at gmail.com> wrote:
> 
> >> The PEP seems to be predicated on a notion that anything written in C is
> >> bad and that all testing is good.
> >
> > Sounds about right
> 
> I disagree. To me, a Python without libraries such as os, zlib,
> zipfile, threading, etc wouldn't be much use (except in specialised
> circumstances). OK, that means that alternative implementations need
> to do extra work to implement equivalents in their own low-level
> language, but so be it (sorry!)

I think Maciej left out an "only" in that sentence.  If you say "only C",
then the sentence makes sense, even when applied to modules that *can*
only be written in C (for CPython).  That is, not having a Python version
is bad.  Necessary in many cases (or not worth the cost, for external
library wrappers), but wouldn't it be nicer if it wasn't necessary?

> This PEP has a flavour to me of the old "100% pure Java" ideal, where
> Java coders expected everything to be reimplemented in Java, avoiding
> any native code. I didn't like the idea then, and I don't have much
> more love for it now in Python. (OK, I know this is an exaggeration of
> the position the PEP is taking, but without more clarity in the PEP's
> language, I honestly don't know how much of an exaggeration).

The Pythonic ideal contains quite a bit of pragmatism, so yes, that is
an exaggeration of the goals of the PEP, certainly.  (Although pypy
may do it anyway, for pragmatic reasons :)

> Maybe the PEP could go through the various C libraries in the stdlib
> at the moment, and discuss how the PEP would address them? It would be
> useful to see how much of an impact the PEP would have had if it had
> been Python policy from the start...

That might indeed be a useful exercise, especially since other
implementations (or even perhaps CPython developers) may want to
contribute Python-only versions and/or tests for things that would have
been affected by the PEP.  I don't have time to do it right now,
but if I can pry any time loose I'll have it near the top of my list.

--
R. David Murray           http://www.bitdance.com

From barry at python.org  Mon Apr 18 14:53:26 2011
From: barry at python.org (Barry Warsaw)
Date: Mon, 18 Apr 2011 08:53:26 -0400
Subject: [Python-Dev] Python 2.6.7
Message-ID: <20110418085326.6b8d787a@neurotica.wooz.org>

With Martin getting ready to release 2.5.6, I think it's time to prepare a
2.6.7 source-only security release.

I'll work my way through the NEWS file and recent commits, but if there is
anything that you know is missing from the 2.6 branch, please let me know.  It
would be especially helpful if there were bugs for any such issues.

Cheers,
-Barry
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 836 bytes
Desc: not available
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110418/d83f1076/attachment.pgp>

From python-announce-list-bounces at python.org  Mon Apr 18 17:10:18 2011
From: python-announce-list-bounces at python.org (python-announce-list-bounces at python.org)
Date: Mon, 18 Apr 2011 17:10:18 +0200
Subject: [Python-Dev] Forward of moderated message
Message-ID: <mailman.1.1303139418.10844.python-announce-list@python.org>

An embedded message was scrubbed...
From: Mark Summerfield <mark at qtrac.eu>
Subject: Re: [ANN] Python 2.5.6 Release Candidate 1
Date: Mon, 18 Apr 2011 09:21:38 +0100
Size: 4649
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110418/0433da94/attachment.mht>

From skip at pobox.com  Mon Apr 18 17:22:57 2011
From: skip at pobox.com (skip at pobox.com)
Date: Mon, 18 Apr 2011 10:22:57 -0500
Subject: [Python-Dev] Post from Mark Summerfield
Message-ID: <19884.22353.400060.437132@montanaro.dyndns.org>

Mark Summerfield responded to Martin's python-announce post.  Rather than
approving it I rejected it and forwarded it here.  (I suppose I could have
forwarded it directly to Martin, but that would have required that I recall
or look up his email address...)

Skip

From merwok at netwok.org  Mon Apr 18 18:32:51 2011
From: merwok at netwok.org (=?UTF-8?Q?=C3=89ric_Araujo?=)
Date: Mon, 18 Apr 2011 18:32:51 +0200
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
 Compatibiilty Requirements
In-Reply-To: <BANLkTim7Y2oKuUGGdE16RJRWaupW64Nr3w@mail.gmail.com>
References: "\"<BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>	<BANLkTin5A_tJ+gCxXuA2xVWhMAghBb1dqw@mail.gmail.com>	<BANLkTimVysOh9BdB1vNKHg5LHR3jq3RFyw@mail.gmail.com>	<20110416212352.GA19573@sleipnir.bytereef.org>"
	<BANLkTinaXQv+A_G-LrbPpiHj6WHK8TVT-A@mail.gmail.com>"
	<4862031C-A420-41A5-82B0-713262407802@gmail.com>
	<BANLkTim7Y2oKuUGGdE16RJRWaupW64Nr3w@mail.gmail.com>
Message-ID: <44f951664fe4e294c102e8bfa6c10d64@netwok.org>

 Hi,

> We try very hard to optimize for usual Python idioms. They're very
> often much better than specific CPython hacks. Unless, that is, you count
> things like rebinding a global into a default argument as a "pythonic
> idiom". We had to rewrite places in the standard library which are
> precisely not very pythonic.

 If I understand correctly, you've made internal changes preserving the
 official API of the modules.  Have you reported those cases to
 bugs.python.org?  I'm sure we'd be glad to incorporate those changes
 into the stdlib, possibly even in the stable branches if their
 rationale is strong enough.

 Regards

From merwok at netwok.org  Mon Apr 18 18:34:06 2011
From: merwok at netwok.org (=?UTF-8?Q?=C3=89ric_Araujo?=)
Date: Mon, 18 Apr 2011 18:34:06 +0200
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
 Compatibiilty Requirements
In-Reply-To: <20110417053245.42D262500D7@mailhost.webabinitio.net>
References: "\"<BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>	<BANLkTin5A_tJ+gCxXuA2xVWhMAghBb1dqw@mail.gmail.com>	<BANLkTimVysOh9BdB1vNKHg5LHR3jq3RFyw@mail.gmail.com>	<20110416212352.GA19573@sleipnir.bytereef.org>"
	<BANLkTinaXQv+A_G-LrbPpiHj6WHK8TVT-A@mail.gmail.com>"
	<4862031C-A420-41A5-82B0-713262407802@gmail.com>
	<20110417053245.42D262500D7@mailhost.webabinitio.net>
Message-ID: <86b0853a355f18eabfed986b953c123b@netwok.org>

 Hi,

> Perhaps we need a @python_implementation_detail skip decorator?
 That's called test.support.cpython_only (see also
 test.support.check_impl_detail).  You're welcome.

 Regards

From fijall at gmail.com  Mon Apr 18 19:11:50 2011
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Mon, 18 Apr 2011 19:11:50 +0200
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
 Compatibiilty Requirements
In-Reply-To: <44f951664fe4e294c102e8bfa6c10d64@netwok.org>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<BANLkTin5A_tJ+gCxXuA2xVWhMAghBb1dqw@mail.gmail.com>
	<BANLkTimVysOh9BdB1vNKHg5LHR3jq3RFyw@mail.gmail.com>
	<20110416212352.GA19573@sleipnir.bytereef.org>
	<BANLkTinaXQv+A_G-LrbPpiHj6WHK8TVT-A@mail.gmail.com>
	<4862031C-A420-41A5-82B0-713262407802@gmail.com>
	<BANLkTim7Y2oKuUGGdE16RJRWaupW64Nr3w@mail.gmail.com>
	<44f951664fe4e294c102e8bfa6c10d64@netwok.org>
Message-ID: <BANLkTim0_W0hO39iJFQuccj0dodvdMQbtA@mail.gmail.com>

On Mon, Apr 18, 2011 at 6:32 PM, Éric Araujo <merwok at netwok.org> wrote:
> Hi,
>
>> We try very hard to optimize for usual Python idioms. They're very
>> often much better than specific CPython hacks. Unless, that is, you count
>> things like rebinding a global into a default argument as a "pythonic
>> idiom". We had to rewrite places in the standard library which are
>> precisely not very pythonic.
>
> If I understand correctly, you've made internal changes preserving the
> official API of the modules.  Have you reported those cases to
> bugs.python.org?  I'm sure we'd be glad to incorporate those changes
> into the stdlib, possibly even in the stable branches if their rationale
> is strong enough.

I think what's relevant was merged by Benjamin. Usually:

* we do revert things that were specifically made to make cpython faster, like

 def f(_getattr=getattr):
   ...

 (a fuller sketch of this pattern follows below)

* we usually target a CPython version that's already frozen, which makes it
pretty inconvenient to post these changes back. An example would be the
socket module, which has changed enough in 3.x that 2.7 changes make
no sense.
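
For readers unfamiliar with the pattern named in the first point, a sketch
(the function names here are made up for illustration):

    # CPython-specific micro-optimisation: bind the builtin as a default
    # argument so lookups inside the function become fast local lookups.
    def get_attrs_fast(obj, names, _getattr=getattr):
        return [_getattr(obj, name) for name in names]

    # The plain idiom, which other implementations can optimise directly.
    def get_attrs(obj, names):
        return [getattr(obj, name) for name in names]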
>
> Regards
>

From raymond.hettinger at gmail.com  Mon Apr 18 19:26:17 2011
From: raymond.hettinger at gmail.com (Raymond Hettinger)
Date: Mon, 18 Apr 2011 10:26:17 -0700
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
	Compatibiilty Requirements
In-Reply-To: <BANLkTim0_W0hO39iJFQuccj0dodvdMQbtA@mail.gmail.com>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<BANLkTin5A_tJ+gCxXuA2xVWhMAghBb1dqw@mail.gmail.com>
	<BANLkTimVysOh9BdB1vNKHg5LHR3jq3RFyw@mail.gmail.com>
	<20110416212352.GA19573@sleipnir.bytereef.org>
	<BANLkTinaXQv+A_G-LrbPpiHj6WHK8TVT-A@mail.gmail.com>
	<4862031C-A420-41A5-82B0-713262407802@gmail.com>
	<BANLkTim7Y2oKuUGGdE16RJRWaupW64Nr3w@mail.gmail.com>
	<44f951664fe4e294c102e8bfa6c10d64@netwok.org>
	<BANLkTim0_W0hO39iJFQuccj0dodvdMQbtA@mail.gmail.com>
Message-ID: <ACF9143C-6098-4732-B179-9C581EE0C6F3@gmail.com>


On Apr 18, 2011, at 10:11 AM, Maciej Fijalkowski wrote:
> 
> * we usually target CPython version that's already frozen, which is
> pretty inconvinient to post this changes back. Example would be a
> socket module where it has changed enough in 3.x that 2.7 changes make
> no sense.


Do you have any thoughts on the problem with the concrete C API
not working well with subclasses of builtin types?

I'm thinking that the PEP should specifically ban the practice
of using the concrete api unless it is known for sure that
an object is an exact type match.

It is okay to write PyList_New() followed by PyList_SetItem()
but not okay to use PyList_SetItem() on a user supplied
argument that is known to be a subclass of list.  A fast path
can be provided for an exact match, but there would also need
to be a slower path that either converts the object to
an exact list or that uses PyObject_SetItem().
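
As a Python-level illustration of the semantic difference (a sketch, not
taken from the PEP): C code that calls PyList_SetItem() directly on an
instance of the subclass below silently skips its __setitem__ override,
while PyObject_SetItem() invokes it as expected.

    class ValidatedList(list):
        def __setitem__(self, index, value):
            if value is None:
                raise ValueError("None is not allowed here")
            super().__setitem__(index, value)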

In the discussions about this topic, there doesn't seem to be
any technical solutions; instead, it will take a social solution
such as a PEP and clear warnings in the docs.


Raymond
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110418/b77e70c2/attachment.html>

From rdmurray at bitdance.com  Mon Apr 18 19:30:56 2011
From: rdmurray at bitdance.com (R. David Murray)
Date: Mon, 18 Apr 2011 13:30:56 -0400
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
	Compatibiilty Requirements
In-Reply-To: <86b0853a355f18eabfed986b953c123b@netwok.org>
References: "\"<BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<BANLkTin5A_tJ+gCxXuA2xVWhMAghBb1dqw@mail.gmail.com>
	<BANLkTimVysOh9BdB1vNKHg5LHR3jq3RFyw@mail.gmail.com>
	<20110416212352.GA19573@sleipnir.bytereef.org>"
	<BANLkTinaXQv+A_G-LrbPpiHj6WHK8TVT-A@mail.gmail.com>"
	<4862031C-A420-41A5-82B0-713262407802@gmail.com>
	<20110417053245.42D262500D7@mailhost.webabinitio.net>
	<86b0853a355f18eabfed986b953c123b@netwok.org>
Message-ID: <20110418173125.72DC72500DB@mailhost.webabinitio.net>

On Mon, 18 Apr 2011 18:34:06 +0200, =?UTF-8?Q?=C3=89ric_Araujo?= <merwok at netwok.org> wrote:
> > Perhaps we need a @python_implementation_detail skip decorator?
>  That???s called test.support.cpython_only (see also 
>  test.support.check_impl_detail).  You???re welcome.

Nope.  That's not what I was talking about.  I was talking about marking
a test as something that we require only the *python* implementation of
a module to pass (presumably because it tests an internal implementation
detail).  Thus a C accelerator would not be expected to pass that
test, nor would a C# accelerator, but pypy or any platform without
an accelerator (that is, anything *using* the python code) would be
expected to pass it.

I would hope that such tests would be vanishingly rare (that is,
that all needed tests can be expressed as black box tests).
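
A hypothetical helper along those lines (every name below is made up for
illustration; this is only a sketch of the idea):

    import unittest

    def python_implementation_only(accelerator_name):
        """Skip the test whenever the C accelerator is importable, i.e.
        whenever the pure Python code is not the one actually in use."""
        try:
            __import__(accelerator_name)
            accelerated = True
        except ImportError:
            accelerated = False
        return unittest.skipIf(
            accelerated,
            "test exercises internals of the pure Python implementation")

    class JsonInternalsTest(unittest.TestCase):
        @python_implementation_only('_json')
        def test_scanner_internal_detail(self):
            ...   # white-box test only the Python implementation must pass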

--
R. David Murray           http://www.bitdance.com

From g.brandl at gmx.net  Mon Apr 18 20:26:36 2011
From: g.brandl at gmx.net (Georg Brandl)
Date: Mon, 18 Apr 2011 20:26:36 +0200
Subject: [Python-Dev] cpython: #11731: simplify/enhance parser/generator
 API by introducing policy objects.
In-Reply-To: <E1QBskY-0004rj-OH@dinsdale.python.org>
References: <E1QBskY-0004rj-OH@dinsdale.python.org>
Message-ID: <iohvp8$49e$1@dough.gmane.org>

On 18.04.2011 20:00, r.david.murray wrote:

> diff --git a/Doc/library/email.parser.rst b/Doc/library/email.parser.rst
> --- a/Doc/library/email.parser.rst
> +++ b/Doc/library/email.parser.rst
> @@ -112,8 +118,13 @@
>     :class:`~email.message.Message` (see :mod:`email.message`).  The factory will
>     be called without arguments.
>  
> -   .. versionchanged:: 3.2
> -      Removed the *strict* argument that was deprecated in 2.4.
> +   The *policy* keyword specifies a :mod:`~email.policy` object that controls a
> +   number of aspects of the parser's operation.  The default policy maintains
> +   backward compatibility.
> +
> +   .. versionchanged:: 3.3
> +      Removed the *strict* argument that was deprecated in 2.4.  Added the
> +      *policy* keyword.

Hmm, so *strict* wasn't actually removed in 3.2?

> @@ -187,12 +204,15 @@
>  
>  .. currentmodule:: email
>  
> -.. function:: message_from_string(s, _class=email.message.Message, strict=None)
> +.. function:: message_from_string(s, _class=email.message.Message, *, \
> +                                  policy=policy.default)
>  
>     Return a message object structure from a string.  This is exactly equivalent to
> -   ``Parser().parsestr(s)``.  Optional *_class* and *strict* are interpreted as
> +   ``Parser().parsestr(s)``.  *_class* and *policy* are interpreted as
>     with the :class:`Parser` class constructor.
>  
> +   .. versionchanged:: removed *strict*, added *policy*
> +

The 3.3 version is missing here.  Also, please always end version directive text
with a period.

>  .. function:: message_from_bytes(s, _class=email.message.Message, strict=None)
>  
>     Return a message object structure from a byte string.  This is exactly
> @@ -200,21 +220,27 @@
>     *strict* are interpreted as with the :class:`Parser` class constructor.
>  
>     .. versionadded:: 3.2
> +   .. versionchanged:: 3.3 removed *strict*, added *policy*

See above.

> -.. function:: message_from_file(fp, _class=email.message.Message, strict=None)
> +.. function:: message_from_file(fp, _class=email.message.Message, *, \
> +                                policy=policy.default)
>  
>     Return a message object structure tree from an open :term:`file object`.
> -   This is exactly equivalent to ``Parser().parse(fp)``.  Optional *_class*
> -   and *strict* are interpreted as with the :class:`Parser` class constructor.
> +   This is exactly equivalent to ``Parser().parse(fp)``.  *_class*
> +   and *policy* are interpreted as with the :class:`Parser` class constructor.
>  
> -.. function:: message_from_binary_file(fp, _class=email.message.Message, strict=None)
> +   .. versionchanged:: 3.3 removed *strict*, added *policy*

See above.

> +.. function:: message_from_binary_file(fp, _class=email.message.Message, *, \
> +                                       policy=policy.default)
>  
>     Return a message object structure tree from an open binary :term:`file
>     object`.  This is exactly equivalent to ``BytesParser().parse(fp)``.
> -   Optional *_class* and *strict* are interpreted as with the :class:`Parser`
> +   *_class* and *policy* are interpreted as with the :class:`Parser`
>     class constructor.
>  
>     .. versionadded:: 3.2
> +   .. versionchanged:: 3.3 removed *strict*, added *policy*

See above.

> --- /dev/null
> +++ b/Doc/library/email.policy.rst
> @@ -0,0 +1,179 @@
> +:mod:`email`: Policy Objects
> +----------------------------
> +
> +.. module:: email.policy
> +   :synopsis: Controlling the parsing and generating of messages

This file should have a ".. versionadded:: 3.3" (without further content) here.

> +The :mod:`email` package's prime focus is the handling of email messages as
> +described by the various email and MIME RFCs.  However, the general format of
> +email messages (a block of header fields each consisting of a name followed by
> +a colon followed by a value, the whole block followed by a blank line and an
> +arbitrary 'body'), is a format that has found utility outside of the realm of
> +email.  Some of these uses conform fairly closely to the main RFCs, some do
> +not.  And even when working with email, there are times when it is desirable to
> +break strict compliance with the RFCs.
> +
> +Policy objects are the mechanism used to provide the email package with the
> +flexibility to handle all these disparate use cases,

Looks like something is missing from this sentence :)

[...]

> +As an example, the following code could be used to read an email message from a
> +file on disk and pass it to the system ``sendmail`` program on a ``unix``
> +system::

Should be Unix, not ``unix``.

> +   >>> from email import msg_from_binary_file
> +   >>> from email.generator import BytesGenerator
> +   >>> import email.policy
> +   >>> from subprocess import Popen, PIPE
> +   >>> with open('mymsg.txt', 'b') as f:
> +   >>>     msg = msg_from_binary_file(f, policy=email.policy.mbox)
> +   >>> p = Popen(['sendmail', msg['To'][0].address], stdin=PIPE)
> +   >>> g = BytesGenerator(p.stdin, email.policy.policy=SMTP)

That keyword arg doesn't look right.

> +   >>> g.flatten(msg)
> +   >>> p.stdin.close()
> +   >>> rc = p.wait()

Also, if you put interactive prompts, please use them correctly ("..." prompt
and one blank line for the with block).

> +Some email package methods accept a *policy* keyword argument, allowing the
> +policy to be overridden for that method.  For example, the following code use

"uses"

> +the :meth:`email.message.Message.as_string` method to the *msg* object from the
                                                      ^^^^^^
Something is missing around here.

> +previous example and re-write it to a file using the native line separators for
> +the platform on which it is running::
> +
> +   >>> import os
> +   >>> mypolicy = email.policy.Policy(linesep=os.linesep)
> +   >>> with open('converted.txt', 'wb') as f:
> +   ...     f.write(msg.as_string(policy=mypolicy))
> +
> +Policy instances are immutable, but they can be cloned, accepting the same
> +keyword arguments as the class constructor and returning a new :class:`Policy`
> +instance that is a copy of the original but with the specified attributes
> +values changed.  For example, the following creates an SMTP policy that will
> +raise any defects detected as errors::
> +
> +   >>> strict_SMTP = email.policy.SMTP.clone(raise_on_defect=True)
> +
> +Policy objects can also be combined using the addition operator, producing a
> +policy object whose settings are a combination of the non-default values of the
> +summed objects::
> +
> +   >>> strict_SMTP = email.policy.SMTP + email.policy.strict

Interesting API :)

> +This operation is not commutative; that is, the order in which the objects are
> +added matters.  To illustrate::
> +
> +   >>> Policy = email.policy.Policy
> +   >>> apolicy = Policy(max_line_length=100) + Policy(max_line_length=80)
> +   >>> apolicy.max_line_length
> +   80
> +   >>> apolicy = Policy(max_line_length=80) + Policy(max_line_length=100)
> +   >>> apolicy.max_line_length
> +   100
> +
> +
> +.. class:: Policy(**kw)
> +
> +   The valid constructor keyword arguments are any of the attributes listed
> +   below.
> +
> +   .. attribute:: max_line_length
> +
> +      The maximum length of any line in the serialized output, not counting the
> +      end of line character(s).  Default is 78, per :rfc:`5322`.  A value of
> +      ``0`` or :const:`None` indicates that no line wrapping should be
> +      done at all.
> +
> +   .. attribute:: linesep
> +
> +      The string to be used to terminate lines in serialized output.  The
> +      default is '\\n' because that's the internal end-of-line discipline used
> +      by Python, though '\\r\\n' is required by the RFCs.  See `Policy
> +      Instances`_ for policies that use an RFC conformant linesep.  Setting it
> +      to :attr:`os.linesep` may also be useful.

These string constants are probably better off in code markup, i.e. ``'\n'``.

> +   .. attribute:: must_be_7bit
> +
> +      If :const:`True`, data output by a bytes generator is limited to ASCII
> +      characters.  If :const:`False` (the default), then bytes with the high
> +      bit set are preserved and/or allowed in certain contexts (for example,
> +      where possible a content transfer encoding of ``8bit`` will be used).
> +      String generators act as if ``must_be_7bit`` is `True` regardless of the
> +      policy in effect, since a string cannot represent non-ASCII bytes.

Please use either :const:`True` or ``True``.

> +   .. attribute:: raise_on_defect
> +
> +      If :const:`True`, any defects encountered will be raised as errors.  If
> +      :const:`False` (the default), defects will be passed to the
> +      :meth:`register_defect` method.

A short sentence that the following are methods would be nice.

> +   .. method:: handle_defect(obj, defect)
> +
> +      *obj* is the object on which to register the defect.

What kind of object is *obj*?

>  *defect* should be
> +      an instance of a  subclass of :class:`~email.errors.Defect`.
> +      If :attr:`raise_on_defect`
> +      is ``True`` the defect is raised as an exception.  Otherwise *obj* and
> +      *defect* are passed to :meth:`register_defect`.  This method is intended
> +      to be called by parsers when they encounter defects, and will not be
> +      called by code that uses the email library unless that code is
> +      implementing an alternate parser.
> +
> +   .. method:: register_defect(obj, defect)
> +
> +      *obj* is the object on which to register the defect.  *defect* should be
> +      a subclass of :class:`~email.errors.Defect`.  This method is part of the
> +      public API so that custom ``Policy`` subclasses can implement alternate
> +      handling of defects.  The default implementation calls the ``append``
> +      method of the ``defects`` attribute of *obj*.
> +
> +   .. method:: clone(obj, *kw):
> +
> +      Return a new :class:`Policy` instance whose attributes have the same
> +      values as the current instance, except where those attributes are
> +      given new values by the keyword arguments.
> +
> +
> +Policy Instances
> +................

We're usually using "^^^^" for underlining this level of headings, but it's not
really important.

> +The following instances of :class:`Policy` provide defaults suitable for
> +specific common application domains.

Indentation switches to 4 spaces below here...

> +.. data:: default
> +
> +    An instance of :class:`Policy` with all defaults unchanged.
> +
> +.. data:: SMTP
> +
> +    Output serialized from a message will conform to the email and SMTP
> +    RFCs.  The only changed attribute is :attr:`linesep`, which is set to
> +    ``\r\n``.
> +
> +.. data:: HTTP
> +
> +    Suitable for use when serializing headers for use in HTTP traffic.
> +    :attr:`linesep` is set to ``\r\n``, and :attr:`max_line_length` is set to
> +    :const:`None` (unlimited).
> +
> +.. data:: strict
> +
> +    :attr:`raise_on_defect` is set to :const:`True`.

Sorry for the long review.

Georg


From stefan_ml at behnel.de  Mon Apr 18 21:23:02 2011
From: stefan_ml at behnel.de (Stefan Behnel)
Date: Mon, 18 Apr 2011 21:23:02 +0200
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
	Compatibiilty Requirements
In-Reply-To: <BANLkTim0_W0hO39iJFQuccj0dodvdMQbtA@mail.gmail.com>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>	<BANLkTin5A_tJ+gCxXuA2xVWhMAghBb1dqw@mail.gmail.com>	<BANLkTimVysOh9BdB1vNKHg5LHR3jq3RFyw@mail.gmail.com>	<20110416212352.GA19573@sleipnir.bytereef.org>	<BANLkTinaXQv+A_G-LrbPpiHj6WHK8TVT-A@mail.gmail.com>	<4862031C-A420-41A5-82B0-713262407802@gmail.com>	<BANLkTim7Y2oKuUGGdE16RJRWaupW64Nr3w@mail.gmail.com>	<44f951664fe4e294c102e8bfa6c10d64@netwok.org>
	<BANLkTim0_W0hO39iJFQuccj0dodvdMQbtA@mail.gmail.com>
Message-ID: <ioi32n$ou7$1@dough.gmane.org>

Maciej Fijalkowski, 18.04.2011 19:11:
> On Mon, Apr 18, 2011 at 6:32 PM, Éric Araujo wrote:
>>> We try very hard to optimize for usual Python idioms. They're very
>>> often much better than specific CPython hacks. Unless, that is, you count
>>> things like rebinding a global into a default argument as a "pythonic
>>> idiom". We had to rewrite places in the standard library which are
>>> precisely not very pythonic.
>>
>> If I understand correctly, you've made internal changes preserving the
>> official API of the modules.  Have you reported those cases to
>> bugs.python.org?  I'm sure we'd be glad to incorporate those changes
>> into the stdlib, possibly even in the stable branches if their rationale
>> is strong enough.
>
> I think what's relevant was merged by benjamin. Usually:
>
> * we do revert things that were specifically made to make cpython faster, like
>
>   def f(_getattr=getattr):
>     ...

Thanks. Speaking for the Cython project, we are certainly happy to see 
these micro optimisations reverted. Makes our life easier and the generated 
code faster.

Stefan


From rdmurray at bitdance.com  Mon Apr 18 21:39:48 2011
From: rdmurray at bitdance.com (R. David Murray)
Date: Mon, 18 Apr 2011 15:39:48 -0400
Subject: [Python-Dev] cpython: #11731: simplify/enhance parser/generator
	API by introducing policy objects.
In-Reply-To: <iohvp8$49e$1@dough.gmane.org>
References: <E1QBskY-0004rj-OH@dinsdale.python.org>
	<iohvp8$49e$1@dough.gmane.org>
Message-ID: <20110418194009.5C09D2500DC@mailhost.webabinitio.net>

On Mon, 18 Apr 2011 20:26:36 +0200, Georg Brandl <g.brandl at gmx.net> wrote:
> On 18.04.2011 20:00, r.david.murray wrote:
> 
> > diff --git a/Doc/library/email.parser.rst b/Doc/library/email.parser.rst
> > --- a/Doc/library/email.parser.rst
> > +++ b/Doc/library/email.parser.rst
> > @@ -112,8 +118,13 @@
> >     :class:`~email.message.Message` (see :mod:`email.message`).  The factory will
> >     be called without arguments.
> >  
> > -   .. versionchanged:: 3.2
> > -      Removed the *strict* argument that was deprecated in 2.4.
> > +   The *policy* keyword specifies a :mod:`~email.policy` object that controls a
> > +   number of aspects of the parser's operation.  The default policy maintains
> > +   backward compatibility.
> > +
> > +   .. versionchanged:: 3.3
> > +      Removed the *strict* argument that was deprecated in 2.4.  Added the
> > +      *policy* keyword.
> 
> Hmm, so *strict* wasn't actually removed in 3.2?

Correct.  I had previously checked in a versionchanged with the wrong
release number.  Should have corrected that separately.

[...]

> > +Policy objects are the mechanism used to provide the email package with the
> > +flexibility to handle all these disparate use cases,
> 
> Looks like something is missing from this sentence :)

Éric thought so too, but it reads fine to me.  Maybe it is colloquial
grammar and I'm just blind to it.  I can't now remember what his suggested
modification was, either.  I've rewritten it as:

    Policy objects give the email package the flexibility to handle all
    these disparate use cases.

[...]
 
> > +   >>> from email import msg_from_binary_file
> > +   >>> from email.generator import BytesGenerator
> > +   >>> import email.policy
> > +   >>> from subprocess import Popen, PIPE
> > +   >>> with open('mymsg.txt', 'b') as f:
> > +   >>>     msg = msg_from_binary_file(f, policy=email.policy.mbox)
> > +   >>> p = Popen(['sendmail', msg['To'][0].address], stdin=PIPE)
> > +   >>> g = BytesGenerator(p.stdin, email.policy.policy=SMTP)
> 
> That keyword arg doesn't look right.

Yep, I got that backward when I edited it.
 
> > +   >>> g.flatten(msg)
> > +   >>> p.stdin.close()
> > +   >>> rc = p.wait()
> 
> Also, if you put interactive prompts, please use them correctly ("..." prompt
> and one blank line for the with block).

Éric had me fix one example of that already.  What do you mean by "one
blank line" though?  Doctest doesn't require blank lines after the ...
lines, does it?

[...]

> > +   .. method:: handle_defect(obj, defect)
> > +
> > +      *obj* is the object on which to register the defect.
> 
> What kind of object is *obj*?

Whatever object is being used to represent the data being parsed when
the defect is found.  Right now that's always a Message, but that won't
continue to be true.  The rest of the documentation mentions or will
mention which objects have defect lists, and it felt like duplicating
that information here was a bad case of repetition (ie: one that doesn't
add much value compared to the danger of it getting out of date.)

Except for this last item, I've fixed everything and will shortly
check it in.

> Sorry for the long review.

It was a long patch.  No need to apologize.

--
R. David Murray           http://www.bitdance.com

From stefan_ml at behnel.de  Mon Apr 18 21:50:07 2011
From: stefan_ml at behnel.de (Stefan Behnel)
Date: Mon, 18 Apr 2011 21:50:07 +0200
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
	Compatibiilty Requirements
In-Reply-To: <ACF9143C-6098-4732-B179-9C581EE0C6F3@gmail.com>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>	<BANLkTin5A_tJ+gCxXuA2xVWhMAghBb1dqw@mail.gmail.com>	<BANLkTimVysOh9BdB1vNKHg5LHR3jq3RFyw@mail.gmail.com>	<20110416212352.GA19573@sleipnir.bytereef.org>	<BANLkTinaXQv+A_G-LrbPpiHj6WHK8TVT-A@mail.gmail.com>	<4862031C-A420-41A5-82B0-713262407802@gmail.com>	<BANLkTim7Y2oKuUGGdE16RJRWaupW64Nr3w@mail.gmail.com>	<44f951664fe4e294c102e8bfa6c10d64@netwok.org>	<BANLkTim0_W0hO39iJFQuccj0dodvdMQbtA@mail.gmail.com>
	<ACF9143C-6098-4732-B179-9C581EE0C6F3@gmail.com>
Message-ID: <ioi4lg$2kh$1@dough.gmane.org>

Raymond Hettinger, 18.04.2011 19:26:
> On Apr 18, 2011, at 10:11 AM, Maciej Fijalkowski wrote:
>>
>> * we usually target CPython version that's already frozen, which is
>> pretty inconvenient to post these changes back. An example would be a
>> socket module where it has changed enough in 3.x that 2.7 changes make
>> no sense.
>
> Do you have any thoughts on the problem with the concrete C API
> not working well with subclasses of builtin types?
>
> I'm thinking that the PEP should specifically ban the practice
> of using the concrete api unless it is known for sure that
> an object is an exact type match.

Absolutely.


> It is okay to write PyList_New() followed by PyList_SetItem()
> but not okay to use PyList_SetItem() on a user supplied
> argument that is known to be a subclass of list.  A fast path
> can be provided for an exact path, but there would also need
> to be a slower path that either converts the object to
> an exact list or that uses PyObject_SetItem().

For what it's worth, Cython generates code that contains optimistic 
optimisations for common cases, such as iteration, x.append() calls, etc. 
When it finds such a pattern, it generates separate code paths for the most 
likely (builtin type) case and a slower fallback for the more unlikely case 
of a user provided type. So you get both speed and compatibility for free, 
just by writing idiomatic code like "for item in some_iterable".
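
To make the subclass hazard concrete in pure Python terms (just an
illustrative sketch, not code from the PEP or from Cython): a list subclass
can override __setitem__, and anything equivalent to a concrete
PyList_SetItem() call bypasses that override, while the generic protocol
honours it:

    class LoggingList(list):
        def __setitem__(self, index, value):
            print("setting", index)            # behaviour the caller expects
            super().__setitem__(index, value)

    items = LoggingList([0])
    items[0] = 1    # generic route (PyObject_SetItem): the override runs
    # C code calling PyList_SetItem(items, 0, v) would skip __setitem__
    # entirely, which is exactly the surprise Raymond is warning about.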

Stefan


From stefan_ml at behnel.de  Mon Apr 18 23:11:02 2011
From: stefan_ml at behnel.de (Stefan Behnel)
Date: Mon, 18 Apr 2011 23:11:02 +0200
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
	Compatibiilty Requirements
In-Reply-To: <20110418123108.8DE972500DB@mailhost.webabinitio.net>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>	<BANLkTin5A_tJ+gCxXuA2xVWhMAghBb1dqw@mail.gmail.com>	<BANLkTimVysOh9BdB1vNKHg5LHR3jq3RFyw@mail.gmail.com>	<20110416212352.GA19573@sleipnir.bytereef.org>	<BANLkTinaXQv+A_G-LrbPpiHj6WHK8TVT-A@mail.gmail.com>	<4862031C-A420-41A5-82B0-713262407802@gmail.com>	<BANLkTim7Y2oKuUGGdE16RJRWaupW64Nr3w@mail.gmail.com>	<BANLkTik1C+1HDgZJCEj+61UFHbCeg1hQMA@mail.gmail.com>
	<20110418123108.8DE972500DB@mailhost.webabinitio.net>
Message-ID: <ioi9d6$vh4$1@dough.gmane.org>

R. David Murray, 18.04.2011 14:30:
> On Mon, 18 Apr 2011 09:36:20 +0100, Paul Moore wrote:
>> On 18 April 2011 08:05, Maciej Fijalkowski wrote:
>>> On Sun, Apr 17, 2011 at 4:19 AM, Raymond Hettinger wrote:
>>>
>>>> The PEP seems to be predicated on a notion that anything written in C is
>>>> bad and that all testing is good.
>>>
>>> Sounds about right
>>
>> I disagree. To me, a Python without libraries such as os, zlib,
>> zipfile, threading, etc wouldn't be much use (except in specialised
>> circumstances). OK, that means that alternative implementations need
>> to do extra work to implement equivalents in their own low-level
>> language, but so be it (sorry!)
>
> I think Maciej left out an "only" in that sentence.  If you say "only C",
> then the sentence makes sense, even when applied to modules that *can*
> only be written in C (for CPython).  That is, not having a Python version
> is bad.  Necessary in many cases (or not worth the cost, for external
> library wrappers), but wouldn't it be nicer if it wasn't necessary?

FWIW, there is a proposed GSoC project that aims to implement a Cython 
backend for PyPy, either using ctypes or PyPy's own FFI. That would 
basically remove the need to write library wrappers in C for both CPython 
and PyPy, and eventually for IronPython, which also has a Cython port in 
the making. Not sure how Jython fits into this, but I wouldn't object to 
someone writing a JNI backend either.

Stefan


From brett at python.org  Mon Apr 18 23:22:35 2011
From: brett at python.org (Brett Cannon)
Date: Mon, 18 Apr 2011 14:22:35 -0700
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
 Compatibiilty Requirements
In-Reply-To: <4F1A17A7-CA6E-42BD-A856-15DD92EAEE76@gmail.com>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<BANLkTin5A_tJ+gCxXuA2xVWhMAghBb1dqw@mail.gmail.com>
	<BANLkTimVysOh9BdB1vNKHg5LHR3jq3RFyw@mail.gmail.com>
	<20110416212352.GA19573@sleipnir.bytereef.org>
	<BANLkTinaXQv+A_G-LrbPpiHj6WHK8TVT-A@mail.gmail.com>
	<4862031C-A420-41A5-82B0-713262407802@gmail.com>
	<20110417053245.42D262500D7@mailhost.webabinitio.net>
	<4F1A17A7-CA6E-42BD-A856-15DD92EAEE76@gmail.com>
Message-ID: <BANLkTimuRHgYErYijw4KWO3B4OcmgQRCSQ@mail.gmail.com>

I just want to say upfront that my personal life has just gotten very hectic
as of late (green card stuff for my wife, who is Canadian) and probably will
not let up until June. So if I go a while without replying to points being
made, I apologize. Luckily there seem to be others here who understand the
direction I am coming from, so there is no need to stop talking while I am
preoccupied with the real world.

On Sun, Apr 17, 2011 at 00:30, Raymond Hettinger <
raymond.hettinger at gmail.com> wrote:

>
> >>>> In the grand python-dev tradition of "silence means acceptance", I
> consider
> >>>> this PEP finalized and implicitly accepted.
> >>
> >> I haven't seen any responses that said, yes this is a well thought-out
> proposal
> >> that will actually benefit any of the various implementations.
> >
> > In that case it may well be that the silence is because the other
> > implementations think the PEP is OK.  They certainly voted in favor of
> > the broad outline of it at the language summit.
>
> Sounds like it was implicitly accepted even before it was written or any of
> the details were discussed.
>

Actually I directly emailed the relevant people from the other VMs to make
sure they were happy with what I was aiming for before I approached
python-dev with the PEP. So IronPython, Jython, and PyPy lead developers
have all told me that they want something along the lines of this PEP to
happen.


>
> The big picture of "let's do something to make life easier for other
> implementations" is a worthy goal.  What that something should be is still a
> bit ambiguous.
>
>
> >> every branch in a given implementation now guarantee every
> implementation detail
> >> or do we only promise the published API (historically, we've *always*
> done the
> >> latter)?
> >
> > As Brett said, people do come to depend on the details of the
> > implementation.  But IMO the PEP should be clarified to say that the
> > tests we are talking about should be tests *of the published API*.
> > That is, blackbox tests, not whitebox tests.
>
> +1 That's an excellent suggestion.  Without that change, it seems like the
> PEP is overreaching.
>

I'm okay with going with this line of thought, including R. David's "100%
branch coverage is but one way to achieve extensive testing of the published
API".


>
>
> >> Is there going to be any guidance on the commonly encountered semantic
> >> differences between C modules and their Python counterparts
> (thread-safety,
> >> argument handling, tracebacks, all possible exceptions, monkey-patchable
> pure
> >> python classes versus hard-wired C types etc)?
> >
> > Presumably we will need to develop such guidance.
>
> +1 That would be very helpful.  Right now, the PEP doesn't address any of
> the commonly encountered differences.
>

If people are willing to help me (i.e., go ahead and edit the PEP) with this
then I am okay with adding some common issues (but I don't expect it to be
exhaustive).


>
>
> > I personally have no problem with the 100% coverage being made a
> > recommendation in the PEP rather than a requirement.  It sounds like
> > that might be acceptable to Antoine.  Actually, I would also be fine with
> > saying "comprehensive" instead, with a note that 100% branch coverage is
> > a good way to head toward that goal, since a comprehensive test suite
> > should contain more tests than the minimum set needed to get to 100%
> > branch coverage.
>
> +1 better test coverage is always a good thing (IMO).
>
>
> Raymond
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110418/d3e9f5af/attachment.html>

From stefan_ml at behnel.de  Tue Apr 19 07:06:09 2011
From: stefan_ml at behnel.de (Stefan Behnel)
Date: Tue, 19 Apr 2011 07:06:09 +0200
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
	Compatibiilty Requirements
In-Reply-To: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
Message-ID: <ioj581$sl9$1@dough.gmane.org>

Brett Cannon, 05.04.2011 01:46:
> At both the VM and language summits at PyCon this year, the issue of
> compatibility of the stdlib amongst the various VMs came up. Two issues came
> about in regards to modules that use C code. One is that code that comes in
> only as C code sucks for all other VMs that are not CPython since they all
> end up having to re-implement that module themselves. Two is that modules
> that have an accelerator module (e.g., heapq, warnings, etc.) can end up
> with compatibility options (sorry, Raymond, for picking on heapq, but it was
> what bit the PyPy people most recently =).
>
> In lieu of all of this, here is a draft PEP to more clearly state the policy
> for the stdlib when it comes to C code. Since this has come up before and
> this was discussed so much at the summits I have gone ahead and checked this
> in so that even if this PEP gets rejected there can be a written record as
> to why.
>
> And before anyone asks, I have already run this past the lead devs of PyPy,
> Jython, and IronPython and they all support what this PEP proposes. And with
> the devs of the other VMs gaining push privileges there shouldn't be an
> added developer burden on everyone to make this PEP happen.

This PEP has received a lengthy discussion by now, so here's why I think 
it's being fought so heavily by several CPython core developers, 
specifically those who have traditionally carried a large part of the 
optimisation load in the project.

I think the whole point of this PEP is that, having agreed that a shared 
standard library for all Python implementations is a good thing, the amount 
of shareable code should be maximised. I doubt that anyone will argue 
against this goal.

But that obviously includes all sides. If other implementations are free to 
cherry pick the targets of their own effort geared by the optimisation of 
their own implementation, and leave the whole burden of compatibility and 
code reusability on CPython, in addition to the CPython efforts of 
improving and optimising its own core code base and its own stdlib version, 
it's not an equal matter.

That's what makes the PEP feel so unfair to CPython developers, because 
they are the ones who carry most of the burden of maintaining the stdlib in 
the first place, and who will most likely continue to carry it, because 
other implementations will continue to be occupied with their own core 
development for another while or two. It is nice to read that other 
implementations are contributing back patches that simplify their own reuse 
of the stdlib code. However, that does not yet make them equal contributors 
to the development and the maintenance of the stdlib, and is of very little 
worth to the CPython project. It often even runs counter to the interest of 
CPython itself.

I think this social problem of the PEP can only be solved if the CPython 
project stops doing the major share of the stdlib maintenance, thus freeing 
its own developer capacities to focus on CPython related improvements and 
optimisations, just like the other implementations currently do. I'm not 
sure we want that at this point.

Stefan


From g.brandl at gmx.net  Tue Apr 19 07:30:50 2011
From: g.brandl at gmx.net (Georg Brandl)
Date: Tue, 19 Apr 2011 07:30:50 +0200
Subject: [Python-Dev] cpython: #11731: simplify/enhance parser/generator
 API by introducing policy objects.
In-Reply-To: <20110418194009.5C09D2500DC@mailhost.webabinitio.net>
References: <E1QBskY-0004rj-OH@dinsdale.python.org>
	<iohvp8$49e$1@dough.gmane.org>
	<20110418194009.5C09D2500DC@mailhost.webabinitio.net>
Message-ID: <ioj6mm$30n$1@dough.gmane.org>

On 18.04.2011 21:39, R. David Murray wrote:

>> > +Policy objects are the mechanism used to provide the email package with the
>> > +flexibility to handle all these disparate use cases,
>> 
>> Looks like something is missing from this sentence :)
> 
> Éric thought so too, but it reads fine to me.  Maybe it is colloquial
> grammar and I'm just blind to it.  I can't now remember what his suggested
> modification was, either.  I've rewritten it as:
> 
>     Policy objects give the email package the flexibility to handle all
>     these disparate use cases.

Sure, I was only asking because the original ended in a trailing comma.

>> > +   >>> from email import msg_from_binary_file
>> > +   >>> from email.generator import BytesGenerator
>> > +   >>> import email.policy
>> > +   >>> from subprocess import Popen, PIPE
>> > +   >>> with open('mymsg.txt', 'b') as f:
>> > +   >>>     msg = msg_from_binary_file(f, policy=email.policy.mbox)
>> > +   >>> p = Popen(['sendmail', msg['To'][0].address], stdin=PIPE)
>> > +   >>> g = BytesGenerator(p.stdin, email.policy.policy=SMTP)
>> 
>> That keyword arg doesn't look right.
> 
> Yep, I got that backward when I edited it.
>  
>> > +   >>> g.flatten(msg)
>> > +   >>> p.stdin.close()
>> > +   >>> rc = p.wait()
>> 
>> Also, if you put interactive prompts, please use them correctly ("..." prompt
>> and one blank line for the with block).
> 
> Éric had me fix one example of that already.  What do you mean by "one
> blank line" though?  Doctest doesn't require blank lines after the ...
> lines, does it?

Not sure what doctest requires, but in the actual interactive shell you'd have

>>> with ...:
...     do something
...
>>> next statement

It's not really important though.

Georg


From ncoghlan at gmail.com  Tue Apr 19 10:57:11 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 19 Apr 2011 18:57:11 +1000
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
 Compatibiilty Requirements
In-Reply-To: <ioj581$sl9$1@dough.gmane.org>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<ioj581$sl9$1@dough.gmane.org>
Message-ID: <BANLkTimn_A8OvjEvET5JDrZhBZ+2OO3=sQ@mail.gmail.com>

On Tue, Apr 19, 2011 at 3:06 PM, Stefan Behnel <stefan_ml at behnel.de> wrote:
> I think this social problem of the PEP can only be solved if the CPython
> project stops doing the major share of the stdlib maintenance, thus freeing
> its own developer capacities to focus on CPython related improvements and
> optimisations, just like the other implementations currently do. I'm not
> sure we want that at this point.

We've made a start on that aspect by granting CPython access to
several of the core developers on the other VMs. The idea being that
they can update the pure Python versions of modules directly rather
than having to wait for one of us to do it on their behalf.

Of course, as Maciej pointed out, that is currently hindered by the
fact that the other VMs aren't targeting 3.3 yet, and that's where the
main CPython development is happening.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From stefan_ml at behnel.de  Tue Apr 19 12:01:44 2011
From: stefan_ml at behnel.de (Stefan Behnel)
Date: Tue, 19 Apr 2011 12:01:44 +0200
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
	Compatibiilty Requirements
In-Reply-To: <BANLkTimn_A8OvjEvET5JDrZhBZ+2OO3=sQ@mail.gmail.com>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>	<ioj581$sl9$1@dough.gmane.org>
	<BANLkTimn_A8OvjEvET5JDrZhBZ+2OO3=sQ@mail.gmail.com>
Message-ID: <iojmi8$nhg$1@dough.gmane.org>

Nick Coghlan, 19.04.2011 10:57:
> On Tue, Apr 19, 2011 at 3:06 PM, Stefan Behnel wrote:
>> I think this social problem of the PEP can only be solved if the CPython
>> project stops doing the major share of the stdlib maintenance, thus freeing
>> its own developer capacities to focus on CPython related improvements and
>> optimisations, just like the other implementations currently do. I'm not
>> sure we want that at this point.
>
> We've made a start on that aspect by granting CPython access to
> several of the core developers on the other VMs. The idea being that
> they can update the pure Python versions of modules directly rather
> than having to wait for one of us to do it on their behalf.
>
> Of course, as Maciej pointed out, that is currently hindered by the
> fact that the other VMs aren't targeting 3.3 yet, and that's where the
> main CPython development is happening.

A related question is: when other Python VM projects try to port a given C 
module, would they actually invest the time to write a pure Python version 
that may or may not run within acceptable performance bounds for them, or 
would they prefer saving time by writing only a native implementation 
directly for their VM for performance reasons? Maybe both, maybe not. If 
they end up writing a native version after prototyping in Python, is the 
prototype worth including in the shared stdlib, even if its performance is 
completely unacceptable for everyone? Or, if they write a partial module 
and implement another part of it natively, would the incomplete 
implementation qualify as a valid addition to the shared stdlib?

Implementing a 100% compatible and "fast enough" Python version of a module 
is actually a rather time consuming task. I think we are expecting some 
altruism here that is easily sacrificed for time constraints, in any of the 
Python VM projects. CPython is just in the unlucky position of representing 
the status-quo.

Stefan


From solipsis at pitrou.net  Tue Apr 19 13:35:48 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Tue, 19 Apr 2011 13:35:48 +0200
Subject: [Python-Dev] cpython: os.sendfile(): on Linux if offset
 parameter is passed as	NULL we were
References: <E1QC5eb-0007jZ-VX@dinsdale.python.org>
Message-ID: <20110419133548.5d91845c@pitrou.net>

On Tue, 19 Apr 2011 09:47:21 +0200
giampaolo.rodola <python-checkins at python.org> wrote:

> http://hg.python.org/cpython/rev/8c49f7fbba1d
> changeset:   69437:8c49f7fbba1d
> user:        Giampaolo Rodola' <g.rodola at gmail.com>
> date:        Tue Apr 19 09:47:16 2011 +0200
> summary:
>   os.sendfile(): on Linux if offset parameter is passed as NULL we were erroneously returning a (bytes_sent, None) tuple instead of bytes_sent

Do we have tests for this?

Regards

Antoine.



From jnoller at gmail.com  Tue Apr 19 14:02:23 2011
From: jnoller at gmail.com (Jesse Noller)
Date: Tue, 19 Apr 2011 08:02:23 -0400
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
 Compatibiilty Requirements
In-Reply-To: <ioj581$sl9$1@dough.gmane.org>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<ioj581$sl9$1@dough.gmane.org>
Message-ID: <BANLkTi=1t=YH7Wa4u2cNtoEDVB8=WqMBjg@mail.gmail.com>

On Tue, Apr 19, 2011 at 1:06 AM, Stefan Behnel <stefan_ml at behnel.de> wrote:
[snip]
> This PEP has received a lengthy discussion by now, so here's why I think
> it's being fought so heavily by several CPython core developers,
> specifically those who have traditionally carried a large part of the
> optimisation load in the project.
>
> I think the whole point of this PEP is that, having agreed that a shared
> standard library for all Python implementations is a good thing, the amount
> of shareable code should be maximised. I doubt that anyone will argue
> against this goal.
>
> But that obviously includes all sides. If other implementations are free to
> cherry pick the targets of their own effort geared by the optimisation of
> their own implementation, and leave the whole burden of compatibility and
> code reusability on CPython, in addition to the CPython efforts of improving
> and optimising its own core code base and its own stdlib version, it's not
> an equal matter.
>

I am going to go out on a limb here and state that once the stdlib is
shared, it is all of the VMs' responsibility to help maintain it,
meaning that PEP 399 is binding on all of the VMs. If Jython wants to
write an accelerator module in Java for something in the stdlib, they
have to follow the same guidelines; the same applies to PyPy, etc.

I think this is an equal matter, and if needed, we should make note of
it in the PEP. The goal here is to make it easier to share the code
base of the stdlib and not pull the rug out from under other implementations
by having a stdlib module written only in highly optimized C with no
Python fallback, leaving them with the unsavory duty of reimplementing
it in Python|Java|C#, etc.

Pure Python is the coin of the realm.
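
(As a concrete reminder of the shape this takes, here is a minimal sketch of
the accelerator pattern PEP 399 is asking for; the module names are made up
for illustration:)

    # mymodule.py
    def useful_function(data):
        # Pure Python implementation, shared by every VM.
        return sorted(data)

    try:
        from _mymodule import *    # optional C accelerator (CPython only)
    except ImportError:
        pass                       # other VMs simply keep the Python version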

> That's what makes the PEP feel so unfair to CPython developers, because they
> are the ones who carry most of the burden of maintaining the stdlib in the
> first place, and who will most likely continue to carry it, because other
> implementations will continue to be occupied with their own core development
> for another while or two. It is nice to read that other implementations are
> contributing back patches that simplify their own reuse of the stdlib code.
> However, that does not yet make them equal contributors to the development
> and the maintenance of the stdlib, and is of very little worth to the
> CPython project. It often even runs counter to the interest of CPython
> itself.

Sure, at first glance this seems to place an unfair burden on CPython,
because we're just as guilty of being "closed" to other
implementations as the other implementations are to us. We're trying to
change that, and someone (us, as the reference implementation) needs to
take the first responsible step.

Once this move is made/accepted, I would expect the other
implementations to rapidly move away from their custom implementations
of the stdlib and contribute to the shared code base and
documentation. Yes, this places a burden on CPython, but in the long
term it benefits *all* of the projects equally by simply having more
active contributors.

We have over 200 stdlib modules, and far, far fewer active developers
focused on or working on the stdlib. Making it a shared
property (in theory) means that the other VMs have a shared interest
in that property. We're effectively spreading the load.

> I think this social problem of the PEP can only be solved if the CPython
> project stops doing the major share of the stdlib maintenance, thus freeing
> its own developer capacities to focus on CPython related improvements and
> optimisations, just like the other implementations currently do. I'm not
> sure we want that at this point.

That's not going to happen. CPython will continue to do the bulk of
the maintenance until we break it out, and the other implementations
have time to adapt and pull in the shared code base.

I don't see this as being as large a burden as you seem to be making it
out to be: CPython is the reference implementation, and our stdlib is
the reference stdlib. We can break out the stdlib and share it amongst
the implementations, thereby making it more than the reference stdlib:
we can make it the de facto stdlib for the language as a whole.

We also want, in the long term, to spread the maintenance load beyond
CPython, but right now we are the primary caretakers, so yes, this adds
load to us in the short term, but benefits us in the long term.

jesse

From fijall at gmail.com  Tue Apr 19 14:17:52 2011
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Tue, 19 Apr 2011 14:17:52 +0200
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
 Compatibiilty Requirements
In-Reply-To: <BANLkTimn_A8OvjEvET5JDrZhBZ+2OO3=sQ@mail.gmail.com>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<ioj581$sl9$1@dough.gmane.org>
	<BANLkTimn_A8OvjEvET5JDrZhBZ+2OO3=sQ@mail.gmail.com>
Message-ID: <BANLkTikm2_0OKXXhUeQ3ThrvRiPji=xxFg@mail.gmail.com>

On Tue, Apr 19, 2011 at 10:57 AM, Nick Coghlan <ncoghlan at gmail.com> wrote:
> On Tue, Apr 19, 2011 at 3:06 PM, Stefan Behnel <stefan_ml at behnel.de> wrote:
>> I think this social problem of the PEP can only be solved if the CPython
>> project stops doing the major share of the stdlib maintenance, thus freeing
>> its own developer capacities to focus on CPython related improvements and
>> optimisations, just like the other implementations currently do. I'm not
>> sure we want that at this point.
>
> We've made a start on that aspect by granting CPython access to
> several of the core developers on the other VMs. The idea being that
> they can update the pure Python versions of modules directly rather
> than having to wait for one of us to do it on their behalf.
>
> Of course, as Maciej pointed out, that is currently hindered by the
> fact that the other VMs aren't targeting 3.3 yet, and that's where the
> main CPython development is happening.

We're also slightly hindered by the fact that not all of us got
privileges so far (Antonio Cuni in particular).

>
> Cheers,
> Nick.
>
> --
> Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/fijall%40gmail.com
>

From jnoller at gmail.com  Tue Apr 19 14:22:12 2011
From: jnoller at gmail.com (Jesse Noller)
Date: Tue, 19 Apr 2011 08:22:12 -0400
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
 Compatibiilty Requirements
In-Reply-To: <BANLkTikm2_0OKXXhUeQ3ThrvRiPji=xxFg@mail.gmail.com>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<ioj581$sl9$1@dough.gmane.org>
	<BANLkTimn_A8OvjEvET5JDrZhBZ+2OO3=sQ@mail.gmail.com>
	<BANLkTikm2_0OKXXhUeQ3ThrvRiPji=xxFg@mail.gmail.com>
Message-ID: <BANLkTi=GK=kpfQxjU+Gy30jKJ0OZ3-0e4Q@mail.gmail.com>

On Tue, Apr 19, 2011 at 8:17 AM, Maciej Fijalkowski <fijall at gmail.com> wrote:
> On Tue, Apr 19, 2011 at 10:57 AM, Nick Coghlan <ncoghlan at gmail.com> wrote:
>> On Tue, Apr 19, 2011 at 3:06 PM, Stefan Behnel <stefan_ml at behnel.de> wrote:
>>> I think this social problem of the PEP can only be solved if the CPython
>>> project stops doing the major share of the stdlib maintenance, thus freeing
>>> its own developer capacities to focus on CPython related improvements and
>>> optimisations, just like the other implementations currently do. I'm not
>>> sure we want that at this point.
>>
>> We've made a start on that aspect by granting CPython access to
>> several of the core developers on the other VMs. The idea being that
>> they can update the pure Python versions of modules directly rather
>> than having to wait for one of us to do it on their behalf.
>>
>> Of course, as Maciej pointed out, that is currently hindered by the
>> fact that the other VMs aren't targeting 3.3 yet, and that's where the
>> main CPython development is happening.
>
> We're also slightly hindered by the fact that not all of us got
> privileges so far (Antonio Cuni in particular).

Yeah, I emailed him this morning; I dropped the ball on his commit bit
post-PyCon due to email overload. I'm resolving it today.

From techtonik at gmail.com  Tue Apr 19 14:25:58 2011
From: techtonik at gmail.com (anatoly techtonik)
Date: Tue, 19 Apr 2011 15:25:58 +0300
Subject: [Python-Dev] Python 2.6.7
In-Reply-To: <20110418085326.6b8d787a@neurotica.wooz.org>
References: <20110418085326.6b8d787a@neurotica.wooz.org>
Message-ID: <BANLkTikWQh12XSExUeMGyJV_LbbkMWqvHQ@mail.gmail.com>

On Mon, Apr 18, 2011 at 3:53 PM, Barry Warsaw <barry at python.org> wrote:
> With Martin getting ready to release 2.5.6, I think it's time to prepare a
> 2.6.7 source-only security release.
>
> I'll work my way through the NEWS file and recent commits, but if there is
> anything that you know is missing from the 2.6 branch, please let me know.  It
> would be especially helpful if there were bugs for any such issues.

Does 'anything' only relate to security fixes?
--
anatoly t.

From fijall at gmail.com  Tue Apr 19 14:26:21 2011
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Tue, 19 Apr 2011 14:26:21 +0200
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
 Compatibiilty Requirements
In-Reply-To: <iojmi8$nhg$1@dough.gmane.org>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<ioj581$sl9$1@dough.gmane.org>
	<BANLkTimn_A8OvjEvET5JDrZhBZ+2OO3=sQ@mail.gmail.com>
	<iojmi8$nhg$1@dough.gmane.org>
Message-ID: <BANLkTi==8th5+iMP9YSyd-WYPDTfTx4fTQ@mail.gmail.com>

On Tue, Apr 19, 2011 at 12:01 PM, Stefan Behnel <stefan_ml at behnel.de> wrote:
> Nick Coghlan, 19.04.2011 10:57:
>>
>> On Tue, Apr 19, 2011 at 3:06 PM, Stefan Behnel wrote:
>>>
>>> I think this social problem of the PEP can only be solved if the CPython
>>> project stops doing the major share of the stdlib maintenance, thus
>>> freeing
>>> its own developer capacities to focus on CPython related improvements and
>>> optimisations, just like the other implementations currently do. I'm not
>>> sure we want that at this point.
>>
>> We've made a start on that aspect by granting CPython access to
>> several of the core developers on the other VMs. The idea being that
>> they can update the pure Python versions of modules directly rather
>> than having to wait for one of us to do it on their behalf.
>>
>> Of course, as Maciej pointed out, that is currently hindered by the
>> fact that the other VMs aren't targeting 3.3 yet, and that's where the
>> main CPython development is happening.
>
> A related question is: when other Python VM projects try to port a given C
> module, would they actually invest the time to write a pure Python version
> that may or may not run within acceptable performance bounds for them, or
> would they prefer saving time by writing only a native implementation
> directly for their VM for performance reasons? Maybe both, maybe not. If
> they end up writing a native version after prototyping in Python, is the
> prototype worth including in the shared stdlib, even if its performance is
> completely unacceptable for everyone? Or, if they write a partial module and
> implement another part of it natively, would the incomplete implementation
> qualify as a valid addition to the shared stdlib?

At least from our (PyPy's) side, we do use the pure Python versions a lot.
Their performance varies, but sometimes you don't care; you just want
the module to work. Contrary to popular belief, not all code in the
standard library is performance critical. We got quite far without
even looking. Later on we usually do look there, but for us rewriting it
in RPython most of the time makes no sense, since pure Python code
might even behave better than RPython code, especially if there are
loops, which get JITted more efficiently if they're in pure Python.

>
> Implementing a 100% compatible and "fast enough" Python version of a module
> is actually a rather time consuming task. I think we are expecting some
> altruism here that is easily sacrificed for time constraints, in any of the
> Python VM projects. CPython is just in the unlucky position of representing
> the status-quo.

I think 100% compatible, with whatever performance, is already a lot for
us. We can improve the performance later on. For example, we never
touched the heapq module and it works just fine as it is.

>
> Stefan
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/fijall%40gmail.com
>

From fijall at gmail.com  Tue Apr 19 14:29:24 2011
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Tue, 19 Apr 2011 14:29:24 +0200
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
 Compatibiilty Requirements
In-Reply-To: <BANLkTi=1t=YH7Wa4u2cNtoEDVB8=WqMBjg@mail.gmail.com>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<ioj581$sl9$1@dough.gmane.org>
	<BANLkTi=1t=YH7Wa4u2cNtoEDVB8=WqMBjg@mail.gmail.com>
Message-ID: <BANLkTikyObawauTTgexkF-J=jSKUOSM1xQ@mail.gmail.com>

>
> Once this move is made/accepted, I would expect the other
> implementations to rapidly move away from their custom implementations
> of the stdlib and contribute to the shared code base and
> documentation. Yes, this places a burden on CPython, but in the long
> term it benefits *all* of the projects equally by simply having more
> active contributors.
>

I would also like to point out that some valuable contributions were
made already by other implementations. When talking about stdlib, it's
mostly in the area of test suite, but not only in terms of "skip those
tests", but also improving test coverage and even fixing bugs. Unicode
fixes were prototyped on PyPy first and some PyPy optimizations were
ported to CPython (the original method cache patch came from Armin
Rigo as far as I remember). So it's not completely "Cpython's burden"
only.

Cheers,
fijal

From victor.stinner at haypocalc.com  Tue Apr 19 15:46:14 2011
From: victor.stinner at haypocalc.com (Victor Stinner)
Date: Tue, 19 Apr 2011 15:46:14 +0200
Subject: [Python-Dev] Drop OS/2 and VMS support?
Message-ID: <1303220774.8140.8.camel@marge>

Hi,

I asked one year ago if we should drop OS/2 support: Andrew MacIntyre,
our OS/2 maintainer, answered:
http://mail.python.org/pipermail/python-dev/2010-April/099477.html

Extract: << The 3.x branch needs quite a bit of work on OS/2 to 
deal with Unicode, as OS/2 was one of the earlier OSes with full 
multiple language support and IBM developed a unique API.  I'm still 
struggling to come to terms with this, partly because I myself don't 
"need" it. >>

So one year later, Python 3 does still not support OS/2.

--

About VMS: I don't know if anyone is using Python (2 or 3) on VMS, or if
Python 3 does work on VMS. I bet that it does just not compile :-)

I don't know anyone using VMS or OS/2.

--

There are 39 #ifdef VMS and 52 #ifdef OS2. We can keep them and wait
until someone works on these OSes to ensure that the test suite passes. But
if nobody cares about these OSes and nobody wants to maintain them, it
would be easier for the maintenance of the Python source code base to
remove the OS-specific code.

Well, not "remove" directly, but plan to remove it using the PEP 11
procedure (mark OS/2 and VMS as unsupported, and remove the code in
Python 3.4).

Victor


From barry at python.org  Tue Apr 19 16:01:14 2011
From: barry at python.org (Barry Warsaw)
Date: Tue, 19 Apr 2011 10:01:14 -0400
Subject: [Python-Dev] Python 2.6.7
In-Reply-To: <BANLkTikWQh12XSExUeMGyJV_LbbkMWqvHQ@mail.gmail.com>
References: <20110418085326.6b8d787a@neurotica.wooz.org>
	<BANLkTikWQh12XSExUeMGyJV_LbbkMWqvHQ@mail.gmail.com>
Message-ID: <20110419100114.5910722f@neurotica.wooz.org>

On Apr 19, 2011, at 03:25 PM, anatoly techtonik wrote:

>On Mon, Apr 18, 2011 at 3:53 PM, Barry Warsaw <barry at python.org> wrote:
>> With Martin getting ready to release 2.5.6, I think it's time to prepare a
>> 2.6.7 source-only security release.
>>
>> I'll work my way through the NEWS file and recent commits, but if there is
>> anything that you know is missing from the 2.6 branch, please let me know.  It
>> would be especially helpful if there were bugs for any such issues.
>
>Does 'anything' only relate to security fixes?

Yes.

-Barry
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 836 bytes
Desc: not available
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110419/277eea8a/attachment.pgp>

From ncoghlan at gmail.com  Tue Apr 19 16:10:11 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 20 Apr 2011 00:10:11 +1000
Subject: [Python-Dev] Providing a mechanism for PEP 3115 compliant dynamic
	class creation
Message-ID: <BANLkTi=any_UMyHx76r-VxD4frV7Te16XQ@mail.gmail.com>

In reviewing a fix for the metaclass calculation in __build_class__
[1], I realised that PEP 3115 poses a potential problem for the common
practice of using "type(name, bases, ns)" for dynamic class creation.

Specifically, if one of the base classes has a metaclass with a
significant __prepare__() method, then the current idiom will do the
wrong thing (and most likely fail as a result), since "ns" will
probably be an ordinary dictionary instead of whatever __prepare__()
would have returned.
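
A tiny illustration of the failure mode (the metaclass here is made up
purely for the example):

    import collections

    class Meta(type):
        @classmethod
        def __prepare__(mcls, name, bases):
            return collections.OrderedDict()    # significant namespace type
        def __new__(mcls, name, bases, ns):
            # Relies on getting back whatever __prepare__ returned.
            assert isinstance(ns, collections.OrderedDict)
            return super().__new__(mcls, name, bases, dict(ns))

    class Base(metaclass=Meta):
        pass    # fine: the class statement calls Meta.__prepare__

    # The common dynamic idiom never calls __prepare__, so ns is a plain
    # dict and the assertion above fails:
    Derived = type("Derived", (Base,), {})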

Initially I was going to suggest making __build_class__ part of the
language definition rather than a CPython implementation detail, but
then I realised that various CPython specific elements in its
signature made that a bad idea.

Instead, I'm thinking along the lines of an
"operator.prepare(metaclass, bases)" function that does the metaclass
calculation dance, invoking __prepare__() and returning the result if
it exists, otherwise returning an ordinary dict. Under the hood we
would refactor this so that operator.prepare and __build_class__ were
using a shared implementation of the functionality at the C level - it
may even be advisable to expose that implementation via the C API as
PyType_PrepareNamespace().

The correct idiom for dynamic type creation in a PEP 3115 world would then be:

    from operator import prepare
    cls = type(name, bases, prepare(type, bases))

Thoughts?

Cheers,
Nick.

[1] http://bugs.python.org/issue1294232

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From mal at egenix.com  Tue Apr 19 16:36:13 2011
From: mal at egenix.com (M.-A. Lemburg)
Date: Tue, 19 Apr 2011 16:36:13 +0200
Subject: [Python-Dev] Drop OS/2 and VMS support?
In-Reply-To: <1303220774.8140.8.camel@marge>
References: <1303220774.8140.8.camel@marge>
Message-ID: <4DAD9DDD.2080301@egenix.com>

Victor Stinner wrote:
> Hi,
> 
> I asked one year ago if we should drop OS/2 support: Andrew MacIntyre,
> our OS/2 maintainer, answered:
> http://mail.python.org/pipermail/python-dev/2010-April/099477.html
> 
> Extract: << The 3.x branch needs quite a bit of work on OS/2 to 
> deal with Unicode, as OS/2 was one of the earlier OSes with full 
> multiple language support and IBM developed a unique API.  I'm still 
> struggling to come to terms with this, partly because I myself don't 
> "need" it. >>
> 
> So one year later, Python 3 does still not support OS/2.
> 
> --
> 
> About VMS: I don't know if anyone is using Python (2 or 3) on VMS, or if
> Python 3 does work on VMS. I bet that it does just not compile :-)
> 
> I don't know anyone using VMS or OS/2.
> 
> --
> 
> There are 39 #ifdef VMS and 52 #ifdef OS2. We can keep them and wait
> until someone works on these OSes to ensure that the test suite passes. But
> if nobody cares about these OSes and nobody wants to maintain them, it
> would be easier for the maintenance of the Python source code base to
> remove the OS-specific code.
> 
> Well, not "remove" directly, but plan to remove it using the PEP 11
> procedure (mark OS/2 and VMS as unsupported, and remove the code in
> Python 3.4).

The Python core team is not really representative of the Python
community users, so I think this needs a different approach:

Instead of simply deprecating OSes without notice to the general
Python community, how about doing a "call for support" for these
OSes ?

If that doesn't turn up maintainers, then we can take the PEP 11
route.

FWIW: There's still a fan-base out there for OS/2 and its successor
eComStation:

http://en.wikipedia.org/wiki/EComStation
http://www.ecomstation.com/ecomstation20.phtml
http://www.warpstock.eu/

Same for VMS in form of OpenVMS:

http://en.wikipedia.org/wiki/OpenVMS
http://h71000.www7.hp.com/index.html?jumpid=/go/openvms
http://www.vmspython.org/

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Apr 19 2011)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________

::: Try our new mxODBC.Connect Python Database Interface for free ! ::::


   eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
    D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
           Registered at Amtsgericht Duesseldorf: HRB 46611
               http://www.egenix.com/company/contact/

From rdmurray at bitdance.com  Tue Apr 19 16:37:41 2011
From: rdmurray at bitdance.com (R. David Murray)
Date: Tue, 19 Apr 2011 10:37:41 -0400
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
	Compatibiilty Requirements
In-Reply-To: <ioj581$sl9$1@dough.gmane.org>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<ioj581$sl9$1@dough.gmane.org>
Message-ID: <20110419143802.327D82500DF@mailhost.webabinitio.net>

On Tue, 19 Apr 2011 07:06:09 +0200, Stefan Behnel <stefan_ml at behnel.de> wrote:
> That's what makes the PEP feel so unfair to CPython developers, because 
> they are the ones who carry most of the burden of maintaining the stdlib in 
> the first place, and who will most likely continue to carry it, because 
> other implementations will continue to be occupied with their own core 
> development for another while or two. It is nice to read that other 
> implementations are contributing back patches that simplify their own reuse 
> of the stdlib code. However, that does not yet make them equal contributors 
> to the development and the maintenance of the stdlib, and is of very little 
> worth to the CPython project. It often even runs counter to the interest of 
> CPython itself.

So, the PEP makes the burden worse in that it requires that someone who
works on a module with a C accelerator must make sure that any existing
Python version and the C version stay in sync, and that *anyone* who wants
to introduce a new module into the stdlib must make sure it has a Python
version if that is practical.  IMO both of these are policies that make
sense for CPython even aside from the existence of other implementations:
Python is easier to read and understand, so where practical we should
provide a Python version of any module in the stdlib, for the benefit
of CPython users.
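
By way of illustration, the kind of test the PEP has in mind can be written
once and run against both versions; test.support.import_fresh_module already
supports this (a hedged sketch using the usual heapq example):

    import unittest
    from test.support import import_fresh_module

    py_heapq = import_fresh_module('heapq', blocked=['_heapq'])  # pure Python
    c_heapq = import_fresh_module('heapq', fresh=['_heapq'])     # accelerated

    class TestHeappush(unittest.TestCase):
        def check(self, module):
            heap = []
            for n in (3, 1, 2):
                module.heappush(heap, n)
            self.assertEqual(module.heappop(heap), 1)

        def test_python_version(self):
            self.check(py_heapq)

        def test_c_version(self):
            self.check(c_heapq)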

It doesn't sound like a great burden to me, but I'm not really qualified
to judge, since I don't generally work on C code.

Also, could you expand on "It often even runs counter to the interest of
CPython itself"?  I'm not seeing that, unless you are talking about
the parameter-binding micro-optimization, which I think we discourage
these days anyway.

> I think this social problem of the PEP can only be solved if the CPython 
> project stops doing the major share of the stdlib maintenance, thus freeing 
> its own developer capacities to focus on CPython related improvements and 
> optimisations, just like the other implementations currently do. I'm not 
> sure we want that at this point.

Personally, I consider myself a stdlib maintainer:  I only occasionally
dabble in C code when fixing bugs that annoy me for some reason.
I suppose that's why I'm one of the people backing this PEP.  I think
there are other CPython developers who might say the same thing.

--
R. David Murray           http://www.bitdance.com

From solipsis at pitrou.net  Tue Apr 19 16:46:58 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Tue, 19 Apr 2011 16:46:58 +0200
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
 Compatibiilty Requirements
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<ioj581$sl9$1@dough.gmane.org>
	<20110419143802.327D82500DF@mailhost.webabinitio.net>
Message-ID: <20110419164658.535d5a55@pitrou.net>

On Tue, 19 Apr 2011 10:37:41 -0400
"R. David Murray" <rdmurray at bitdance.com> wrote:

> On Tue, 19 Apr 2011 07:06:09 +0200, Stefan Behnel <stefan_ml at behnel.de> wrote:
> > That's what makes the PEP feel so unfair to CPython developers, because 
> > they are the ones who carry most of the burden of maintaining the stdlib in 
> > the first place, and who will most likely continue to carry it, because 
> > other implementations will continue to be occupied with their own core 
> > development for another while or two. It is nice to read that other 
> > implementations are contributing back patches that simplify their own reuse 
> > of the stdlib code. However, that does not yet make them equal contributors 
> > to the development and the maintenance of the stdlib, and is of very little 
> > worth to the CPython project. It often even runs counter to the interest of 
> > CPython itself.
> 
> So, the PEP makes the burden worse in that it requires that someone who
> works on a module with a C accelerator must make sure that any existing
> Python version and the C version stay in sync, and that *anyone* who wants
> to introduce a new module into the stdlib must make sure it has a Python
> version if that is practical.  IMO both of these are policies that make
> sense for CPython even aside from the existence of other implementations:
> Python is easier to read and understand, so where practical we should
> provide a Python version of any module in the stdlib, for the benefit
> of CPython users.
> 
> It doesn't sound like a great burden to me, but I'm not really qualified
> to judge, since I don't generally work on C code.

I think it's ok. Our experience on the io module proves, I think,
that it's indeed useful to have a pure Python (pseudocode-like)
implementation.

Regards

Antoine.



From rdmurray at bitdance.com  Tue Apr 19 16:50:28 2011
From: rdmurray at bitdance.com (R. David Murray)
Date: Tue, 19 Apr 2011 10:50:28 -0400
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
	Compatibiilty Requirements
In-Reply-To: <iojmi8$nhg$1@dough.gmane.org>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<ioj581$sl9$1@dough.gmane.org>
	<BANLkTimn_A8OvjEvET5JDrZhBZ+2OO3=sQ@mail.gmail.com>
	<iojmi8$nhg$1@dough.gmane.org>
Message-ID: <20110419145048.87EFB2500D1@mailhost.webabinitio.net>

On Tue, 19 Apr 2011 12:01:44 +0200, Stefan Behnel <stefan_ml at behnel.de> wrote:
> A related question is: when other Python VM projects try to port a given C 
> module, would they actually invest the time to write a pure Python version 
> that may or may not run within acceptable performance bounds for them, or 
> would they prefer saving time by writing only a native implementation 
> directly for their VM for performance reasons? Maybe both, maybe not. If 
> they end up writing a native version after prototyping in Python, is the 
> prototype worth including in the shared stdlib, even if its performance is 
> completely unacceptable for everyone? Or, if they write a partial module 
> and implement another part of it natively, would the incomplete 
> implementation qualify as a valid addition to the shared stdlib?

I would say yes, it is worth including.  And even more worth including are
any additional tests they develop to validate their implementation.

> Implementing a 100% compatible and "fast enough" Python version of a module 
> is actually a rather time consuming task. I think we are expecting some 
> altruism here that is easily sacrificed for time constraints, in any of the 
> Python VM projects. CPython is just in the unlucky position of representing 
> the status-quo.

Well, I don't think we are really expecting altruism.  We're trying
to leverage the work the community is doing, by drawing as much as possible
of the Python code and validation tests that get created into a common stdlib.

If a module in the wild is being considered for inclusion in the stdlib,
it will need to have a Python version if practical.  Since we accept
so few modules anyway (for good reason), I really don't see this as a
big deal.  And, there's always the practicality beats purity argument:
if the PEP turns out to really get in the way of something everyone wants,
then we can agree to an exception.

--
R. David Murray           http://www.bitdance.com

From rdmurray at bitdance.com  Tue Apr 19 17:18:37 2011
From: rdmurray at bitdance.com (R. David Murray)
Date: Tue, 19 Apr 2011 11:18:37 -0400
Subject: [Python-Dev] PEP 399: Pure Python/C Accelerator Module
	Compatibiilty Requirements
In-Reply-To: <BANLkTikyObawauTTgexkF-J=jSKUOSM1xQ@mail.gmail.com>
References: <BANLkTikuhnx-+-jrmwREcEMFAqx-PKy7oA@mail.gmail.com>
	<ioj581$sl9$1@dough.gmane.org>
	<BANLkTi=1t=YH7Wa4u2cNtoEDVB8=WqMBjg@mail.gmail.com>
	<BANLkTikyObawauTTgexkF-J=jSKUOSM1xQ@mail.gmail.com>
Message-ID: <20110419151857.AA91E2500DC@mailhost.webabinitio.net>

On Tue, 19 Apr 2011 14:29:24 +0200, Maciej Fijalkowski <fijall at gmail.com> wrote:
> > Once this move is made/accepted, I would expect the other
> > implementation to rapidly move away from their custom implementations
> > of the stdlib and contribute to the shared code base and
> > documentation. Yes, this places a burden on CPython, but in the long
> > term in benefits *all* of the projects equally by simply having more
> > active contributors.
> 
> I would also like to point out that some valuable contributions were
> made already by other implementations. When talking about stdlib, it's
> mostly in the area of test suite, but not only in terms of "skip those
> tests", but also improving test coverage and even fixing bugs. Unicode
> fixes were prototyped on PyPy first and some PyPy optimizations were
> ported to CPython (the original method cache patch came from Armin
> Rigo as far as I remember). So it's not completely "Cpython's burden"
> only.

Yes, and you also need to keep in mind that several developers wear
multiple hats, and contribute to CPython on a regular or semi-regular
basis.

It is also enlightening to look at the output of hg churn.  The number
of active CPython developers over the past year is not huge, and very
few of them have spoken up in this thread.

--
R. David Murray           http://www.bitdance.com

From merwok at netwok.org  Tue Apr 19 17:48:33 2011
From: merwok at netwok.org (=?UTF-8?Q?=C3=89ric_Araujo?=)
Date: Tue, 19 Apr 2011 17:48:33 +0200
Subject: [Python-Dev] [Python-checkins] cpython (2.7): Fix a few hyphens
 in argparse.rst.
In-Reply-To: <E1QBBtS-0006Zg-BE@dinsdale.python.org>
References: <E1QBBtS-0006Zg-BE@dinsdale.python.org>
Message-ID: <26a8f1c28020e94d7dc1cc2c610debe6@netwok.org>

 Hi,

> summary:
>   Fix a few hyphens in argparse.rst.

> -   :synopsis: Command-line option and argument parsing library.
> +   :synopsis: Command-line option and argument-parsing library.

 I believe that change should be reverted.  "argument parsing library" is
 a noun determined (qualified) by another noun itself determined by a
 noun, not by an adjective.

> -  followed by zero or one command-line args.  When parsing the 
> command-line, if
> +  followed by zero or one command-line args.  When parsing the 
> command line, if

 You changed "arg" to "argument" in one place but not throughout.  I
 think it's best to use only full words in the docs (and for names in
 code, as recommended by PEP 8 :).

 Regards

From doug.hellmann at gmail.com  Tue Apr 19 21:20:13 2011
From: doug.hellmann at gmail.com (Doug Hellmann)
Date: Tue, 19 Apr 2011 15:20:13 -0400
Subject: [Python-Dev] Drop OS/2 and VMS support?
In-Reply-To: <4DAD9DDD.2080301@egenix.com>
References: <1303220774.8140.8.camel@marge> <4DAD9DDD.2080301@egenix.com>
Message-ID: <8E0C8D8E-41E7-4510-92DB-18591C25C6DA@gmail.com>


On Apr 19, 2011, at 10:36 AM, M.-A. Lemburg wrote:

> Victor Stinner wrote:
>> Hi,
>> 
>> I asked one year ago if we should drop OS/2 support: Andrew MacIntyre,
>> our OS/2 maintainer, answered:
>> http://mail.python.org/pipermail/python-dev/2010-April/099477.html
>> 
>> Extract: << The 3.x branch needs quite a bit of work on OS/2 to 
>> deal with Unicode, as OS/2 was one of the earlier OSes with full 
>> multiple language support and IBM developed a unique API.  I'm still 
>> struggling to come to terms with this, partly because I myself don't 
>> "need" it. >>
>> 
>> So one year later, Python 3 does still not support OS/2.
>> 
>> --
>> 
>> About VMS: I don't know if anyone is using Python (2 or 3) on VMS, or if
>> Python 3 does work on VMS. I bet that it does just not compile :-)
>> 
>> I don't know anyone using VMS or OS/2.
>> 
>> --
>> 
>> There are 39 #ifdef VMS and 52 #ifdef OS2. We can keep them and wait
>> until someone works on these OSes to ensure that the test suite passes. But
>> if nobody cares about these OSes and nobody wants to maintain them, it
>> would be easier for the maintenance of the Python source code base to
>> remove the OS-specific code.
>> 
>> Well, not "remove" directly, but plan to remove it using the PEP 11
>> procedure (mark OS/2 and VMS as unsupported, and remove the code in
>> Python 3.4).
> 
> The Python core team is not really representative of the Python
> community users, so I think this needs a different approach:
> 
> Instead of simply deprecating OSes without notice to the general
> Python community, how about doing a "call for support" for these
> OSes ?
> 
> If that doesn't turn up maintainers, then we can take the PEP 11
> route.

Victor, if you want to post the "call for support" to Python Insider, let me know off list and I will set you up with access.

Doug


From mal at egenix.com  Tue Apr 19 21:51:34 2011
From: mal at egenix.com (M.-A. Lemburg)
Date: Tue, 19 Apr 2011 21:51:34 +0200
Subject: [Python-Dev] Drop OS/2 and VMS support?
In-Reply-To: <8E0C8D8E-41E7-4510-92DB-18591C25C6DA@gmail.com>
References: <1303220774.8140.8.camel@marge> <4DAD9DDD.2080301@egenix.com>
	<8E0C8D8E-41E7-4510-92DB-18591C25C6DA@gmail.com>
Message-ID: <4DADE7C6.6040101@egenix.com>

Doug Hellmann wrote:
> 
> On Apr 19, 2011, at 10:36 AM, M.-A. Lemburg wrote:
> 
>> Victor Stinner wrote:
>>> Hi,
>>>
>>> I asked one year ago if we should drop OS/2 support: Andrew MacIntyre,
>>> our OS/2 maintainer, answered:
>>> http://mail.python.org/pipermail/python-dev/2010-April/099477.html
>>>
>>> Extract: << The 3.x branch needs quite a bit of work on OS/2 to 
>>> deal with Unicode, as OS/2 was one of the earlier OSes with full 
>>> multiple language support and IBM developed a unique API.  I'm still 
>>> struggling to come to terms with this, partly because I myself don't 
>>> "need" it. >>
>>>
>>> So one year later, Python 3 does still not support OS/2.
>>>
>>> --
>>>
>>> About VMS: I don't know if anyone is using Python (2 or 3) on VMS, or if
>>> Python 3 does work on VMS. I bet that it does just not compile :-)
>>>
>>> I don't know anyone using VMS or OS/2.
>>>
>>> --
>>>
>>> There are 39 #ifdef VMS and 52 #ifdef OS2. We can keep them and wait
>>> until someone works on these OSes to ensure that the test suite passes. But
>>> if nobody cares about these OSes and nobody wants to maintain them, it
>>> would be easier for the maintenance of the Python source code base to
>>> remove the OS-specific code.
>>>
>>> Well, not "remove" directly, but plan to remove it using the PEP 11
>>> procedure (mark OS/2 and VMS as unsupported, and remove the code in
>>> Python 3.4).
>>
>> The Python core team is not really representative of the Python
>> community users, so I think this needs a different approach:
>>
>> Instead of simply deprecating OSes without notice to the general
>> Python community, how about doing a "call for support" for these
>> OSes ?
>>
>> If that doesn't turn up maintainers, then we can take the PEP 11
>> route.
> 
> Victor, if you want to post the "call for support" to Python Insider, let me know off list and I will set you up with access.

I can help with that if you like.

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Apr 19 2011)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________

::: Try our new mxODBC.Connect Python Database Interface for free ! ::::


   eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
    D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
           Registered at Amtsgericht Duesseldorf: HRB 46611
               http://www.egenix.com/company/contact/

From victor.stinner at haypocalc.com  Tue Apr 19 22:14:06 2011
From: victor.stinner at haypocalc.com (Victor Stinner)
Date: Tue, 19 Apr 2011 22:14:06 +0200
Subject: [Python-Dev] Drop OS/2 and VMS support?
In-Reply-To: <8E0C8D8E-41E7-4510-92DB-18591C25C6DA@gmail.com>
References: <1303220774.8140.8.camel@marge> <4DAD9DDD.2080301@egenix.com>
	<8E0C8D8E-41E7-4510-92DB-18591C25C6DA@gmail.com>
Message-ID: <1303244046.21240.6.camel@marge>

Le mardi 19 avril 2011 à 15:20 -0400, Doug Hellmann a écrit :
> > The Python core team is not really representative of the Python
> > community users, so I think this needs a different approach:
> > 
> > Instead of simply deprecating OSes without notice to the general
> > Python community, how about doing a "call for support" for these
> > OSes ?
> > 
> > If that doesn't turn up maintainers, then we can take the PEP 11
> > route.
> 
> Victor, if you want to post the "call for support" to Python Insider,
> let me know off list and I will set you up with access.

If we ask users if they want to keep OS/2 and VMS, I expect that at
least someone would like to keep them. But it doesn't solve the
maintenance problem: we need maintainers (developers), not users.

If a "call for support" can help us maintain these OSes, nice. But I
don't want to touch these OSes (I want to do less work, not more
work :-)), so I don't want to write such a call myself. If you feel
concerned about this issue, contact Doug to write the call ;-)

Victor


From solipsis at pitrou.net  Tue Apr 19 22:48:16 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Tue, 19 Apr 2011 22:48:16 +0200
Subject: [Python-Dev] Drop OS/2 and VMS support?
References: <1303220774.8140.8.camel@marge> <4DAD9DDD.2080301@egenix.com>
	<8E0C8D8E-41E7-4510-92DB-18591C25C6DA@gmail.com>
Message-ID: <20110419224816.467c30a4@pitrou.net>

On Tue, 19 Apr 2011 15:20:13 -0400
Doug Hellmann <doug.hellmann at gmail.com> wrote:
> 
> Victor, if you want to post the "call for support" to Python Insider, let me know off list and I will set you up with access.

Doesn't it have a better chance of succeeding if simply posted to
comp.lang.python?

Regards

Antoine.



From martin at v.loewis.de  Tue Apr 19 23:21:24 2011
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Tue, 19 Apr 2011 23:21:24 +0200
Subject: [Python-Dev] Drop OS/2 and VMS support?
In-Reply-To: <1303220774.8140.8.camel@marge>
References: <1303220774.8140.8.camel@marge>
Message-ID: <4DADFCD4.20705@v.loewis.de>

> Well, not "remove" directly, but plan to remove it using the PEP 11
> procedure (mark OS/2 and VMS as unsupported, and remove the code in
> Python 3.4).

I think the PEP 11 procedure is just right for this. It *is* a call
for maintainers, so if any user is interested in ongoing support,
they should step forward.

Having blog posts about these pending deprecations as well sounds
fine to me - also adding them to the 3.2.x release pages would be
appropriate (IMO). It's important that we give users due notice, but
lacking any actual contribution, we should also be able to remove
the code eventually.

So please go ahead and add them to PEP 11.

Regards,
Martin

From victor.stinner at haypocalc.com  Wed Apr 20 10:20:55 2011
From: victor.stinner at haypocalc.com (Victor Stinner)
Date: Wed, 20 Apr 2011 10:20:55 +0200
Subject: [Python-Dev] [Python-checkins] cpython: Issue #11223: Add
 threading._info() function providing	informations about the
In-Reply-To: <4DAE47FA.7080007@udel.edu>
References: <E1QCIxC-00028d-HR@dinsdale.python.org> <4DAE47FA.7080007@udel.edu>
Message-ID: <1303287655.2126.2.camel@marge>

Hi,

Le mardi 19 avril 2011 à 22:42 -0400, Terry Reedy a écrit :
> On 4/19/2011 5:59 PM, victor.stinner wrote:
> 
> >    Issue #11223: Add threading._info() function providing informations about the
> > thread implementation.
> 
> Since this is being documented, making it part of the public api, why 
> does it have a leading underscore?

Well, I suppose that this function might be specific to CPython. Do you
think that this function can/should be implemented in PyPy, Jython and
IronPython?

Victor


From regebro at gmail.com  Wed Apr 20 10:52:57 2011
From: regebro at gmail.com (Lennart Regebro)
Date: Wed, 20 Apr 2011 10:52:57 +0200
Subject: [Python-Dev] Drop OS/2 and VMS support?
In-Reply-To: <4DADFCD4.20705@v.loewis.de>
References: <1303220774.8140.8.camel@marge> <4DADFCD4.20705@v.loewis.de>
Message-ID: <BANLkTi=A9NGP+qK8V=kKemQ1XeLB=d02PQ@mail.gmail.com>

Various people wrote:
> So please go ahead and add them to PEP 11.

> If you want to post the "call for support" to Python Insider, let me know off list and I will set you up with access.

> Doesn't it have more chances of succeeding if posted to comp.lang.python, simply?

I say "all of the above". It would also be good to find an OS/2 and an
OpenVMS developer mailing list, and post it there.

--
Lennart Regebro, Colliberty: http://www.colliberty.com/
Telephone: +48 691 268 328

From victor.stinner at haypocalc.com  Wed Apr 20 11:37:05 2011
From: victor.stinner at haypocalc.com (Victor Stinner)
Date: Wed, 20 Apr 2011 11:37:05 +0200
Subject: [Python-Dev] Buildbots and faulthandler
Message-ID: <1303292225.2126.39.camel@marge>

Hi,

The new faulthandler module is now fully functional and has no more
known issues. Its timeout feature is used in regrtest to dump the Python
traceback and exit if a test takes more than 1 hour.

Using the regrtest timeout and the faulthandler signal handlers (enabled in
regrtest), I started to collect tracebacks of all the timeouts.
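
If you want to try the same thing locally, the setup is roughly the
following (a simplified sketch of what regrtest does; the exact faulthandler
function names may still evolve):

    import faulthandler

    # Dump the Python traceback on fatal signals (SIGSEGV, SIGFPE,
    # SIGABRT, SIGBUS, SIGILL), for all threads.
    faulthandler.enable(all_threads=True)

    # Watchdog: after one hour, dump the traceback of every thread and
    # exit; this is what produces the "Timeout (1:00:00)!" output below.
    faulthandler.dump_traceback_later(60 * 60, exit=True)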

Open issues:

 * test_threading.test_notify() on Windows
   http://bugs.python.org/issue11769
   Not analyzed yet. I am unable to reproduce it in my VM.

 * test_mmap.test_large_offset() on Mac OS X
   http://bugs.python.org/issue11779
   May be related to (and fixed by) issue #11277, which has a patch.

 * test_threading.test_3_join_in_forked_from_thread() on Ubuntu
   http://bugs.python.org/issue11870
   Only seen once.

 * test_mmap.test_big_buffer() on Mac OS X (it's a crash, bus error)
   http://bugs.python.org/issue11277
   The origin of the problem was already identified, but the trace
   proves that faulthandler is able to catch SIGBUS correctly ;-)

 * test_ttk_guionly on Mac OS X (bus error)
   http://bugs.python.org/issue5120
   Same as #11277 (the origin of the problem was already identified)

Closed issues:

 * test_io.test_interrupted_write_text() on FreeBSD
   http://bugs.python.org/issue11859
   (there was already enough information without faulthandler)

 * test_threadsignals.test_signals() on Mac OS X
   http://bugs.python.org/issue11768
   Race condition (deadlock).

 * test_multiprocessing.test_async_error_callback() on many OSes
   http://bugs.python.org/issue8428
   Race condition.

I'm proud of #11768 (because I fixed it). The bug was a deadlock. It is
usually very hard to reproduce such an issue (a deadlock), and without
faulthandler the only available information was the name of the test file.
With faulthandler, we have not only the name of the test function, but
also the full traceback of the hang and the tracebacks of all the other
threads.

Thanks to the faulthandler trace of #8428, with the traceback of all
threads, Charles-Francois Natali was able to understand and fix another
complex race condition in multiprocessing (at shutdown).

I also fixed other issues (not using faulthandler) and so ALL OUR 3.X
BUILDBOTS ARE GREEN!

... ok ok, except:

 - sparc Debian 3.x: offline for 21 days
 - PPC Leopard 3.x: "hg clean" fails with
twisted.internet.error.ProcessExitedAlready, but I think that, apart from
this buildbot-specific issue, it would be green
 - x86 Windows7 3.x: the master lost the connection to the slave during
test_cmd_line, but that should be a sporadic problem

Anyway, if you see a "Timeout (1:00:00)!" or "Fatal error" (with a
traceback) on a buildbot, please open a new issue (if one doesn't already
exist; search at least for the name of the test file). If you have other
problems related to the regrtest timeout or faulthandler, contact me or open
an issue.

Finally, I'm very happy to see that my faulthandler module was as useful
as I expected: with more information, we are now able to identify race
conditions. I hope that we will fix all the remaining threading, signal and
subprocess race conditions!

Victor


From ncoghlan at gmail.com  Wed Apr 20 12:24:43 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 20 Apr 2011 20:24:43 +1000
Subject: [Python-Dev] [Python-checkins] cpython: Issue #11223: Add
 threading._info() function providing informations about the
In-Reply-To: <1303287655.2126.2.camel@marge>
References: <E1QCIxC-00028d-HR@dinsdale.python.org> <4DAE47FA.7080007@udel.edu>
	<1303287655.2126.2.camel@marge>
Message-ID: <BANLkTinxZBrxL7-2AzUWLRvM-6xstiSR8Q@mail.gmail.com>

On Wed, Apr 20, 2011 at 6:20 PM, Victor Stinner
<victor.stinner at haypocalc.com> wrote:
> Hi,
>
> Le mardi 19 avril 2011 à 22:42 -0400, Terry Reedy a écrit :
>> On 4/19/2011 5:59 PM, victor.stinner wrote:
>>
>> >    Issue #11223: Add threading._info() function providing informations about the
>> > thread implementation.
>>
>> Since this is being documented, making it part of the public api, why
>> does it have a leading underscore?
>
> Well, I suppose that this function might be specific to CPython. Do you
> think that this function can/should be implemented in PyPy, Jython and
> IronPython?

I agree with your reasoning (and the leading underscore), but I
suggest marking the docs with the implementation detail flag.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From ncoghlan at gmail.com  Wed Apr 20 12:29:46 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 20 Apr 2011 20:29:46 +1000
Subject: [Python-Dev] Buildbots and faulthandler
In-Reply-To: <1303292225.2126.39.camel@marge>
References: <1303292225.2126.39.camel@marge>
Message-ID: <BANLkTimotpphFGwxRDnjz9Gr+k+QgsZufA@mail.gmail.com>

On Wed, Apr 20, 2011 at 7:37 PM, Victor Stinner
<victor.stinner at haypocalc.com> wrote:
> Finally, I'm very happy to see that my faulthandler module was as useful
> as I expected: with more information, we are now able to identify race
> conditions. I hope that we will fix all the remaining threading, signal and
> subprocess race conditions!

Excellent work :)

Minor nit: the faulthandler docs could use an "impl-detail" block
similar to the one in the dis module docs.
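
Something along these lines, I mean (written from memory, so treat it as a
sketch of the markup rather than the exact wording):

    .. impl-detail::

       This module is a CPython implementation detail; other Python
       implementations are not required to provide it.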

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From ethan at stoneleaf.us  Wed Apr 20 13:57:53 2011
From: ethan at stoneleaf.us (Ethan Furman)
Date: Wed, 20 Apr 2011 04:57:53 -0700
Subject: [Python-Dev] Buildbots and faulthandler
In-Reply-To: <1303292225.2126.39.camel@marge>
References: <1303292225.2126.39.camel@marge>
Message-ID: <4DAECA41.1050309@stoneleaf.us>

Victor Stinner wrote:
> Finally, I'm very happy to see that my faulthandler module was as useful
> as I expected [...]

Congratulations!  Nice work.

~Ethan~

From exarkun at twistedmatrix.com  Wed Apr 20 14:31:05 2011
From: exarkun at twistedmatrix.com (exarkun at twistedmatrix.com)
Date: Wed, 20 Apr 2011 12:31:05 -0000
Subject: [Python-Dev] [Python-checkins] cpython: Issue #11223:
	Add	threading._info() function providing informations about the
In-Reply-To: <1303287655.2126.2.camel@marge>
References: <E1QCIxC-00028d-HR@dinsdale.python.org> <4DAE47FA.7080007@udel.edu>
	<1303287655.2126.2.camel@marge>
Message-ID: <20110420123105.1992.958167700.divmod.xquotient.867@localhost.localdomain>

On 08:20 am, victor.stinner at haypocalc.com wrote:
>Hi,
>
>Le mardi 19 avril 2011 à 22:42 -0400, Terry Reedy a écrit :
>>On 4/19/2011 5:59 PM, victor.stinner wrote:
>>
>> >    Issue #11223: Add threading._info() function providing 
>>informations about the
>> > thread implementation.
>>
>>Since this is being documented, making it part of the public api, why
>>does it have a leading underscore?
>

Can I propose something wildly radical?  Maybe the guarantees made about 
whether an API will be available in future versions of Python 
(ostensibly what "public" vs "private" is for) should not be tightly 
coupled to the decision about whether to bother to explain what an API 
does?

Jean-Paul

From benjamin at python.org  Wed Apr 20 15:11:48 2011
From: benjamin at python.org (Benjamin Peterson)
Date: Wed, 20 Apr 2011 08:11:48 -0500
Subject: [Python-Dev] [Python-checkins] cpython: Issue #11223: Add
 threading._info() function providing informations about the
In-Reply-To: <20110420123105.1992.958167700.divmod.xquotient.867@localhost.localdomain>
References: <E1QCIxC-00028d-HR@dinsdale.python.org> <4DAE47FA.7080007@udel.edu>
	<1303287655.2126.2.camel@marge>
	<20110420123105.1992.958167700.divmod.xquotient.867@localhost.localdomain>
Message-ID: <BANLkTim+UKzjoEs+aRaVmeM-ami+YL+tEA@mail.gmail.com>

2011/4/20  <exarkun at twistedmatrix.com>:
> On 08:20 am, victor.stinner at haypocalc.com wrote:
>>
>> Hi,
>>
>> Le mardi 19 avril 2011 à 22:42 -0400, Terry Reedy a écrit :
>>>
>>> On 4/19/2011 5:59 PM, victor.stinner wrote:
>>>
>>> >    Issue #11223: Add threading._info() function providing informations
>>> > about the
>>> > thread implementation.
>>>
>>> Since this is being documented, making it part of the public api, why
>>> does it have a leading underscore?
>>
>
> Can I propose something wildly radical?  Maybe the guarantees made about
> whether an API will be available in future versions of Python (ostensibly
> what "public" vs "private" is for) should not be tightly coupled to the
> decision about whether to bother to explain what an API does?

What criteria would you propose to replace it with?



-- 
Regards,
Benjamin

From victor.stinner at haypocalc.com  Wed Apr 20 16:01:34 2011
From: victor.stinner at haypocalc.com (Victor Stinner)
Date: Wed, 20 Apr 2011 16:01:34 +0200
Subject: [Python-Dev] [Python-checkins] cpython: Issue #11223: Add
 threading._info() function providing informations about the
In-Reply-To: <BANLkTinxZBrxL7-2AzUWLRvM-6xstiSR8Q@mail.gmail.com>
References: <E1QCIxC-00028d-HR@dinsdale.python.org>
	<4DAE47FA.7080007@udel.edu> <1303287655.2126.2.camel@marge>
	<BANLkTinxZBrxL7-2AzUWLRvM-6xstiSR8Q@mail.gmail.com>
Message-ID: <1303308094.9838.12.camel@marge>

Le mercredi 20 avril 2011 à 20:24 +1000, Nick Coghlan a écrit :
> On Wed, Apr 20, 2011 at 6:20 PM, Victor Stinner
> <victor.stinner at haypocalc.com> wrote:
> > Hi,
> >
> > Le mardi 19 avril 2011 à 22:42 -0400, Terry Reedy a écrit :
> >> On 4/19/2011 5:59 PM, victor.stinner wrote:
> >>
> >> >    Issue #11223: Add threading._info() function providing informations about the
> >> > thread implementation.
> >>
> >> Since this is being documented, making it part of the public api, why
> >> does it have a leading underscore?
> >
> > Well, I suppose that this function might be specific to CPython. Do you
> > think that this function can/should be implemented in PyPy, Jython and
> > IronPython?
> 
> I agree with your reasoning (and the leading underscore), but I
> suggest marking the docs with the implementation detail flag.

I chose to return a dict to be flexible: any thread implementation may
add new specific keys. There is just one mandatory key: 'name', name of
the thread implementation (nt, os2, pthread or solaris for CPython 3.3).

http://docs.python.org/dev/py3k/library/threading.html#threading._info

After thinking about it twice, I believe that PyPy, Jython and IronPython
should be able to fill in the only required key (name).

PyPy reuses the code from CPython, so it can just reuse the same names
(pthread or nt). I suppose that IronPython uses Windows threads and
semaphores, so it can use the name 'nt'. For Jython, I don't know if
Jython is able to get the name of the thread implementation used by the
JVM. If it is not, something like 'jvm' can be used.

threading._info() is a function: it can call other functions to retrieve
information (it is not hardcoded or initialized at startup).
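
As a rough sketch of how it could be used (with the function still named
threading._info() for now; only the 'name' key is guaranteed, any other
keys are implementation specific):

    import threading

    info = threading._info()
    print("thread implementation:", info['name'])
    if info['name'] == 'pthread':
        # extra keys such as 'version' are optional, hence .get()
        print("pthread version:", info.get('version', 'unknown'))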

What do you think? Can I remove the leading underscore? :-)

Victor


From exarkun at twistedmatrix.com  Wed Apr 20 16:11:37 2011
From: exarkun at twistedmatrix.com (exarkun at twistedmatrix.com)
Date: Wed, 20 Apr 2011 14:11:37 -0000
Subject: [Python-Dev] [Python-checkins] cpython: Issue #11223:
	Add	threading._info() function providing informations about the
In-Reply-To: <BANLkTim+UKzjoEs+aRaVmeM-ami+YL+tEA@mail.gmail.com>
References: <E1QCIxC-00028d-HR@dinsdale.python.org> <4DAE47FA.7080007@udel.edu>
	<1303287655.2126.2.camel@marge>
	<20110420123105.1992.958167700.divmod.xquotient.867@localhost.localdomain>
	<BANLkTim+UKzjoEs+aRaVmeM-ami+YL+tEA@mail.gmail.com>
Message-ID: <20110420141137.1992.1520781814.divmod.xquotient.875@localhost.localdomain>

On 01:11 pm, benjamin at python.org wrote:
>2011/4/20  <exarkun at twistedmatrix.com>:
>>On 08:20 am, victor.stinner at haypocalc.com wrote:
>>>
>>>Hi,
>>>
>>>Le mardi 19 avril 2011 à 22:42 -0400, Terry Reedy a écrit :
>>>>
>>>>On 4/19/2011 5:59 PM, victor.stinner wrote:
>>>>
>>>> >    Issue #11223: Add threading._info() function providing 
>>>>informations
>>>> > about the
>>>> > thread implementation.
>>>>
>>>>Since this is being documented, making it part of the public api, 
>>>>why
>>>>does it have a leading underscore?
>>>
>>
>>Can I propose something wildly radical?  Maybe the guarantees made 
>>about
>>whether an API will be available in future versions of Python 
>>(ostensibly
>>what "public" vs "private" is for) should not be tightly coupled to 
>>the
>>decision about whether to bother to explain what an API does?
>
>What criteria would you propose to replace it with?

I'm not sure what kind of criteria you're thinking of.  I'm only 
suggesting that:

  1) Document whatever you want (preferably as much as possible)

  2) Make "privateness" defined by whether there is a leading underscore

It is a big mistake to think that documentation isn't necessary for 
things just because you don't want application developers to use them. 
Maintainers benefit from it just as much.

Jean-Paul

From rdmurray at bitdance.com  Wed Apr 20 16:18:59 2011
From: rdmurray at bitdance.com (R. David Murray)
Date: Wed, 20 Apr 2011 10:18:59 -0400
Subject: [Python-Dev] [Python-checkins] cpython: Issue #11223: Add
	threading._info() function providing informations about the
In-Reply-To: <BANLkTim+UKzjoEs+aRaVmeM-ami+YL+tEA@mail.gmail.com>
References: <E1QCIxC-00028d-HR@dinsdale.python.org>
	<4DAE47FA.7080007@udel.edu> <1303287655.2126.2.camel@marge>
	<20110420123105.1992.958167700.divmod.xquotient.867@localhost.localdomain>
	<BANLkTim+UKzjoEs+aRaVmeM-ami+YL+tEA@mail.gmail.com>
Message-ID: <20110420141919.C22732500DD@mailhost.webabinitio.net>

On Wed, 20 Apr 2011 08:11:48 -0500, Benjamin Peterson <benjamin at python.org> wrote:
> 2011/4/20  <exarkun at twistedmatrix.com>:
> > On 08:20 am, victor.stinner at haypocalc.com wrote:
> >>
> >> Hi,
> >>
> >> Le mardi 19 avril 2011 à 22:42 -0400, Terry Reedy a écrit :
> >>>
> >>> On 4/19/2011 5:59 PM, victor.stinner wrote:
> >>>
> >>> >    Issue #11223: Add threading._info() function providing informations
> >>> > about the
> >>> > thread implementation.
> >>>
> >>> Since this is being documented, making it part of the public api, why
> >>> does it have a leading underscore?
> >>
> >
> > Can I propose something wildly radical?  Maybe the guarantees made about
> > whether an API will be available in future versions of Python (ostensibly
> > what "public" vs "private" is for) should not be tightly coupled to the
> > decision about whether to bother to explain what an API does?
> 
> What criteria would you propose to replace it with?

I believe Jean-Paul was suggesting that just because an interface is
marked as "private" and might go away or change in the future does not
automatically mean it must also be undocumented.  To which I say +1.
(Note that we already have a whole module like that: test.support.)

--
R. David Murray           http://www.bitdance.com

From benjamin at python.org  Wed Apr 20 18:56:41 2011
From: benjamin at python.org (Benjamin Peterson)
Date: Wed, 20 Apr 2011 11:56:41 -0500
Subject: [Python-Dev] [Python-checkins] cpython: Issue #11223: Add
 threading._info() function providing informations about the
In-Reply-To: <20110420141919.C22732500DD@mailhost.webabinitio.net>
References: <E1QCIxC-00028d-HR@dinsdale.python.org> <4DAE47FA.7080007@udel.edu>
	<1303287655.2126.2.camel@marge>
	<20110420123105.1992.958167700.divmod.xquotient.867@localhost.localdomain>
	<BANLkTim+UKzjoEs+aRaVmeM-ami+YL+tEA@mail.gmail.com>
	<20110420141919.C22732500DD@mailhost.webabinitio.net>
Message-ID: <BANLkTi=yL2t40cDBVnibGWHrYyt9i6ONnA@mail.gmail.com>

2011/4/20 R. David Murray <rdmurray at bitdance.com>:
> On Wed, 20 Apr 2011 08:11:48 -0500, Benjamin Peterson <benjamin at python.org> wrote:
>> 2011/4/20  <exarkun at twistedmatrix.com>:
>> > On 08:20 am, victor.stinner at haypocalc.com wrote:
>> >>
>> >> Hi,
>> >>
>> >> Le mardi 19 avril 2011 à 22:42 -0400, Terry Reedy a écrit :
>> >>>
>> >>> On 4/19/2011 5:59 PM, victor.stinner wrote:
>> >>>
>> >>> >    Issue #11223: Add threading._info() function providing informations
>> >>> > about the
>> >>> > thread implementation.
>> >>>
>> >>> Since this is being documented, making it part of the public api, why
>> >>> does it have a leading underscore?
>> >>
>> >
>> > Can I propose something wildly radical?  Maybe the guarantees made about
>> > whether an API will be available in future versions of Python (ostensibly
>> > what "public" vs "private" is for) should not be tightly coupled to the
>> > decision about whether to bother to explain what an API does?
>>
>> What criteria would you propose to replace it with?
>
> I believe Jean-Paul was suggesting that just because an interface is
> marked as "private" and might go away or change in the future does not
> automatically mean it must also be undocumented.  To which I say +1.
> (Note that we already have a whole module like that: test.support.)

I think that test.* as a special case is private stuff.


-- 
Regards,
Benjamin

From benjamin at python.org  Wed Apr 20 18:57:38 2011
From: benjamin at python.org (Benjamin Peterson)
Date: Wed, 20 Apr 2011 11:57:38 -0500
Subject: [Python-Dev] [Python-checkins] cpython: Issue #11223: Add
 threading._info() function providing informations about the
In-Reply-To: <1303308094.9838.12.camel@marge>
References: <E1QCIxC-00028d-HR@dinsdale.python.org> <4DAE47FA.7080007@udel.edu>
	<1303287655.2126.2.camel@marge>
	<BANLkTinxZBrxL7-2AzUWLRvM-6xstiSR8Q@mail.gmail.com>
	<1303308094.9838.12.camel@marge>
Message-ID: <BANLkTi=2N0OxDO6s8-EtDK1n_GpYGS0Z8Q@mail.gmail.com>

2011/4/20 Victor Stinner <victor.stinner at haypocalc.com>:
> Le mercredi 20 avril 2011 à 20:24 +1000, Nick Coghlan a écrit :
>> On Wed, Apr 20, 2011 at 6:20 PM, Victor Stinner
>> <victor.stinner at haypocalc.com> wrote:
>> > Hi,
>> >
> >> > Le mardi 19 avril 2011 à 22:42 -0400, Terry Reedy a écrit :
>> >> On 4/19/2011 5:59 PM, victor.stinner wrote:
>> >>
>> >> >    Issue #11223: Add threading._info() function providing informations about the
>> >> > thread implementation.
>> >>
>> >> Since this is being documented, making it part of the public api, why
>> >> does it have a leading underscore?
>> >
>> > Well, I suppose that this function might be specific to CPython. Do you
>> > think that this function can/should be implemented in PyPy, Jython and
>> > IronPython?
>>
>> I agree with your reasoning (and the leading underscore), but I
>> suggest marking the docs with the implementation detail flag.
>
> I chose to return a dict to be flexible: any thread implementation may
> add new specific keys. There is just one mandatory key: 'name', name of
> the thread implementation (nt, os2, pthread or solaris for CPython 3.3).

How about using a structseq ala sys.float_info or sys.long_info? (In
fact, we might want to put this in sys.)


-- 
Regards,
Benjamin

From g.rodola at gmail.com  Wed Apr 20 20:09:04 2011
From: g.rodola at gmail.com (=?ISO-8859-1?Q?Giampaolo_Rodol=E0?=)
Date: Wed, 20 Apr 2011 20:09:04 +0200
Subject: [Python-Dev] cpython: os.sendfile(): on Linux if offset
 parameter is passed as NULL we were
In-Reply-To: <20110419133548.5d91845c@pitrou.net>
References: <E1QC5eb-0007jZ-VX@dinsdale.python.org>
	<20110419133548.5d91845c@pitrou.net>
Message-ID: <BANLkTinQjeR_6XdrXqMHxHnhqyUp9mOwOQ@mail.gmail.com>

No we haven't.
I plan to make a single commit for offset=None on Linux together with a
series of other tests I have implemented for the py-sendfile module [1].
In detail: tests for a small file, an empty file and (most importantly) a large file:
http://code.google.com/p/py-sendfile/source/browse/trunk/test/test_sendfile.py?spec=svn68&r=68#296

[1] http://code.google.com/p/py-sendfile
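
To picture the change being tested, the behaviour on Linux is roughly this
(an illustrative sketch, not the actual test code):

    import os, socket

    # With offset=None, os.sendfile() should now return just the number of
    # bytes sent (an int), not the old erroneous (bytes_sent, None) tuple.
    a, b = socket.socketpair()
    with open(__file__, 'rb') as src:
        sent = os.sendfile(a.fileno(), src.fileno(), None, 1024)
    assert isinstance(sent, int)
    a.close()
    b.close()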

--- Giampaolo
http://code.google.com/p/pyftpdlib
http://code.google.com/p/psutil



2011/4/19 Antoine Pitrou <solipsis at pitrou.net>:
> On Tue, 19 Apr 2011 09:47:21 +0200
> giampaolo.rodola <python-checkins at python.org> wrote:
>
>> http://hg.python.org/cpython/rev/8c49f7fbba1d
>> changeset:   69437:8c49f7fbba1d
>> user:        Giampaolo Rodola' <g.rodola at gmail.com>
>> date:        Tue Apr 19 09:47:16 2011 +0200
>> summary:
>>   os.sendfile(): on Linux if offset parameter is passed as NULL we were erroneously returning a (bytes_sent, None) tuple instead of bytes_sent
>
> Do we have tests for this?
>
> Regards
>
> Antoine.
>
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/g.rodola%40gmail.com
>

From tjreedy at udel.edu  Wed Apr 20 23:38:14 2011
From: tjreedy at udel.edu (Terry Reedy)
Date: Wed, 20 Apr 2011 17:38:14 -0400
Subject: [Python-Dev] cpython: os.sendfile(): on Linux if offset
 parameter is passed as NULL we were
In-Reply-To: <BANLkTinQjeR_6XdrXqMHxHnhqyUp9mOwOQ@mail.gmail.com>
References: <E1QC5eb-0007jZ-VX@dinsdale.python.org>	<20110419133548.5d91845c@pitrou.net>
	<BANLkTinQjeR_6XdrXqMHxHnhqyUp9mOwOQ@mail.gmail.com>
Message-ID: <ionjo7$mak$1@dough.gmane.org>

On 4/20/2011 2:09 PM, Giampaolo Rodolà wrote:
> No we haven't.

"No we haven't" what? Such out-of-context responses exemplify why 
top-posting is greatly inferior for readers, who vastly outnumber the 
one writer. If that line had been put where it belongs, right after what 
it refers to, it would have been clear.

-- 
Terry Jan Reedy



From tjreedy at udel.edu  Wed Apr 20 23:47:45 2011
From: tjreedy at udel.edu (Terry Reedy)
Date: Wed, 20 Apr 2011 17:47:45 -0400
Subject: [Python-Dev] [Python-checkins] cpython: Issue #11223: Add
 threading._info() function providing informations about the
In-Reply-To: <BANLkTi=2N0OxDO6s8-EtDK1n_GpYGS0Z8Q@mail.gmail.com>
References: <E1QCIxC-00028d-HR@dinsdale.python.org>
	<4DAE47FA.7080007@udel.edu>	<1303287655.2126.2.camel@marge>	<BANLkTinxZBrxL7-2AzUWLRvM-6xstiSR8Q@mail.gmail.com>	<1303308094.9838.12.camel@marge>
	<BANLkTi=2N0OxDO6s8-EtDK1n_GpYGS0Z8Q@mail.gmail.com>
Message-ID: <ionka3$pfc$1@dough.gmane.org>

On 4/20/2011 12:57 PM, Benjamin Peterson wrote:

>>>>> On 4/19/2011 5:59 PM, victor.stinner wrote:
>>>>>
>>>>>>     Issue #11223: Add threading._info() function providing informations about the
>>>>>> thread implementation.

> How about using a structseq ala sys.float_info or sys.long_info? (In
> fact, we might want to put this in sys.)

sys.thread_info strikes me as a good idea too. The only required field
should be 'name', with '' as the default, indicating no threading.

-- 
Terry Jan Reedy


From tjreedy at udel.edu  Wed Apr 20 23:53:22 2011
From: tjreedy at udel.edu (Terry Reedy)
Date: Wed, 20 Apr 2011 17:53:22 -0400
Subject: [Python-Dev] [Python-checkins] cpython: Issue #11223: Add
 threading._info() function providing informations about the
In-Reply-To: <20110420141137.1992.1520781814.divmod.xquotient.875@localhost.localdomain>
References: <E1QCIxC-00028d-HR@dinsdale.python.org>
	<4DAE47FA.7080007@udel.edu>	<1303287655.2126.2.camel@marge>	<20110420123105.1992.958167700.divmod.xquotient.867@localhost.localdomain>	<BANLkTim+UKzjoEs+aRaVmeM-ami+YL+tEA@mail.gmail.com>
	<20110420141137.1992.1520781814.divmod.xquotient.875@localhost.localdomain>
Message-ID: <ionkkh$qgv$2@dough.gmane.org>

On 4/20/2011 10:11 AM, exarkun at twistedmatrix.com wrote:
> On 01:11 pm, benjamin at python.org wrote:

> It is a big mistake to think that documentation isn't necessary for
> things just because you don't want application developers to use them.
> Maintainers benefit from it just as much.

Maintainers can and will read the doc string.



-- 
Terry Jan Reedy


From victor.stinner at haypocalc.com  Wed Apr 20 23:53:55 2011
From: victor.stinner at haypocalc.com (Victor Stinner)
Date: Wed, 20 Apr 2011 23:53:55 +0200
Subject: [Python-Dev] [Python-checkins] cpython: Issue #11223: Add
 threading._info() function providing informations about the
In-Reply-To: <BANLkTi=2N0OxDO6s8-EtDK1n_GpYGS0Z8Q@mail.gmail.com>
References: <E1QCIxC-00028d-HR@dinsdale.python.org>
	<4DAE47FA.7080007@udel.edu> <1303287655.2126.2.camel@marge>
	<BANLkTinxZBrxL7-2AzUWLRvM-6xstiSR8Q@mail.gmail.com>
	<1303308094.9838.12.camel@marge>
	<BANLkTi=2N0OxDO6s8-EtDK1n_GpYGS0Z8Q@mail.gmail.com>
Message-ID: <1303336435.22095.7.camel@marge>

Le mercredi 20 avril 2011 à 11:57 -0500, Benjamin Peterson a écrit :
> 2011/4/20 Victor Stinner <victor.stinner at haypocalc.com>:
> > Le mercredi 20 avril 2011 à 20:24 +1000, Nick Coghlan a écrit :
> >> On Wed, Apr 20, 2011 at 6:20 PM, Victor Stinner
> >> <victor.stinner at haypocalc.com> wrote:
> >> > Hi,
> >> >
> >> > Le mardi 19 avril 2011 à 22:42 -0400, Terry Reedy a écrit :
> >> >> On 4/19/2011 5:59 PM, victor.stinner wrote:
> >> >>
> >> >> >    Issue #11223: Add threading._info() function providing informations about the
> >> >> > thread implementation.
> >> >>
> >> >> Since this is being documented, making it part of the public api, why
> >> >> does it have a leading underscore?
> >> >
> >> > Well, I suppose that this function might be specific to CPython. Do you
> >> > think that this function can/should be implemented in PyPy, Jython and
> >> > IronPython?
> >>
> >> I agree with your reasoning (and the leading underscore), but I
> >> suggest marking the docs with the implementation detail flag.
> >
> > I chose to return a dict to be flexible: any thread implementation may
> > add new specific keys. There is just one mandatory key: 'name', name of
> > the thread implementation (nt, os2, pthread or solaris for CPython 3.3).
> 
> How about using a structseq ala sys.float_info or sys.long_info? (In
> fact, we might want to put this in sys.)

Would you prefer something like the following example?

>>> sys.thread_info
sys.threadinfo(name='pthread', lock_implementation='semaphore',
version='NPTL 2.11.2')
>>> sys.thread_info
sys.threadinfo(name='nt', lock_implementation='semaphore', version='')
>>> sys.thread_info
sys.threadinfo(name='os2', lock_implementation='', version='')
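
Accessing it would then work like the existing structseqs do, by name or by
index (illustrated here with sys.float_info, which already exists):

    import sys

    print(sys.float_info.max)    # attribute access
    print(sys.float_info[0])     # same value, tuple-style access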

Victor


From tjreedy at udel.edu  Wed Apr 20 23:52:42 2011
From: tjreedy at udel.edu (Terry Reedy)
Date: Wed, 20 Apr 2011 17:52:42 -0400
Subject: [Python-Dev] Buildbots and faulthandler
In-Reply-To: <4DAECA41.1050309@stoneleaf.us>
References: <1303292225.2126.39.camel@marge> <4DAECA41.1050309@stoneleaf.us>
Message-ID: <ionkj9$qgv$1@dough.gmane.org>

On 4/20/2011 7:57 AM, Ethan Furman wrote:
> Victor Stinner wrote:
>> Finally, I'm very happy to see that my faulthandler module was as useful
>> as I expected [...]
>
> Congratulations! Nice work.

Ditto. Multiple pats on the back.

-- 
Terry Jan Reedy


From benjamin at python.org  Thu Apr 21 00:03:19 2011
From: benjamin at python.org (Benjamin Peterson)
Date: Wed, 20 Apr 2011 17:03:19 -0500
Subject: [Python-Dev] [Python-checkins] cpython: Issue #11223: Add
 threading._info() function providing informations about the
In-Reply-To: <1303336435.22095.7.camel@marge>
References: <E1QCIxC-00028d-HR@dinsdale.python.org> <4DAE47FA.7080007@udel.edu>
	<1303287655.2126.2.camel@marge>
	<BANLkTinxZBrxL7-2AzUWLRvM-6xstiSR8Q@mail.gmail.com>
	<1303308094.9838.12.camel@marge>
	<BANLkTi=2N0OxDO6s8-EtDK1n_GpYGS0Z8Q@mail.gmail.com>
	<1303336435.22095.7.camel@marge>
Message-ID: <BANLkTi=wv7eWU+wdhzwPqZGKtQaxvmfd0Q@mail.gmail.com>

2011/4/20 Victor Stinner <victor.stinner at haypocalc.com>:
> Le mercredi 20 avril 2011 à 11:57 -0500, Benjamin Peterson a écrit :
>> How about using a structseq ala sys.float_info or sys.long_info? (In
>> fact, we might want to put this in sys.)
>
> Would you prefer something like the following example?
>
>>>> sys.thread_info
> sys.threadinfo(name='pthread', lock_implementation='semaphore',
> version='NPTL 2.11.2')
>>>> sys.thread_info
> sys.threadinfo(name='nt', lock_implementation='semaphore', version='')
>>>> sys.thread_info
> sys.threadinfo(name='os2', lock_implementation='', version='')

The only thing that would improve that beautiful sight would be
s/threadinfo/thread_info/. :)



-- 
Regards,
Benjamin

From solipsis at pitrou.net  Thu Apr 21 00:14:43 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Thu, 21 Apr 2011 00:14:43 +0200
Subject: [Python-Dev] [Python-checkins] cpython: Issue #11223: Add
 threading._info() function providing informations about the
References: <E1QCIxC-00028d-HR@dinsdale.python.org> <4DAE47FA.7080007@udel.edu>
	<1303287655.2126.2.camel@marge>
	<BANLkTinxZBrxL7-2AzUWLRvM-6xstiSR8Q@mail.gmail.com>
	<1303308094.9838.12.camel@marge>
	<BANLkTi=2N0OxDO6s8-EtDK1n_GpYGS0Z8Q@mail.gmail.com>
	<1303336435.22095.7.camel@marge>
	<BANLkTi=wv7eWU+wdhzwPqZGKtQaxvmfd0Q@mail.gmail.com>
Message-ID: <20110421001443.389da263@pitrou.net>

On Wed, 20 Apr 2011 17:03:19 -0500
Benjamin Peterson <benjamin at python.org> wrote:

> 2011/4/20 Victor Stinner <victor.stinner at haypocalc.com>:
> > Le mercredi 20 avril 2011 à 11:57 -0500, Benjamin Peterson a écrit :
> >> How about using a structseq ala sys.float_info or sys.long_info? (In
> >> fact, we might want to put this in sys.)
> >
> > Would you prefer something like the following example?
> >
> >>>> sys.thread_info
> > sys.threadinfo(name='pthread', lock_implementation='semaphore',
> > version='NPTL 2.11.2')
> >>>> sys.thread_info
> > sys.threadinfo(name='nt', lock_implementation='semaphore', version='')
> >>>> sys.thread_info
> > sys.threadinfo(name='os2', lock_implementation='', version='')
> 
> The only thing that would improve that beautiful sight would be
> s/threadinfo/thread_info/. :)

And None instead of the empty string when a value is unknown/irrelevant.

Regards

Antoine.



From fuzzyman at voidspace.org.uk  Fri Apr 22 00:15:54 2011
From: fuzzyman at voidspace.org.uk (Michael Foord)
Date: Thu, 21 Apr 2011 23:15:54 +0100
Subject: [Python-Dev] Test cases not garbage collected after run
In-Reply-To: <BANLkTi=nwqYS468F7VN+XBH_dZzKt6iFtA@mail.gmail.com>
References: <BANLkTimKhJVdNHYfCh9Hu5NLkFo1ie8x5A@mail.gmail.com>	<4D9DEB19.10307@voidspace.org.uk>	<BANLkTineVvCOJfbgL5-UqiGauSvQ0YYoOQ@mail.gmail.com>	<4D9E1AA4.4020607@voidspace.org.uk>	<BANLkTimd=JpjsbhQe1NkCNs2fL9nZ9T3mg@mail.gmail.com>	<4DA6DBDF.6000202@voidspace.org.uk>
	<BANLkTi=nwqYS468F7VN+XBH_dZzKt6iFtA@mail.gmail.com>
Message-ID: <4DB0AC9A.3020003@voidspace.org.uk>

On 15/04/2011 17:49, Martin (gzlist) wrote:
> On 14/04/2011, Michael Foord<fuzzyman at voidspace.org.uk>  wrote:
>> I'd be interested to know what is keeping the tests alive even when the
>> test suite isn't. As far as I know there is nothing else in unittest
>> that would do that.
> The main cause is some handy code for collecting and filtering tests
> by name, which unintentionally keeps alive a list outside the
> TestSuite instance.
>
> There's also the problem of reference cycles involving exc_info, bound
> methods, and so on that make the lifetimes of test cases
> unpredictable. That's mostly a problem for systems with a very limited
> allotment of certain resources such as socket handles. However it also
> makes ensuring the code doesn't regress back to leaking-the-universe
> more complicated as tests may still survive past the pop.
>
>> It's either a general problem that unittest can fix, or it is a problem
>> *caused* by the bazaar test suite and should be fixed there. Bazaar does
>> some funky stuff copying tests to run them with different backends, so
>> it is possible that this is the cause of the problem (and it isn't a
>> general problem).
> The fact it's easy to accidentally keep objects alive is a general
> problem. If every project that writes their own little test loader
> risks reverting to immortal cases, that's not really progress. The
> Bazaar example is a warning because the intention was the same as
> yours, but ended up being a behaviour regression that went unnoticed
> by most of the developers while crippling others. And as John
> mentioned, the fix hasn't yet landed, mostly because the hack is good
> enough for me and the right thing is too complicated.
>

I can't remember if I replied to this or not. Sorry.

Anyway, so long as *unittest* itself doesn't keep your test cases alive
(which currently it does), then if any individual test framework keeps the
tests alive, that is a bug in the framework (and it can be tested for).

If we stomp on the test instance dictionaries then legitimate use cases 
may be prevented (like test cases copying themselves and executing a 
copy when run - a use case described by Robert Collins in a previous email).

Although other test frameworks may implement additional measures 
required specifically by them, the duty of unittest is just to ensure 
that it doesn't make disposing of test cases *impossible* during normal use.
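
The kind of check I have in mind looks roughly like this (only a sketch;
whether the final assertion holds is exactly what is at stake once suites
and custom frameworks enter the picture):

    import gc, unittest, weakref

    class Example(unittest.TestCase):
        def test_something(self):
            self.assertTrue(True)

    case = Example('test_something')
    ref = weakref.ref(case)
    unittest.TextTestRunner(verbosity=0).run(case)
    del case
    gc.collect()
    # If nothing else holds a reference, the instance should now be gone.
    assert ref() is None, "something is keeping the TestCase alive"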

All the best,

Michael Foord

> Martin


-- 
http://www.voidspace.org.uk/

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing http://www.sqlite.org/different.html


From status at bugs.python.org  Fri Apr 22 18:07:21 2011
From: status at bugs.python.org (Python tracker)
Date: Fri, 22 Apr 2011 18:07:21 +0200 (CEST)
Subject: [Python-Dev] Summary of Python tracker Issues
Message-ID: <20110422160721.E53941D41B@psf.upfronthosting.co.za>


ACTIVITY SUMMARY (2011-04-15 - 2011-04-22)
Python tracker at http://bugs.python.org/

To view or respond to any of the issues listed below, click on the issue.
Do NOT respond to this message.

Issues counts and deltas:
  open    2752 (+18)
  closed 20937 (+38)
  total  23689 (+56)

Open issues with patches: 1194 


Issues opened (38)
==================

#4608: urllib.request.urlopen does not return an iterable object
http://bugs.python.org/issue4608  reopened by orsenthil

#11584: email.decode_header fails if msg.__getitem__ returns Header ob
http://bugs.python.org/issue11584  reopened by r.david.murray

#11619: On Windows, don't encode filenames in the import machinery
http://bugs.python.org/issue11619  reopened by haypo

#11779: test_mmap.test_large_offset() timeout (1 hour) on "AMD64 Snow 
http://bugs.python.org/issue11779  reopened by haypo

#11854: __or__ et al instantiate subclass of set without calling __ini
http://bugs.python.org/issue11854  opened by Robert.Burke

#11856: Optimize parsing of JSON numbers
http://bugs.python.org/issue11856  opened by pitrou

#11858: configparser.ExtendedInterpolation and section case
http://bugs.python.org/issue11858  opened by ciasoms

#11859: test_interrupted_write_text() of test_io failed of Python 3.3 
http://bugs.python.org/issue11859  opened by haypo

#11860: reference 2.3 has text that runs past the page
http://bugs.python.org/issue11860  opened by Mike.Kamermans

#11863: Enforce PEP 11 - remove support for legacy systems
http://bugs.python.org/issue11863  opened by pitrou

#11864: sporadic failure in test_concurrent_futures
http://bugs.python.org/issue11864  opened by pitrou

#11866: race condition in threading._newname()
http://bugs.python.org/issue11866  opened by Peter.Saveliev

#11867: Make test_mailbox deterministic
http://bugs.python.org/issue11867  opened by r.david.murray

#11869: Include information about the bug tracker Rietveld code review
http://bugs.python.org/issue11869  opened by ned.deily

#11870: test_3_join_in_forked_from_thread() of test_threading hangs 1 
http://bugs.python.org/issue11870  opened by haypo

#11871: test_default_timeout() of test_threading.BarrierTests failure:
http://bugs.python.org/issue11871  opened by haypo

#11872: cPickle gives strange error for large objects.
http://bugs.python.org/issue11872  opened by meawoppl

#11873: test_regexp() of test_compileall failure on "x86 OpenIndiana 3
http://bugs.python.org/issue11873  opened by haypo

#11874: argparse assertion failure with brackets in metavars
http://bugs.python.org/issue11874  opened by htnieman

#11877: Change os.fsync() to support physical backing store syncs
http://bugs.python.org/issue11877  opened by sdaoden

#11879: TarFile.chown: should use TarInfo.uid if user lookup fails
http://bugs.python.org/issue11879  opened by mgold-qnx

#11880: add a {dist-info} category to distutils2
http://bugs.python.org/issue11880  opened by dholth

#11882: test_imaplib failed on x86 ubuntu
http://bugs.python.org/issue11882  opened by kasun

#11883: Call connect() before sending an email with smtplib
http://bugs.python.org/issue11883  opened by sandro.tosi

#11884: Argparse calls ngettext but doesn't import it
http://bugs.python.org/issue11884  opened by johnohagan

#11886: test_time.test_tzset() fails on "x86 FreeBSD 7.2 3.x": AEST ti
http://bugs.python.org/issue11886  opened by haypo

#11887: unittest fails on comparing str with bytes if python has the -
http://bugs.python.org/issue11887  opened by haypo

#11888: Add C99's log2() function to the math library
http://bugs.python.org/issue11888  opened by rhettinger

#11889: 'enumerate' 'start' parameter documentation is confusing
http://bugs.python.org/issue11889  opened by phammer

#11893: Obsolete SSLFakeFile in smtplib?
http://bugs.python.org/issue11893  opened by pitrou

#11894: test_multiprocessing failure on "AMD64 OpenIndiana 3.x": KeyEr
http://bugs.python.org/issue11894  opened by haypo

#11895: pybench prep_times calculation error
http://bugs.python.org/issue11895  opened by termim

#11896: Save on Close fails in IDLE, from Linux system
http://bugs.python.org/issue11896  opened by marcus777

#11898: Sending binary data with a POST request in httplib can cause U
http://bugs.python.org/issue11898  opened by bero

#11899: TarFile.gettarinfo modifies self.inodes
http://bugs.python.org/issue11899  opened by mgold-qnx

#11901: Docs for sys.hexversion should give the algorithm
http://bugs.python.org/issue11901  opened by r.david.murray

#11906: Test_argparse failure but only in interactive mode
http://bugs.python.org/issue11906  opened by terry.reedy

#11907: SysLogHandler can't send long messages
http://bugs.python.org/issue11907  opened by lukas.lalinsky



Most recent 15 issues with no replies (15)
==========================================

#11907: SysLogHandler can't send long messages
http://bugs.python.org/issue11907

#11906: Test_argparse failure but only in interactive mode
http://bugs.python.org/issue11906

#11901: Docs for sys.hexversion should give the algorithm
http://bugs.python.org/issue11901

#11898: Sending binary data with a POST request in httplib can cause U
http://bugs.python.org/issue11898

#11894: test_multiprocessing failure on "AMD64 OpenIndiana 3.x": KeyEr
http://bugs.python.org/issue11894

#11893: Obsolete SSLFakeFile in smtplib?
http://bugs.python.org/issue11893

#11887: unittest fails on comparing str with bytes if python has the -
http://bugs.python.org/issue11887

#11884: Argparse calls ngettext but doesn't import it
http://bugs.python.org/issue11884

#11883: Call connect() before sending an email with smtplib
http://bugs.python.org/issue11883

#11879: TarFile.chown: should use TarInfo.uid if user lookup fails
http://bugs.python.org/issue11879

#11874: argparse assertion failure with brackets in metavars
http://bugs.python.org/issue11874

#11871: test_default_timeout() of test_threading.BarrierTests failure:
http://bugs.python.org/issue11871

#11870: test_3_join_in_forked_from_thread() of test_threading hangs 1 
http://bugs.python.org/issue11870

#11869: Include information about the bug tracker Rietveld code review
http://bugs.python.org/issue11869

#11866: race condition in threading._newname()
http://bugs.python.org/issue11866



Most recent 15 issues waiting for review (15)
=============================================

#11898: Sending binary data with a POST request in httplib can cause U
http://bugs.python.org/issue11898

#11895: pybench prep_times calculation error
http://bugs.python.org/issue11895

#11887: unittest fails on comparing str with bytes if python has the -
http://bugs.python.org/issue11887

#11883: Call connect() before sending an email with smtplib
http://bugs.python.org/issue11883

#11877: Change os.fsync() to support physical backing store syncs
http://bugs.python.org/issue11877

#11867: Make test_mailbox deterministic
http://bugs.python.org/issue11867

#11863: Enforce PEP 11 - remove support for legacy systems
http://bugs.python.org/issue11863

#11858: configparser.ExtendedInterpolation and section case
http://bugs.python.org/issue11858

#11856: Optimize parsing of JSON numbers
http://bugs.python.org/issue11856

#11849: ElementTree memory leak
http://bugs.python.org/issue11849

#11841: Bug in the verson comparison
http://bugs.python.org/issue11841

#11835: python (x64) ctypes incorrectly pass structures parameter
http://bugs.python.org/issue11835

#11832: Add option to pause regrtest to attach a debugger
http://bugs.python.org/issue11832

#11831: "pydoc -w" causes "no Python documentation found" error when t
http://bugs.python.org/issue11831

#11829: inspect.getattr_static code execution with meta-metaclasses
http://bugs.python.org/issue11829



Top 10 most discussed issues (10)
=================================

#11877: Change os.fsync() to support physical backing store syncs
http://bugs.python.org/issue11877  19 msgs

#10042: total_ordering
http://bugs.python.org/issue10042  16 msgs

#11277: Crash with mmap and sparse files on Mac OS X
http://bugs.python.org/issue11277  15 msgs

#1294232: Error in metaclass search order
http://bugs.python.org/issue1294232  14 msgs

#10665: Expand unicodedata module documentation
http://bugs.python.org/issue10665   8 msgs

#11779: test_mmap.test_large_offset() timeout (1 hour) on "AMD64 Snow 
http://bugs.python.org/issue11779   8 msgs

#11849: ElementTree memory leak
http://bugs.python.org/issue11849   7 msgs

#10932: distutils.core.setup - data_files misbehaviour ?
http://bugs.python.org/issue10932   6 msgs

#11863: Enforce PEP 11 - remove support for legacy systems
http://bugs.python.org/issue11863   6 msgs

#8809: smtplib should support SSL contexts
http://bugs.python.org/issue8809   5 msgs



Issues closed (37)
==================

#5162: multiprocessing cannot spawn child from a Windows service
http://bugs.python.org/issue5162  closed by brian.curtin

#5612: whitespace folding in the email package could be better ;-)
http://bugs.python.org/issue5612  closed by r.david.murray

#7796: No way to find out if an object is an instance of a namedtuple
http://bugs.python.org/issue7796  closed by rhettinger

#8769: Straightforward usage of email package fails to round-trip
http://bugs.python.org/issue8769  closed by r.david.murray

#8886: zipfile.ZipExtFile is a context manager, but that is not docum
http://bugs.python.org/issue8886  closed by brian.curtin

#8944: test_winreg.test_reflection_functions fails on Windows Server 
http://bugs.python.org/issue8944  closed by brian.curtin

#10540: test_shutil fails on Windows after r86733
http://bugs.python.org/issue10540  closed by brian.curtin

#11223: interruption of locks by signals not guaranteed when locks are
http://bugs.python.org/issue11223  closed by haypo

#11300: mmap() large file failures on Mac OS X docfix
http://bugs.python.org/issue11300  closed by sdaoden

#11768: signal_handler() is not reentrant: deadlock in Py_AddPendingCa
http://bugs.python.org/issue11768  closed by haypo

#11790: transient failure in test_multiprocessing.WithProcessesTestCon
http://bugs.python.org/issue11790  closed by pitrou

#11800: regrtest --timeout: apply the timeout on a function, not on th
http://bugs.python.org/issue11800  closed by haypo

#11828: startswith and endswith don't accept None as slice index
http://bugs.python.org/issue11828  closed by python-dev

#11851: Flushing the standard input causes an error
http://bugs.python.org/issue11851  closed by belopolsky

#11852: New QueueListener is unusable due to missing threading and que
http://bugs.python.org/issue11852  closed by vinay.sajip

#11853: idle3.2 on mac unresponsive on input() called from a source fi
http://bugs.python.org/issue11853  closed by ned.deily

#11855: urlretrieve --> urlretrieve()
http://bugs.python.org/issue11855  closed by eli.bendersky

#11857: Hyphenate the argparse.rst file, patch added
http://bugs.python.org/issue11857  closed by ezio.melotti

#11861: 2to3 fails with a ParseError
http://bugs.python.org/issue11861  closed by amaury.forgeotdarc

#11862: urlparse.ParseResult to have meaningful __str__
http://bugs.python.org/issue11862  closed by orsenthil

#11865: typo in Py_AddPendingCall document
http://bugs.python.org/issue11865  closed by ezio.melotti

#11868: Minor word-choice improvement in devguide "lifecycle of a patc
http://bugs.python.org/issue11868  closed by ned.deily

#11875: OrderedDict.__reduce__ not threadsafe
http://bugs.python.org/issue11875  closed by rhettinger

#11876: SGI Irix threads, SunOS lightweight processes, GNU pth threads
http://bugs.python.org/issue11876  closed by haypo

#11878: No SOAP libraries available for Python 3.x
http://bugs.python.org/issue11878  closed by brian.curtin

#11881: Add list.get
http://bugs.python.org/issue11881  closed by rhettinger

#11885: argparse docs needs fixing
http://bugs.python.org/issue11885  closed by ezio.melotti

#11890: COMPILER WARNING: warning: offset outside bounds of constant s
http://bugs.python.org/issue11890  closed by python-dev

#11891: Poll call in multiprocessing/forking.py is not thread safe.  R
http://bugs.python.org/issue11891  closed by brian.curtin

#11892: Compiler warning: warning: implicit declaration of function 'f
http://bugs.python.org/issue11892  closed by python-dev

#11897: [PATCH] Documentation: fix typo, absolute_import not absolute_
http://bugs.python.org/issue11897  closed by ezio.melotti

#11900: 2.7.1 unicode subclasses not calling __str__() for print state
http://bugs.python.org/issue11900  closed by r.david.murray

#11902: typo in argparse doc's: "action.."
http://bugs.python.org/issue11902  closed by ezio.melotti

#11903: Incorrect test code in test_logging.py
http://bugs.python.org/issue11903  closed by vinay.sajip

#11904: incorrect reStructuredText formatting in argparse module
http://bugs.python.org/issue11904  closed by ezio.melotti

#11905: typo in argparse doc's: missing dot at end of sentence
http://bugs.python.org/issue11905  closed by ezio.melotti

#1372770: email.Header should preserve original FWS
http://bugs.python.org/issue1372770  closed by r.david.murray

From listas.programacao at gmail.com  Sat Apr 23 20:14:20 2011
From: listas.programacao at gmail.com (Jayme Proni Filho)
Date: Sat, 23 Apr 2011 15:14:20 -0300
Subject: [Python-Dev] Hello guys!
Message-ID: <BANLkTi=F3okucuNDNVf==XETvVfRYK5s9Q@mail.gmail.com>

Hello guys!

Well, I'll do as the welcome message from python-dev-request asks.

I'm a Brazilian C programmer and I'm learning how to program in Python fast
because it is cool.
I installed Python a few days ago on my notebook. So, for a while I will be
here learning with you guys, just watching you and your discussions about
what is better for Python's future, and when I feel 100% sure about what I
have to post here, I will.
I'm sorry about my English. This is my first time typing in English in an
important place.

See you guys!

---------------------------------------------------------------------------------------
Jayme Proni Filho
Skype: jaymeproni
Twitter: @jaymeproni
Phone: +55 - 17 - 3631 - 6576
Mobile: +55 - 17 - 9605 - 3560
e-Mail: jaymeproni at yahoo dot com dot br
---------------------------------------------------------------------------------------
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110423/04b98cd0/attachment.html>

From jcea at jcea.es  Mon Apr 25 04:47:03 2011
From: jcea at jcea.es (Jesus Cea)
Date: Mon, 25 Apr 2011 04:47:03 +0200
Subject: [Python-Dev] [Python-checkins] cpython (merge 3.2 -> default):
 Correctly merging #9319 into 3.3?
In-Reply-To: <E1QEAu8-0006Rn-UZ@dinsdale.python.org>
References: <E1QEAu8-0006Rn-UZ@dinsdale.python.org>
Message-ID: <4DB4E0A7.4080506@jcea.es>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

If a patch in 3.2 is not applicable to 3.3, a "null merge" should be
done. If not, the next developer trying to merge will find some other
unrelated code to merge, and she won't have the context to know what to
do :-).
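
For reference, the usual null-merge dance is something like this (from
memory, adjust as needed):

    hg update default
    hg merge 3.2
    hg revert --all --rev default   # keep none of the merged changes
    hg resolve --all --mark         # if there were conflicts
    hg commit -m 'Null merge'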

In this case, I merged code that doesn't actually compile, breaking the
build for 20 minutes :-).

And yes, I fully realized that I should try to compile locally first.
Dealing with this unexpected merge when merging my own patch was...
unexpected, and the code seemed sensible enough.

Do we have some hat-of-shame I should wear for breaking the build? :)

- -- 
Jesus Cea Avion                         _/_/      _/_/_/        _/_/_/
jcea at jcea.es - http://www.jcea.es/     _/_/    _/_/  _/_/    _/_/  _/_/
jabber / xmpp:jcea at jabber.org         _/_/    _/_/          _/_/_/_/_/
.                              _/_/  _/_/    _/_/          _/_/  _/_/
"Things are not so easy"      _/_/  _/_/    _/_/  _/_/    _/_/  _/_/
"My name is Dump, Core Dump"   _/_/_/        _/_/_/      _/_/  _/_/
"El amor es poner tu felicidad en la felicidad de otro" - Leibniz
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.10 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iQCUAwUBTbTgpplgi5GaxT1NAQJqegP3QSVIf6yszZrFJEgKTaK4XXvHB965PdYN
T9g8bx5IKXmiMjDBCatjuA2AAtwnL0Wd2Dw0tnGhRTqYHD2l+cMcFw/2JtV4L6sC
c0fKm2o+V8gSW7KZwdvgNWiQlzE3lp2DiD/ng3gM3JlK/EKghIH8acDiJsHHrQtS
7T7iSLllOw==
=+50u
-----END PGP SIGNATURE-----

From jcea at jcea.es  Mon Apr 25 05:04:49 2011
From: jcea at jcea.es (Jesus Cea)
Date: Mon, 25 Apr 2011 05:04:49 +0200
Subject: [Python-Dev] Drop OS/2 and VMS support?
In-Reply-To: <20110419224816.467c30a4@pitrou.net>
References: <1303220774.8140.8.camel@marge>
	<4DAD9DDD.2080301@egenix.com>	<8E0C8D8E-41E7-4510-92DB-18591C25C6DA@gmail.com>
	<20110419224816.467c30a4@pitrou.net>
Message-ID: <4DB4E4D1.8040409@jcea.es>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On 19/04/11 22:48, Antoine Pitrou wrote:
> On Tue, 19 Apr 2011 15:20:13 -0400
> Doug Hellmann <doug.hellmann at gmail.com> wrote:
>>
>> Victor, if you want to post the "call for support" to Python Insider, let me know off list and I will set you up with access.
> 
> Doesn't it have a better chance of succeeding if simply posted to
> comp.lang.python?

I think the point of "Python Insider" was to bring relevant python-dev
discussions to the general public. This is a perfect example.

+1 to adding deprecation warnings/errors in 3.3 and removing the code in
3.4, according to PEP 11. A heads-up warning and request for help on
"Python Insider" is the way to go, too.

- -- 
Jesus Cea Avion                         _/_/      _/_/_/        _/_/_/
jcea at jcea.es - http://www.jcea.es/     _/_/    _/_/  _/_/    _/_/  _/_/
jabber / xmpp:jcea at jabber.org         _/_/    _/_/          _/_/_/_/_/
.                              _/_/  _/_/    _/_/          _/_/  _/_/
"Things are not so easy"      _/_/  _/_/    _/_/  _/_/    _/_/  _/_/
"My name is Dump, Core Dump"   _/_/_/        _/_/_/      _/_/  _/_/
"El amor es poner tu felicidad en la felicidad de otro" - Leibniz
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.10 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iQCVAwUBTbTk0Jlgi5GaxT1NAQJyGAP8CnKuJyOh0pidU/Y4xlE4oSqQzwbVIqYA
+Pd95c+oWDf8cRGkc8U/4APHOruyX1YYUpQL9WTlf3NzyoBv0f7JvzQRgc9eKDaj
IGU79VhDKEShTB49saPTsUCpIcaQ8bUTeAjXLv67ga44WQ0toghez7dWVJ8iWh6+
R+w/4tK6aRM=
=MBBZ
-----END PGP SIGNATURE-----

From jcea at jcea.es  Mon Apr 25 05:25:02 2011
From: jcea at jcea.es (Jesus Cea)
Date: Mon, 25 Apr 2011 05:25:02 +0200
Subject: [Python-Dev] Test "Force Build" on custom buildbots
In-Reply-To: <20110330182713.48ab52fa@pitrou.net>
References: <1301500742.4065.11.camel@marge> <1301501513.4065.19.camel@marge>
	<20110330182713.48ab52fa@pitrou.net>
Message-ID: <4DB4E98E.1000605@jcea.es>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On 30/03/11 18:27, Antoine Pitrou wrote:
> On Wed, 30 Mar 2011 18:11:53 +0200
> Victor Stinner <victor.stinner at haypocalc.com> wrote:
>> Le mercredi 30 mars 2011 à 17:59 +0200, Victor Stinner a écrit :
>>> I'm testing my faulthandler repository on the custom buildbots, here are
>>> some remarks and issues.
>>
>> Oh, I forgot something: there is an error on hg purge.
> [...]
> 
> It's not an error, it falls back on another purging method when the
> purge extension is not enabled.

I guess you are talking about
<http://mercurial.selenic.com/wiki/PurgeExtension>. Do you want me to
activate this extension in my buildbots? (OpenIndiana machine).


From jcea at jcea.es  Mon Apr 25 05:28:08 2011
From: jcea at jcea.es (Jesus Cea)
Date: Mon, 25 Apr 2011 05:28:08 +0200
Subject: [Python-Dev] Issue 11715: building Python from source on
 multiarch Debian/Ubuntu
In-Reply-To: <AANLkTi=xHN2jffUkjz=7m+dbVvjernOQEg1_cHk+SQ+q@mail.gmail.com>
References: <20110330161709.756b27f7@neurotica.wooz.org>	<4D94BB4D.8030405@netwok.org>
	<AANLkTi=xHN2jffUkjz=7m+dbVvjernOQEg1_cHk+SQ+q@mail.gmail.com>
Message-ID: <4DB4EA48.30106@jcea.es>

On 01/04/11 00:37, Nick Coghlan wrote:
> However, the combination of "running on Ubuntu 11.04+" and "need to
> build security patched version of old Python" seems unlikely.

Well, I, for one, have Python 2.3, 2.4, 2.5, 2.6, 2.7, 3.1 and 3.2
installed on my machine (Ubuntu 10.04) because I need to support code
spanning such a range of Python versions. I remember that compiling 2.3
or 2.4 was a bit painful.


From solipsis at pitrou.net  Mon Apr 25 13:30:31 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Mon, 25 Apr 2011 13:30:31 +0200
Subject: [Python-Dev] [Python-checkins] cpython (merge 3.2 -> default):
 Correctly merging #9319 into 3.3?
References: <E1QEAu8-0006Rn-UZ@dinsdale.python.org> <4DB4E0A7.4080506@jcea.es>
Message-ID: <20110425133031.457bd7f9@pitrou.net>

On Mon, 25 Apr 2011 04:47:03 +0200
Jesus Cea <jcea at jcea.es> wrote:
> 
> And yes, I fully realized that I should try to compile locally first.
> Dealing with this unexpected merge when merging my own patch was...
> unexpected, and the code seemed sensible enough.

You should *always* recompile and run the affected tests before checking
in a change, even if the changes look "trivial".
By trying to save a little time on your side, you may lose a lot of
other people's time.

> Do we have some hat-of-shame I should wear because breaking the build? :).

The tests are still broken it seems:

======================================================================
ERROR: test_issue9319 (test.test_imp.ImportTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/pythonbuildbot/buildarea/3.x.hansen-osx-x86-2/build/Lib/test/test_imp.py", line 181, in test_issue9319
    imp.find_module, "test/badsyntax_pep3120")
  File "/Users/pythonbuildbot/buildarea/3.x.hansen-osx-x86-2/build/Lib/unittest/case.py", line 574, in assertRaises
    callableObj(*args, **kwargs)
ImportError: No module named 'test/badsyntax_pep3120'


Regards

Antoine.



From haael at interia.pl  Mon Apr 25 14:04:35 2011
From: haael at interia.pl (haael)
Date: Mon, 25 Apr 2011 14:04:35 +0200
Subject: [Python-Dev] Why are there no 'set' and 'frozenset' types in the
	'types' module?
Message-ID: <4DB56353.6020107@interia.pl>


Sorry if I am asking the obvious, but why are the aliases of the set types not
included in the 'types' module? I thought for a moment that they were just
classes, but no, they introduce themselves as built-in types, just like any
other standard Python type.

 > print type(set([1, 2, 4]))
<type 'set'>

 > print type(frozenset([3, 5]))
<type 'frozenset'>

Is it intentional, or is there some meaning behind this? If not, shouldn't they 
be added to the module?


Regards,
Bartosz Tarnowski



---------------------------------------------------------------
A free program for filling in your PIT tax return: http://linkint.pl/f2931


From solipsis at pitrou.net  Mon Apr 25 14:52:20 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Mon, 25 Apr 2011 14:52:20 +0200
Subject: [Python-Dev] Why are there no 'set' and 'frozenset' types in
 the 'types' module?
References: <4DB56353.6020107@interia.pl>
Message-ID: <20110425145220.5f42ae14@pitrou.net>

On Mon, 25 Apr 2011 14:04:35 +0200
haael <haael at interia.pl> wrote:
> 
> Sorry if I am asking the obvious, but why are the aliases of set types not 
> included in the 'types' module?

Because there's no reason to include them, since they are already in
the root (builtins) namespace.

You'll notice that in Python 3, the "types" module only contains types
which are not obviously accessed through easier means:

>>> dir(types)
['BuiltinFunctionType', 'BuiltinMethodType', 'CodeType', 'FrameType',
'FunctionType', 'GeneratorType', 'GetSetDescriptorType', 'LambdaType',
'MemberDescriptorType', 'MethodType', 'ModuleType', 'TracebackType',
'__builtins__', '__cached__', '__doc__', '__file__', '__name__',
'__package__']


Regards

Antoine.



From fdrake at acm.org  Mon Apr 25 15:01:36 2011
From: fdrake at acm.org (Fred Drake)
Date: Mon, 25 Apr 2011 09:01:36 -0400
Subject: [Python-Dev] Why are there no 'set' and 'frozenset' types in
 the 'types' module?
In-Reply-To: <4DB56353.6020107@interia.pl>
References: <4DB56353.6020107@interia.pl>
Message-ID: <BANLkTin2MsLEu9mPd_uP=f5WYjJX3Y=JTA@mail.gmail.com>

On Mon, Apr 25, 2011 at 8:04 AM, haael <haael at interia.pl> wrote:
> Sorry if I am asking the obvious, but why are the aliases of set types not
> included in the 'types' module? I thought for a moment that they are just
> classes, but no, they introduce themselves as built-in types, just like any
> other standard Python type.

The types module pre-dates the time when classes were actually types in their
own right, and many of the built-in constructors, like "float", "int", and
"list", were simply functions.  When that was the case:

    >>> import types
    >>> types.IntType == int
    False

For types that have always been types, there's no corresponding entry in the
types module, nor is there any need for any, since the type itself is already
accessible.
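
(For instance, in an illustrative interactive session:)

    >>> set is type(set())
    True
    >>> frozenset is type(frozenset())
    True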


  -Fred

-- 
Fred L. Drake, Jr.    <fdrake at acm.org>
"Give me the luxuries of life and I will willingly do without the necessities."
   --Frank Lloyd Wright

From haael at interia.pl  Mon Apr 25 16:01:32 2011
From: haael at interia.pl (haael)
Date: Mon, 25 Apr 2011 16:01:32 +0200
Subject: [Python-Dev] Why are there no 'set' and 'frozenset' types in
 the 'types' module?
In-Reply-To: <20110425145220.5f42ae14@pitrou.net>
References: <4DB56353.6020107@interia.pl> <20110425145220.5f42ae14@pitrou.net>
Message-ID: <4DB57EBC.2000605@interia.pl>


> Because there's no reason to include them, since they are already in
> the root (builtins) namespace.
>
> You'll notice that in Python 3, the "types" module only contains types
> which are not obviously accessed through easier means:


OK, that makes sense, but in that case it would be handy to have some list of
all possible built-in types. I was just creating one and I nearly missed the
set types. If an entry in the 'types' module is too much, there should at
least be a comprehensive list in the documentation.

Regards,
Bartosz Tarnowski

---------------------------------------------
An accountant's advice: how to set up a company in 15 minutes?
http://linkint.pl/f2968


From benjamin at python.org  Mon Apr 25 16:02:20 2011
From: benjamin at python.org (Benjamin Peterson)
Date: Mon, 25 Apr 2011 09:02:20 -0500
Subject: [Python-Dev] Why are there no 'set' and 'frozenset' types in
 the 'types' module?
In-Reply-To: <4DB57EBC.2000605@interia.pl>
References: <4DB56353.6020107@interia.pl> <20110425145220.5f42ae14@pitrou.net>
	<4DB57EBC.2000605@interia.pl>
Message-ID: <BANLkTim67Ar8R7-L5hpX81hhoCTq6nHGHw@mail.gmail.com>

2011/4/25 haael <haael at interia.pl>:
>
>> Because there's no reason to include them, since they are already in
>> the root (builtins) namespace.
>>
>> You'll notice that in Python 3, the "types" module only contains types
>> which are not obviously accessed through easier means:
>
>
> OK, makes sense, but in this case it would be handy to have some list of all
> possible built-in types. I was just creating one and I nearly missed sets.
> If an entry in 'types' module is too much, there should be some
> comprehensive list in the documentation at least.

http://docs.python.org/dev/library/stdtypes


-- 
Regards,
Benjamin

From victor.stinner at haypocalc.com  Mon Apr 25 16:07:13 2011
From: victor.stinner at haypocalc.com (Victor Stinner)
Date: Mon, 25 Apr 2011 16:07:13 +0200
Subject: [Python-Dev] [Python-checkins] cpython (merge 3.2 -> default):
 Correctly merging #9319 into 3.3?
In-Reply-To: <4DB4E0A7.4080506@jcea.es>
References: <E1QEAu8-0006Rn-UZ@dinsdale.python.org> <4DB4E0A7.4080506@jcea.es>
Message-ID: <1303740433.12359.4.camel@marge>

On Monday 25 April 2011 at 04:47 +0200, Jesus Cea wrote:
> If a patch in 3.2 is not applicable in 3.3, a "null merge" should be
> done.

Correct. Sorry, I forgot that. And yes, the 3.2 fix was not applicable
to 3.3, that's why I forgot to merge.

> If not, next developer trying to merge will find some other
> unrelated code to merge, and she doesn't have the context knowledge to
> know what to do :-)

Hmm, you can read the history of the issue to decide what to do, or ask
the committer to do the merge.

> In this case, I merged code that doesn't actually compile, breaking the
> build for 20 minutes :-).

He he, it was a trap! When you touch one of my commits, all the buildbots
turn red! :-)

> Do we have some hat-of-shame I should wear because breaking the build? :).

Don't worry, it doesn't matter if you quickly fix your mistake.

Victor


From victor.stinner at haypocalc.com  Mon Apr 25 16:26:01 2011
From: victor.stinner at haypocalc.com (Victor Stinner)
Date: Mon, 25 Apr 2011 16:26:01 +0200
Subject: [Python-Dev] Drop OS/2 and VMS support?
In-Reply-To: <4DADFCD4.20705@v.loewis.de>
References: <1303220774.8140.8.camel@marge>  <4DADFCD4.20705@v.loewis.de>
Message-ID: <1303741561.13154.1.camel@marge>

On Tuesday 19 April 2011 at 23:21 +0200, "Martin v. Löwis" wrote:
> > Well, not "remove" directly, but plan to remove it using the PEP 11
> > procedure (mark OS/2 and VMS as unsupported, and remove the code in
> > Python 3.4).
> 
> I think the PEP 11 procedure is just right for this. It *is* a call
> for maintainers, so if any user is interested in ongoing support,
> they should step forward.
> 
> Having then also blog posts about these pending deprecations sounds
> fine to me - also adding them to the 3.2.x release pages would be
> appropriate (IMO). It's important that we give users due notice, but
> lacking any actual contribution, we should also be able to remove
> the code eventually.
> 
> So please go ahead and add them to PEP 11.

Ok, I added OS/2 and VMS to PEP 11. I also opened an issue as a reminder
that I should do something to raise an error at build time.

Victor


From exarkun at twistedmatrix.com  Mon Apr 25 16:17:03 2011
From: exarkun at twistedmatrix.com (exarkun at twistedmatrix.com)
Date: Mon, 25 Apr 2011 14:17:03 -0000
Subject: [Python-Dev] Why are there no 'set' and 'frozenset' types in
	the	'types' module?
In-Reply-To: <4DB57EBC.2000605@interia.pl>
References: <4DB56353.6020107@interia.pl> <20110425145220.5f42ae14@pitrou.net>
	<4DB57EBC.2000605@interia.pl>
Message-ID: <20110425141703.1992.429006306.divmod.xquotient.986@localhost.localdomain>

On 02:01 pm, haael at interia.pl wrote:
>
>>Because there's no reason to include them, since they are already in
>>the root (builtins) namespace.
>>
>>You'll notice that in Python 3, the "types" module only contains types
>>which are not obviously accessed through easier means:
>
>
>OK, makes sense, but in this case it would be handy to have some list 
>of all possible built-in types. I was just creating one and I nearly 
>missed sets. If an entry in 'types' module is too much, there should be 
>some comprehensive list in the documentation at least.

Maybe this is what you're after?
>>> from pprint import pprint
>>> pprint([t for t in object.__subclasses__() if t.__module__ == '__builtin__'])
[<type 'type'>,
<type 'weakref'>,
<type 'weakcallableproxy'>,
<type 'weakproxy'>,
<type 'int'>,
<type 'basestring'>,
<type 'bytearray'>,
<type 'list'>,
<type 'NoneType'>,
<type 'NotImplementedType'>,
<type 'traceback'>,
<type 'super'>,
<type 'xrange'>,
<type 'dict'>,
<type 'set'>,
<type 'slice'>,
<type 'staticmethod'>,
<type 'complex'>,
<type 'float'>,
<type 'buffer'>,
<type 'long'>,
<type 'frozenset'>,
<type 'property'>,
<type 'tuple'>,
<type 'enumerate'>,
<type 'reversed'>,
<type 'code'>,
<type 'frame'>,
<type 'builtin_function_or_method'>,
<type 'instancemethod'>,
<type 'function'>,
<type 'classobj'>,
<type 'dictproxy'>,
<type 'generator'>,
<type 'getset_descriptor'>,
<type 'wrapper_descriptor'>,
<type 'instance'>,
<type 'ellipsis'>,
<type 'member_descriptor'>,
<type 'EncodingMap'>,
<type 'module'>,
<type 'classmethod'>,
<type 'file'>]
>>>

Jean-Paul

From rob.cliffe at btinternet.com  Mon Apr 25 19:21:06 2011
From: rob.cliffe at btinternet.com (Rob Cliffe)
Date: Mon, 25 Apr 2011 18:21:06 +0100
Subject: [Python-Dev] Syntax quirk
Message-ID: <4DB5AD82.1040209@btinternet.com>

 >>> type (3.)
<type 'float'>
 >>> 3..__class__
<type 'float'>
 >>> type(3)
<type 'int'>
 >>> 3.__class__
   File "<stdin>", line 1
     3.__class__
               ^
SyntaxError: invalid syntax

Superficially the last example ought to be legal syntax (and return 
<type 'int'>).
Is it an oversight which could be fixed in a straightforward way, or are 
there reasons why it can't?

I have tested this with Python 2.5 and Python 3.2.

Best wishes
Rob Cliffe



From alexander.belopolsky at gmail.com  Mon Apr 25 19:33:40 2011
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Mon, 25 Apr 2011 13:33:40 -0400
Subject: [Python-Dev] Syntax quirk
In-Reply-To: <4DB5AD82.1040209@btinternet.com>
References: <4DB5AD82.1040209@btinternet.com>
Message-ID: <BANLkTikmAuARE4EBNuyqdk5r8jXLaYZfyA@mail.gmail.com>

On Mon, Apr 25, 2011 at 1:21 PM, Rob Cliffe <rob.cliffe at btinternet.com> wrote:
..
>>>> 3.__class__
> ?File "<stdin>", line 1
> ? ?3.__class__
> ? ? ? ? ? ? ?^
> SyntaxError: invalid syntax
>
> Superficially the last example ought to be legal syntax (and return <type
> 'int'>).

If it was valid, then

>>> 3.e+7

would have to raise an attribute error instead of

>>> 3.e+7
30000000.0

From ncoghlan at gmail.com  Mon Apr 25 19:36:11 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 26 Apr 2011 03:36:11 +1000
Subject: [Python-Dev] Syntax quirk
In-Reply-To: <4DB5AD82.1040209@btinternet.com>
References: <4DB5AD82.1040209@btinternet.com>
Message-ID: <BANLkTi=CqUZnLLnsR_3P8X+u-N8-nbBztQ@mail.gmail.com>

On Tue, Apr 26, 2011 at 3:21 AM, Rob Cliffe <rob.cliffe at btinternet.com> wrote:
>>>> type (3.)
> <type 'float'>
>>>> 3..__class__
> <type 'float'>
>>>> type(3)
> <type 'int'>
>>>> 3.__class__
> ?File "<stdin>", line 1
> ? ?3.__class__
> ? ? ? ? ? ? ?^
> SyntaxError: invalid syntax
>
> Superficially the last example ought to be legal syntax (and return <type
> 'int'>).
> Is it an oversight which could be fixed in a straightforward way, or are
> there reasons why it can't?

The parser (or is it the lexer? I never remember which it is that has
the problem in this case) can't handle it - it sees the first "." and
expects a floating point value. It's hard to disambiguate due to 3.e10
and the like being valid floating point numbers, while 3..e10 has to
be an attribute access.

You have to use whitespace or parentheses to eliminate the ambiguity:

3 .__class__
(3).__class__
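
(Both workarounds can be checked interactively; a 2.x session for illustration:)

>>> 3 .__class__
<type 'int'>
>>> (3).__class__
<type 'int'>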

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From tjreedy at udel.edu  Mon Apr 25 19:38:20 2011
From: tjreedy at udel.edu (Terry Reedy)
Date: Mon, 25 Apr 2011 13:38:20 -0400
Subject: [Python-Dev] Syntax quirk
In-Reply-To: <4DB5AD82.1040209@btinternet.com>
References: <4DB5AD82.1040209@btinternet.com>
Message-ID: <ip4bia$3np$1@dough.gmane.org>

On 4/25/2011 1:21 PM, Rob Cliffe wrote:
>  >>> type (3.)
> <type 'float'>
>  >>> 3..__class__
> <type 'float'>
>  >>> type(3)
> <type 'int'>
>  >>> 3.__class__
> File "<stdin>", line 1
> 3.__class__
> ^
> SyntaxError: invalid syntax
>
> Superficially the last example ought to be legal syntax (and return
> <type 'int'>).

You are a more sophisticated parser than Python, which is limited to 
LL(1) parsing. (No, that is not in the manual, but it is a known design
constraint.)

> Is it an oversight which could be fixed in a straightforward way, or are
> there reasons why it can't?

This sort of question as to why Python is the way it is really belongs 
on python-list.

3.x is parsed as (3.)x (the float 3. followed by x), which is invalid syntax
unless 'x' is a digit. You automatically back up and reparse it as 3(.x).
3 .0 is a syntax error.
3 .__class__ is int.

-- 
Terry Jan Reedy


From haael at interia.pl  Mon Apr 25 20:12:47 2011
From: haael at interia.pl (haael)
Date: Mon, 25 Apr 2011 20:12:47 +0200
Subject: [Python-Dev] Why are there no 'set' and 'frozenset' types in
 the	'types' module?
In-Reply-To: <20110425141703.1992.429006306.divmod.xquotient.986@localhost.localdomain>
References: <4DB56353.6020107@interia.pl>
	<20110425145220.5f42ae14@pitrou.net>	<4DB57EBC.2000605@interia.pl>
	<20110425141703.1992.429006306.divmod.xquotient.986@localhost.localdomain>
Message-ID: <4DB5B99F.80103@interia.pl>



>>> Because there's no reason to include them, since they are already in
>>> the root (builtins) namespace.
>>>
>>> You'll notice that in Python 3, the "types" module only contains types
>>> which are not obviously accessed through easier means:
>>
>>
>> OK, makes sense, but in this case it would be handy to have some list of all
>> possible built-in types. I was just creating one and I nearly missed sets. If
>> an entry in 'types' module is too much, there should be some comprehensive
>> list in the documentation at least.
>
> Maybe this is what you're after?
>>>> pprint([t for t in object.__subclasses__() if t.__module__ == '__builtin__'])
> [<type 'type'>,
> <type 'weakref'>,
> <type 'weakcallableproxy'>,
> <type 'weakproxy'>,
> <type 'int'>,
> <type 'basestring'>,
> <type 'bytearray'>,
> <type 'list'>,
> <type 'NoneType'>,
> <type 'NotImplementedType'>,
> <type 'traceback'>,
> <type 'super'>,
> <type 'xrange'>,
> <type 'dict'>,
> <type 'set'>,
> <type 'slice'>,
> <type 'staticmethod'>,
> <type 'complex'>,
> <type 'float'>,
> <type 'buffer'>,
> <type 'long'>,
> <type 'frozenset'>,
> <type 'property'>,
> <type 'tuple'>,
> <type 'enumerate'>,
> <type 'reversed'>,
> <type 'code'>,
> <type 'frame'>,
> <type 'builtin_function_or_method'>,
> <type 'instancemethod'>,
> <type 'function'>,
> <type 'classobj'>,
> <type 'dictproxy'>,
> <type 'generator'>,
> <type 'getset_descriptor'>,
> <type 'wrapper_descriptor'>,
> <type 'instance'>,
> <type 'ellipsis'>,
> <type 'member_descriptor'>,
> <type 'EncodingMap'>,
> <type 'module'>,
> <type 'classmethod'>,
> <type 'file'>]
>>>>
>
> Jean-Paul


Yes, something like that, but without abstract types like 'basestring'. If 
abstract types were OK, it would suffice to use 'object'.

The use case is designing protocols that export Python objects to the outside
world; 'pickle' is an example. One needs to go through all the built-in Python
types, case by case, and handle each of them in some way.
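
(For illustration only, a rough sketch of that kind of type-by-type dispatch;
the textual encoding here is made up and not any real protocol:)

    def export(obj):
        # Dispatch on concrete built-in types, most specific first
        # (bool before int, since bool is a subclass of int).
        if isinstance(obj, bool):
            return 'bool:%s' % obj
        elif isinstance(obj, int):
            return 'int:%d' % obj
        elif isinstance(obj, float):
            return 'float:%r' % obj
        elif isinstance(obj, str):
            return 'str:%s' % obj
        elif isinstance(obj, (set, frozenset)):
            return 'set:[%s]' % ', '.join(sorted(export(x) for x in obj))
        elif isinstance(obj, (list, tuple)):
            return 'seq:[%s]' % ', '.join(export(x) for x in obj)
        else:
            raise TypeError('unhandled type: %r' % type(obj))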

Nevertheless, my problem is solved. Thank you.


Regards,
Bartosz Tarnowski






---------------------------------------------------------------
A free program for filling in your PIT tax return: http://linkint.pl/f2931


From cool-rr at cool-rr.com  Mon Apr 25 20:43:49 2011
From: cool-rr at cool-rr.com (cool-RR)
Date: Mon, 25 Apr 2011 14:43:49 -0400
Subject: [Python-Dev] Why doesn't `functools.total_ordering` use the
	existing ordering methods?
Message-ID: <BANLkTi=pDsd7XnAdJku6T9d8JT+4_XxTow@mail.gmail.com>

Hello,

Today I was trying to use `total_ordering` for the first time. I was
expecting that in order to implement e.g. `x > y` it would do `not x < y and
not x == y`, assuming that `__lt__`  and `__eq__` are defined. But I see it
just does `y < x`, which is problematic. For example if you have a class
that is decorated by `total_ordering`, and implements only `__lt__`  and
`__eq__`, then trying to do `x < y` will result in infinite recursion.

Why not have `total_ordering` work in the way I suggested?


Ram.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110425/82704617/attachment.html>

From martin at v.loewis.de  Mon Apr 25 21:09:37 2011
From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=)
Date: Mon, 25 Apr 2011 21:09:37 +0200
Subject: [Python-Dev] Drop OS/2 and VMS support?
In-Reply-To: <1303741561.13154.1.camel@marge>
References: <1303220774.8140.8.camel@marge> <4DADFCD4.20705@v.loewis.de>
	<1303741561.13154.1.camel@marge>
Message-ID: <4DB5C6F1.50504@v.loewis.de>

> Ok, I added OS/2 and VMS to the PEP 11. I also opened any issue to
> remember that I should do something to raise an error on build.

For OS/2, I propose to syntactically break the makefile; anybody trying
to build it should then run into that, and
a) can easily overcome the limitation (probably to then run into more
   severe problems), and
b) consider taking over maintenance if they are interested

For VMS, I *think* the build process is configure-based (but I may
misremember); if so, adding an exit into configure.in would be
appropriate. Else an #error in a central header file may do as well.

Or perhaps we could always use the #error if we can trust that compilers
will honor it.

Regards,
Martin

From raymond.hettinger at gmail.com  Mon Apr 25 21:13:54 2011
From: raymond.hettinger at gmail.com (Raymond Hettinger)
Date: Mon, 25 Apr 2011 12:13:54 -0700
Subject: [Python-Dev] Why doesn't `functools.total_ordering` use the
	existing ordering methods?
In-Reply-To: <BANLkTi=pDsd7XnAdJku6T9d8JT+4_XxTow@mail.gmail.com>
References: <BANLkTi=pDsd7XnAdJku6T9d8JT+4_XxTow@mail.gmail.com>
Message-ID: <64FAF308-8FAB-4291-9EF3-94452D339399@gmail.com>


On Apr 25, 2011, at 11:43 AM, cool-RR wrote:

> Today I was trying to use `total_ordering` for the first time. I was expecting that in order to implement e.g. `x > y` it would do `not x < y and not x == y`, assuming that `__lt__`  and `__eq__` are defined.

This was fixed.  The current code has:

    convert = {
        '__lt__': [('__gt__', lambda self, other: not (self < other or self == other)),
                   ('__le__', lambda self, other: self < other or self == other),
                   ('__ge__', lambda self, other: not self < other)],
        '__le__': [('__ge__', lambda self, other: not self <= other or self == other),
                   ('__lt__', lambda self, other: self <= other and not self == other),
                   ('__gt__', lambda self, other: not self <= other)],
        '__gt__': [('__lt__', lambda self, other: not (self > other or self == other)),
                   ('__ge__', lambda self, other: self > other or self == other),
                   ('__le__', lambda self, other: not self > other)],
        '__ge__': [('__le__', lambda self, other: (not self >= other) or self == other),
                   ('__gt__', lambda self, other: self >= other and not self == other),
                   ('__lt__', lambda self, other: not self >= other)]
    }
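
(For illustration, a minimal sketch of a class using the decorator; the Money
class here is made up, but with the table above only __lt__ and __eq__ need to
be written by hand:)

    from functools import total_ordering

    @total_ordering
    class Money:
        def __init__(self, cents):
            self.cents = cents
        def __eq__(self, other):
            return self.cents == other.cents
        def __lt__(self, other):
            return self.cents < other.cents

    # __gt__, __ge__ and __le__ are derived from __lt__ and __eq__,
    # so none of them falls back on the reflected operator:
    assert Money(3) > Money(2)
    assert Money(2) <= Money(3)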


> Why not have `total_ordering` work in the way I suggested?

To avoid needless posts, you should use the tracker.


Raymond


From regebro at gmail.com  Tue Apr 26 09:10:10 2011
From: regebro at gmail.com (Lennart Regebro)
Date: Tue, 26 Apr 2011 09:10:10 +0200
Subject: [Python-Dev] Why doesn't `functools.total_ordering` use the
 existing ordering methods?
In-Reply-To: <BANLkTi=pDsd7XnAdJku6T9d8JT+4_XxTow@mail.gmail.com>
References: <BANLkTi=pDsd7XnAdJku6T9d8JT+4_XxTow@mail.gmail.com>
Message-ID: <BANLkTi=jg8KdMt9S1y7y-_U964=8MvVKog@mail.gmail.com>

On Mon, Apr 25, 2011 at 20:43, cool-RR <cool-rr at cool-rr.com> wrote:
> Hello,
> Today I was trying to use `total_ordering` for the first time. I was
> expecting that in order to implement e.g. `x > y` it would do `not x < y and
> not x == y`, assuming that `__lt__` and `__eq__` are defined. But I see it
> just does `y < x`, which is problematic. For example if you have a class
> that is decorated by `total_ordering`, and implements only `__lt__` and
> `__eq__`, then trying to do `x < y` will result in infinite recursion.
> Why not have `total_ordering` work in the way I suggested?

This has been partly fixed for Python 3.2, although it can still
happen if you compare two types that both use the total_ordering
decorator. See http://bugs.python.org/issue10042 .

-- 
Lennart Regebro: http://regebro.wordpress.com/
Porting to Python 3: http://python3porting.com/

From g.brandl at gmx.net  Tue Apr 26 09:46:30 2011
From: g.brandl at gmx.net (Georg Brandl)
Date: Tue, 26 Apr 2011 09:46:30 +0200
Subject: [Python-Dev] cpython (2.7): #11901: add description of how
 bitfields are laid out to hexversion docs
In-Reply-To: <E1QESBP-0001c2-7h@dinsdale.python.org>
References: <E1QESBP-0001c2-7h@dinsdale.python.org>
Message-ID: <ip5t90$od7$1@dough.gmane.org>

On 25.04.2011 22:14, r.david.murray wrote:
> http://hg.python.org/cpython/rev/48758cd0769b
> changeset:   69558:48758cd0769b
> branch:      2.7
> parent:      69545:e4fcfb8066ff
> user:        R David Murray <rdmurray at bitdance.com>
> date:        Mon Apr 25 16:10:18 2011 -0400
> summary:
>   #11901: add description of how bitfields are laid out to hexversion docs
> 
> Patch by Sijin Joseph.
> 
> files:
>   Doc/library/sys.rst |  24 ++++++++++++++++++++++++
>   Misc/ACKS           |   1 +
>   2 files changed, 25 insertions(+), 0 deletions(-)
> 
> 
> diff --git a/Doc/library/sys.rst b/Doc/library/sys.rst
> --- a/Doc/library/sys.rst
> +++ b/Doc/library/sys.rst
> @@ -562,6 +562,30 @@
>     ``version_info`` value may be used for a more human-friendly encoding of the
>     same information.
>  
> +   The ``hexversion`` is a 32-bit number with the following layout

Should have a colon at the end.

> +
> +   +-------------------------+------------------------------------------------+
> +   | bits (big endian order) | meaning                                        |

We usually have table headings capitalized.

> +   +=========================+================================================+
> +   | :const:`1-8`            |  ``PY_MAJOR_VERSION``  (the ``2`` in           |
> +   |                         |  ``2.1.0a3``)                                  |
> +   +-------------------------+------------------------------------------------+
> +   | :const:`9-16`           |  ``PY_MINOR_VERSION``  (the ``1`` in           |
> +   |                         |  ``2.1.0a3``)                                  |
> +   +-------------------------+------------------------------------------------+
> +   | :const:`17-24`          |  ``PY_MICRO_VERSION``  (the ``0`` in           |
> +   |                         |  ``2.1.0a3``)                                  |
> +   +-------------------------+------------------------------------------------+
> +   | :const:`25-28`          |  ``PY_RELEASE_LEVEL``  (``0xA`` for alpha,     |
> +   |                         |  ``0xB`` for beta, ``0xC`` for gamma and       |

Even though PY_RELEASE_LEVEL_GAMMA is defined, I think this should say "release
candidate" instead of "gamma".

> +   |                         |  ``0xF`` for final)                            |
> +   +-------------------------+------------------------------------------------+
> +   | :const:`29-32`          |  ``PY_RELEASE_SERIAL``  (the ``3`` in          |
> +   |                         |  ``2.1.0a3``)                                  |
> +   +-------------------------+------------------------------------------------+

... and zero in final releases.

> +   thus ``2.1.0a3`` is hexversion ``0x020100a3``

Please capitalize and add a period.
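
(As an aside, the layout above can be unpacked with straightforward shifts and
masks; this snippet is purely illustrative and not part of the patch:)

    import sys

    major  = (sys.hexversion >> 24) & 0xff
    minor  = (sys.hexversion >> 16) & 0xff
    micro  = (sys.hexversion >>  8) & 0xff
    level  = (sys.hexversion >>  4) & 0xf   # 0xa alpha, 0xb beta, 0xc candidate, 0xf final
    serial =  sys.hexversion        & 0xf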

Georg


From jimjjewett at gmail.com  Tue Apr 26 16:03:47 2011
From: jimjjewett at gmail.com (Jim Jewett)
Date: Tue, 26 Apr 2011 10:03:47 -0400
Subject: [Python-Dev] [Python-checkins] cpython (3.2): Issue #11919: try
 to fix test_imp failure on some buildbots.
In-Reply-To: <E1QERdb-00072U-Eh@dinsdale.python.org>
References: <E1QERdb-00072U-Eh@dinsdale.python.org>
Message-ID: <BANLkTikR+wNKCXhTVM4ihp_8w0vv69U5mw@mail.gmail.com>

This seems to be changing what is tested -- are you saying that
filenames with an included directory name are not intended to be
supported?

On 4/25/11, antoine.pitrou <python-checkins at python.org> wrote:
> http://hg.python.org/cpython/rev/2f2c7eb27437
> changeset:   69556:2f2c7eb27437
> branch:      3.2
> parent:      69554:77cf9e4b144b
> user:        Antoine Pitrou <solipsis at pitrou.net>
> date:        Mon Apr 25 21:39:49 2011 +0200
> summary:
>   Issue #11919: try to fix test_imp failure on some buildbots.
>
> files:
>   Lib/test/test_imp.py |  3 ++-
>   1 files changed, 2 insertions(+), 1 deletions(-)
>
>
> diff --git a/Lib/test/test_imp.py b/Lib/test/test_imp.py
> --- a/Lib/test/test_imp.py
> +++ b/Lib/test/test_imp.py
> @@ -171,8 +171,9 @@
>              support.rmtree(test_package_name)
>
>      def test_issue9319(self):
> +        path = os.path.dirname(__file__)
>          self.assertRaises(SyntaxError,
> -                          imp.find_module, "test/badsyntax_pep3120")
> +                          imp.find_module, "badsyntax_pep3120", [path])
>
>
>  class ReloadTests(unittest.TestCase):
>
> --
> Repository URL: http://hg.python.org/cpython
>

From solipsis at pitrou.net  Tue Apr 26 16:14:38 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Tue, 26 Apr 2011 16:14:38 +0200
Subject: [Python-Dev] [Python-checkins] cpython (3.2): Issue #11919: try
 to fix test_imp failure on some buildbots.
In-Reply-To: <BANLkTikR+wNKCXhTVM4ihp_8w0vv69U5mw@mail.gmail.com>
References: <E1QERdb-00072U-Eh@dinsdale.python.org>
	<BANLkTikR+wNKCXhTVM4ihp_8w0vv69U5mw@mail.gmail.com>
Message-ID: <1303827278.3518.11.camel@localhost.localdomain>

On Tuesday 26 April 2011 at 10:03 -0400, Jim Jewett wrote:
> This seems to be changing what is tested -- are you saying that
> filenames with an included directory name are not intended to be
> supported?

I don't know, but that's not the point of this very test.
(I also find it a bit surprising that find_module() would accept a
module name - and not a filename - containing a slash and treat it as
some kind of directory path)

Regards

Antoine.



From merwok at netwok.org  Tue Apr 26 17:23:47 2011
From: merwok at netwok.org (=?UTF-8?B?w4lyaWMgQXJhdWpv?=)
Date: Tue, 26 Apr 2011 17:23:47 +0200
Subject: [Python-Dev] Tip for hg merge
In-Reply-To: <4DB4E0A7.4080506@jcea.es>
References: <E1QEAu8-0006Rn-UZ@dinsdale.python.org> <4DB4E0A7.4080506@jcea.es>
Message-ID: <4DB6E383.2080705@netwok.org>

Hi,

> If not, next developer trying to merge will find some other
> unrelated code to merge, and she doesn't have the context knowledge to
> know what to do :-).

Here's a useful tip: instead of merging pulled changesets with your
branch, do the reverse.  That is:

$ hg pull
$ hg heads .  # get only heads for the checked-out branch
$ hg up other-head
$ hg merge

Now instead of merging unknown code into your checkout, you will merge
the code added by your unpushed changesets to the other code.  If you're
using a three-way file merge tool, it is your code that will be in the
"other" pane, not the unknown code.

Regards

From moloney at ohsu.edu  Tue Apr 26 19:17:18 2011
From: moloney at ohsu.edu (Brendan Moloney)
Date: Tue, 26 Apr 2011 10:17:18 -0700
Subject: [Python-Dev] Allowing import star with namespaces
Message-ID: <5E25C96030E66B44B9CFAA95D3DE5919351310A794@EX-MB08.ohsu.edu>

We all know that doing:

> from pkg import *

is bad because it obliterates the 'pkg' namespace. So why not allow something like:

> import pkg.*

This would still be helpful for interactive sessions while keeping namespaces around.

Sorry if this has been brought up before, my searching didn't find anything relevant in the archives.

Thanks,
Brendan

From steve at pearwood.info  Tue Apr 26 19:48:13 2011
From: steve at pearwood.info (Steven D'Aprano)
Date: Wed, 27 Apr 2011 03:48:13 +1000
Subject: [Python-Dev] Allowing import star with namespaces
In-Reply-To: <5E25C96030E66B44B9CFAA95D3DE5919351310A794@EX-MB08.ohsu.edu>
References: <5E25C96030E66B44B9CFAA95D3DE5919351310A794@EX-MB08.ohsu.edu>
Message-ID: <4DB7055D.1010606@pearwood.info>

Brendan Moloney wrote:
> We all know that doing:
> 
>> from pkg import *
> 
> is bad because it obliterates the 'pkg' namespace. So why not allow something like:

I don't quite know what you mean by obliterating the pkg namespace, but 
if my guess is correct, you're wrong. One of the problems with import * 
is that it (potentially) obliterates the caller's namespace, not pkg. 
That is, if you have a function spam(), and you do "from module import 
*", and module also has spam(), it blows away your function. The pkg 
namespace isn't touched -- it's just unavailable to the caller.

  >> import pkg.*
> 
> This would still be helpful for interactive sessions while keeping namespaces around.

I don't understand what the difference between that and just "import 
pkg" would be.


By the way, this sort of question should probably go to the python-ideas 
mailing list for any extended discussion.



-- 
Steven

From ethan at stoneleaf.us  Tue Apr 26 20:03:52 2011
From: ethan at stoneleaf.us (Ethan Furman)
Date: Tue, 26 Apr 2011 11:03:52 -0700
Subject: [Python-Dev] Allowing import star with namespaces
In-Reply-To: <5E25C96030E66B44B9CFAA95D3DE5919351310A794@EX-MB08.ohsu.edu>
References: <5E25C96030E66B44B9CFAA95D3DE5919351310A794@EX-MB08.ohsu.edu>
Message-ID: <4DB70908.2020908@stoneleaf.us>

Brendan Moloney wrote:
> We all know that doing:
> 
> --> from pkg import *
> 
> is bad because it obliterates the 'pkg' namespace.

The strongest reason for not doing this is that it pollutes the current 
namespace, not that it obliterates the 'pkg' namespace.

> So why not allow something like:
> 
> --> import pkg.*


How would that be different from

--> import pkg

?

If you want convenience for interactive work, you can always:

--> import pkg
--> from pkg import *

and then have the best (and worst!) of both techniques.

~Ethan~

From solipsis at pitrou.net  Tue Apr 26 20:00:03 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Tue, 26 Apr 2011 20:00:03 +0200
Subject: [Python-Dev] cpython: test_logging coverage improvements.
References: <E1QEmI4-0008Dl-9F@dinsdale.python.org>
Message-ID: <20110426200003.4d24c67b@pitrou.net>

On Tue, 26 Apr 2011 19:43:12 +0200
vinay.sajip <python-checkins at python.org> wrote:

> http://hg.python.org/cpython/rev/ababe8a73327
> changeset:   69575:ababe8a73327
> user:        Vinay Sajip <vinay_sajip at yahoo.co.uk>
> date:        Tue Apr 26 18:43:05 2011 +0100
> summary:
>   test_logging coverage improvements.

Apparently produces some failures:

======================================================================
FAIL: test_time (test.test_logging.FormatterTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/home/buildbot/buildarea/3.x.krah-freebsd/build/Lib/test/test_logging.py", line 2238, in test_time
    self.assertEqual(f.formatTime(r), '1993-04-21 08:03:00,123')
AssertionError: '1993-04-21 09:03:00,123' != '1993-04-21 08:03:00,123'
- 1993-04-21 09:03:00,123
?             ^
+ 1993-04-21 08:03:00,123
?             ^

(http://www.python.org/dev/buildbot/all/builders/AMD64%20FreeBSD%208.2%203.x/builds/121/steps/test/logs/stdio)

Regards

Antoine.



From garcia.marc at gmail.com  Tue Apr 26 20:37:04 2011
From: garcia.marc at gmail.com (Marc Garcia)
Date: Tue, 26 Apr 2011 20:37:04 +0200
Subject: [Python-Dev] Simple XML-RPC server over SSL/TLS
Message-ID: <BANLkTinDGtWZsDPZ37U5_zqw9Aio-CpeXw@mail.gmail.com>

Hi there,

I'm working on a project where I'm using Python's simple XML-RPC server [1]
on Python 3.x. I need to use it over TLS, which is not possible directly,
but it's pretty simple to implement by extending a few classes of the
standard library.
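
(For the curious, a minimal sketch of the kind of extension I mean, based on
Python 3.2-era APIs; the class name and certificate file names are made up,
and this is not an existing stdlib API:)

    import ssl
    from xmlrpc.server import SimpleXMLRPCServer

    class TLSXMLRPCServer(SimpleXMLRPCServer):
        def __init__(self, addr, certfile, keyfile, **kwargs):
            SimpleXMLRPCServer.__init__(self, addr, **kwargs)
            # Wrap the already-bound listening socket; connections returned
            # by accept() are then TLS-wrapped as well.
            self.socket = ssl.wrap_socket(self.socket, certfile=certfile,
                                          keyfile=keyfile, server_side=True)

    # server = TLSXMLRPCServer(('localhost', 8443), 'server.crt', 'server.key')
    # server.register_function(pow)
    # server.serve_forever()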

But what I would like to know is whether there is any reason why XML-RPC
can't optionally work over TLS/SSL using Python's ssl module. I'll create a
ticket and send a patch, but I was wondering whether there was a reason why
this was not implemented.

Cheers,
  Marc

1. http://docs.python.org/dev/library/xmlrpc.server.html
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110426/21b717de/attachment.html>

From moloney at ohsu.edu  Tue Apr 26 20:35:00 2011
From: moloney at ohsu.edu (Brendan Moloney)
Date: Tue, 26 Apr 2011 11:35:00 -0700
Subject: [Python-Dev] Allowing import star with namespaces
In-Reply-To: <5E25C96030E66B44B9CFAA95D3DE5919351310A794@EX-MB08.ohsu.edu>
References: <5E25C96030E66B44B9CFAA95D3DE5919351310A794@EX-MB08.ohsu.edu>
Message-ID: <5E25C96030E66B44B9CFAA95D3DE5919351310A79C@EX-MB08.ohsu.edu>

Ethan Furman wrote:
> The strongest reason for not doing this is that it pollutes the current 
> namespace, not that it obliterates the 'pkg' namespace.

Sorry, I phrased that badly.  When I said "obliterates the 'pkg' namespace" I was referring to dumping the 'pkg' namespace into the current namespace (polluting it, as you would say).

> How would that be different from
> --> import pkg

Because that does not import all of the (public) modules and packages under 'pkg'. For example, scipy has a subpackage 'linalg'.  If I just do 'import scipy' then I cannot refer to 'scipy.linalg' until I do 'import scipy.linalg'.
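
(Schematically, with a made-up package 'pkg' that has a subpackage 'pkg.sub'
not imported by the package's own __init__:)

    import pkg
    # pkg.sub would raise AttributeError here: importing the parent package
    # does not import or bind its subpackages.
    import pkg.sub
    pkg.sub        # now bound as an attribute of the parent package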


Steven D'Aprano wrote:
> By the way, this sort of question should probably go to the python-ideas 
> mailing list for any extended discussion.

Sorry, didn't realize that would be the more appropriate list. 


Thanks,
Brendan



From ethan at stoneleaf.us  Tue Apr 26 21:32:40 2011
From: ethan at stoneleaf.us (Ethan Furman)
Date: Tue, 26 Apr 2011 12:32:40 -0700
Subject: [Python-Dev] Issue Tracker
In-Reply-To: <4D90EA06.3030003@stoneleaf.us>
References: <4D90EA06.3030003@stoneleaf.us>
Message-ID: <4DB71DD8.4070506@stoneleaf.us>

Okay, I finally found a little time and got roundup installed and operating.

My only major complaint at this point is that the issue messages are
presented in top-post format (argh).

Does anyone know off the top of their head what to change to put roundup
in bottom-post (chronological) format?

TIA!

~Ethan~

From victor.stinner at haypocalc.com  Tue Apr 26 22:42:33 2011
From: victor.stinner at haypocalc.com (Victor Stinner)
Date: Tue, 26 Apr 2011 22:42:33 +0200
Subject: [Python-Dev] [Python-checkins] cpython (3.2): Issue #11919: try
 to fix test_imp failure on some buildbots.
In-Reply-To: <BANLkTikR+wNKCXhTVM4ihp_8w0vv69U5mw@mail.gmail.com>
References: <E1QERdb-00072U-Eh@dinsdale.python.org>
	<BANLkTikR+wNKCXhTVM4ihp_8w0vv69U5mw@mail.gmail.com>
Message-ID: <1303850553.1030.2.camel@marge>

On Tuesday 26 April 2011 at 10:03 -0400, Jim Jewett wrote:
> This seems to be changing what is tested -- are you saying that
> filenames with an included directory name are not intended to be
> supported?

The test checks the Python parser, not the imp module :-)

I don't understand why: sometimes, find_module() accepts a (relative)
path, sometimes it doesn't.

Victor


From ezio.melotti at gmail.com  Wed Apr 27 03:02:07 2011
From: ezio.melotti at gmail.com (Ezio Melotti)
Date: Wed, 27 Apr 2011 04:02:07 +0300
Subject: [Python-Dev] Issue Tracker
In-Reply-To: <4DB71DD8.4070506@stoneleaf.us>
References: <4D90EA06.3030003@stoneleaf.us> <4DB71DD8.4070506@stoneleaf.us>
Message-ID: <4DB76B0F.1040206@gmail.com>

On 26/04/2011 22.32, Ethan Furman wrote:
> Okay, I finally found a little time and got roundup installed and 
> operating.
>
> Only major complaint at this point is that the issue messages are 
> presented in top-post format (argh).
>
> Does anyone know off the top of one's head what to change to put 
> roundup in bottom-post (chronological) format?
>
> TIA!
>
> ~Ethan~
>
>
See line 309 of 
http://svn.python.org/view/tracker/instances/python-dev/html/issue.item.html?view=markup
If you have other questions about Roundup see 
https://lists.sourceforge.net/lists/listinfo/roundup-users

Best Regards,
Ezio Melotti

From hrvoje.niksic at avl.com  Wed Apr 27 11:37:46 2011
From: hrvoje.niksic at avl.com (Hrvoje Niksic)
Date: Wed, 27 Apr 2011 11:37:46 +0200
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
Message-ID: <4DB7E3EA.3030208@avl.com>

The other day I was surprised to learn this:

 >>> nan = float('nan')
 >>> nan == nan
False
 >>> [nan] == [nan]
True                  # also True in tuples, dicts, etc.

# also:
 >>> l = [nan]
 >>> nan in l
True
 >>> l.index(nan)
0
 >>> l[0] == nan
False

The identity test is not in container comparators, but in 
PyObject_RichCompareBool:

     /* Quick result when objects are the same.
        Guarantees that identity implies equality. */
     if (v == w) {
         if (op == Py_EQ)
             return 1;
         else if (op == Py_NE)
             return 0;
     }

The guarantee referred to in the comment is not only (AFAICT) 
undocumented, but contradicts the documentation, which states that the 
result should be the "equivalent of o1 op o2".

Calling PyObject_RichCompareBool is inconsistent with calling 
PyObject_RichCompare and converting its result to bool manually, 
something that wrappers (C++) and generators (cython) might reasonably 
want to do themselves, for various reasons.

If this is considered a bug, I can open an issue.

Hrvoje

From lukasz at langa.pl  Wed Apr 27 13:31:16 2011
From: lukasz at langa.pl (=?iso-8859-2?Q?=A3ukasz_Langa?=)
Date: Wed, 27 Apr 2011 13:31:16 +0200
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <4DB7E3EA.3030208@avl.com>
References: <4DB7E3EA.3030208@avl.com>
Message-ID: <2952454F-7210-4177-9E78-DAD84276687D@langa.pl>

Message written by Hrvoje Niksic on 2011-04-27 at 11:37:

> The other day I was surprised to learn this:
> 
> >>> nan = float('nan')
> >>> nan == nan
> False
> >>> [nan] == [nan]
> True                  # also True in tuples, dicts, etc.
> 
> # also:
> >>> l = [nan]
> >>> nan in l
> True
> >>> l.index(nan)
> 0
> >>> l[0] == nan
> False
> 

This surprises me as well. I guess this is all related to the fact that:
>>> nan is nan
True

Have a look at this as well:

>>> inf = float('inf')
>>> inf == inf
True
>>> [inf] == [inf]
True
>>> l = [inf]
>>> inf in l
True
>>> l.index(inf)
0
>>> l[0] == inf
True

# Or even:
>>> inf+1 == inf-1
True

For the infinity part, I believe this is related to the funky IEEE 754 standard. I found
some discussion about this here: http://compilers.iecc.com/comparch/article/98-07-134

-- 
Best regards,
Łukasz Langa

From ncoghlan at gmail.com  Wed Apr 27 14:20:46 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 27 Apr 2011 22:20:46 +1000
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <2952454F-7210-4177-9E78-DAD84276687D@langa.pl>
References: <4DB7E3EA.3030208@avl.com>
	<2952454F-7210-4177-9E78-DAD84276687D@langa.pl>
Message-ID: <BANLkTik8HuLFXiJq4o2GO+G0fv_GFoFh_g@mail.gmail.com>

2011/4/27 Łukasz Langa <lukasz at langa.pl>:
> # Or even:
>>>> inf+1 == inf-1
> True
>
> For the infinity part, I believe this is related to the funky IEEE 754 standard. I found
> some discussion about this here: http://compilers.iecc.com/comparch/article/98-07-134

The inf behaviour is fine (inf != inf only when you start talking
about aleph levels, and IEEE 754 doesn't handle those).

It's specifically `nan` that is problematic, as it is one of the very
few cases that breaks the reflexivity of equality.

That said, the current behaviour was chosen deliberately so that
containers could cope with `nan` at least somewhat gracefully:
http://bugs.python.org/issue4296

Issue 10912 added an explicit note about this behaviour to the 3.x
series documentation, but that has not as yet been backported to 2.7
(I reopened the issue to request such a backport).

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From raymond.hettinger at gmail.com  Wed Apr 27 16:39:49 2011
From: raymond.hettinger at gmail.com (Raymond Hettinger)
Date: Wed, 27 Apr 2011 07:39:49 -0700
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <4DB7E3EA.3030208@avl.com>
References: <4DB7E3EA.3030208@avl.com>
Message-ID: <633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>


On Apr 27, 2011, at 2:37 AM, Hrvoje Niksic wrote:

> The other day I was surprised to learn this:
> 
> >>> nan = float('nan')
> >>> nan == nan
> False
> >>> [nan] == [nan]
> True                  # also True in tuples, dicts, etc.

Would you also be surprised if you put an object in a dictionary but couldn't get it out?  Or added it to a list but its count was zero?

Identity-implies-equality is necessary so that classes can maintain their invariants and so that programmers can reason about their code.  It is not just in PyObject_RichCompareBool, it is deeply embedded in the language (the logic inside dicts for example).  It is not a short-cut, it is a way of making sure that internally we can count on equality relations being reflexive, symmetric, and transitive.  A programmer needs to be able to make basic deductions such as the relationship between the two forms of the in-operator:   for elem in somelist:  assert elem in somelist  # this should never fail.

What surprises me is that anyone gets surprised by anything when experimenting with an object that isn't equal to itself.  It is roughly in the same category as creating a __hash__ that has no relationship to __eq__ or making self-referencing sets or setting False,True=1,0 in python 2.  See http://bertrandmeyer.com/2010/02/06/reflexivity-and-other-pillars-of-civilization/ for a nice blog post on the subject.


Raymond



-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110427/efa56ed3/attachment.html>

From guido at python.org  Wed Apr 27 16:53:36 2011
From: guido at python.org (Guido van Rossum)
Date: Wed, 27 Apr 2011 07:53:36 -0700
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>
References: <4DB7E3EA.3030208@avl.com>
	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>
Message-ID: <BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>

On Wed, Apr 27, 2011 at 7:39 AM, Raymond Hettinger
<raymond.hettinger at gmail.com> wrote:
>
> On Apr 27, 2011, at 2:37 AM, Hrvoje Niksic wrote:
>
> The other day I was surprised to learn this:
>
>>>> nan = float('nan')
>>>> nan == nan
> False
>>>> [nan] == [nan]
> True                  # also True in tuples, dicts, etc.
>
> Would also be surprised if you put an object in a dictionary but couldn't
> get it out?  Or added it to a list but its count was zero?
> Identity-implies-equality is necessary so that classes can maintain their
> invariants and so that programmers can reason about their code.  It is not
> just in PyObject_RichCompareBool, it is deeply embedded in the language (the
> logic inside dicts for example).  It is not a short-cut, it is a way of
> making sure that internally we can count on equality relations reflexive,
> symmetric, and transitive.  A programmer needs to be able to make basic
> deductions such as the relationship between the two forms of the
> in-operator:  for elem in somelist:  assert elem in somelist  # this should
> never fail.
> What surprises me is that anyone gets surprised by anything when
> experimenting with an object that isn't equal to itself.  It is roughly in
> the same category as creating a __hash__ that has no relationship to __eq__
> or making self-referencing sets or setting False,True=1,0 in python 2.
> See http://bertrandmeyer.com/2010/02/06/reflexivity-and-other-pillars-of-civilization/ for
> a nice blog post on the subject.

Maybe we should just call off the odd NaN comparison behavior?

-- 
--Guido van Rossum (python.org/~guido)

From ncoghlan at gmail.com  Wed Apr 27 17:31:15 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 28 Apr 2011 01:31:15 +1000
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>
	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>
	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
Message-ID: <BANLkTik=xtamqNMxrnXfq2cFk=Nx+sbb-A@mail.gmail.com>

On Thu, Apr 28, 2011 at 12:53 AM, Guido van Rossum <guido at python.org> wrote:
>> What surprises me is that anyone gets surprised by anything when
>> experimenting with an object that isn't equal to itself.  It is roughly in
>> the same category as creating a __hash__ that has no relationship to __eq__
>> or making self-referencing sets or setting False,True=1,0 in python 2.
>> See http://bertrandmeyer.com/2010/02/06/reflexivity-and-other-pillars-of-civilization/ for
>> a nice blog post on the subject.
>
> Maybe we should just call off the odd NaN comparison behavior?

Rereading Meyer's article (I read it last time this came up, but it's
a nice piece, so I ended up going over it again this time) the quote
that leapt out at me was this one:

"""A few of us who had to examine the issue recently think that ?
whatever the standard says at the machine level ? a programming
language should support the venerable properties that equality is
reflexive and that assignment yields equality.

Every programming language should decide this on its own; for Eiffel
we think this should be the specification. Do you agree?"""

Currently, Python tries to split the difference: "==" and "!=" follow
IEEE754 for NaN, but most other operations involving builtin types
rely on the assumption that equality is always reflexive (and IEEE754
be damned).

What that means is that "correct" implementations of methods like
__contains__, __eq__, __ne__, index() and count() on containers should
be using "x is y or x == y" to enforce reflexivity, but most such code
does not (e.g. our own collections.abc.Sequence implementation gets
the ones it implements wrong, and hence Sequence-based containers
will handle NaN in a way that differs from the builtin containers).
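
(A minimal sketch of what "enforcing reflexivity" looks like in pure Python;
the class here is made up for illustration:)

    class ReflexiveSequence:
        def __init__(self, items):
            self._items = list(items)
        def __contains__(self, value):
            # "x is y or x == y": identity implies equality, so an element
            # that is not equal to itself (e.g. a NaN) is still found.
            return any(value is item or value == item for item in self._items)

    nan = float('nan')
    assert nan in ReflexiveSequence([nan])   # True, even though nan == nan is False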

And none of that is actually documented anywhere (other than a
behavioural note in the 3.x documentation for
PyObject_RichCompareBool), so it's currently just an implementation
detail of CPython that most of the builtin containers behave that way
in practice.

Given the status quo, what would seem to be the path of least resistance is to:
- articulate in the language specification which container special
methods are expected to enforce reflexivity of equality (even for
non-reflexive types)
- articulate in the library specification which ordinary container
methods enforce reflexivity of equality
- fix any standard library containers that don't enforce reflexivity
to do so where appropriate (e.g. collections.abc.Sequence)

Types with a non-reflexive notion of equality still wouldn't play
nicely with containers that didn't enforce reflexivity where
appropriate, but bad interactions between 3rd party types aren't really
something we can prevent.

Backing away from having float and decimal.Decimal respect the IEEE754
notion of NaN inequality at this late stage of the game seems like one
for the "too hard" basket. It also wouldn't achieve much, since we
want the builtin containers to preserve their invariants even for 3rd
party types with a non-reflexive notion of equality.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From alexander.belopolsky at gmail.com  Wed Apr 27 17:43:49 2011
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Wed, 27 Apr 2011 11:43:49 -0400
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>
	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>
	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
Message-ID: <BANLkTi==N_X43yGwSa39qzTg4TKoUtK-Uw@mail.gmail.com>

On Wed, Apr 27, 2011 at 10:53 AM, Guido van Rossum <guido at python.org> wrote:
..
> Maybe we should just call off the odd NaN comparison behavior?

+1

There was a long thread on this topic last year:

http://mail.python.org/pipermail/python-dev/2010-March/098832.html

I was trying to find a rationale for the non-reflexivity of equality in
IEEE, and although it is often mentioned that this property simplifies
some numerical algorithms, I have yet to find an important algorithm
that would benefit from it.  I also believe that the long history of
suboptimal hardware implementations of NaN arithmetic has stifled the
development of practical applications.

High-performance applications that rely on non-reflexivity will still
have the option of using the ctypes.c_float type or NumPy.

From ncoghlan at gmail.com  Wed Apr 27 18:01:24 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 28 Apr 2011 02:01:24 +1000
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <BANLkTi==N_X43yGwSa39qzTg4TKoUtK-Uw@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>
	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>
	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
	<BANLkTi==N_X43yGwSa39qzTg4TKoUtK-Uw@mail.gmail.com>
Message-ID: <BANLkTikJLBRg73q9EcWJb6=J9dtk3tk=0Q@mail.gmail.com>

On Thu, Apr 28, 2011 at 1:43 AM, Alexander Belopolsky
<alexander.belopolsky at gmail.com> wrote:
> High performance applications that rely on non-reflexivity will still
> have an option of using ctypes.c_float type or NumPy.

However, that's exactly why I don't see any reason to reverse
course on having float() and Decimal() follow IEEE754 semantics,
regardless of how irritating we may find those semantics to be.

Since we allow types to customise __eq__ and __ne__ with non-standard
behaviour, if we want to permit *any* type to have a non-reflexive
notion of equality, then we need to write our container types to
enforce reflexivity when appropriate. Many of the builtin types
already do this, by virtue of it being built in to RichCompareBool.
It's now a matter of documenting that properly and updating the
non-conformant types accordingly.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From alexander.belopolsky at gmail.com  Wed Apr 27 18:05:46 2011
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Wed, 27 Apr 2011 12:05:46 -0400
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <BANLkTik=xtamqNMxrnXfq2cFk=Nx+sbb-A@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>
	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>
	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
	<BANLkTik=xtamqNMxrnXfq2cFk=Nx+sbb-A@mail.gmail.com>
Message-ID: <BANLkTinyvcsVHZMg4Bck16OyFwot=tw=Tw@mail.gmail.com>

On Wed, Apr 27, 2011 at 11:31 AM, Nick Coghlan <ncoghlan at gmail.com> wrote:
..
> Backing away from having float and decimal.Decimal respect the IEEE754
> notion of NaN inequality at this late stage of the game seems like one
> for the "too hard" basket.

Why?  float('nan') has always been in the use-at-your-own-risk
territory despite recent efforts to support it across Python
platforms.   I cannot speak about decimal.Decimal (and decimal is a
different story because it is tied to a particular standard), but the
only use of non-reflexivity for float NaNs I've seen was the use of x != x
instead of math.isnan(x).
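
That idiom only works because NaN is the one float value that is
unequal to itself, e.g. (illustrative only):

>>> import math
>>> x = float('nan')
>>> x != x           # the old idiom for detecting a NaN
True
>>> math.isnan(x)    # the explicit spelling
True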

> It also wouldn't achieve much, since we
> want the builtin containers to preserve their invariants even for 3rd
> party types with a non-reflexive notion of equality.

These are orthogonal issues.   A third party type that plays with
__eq__ and other basic operations can easily break stdlib algorithms
no matter what we do.  Therefore it is important to document the
properties of the types that each algorithm relies on.  It is more
important, however, that stdlib types do not break third parties'
algorithms.   I don't think I've ever seen a third party type that
deliberately defines a non-reflexive __eq__ except as a side effect of
using float attributes or C float members in the underlying structure.
 (Yes, decimal is a counter-example, but this is a very special case.)
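
A typical accidental case is a class that simply delegates equality to
a float attribute (a hypothetical sketch):

class Measurement:
    def __init__(self, value):
        self.value = float(value)

    def __eq__(self, other):
        # Non-reflexivity sneaks in: if self.value is a NaN, then
        # m == m is False even though m is m.
        return isinstance(other, Measurement) and self.value == other.value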

From alexander.belopolsky at gmail.com  Wed Apr 27 18:10:05 2011
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Wed, 27 Apr 2011 12:10:05 -0400
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <Pine.GSO.4.64.1104271159440.26449@core.cs.uwaterloo.ca>
References: <4DB7E3EA.3030208@avl.com>
	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>
	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
	<BANLkTi==N_X43yGwSa39qzTg4TKoUtK-Uw@mail.gmail.com>
	<Pine.GSO.4.64.1104271159440.26449@core.cs.uwaterloo.ca>
Message-ID: <BANLkTinpOOcUqJ_v-dO6p=th8kgDDiLciw@mail.gmail.com>

On Wed, Apr 27, 2011 at 12:05 PM, Isaac Morland <ijmorlan at uwaterloo.ca> wrote:
..
> Of course, the definition of math.isnan cannot then be by checking its
> argument by comparison with itself - it would have to check the appropriate
> bits of the float representation.

math.isnan() is implemented in C and does not rely on float.__eq__ in any way.

From ijmorlan at uwaterloo.ca  Wed Apr 27 18:05:12 2011
From: ijmorlan at uwaterloo.ca (Isaac Morland)
Date: Wed, 27 Apr 2011 12:05:12 -0400 (EDT)
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <BANLkTi==N_X43yGwSa39qzTg4TKoUtK-Uw@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>
	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>
	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
	<BANLkTi==N_X43yGwSa39qzTg4TKoUtK-Uw@mail.gmail.com>
Message-ID: <Pine.GSO.4.64.1104271159440.26449@core.cs.uwaterloo.ca>

On Wed, 27 Apr 2011, Alexander Belopolsky wrote:

> High performance applications that rely on non-reflexivity will still
> have an option of using ctypes.c_float type or NumPy.

Python could also provide IEEE-754 equality as a function (perhaps in 
"math"), something like:

from math import isnan

def ieee_equal(a, b):
    return a == b and not isnan(a) and not isnan(b)

Of course, the definition of math.isnan cannot then be by checking its 
argument by comparison with itself - it would have to check the 
appropriate bits of the float representation.
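
Such a bit-level check is easy enough to sketch in pure Python,
assuming the usual IEEE-754 binary64 layout (illustrative only; the
real math.isnan is written in C):

import struct

def isnan_bits(x):
    # A binary64 NaN has all exponent bits set and a non-zero mantissa.
    bits = struct.unpack('<Q', struct.pack('<d', x))[0]
    return (bits & 0x7ff0000000000000) == 0x7ff0000000000000 and \
           (bits & 0x000fffffffffffff) != 0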

Isaac Morland			CSCF Web Guru
DC 2554C, x36650		WWW Software Specialist

From solipsis at pitrou.net  Wed Apr 27 18:27:22 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Wed, 27 Apr 2011 18:27:22 +0200
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
References: <4DB7E3EA.3030208@avl.com>
	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>
	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
	<BANLkTi==N_X43yGwSa39qzTg4TKoUtK-Uw@mail.gmail.com>
	<Pine.GSO.4.64.1104271159440.26449@core.cs.uwaterloo.ca>
Message-ID: <20110427182722.1f933d92@pitrou.net>

On Wed, 27 Apr 2011 12:05:12 -0400 (EDT)
Isaac Morland <ijmorlan at uwaterloo.ca> wrote:
> On Wed, 27 Apr 2011, Alexander Belopolsky wrote:
> 
> > High performance applications that rely on non-reflexivity will still
> > have an option of using ctypes.c_float type or NumPy.
> 
> Python could also provide IEEE-754 equality as a function (perhaps in 
> "math"), something like:
> 
> def ieee_equal (a, b):
>  	return a == b and not isnan (a) and not isnan (b)

+1 (perhaps call it math.eq()).

Regards

Antoine.



From raymond.hettinger at gmail.com  Wed Apr 27 18:28:35 2011
From: raymond.hettinger at gmail.com (Raymond Hettinger)
Date: Wed, 27 Apr 2011 09:28:35 -0700
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>
	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>
	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
Message-ID: <0901E43C-8EC2-475B-9D79-440D77B17FFF@gmail.com>


On Apr 27, 2011, at 7:53 AM, Guido van Rossum wrote:

> Maybe we should just call off the odd NaN comparison behavior?

I'm reluctant to suggest changing such enshrined behavior.

ISTM, the current state of affairs is reasonable.  
Exotic objects are allowed to generate exotic behaviors
but consumers of those objects are free to ignore some
of those behaviors by making reasonable assumptions
about how an object should behave.

It's possible to make objects where the __hash__ doesn't
correspond to __eq__; they just won't behave well with
hash tables.  Likewise, it's possible for a sequence to
define a __len__ that is different from its true length; it
just won't behave well with the various pieces of code
that assume collections are equal if the lengths are unequal.

All of this seems reasonable to me.


Raymond



From ijmorlan at uwaterloo.ca  Wed Apr 27 18:40:04 2011
From: ijmorlan at uwaterloo.ca (Isaac Morland)
Date: Wed, 27 Apr 2011 12:40:04 -0400 (EDT)
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <20110427182722.1f933d92@pitrou.net>
References: <4DB7E3EA.3030208@avl.com>
	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>
	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
	<BANLkTi==N_X43yGwSa39qzTg4TKoUtK-Uw@mail.gmail.com>
	<Pine.GSO.4.64.1104271159440.26449@core.cs.uwaterloo.ca>
	<20110427182722.1f933d92@pitrou.net>
Message-ID: <Pine.GSO.4.64.1104271237370.26449@core.cs.uwaterloo.ca>

On Wed, 27 Apr 2011, Antoine Pitrou wrote:

> Isaac Morland <ijmorlan at uwaterloo.ca> wrote:
>>
>> Python could also provide IEEE-754 equality as a function (perhaps in
>> "math"), something like:
>>
>> def ieee_equal (a, b):
>>  	return a == b and not isnan (a) and not isnan (b)
>
> +1 (perhaps call it math.eq()).

Alexander Belopolsky pointed out to me (thanks!) that isnan is implemented 
in C so my caveat about the implementation of isnan is not an issue.  But 
then that made me realize that ieee_equal (or just "eq" if that's 
preferable) probably ought to be implemented in C using a floating point 
comparison - i.e., use the processor implementation of the comparison 
operation.

Isaac Morland			CSCF Web Guru
DC 2554C, x36650		WWW Software Specialist

From ethan at stoneleaf.us  Wed Apr 27 19:05:45 2011
From: ethan at stoneleaf.us (Ethan Furman)
Date: Wed, 27 Apr 2011 10:05:45 -0700
Subject: [Python-Dev] Issue Tracker
In-Reply-To: <4DB76B0F.1040206@gmail.com>
References: <4D90EA06.3030003@stoneleaf.us> <4DB71DD8.4070506@stoneleaf.us>
	<4DB76B0F.1040206@gmail.com>
Message-ID: <4DB84CE9.1010502@stoneleaf.us>

Ezio Melotti wrote:
> On 26/04/2011 22.32, Ethan Furman wrote:
>> Okay, I finally found a little time and got roundup installed and 
>> operating.
>>
>> Only major complaint at this point is that the issue messages are 
>> presented in top-post format (argh).
>>
>> Does anyone know off the top of one's head what to change to put 
>> roundup in bottom-post (chronological) format?
>>
>> TIA!
>>
>> ~Ethan~
>>
>>
> See line 309 of 
> http://svn.python.org/view/tracker/instances/python-dev/html/issue.item.html?view=markup 
> 
> If you have other questions about Roundup see 
> https://lists.sourceforge.net/lists/listinfo/roundup-users

Thanks so much!  That was just what I needed.

~Ethan~

From alexander.belopolsky at gmail.com  Wed Apr 27 19:16:15 2011
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Wed, 27 Apr 2011 13:16:15 -0400
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <0901E43C-8EC2-475B-9D79-440D77B17FFF@gmail.com>
References: <4DB7E3EA.3030208@avl.com>
	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>
	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
	<0901E43C-8EC2-475B-9D79-440D77B17FFF@gmail.com>
Message-ID: <BANLkTikdpNu-3VWiXrMu6W-tv_HaygCEhA@mail.gmail.com>

On Wed, Apr 27, 2011 at 12:28 PM, Raymond Hettinger
<raymond.hettinger at gmail.com> wrote:
>
> On Apr 27, 2011, at 7:53 AM, Guido van Rossum wrote:
>
>> Maybe we should just call off the odd NaN comparison behavior?
>
> I'm reluctant to suggest changing such enshrined behavior.
>
> ISTM, the current state of affairs is reasonable.
> Exotic objects are allowed to generate exotic behaviors
> but consumers of those objects are free to ignore some
> of those behaviors by making reasonable assumptions
> about how an object should behave.

Unfortunately NaNs are not that exotic.  They can be silently produced
in calculations and lead to hard-to-find errors.  For example:

>>> x = 1e300*1e300
>>> x - x
nan

This means that every program dealing with float data has to detect
NaNs at every step and handle them correctly.  This in turn makes it
impossible to write efficient code that works equally well with floats
and integers.

Note that historically, Python was trying hard to prevent production
of non-finite floats.  AFAICT, none of the math functions would
produce inf or nan.   I am not sure why arithmetic operations are
different.  For example:

>>> 1e300*1e300
inf

but

>>> 1e300**2
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
OverflowError: (34, 'Result too large')

and

>>> math.pow(1e300,2)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
OverflowError: math range error

From raymond.hettinger at gmail.com  Wed Apr 27 19:40:22 2011
From: raymond.hettinger at gmail.com (Raymond Hettinger)
Date: Wed, 27 Apr 2011 10:40:22 -0700
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <BANLkTikdpNu-3VWiXrMu6W-tv_HaygCEhA@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>
	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>
	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
	<0901E43C-8EC2-475B-9D79-440D77B17FFF@gmail.com>
	<BANLkTikdpNu-3VWiXrMu6W-tv_HaygCEhA@mail.gmail.com>
Message-ID: <A496269A-5879-4A5B-BD2B-F78790D1CADF@gmail.com>


On Apr 27, 2011, at 10:16 AM, Alexander Belopolsky wrote:
> Unfortunately NaNs are not that exotic.  

They're exotic in the sense that they have the unusual property of not being equal to themselves.

Exotic (adj) strikingly strange or unusual


Raymond


-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110427/2e3f02bb/attachment.html>

From tjreedy at udel.edu  Wed Apr 27 19:44:30 2011
From: tjreedy at udel.edu (Terry Reedy)
Date: Wed, 27 Apr 2011 13:44:30 -0400
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>
	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
Message-ID: <ip9km0$ppm$1@dough.gmane.org>

On 4/27/2011 10:53 AM, Guido van Rossum wrote:
> On Wed, Apr 27, 2011 at 7:39 AM, Raymond Hettinger

 >> Identity-implies-equality is necessary so that classes can maintain 
 >> their invariants and so that programmers can reason about their code.
[snip]
>>   See http://bertrandmeyer.com/2010/02/06/reflexivity-and-other-pillars-of-civilization/ for
>> a nice blog post on the subject.

I carefully reread this, with the comments, and again came to the 
conclusion that the committee left us no *good* answer, only a choice 
between various more-or-less unsatisfactory answers. The current Python 
compromise may be as good as anything. In any case, I think it should be 
explicitly documented with an indexed paragraph, perhaps as follows:

"The IEEE-754 committee defined the float Not-a-Number (NaN) value as 
being incomparable with all other floats, including itself. This 
violates the math and logic rule that equality is reflexive, that 'a == 
a' is always True. And Python collection classes depend on that rule for 
their proper operation. So Python makes the following compromise. Direct 
equality comparisons involving NaN, such as "NaN = float('NaN'); NaN == 
ob", follow the IEEE-754 rule and return False. Indirect comparisons 
conducted internally as part of a collection operation, such as 'NaN in 
someset' or 'seq.count()' or 'somedict[x]', follow the reflexive rule 
and act as if 'NaN == NaN' were True. Most Python programmers will never 
see a NaN in real programs."
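
A short interactive session illustrating the compromise described above
(current CPython behaviour):

>>> nan = float('nan')
>>> nan == nan        # direct comparison: IEEE-754 rule
False
>>> nan in {nan}      # collection membership: reflexive rule
True
>>> [nan].count(nan)
1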

This might best be an entry in the Glossary under "NaN -- Not a Number". 
It should be the first reference for NaN in the General Index and linked 
to from the float() builtin and the float type's mentions of NaN.

> Maybe we should just call off the odd NaN comparison behavior?

Eiffel seems to have survived, though I do not know if it is used for 
numerical work. I wonder how much code would break and what the scipy 
folks would think. 3.0 would have been the time, though.

-- 
Terry Jan Reedy


From v+python at g.nevcal.com  Wed Apr 27 20:41:15 2011
From: v+python at g.nevcal.com (Glenn Linderman)
Date: Wed, 27 Apr 2011 11:41:15 -0700
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <BANLkTik=xtamqNMxrnXfq2cFk=Nx+sbb-A@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
	<BANLkTik=xtamqNMxrnXfq2cFk=Nx+sbb-A@mail.gmail.com>
Message-ID: <4DB8634B.6020508@g.nevcal.com>

On 4/27/2011 8:31 AM, Nick Coghlan wrote:
> What that means is that "correct" implementations of methods like
> __contains__, __eq__, __ne__, index() and count() on containers should
> be using "x is y or x == y" to enforce reflexivity, but most such code
> does not (e.g. our own collections.abc.Sequence implementation gets
> those of these that it implements wrong, and hence Sequence based
> containers will handle NaN in a way that differs from the builtin
> containers)

+1 to everything Nick said.

One issue that I don't fully understand: I know there is only one 
instance of None in Python, but I'm not sure where to discover whether 
there is only a single, or whether there can be multiple, instances of 
NaN or Inf.  The IEEE 754 spec is clear that there are multiple bit 
sequences that can be used to represent these, so I would hope that 
there can be, in fact, more than one value containing NaN (and Inf).

This would properly imply that a collection should correctly handle the 
case of storing multiple, different items using different NaN (and Inf) 
instances.  A dict, for example, should be able to hold hundreds of 
items with the index value of NaN.

The distinction between "is" and "==" would permit proper operation, and 
I believe that Python's "rebinding" of names to values rather than the 
copying of values to variables makes such a distinction possible to use 
in a correct manner.

Can someone confirm or explain this issue?

From robert.kern at gmail.com  Wed Apr 27 20:48:38 2011
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 27 Apr 2011 13:48:38 -0500
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <ip9km0$ppm$1@dough.gmane.org>
References: <4DB7E3EA.3030208@avl.com>	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
	<ip9km0$ppm$1@dough.gmane.org>
Message-ID: <ip9oe7$hgb$1@dough.gmane.org>

On 4/27/11 12:44 PM, Terry Reedy wrote:
> On 4/27/2011 10:53 AM, Guido van Rossum wrote:

>> Maybe we should just call off the odd NaN comparison behavior?
>
> Eiffel seems to have survived, though I do not know if it used for numerical
> work. I wonder how much code would break and what the scipy folks would think.

I suspect most of us would oppose changing it on general backwards-compatibility 
grounds rather than actually *liking* the current behavior. If the behavior 
changed with Python floats, we'd have to mull over whether we try to match that 
behavior with our scalar types (one of which subclasses from float) and our 
arrays. We would be either incompatible with Python or C, and we'd probably end 
up choosing Python to diverge from. It would make a mess, honestly. We already 
have to explain why equality is funky for arrays (arr1 == arr2 is a rich 
comparison that gives an array, not a bool, so we can't do containment tests for 
lists of arrays), so NaN is pretty easy to explain afterward.
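
For instance (an illustrative sketch with NumPy, not a claim about
every NumPy version):

>>> import numpy as np
>>> a = np.array([1.0, 2.0])
>>> a == a                        # elementwise result, not a bool
array([ True,  True])
>>> a in [a]                      # saved only by the identity shortcut
True
>>> a in [np.array([1.0, 2.0])]   # equal-valued but distinct array
Traceback (most recent call last):
  ...
ValueError: The truth value of an array with more than one element is ambiguous.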

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
  that is made terrible by our own mad attempt to interpret it as though it had
  an underlying truth."
   -- Umberto Eco


From tjreedy at udel.edu  Wed Apr 27 21:48:03 2011
From: tjreedy at udel.edu (Terry Reedy)
Date: Wed, 27 Apr 2011 15:48:03 -0400
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <4DB8634B.6020508@g.nevcal.com>
References: <4DB7E3EA.3030208@avl.com>	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>	<BANLkTik=xtamqNMxrnXfq2cFk=Nx+sbb-A@mail.gmail.com>
	<4DB8634B.6020508@g.nevcal.com>
Message-ID: <ip9rtk$8k8$1@dough.gmane.org>

On 4/27/2011 2:41 PM, Glenn Linderman wrote:

> One issue that I don't fully understand: I know there is only one
> instance of None in Python, but I'm not sure where to discover whether
> there is only a single, or whether there can be multiple, instances of
> NaN or Inf.

I am sure there are multiple instances with just one bit pattern, the 
same as other floats. Otherwise, float('nan') would have to either 
randomly or systematically choose from among the possibilities. Ugh.

There are functions in the math module that pull apart (and put 
together) floats.
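
(math.frexp() and math.ldexp(), for example, or float.hex() for an
exact textual form -- a quick illustration:)

>>> import math
>>> math.frexp(6.0)        # pull apart into mantissa and exponent
(0.75, 3)
>>> math.ldexp(0.75, 3)    # and put back together
6.0
>>> (1.5).hex()
'0x1.8000000000000p+0'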

 > The IEEE 754 spec is clear that there are multiple bit
> sequences that can be used to represent these,

Anyone actually interested in those should use C or possibly the math 
module float assembly function.

 > so I would hope that
> there can be, in fact, more than one value containing NaN (and Inf).

If you do not know which pattern is which, what use could such possibly be?

-- 
Terry Jan Reedy


From tjreedy at udel.edu  Wed Apr 27 21:51:15 2011
From: tjreedy at udel.edu (Terry Reedy)
Date: Wed, 27 Apr 2011 15:51:15 -0400
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <BANLkTik=xtamqNMxrnXfq2cFk=Nx+sbb-A@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
	<BANLkTik=xtamqNMxrnXfq2cFk=Nx+sbb-A@mail.gmail.com>
Message-ID: <ip9s3k$9q4$1@dough.gmane.org>

On 4/27/2011 11:31 AM, Nick Coghlan wrote:

> Currently, Python tries to split the difference: "==" and "!=" follow
> IEEE754 for NaN, but most other operations involving builtin types
> rely on the assumption that equality is always reflexive (and IEEE754
> be damned).
>
> What that means is that "correct" implementations of methods like
> __contains__, __eq__, __ne__, index() and count() on containers should
> be using "x is y or x == y" to enforce reflexivity, but most such code
> does not (e.g. our own collections.abc.Sequence implementation gets
> those of these that it implements wrong, and hence Sequence based
> containers will handle NaN in a way that differs from the builtin
> containers)
>
> And none of that is actually documented anywhere (other than a
> behavioural note in the 3.x documentation for
> PyObject_RichCompareBool), so it's currently just an implementation
> detail of CPython that most of the builtin containers behave that way
> in practice.

Which is why I proposed a Glossary entry in another post.

> Given the status quo, what would seem to be the path of least resistance is to:
> - articulate in the language specification which container special
> methods are expected to enforce reflexivity of equality (even for
> non-reflexive types)
> - articulate in the library specification which ordinary container
> methods enforce reflexivity of equality
> - fix any standard library containers that don't enforce reflexivity
> to do so where appropriate (e.g. collections.abc.Sequence)

+1 to making my proposed text consistently true if not now ;-).

> Backing away from having float and decimal.Decimal respect the IEEE754
> notion of NaN inequality at this late stage of the game seems like one
> for the "too hard" basket.

Robert Kern confirmed my suspicion about this relative to numpy.

 > It also wouldn't achieve much, since we
> want the builtin containers to preserve their invariants even for 3rd
> party types with a non-reflexive notion of equality.

Good point.

-- 
Terry Jan Reedy


From dickinsm at gmail.com  Wed Apr 27 23:04:56 2011
From: dickinsm at gmail.com (Mark Dickinson)
Date: Wed, 27 Apr 2011 22:04:56 +0100
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <4DB7E3EA.3030208@avl.com>
References: <4DB7E3EA.3030208@avl.com>
Message-ID: <BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>

On Wed, Apr 27, 2011 at 10:37 AM, Hrvoje Niksic <hrvoje.niksic at avl.com> wrote:
> The other day I was surprised to learn this:
>
>>>> nan = float('nan')
>>>> nan == nan
> False
>>>> [nan] == [nan]
> True                  # also True in tuples, dicts, etc.

That one surprises me a bit too:  I knew we were using
identity-then-equality checks for containment (nan in [nan]), but I
hadn't realised identity-then-equality was also used for the
item-by-item comparisons when comparing two lists.  It's defensible,
though: [nan] == [nan] should presumably produce the same result as
{nan} == {nan}, and the latter is a test that's arguably based on
containment (for sets s and t, s == t if each element of s is in t,
and vice versa).

I don't think any of this should change.  It seems to me that we've
currently got something approaching the best approximation to
consistency and sanity achievable, given the fundamental
incompatibility of (1) nan breaking reflexivity of equality and (2)
containment being based on equality.  That incompatibility is bound to
create inconsistencies somewhere along the line.

Declaring that 'nan == nan' should be True seems attractive in theory,
but I agree that it doesn't really seem like a realistic option in
terms of backwards compatibility and compatibility with other
mainstream languages.

Mark

From dickinsm at gmail.com  Wed Apr 27 23:15:46 2011
From: dickinsm at gmail.com (Mark Dickinson)
Date: Wed, 27 Apr 2011 22:15:46 +0100
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <4DB8634B.6020508@g.nevcal.com>
References: <4DB7E3EA.3030208@avl.com>
	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>
	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
	<BANLkTik=xtamqNMxrnXfq2cFk=Nx+sbb-A@mail.gmail.com>
	<4DB8634B.6020508@g.nevcal.com>
Message-ID: <BANLkTi=Podj_ntfprqrA=apm8kBcbmVZkw@mail.gmail.com>

On Wed, Apr 27, 2011 at 7:41 PM, Glenn Linderman <v+python at g.nevcal.com> wrote:
> One issue that I don't fully understand: I know there is only one instance
> of None in Python, but I'm not sure where to discover whether there is only
> a single, or whether there can be multiple, instances of NaN or Inf.  The
> IEEE 754 spec is clear that there are multiple bit sequences that can be
> used to represent these, so I would hope that there can be, in fact, more
> than one value containing NaN (and Inf).
>
> This would properly imply that a collection should correctly handle the case
> of storing multiple, different items using different NaN (and Inf)
> instances.  A dict, for example, should be able to hold hundreds of items
> with the index value of NaN.
>
> The distinction between "is" and "==" would permit proper operation, and I
> believe that Python's "rebinding" of names to values rather than the copying
> of values to variables makes such a distinction possible to use in a correct
> manner.

For infinities, there's no issue:  there are exactly two distinct
infinities (+inf and -inf), and they don't have any special properties
that affect membership tests.   Your float-keyed dict can contain both
+inf and -inf keys, or just one, or neither, in exactly the same way
that it can contain both +5.0 and -5.0 as keys, or just one, or
neither.

For nans, you *can* put multiple nans into a dictionary as separate
keys, but under the current rules the test for 'sameness' of two nan
keys becomes a test of object identity, not of bitwise equality.
Python takes no notice of the sign bits and 'payload' bits of a float
nan, except in operations like struct.pack and struct.unpack.  For
example:

>>> x, y = float('nan'), float('nan')
>>> d = {x: 1, y:2}
>>> x in d
True
>>> y in d
True
>>> d[x]
1
>>> d[y]
2

But using struct.pack, you can see that x and y are bitwise identical:

>>> import struct
>>> struct.pack('<d', x) == struct.pack('<d', y)
True


Mark

From vinay_sajip at yahoo.co.uk  Wed Apr 27 23:23:48 2011
From: vinay_sajip at yahoo.co.uk (Vinay Sajip)
Date: Wed, 27 Apr 2011 21:23:48 +0000 (UTC)
Subject: [Python-Dev] Socket servers in the test suite
Message-ID: <loom.20110427T230704-75@post.gmane.org>

I've been recently trying to improve the test coverage for the logging package,
and have got to a not unreasonable point:

logging/__init__.py 99% (96%)
logging/config.py 89% (85%)
logging/handlers.py 60% (54%)

where the figures in parentheses include branch coverage measurements.

I'm at the point where to appreciably increase coverage, I'd need to write some
test servers to exercise client code in SocketHandler, DatagramHandler and
HTTPHandler.

I notice there are no utility classes in test.support to help with this kind of
thing - would there be any mileage in adding such things? Of course I could add
test server code just to test_logging (which already contains some socket server
code to exercise the configuration functionality), but rolling a test server
involves boilerplate such as using a custom RequestHandler-derived class for
each application. I had in mind a more streamlined approach where you can just
pass a single callable to a server to handle requests, e.g. as outlined in

https://gist.github.com/945157

I'd be grateful for any comments about adding such functionality to e.g.
test.support.
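
Roughly what I have in mind (a simplified sketch of the idea in the
gist, not the gist's actual code):

import socketserver
import threading

class CallableHandlingServer(socketserver.ThreadingTCPServer):
    """TCP server that hands each connection to a plain callable."""
    allow_reuse_address = True

    def __init__(self, addr, handle_request):
        class Handler(socketserver.StreamRequestHandler):
            def handle(self):
                # The callable gets the connection's rfile/wfile and
                # does whatever the test needs.
                handle_request(self.rfile, self.wfile)
        super().__init__(addr, Handler)

    def start(self):
        t = threading.Thread(target=self.serve_forever)
        t.daemon = True
        t.start()
        return t

A test could then do something like
server = CallableHandlingServer(('localhost', 0), my_handler), call
server.start(), and tear it down with server.shutdown().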

Regards,

Vinay Sajip


From v+python at g.nevcal.com  Thu Apr 28 00:26:15 2011
From: v+python at g.nevcal.com (Glenn Linderman)
Date: Wed, 27 Apr 2011 15:26:15 -0700
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <BANLkTi=Podj_ntfprqrA=apm8kBcbmVZkw@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>	<BANLkTik=xtamqNMxrnXfq2cFk=Nx+sbb-A@mail.gmail.com>	<4DB8634B.6020508@g.nevcal.com>
	<BANLkTi=Podj_ntfprqrA=apm8kBcbmVZkw@mail.gmail.com>
Message-ID: <4DB89807.3080609@g.nevcal.com>

On 4/27/2011 2:15 PM, Mark Dickinson wrote:
> On Wed, Apr 27, 2011 at 7:41 PM, Glenn Linderman<v+python at g.nevcal.com>  wrote:
>> One issue that I don't fully understand: I know there is only one instance
>> of None in Python, but I'm not sure where to discover whether there is only
>> a single, or whether there can be multiple, instances of NaN or Inf.  The
>> IEEE 754 spec is clear that there are multiple bit sequences that can be
>> used to represent these, so I would hope that there can be, in fact, more
>> than one value containing NaN (and Inf).
>>
>> This would properly imply that a collection should correctly handle the case
>> of storing multiple, different items using different NaN (and Inf)
>> instances.  A dict, for example, should be able to hold hundreds of items
>> with the index value of NaN.
>>
>> The distinction between "is" and "==" would permit proper operation, and I
>> believe that Python's "rebinding" of names to values rather than the copying
>> of values to variables makes such a distinction possible to use in a correct
>> manner.
> For infinities, there's no issue:  there are exactly two distinct
> infinities (+inf and -inf), and they don't have any special properties
> that affect membership tests.   Your float-keyed dict can contain both
> +inf and -inf keys, or just one, or neither, in exactly the same way
> that it can contain both +5.0 and -5.0 as keys, or just one, or
> neither.
>
> For nans, you *can* put multiple nans into a dictionary as separate
> keys, but under the current rules the test for 'sameness' of two nan
> keys becomes a test of object identity, not of bitwise equality.
> Python takes no notice of the sign bits and 'payload' bits of a float
> nan, except in operations like struct.pack and struct.unpack.  For
> example:
Thanks, Mark, for the succinct description and demonstration.  Yes, only 
two Inf values, many possible NaNs.  And this is what I would expect.

I would not, however, expect the original case that was described:
 >>> nan = float('nan')
 >>> nan == nan
False
 >>> [nan] == [nan]
True                  # also True in tuples, dicts, etc.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110427/cf13098a/attachment.html>

From greg.ewing at canterbury.ac.nz  Thu Apr 28 00:32:55 2011
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Thu, 28 Apr 2011 10:32:55 +1200
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>
	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>
	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
Message-ID: <4DB89997.9080102@canterbury.ac.nz>

Guido van Rossum wrote:

> Maybe we should just call off the odd NaN comparison behavior?

That's probably as good an idea as anything.

The weirdness of NaNs is supposed to ensure that they
propagate through a computation as a kind of exception
signal. But to make that work properly, comparing two
NaNs should really give you a NaB (Not a Boolean). As
long as we're not doing that, we might as well treat
NaNs sanely as Python objects.

-- 
Greg

From v+python at g.nevcal.com  Thu Apr 28 02:05:35 2011
From: v+python at g.nevcal.com (Glenn Linderman)
Date: Wed, 27 Apr 2011 17:05:35 -0700
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>
	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>
Message-ID: <4DB8AF4F.20001@g.nevcal.com>

On 4/27/2011 2:04 PM, Mark Dickinson wrote:
> On Wed, Apr 27, 2011 at 10:37 AM, Hrvoje Niksic<hrvoje.niksic at avl.com>  wrote:
>> The other day I was surprised to learn this:
>>
>>>>> nan = float('nan')
>>>>> nan == nan
>> False
>>>>> [nan] == [nan]
>> True                  # also True in tuples, dicts, etc.
> That one surprises me a bit too:  I knew we were using
> identity-then-equality checks for containment (nan in [nan]), but I
> hadn't realised identity-then-equality was also used for the
> item-by-item comparisons when comparing two lists.  It's defensible,
> though: [nan] == [nan] should presumably produce the same result as
> {nan} == {nan}, and the latter is a test that's arguably based on
> containment (for sets s and t, s == t if each element of s is in t,
> and vice versa).
>
> I don't think any of this should change.  It seems to me that we've
> currently got something approaching the best approximation to
> consistency and sanity achievable, given the fundamental
> incompatibility of (1) nan breaking reflexivity of equality and (2)
> containment being based on equality.  That incompatibility is bound to
> create inconsistencies somewhere along the line.
>
> Declaring that 'nan == nan' should be True seems attractive in theory,
> but I agree that it doesn't really seem like a realistic option in
> terms of backwards compatibility and compatibility with other
> mainstream languages.

I think it should change.  Inserting a NaN, even the same instance of 
NaN, into a list shouldn't suddenly make it compare equal to itself, 
especially since the docs (section 5.9. Comparisons) say:

    * Tuples and lists are compared lexicographically using comparison
      of corresponding elements. This means that to compare equal, each
      element must compare equal and the two sequences must be of the
      same type and have the same length.

      If not equal, the sequences are ordered the same as their first
      differing elements. For example, [1,2,x] <= [1,2,y] has the same
      value as x <= y. If the corresponding element does not exist, the
      shorter sequence is ordered first (for example, [1,2] < [1,2,3]).

The principle of least surprise, says that if two unequal items are 
inserted into otherwise equal lists, the lists should be unequal.  NaN 
is unequal to itself.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110427/c2f16595/attachment.html>

From steve at pearwood.info  Thu Apr 28 02:05:59 2011
From: steve at pearwood.info (Steven D'Aprano)
Date: Thu, 28 Apr 2011 10:05:59 +1000
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>
	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
Message-ID: <4DB8AF67.50500@pearwood.info>

Guido van Rossum wrote:

> Maybe we should just call off the odd NaN comparison behavior?


This doesn't solve the broader problem that *any* type might 
deliberately define non-reflexive equality, and therefore people will 
still be surprised by

 >>> x = SomeObject()
 >>> x == x
False
 >>> [x] == [x]
True


The "problem" (if it is a problem) here is list, not NANs. Please don't 
break NANs to not-fix a problem with list.

Since we can't (can we?) prohibit non-reflexivity, and even if we can, 
we shouldn't, reasonable solutions are:

(1) live with the fact that lists and other built-in containers will 
short-cut equality with identity for speed, ignoring __eq__;

(2) slow containers down by guaranteeing that they will use __eq__;

(but how much will it actually hurt performance for real-world cases? 
and this will have the side-effect that non-reflexivity will propagate 
to containers)

(3) allow types to register that they are non-reflexive, allowing 
containers to skip the identity shortcut when necessary.

(but it is not clear to me that the extra complexity will be worth the cost)


My vote is the status quo, (1).



-- 
Steven


From steve at pearwood.info  Thu Apr 28 02:00:55 2011
From: steve at pearwood.info (Steven D'Aprano)
Date: Thu, 28 Apr 2011 10:00:55 +1000
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <ip9rtk$8k8$1@dough.gmane.org>
References: <4DB7E3EA.3030208@avl.com>	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>	<BANLkTik=xtamqNMxrnXfq2cFk=Nx+sbb-A@mail.gmail.com>	<4DB8634B.6020508@g.nevcal.com>
	<ip9rtk$8k8$1@dough.gmane.org>
Message-ID: <4DB8AE37.3020105@pearwood.info>

Terry Reedy wrote:
> On 4/27/2011 2:41 PM, Glenn Linderman wrote:
> 
>> One issue that I don't fully understand: I know there is only one
>> instance of None in Python, but I'm not sure where to discover whether
>> there is only a single, or whether there can be multiple, instances of
>> NaN or Inf.
> 
> I am sure there are multiple instances with just one bit pattern, the 
> same as other floats. Otherwise, float('nan') would have to either 
> randomly or systematically choose from among the possibilities. Ugh.

I think Glenn is asking whether NANs are singletons. They're not:

 >>> x = float('nan')
 >>> y = float('nan')
 >>> x is y
False
 >>> [x] == [y]
False


> There are functions in the math module that pull apart (and put 
> together) floats.
> 
>> The IEEE 754 spec is clear that there are multiple bit
>> sequences that can be used to represent these,
> 
> Anyone actually interested in those should use C or possibly the math 
> module float assembly function.

I'd like to point out that way back in the 1980s, Apple's Hypercard 
allowed users to construct, and compare, distinct NANs without needing 
to use C or check bit patterns. I think it is painful and ironic that a 
development system aimed at non-programmers released by a company 
notorious for "dumbing down" interfaces over 20 years ago had better and 
simpler support for NANs than we have now.



-- 
Steven

From steve at pearwood.info  Thu Apr 28 02:15:08 2011
From: steve at pearwood.info (Steven D'Aprano)
Date: Thu, 28 Apr 2011 10:15:08 +1000
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <4DB89997.9080102@canterbury.ac.nz>
References: <4DB7E3EA.3030208@avl.com>	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
	<4DB89997.9080102@canterbury.ac.nz>
Message-ID: <4DB8B18C.1050003@pearwood.info>

Greg Ewing wrote:
> Guido van Rossum wrote:
> 
>> Maybe we should just call off the odd NaN comparison behavior?
> 
> That's probably as good an idea as anything.
> 
> The weirdness of NaNs is supposed to ensure that they
> propagate through a computation as a kind of exception
> signal. But to make that work properly, comparing two
> NaNs should really give you a NaB (Not a Boolean). As
> long as we're not doing that, we might as well treat
> NaNs sanely as Python objects.

That doesn't follow. You can compare NANs, and the result of the 
comparisons is perfectly well defined as either True or False. There's 
no need for a NAB comparison flag.



-- 
Steven


From jimjjewett at gmail.com  Thu Apr 28 02:18:35 2011
From: jimjjewett at gmail.com (Jim Jewett)
Date: Wed, 27 Apr 2011 20:18:35 -0400
Subject: [Python-Dev] [Python-checkins] cpython: PyGILState_Ensure(),
 PyGILState_Release(), PyGILState_GetThisThreadState() are
In-Reply-To: <E1QEqA7-0003Yx-9k@dinsdale.python.org>
References: <E1QEqA7-0003Yx-9k@dinsdale.python.org>
Message-ID: <BANLkTik7RtkXY5_EHgAUqSm6kkqpaAC=Qg@mail.gmail.com>

Would it be a problem to make them available as no-ops?

On 4/26/11, victor.stinner <python-checkins at python.org> wrote:
> http://hg.python.org/cpython/rev/75503c26a17f
> changeset:   69584:75503c26a17f
> user:        Victor Stinner <victor.stinner at haypocalc.com>
> date:        Tue Apr 26 23:34:58 2011 +0200
> summary:
>   PyGILState_Ensure(), PyGILState_Release(), PyGILState_GetThisThreadState()
> are
> not available if Python is compiled without threads.
>
> files:
>   Include/pystate.h |  10 +++++++---
>   1 files changed, 7 insertions(+), 3 deletions(-)
>
>
> diff --git a/Include/pystate.h b/Include/pystate.h
> --- a/Include/pystate.h
> +++ b/Include/pystate.h
> @@ -73,9 +73,9 @@
>      struct _frame *frame;
>      int recursion_depth;
>      char overflowed; /* The stack has overflowed. Allow 50 more calls
> -		        to handle the runtime error. */
> -    char recursion_critical; /* The current calls must not cause
> -				a stack overflow. */
> +                        to handle the runtime error. */
> +    char recursion_critical; /* The current calls must not cause
> +                                a stack overflow. */
>      /* 'tracing' keeps track of the execution depth when tracing/profiling.
>         This is to prevent the actual trace/profile code from being recorded
> in
>         the trace/profile. */
> @@ -158,6 +158,8 @@
>      enum {PyGILState_LOCKED, PyGILState_UNLOCKED}
>          PyGILState_STATE;
>
> +#ifdef WITH_THREAD
> +
>  /* Ensure that the current thread is ready to call the Python
>     C API, regardless of the current state of Python, or of its
>     thread lock.  This may be called as many times as desired
> @@ -199,6 +201,8 @@
>  */
>  PyAPI_FUNC(PyThreadState *) PyGILState_GetThisThreadState(void);
>
> +#endif   /* #ifdef WITH_THREAD */
> +
>  /* The implementation of sys._current_frames()  Returns a dict mapping
>     thread id to that thread's current frame.
>  */
>
> --
> Repository URL: http://hg.python.org/cpython
>

From ethan at stoneleaf.us  Thu Apr 28 03:11:15 2011
From: ethan at stoneleaf.us (Ethan Furman)
Date: Wed, 27 Apr 2011 18:11:15 -0700
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>
	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>
Message-ID: <4DB8BEB3.8040701@stoneleaf.us>

Mark Dickinson wrote:
> On Wed, Apr 27, 2011 at 10:37 AM, Hrvoje Niksic <hrvoje.niksic at avl.com> wrote:
>> The other day I was surprised to learn this:
>>
>>>>> nan = float('nan')
>>>>> nan == nan
>> False
>>>>> [nan] == [nan]
>> True                  # also True in tuples, dicts, etc.
> 
> That one surprises me a bit too:  I knew we were using
> identity-then-equality checks for containment (nan in [nan]), but I
> hadn't realised identity-then-equality was also used for the
> item-by-item comparisons when comparing two lists.  It's defensible,
> though: [nan] == [nan] should presumably produce the same result as
> {nan} == {nan}, and the latter is a test that's arguably based on
> containment (for sets s and t, s == t if each element of s is in t,
> and vice versa).
> 
> I don't think any of this should change.  It seems to me that we've
> currently got something approaching the best approximation to
> consistency and sanity achievable, given the fundamental
> incompatibility of (1) nan breaking reflexivity of equality and (2)
> containment being based on equality.  That incompatibility is bound to
> create inconsistencies somewhere along the line.
> 
> Declaring that 'nan == nan' should be True seems attractive in theory,
> but I agree that it doesn't really seem like a realistic option in
> terms of backwards compatibility and compatibility with other
> mainstream languages.

Totally out of my depth, but what if a NaN object was allowed to 
compare equal to itself, but different NaN objects still compared 
unequal?  If NaN was a singleton then the current behavior makes more 
sense, but since we get a new NaN with each instance creation is there 
really a good reason why the same NaN can't be equal to itself?

~Ethan~

From v+python at g.nevcal.com  Thu Apr 28 03:15:02 2011
From: v+python at g.nevcal.com (Glenn Linderman)
Date: Wed, 27 Apr 2011 18:15:02 -0700
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <4DB8AF67.50500@pearwood.info>
References: <4DB7E3EA.3030208@avl.com>	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
	<4DB8AF67.50500@pearwood.info>
Message-ID: <4DB8BF96.1080806@g.nevcal.com>

On 4/27/2011 5:05 PM, Steven D'Aprano wrote:
>
> (2) slow containers down by guaranteeing that they will use __eq__;
>
> (but how much will it actually hurt performance for real-world cases? 
> and this will have the side-effect that non-reflexivity will propagate 
> to containers) 

I think it is perfectly reasonable that containers containing items with 
non-reflexive equality should sometimes have non-reflexive equality also 
(depends on the placement of the item in the container, and the values 
of other items, whether the non-reflexive equality of an internal item 
will actually affect the equality of the container in practice).

I quoted the docs for tuple and list comparisons in a different part of 
this thread, and for those types, the docs are very clear that the items 
must compare equal for the lists or tuples to compare equal.  For other 
built-in types, the docs are less clear:

    * Mappings (dictionaries) compare equal if and only if they have the
      same (key, value) pairs. Order comparisons ('<', '<=', '>=', '>')
      raise TypeError
      <http://docs.python.org/py3k/library/exceptions.html#TypeError>.

So we can immediately conclude that mappings do not provide an ordering 
for sorts.  But the language "same (key, value) pairs" implies identity 
comparisons rather than equality comparisons.  In practice, 
equality is used sometimes, and identity sometimes:

 >>> nan = float('NaN')
 >>> d1 = dict( a=1, nan=2 )
 >>> d2 = dict( a=1, nan=2.0 )
 >>> d1 == d2
True
 >>> 2 is 2.0
False

"nan" and "nan" is being compared using identity, 2 and 2.0 by 
equality.  While that may be clear to those of you that know the 
implementation (and even have described it somewhat in this thread), it 
is certainly not clear in the docs.  And I think it should read much 
more like lists and tuples... "if all the (key, value) pairs, considered 
as tuples, are equal".

    * Sets and frozensets define comparison operators to mean subset and
      superset tests. Those relations do not define total orderings (the
      two sets {1,2} and {2,3} are not equal, nor subsets of one
      another, nor supersets of one another). Accordingly, sets are not
      appropriate arguments for functions which depend on total
      ordering. For example, min()
      <http://docs.python.org/py3k/library/functions.html#min>, max()
      <http://docs.python.org/py3k/library/functions.html#max>, and
      sorted()
      <http://docs.python.org/py3k/library/functions.html#sorted>
      produce undefined results given a list of sets as inputs.

This clearly talks about sets and subsets, but it doesn't define those 
concepts well in this section.  It should refer to where that concept 
is defined, perhaps.  The intuitive definition of "subset" to me is if, 
for every item in set A, if an equal item is found in set B, then set A 
is a subset of set B.  That's what I learned back in math classes.  
Since NaN is not equal to NaN, however, I would not expect a set 
containing NaN to compare equal to any other set.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110427/62ed4c2f/attachment-0001.html>

From v+python at g.nevcal.com  Thu Apr 28 03:20:27 2011
From: v+python at g.nevcal.com (Glenn Linderman)
Date: Wed, 27 Apr 2011 18:20:27 -0700
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <4DB8BEB3.8040701@stoneleaf.us>
References: <4DB7E3EA.3030208@avl.com>	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>
	<4DB8BEB3.8040701@stoneleaf.us>
Message-ID: <4DB8C0DB.8040808@g.nevcal.com>

On 4/27/2011 6:11 PM, Ethan Furman wrote:
> Mark Dickinson wrote:
>> On Wed, Apr 27, 2011 at 10:37 AM, Hrvoje Niksic 
>> <hrvoje.niksic at avl.com> wrote:
>>> The other day I was surprised to learn this:
>>>
>>>>>> nan = float('nan')
>>>>>> nan == nan
>>> False
>>>>>> [nan] == [nan]
>>> True                  # also True in tuples, dicts, etc.
>>
>> That one surprises me a bit too:  I knew we were using
>> identity-then-equality checks for containment (nan in [nan]), but I
>> hadn't realised identity-then-equality was also used for the
>> item-by-item comparisons when comparing two lists.  It's defensible,
>> though: [nan] == [nan] should presumably produce the same result as
>> {nan} == {nan}, and the latter is a test that's arguably based on
>> containment (for sets s and t, s == t if each element of s is in t,
>> and vice versa).
>>
>> I don't think any of this should change.  It seems to me that we've
>> currently got something approaching the best approximation to
>> consistency and sanity achievable, given the fundamental
>> incompatibility of (1) nan breaking reflexivity of equality and (2)
>> containment being based on equality.  That incompatibility is bound to
>> create inconsistencies somewhere along the line.
>>
>> Declaring that 'nan == nan' should be True seems attractive in theory,
>> but I agree that it doesn't really seem like a realistic option in
>> terms of backwards compatibility and compatibility with other
>> mainstream languages.
>
> Totally out of my depth, but what if the a NaN object was allowed to 
> compare equal to itself, but different NaN objects still compared 
> unequal?  If NaN was a singleton then the current behavior makes more 
> sense, but since we get a new NaN with each instance creation is there 
> really a good reason why the same NaN can't be equal to itself?

 >>> n1 = float('NaN')
 >>> n2 = float('NaN')
 >>> n3 = n1

 >>> n1
nan
 >>> n2
nan
 >>> n3
nan

 >>> [n1] == [n2]
False
 >>> [n1] == [n3]
True

This is the current situation: some NaNs compare equal sometimes, and 
some don't.  And unless you are particularly aware of the identity of 
the object containing the NaN (not the list, but the particular NaN 
value) it is surprising and confusing, because the mathematical 
definition of NaN is that it should not be equal to itself.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110427/b7301b44/attachment.html>

From v+python at g.nevcal.com  Thu Apr 28 03:22:03 2011
From: v+python at g.nevcal.com (Glenn Linderman)
Date: Wed, 27 Apr 2011 18:22:03 -0700
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <4DB8BF96.1080806@g.nevcal.com>
References: <4DB7E3EA.3030208@avl.com>	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>	<4DB8AF67.50500@pearwood.info>
	<4DB8BF96.1080806@g.nevcal.com>
Message-ID: <4DB8C13B.8040204@g.nevcal.com>

On 4/27/2011 6:15 PM, Glenn Linderman wrote:
> I think it is perfectly reasonable that containers containing items 
> with non-reflexive equality should sometimes have non-reflexive 
> equality also (depends on the placement of the item in the container, 
> and the values of other items, whether the non-reflexive equality of 
> an internal item will actually affect the equality of the container in 
> practice).

Pardon me, please ignore the parenthetical statement... it was really 
inspired by inequality comparisons, not equality comparisons.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110427/f4aff658/attachment.html>

From ncoghlan at gmail.com  Thu Apr 28 04:24:48 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 28 Apr 2011 12:24:48 +1000
Subject: [Python-Dev] Socket servers in the test suite
In-Reply-To: <loom.20110427T230704-75@post.gmane.org>
References: <loom.20110427T230704-75@post.gmane.org>
Message-ID: <BANLkTimqCY02e+iy-OcV4nzZa1BTiC_sOQ@mail.gmail.com>

On Thu, Apr 28, 2011 at 7:23 AM, Vinay Sajip <vinay_sajip at yahoo.co.uk> wrote:
> I've been recently trying to improve the test coverage for the logging package,
> and have got to a not unreasonable point:
>
> logging/__init__.py 99% (96%)
> logging/config.py 89% (85%)
> logging/handlers.py 60% (54%)
>
> where the figures in parentheses include branch coverage measurements.
>
> I'm at the point where to appreciably increase coverage, I'd need to write some
> test servers to exercise client code in SocketHandler, DatagramHandler and
> HTTPHandler.
>
> I notice there are no utility classes in test.support to help with this kind of
> thing - would there be any mileage in adding such things? Of course I could add
> test server code just to test_logging (which already contains some socket server
> code to exercise the configuration functionality), but rolling a test server
> involves boilerplate such as using a custom RequestHandler-derived class for
> each application. I had in mind a more streamlined approach where you can just
> pass a single callable to a server to handle requests, e.g. as outlined in
>
> https://gist.github.com/945157
>
> I'd be grateful for any comments about adding such functionality to e.g.
> test.support.

If you poke around in the test directory a bit, you may find there is
already some code along these lines in other tests (e.g. I'm pretty
sure the urllib tests already fire up a local server). Starting down
the path of standardisation of that test functionality would be good.

For larger components like this, it's also reasonable to add a
dedicated helper module rather than using test.support directly. I
started (and Antoine improved) something along those lines with the
test.script_helper module for running Python subprocesses and checking
their output, although it lacks documentation and there are lots of
older tests that still use subprocess directly.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From stephen at xemacs.org  Thu Apr 28 04:31:20 2011
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Thu, 28 Apr 2011 11:31:20 +0900
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <4DB89807.3080609@g.nevcal.com>
References: <4DB7E3EA.3030208@avl.com>
	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>
	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
	<BANLkTik=xtamqNMxrnXfq2cFk=Nx+sbb-A@mail.gmail.com>
	<4DB8634B.6020508@g.nevcal.com>
	<BANLkTi=Podj_ntfprqrA=apm8kBcbmVZkw@mail.gmail.com>
	<4DB89807.3080609@g.nevcal.com>
Message-ID: <87ei4n9kef.fsf@uwakimon.sk.tsukuba.ac.jp>

Glenn Linderman writes:

 > I would not, however expect the original case that was described:
 >  >>> nan = float('nan')
 >  >>> nan == nan
 > False
 >  >>> [nan] == [nan]
 > True                  # also True in tuples, dicts, etc.

Are you saying you would expect that

>>> nan = float('nan')
>>> a = [1, ..., 499, nan, 501, ..., 999]    # meta-ellipsis, not Ellipsis
>>> a == a
False

??

I wouldn't even expect

>>> a = [1, ..., 499, float('nan'), 501, ..., 999]
>>> b = [1, ..., 499, float('nan'), 501, ..., 999]
>>> a == b
False

but I guess I have to live with that.<wink>  While I wouldn't apply it
to other people, I have to admit Raymond's aphorism applies to me (the
surprising thing is not the behavior of NaNs, but that I'm surprised
by anything that happens in the presence of NaNs!)

From stephen at xemacs.org  Thu Apr 28 04:42:30 2011
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Thu, 28 Apr 2011 11:42:30 +0900
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>
	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>
Message-ID: <87d3k79jvt.fsf@uwakimon.sk.tsukuba.ac.jp>

Mark Dickinson writes:

 > Declaring that 'nan == nan' should be True seems attractive in
 > theory,

No, it's intuitively attractive, but that's because humans like nice
continuous behavior.  In *theory*, it's true that some singularities
are removable, and the NaN that occurs when evaluating at that point
is actually definable in a broader context, but the point of NaN is
that some singularities are *not* removable.  This is somewhat
Pythonic: "In the presence of ambiguity, refuse to guess."


From stephen at xemacs.org  Thu Apr 28 05:06:23 2011
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Thu, 28 Apr 2011 12:06:23 +0900
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <4DB8C0DB.8040808@g.nevcal.com>
References: <4DB7E3EA.3030208@avl.com>
	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>
	<4DB8BEB3.8040701@stoneleaf.us> <4DB8C0DB.8040808@g.nevcal.com>
Message-ID: <87bozr9is0.fsf@uwakimon.sk.tsukuba.ac.jp>

Glenn Linderman writes:
 > On 4/27/2011 6:11 PM, Ethan Furman wrote:

 > > Totally out of my depth, but what if a NaN object was allowed to 
 > > compare equal to itself, but different NaN objects still compared 
 > > unequal?  If NaN was a singleton then the current behavior makes more 
 > > sense, but since we get a new NaN with each instance creation is there 
 > > really a good reason why the same NaN can't be equal to itself?

Yes.  A NaN is a special object that means "the computation that
produced this object is undefined."  For example, consider the
computation 1/x at x = 0.  If you approach from the left, 1/0
"obviously" means minus infinity, while if you approach from the right
just as obviously it means plus infinity.  So what does the 1/0 that
occurs in [1/x for x in range(-5, 6)] mean?  In what sense is it
"equal to itself"?  How can something which is not a number be
compared for numerical equality?

 >  >>> n1 = float('NaN')
 >  >>> n2 = float('NaN')
 >  >>> n3 = n1
 > 
 >  >>> n1
 > nan
 >  >>> n2
 > nan
 >  >>> n3
 > nan
 > 
 >  >>> [n1] == [n2]
 > False
 >  >>> [n1] == [n3]
 > True
 > 
 > This is the current situation: some NaNs compare equal sometimes, and 
 > some don't.

No, Ethan is asking for "n1 == n3" => True.  As Mark points out, "[n1]
== [n3]" can be interpreted as a containment question, rather than an
equality question, with respect to the NaNs themselves.  In standard
set theory, these are the same question, but that's not necessarily so
in other set-like toposes.  In particular, getting equality and set
membership to behave reasonably with respect to each other is one of the
problems faced in developing a workable theory of fuzzy sets.

I don't think it matters what behavior you choose for NaNs; somebody
is going to be unhappy sometimes.

From guido at python.org  Thu Apr 28 05:14:38 2011
From: guido at python.org (Guido van Rossum)
Date: Wed, 27 Apr 2011 20:14:38 -0700
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <0901E43C-8EC2-475B-9D79-440D77B17FFF@gmail.com>
References: <4DB7E3EA.3030208@avl.com>
	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>
	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
	<0901E43C-8EC2-475B-9D79-440D77B17FFF@gmail.com>
Message-ID: <BANLkTinE2KKCV1gSSrYWnHtJrNnt0Oq=pg@mail.gmail.com>

On Wed, Apr 27, 2011 at 9:28 AM, Raymond Hettinger
<raymond.hettinger at gmail.com> wrote:
>
> On Apr 27, 2011, at 7:53 AM, Guido van Rossum wrote:
>
>> Maybe we should just call off the odd NaN comparison behavior?
>
> I'm reluctant to suggest changing such enshrined behavior.

No doubt there would be some problems; probably more for decimals than
for floats.

> ISTM, the current state of affairs is reasonable.

Hardly; when I picked the NaN behavior I knew the IEEE std prescribed
it but had never seen any code that used this.

> Exotic objects are allowed to generate exotic behaviors
> but consumers of those objects are free to ignore some
> of those behaviors by making reasonable assumptions
> about how an object should behave.

I'd say that the various issues and inconsistencies brought up (e.g. x
in A even though no a in A equals x) make it clear that one ignores
NaN's exoticness at one's peril.

> It's possible to make objects where the __hash__ doesn't
> correspond to __eq__.; they just won't behave well with
> hash tables.

That's not the same thing at all. Such an object would violate a rule
of the language (although one that Python cannot strictly enforce) and
it would always be considered a bug. Currently NaN is not violating
any language rules -- it is just violating users' intuition, in a much
worse way than Inf does. (All in all, Inf behaves pretty intuitively,
at least for someone who was awake during at least a few high school
math classes. NaN is not discussed there. :-)

> Likewise, it's possible for a sequence to
> define a __len__ that is different from it true length; it
> just won't behave well with the various pieces of code
> that assume collections are equal if the lengths are unequal.

(you probably meant "are never equal")

Again, typically a bug.

> All of this seems reasonable to me.

Given the IEEE std and Python's history, it's defensible and hard to
change, but still, I find reasonable too strong a word for the
situation.

I expect that if 15 years or so ago I had decided to ignore the
IEEE std and declare that object identity always implies equality it
would have seemed quite reasonable as well... The rule could be
something like "the == operator first checks for identity and if left
and right are the same object, the answer is True without calling the
object's __eq__ method; similarly the != would always return False
when an object is compared to itself". We wouldn't change the
inequalities, nor the outcome if a NaN is compared to another NaN (not
the same object). But we would extend the special case for object
identity from containers to all == and != operators. (Currently it
seems that all NaNs have a hash() of 0. That hasn't hurt anyone so
far.)
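
As a rough sketch (hypothetical, of course -- this is not how things
work today), the proposed rule would behave like the helper below;
proposed_eq is just an illustrative name, not anything real:

    def proposed_eq(left, right):
        # Identity would short-circuit before __eq__ is ever consulted,
        # so an object (even a NaN) would always compare equal to itself.
        if left is right:
            return True
        # Distinct objects -- including two distinct NaNs -- would still
        # follow the existing IEEE-style comparison rules.
        return left == right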

Doing this in 3.3 would, alas, be a huge undertaking -- I expect that
there are tons of unittests that depend either on the current NaN
behavior or on x == x calling x.__eq__(x). Plus the decimal unittests
would be affected. Perhaps somebody could try?

-- 
--Guido van Rossum (python.org/~guido)

From guido at python.org  Thu Apr 28 05:16:45 2011
From: guido at python.org (Guido van Rossum)
Date: Wed, 27 Apr 2011 20:16:45 -0700
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <ip9oe7$hgb$1@dough.gmane.org>
References: <4DB7E3EA.3030208@avl.com>
	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>
	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
	<ip9km0$ppm$1@dough.gmane.org> <ip9oe7$hgb$1@dough.gmane.org>
Message-ID: <BANLkTimEzYbo24h1WwQBt9LjybSX9gUFwg@mail.gmail.com>

On Wed, Apr 27, 2011 at 11:48 AM, Robert Kern <robert.kern at gmail.com> wrote:
> On 4/27/11 12:44 PM, Terry Reedy wrote:
>>
>> On 4/27/2011 10:53 AM, Guido van Rossum wrote:
>
>>> Maybe we should just call off the odd NaN comparison behavior?
>>
>> Eiffel seems to have survived, though I do not know if it used for
>> numerical
>> work. I wonder how much code would break and what the scipy folks would
>> think.
>
> I suspect most of us would oppose changing it on general
> backwards-compatibility grounds rather than actually *liking* the current
> behavior. If the behavior changed with Python floats, we'd have to mull over
> whether we try to match that behavior with our scalar types (one of which
> subclasses from float) and our arrays. We would be either incompatible with
> Python or C, and we'd probably end up choosing Python to diverge from. It
> would make a mess, honestly. We already have to explain why equality is
> funky for arrays (arr1 == arr2 is a rich comparison that gives an array, not
> a bool, so we can't do containment tests for lists of arrays), so NaN is
> pretty easy to explain afterward.

So does NumPy also follow Python's behavior about ignoring the NaN
special-casing when doing array ops?

-- 
--Guido van Rossum (python.org/~guido)

From robert.kern at gmail.com  Thu Apr 28 05:42:03 2011
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 27 Apr 2011 22:42:03 -0500
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <BANLkTimEzYbo24h1WwQBt9LjybSX9gUFwg@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>	<ip9km0$ppm$1@dough.gmane.org>
	<ip9oe7$hgb$1@dough.gmane.org>
	<BANLkTimEzYbo24h1WwQBt9LjybSX9gUFwg@mail.gmail.com>
Message-ID: <ipanmc$jnu$1@dough.gmane.org>

On 2011-04-27 22:16 , Guido van Rossum wrote:
> On Wed, Apr 27, 2011 at 11:48 AM, Robert Kern<robert.kern at gmail.com>  wrote:
>> On 4/27/11 12:44 PM, Terry Reedy wrote:
>>>
>>> On 4/27/2011 10:53 AM, Guido van Rossum wrote:
>>
>>>> Maybe we should just call off the odd NaN comparison behavior?
>>>
>>> Eiffel seems to have survived, though I do not know if it used for
>>> numerical
>>> work. I wonder how much code would break and what the scipy folks would
>>> think.
>>
>> I suspect most of us would oppose changing it on general
>> backwards-compatibility grounds rather than actually *liking* the current
>> behavior. If the behavior changed with Python floats, we'd have to mull over
>> whether we try to match that behavior with our scalar types (one of which
>> subclasses from float) and our arrays. We would be either incompatible with
>> Python or C, and we'd probably end up choosing Python to diverge from. It
>> would make a mess, honestly. We already have to explain why equality is
>> funky for arrays (arr1 == arr2 is a rich comparison that gives an array, not
>> a bool, so we can't do containment tests for lists of arrays), so NaN is
>> pretty easy to explain afterward.
>
> So does NumPy also follow Python's behavior about ignoring the NaN
> special-casing when doing array ops?

By "ignoring the NaN special-casing", do you mean that identity is checked 
first? When we use dtype=object arrays (arrays that contain Python objects as 
their data), yes:

[~]
|1> nan = float('nan')

[~]
|2> import numpy as np

[~]
|3> a = np.array([1, 2, nan], dtype=object)

[~]
|4> nan in a
True

[~]
|5> float('nan') in a
False


Just like lists:

[~]
|6> nan in [1, 2, nan]
True

[~]
|7> float('nan') in [1, 2, nan]
False


Actually, we go a little further by using PyObject_RichCompareBool() rather than 
PyObject_RichCompare() to implement the array-wise comparisons in addition to 
containment:

[~]
|8> a == nan
array([False, False,  True], dtype=bool)

[~]
|9> [x == nan for x in [1, 2, nan]]
[False, False, False]


But for dtype=float arrays (which contain C doubles, not Python objects) we use 
C semantics. Literally, we use whatever C's == operator gives us for the two 
double values. Since there is no concept of identity for this case, there is no 
cognate behavior of Python to match.

[~]
|10> b = np.array([1.0, 2.0, nan], dtype=float)

[~]
|11> b == nan
array([False, False, False], dtype=bool)

[~]
|12> nan in b
False

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
  that is made terrible by our own mad attempt to interpret it as though it had
  an underlying truth."
   -- Umberto Eco


From ncoghlan at gmail.com  Thu Apr 28 05:43:58 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 28 Apr 2011 13:43:58 +1000
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <87d3k79jvt.fsf@uwakimon.sk.tsukuba.ac.jp>
References: <4DB7E3EA.3030208@avl.com>
	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>
	<87d3k79jvt.fsf@uwakimon.sk.tsukuba.ac.jp>
Message-ID: <BANLkTi=AusPRDsf2zKDGteZ5dGxs0EEuXw@mail.gmail.com>

On Thu, Apr 28, 2011 at 12:42 PM, Stephen J. Turnbull
<stephen at xemacs.org> wrote:
> Mark Dickinson writes:
>
>  > Declaring that 'nan == nan' should be True seems attractive in
>  > theory,
>
> No, it's intuitively attractive, but that's because humans like nice
> continuous behavior.  In *theory*, it's true that some singularities
> are removable, and the NaN that occurs when evaluating at that point
> is actually definable in a broader context, but the point of NaN is
> that some singularities are *not* removable.  This is somewhat
> Pythonic: "In the presence of ambiguity, refuse to guess."

Refusing to guess in this case would be to treat all NaNs as
signalling NaNs, and that wouldn't be good, either :)

I like Terry's suggestion for a glossary entry, and have created an
updated proposal at http://bugs.python.org/issue11945

(I also noted that array.array is like collections.Sequence in failing
to enforce the container invariants in the presence of NaN values)

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From guido at python.org  Thu Apr 28 06:01:35 2011
From: guido at python.org (Guido van Rossum)
Date: Wed, 27 Apr 2011 21:01:35 -0700
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <ipanmc$jnu$1@dough.gmane.org>
References: <4DB7E3EA.3030208@avl.com>
	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>
	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
	<ip9km0$ppm$1@dough.gmane.org> <ip9oe7$hgb$1@dough.gmane.org>
	<BANLkTimEzYbo24h1WwQBt9LjybSX9gUFwg@mail.gmail.com>
	<ipanmc$jnu$1@dough.gmane.org>
Message-ID: <BANLkTikgDXag3BfLPoaiqXg0=bJiqKF0tA@mail.gmail.com>

On Wed, Apr 27, 2011 at 8:42 PM, Robert Kern <robert.kern at gmail.com> wrote:
> On 2011-04-27 22:16 , Guido van Rossum wrote:
>> So does NumPy also follow Python's behavior about ignoring the NaN
>> special-casing when doing array ops?
>
> By "ignoring the NaN special-casing", do you mean that identity is checked
> first? When we use dtype=object arrays (arrays that contain Python objects
> as their data), yes:
>
> [~]
> |1> nan = float('nan')
>
> [~]
> |2> import numpy as np
>
> [~]
> |3> a = np.array([1, 2, nan], dtype=object)
>
> [~]
> |4> nan in a
> True
>
> [~]
> |5> float('nan') in a
> False
>
>
> Just like lists:
>
> [~]
> |6> nan in [1, 2, nan]
> True
>
> [~]
> |7> float('nan') in [1, 2, nan]
> False
>
>
> Actually, we go a little further by using PyObject_RichCompareBool() rather
> than PyObject_RichCompare() to implement the array-wise comparisons in
> addition to containment:
>
> [~]
> |8> a == nan
> array([False, False,  True], dtype=bool)

Hm, this sounds like NumPy always considers a NaN equal to *itself* as
far as objects are concerned.

> [~]
> |9> [x == nan for x in [1, 2, nan]]
> [False, False, False]
>
>
> But for dtype=float arrays (which contain C doubles, not Python objects) we
> use C semantics. Literally, we use whatever C's == operator gives us for the
> two double values. Since there is no concept of identity for this case,
> there is no cognate behavior of Python to match.
>
> [~]
> |10> b = np.array([1.0, 2.0, nan], dtype=float)
>
> [~]
> |11> b == nan
> array([False, False, False], dtype=bool)
>
> [~]
> |12> nan in b
> False

And I wouldn't want to change that. It sounds like NumPy wouldn't be
much affected if we were to change this (which I'm not saying we
would).

Thanks!

-- 
--Guido van Rossum (python.org/~guido)

From guido at python.org  Thu Apr 28 06:07:29 2011
From: guido at python.org (Guido van Rossum)
Date: Wed, 27 Apr 2011 21:07:29 -0700
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <BANLkTi=AusPRDsf2zKDGteZ5dGxs0EEuXw@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>
	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>
	<87d3k79jvt.fsf@uwakimon.sk.tsukuba.ac.jp>
	<BANLkTi=AusPRDsf2zKDGteZ5dGxs0EEuXw@mail.gmail.com>
Message-ID: <BANLkTin06eTKAAZMUFsE3nh_=KcRZMezNw@mail.gmail.com>

On Wed, Apr 27, 2011 at 8:43 PM, Nick Coghlan <ncoghlan at gmail.com> wrote:
> (I also noted that array.array is like collections.Sequence in failing
> to enforce the container invariants in the presence of NaN values)

Regardless of whether we go any further it would indeed be good to be
explicit about the rules in the language reference and fix the
behavior of collections.Sequence.

I'm not sure about array.array -- it doesn't hold objects so I don't
think there's anything to enforce. It seems to behave the same way as
NumPy arrays when they don't contain objects.
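
For example (a quick sketch; if I'm reading the array module right,
indexing an array unpacks the stored bits into a brand new float object
each time, so the identity shortcut never gets a chance to apply):

>>> import array
>>> nan = float('nan')
>>> a = array.array('d', [1.0, nan])
>>> nan in a      # a fresh float is built for each element, no identity match
False
>>> a == a        # element-wise comparison, again without identity help
False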

-- 
--Guido van Rossum (python.org/~guido)

From alexander.belopolsky at gmail.com  Thu Apr 28 06:15:00 2011
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Thu, 28 Apr 2011 00:15:00 -0400
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <ip9oe7$hgb$1@dough.gmane.org>
References: <4DB7E3EA.3030208@avl.com>
	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>
	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
	<ip9km0$ppm$1@dough.gmane.org> <ip9oe7$hgb$1@dough.gmane.org>
Message-ID: <BANLkTimS=Kf8JpJc51_1JNWr3VjWWvS8fw@mail.gmail.com>

On Wed, Apr 27, 2011 at 2:48 PM, Robert Kern <robert.kern at gmail.com> wrote:
..
> I suspect most of us would oppose changing it on general
> backwards-compatibility grounds rather than actually *liking* the current
> behavior. If the behavior changed with Python floats, we'd have to mull over
> whether we try to match that behavior with our scalar types (one of which
> subclasses from float) and our arrays. We would be either incompatible with
> Python or C, and we'd probably end up choosing Python to diverge from. It
> would make a mess, honestly. We already have to explain why equality is
> funky for arrays (arr1 == arr2 is a rich comparison that gives an array, not
> a bool, so we can't do containment tests for lists of arrays), so NaN is
> pretty easy to explain afterward.

Most NumPy applications are actually not exposed to NaN problems,
because it is recommended that NaNs be avoided in computations.  When
missing or undefined values are necessary, the recommended solution is
to use ma.array (a masked array), which is a drop-in replacement for
the numpy array type and carries a boolean "mask" value with every
element.  This allows undefined elements in arrays of any type: float,
integer or even boolean.  Masked values propagate through all
computations including comparisons.
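
A minimal sketch of the idea (assuming numpy is available; numpy.ma is
the masked array module, and the exact output format may vary between
versions):

>>> import numpy.ma as ma
>>> a = ma.array([1.0, 2.0, 3.0], mask=[False, True, False])  # 2.0 is "missing"
>>> print(a)
[1.0 -- 3.0]
>>> print(a + 1)      # the mask propagates through arithmetic
[2.0 -- 4.0]
>>> print(a == 2.0)   # ...and through comparisons
[False -- False]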

From guido at python.org  Thu Apr 28 06:24:25 2011
From: guido at python.org (Guido van Rossum)
Date: Wed, 27 Apr 2011 21:24:25 -0700
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <BANLkTimS=Kf8JpJc51_1JNWr3VjWWvS8fw@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>
	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>
	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
	<ip9km0$ppm$1@dough.gmane.org> <ip9oe7$hgb$1@dough.gmane.org>
	<BANLkTimS=Kf8JpJc51_1JNWr3VjWWvS8fw@mail.gmail.com>
Message-ID: <BANLkTikmOD1exDeCf0FqmjbjKNeBzyQRMg@mail.gmail.com>

On Wed, Apr 27, 2011 at 9:15 PM, Alexander Belopolsky
<alexander.belopolsky at gmail.com> wrote:
> On Wed, Apr 27, 2011 at 2:48 PM, Robert Kern <robert.kern at gmail.com> wrote:
> ..
>> I suspect most of us would oppose changing it on general
>> backwards-compatibility grounds rather than actually *liking* the current
>> behavior. If the behavior changed with Python floats, we'd have to mull over
>> whether we try to match that behavior with our scalar types (one of which
>> subclasses from float) and our arrays. We would be either incompatible with
>> Python or C, and we'd probably end up choosing Python to diverge from. It
>> would make a mess, honestly. We already have to explain why equality is
>> funky for arrays (arr1 == arr2 is a rich comparison that gives an array, not
>> a bool, so we can't do containment tests for lists of arrays), so NaN is
>> pretty easy to explain afterward.
>
> Most NumPy applications are actually not exposed to NaN problems
> because it is recommended that NaNs be avoided in computations and
> when missing or undefined values are necessary, the recommended
> solution is to use ma.array or masked array which is a drop-in
> replacement for numpy array type and carries a boolean "mask" value
> with every element.  This allows undefined elements in arrays
> of any type: float, integer or even boolean.  Masked values propagate
> through all computations including comparisons.

So do new masks get created when the outcome of an elementwise
operation is a NaN? Because that's the only reason why one should have
NaNs in one's data in the first place -- not to indicate missing
values!

-- 
--Guido van Rossum (python.org/~guido)

From robert.kern at gmail.com  Thu Apr 28 06:25:03 2011
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 27 Apr 2011 23:25:03 -0500
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <BANLkTikgDXag3BfLPoaiqXg0=bJiqKF0tA@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>	<ip9km0$ppm$1@dough.gmane.org>
	<ip9oe7$hgb$1@dough.gmane.org>	<BANLkTimEzYbo24h1WwQBt9LjybSX9gUFwg@mail.gmail.com>	<ipanmc$jnu$1@dough.gmane.org>
	<BANLkTikgDXag3BfLPoaiqXg0=bJiqKF0tA@mail.gmail.com>
Message-ID: <ipaq70$uu3$1@dough.gmane.org>

On 2011-04-27 23:01 , Guido van Rossum wrote:
> On Wed, Apr 27, 2011 at 8:42 PM, Robert Kern<robert.kern at gmail.com>  wrote:

>> But for dtype=float arrays (which contain C doubles, not Python objects) we
>> use C semantics. Literally, we use whatever C's == operator gives us for the
>> two double values. Since there is no concept of identity for this case,
>> there is no cognate behavior of Python to match.
>>
>> [~]
>> |10>  b = np.array([1.0, 2.0, nan], dtype=float)
>>
>> [~]
>> |11>  b == nan
>> array([False, False, False], dtype=bool)
>>
>> [~]
>> |12>  nan in b
>> False
>
> And I wouldn't want to change that. It sounds like NumPy wouldn't be
> much affected if we were to change this (which I'm not saying we
> would).

Well, I didn't say that. If Python changed its behavior for (float('nan') == 
float('nan')), we'd have to seriously consider some changes. We do like to keep 
*some* amount of correspondence with Python semantics. In particular, we like 
our scalar types that match Python types to work as close to the Python type as 
possible. We have the np.float64 type, which represents a C double scalar and 
corresponds to a Python float. It is used when a single item is indexed out of a 
float64 array. We even subclass from the Python float type to help working with 
libraries that may not know about numpy:

[~]
|5> import numpy as np

[~]
|6> nan = np.array([1.0, 2.0, float('nan')])[2]

[~]
|7> nan == nan
False

[~]
|8> type(nan)
numpy.float64

[~]
|9> type(nan).mro()
[numpy.float64,
  numpy.floating,
  numpy.inexact,
  numpy.number,
  numpy.generic,
  float,
  object]


If the Python float type changes behavior, we'd have to consider whether to keep 
that for np.float64 or change it to match the usual C semantics used elsewhere. 
So there *would* be a dilemma. Not necessarily the most nerve-wracking one, but 
a dilemma nonetheless.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
  that is made terrible by our own mad attempt to interpret it as though it had
  an underlying truth."
   -- Umberto Eco


From robert.kern at gmail.com  Thu Apr 28 06:33:07 2011
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 27 Apr 2011 23:33:07 -0500
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <BANLkTikmOD1exDeCf0FqmjbjKNeBzyQRMg@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>	<ip9km0$ppm$1@dough.gmane.org>
	<ip9oe7$hgb$1@dough.gmane.org>	<BANLkTimS=Kf8JpJc51_1JNWr3VjWWvS8fw@mail.gmail.com>
	<BANLkTikmOD1exDeCf0FqmjbjKNeBzyQRMg@mail.gmail.com>
Message-ID: <ipaqm5$1h7$1@dough.gmane.org>

On 2011-04-27 23:24 , Guido van Rossum wrote:
> On Wed, Apr 27, 2011 at 9:15 PM, Alexander Belopolsky
> <alexander.belopolsky at gmail.com>  wrote:
>> On Wed, Apr 27, 2011 at 2:48 PM, Robert Kern<robert.kern at gmail.com>  wrote:
>> ..
>>> I suspect most of us would oppose changing it on general
>>> backwards-compatibility grounds rather than actually *liking* the current
>>> behavior. If the behavior changed with Python floats, we'd have to mull over
>>> whether we try to match that behavior with our scalar types (one of which
>>> subclasses from float) and our arrays. We would be either incompatible with
>>> Python or C, and we'd probably end up choosing Python to diverge from. It
>>> would make a mess, honestly. We already have to explain why equality is
>>> funky for arrays (arr1 == arr2 is a rich comparison that gives an array, not
>>> a bool, so we can't do containment tests for lists of arrays), so NaN is
>>> pretty easy to explain afterward.
>>
>> Most NumPy applications are actually not exposed to NaN problems
>> because it is recommended that NaNs be avoided in computations and
>> when missing or undefined values are necessary, the recommended
>> solution is to use ma.array or masked array which is a drop-in
>> replacement for numpy array type and carries a boolean "mask" value
>> with every element.  This allows undefined elements in arrays
>> of any type: float, integer or even boolean.  Masked values propagate
>> through all computations including comparisons.
>
> So do new masks get created when the outcome of an elementwise
> operation is a NaN?

No.

> Because that's the only reason why one should have
> NaNs in one's data in the first place -- not to indicate missing
> values!

Yes. I'm not sure that Alexander was being entirely clear. Masked arrays are 
intended to solve just the missing data problem and not the occurrence of NaNs 
from computations. There is still a persistent part of the community that really 
does like to use NaNs for missing data, though. I don't think that's entirely 
relevant to this discussion[1].

I wouldn't say that numpy applications aren't exposed to NaN problems. They are 
just as exposed to computational NaNs as you would expect any application that 
does that many flops to be.

[1] Okay, that's a lie. I'm sure that persistent minority would *love* to have 
NaN == NaN, because that would make their (ab)use of NaNs easier to work with.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
  that is made terrible by our own mad attempt to interpret it as though it had
  an underlying truth."
   -- Umberto Eco


From ncoghlan at gmail.com  Thu Apr 28 06:34:25 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 28 Apr 2011 14:34:25 +1000
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <BANLkTin06eTKAAZMUFsE3nh_=KcRZMezNw@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>
	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>
	<87d3k79jvt.fsf@uwakimon.sk.tsukuba.ac.jp>
	<BANLkTi=AusPRDsf2zKDGteZ5dGxs0EEuXw@mail.gmail.com>
	<BANLkTin06eTKAAZMUFsE3nh_=KcRZMezNw@mail.gmail.com>
Message-ID: <BANLkTikYFC62PUf_7jpb5TuypkSH2TPsWw@mail.gmail.com>

On Thu, Apr 28, 2011 at 2:07 PM, Guido van Rossum <guido at python.org> wrote:
> I'm not sure about array.array -- it doesn't hold objects so I don't
> think there's anything to enforce. It seems to behave the same way as
> NumPy arrays when they don't contain objects.

Yep, after reading Robert's post I realised the point about native
arrays in NumPy (and the lack of "object identity" in those cases)
applied equally well to the array module.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From v+python at g.nevcal.com  Thu Apr 28 06:52:55 2011
From: v+python at g.nevcal.com (Glenn Linderman)
Date: Wed, 27 Apr 2011 21:52:55 -0700
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <87ei4n9kef.fsf@uwakimon.sk.tsukuba.ac.jp>
References: <4DB7E3EA.3030208@avl.com>	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>	<BANLkTik=xtamqNMxrnXfq2cFk=Nx+sbb-A@mail.gmail.com>	<4DB8634B.6020508@g.nevcal.com>	<BANLkTi=Podj_ntfprqrA=apm8kBcbmVZkw@mail.gmail.com>	<4DB89807.3080609@g.nevcal.com>
	<87ei4n9kef.fsf@uwakimon.sk.tsukuba.ac.jp>
Message-ID: <4DB8F2A7.8060709@g.nevcal.com>

On 4/27/2011 7:31 PM, Stephen J. Turnbull wrote:
> Glenn Linderman writes:
>
>   >  I would not, however expect the original case that was described:
>   >   >>>  nan = float('nan')
>   >   >>>  nan == nan
>   >  False
>   >   >>>  [nan] == [nan]
>   >  True                  # also True in tuples, dicts, etc.
>
> Are you saying you would expect that
>
>>>> nan = float('nan')
>>>> a = [1, ..., 499, nan, 501, ..., 999]    # meta-ellipsis, not Ellipsis
>>>> a == a
> False
>
> ??

Yes, absolutely.  Once you understand the definition of NaN, it 
certainly cannot be True.   a is a, but a is not equal to a.

> I wouldn't even expect
>
>>>> a = [1, ..., 499, float('nan'), 501, ..., 999]
>>>> b = [1, ..., 499, float('nan'), 501, ..., 999]
>>>> a == b
> False
>
> but I guess I have to live with that.<wink>   While I wouldn't apply it
> to other people, I have to admit Raymond's aphorism applies to me (the
> surprising thing is not the behavior of NaNs, but that I'm surprised
> by anything that happens in the presence of NaNs!)

The only thing that should happen in the presence of NaNs is more NaNs :)


From guido at python.org  Thu Apr 28 06:54:52 2011
From: guido at python.org (Guido van Rossum)
Date: Wed, 27 Apr 2011 21:54:52 -0700
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <ipaq70$uu3$1@dough.gmane.org>
References: <4DB7E3EA.3030208@avl.com>
	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>
	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
	<ip9km0$ppm$1@dough.gmane.org> <ip9oe7$hgb$1@dough.gmane.org>
	<BANLkTimEzYbo24h1WwQBt9LjybSX9gUFwg@mail.gmail.com>
	<ipanmc$jnu$1@dough.gmane.org>
	<BANLkTikgDXag3BfLPoaiqXg0=bJiqKF0tA@mail.gmail.com>
	<ipaq70$uu3$1@dough.gmane.org>
Message-ID: <BANLkTim4y6BXKq_YxbDtExKPvCF2PDyTjQ@mail.gmail.com>

On Wed, Apr 27, 2011 at 9:25 PM, Robert Kern <robert.kern at gmail.com> wrote:
> On 2011-04-27 23:01 , Guido van Rossum wrote:
>> And I wouldn't want to change that. It sounds like NumPy wouldn't be
>> much affected if we were to change this (which I'm not saying we
>> would).
>
> Well, I didn't say that. If Python changed its behavior for (float('nan') ==
> float('nan')), we'd have to seriously consider some changes.

Ah, but I'm not proposing anything of the sort! float('nan') returns a
new object each time and two NaNs that are not the same *object* will
still follow the IEEE std. It's just when comparing a NaN-valued
*object* to *itself* (i.e. the *same* object) that I would consider
following the lead of Python's collections.

> We do like to
> keep *some* amount of correspondence with Python semantics. In particular,
> we like our scalar types that match Python types to work as close to the
> Python type as possible. We have the np.float64 type, which represents a C
> double scalar and corresponds to a Python float. It is used when a single
> item is indexed out of a float64 array. We even subclass from the Python
> float type to help working with libraries that may not know about numpy:
>
> [~]
> |5> import numpy as np
>
> [~]
> |6> nan = np.array([1.0, 2.0, float('nan')])[2]
>
> [~]
> |7> nan == nan
> False

Yeah, this is where things might change, because it is the same
*object* left and right.

> [~]
> |8> type(nan)
> numpy.float64
>
> [~]
> |9> type(nan).mro()
> [numpy.float64,
>  numpy.floating,
>  numpy.inexact,
>  numpy.number,
>  numpy.generic,
>  float,
>  object]
>
>
> If the Python float type changes behavior, we'd have to consider whether to
> keep that for np.float64 or change it to match the usual C semantics used
> elsewhere. So there *would* be a dilemma. Not necessarily the most
> nerve-wracking one, but a dilemma nonetheless.

Given what I just said, would it still be a dilemma? Maybe a smaller one?

-- 
--Guido van Rossum (python.org/~guido)

From guido at python.org  Thu Apr 28 06:57:22 2011
From: guido at python.org (Guido van Rossum)
Date: Wed, 27 Apr 2011 21:57:22 -0700
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <ipaqm5$1h7$1@dough.gmane.org>
References: <4DB7E3EA.3030208@avl.com>
	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>
	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
	<ip9km0$ppm$1@dough.gmane.org> <ip9oe7$hgb$1@dough.gmane.org>
	<BANLkTimS=Kf8JpJc51_1JNWr3VjWWvS8fw@mail.gmail.com>
	<BANLkTikmOD1exDeCf0FqmjbjKNeBzyQRMg@mail.gmail.com>
	<ipaqm5$1h7$1@dough.gmane.org>
Message-ID: <BANLkTikrJ48mdDZYgyDoJhg2eiMcNTcnCg@mail.gmail.com>

On Wed, Apr 27, 2011 at 9:33 PM, Robert Kern <robert.kern at gmail.com> wrote:
> [1] Okay, that's a lie. I'm sure that persistent minority would *love* to
> have NaN == NaN, because that would make their (ab)use of NaNs easier to
> work with.

Too bad, because that won't change. :-) I agree that this is abuse of
NaNs and shouldn't be encouraged.

-- 
--Guido van Rossum (python.org/~guido)

From v+python at g.nevcal.com  Thu Apr 28 07:06:49 2011
From: v+python at g.nevcal.com (Glenn Linderman)
Date: Wed, 27 Apr 2011 22:06:49 -0700
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <87bozr9is0.fsf@uwakimon.sk.tsukuba.ac.jp>
References: <4DB7E3EA.3030208@avl.com>	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>	<4DB8BEB3.8040701@stoneleaf.us>	<4DB8C0DB.8040808@g.nevcal.com>
	<87bozr9is0.fsf@uwakimon.sk.tsukuba.ac.jp>
Message-ID: <4DB8F5E9.3080000@g.nevcal.com>

On 4/27/2011 8:06 PM, Stephen J. Turnbull wrote:
> Glenn Linderman writes:
>   >  On 4/27/2011 6:11 PM, Ethan Furman wrote:
>
>   >  >  Totally out of my depth, but what if the a NaN object was allowed to
>   >  >  compare equal to itself, but different NaN objects still compared
>   >  >  unequal?  If NaN was a singleton then the current behavior makes more
>   >  >  sense, but since we get a new NaN with each instance creation is there
>   >  >  really a good reason why the same NaN can't be equal to itself?
>
> Yes.  A NaN is a special object that means "the computation that
> produced this object is undefined."  For example, consider the
> computation 1/x at x = 0.  If you approach from the left, 1/0
> "obviously" means minus infinity, while if you approach from the right
> just as obviously it means plus infinity.  So what does the 1/0 that
> occurs in [1/x for x in range(-5, 6)] mean?  In what sense is it
> "equal to itself"?  How can something which is not a number be
> compared for numerical equality?
>
>   >   >>>  n1 = float('NaN')
>   >   >>>  n2 = float('NaN')
>   >   >>>  n3 = n1
>   >
>   >   >>>  n1
>   >  nan
>   >   >>>  n2
>   >  nan
>   >   >>>  n3
>   >  nan
>   >
>   >   >>>  [n1] == [n2]
>   >  False
>   >   >>>  [n1] == [n3]
>   >  True
>   >
>   >  This is the current situation: some NaNs compare equal sometimes, and
>   >  some don't.
>
> No, Ethan is asking for "n1 == n3" =>  True.  As Mark points out, "[n1]
> == [n3]" can be interpreted as a containment question, rather than an
> equality question, with respect to the NaNs themselves.

It _can_ be interpreted as a containment question, but doing so is 
contrary to the documentation of Python list comparison, which presently 
doesn't match the implementation.  The intuitive definition of equality 
of lists is that each member is equal.  The presence of NaN destroys the 
intuition of people who don't expect them to be as different from 
numbers as they actually are; but for people who understand NaNs and 
expect them to behave according to their definition, the presence 
of a NaN in a list would be expected to cause the list to not be equal 
to itself, because a NaN is not equal to itself.

> In standard
> set theory, these are the same question, but that's not necessarily so
> in other set-like toposes.  In particular, getting equality and set
> membership to behave reasonably with respect to each other one of the
> problems faced in developing a workable theory of fuzzy sets.
>
> I don't think it matters what behavior you choose for NaNs, somebody
> is going be unhappy sometimes.

Some people will be unhappy just because they exist in the language, so 
I agree :)


From ncoghlan at gmail.com  Thu Apr 28 07:12:20 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 28 Apr 2011 15:12:20 +1000
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <BANLkTim4y6BXKq_YxbDtExKPvCF2PDyTjQ@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>
	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>
	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
	<ip9km0$ppm$1@dough.gmane.org> <ip9oe7$hgb$1@dough.gmane.org>
	<BANLkTimEzYbo24h1WwQBt9LjybSX9gUFwg@mail.gmail.com>
	<ipanmc$jnu$1@dough.gmane.org>
	<BANLkTikgDXag3BfLPoaiqXg0=bJiqKF0tA@mail.gmail.com>
	<ipaq70$uu3$1@dough.gmane.org>
	<BANLkTim4y6BXKq_YxbDtExKPvCF2PDyTjQ@mail.gmail.com>
Message-ID: <BANLkTimgxGzL8diuiG=DJg=P5kDwf_t4nQ@mail.gmail.com>

On Thu, Apr 28, 2011 at 2:54 PM, Guido van Rossum <guido at python.org> wrote:
>> Well, I didn't say that. If Python changed its behavior for (float('nan') ==
>> float('nan')), we'd have to seriously consider some changes.
>
> Ah, but I'm not proposing anything of the sort! float('nan') returns a
> new object each time and two NaNs that are not the same *object* will
> still follow the IEEE std. It's just when comparing a NaN-valued
> *object* to *itself* (i.e. the *same* object) that I would consider
> following the lead of Python's collections.

The reason this possibility bothers me is that it doesn't mesh well
with the "implementations are free to cache and reuse immutable
objects" rule. Although, if the updated NaN semantics were explicit
that identity was now considered part of the value of NaN objects
(thus ruling out caching them at the implementation layer), I guess
that objection would go away.

Regards,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From alexander.belopolsky at gmail.com  Thu Apr 28 07:27:36 2011
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Thu, 28 Apr 2011 01:27:36 -0400
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <BANLkTinE2KKCV1gSSrYWnHtJrNnt0Oq=pg@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>
	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>
	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
	<0901E43C-8EC2-475B-9D79-440D77B17FFF@gmail.com>
	<BANLkTinE2KKCV1gSSrYWnHtJrNnt0Oq=pg@mail.gmail.com>
Message-ID: <BANLkTintAHHA92Y20AUGR0t8EUSv48i-DQ@mail.gmail.com>

On Wed, Apr 27, 2011 at 11:14 PM, Guido van Rossum <guido at python.org> wrote:
..
>> ISTM, the current state of affairs is reasonable.
>
> Hardly; when I picked the NaN behavior I knew the IEEE std prescribed
> it but had never seen any code that used this.
>

Same here.  The only code I've seen that depended on this NaN behavior
was either buggy (the programmer did not consider the NaN case) or was
using x == x as a way to detect nans.  The latter idiom is universally
frowned upon regardless of the language.  In Python one should use
math.isnan() for this purpose.
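
For instance, a quick side-by-side sketch of the two idioms:

>>> import math
>>> x = float('nan')
>>> x != x           # the frowned-upon self-comparison trick
True
>>> math.isnan(x)    # the explicit, recommended test
True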

I would like to present a challenge to the proponents of the status
quo.  Look through your codebase and find code that will behave
differently if nan == nan were True.   Then come back and report how
many bugs you have found. :-)  Seriously, though, I bet that if you
find anything, it will fall into one of the two cases I mentioned
above.

..
> I expect that that if 15 years or so ago I had decided to ignore the
> IEEE std and declare that object identity always implies equality it
> would have seemed quite reasonable as well... The rule could be
> something like "the == operator first checks for identity and if left
> and right are the same object, the answer is True without calling the
> object's __eq__ method; similarly the != would always return False
> when an object is compared to itself".

Note that ctypes' floats already behave this way:

>>> from ctypes import c_double
>>> x = c_double(float('nan'))
>>> x == x
True

..
> Doing this in 3.3 would, alas, be a huge undertaking -- I expect that
> there are tons of unittests that depend either on the current NaN
> behavior or on x == x calling x.__eq__(x). Plus the decimal unittests
> would be affected. Perhaps somebody could try?

Before we go down this path, I would like to discuss another
peculiarity of NaNs:

>>> float('nan') < 0
False
>>> float('nan') > 0
False

This property in my experience causes much more trouble than nan ==
nan being false.  The problem is that common sorting or binary search
algorithms may degenerate into infinite loops in the presence of nans.
This may even happen when searching for a finite value in a large
array that contains a single nan.  Errors like this do happen in the
wild, and after chasing a bug like this programmers tend to avoid
nans at all costs.  Oftentimes this leads to using "magic"
placeholders such as 1e300 for missing data.
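
A tiny illustration of the sorting problem (a sketch; the exact result
depends on the sort implementation, but since every ordering comparison
against the nan answers False, the output is typically not sorted at all):

>>> nan = float('nan')
>>> sorted([3.0, nan, 1.0, 2.0])
[3.0, nan, 1.0, 2.0]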

Since py3k has already made None < 0 an error, it may be reasonable
for float('nan') < 0 to raise an error as well (probably ValueError
rather than TypeError).  This will not make lists with nans sortable
or searchable using binary search, but will make associated bugs
easier to find.

From alexander.belopolsky at gmail.com  Thu Apr 28 07:37:28 2011
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Thu, 28 Apr 2011 01:37:28 -0400
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <ipaqm5$1h7$1@dough.gmane.org>
References: <4DB7E3EA.3030208@avl.com>
	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>
	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
	<ip9km0$ppm$1@dough.gmane.org> <ip9oe7$hgb$1@dough.gmane.org>
	<BANLkTimS=Kf8JpJc51_1JNWr3VjWWvS8fw@mail.gmail.com>
	<BANLkTikmOD1exDeCf0FqmjbjKNeBzyQRMg@mail.gmail.com>
	<ipaqm5$1h7$1@dough.gmane.org>
Message-ID: <BANLkTik9a1_QVMEs23LGN_Zvvy9_x4a0Jg@mail.gmail.com>

On Thu, Apr 28, 2011 at 12:33 AM, Robert Kern <robert.kern at gmail.com> wrote:
> On 2011-04-27 23:24 , Guido van Rossum wrote:
..
>> So do new masks get created when the outcome of an elementwise
>> operation is a NaN?
>
> No.

Yes.

>>> from MA import array
>>> print array([0])/array([0])
[-- ]

(I don't have numpy on this laptop, so the example is using Numeric,
but I hope you guys did not change that while I was not looking:-)

From greg.ewing at canterbury.ac.nz  Thu Apr 28 07:40:44 2011
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Thu, 28 Apr 2011 17:40:44 +1200
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <4DB8B18C.1050003@pearwood.info>
References: <4DB7E3EA.3030208@avl.com>
	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>
	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
	<4DB89997.9080102@canterbury.ac.nz> <4DB8B18C.1050003@pearwood.info>
Message-ID: <4DB8FDDC.9020601@canterbury.ac.nz>

Steven D'Aprano wrote:
> You can compare NANs, and the result of the 
> comparisons are perfectly well defined by either True or False.

But it's *arbitrarily* defined, and it's far from clear that
the definition chosen is useful in any way.

If you perform a computation and get a NaN as the result,
you know that something went wrong at some point.

But if you subject that NaN to a comparison, your code
takes some arbitrarily-chosen branch and produces a
result that may look plausible but is almost certainly
wrong.

The Pythonic thing to do (in the Python 3 world at least) would
be to regard NaNs as non-comparable and raise an exception.

-- 
Greg

From alexander.belopolsky at gmail.com  Thu Apr 28 07:53:06 2011
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Thu, 28 Apr 2011 01:53:06 -0400
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <BANLkTikmOD1exDeCf0FqmjbjKNeBzyQRMg@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>
	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>
	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
	<ip9km0$ppm$1@dough.gmane.org> <ip9oe7$hgb$1@dough.gmane.org>
	<BANLkTimS=Kf8JpJc51_1JNWr3VjWWvS8fw@mail.gmail.com>
	<BANLkTikmOD1exDeCf0FqmjbjKNeBzyQRMg@mail.gmail.com>
Message-ID: <BANLkTimd=eJSj85+8cO=fmQnnrWNsbpgYQ@mail.gmail.com>

On Thu, Apr 28, 2011 at 12:24 AM, Guido van Rossum <guido at python.org> wrote:
> So do new masks get created when the outcome of an elementwise
> operation is a NaN?  Because that's the only reason why one should have
> NaNs in one's data in the first place.

If this is the case, why does Python almost never produce NaNs as the
IEEE standard prescribes?

>>> 0.0/0.0
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ZeroDivisionError: float division


> -- not to indicate missing values!

Sometimes you don't have a choice.  For example, when your data comes
from a database that uses NaNs for missing values.

From greg.ewing at canterbury.ac.nz  Thu Apr 28 08:02:29 2011
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Thu, 28 Apr 2011 18:02:29 +1200
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <87bozr9is0.fsf@uwakimon.sk.tsukuba.ac.jp>
References: <4DB7E3EA.3030208@avl.com>
	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>
	<4DB8BEB3.8040701@stoneleaf.us> <4DB8C0DB.8040808@g.nevcal.com>
	<87bozr9is0.fsf@uwakimon.sk.tsukuba.ac.jp>
Message-ID: <4DB902F5.1080502@canterbury.ac.nz>

Stephen J. Turnbull wrote:
> So what does the 1/0 that
> occurs in [1/x for x in range(-5, 6)] mean?  In what sense is it
> "equal to itself"?  How can something which is not a number be
> compared for numerical equality?

I would say it *can't* be compared for *numerical* equality.
It might make sense to compare it using some other notion of
equality.

One of the problems here, I think, is that Python only lets
you define one notion of equality for each type, and that
notion is the one that gets used when you compare collections
of that type. (Or at least it's supposed to, but the identity-
implies-equality shortcut that gets taken in some places
interferes with that.)

So if you're going to decide that it doesn't make sense to
compare undefined numeric quantities, then it doesn't make
sense to compare lists containing them either.

-- 
Greg

From greg.ewing at canterbury.ac.nz  Thu Apr 28 08:07:07 2011
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Thu, 28 Apr 2011 18:07:07 +1200
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <BANLkTinE2KKCV1gSSrYWnHtJrNnt0Oq=pg@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>
	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>
	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
	<0901E43C-8EC2-475B-9D79-440D77B17FFF@gmail.com>
	<BANLkTinE2KKCV1gSSrYWnHtJrNnt0Oq=pg@mail.gmail.com>
Message-ID: <4DB9040B.9010107@canterbury.ac.nz>

Guido van Rossum wrote:
> Currently NaN is not violating
> any language rules -- it is just violating users' intuition, in a much
> worse way than Inf does.

If it's to be an official language non-rule (by which I mean
that types are officially allowed to compare non-reflexively)
then any code assuming that identity implies equality for
arbitrary objects is broken and should be fixed.

-- 
Greg

From alexander.belopolsky at gmail.com  Thu Apr 28 08:11:49 2011
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Thu, 28 Apr 2011 02:11:49 -0400
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <4DB8FDDC.9020601@canterbury.ac.nz>
References: <4DB7E3EA.3030208@avl.com>
	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>
	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
	<4DB89997.9080102@canterbury.ac.nz>
	<4DB8B18C.1050003@pearwood.info>
	<4DB8FDDC.9020601@canterbury.ac.nz>
Message-ID: <BANLkTim2zd+DxUNZSBkCiTOOfUk0OHUXsA@mail.gmail.com>

On Thu, Apr 28, 2011 at 1:40 AM, Greg Ewing <greg.ewing at canterbury.ac.nz> wrote:
..
> The Pythonic thing to do (in the Python 3 world at least) would
> be to regard NaNs as non-comparable and raise an exception.

As I mentioned in a previous post, I agree in case of <, <=,  >, or >=
comparisons, but == and  != are a harder case because you don't want,
for example:

>>> [1,2,float('nan'),3].index(3)
3

to raise an exception.

From v+python at g.nevcal.com  Thu Apr 28 08:20:56 2011
From: v+python at g.nevcal.com (Glenn Linderman)
Date: Wed, 27 Apr 2011 23:20:56 -0700
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <BANLkTi=AusPRDsf2zKDGteZ5dGxs0EEuXw@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>	<87d3k79jvt.fsf@uwakimon.sk.tsukuba.ac.jp>
	<BANLkTi=AusPRDsf2zKDGteZ5dGxs0EEuXw@mail.gmail.com>
Message-ID: <4DB90748.4030501@g.nevcal.com>

On 4/27/2011 8:43 PM, Nick Coghlan wrote:
> On Thu, Apr 28, 2011 at 12:42 PM, Stephen J. Turnbull
> <stephen at xemacs.org>  wrote:
>> Mark Dickinson writes:
>>
>>   >  Declaring that 'nan == nan' should be True seems attractive in
>>   >  theory,
>>
>> No, it's intuitively attractive, but that's because humans like nice
>> continuous behavior.  In *theory*, it's true that some singularities
>> are removable, and the NaN that occurs when evaluating at that point
>> is actually definable in a broader context, but the point of NaN is
>> that some singularities are *not* removable.  This is somewhat
>> Pythonic: "In the presence of ambiguity, refuse to guess."
> Refusing to guess in this case would be to treat all NaNs as
> signalling NaNs, and that wouldn't be good, either :)
>
> I like Terry's suggestion for a glossary entry, and have created an
> updated proposal at http://bugs.python.org/issue11945
>
> (I also noted that array.array is like collections.Sequence in failing
> to enforce the container invariants in the presence of NaN values)

In that bug, Nick, you mention that reflexive equality is something that 
container classes rely on in their implementation.  Such reliance seems 
to me to be a bug, or an inappropriate optimization, rather than a 
necessity.  I realize that classes that do not define equality use 
identity as their default equality operator, and that is acceptable for 
items that do not or cannot have any better equality operator.  It does 
lead to the situation where two objects that are bit-for-bit clones get 
separate entries in a set... exactly the same as how NaNs of different 
identity work... the situation with a NaN of the same identity not being 
added to the set multiple times seems to simply be a bug because of 
conflating identity and equality, and should not be relied on in 
container implementations.

From alexander.belopolsky at gmail.com  Thu Apr 28 08:51:07 2011
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Thu, 28 Apr 2011 02:51:07 -0400
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <4DB90748.4030501@g.nevcal.com>
References: <4DB7E3EA.3030208@avl.com>
	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>
	<87d3k79jvt.fsf@uwakimon.sk.tsukuba.ac.jp>
	<BANLkTi=AusPRDsf2zKDGteZ5dGxs0EEuXw@mail.gmail.com>
	<4DB90748.4030501@g.nevcal.com>
Message-ID: <BANLkTimoFKAKeG3iXQnBXi4x1xoLKPnkTA@mail.gmail.com>

On Thu, Apr 28, 2011 at 2:20 AM, Glenn Linderman <v+python at g.nevcal.com> wrote:
..
> In that bug, Nick, you mention that reflexive equality is something that
> container classes rely on in their implementation.  Such reliance seems to
> me to be a bug, or an inappropriate optimization, ..

An alternative interpretation would be that it is a bug to use NaN
values in lists.  It is certainly nonsensical to use NaNs as keys in
dictionaries and that reportedly led Java designers to forgo the
nonreflexivity of nans:

"""
A "NaN" value is not equal to itself. However, a "NaN" Java "Float"
object is equal to itself. The semantic is defined this way, because
otherwise "NaN" Java "Float" objects cannot be retrieved from a hash
table.
""" - http://www.concentric.net/~ttwang/tech/javafloat.htm

With the status quo in Python, it may only make sense to store NaNs in
array.array, but not in a list.
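
The dictionary case is easy to demonstrate (a quick sketch of the
retrieval problem the Java designers were worried about):

>>> nan = float('nan')
>>> d = {nan: 'value'}
>>> d[nan]                # the same object is found via the identity shortcut
'value'
>>> d[float('nan')]       # an equal-looking but distinct NaN is not found
Traceback (most recent call last):
  ...
KeyError: nan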

From ncoghlan at gmail.com  Thu Apr 28 08:54:08 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 28 Apr 2011 16:54:08 +1000
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <4DB90748.4030501@g.nevcal.com>
References: <4DB7E3EA.3030208@avl.com>
	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>
	<87d3k79jvt.fsf@uwakimon.sk.tsukuba.ac.jp>
	<BANLkTi=AusPRDsf2zKDGteZ5dGxs0EEuXw@mail.gmail.com>
	<4DB90748.4030501@g.nevcal.com>
Message-ID: <BANLkTi=eAug-2n+MsQvSpaet5PM4NQDHSg@mail.gmail.com>

On Thu, Apr 28, 2011 at 4:20 PM, Glenn Linderman <v+python at g.nevcal.com> wrote:
> In that bug, Nick, you mention that reflexive equality is something that
> container classes rely on in their implementation.  Such reliance seems to
> me to be a bug, or an inappropriate optimization, rather than a necessity.
> I realize that classes that do not define equality use identity as their
> default equality operator, and that is acceptable for items that do not or
> cannot have any better equality operator.  It does lead to the situation
> where two objects that are bit-for-bit clones get separate entries in a
> set... exactly the same as how NaNs of different identity work... the
> situation with a NaN of the same identity not being added to the set
> multiple times seems to simply be a bug because of conflating identity and
> equality, and should not be relied on in container implementations.

No, as Raymond has articulated a number of times over the years, it's
a property of the equivalence relation that is needed in order to
present sane invariants to users of the container. I included in the
bug report the critical invariants I am currently aware of that should
hold, even when the container may hold types with a non-reflexive
definition of equality:

  assert [x] == [x]                     # Generalised to all container types
  assert not [x] != [x]                # Generalised to all container types
  for x in c:
    assert x in c
    assert c.count(x) > 0                   # If applicable
    assert 0 <= c.index(x) < len(c)      # If applicable
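
A quick interactive check (just a sketch) that these hold for a builtin
list even when x is a NaN, thanks to the identity shortcut:

>>> x = float('nan')
>>> c = [x]
>>> [x] == [x], x in c, c.count(x), c.index(x)
(True, True, 1, 0)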

The builtin types all already work this way, and that's a deliberate
choice - my proposal is simply to document the behaviour as
intentional, and fix the one case I know of in the standard library
where we don't implement these semantics correctly (i.e.
collections.Sequence).

The question of whether or not float and decimal.Decimal should be
modified to have reflexive definitions of equality (even for NaN
values) is actually orthogonal to the question of clarifying and
documenting the expected semantics of containers in the face of
non-reflexive definitions of equality.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From vinay_sajip at yahoo.co.uk  Thu Apr 28 09:23:43 2011
From: vinay_sajip at yahoo.co.uk (Vinay Sajip)
Date: Thu, 28 Apr 2011 07:23:43 +0000 (UTC)
Subject: [Python-Dev] Socket servers in the test suite
References: <loom.20110427T230704-75@post.gmane.org>
	<BANLkTimqCY02e+iy-OcV4nzZa1BTiC_sOQ@mail.gmail.com>
Message-ID: <loom.20110428T091649-170@post.gmane.org>

Nick Coghlan <ncoghlan <at> gmail.com> writes:

> If you poke around in the test directory a bit, you may find there is
> already some code along these lines in other tests (e.g. I'm pretty
> sure the urllib tests already fire up a local server). Starting down
> the path of standardisation of that test functionality would be good.

I have poked around, and each test module pretty much does its own thing.
Perhaps that's unavoidable; I'll try and see if there are usable common patterns
in the specific instances.
 
> For larger components like this, it's also reasonable to add a
> dedicated helper module rather than using test.support directly. I
> started (and Antoine improved) something along those lines with the
> test.script_helper module for running Python subprocesses and checking
> their output, although it lacks documentation and there are lots of
> older tests that still use subprocess directly.

Yes, I thought perhaps it was too specialised for adding to test.support itself.

Thanks for the feedback,

Vinay


From v+python at g.nevcal.com  Thu Apr 28 09:27:26 2011
From: v+python at g.nevcal.com (Glenn Linderman)
Date: Thu, 28 Apr 2011 00:27:26 -0700
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <BANLkTi=eAug-2n+MsQvSpaet5PM4NQDHSg@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>	<87d3k79jvt.fsf@uwakimon.sk.tsukuba.ac.jp>	<BANLkTi=AusPRDsf2zKDGteZ5dGxs0EEuXw@mail.gmail.com>	<4DB90748.4030501@g.nevcal.com>
	<BANLkTi=eAug-2n+MsQvSpaet5PM4NQDHSg@mail.gmail.com>
Message-ID: <4DB916DE.1050302@g.nevcal.com>

On 4/27/2011 11:54 PM, Nick Coghlan wrote:
> On Thu, Apr 28, 2011 at 4:20 PM, Glenn Linderman<v+python at g.nevcal.com>  wrote:
>> In that bug, Nick, you mention that reflexive equality is something that
>> container classes rely on in their implementation.  Such reliance seems to
>> me to be a bug, or an inappropriate optimization, rather than a necessity.
>> I realize that classes that do not define equality use identity as their
>> default equality operator, and that is acceptable for items that do not or
>> cannot have any better equality operator.  It does lead to the situation
>> where two objects that are bit-for-bit clones get separate entries in a
>> set... exactly the same as how NaNs of different identity work... the
>> situation with a NaN of the same identity not being added to the set
>> multiple times seems to simply be a bug because of conflating identity and
>> equality, and should not be relied on in container implementations.
> No, as Raymond has articulated a number of times over the years, it's
> a property of the equivalence relation that is needed in order to
> present sane invariants to users of the container.

I probably wasn't around when Raymond did his articulation :)  Sorry for 
whatever amount of rehashing I'm doing here -- pointers to some of the 
articulation would be welcome, but perhaps the summary below is intended 
to recap the results of such discussions.  If my comments below seem to 
be grasping the essence of those discussions, then no need for the 
pointers... if I'm way off, I'd like to read a thread or two.

> I included in the
> bug report the critical invariants I am currently aware of that should
> hold, even when the container may hold types with a non-reflexive
> definition of equality:
>
>    assert [x] == [x]                     # Generalised to all container types
>    assert not [x] != [x]                # Generalised to all container types
>    for x in c:
>      assert x in c
>      assert c.count(x) > 0                  # If applicable
>      assert 0 <= c.index(x) < len(c)        # If applicable
>
> The builtin types all already work this way, and that's a deliberate
> choice - my proposal is simply to document the behaviour as
> intentional, and fix the one case I know of in the standard library
> where we don't implement these semantics correctly (i.e.
> collections.Sequence).
>
> The question of whether or not float and decimal.Decimal should be
> modified to have reflexive definitions of equality (even for NaN
> values) is actually orthogonal to the question of clarifying and
> documenting the expected semantics of containers in the face of
> non-reflexive definitions of equality.

Yes, I agree they are orthogonal questions... separate answers and 
choices can be made for specific classes.  Just as some classes 
implement equality using identity, it would also be possible to 
implement identity using equality, and it is possible to conflate the 
two, as has apparently been deliberately done for Python containers, 
without reflecting that in the documentation.

If the containers have been deliberately implemented in that way, and it 
is not appropriate to change them, then more work is needed in the 
documentation than just your proposed Glossary definition, as the very 
intuitive descriptions in the Comparisons section are quite at odds with 
the current implementation.

Without having read the original articulations by Raymond or any 
discussions of the pros and cons, it would appear that the above list of 
invariants, which you refer to as "sane", is derived from a "pre-NaN" 
or "reflexive equality" perspective; while some folk perhaps think the 
concept of NaN is a particular brand of insanity, it is a standard 
brand, and therefore worthy of understanding and discussion.  And 
clearly, if the NaN perspective is intentionally corralled in Python, 
then the documentation needs to be clarified.  On the other hand, the 
SQL language has embraced the same concept as NaN in its concept of 
NULL, and has pushed that concept (they call it three-valued logic, I 
think) clear through the language.  NULL == NULL is not True, and it is 
not False, but it is NULL.  Of course, the language is different in 
other ways than Python; values are not objects and have no identity, but 
they do have collections of values called tuples, columns, and tables, 
which are similar to lists and lists of lists.  And they have mappings 
called indexes.  And they've made it all work with the concept of NULL 
and three-valued logic.  And sane people work with database systems 
built around such concepts.  So I guess I reject the argument that the 
above invariants are required for sanity.

On the other hand, having not much Python internals knowledge as yet, 
I'm in no position to know how seriously things would break internally 
should a different set of invariants, embracing and extending the concept 
of non-reflexive equality, be invented to replace the above, nor 
whether there is a compatible migration path to achieve it in a 
reasonable manner... from future import NaNsanity ... :)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110428/92a01b0b/attachment.html>

From alexander.belopolsky at gmail.com  Thu Apr 28 09:30:10 2011
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Thu, 28 Apr 2011 03:30:10 -0400
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <BANLkTi=eAug-2n+MsQvSpaet5PM4NQDHSg@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>
	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>
	<87d3k79jvt.fsf@uwakimon.sk.tsukuba.ac.jp>
	<BANLkTi=AusPRDsf2zKDGteZ5dGxs0EEuXw@mail.gmail.com>
	<4DB90748.4030501@g.nevcal.com>
	<BANLkTi=eAug-2n+MsQvSpaet5PM4NQDHSg@mail.gmail.com>
Message-ID: <BANLkTikdix72o=46aOrr1Dh-WjXnFZ4auQ@mail.gmail.com>

On Thu, Apr 28, 2011 at 2:54 AM, Nick Coghlan <ncoghlan at gmail.com> wrote:
..
> No, as Raymond has articulated a number of times over the years, it's
> a property of the equivalence relation that is needed in order to
> present sane invariants to users of the container. I included in the
> bug report the critical invariants I am currently aware of that should
> hold, even when the container may hold types with a non-reflexive
> definition of equality:
>
>   assert [x] == [x]                    # Generalised to all container types
>   assert not [x] != [x]                # Generalised to all container types
>   for x in c:
>     assert x in c
>     assert c.count(x) > 0              # If applicable
>     assert 0 <= c.index(x) < len(c)    # If applicable
>

It is an interesting question of what "sane invariants" are.  Why do you
consider the invariants that you listed essential while, say,

if c1 == c2:
   assert all(x == y for x,y in zip(c1, c2))

optional?

Can you give examples of algorithms that would break if one of your
invariants is violated, but would still work if the data contains
NaNs?

From ncoghlan at gmail.com  Thu Apr 28 09:32:33 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 28 Apr 2011 17:32:33 +1000
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <4DB916DE.1050302@g.nevcal.com>
References: <4DB7E3EA.3030208@avl.com>
	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>
	<87d3k79jvt.fsf@uwakimon.sk.tsukuba.ac.jp>
	<BANLkTi=AusPRDsf2zKDGteZ5dGxs0EEuXw@mail.gmail.com>
	<4DB90748.4030501@g.nevcal.com>
	<BANLkTi=eAug-2n+MsQvSpaet5PM4NQDHSg@mail.gmail.com>
	<4DB916DE.1050302@g.nevcal.com>
Message-ID: <BANLkTikGVfox3dXkO7B5f5iQbX5L8ypNgw@mail.gmail.com>

On Thu, Apr 28, 2011 at 5:27 PM, Glenn Linderman <v+python at g.nevcal.com> wrote:
> Without having read the original articulations by Raymond or any discussions
> of the pros and cons,

In my first post to this thread,  I pointed out the bug tracker item
(http://bugs.python.org/issue4296) that included the discussion of
restoring this behaviour to the 3.x branch, after it was inadvertently
removed.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From ziade.tarek at gmail.com  Thu Apr 28 09:54:23 2011
From: ziade.tarek at gmail.com (=?ISO-8859-1?Q?Tarek_Ziad=E9?=)
Date: Thu, 28 Apr 2011 09:54:23 +0200
Subject: [Python-Dev] the role of assert in the standard library ?
Message-ID: <BANLkTimtoxmyNg9sn7e27soPBdxQMZ-q2Q@mail.gmail.com>

Hello

I removed some assert calls in distutils some time ago because the
package was not behaving correctly when people were using Python with
the --optimize flag. In other words, assert became a full part of the
code logic and removing them via -O was changing the behavior.

In my opinion assert should be avoided completely anywhere else than
in the tests. If this is a wrong statement, please let me know why :)

So, I grepped the stdlib for assert calls, and I found 177 of
them; many of them make Python act differently depending on
the -O flag.

Here's an example with a randomly picked assert in the threading module:

>>>>>>>>>>>>>>>>>>>>>>>>
import threading

class test(threading.Thread):
    def __init__(self):
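        # note: deliberately does not call threading.Thread.__init__()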
        self.bla = 1

    def run(self):
        print('running')

t = test()
print(t)
<<<<<<<<<<<<<<<<<<<<<<

The __repr__ method behaves differently depending on the -O flag:

$ python3 -O test.py
Traceback (most recent call last):
  File "test.py", line 12, in <module>
    print(t)
  File "/usr/local/lib/python3.2/threading.py", line 652, in __repr__
    if self._started.is_set():
AttributeError: 'test' object has no attribute '_started'

$ python3 test.py
Traceback (most recent call last):
  File "test.py", line 12, in <module>
    print(t)
  File "/usr/local/lib/python3.2/threading.py", line 650, in __repr__
    assert self._initialized, "Thread.__init__() was not called"
AttributeError: 'test' object has no attribute '_initialized'

$ python test.py
Traceback (most recent call last):
  File "test.py", line 12, in <module>
    print(t)
  File "/usr/lib/python2.6/threading.py", line 451, in __repr__
    assert self.__initialized, "Thread.__init__() was not called"
AssertionError: Thread.__init__() was not called
             <--- oops different error

$ python -O test.py
Traceback (most recent call last):
  File "test.py", line 12, in <module>
    print(t)
  File "/usr/lib/python2.6/threading.py", line 453, in __repr__
    if self.__started.is_set():
AttributeError: 'test' object has no attribute '_Thread__started'


I have seen some other places where things would simply break with -O.

Am I right in thinking we should do a pass on those and either remove them or
turn them into exceptions that are raised with -O as well?

This flag is meant to "optimize generated bytecode slightly", but I am
not sure this should also involve slightly changing the way the code behaves.
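
Just to make the effect concrete, outside the stdlib:

$ python3 -c "assert False, 'precondition violated'"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
AssertionError: precondition violated
$ python3 -O -c "assert False, 'precondition violated'"
$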

Cheers
Tarek
-- 
Tarek Ziadé | http://ziade.org

From ncoghlan at gmail.com  Thu Apr 28 09:57:59 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 28 Apr 2011 17:57:59 +1000
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <BANLkTikdix72o=46aOrr1Dh-WjXnFZ4auQ@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>
	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>
	<87d3k79jvt.fsf@uwakimon.sk.tsukuba.ac.jp>
	<BANLkTi=AusPRDsf2zKDGteZ5dGxs0EEuXw@mail.gmail.com>
	<4DB90748.4030501@g.nevcal.com>
	<BANLkTi=eAug-2n+MsQvSpaet5PM4NQDHSg@mail.gmail.com>
	<BANLkTikdix72o=46aOrr1Dh-WjXnFZ4auQ@mail.gmail.com>
Message-ID: <BANLkTik8gRUPt2jxSkbEy9GVo1nzdFT0dg@mail.gmail.com>

On Thu, Apr 28, 2011 at 5:30 PM, Alexander Belopolsky
<alexander.belopolsky at gmail.com> wrote:
> On Thu, Apr 28, 2011 at 2:54 AM, Nick Coghlan <ncoghlan at gmail.com> wrote:
> ..
>> No, as Raymond has articulated a number of times over the years, it's
>> a property of the equivalence relation that is needed in order to
>> present sane invariants to users of the container. I included in the
>> bug report the critical invariants I am currently aware of that should
>> hold, even when the container may hold types with a non-reflexive
>> definition of equality:
>>
>>   assert [x] == [x]                    # Generalised to all container types
>>   assert not [x] != [x]                # Generalised to all container types
>>   for x in c:
>>     assert x in c
>>     assert c.count(x) > 0              # If applicable
>>     assert 0 <= c.index(x) < len(c)    # If applicable
>>
>
> It is an interesting question of what "sane invariants" are.  Why do you
> consider the invariants that you listed essential while, say,
>
> if c1 == c2:
>    assert all(x == y for x,y in zip(c1, c2))
>
> optional?

Because this assertion is an assertion about the behaviour of
comparisons that violates IEEE754, while the assertions I list are all
assertions about the behaviour of containers that can be made true
*regardless* of IEEE754 by checking identity explicitly.

The correct assertion under Python's current container semantics is:

  if list(c1) == list(c2):  # Make ordering assumption explicit
    assert all(x is y or x == y for x,y in zip(c1, c2))  # Enforce reflexivity

Meyer is a purist - sticking with the mathematical definition of
equality is the sort of thing that fits his view of the world and what
Eiffel should be, even if it hinders interoperability with other
languages and tools. Python tends to be a bit more pragmatic about
things, in particular when it comes to interoperability, so it makes
sense to follow IEEE754 and the decimal specification at the
individual comparison level.

However, we can contain the damage to some degree by specifying that
containers should enforce reflexivity where they need it. This is
already the case at the implementation level (collections.Sequence
aside), it just needs to be pushed up to the language definition
level.
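
For example, with today's builtin list behaviour (identity is checked
before equality, so this is just a sketch of current CPython, not a proposal):

    nan = float('nan')
    c = [1.0, nan, 2.0]
    assert nan in c
    assert c.count(nan) > 0
    assert 0 <= c.index(nan) < len(c)
    assert c == c and not c != c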

> Can you give examples of algorithms that would break if one of your
> invariants is violated, but would still work if the data contains
> NaNs?

Sure, anything that cares more about objects than it does about
values. The invariants are about making containers behave like
containers as far as possible, even in the face of recalcitrant types
like IEEE754 floating point.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From hrvoje.niksic at avl.com  Thu Apr 28 10:23:15 2011
From: hrvoje.niksic at avl.com (Hrvoje Niksic AVL HR)
Date: Thu, 28 Apr 2011 10:23:15 +0200
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <87ei4n9kef.fsf@uwakimon.sk.tsukuba.ac.jp>
References: <4DB7E3EA.3030208@avl.com>	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>	<BANLkTik=xtamqNMxrnXfq2cFk=Nx+sbb-A@mail.gmail.com>	<4DB8634B.6020508@g.nevcal.com>	<BANLkTi=Podj_ntfprqrA=apm8kBcbmVZkw@mail.gmail.com>	<4DB89807.3080609@g.nevcal.com>
	<87ei4n9kef.fsf@uwakimon.sk.tsukuba.ac.jp>
Message-ID: <4DB923F3.9020904@avl.com>

On 04/28/2011 04:31 AM, Stephen J. Turnbull wrote:
> Are you saying you would expect that
>
>>>>  nan = float('nan')
>>>>  a = [1, ..., 499, nan, 501, ..., 999]    # meta-ellipsis, not Ellipsis
>>>>  a == a
> False
>
> ??

I would expect l1 == l2, where l1 and l2 are both lists, to be 
semantically equivalent to len(l1) == len(l2) and all(imap(operator.eq, 
l1, l2)).  Currently it isn't, and that was the motivation for this thread.
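
A quick way to see the difference (using the builtin float NaN):

    import operator
    nan = float('nan')
    l1 = [nan]
    print(l1 == l1)                                  # True
    print(len(l1) == len(l1) and
          all(map(operator.eq, l1, l1)))             # False: nan != nan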

If objects that break reflexivity of == are not allowed, this should be 
documented, and such objects banished from the standard library.

Hrvoje

From alexander.belopolsky at gmail.com  Thu Apr 28 10:30:56 2011
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Thu, 28 Apr 2011 04:30:56 -0400
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <BANLkTik8gRUPt2jxSkbEy9GVo1nzdFT0dg@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>
	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>
	<87d3k79jvt.fsf@uwakimon.sk.tsukuba.ac.jp>
	<BANLkTi=AusPRDsf2zKDGteZ5dGxs0EEuXw@mail.gmail.com>
	<4DB90748.4030501@g.nevcal.com>
	<BANLkTi=eAug-2n+MsQvSpaet5PM4NQDHSg@mail.gmail.com>
	<BANLkTikdix72o=46aOrr1Dh-WjXnFZ4auQ@mail.gmail.com>
	<BANLkTik8gRUPt2jxSkbEy9GVo1nzdFT0dg@mail.gmail.com>
Message-ID: <BANLkTim92qa_6EpD-_UH8uK-TAamjpr8xg@mail.gmail.com>

On Thu, Apr 28, 2011 at 3:57 AM, Nick Coghlan <ncoghlan at gmail.com> wrote:
..
>> It is an interesting question of what "sane invariants" are.  Why do you
>> consider the invariants that you listed essential while, say,
>>
>> if c1 == c2:
>>    assert all(x == y for x,y in zip(c1, c2))
>>
>> optional?
>
> Because this assertion is an assertion about the behaviour of
> comparisons that violates IEEE754, while the assertions I list are all
> assertions about the behaviour of containers that can be made true
> *regardless* of IEEE754 by checking identity explicitly.
>

AFAIK, IEEE754 says nothing about comparison of containers, so my
invariant cannot violate it.  What you probably wanted to say is that
my invariant cannot be achieved in the presence of IEEE754 conforming
floats, but this observation by itself does not make my invariant less
important than yours.  It just makes yours easier to maintain.

> The correct assertion under Python's current container semantics is:
>
>   if list(c1) == list(c2):  # Make ordering assumption explicit
>     assert all(x is y or x == y for x,y in zip(c1, c2))  # Enforce reflexivity
>

Being correct is different from being important.  What practical
applications of lists containing NaNs do this and your other
invariants enable?  I think even with these invariants in place one
should either filter out NaNs from their lists or replace them with
None before applying container operations.

From tjreedy at udel.edu  Thu Apr 28 10:34:35 2011
From: tjreedy at udel.edu (Terry Reedy)
Date: Thu, 28 Apr 2011 04:34:35 -0400
Subject: [Python-Dev] the role of assert in the standard library ?
In-Reply-To: <BANLkTimtoxmyNg9sn7e27soPBdxQMZ-q2Q@mail.gmail.com>
References: <BANLkTimtoxmyNg9sn7e27soPBdxQMZ-q2Q@mail.gmail.com>
Message-ID: <ipb8qr$6jl$1@dough.gmane.org>

On 4/28/2011 3:54 AM, Tarek Ziadé wrote:
> Hello
>
> I removed some assert calls in distutils some time ago because the
> package was not behaving correctly when people were using Python with
> the --optimize flag. In other words, assert became a full part of the
> code logic and removing them via -O was changing the behavior.
>
> In my opinion assert should be avoided completely anywhere else than
> in the tests. If this is a wrong statement, please let me know why :)

My understanding is that assert can be used in production code but only 
to catch logic errors by testing supposed invariants or postconditions. 
It should not be used to test usage errors, including preconditions. In 
other words, assert presence or absence should not affect behavior 
unless the code has a bug.

> So, I grepped the stdlib for assert calls, and I found 177 of
> them; many of them make Python act differently depending on
> the -O flag.
>
> Here's an example with a randomly picked assert in the threading module:

This, to me is wrong:

    def __init__(self, group=None, target=None, name=None,
                  args=(), kwargs=None, verbose=None):
         assert group is None, "group argument must be None for now"

That catches a usage error and should raise a ValueError.
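
Something like this, perhaps (just a sketch, not a tested patch):

    def __init__(self, group=None, target=None, name=None,
                 args=(), kwargs=None, verbose=None):
        if group is not None:
            raise ValueError("group argument must be None for now")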

This

     def _wait(self, timeout):
         if not self._cond.wait_for(lambda : self._state != 0, timeout):
             #timed out.  Break the barrier
             self._break()
             raise BrokenBarrierError
         if self._state < 0:
             raise BrokenBarrierError
         assert self._state == 1

appears to be, or should be, a test of a postcondition that should 
*always* be true regardless of usage.


-- 
Terry Jan Reedy



From v+python at g.nevcal.com  Thu Apr 28 10:49:22 2011
From: v+python at g.nevcal.com (Glenn Linderman)
Date: Thu, 28 Apr 2011 01:49:22 -0700
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <BANLkTikGVfox3dXkO7B5f5iQbX5L8ypNgw@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>	<87d3k79jvt.fsf@uwakimon.sk.tsukuba.ac.jp>	<BANLkTi=AusPRDsf2zKDGteZ5dGxs0EEuXw@mail.gmail.com>	<4DB90748.4030501@g.nevcal.com>	<BANLkTi=eAug-2n+MsQvSpaet5PM4NQDHSg@mail.gmail.com>	<4DB916DE.1050302@g.nevcal.com>
	<BANLkTikGVfox3dXkO7B5f5iQbX5L8ypNgw@mail.gmail.com>
Message-ID: <4DB92A12.8000206@g.nevcal.com>

On 4/28/2011 12:32 AM, Nick Coghlan wrote:
> On Thu, Apr 28, 2011 at 5:27 PM, Glenn Linderman<v+python at g.nevcal.com>  wrote:
>> Without having read the original articulations by Raymond or any discussions
>> of the pros and cons,
> In my first post to this thread,  I pointed out the bug tracker item
> (http://bugs.python.org/issue4296) that included the discussion of
> restoring this behaviour to the 3.x branch, after it was inadvertently
> removed.

Sure.  I had read that.  It was mostly discussing it from a backward 
compatibility perspective, although it mentioned some invariants as 
well, etc.

But mentioning the invariants is different from reading the discussion about 
the pros and cons of such, or what reasoning led to wanting them to be 
invariants.  Raymond does make a comment about it being necessary for correctly 
reasoning about programs, but that is just a tautological statement 
based on previous agreement, rather than being the discussion itself, 
which must have happened significantly earlier.

One of your replies to Alexander seems to say the same thing I was 
saying, though....

On 4/28/2011 12:57 AM, Nick Coghlan wrote:
>> On Thu, Apr 28, 2011 at 5:30 PM, Alexander Belopolsky
>> <alexander.belopolsky at gmail.com>  wrote:
>> Can you give examples of algorithms that would break if one of your
>> >  invariants is violated, but would still work if the data contains
>> >  NaNs?
> Sure, anything that cares more about objects than it does about
> values. The invariants are about making containers behave like
> containers as far as possible, even in the face of recalcitrant types
> like IEEE754 floating point.

That reinforces the idea that the discussion about containers was to try 
to make them like containers in pre-NaN languages such as Eiffel, rather 
than in post-NaN languages such as SQL.  It is not that one cannot 
reason about containers in either case, but rather that one cannot 
borrow all the reasoning from pre-NaN concepts and apply it to post-NaN 
concepts.  So if one's experience is with pre-NaN container concepts, 
one pushes that philosophy and reasoning instead of embracing and 
extending post-NaN concepts.  That's not all bad, except when the 
documentation says one thing and the implementation does something 
else.  Your comment in that same message "we can contain the damage to 
some degree" speaks to that philosophy.  Based on my current limited 
knowledge of Python internals, and available time to pursue figuring out 
whether the compatibility issues would preclude extending Python 
containers to embrace post-NaN concepts, I'll probably just learn your 
list of invariants, and just be aware that if I need a post-NaN 
container, I'll have to implement it myself.  I suspect doing sequences 
would be quite straightforward, other containers less so, unless the 
application of concern is sufficiently value-based to permit the trick 
of creating a new NaN each time it is inserted into a different container.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110428/cfd5bfeb/attachment.html>

From marks at dcs.gla.ac.uk  Thu Apr 28 10:40:20 2011
From: marks at dcs.gla.ac.uk (Mark Shannon)
Date: Thu, 28 Apr 2011 09:40:20 +0100
Subject: [Python-Dev] Not-a-Number (was PyObject_RichCompareBool identity
	shortcut)
In-Reply-To: <BANLkTikGVfox3dXkO7B5f5iQbX5L8ypNgw@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>	<87d3k79jvt.fsf@uwakimon.sk.tsukuba.ac.jp>	<BANLkTi=AusPRDsf2zKDGteZ5dGxs0EEuXw@mail.gmail.com>	<4DB90748.4030501@g.nevcal.com>	<BANLkTi=eAug-2n+MsQvSpaet5PM4NQDHSg@mail.gmail.com>	<4DB916DE.1050302@g.nevcal.com>
	<BANLkTikGVfox3dXkO7B5f5iQbX5L8ypNgw@mail.gmail.com>
Message-ID: <4DB927F4.3040206@dcs.gla.ac.uk>

Related to the discussion on "Not a Number", can I point out a few things 
that have not been explicitly addressed so far.

The IEEE standard is about hardware and bit patterns, rather than types 
and values, so it may not be entirely appropriate for a high-level language
like Python.

NaN is *not* a number (the clue is in the name).
Python treats it as if it were a number:

 >>> import numbers
 >>> nan = float('nan')
 >>> isinstance(nan, numbers.Number)
True

Can be read as "'Not a Number' is a Number" ;)

NaN does not have to be a float or a Decimal.
Perhaps it should have its own class.
The default comparisons will then work as expected for collections.
(No doubt, making NaN a new class will cause a whole new set of problems)
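
A very rough sketch of what I mean (ignoring the arithmetic side and the
interaction with real floats entirely):

class NaN:
    """Stand-alone NaN: not a float, not a Decimal.

    With no __eq__ defined, the default identity-based equality applies,
    so containers behave predictably.
    """
    def __repr__(self):
        return 'NaN'

    def __add__(self, other):
        return self          # operations involving NaN yield NaN
    __radd__ = __sub__ = __rsub__ = __mul__ = __rmul__ = __add__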

As pointed out by Meyer:
NaN == NaN is False
is no more logical than
NaN != NaN is False

Although both NaN == NaN and NaN != NaN could arguably be a "maybe" 
value, the all-important reflexivity (x == x is True) is effectively 
part of the language.
All collections rely on it and Python wouldn't be much use without 
dicts, tuples and lists.

To summarise:

NaN is required so that floating point operations on arrays and lists
do not raise unwanted exceptions.
NaN is Not a Number (therefore should be neither a float nor a Decimal).
Making it a new class would solve some of the problems discussed,
but would create new problems instead.
Correct behaviour of collections is more important than IEEE conformance
of NaN comparisons.

Mark.

From greg.ewing at canterbury.ac.nz  Thu Apr 28 11:10:04 2011
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Thu, 28 Apr 2011 21:10:04 +1200
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <BANLkTik8gRUPt2jxSkbEy9GVo1nzdFT0dg@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>
	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>
	<87d3k79jvt.fsf@uwakimon.sk.tsukuba.ac.jp>
	<BANLkTi=AusPRDsf2zKDGteZ5dGxs0EEuXw@mail.gmail.com>
	<4DB90748.4030501@g.nevcal.com>
	<BANLkTi=eAug-2n+MsQvSpaet5PM4NQDHSg@mail.gmail.com>
	<BANLkTikdix72o=46aOrr1Dh-WjXnFZ4auQ@mail.gmail.com>
	<BANLkTik8gRUPt2jxSkbEy9GVo1nzdFT0dg@mail.gmail.com>
Message-ID: <4DB92EEC.9030303@canterbury.ac.nz>

Nick Coghlan wrote:

> Because this assertion is an assertion about the behaviour of
> comparisons that violates IEEE754, while the assertions I list are all
> assertions about the behaviour of containers that can be made true
> *regardless* of IEEE754 by checking identity explicitly.

Aren't you making something of a circular argument here?
You're saying that non-reflexive comparisons are okay because
they don't interfere with certain critical invariants. But
you're defining those invariants as the ones that don't
happen to conflict with non-reflexive comparisons!

-- 
Greg

From greg.ewing at canterbury.ac.nz  Thu Apr 28 11:17:50 2011
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Thu, 28 Apr 2011 21:17:50 +1200
Subject: [Python-Dev] Not-a-Number (was PyObject_RichCompareBool
 identity shortcut)
In-Reply-To: <4DB927F4.3040206@dcs.gla.ac.uk>
References: <4DB7E3EA.3030208@avl.com>
	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>
	<87d3k79jvt.fsf@uwakimon.sk.tsukuba.ac.jp>
	<BANLkTi=AusPRDsf2zKDGteZ5dGxs0EEuXw@mail.gmail.com>
	<4DB90748.4030501@g.nevcal.com>
	<BANLkTi=eAug-2n+MsQvSpaet5PM4NQDHSg@mail.gmail.com>
	<4DB916DE.1050302@g.nevcal.com>
	<BANLkTikGVfox3dXkO7B5f5iQbX5L8ypNgw@mail.gmail.com>
	<4DB927F4.3040206@dcs.gla.ac.uk>
Message-ID: <4DB930BE.4070805@canterbury.ac.nz>

Mark Shannon wrote:

> NaN does not have to be a float or a Decimal.
> Perhaps it should have its own class.

Perhaps, but that wouldn't solve anything on its own. If
this new class compares reflexively, then it still violates
IEE754. Conversely, existing NaNs could be made to compare
reflexively without making them a new class.

-- 
Greg

From ncoghlan at gmail.com  Thu Apr 28 12:11:16 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 28 Apr 2011 20:11:16 +1000
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <BANLkTim92qa_6EpD-_UH8uK-TAamjpr8xg@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>
	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>
	<87d3k79jvt.fsf@uwakimon.sk.tsukuba.ac.jp>
	<BANLkTi=AusPRDsf2zKDGteZ5dGxs0EEuXw@mail.gmail.com>
	<4DB90748.4030501@g.nevcal.com>
	<BANLkTi=eAug-2n+MsQvSpaet5PM4NQDHSg@mail.gmail.com>
	<BANLkTikdix72o=46aOrr1Dh-WjXnFZ4auQ@mail.gmail.com>
	<BANLkTik8gRUPt2jxSkbEy9GVo1nzdFT0dg@mail.gmail.com>
	<BANLkTim92qa_6EpD-_UH8uK-TAamjpr8xg@mail.gmail.com>
Message-ID: <BANLkTi=jHZmBVtih1eEyNyduTuoug87qQg@mail.gmail.com>

On Thu, Apr 28, 2011 at 6:30 PM, Alexander Belopolsky
<alexander.belopolsky at gmail.com> wrote:
> On Thu, Apr 28, 2011 at 3:57 AM, Nick Coghlan <ncoghlan at gmail.com> wrote:
> ..
>>> It is an interesting question of what "sane invariants" are.  Why do you
>>> consider the invariants that you listed essential while, say,
>>>
>>> if c1 == c2:
>>>    assert all(x == y for x,y in zip(c1, c2))
>>>
>>> optional?
>>
>> Because this assertion is an assertion about the behaviour of
>> comparisons that violates IEEE754, while the assertions I list are all
>> assertions about the behaviour of containers that can be made true
>> *regardless* of IEEE754 by checking identity explicitly.
>>
>
> AFAIK, IEEE754 says nothing about comparison of containers, so my
> invariant cannot violate it.  What you probably wanted to say is that
> my invariant cannot be achieved in the presence of IEEE754 conforming
> floats, but this observation by itself does not make my invariant less
> important than yours.  It just makes yours easier to maintain.

No, I meant what I said. Your assertion includes a direct comparison
between values (the "x == y" part) which means that IEEE754 has a
bearing on whether or not it is a valid assertion. Every single one of
my stated invariants consists solely of relationships between
containers, or between a container and its contents. This keeps them
all out of the domain of IEEE754 since the *container implementers*
get to decide whether or not to factor object identity into the
management of the container contents.

The core containment invariant is really only this one:

    for x in c:
        assert x in c

That is, if we iterate over a container, all entries returned should
be in the container. Hopefully it is non-controversial that this is a
sane and reasonable invariant for a container *user* to expect.

The comparison invariants follow from the definition of set equivalence as:

  set1 == set2 iff all(x in set2 for x in set1) and all(y in set1 for y in set2)

Again, notice that there is no comparison of items here - merely a
consideration of the way items relate to containers.
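
A concrete illustration with the current CPython set type (membership
checks identity before equality, so this holds even with a NaN element):

    nan = float('nan')
    s1 = {nan}
    s2 = {nan}                        # the same NaN object in both sets
    assert nan in s1 and nan in s2
    assert s1 == s2 and s2 == s1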

The rationale behind the count() and index() assertions is harder to
define in implementation neutral terms, but their behaviour does
follow naturally from the internal enforcement of reflexivity needed
to guarantee that core invariant.

In mathematics, this is all quite straightforward and
non-controversial, since it can be taken for granted that equality is
reflexive (as it's part of the definition of what equality *means* -
equivalence relations *are* relations that are symmetric, transitive
and reflexive. Lose any one of those three properties and it isn't an
equivalence relation any more).

However, when we confront the practical reality of IEEE754 floating
point values and the lack of reflexivity in the presence of NaN, we're
faced with a choice of (at least) 4 alternatives:

1. Deny it. Say equality is reflexive at the language level, and we
don't care that it makes it impossible to fully implement IEEE754
semantics. This is what Eiffel does, and if you don't care about
interoperability and the possibility of algorithmic equivalence with
hardware implementations, it's probably not a bad idea. After all, why
discard centuries of mathematical experience based on a decision that
the IEEE754 committee can't clearly recall the rationale for, and
didn't clearly document?

2. Tolerate it, but attempt to confine the breakage of mathematical
guarantees to the arithmetic operations actually covered by the
relevant standards. This is what CPython currently does by enforcing
the container invariants at an implementation level, and, as I think
it's a good way to handle the situation, this is what I am advocating
lifting up to the language level through appropriate updates to the
library and language reference. (Note that even changing the behaviour
of float() leaves Python in this situation, since third party types
will still be free to follow IEEE754. Given that, it seems relatively
pointless to change the behaviour of builtin floats after all the
effort that has gone into bringing them ever closer to IEEE754).

3. Signal it. We already do this in some cases (e.g. for
ZeroDivisionError), and I'm personally quite happy with the idea of
raising ValueError in other cases, such as when attempting to perform
ordering comparisons on NaN values.

4. Embrace it. Promote NaN to a language level construct, define
semantics allowing it to propagate through assorted comparison and
other operations (including short-circuiting logic operators) without
being coerced to True as it is now.

Documenting the status quo is the *only* necessary step in all of this
(and Raymond has already adopted the relevant tracker issue). There
are tweaks to the current semantics that may be useful (specifically
ValueError when attempting to order NaN), but changing the meaning of
equality for floats probably isn't one of them (since that only fixes
one type, while fixing the affected algorithms fixes *all* types).

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From ncoghlan at gmail.com  Thu Apr 28 12:27:21 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 28 Apr 2011 20:27:21 +1000
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <4DB92EEC.9030303@canterbury.ac.nz>
References: <4DB7E3EA.3030208@avl.com>
	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>
	<87d3k79jvt.fsf@uwakimon.sk.tsukuba.ac.jp>
	<BANLkTi=AusPRDsf2zKDGteZ5dGxs0EEuXw@mail.gmail.com>
	<4DB90748.4030501@g.nevcal.com>
	<BANLkTi=eAug-2n+MsQvSpaet5PM4NQDHSg@mail.gmail.com>
	<BANLkTikdix72o=46aOrr1Dh-WjXnFZ4auQ@mail.gmail.com>
	<BANLkTik8gRUPt2jxSkbEy9GVo1nzdFT0dg@mail.gmail.com>
	<4DB92EEC.9030303@canterbury.ac.nz>
Message-ID: <BANLkTi=GP_2+yMDS974OxRF645m+Of6Aeg@mail.gmail.com>

On Thu, Apr 28, 2011 at 7:10 PM, Greg Ewing <greg.ewing at canterbury.ac.nz> wrote:
> Nick Coghlan wrote:
>
>> Because this assertion is an assertion about the behaviour of
>> comparisons that violates IEEE754, while the assertions I list are all
>> assertions about the behaviour of containers that can be made true
>> *regardless* of IEEE754 by checking identity explicitly.
>
> Aren't you making something of a circular argument here?
> You're saying that non-reflexive comparisons are okay because
> they don't interfere with certain critical invariants. But
> you're defining those invariants as the ones that don't
> happen to conflict with non-reflexive comparisons!

No, I'm taking the existence of non-reflexive comparisons as a given
(despite agreeing with Meyer from a theoretical standpoint) because:
1. IEEE754 works that way
2. Even if float() is changed to not work that way, 3rd party types
may still do so
3. Supporting rich comparisons makes it impossible for Python to
enforce reflexivity at the language level (even if we wanted to)

However, as I detailed in my reply to Antoine, the critical container
invariants I cite *don't include* direct object-object comparisons.
Instead, they merely describe how objects relate to containers, and
how containers relate to each other, using only the two basic rules
that objects retrieved from a container should be in that container
and that two sets are equivalent if they are each a subset of the
other.

The question then becomes, how do we reconcile the container
invariants with the existence of non-reflexive definitions of equality
at the type level, and the answer is to officially adopt the approach
already used in the standard container types: enforce reflexive
equality at the container level, so that it doesn't matter that some
types provide a non-reflexive version.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From fuzzyman at voidspace.org.uk  Thu Apr 28 12:27:14 2011
From: fuzzyman at voidspace.org.uk (Michael Foord)
Date: Thu, 28 Apr 2011 11:27:14 +0100
Subject: [Python-Dev] the role of assert in the standard library ?
In-Reply-To: <ipb8qr$6jl$1@dough.gmane.org>
References: <BANLkTimtoxmyNg9sn7e27soPBdxQMZ-q2Q@mail.gmail.com>
	<ipb8qr$6jl$1@dough.gmane.org>
Message-ID: <4DB94102.9020701@voidspace.org.uk>

On 28/04/2011 09:34, Terry Reedy wrote:
> On 4/28/2011 3:54 AM, Tarek Ziadé wrote:
>> Hello
>>
>> I removed some assert calls in distutils some time ago because the
>> package was not behaving correctly when people were using Python with
>> the --optimize flag. In other words, assert became a full part of the
>> code logic and removing them via -O was changing the behavior.
>>
>> In my opinion assert should be avoided completely anywhere else than
>> in the tests. If this is a wrong statement, please let me know why :)
>
> My understanding is that assert can be used in production code but 
> only to catch logic errors by testing supposed invariants or 
> postconditions. It should not be used to test usage errors, including 
> preconditions. In other words, assert presence or absence should not 
> affect behavior unless the code has a bug.

Agreed. We should ideally have buildbots doing test runs with -O and 
-OO. R. David Murray did a lot of work a year ago (or so) to ensure the 
test run passes with -OO, but it easily degrades.

There are a couple of asserts in unittest (for test discovery) but I 
only use them to provide failure messages early. The functionality is 
unchanged (and tests still pass) with -OO.

All the best,

Michael Foord
>
>> So, I grepped the stdlib for assert calls, and I found 177 of
>> them; many of them make Python act differently depending on
>> the -O flag.
>>
>> Here's an example with a randomly picked assert in the threading module:
>
> This, to me is wrong:
>
>    def __init__(self, group=None, target=None, name=None,
>                  args=(), kwargs=None, verbose=None):
>         assert group is None, "group argument must be None for now"
>
> That catches a usage error and should raise a ValueError.
>
> This
>
>     def _wait(self, timeout):
>         if not self._cond.wait_for(lambda : self._state != 0, timeout):
>             #timed out.  Break the barrier
>             self._break()
>             raise BrokenBarrierError
>         if self._state < 0:
>             raise BrokenBarrierError
>         assert self._state == 1
>
> appears to be, or should be, a test of a postcondition that should 
> *always* be true regardless of usage.
>
>


-- 
http://www.voidspace.org.uk/

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing http://www.sqlite.org/different.html


From ncoghlan at gmail.com  Thu Apr 28 12:31:20 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 28 Apr 2011 20:31:20 +1000
Subject: [Python-Dev] Not-a-Number (was PyObject_RichCompareBool
	identity shortcut)
In-Reply-To: <4DB930BE.4070805@canterbury.ac.nz>
References: <4DB7E3EA.3030208@avl.com>
	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>
	<87d3k79jvt.fsf@uwakimon.sk.tsukuba.ac.jp>
	<BANLkTi=AusPRDsf2zKDGteZ5dGxs0EEuXw@mail.gmail.com>
	<4DB90748.4030501@g.nevcal.com>
	<BANLkTi=eAug-2n+MsQvSpaet5PM4NQDHSg@mail.gmail.com>
	<4DB916DE.1050302@g.nevcal.com>
	<BANLkTikGVfox3dXkO7B5f5iQbX5L8ypNgw@mail.gmail.com>
	<4DB927F4.3040206@dcs.gla.ac.uk>
	<4DB930BE.4070805@canterbury.ac.nz>
Message-ID: <BANLkTi=SQhWdU+De09OtOXq86uGfbBTFRw@mail.gmail.com>

On Thu, Apr 28, 2011 at 7:17 PM, Greg Ewing <greg.ewing at canterbury.ac.nz> wrote:
> Mark Shannon wrote:
>
>> NaN does not have to be a float or a Decimal.
>> Perhaps it should have its own class.
>
> Perhaps, but that wouldn't solve anything on its own. If
> this new class compares reflexively, then it still violates
> IEE754. Conversely, existing NaNs could be made to compare
> reflexively without making them a new class.

And 3rd party NaNs can still do whatever the heck they want :)

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From solipsis at pitrou.net  Thu Apr 28 12:33:25 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Thu, 28 Apr 2011 12:33:25 +0200
Subject: [Python-Dev] Socket servers in the test suite
References: <loom.20110427T230704-75@post.gmane.org>
	<BANLkTimqCY02e+iy-OcV4nzZa1BTiC_sOQ@mail.gmail.com>
	<loom.20110428T091649-170@post.gmane.org>
Message-ID: <20110428123325.6b0acf7b@pitrou.net>

On Thu, 28 Apr 2011 07:23:43 +0000 (UTC)
Vinay Sajip <vinay_sajip at yahoo.co.uk> wrote:

> Nick Coghlan <ncoghlan <at> gmail.com> writes:
> 
> > If you poke around in the test directory a bit, you may find there is
> > already some code along these lines in other tests (e.g. I'm pretty
> > sure the urllib tests already fire up a local server). Starting down
> > the path of standardisation of that test functionality would be good.
> 
> I have poked around, and each test module pretty much does its own thing.
> Perhaps that's unavoidable; I'll try and see if there are usable common patterns
> in the specific instances.
>  
> > For larger components like this, it's also reasonable to add a
> > dedicated helper module rather than using test.support directly. I
> > started (and Antoine improved) something along those lines with the
> > test.script_helper module for running Python subprocesses and checking
> > their output, although it lacks documentation and there are lots of
> > older tests that still use subprocess directly.
> 
> Yes, I thought perhaps it was too specialised for adding to test.support itself.

You can also take a look at Lib/test/ssl_servers.py.

Regards

Antoine.



From solipsis at pitrou.net  Thu Apr 28 12:34:39 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Thu, 28 Apr 2011 12:34:39 +0200
Subject: [Python-Dev] Socket servers in the test suite
References: <loom.20110427T230704-75@post.gmane.org>
	<BANLkTimqCY02e+iy-OcV4nzZa1BTiC_sOQ@mail.gmail.com>
	<loom.20110428T091649-170@post.gmane.org>
Message-ID: <20110428123439.75fbf38e@pitrou.net>

On Thu, 28 Apr 2011 07:23:43 +0000 (UTC)
Vinay Sajip <vinay_sajip at yahoo.co.uk> wrote:

> Nick Coghlan <ncoghlan <at> gmail.com> writes:
> 
> > If you poke around in the test directory a bit, you may find there is
> > already some code along these lines in other tests (e.g. I'm pretty
> > sure the urllib tests already fire up a local server). Starting down
> > the path of standardisation of that test functionality would be good.
> 
> I have poked around, and each test module pretty much does its own thing.
> Perhaps that's unavoidable; I'll try and see if there are usable common patterns
> in the specific instances.
>  
> > For larger components like this, it's also reasonable to add a
> > dedicated helper module rather than using test.support directly. I
> > started (and Antoine improved) something along those lines with the
> > test.script_helper module for running Python subprocesses and checking
> > their output, although it lacks documentation and there are lots of
> > older tests that still use subprocess directly.
> 
> Yes, I thought perhaps it was too specialised for adding to test.support itself.

You can take a look at Lib/test/ssl_servers.py.

Regards

Antoine.



From ziade.tarek at gmail.com  Thu Apr 28 14:09:17 2011
From: ziade.tarek at gmail.com (=?ISO-8859-1?Q?Tarek_Ziad=E9?=)
Date: Thu, 28 Apr 2011 14:09:17 +0200
Subject: [Python-Dev] the role of assert in the standard library ?
In-Reply-To: <4DB94102.9020701@voidspace.org.uk>
References: <BANLkTimtoxmyNg9sn7e27soPBdxQMZ-q2Q@mail.gmail.com>
	<ipb8qr$6jl$1@dough.gmane.org> <4DB94102.9020701@voidspace.org.uk>
Message-ID: <BANLkTi=bMcEj9_O9n9bNej_zn_8ZA26+kA@mail.gmail.com>

On Thu, Apr 28, 2011 at 12:27 PM, Michael Foord
<fuzzyman at voidspace.org.uk> wrote:
> On 28/04/2011 09:34, Terry Reedy wrote:
>>
>> On 4/28/2011 3:54 AM, Tarek Ziadé wrote:
>>>
>>> Hello
>>>
>>> I removed some assert calls in distutils some time ago because the
>>> package was not behaving correctly when people were using Python with
>>> the --optimize flag. In other words, assert became a full part of the
>>> code logic and removing them via -O was changing the behavior.
>>>
>>> In my opinion assert should be avoided completely anywhere else than
>>> in the tests. If this is a wrong statement, please let me know why :)
>>
>> My understanding is that assert can be used in production code but only to
>> catch logic errors by testing supposed invariants or postconditions. It
>> should not be used to test usage errors, including preconditions. In other
>> words, assert presence or absence should not affect behavior unless the code
>> has a bug.
>
> Agreed. We should ideally have buildbots doing test runs with -O and -OO. R.
> David Murray did a lot of work a year ago (or so) to ensure the test run
> passes with -OO but it easily degrades..
>
> There are a couple of asserts in unittest (for test discovery) but I only
> use them to provide failure messages early. The functionality is unchanged
> (and tests still pass) with -OO.
>
> All the best,

I'll try to add a useful report on "bad asserts" in the bug tracker.

I am replying again to this on Python-ideas because I want to debate
on assert :)

Cheers
Tarek

From g.rodola at gmail.com  Thu Apr 28 14:13:36 2011
From: g.rodola at gmail.com (=?ISO-8859-1?Q?Giampaolo_Rodol=E0?=)
Date: Thu, 28 Apr 2011 14:13:36 +0200
Subject: [Python-Dev] Socket servers in the test suite
In-Reply-To: <loom.20110427T230704-75@post.gmane.org>
References: <loom.20110427T230704-75@post.gmane.org>
Message-ID: <BANLkTi=W8RmmES7NH9waP+=VrxOHzzf4ow@mail.gmail.com>

2011/4/27 Vinay Sajip <vinay_sajip at yahoo.co.uk>:
> I've been recently trying to improve the test coverage for the logging package,
> and have got to a not unreasonable point:
>
> logging/__init__.py 99% (96%)
> logging/config.py 89% (85%)
> logging/handlers.py 60% (54%)
>
> where the figures in parentheses include branch coverage measurements.
>
> I'm at the point where to appreciably increase coverage, I'd need to write some
> test servers to exercise client code in SocketHandler, DatagramHandler and
> HTTPHandler.
>
> I notice there are no utility classes in test.support to help with this kind of
> thing - would there be any mileage in adding such things? Of course I could add
> test server code just to test_logging (which already contains some socket server
> code to exercise the configuration functionality), but rolling a test server
> involves boilerplate such as using a custom RequestHandler-derived class for
> each application. I had in mind a more streamlined approach where you can just
> pass a single callable to a server to handle requests, e.g. as outlined in
>
> https://gist.github.com/945157
>
> I'd be grateful for any comments about adding such functionality to e.g.
> test.support.
>
> Regards,
>
> Vinay Sajip
>

I agree having a standard server framework for tests would be useful,
because it's something which appears quite often (e.g. when writing
functional tests).
See for example:
http://hg.python.org/cpython/file/b452559eee71/Lib/test/test_os.py#l1316
http://hg.python.org/cpython/file/b452559eee71/Lib/test/test_ftplib.py#l211
http://hg.python.org/cpython/file/b452559eee71/Lib/test/test_ssl.py#l844
http://hg.python.org/cpython/file/b452559eee71/Lib/test/test_smtpd.py
http://hg.python.org/cpython/file/b452559eee71/Lib/test/test_poplib.py#l115

Regards

--- Giampaolo
http://code.google.com/p/pyftpdlib/
http://code.google.com/p/psutil/

From solipsis at pitrou.net  Thu Apr 28 14:38:30 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Thu, 28 Apr 2011 14:38:30 +0200
Subject: [Python-Dev] the role of assert in the standard library ?
References: <BANLkTimtoxmyNg9sn7e27soPBdxQMZ-q2Q@mail.gmail.com>
Message-ID: <20110428143830.3a9848ad@pitrou.net>

On Thu, 28 Apr 2011 09:54:23 +0200
Tarek Ziadé <ziade.tarek at gmail.com> wrote:
> 
> I have seen some other places where things would simply break with -O.
> 
> Am I right in thinking we should do a pass on those and either remove them or
> turn them into exceptions that are raised with -O as well?

Agreed. Argument checking should not depend on the -O flag.

Regards

Antoine.



From rob.cliffe at btinternet.com  Thu Apr 28 15:59:07 2011
From: rob.cliffe at btinternet.com (Rob Cliffe)
Date: Thu, 28 Apr 2011 14:59:07 +0100
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>
	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
Message-ID: <4DB972AB.6090302@btinternet.com>

I am not a specialist in this area (although I call myself a 
mathematician).  But they say that sometimes the outsider sees most of 
the game, or more likely that sometimes the idiot's point of view is useful.

To me the idea of non-reflexive equality (an object not being equal to 
itself) is abhorrent.  Nothing is more likely to put off new Python 
users if they happen to run into it.  And I bet even very experienced 
programmers will be tripped up by it a good proportion of the time they 
hit it.
Basically it's deferring to a wart, of dubious value, in floating point 
calculations and/or the IEEE754 standard, and allowing it to become a 
monstrous carbuncle disfiguring the whole language.
I think implementations of equal/not-equal which make equality 
non-reflexive (and thus break "identity implies equality") should be 
considered broken.


On 27/04/2011 15:53, Guido van Rossum wrote:
> Maybe we should just call off the odd NaN comparison behavior?
Right on, Guido.  (A pity that a lot of people don't seem to be listening.)


On 27/04/2011 17:05, Isaac Morland wrote:
> Python could also provide IEEE-754 equality as a function (perhaps in 
> "math"), something like:
>
> def ieee_equal (a, b):
>     return a == b and not isnan (a) and not isnan (b)
>
Quite.  If atypical behaviour is required in specialised areas, it can 
be coded for.  (Same goes for specialised functions for comparing lists, 
dictionaries etc. in non-standard ways.  Forced explicit is better than 
well-hidden implicit.)
> Of course, the definition of math.isnan cannot then be by checking its 
> argument by comparison with itself
Damn right - a really dirty trick if ever I saw one (not even proof 
against the introduction of new objects which also have the same 
perverse non-reflexive equality).
> - it would have to check the appropriate bits of the float representation.
So it should.


On 28/04/2011 11:11, Nick Coghlan wrote:
> After all, why discard centuries of mathematical experience based on a 
> decision that the IEEE754 committee can't clearly recall the rationale 
> for, and didn't clearly document?
Sorry Nick, I have quoted you out of context - you WEREN'T arguing for 
the same point of view.  But you express it much better than I could.


It occurred to me that the very length of this thread [so far!] 
perfectly illustrates how controversial non-reflexive "equality"  is.  
(BTW I have read, if not understood, every post to this thread and will 
continue to read them all.)
And then I came across:
On 28/04/2011 09:43, Alexander Belopolsky wrote:
> If nothing else, annual reoccurrence of long threads on this topic is 
> a reason enough to reconsider which standard to follow.
Aha, this is is a regular, is it?  'Nuff said!

Best wishes
Rob Cliffe

From merwok at netwok.org  Thu Apr 28 16:12:11 2011
From: merwok at netwok.org (=?UTF-8?B?w4lyaWMgQXJhdWpv?=)
Date: Thu, 28 Apr 2011 16:12:11 +0200
Subject: [Python-Dev] Simple XML-RPC server over SSL/TLS
In-Reply-To: <BANLkTinDGtWZsDPZ37U5_zqw9Aio-CpeXw@mail.gmail.com>
References: <BANLkTinDGtWZsDPZ37U5_zqw9Aio-CpeXw@mail.gmail.com>
Message-ID: <4DB975BB.1040402@netwok.org>

Hi,

> But what I would like to know, is if is there any reason why XML-RPC can't
> optionally work over TLS/SSL using Python's ssl module. I'll create a
> ticket, and send a patch, but I was wondering if it was a reason why this
> was not implemented.

I think there's no deeper reason than nobody thought about it.  The ssl
module is new in 2.6 and 3.x, xmlrpc is an older module for an old
technology *cough*, so feel free to open a bug report.  Patch guidelines
are found at http://docs.python.org/devguide  Thanks in advance!

Cheers

From merwok at netwok.org  Thu Apr 28 16:18:15 2011
From: merwok at netwok.org (=?UTF-8?B?w4lyaWMgQXJhdWpv?=)
Date: Thu, 28 Apr 2011 16:18:15 +0200
Subject: [Python-Dev] Socket servers in the test suite
In-Reply-To: <BANLkTimqCY02e+iy-OcV4nzZa1BTiC_sOQ@mail.gmail.com>
References: <loom.20110427T230704-75@post.gmane.org>
	<BANLkTimqCY02e+iy-OcV4nzZa1BTiC_sOQ@mail.gmail.com>
Message-ID: <4DB97727.2070500@netwok.org>

Hi,

>> I'm at the point where to appreciably increase coverage, I'd need to write some
>> test servers to exercise client code in SocketHandler, DatagramHandler and
>> HTTPHandler.
>>
>> I notice there are no utility classes in test.support to help with this kind of
>> thing - would there be any mileage in adding such things? Of course I could add
>> test server code just to test_logging (which already contains some socket server
>> code to exercise the configuration functionality), but rolling a test server
>> involves boilerplate such as using a custom RequestHandler-derived class for
>> each application. I had in mind a more streamlined approach where you can just
>> pass a single callable to a server to handle requests,

A generic test helper to run a server for tests would be a great
addition.  In distutils/packaging (due to be merged into 3.3 Really Soon
Now™), we also have a server, to test PyPI-related functionality.  It's
a tested module providing a server class that runs in a thread, a
SimpleHTTPRequest handler able to serve static files and reply to
XML-RPC requests, and decorators to start and stop the server for one
test method instead of a whole TestCase instance.  I'm sure some common
ground can be found and all these testing helpers factored out in one
module.
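
Roughly the kind of helper I have in mind (only a sketch, all names invented):

    import threading
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    class BackgroundHTTPServer(threading.Thread):
        """Serve HTTP on a random free port for the duration of a test."""

        def __init__(self, handler=SimpleHTTPRequestHandler):
            super().__init__()
            self.httpd = HTTPServer(('localhost', 0), handler)  # port 0: any free port
            self.daemon = True

        @property
        def address(self):
            return self.httpd.server_address   # the (host, port) actually bound

        def run(self):
            self.httpd.serve_forever()

        def stop(self):
            self.httpd.shutdown()
            self.httpd.server_close()

A test would create and start() one of these in setUp() and call stop() and
join() in tearDown().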

> For larger components like this, it's also reasonable to add a
> dedicated helper module rather than using test.support directly. I
> started (and Antoine improved) something along those lines with the
> test.script_helper module for running Python subprocesses and checking
> their output,

+1, script_helper is great.

Cheers

From merwok at netwok.org  Thu Apr 28 16:20:06 2011
From: merwok at netwok.org (=?UTF-8?B?w4lyaWMgQXJhdWpv?=)
Date: Thu, 28 Apr 2011 16:20:06 +0200
Subject: [Python-Dev] [Python-checkins] cpython (2.7): Fix closes
 issue10761: tarfile.extractall failure when symlinked files are
In-Reply-To: <E1QFLhw-0004TQ-Qe@dinsdale.python.org>
References: <E1QFLhw-0004TQ-Qe@dinsdale.python.org>
Message-ID: <4DB97796.8010204@netwok.org>

Hi,

I'm still educating myself about concurrency and race conditions, so I
hope my naïve question won't be just a waste of time.  Here it is:

> http://hg.python.org/cpython/rev/0c8bc3a0130a
> user:        Senthil Kumaran <orsenthil at gmail.com>
> summary:
>   Fix closes  issue10761: tarfile.extractall failure  when symlinked files are present.

> diff --git a/Lib/tarfile.py b/Lib/tarfile.py
> --- a/Lib/tarfile.py
> +++ b/Lib/tarfile.py
> @@ -2239,6 +2239,8 @@
>          if hasattr(os, "symlink") and hasattr(os, "link"):
>              # For systems that support symbolic and hard links.
>              if tarinfo.issym():
> +                if os.path.exists(targetpath):
> +                    os.unlink(targetpath)

Is there a race condition here?
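
For comparison, the EAFP spelling that avoids the exists()/unlink() window
would be something like this (untested sketch):

    import errno
    try:
        os.unlink(targetpath)
    except OSError as e:
        if e.errno != errno.ENOENT:
            raise

but maybe that window is not a practical concern for tarfile.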

Thanks
Regards

From merwok at netwok.org  Thu Apr 28 16:22:02 2011
From: merwok at netwok.org (=?UTF-8?B?w4lyaWMgQXJhdWpv?=)
Date: Thu, 28 Apr 2011 16:22:02 +0200
Subject: [Python-Dev] [Python-checkins] cpython (3.2): Closes #11858:
 configparser.ExtendedInterpolation and	section case.
In-Reply-To: <E1QFN56-0003S6-W1@dinsdale.python.org>
References: <E1QFN56-0003S6-W1@dinsdale.python.org>
Message-ID: <4DB9780A.30204@netwok.org>

Hi,

> http://hg.python.org/cpython/rev/57c076ab4bbd
> user:        Łukasz Langa <lukasz at langa.pl>

> --- a/Lib/test/test_cfgparser.py
> +++ b/Lib/test/test_cfgparser.py
> @@ -20,10 +20,16 @@
>      def values(self):
>          return [i[1] for i in self.items()]
>  
> -    def iteritems(self): return iter(self.items())
> -    def iterkeys(self): return iter(self.keys())
> +    def iteritems(self):
> +        return iter(self.items())
> +
> +    def iterkeys(self):
> +        return iter(self.keys())
> +
> +    def itervalues(self):
> +        return iter(self.values())
> +

The dict methods in that subclass could probably be cleaned up.

Regards

From merwok at netwok.org  Thu Apr 28 16:22:46 2011
From: merwok at netwok.org (=?UTF-8?B?w4lyaWMgQXJhdWpv?=)
Date: Thu, 28 Apr 2011 16:22:46 +0200
Subject: [Python-Dev] [Python-checkins] cpython: Refined time test in
	test_logging.
In-Reply-To: <E1QEnuK-0000DF-OV@dinsdale.python.org>
References: <E1QEnuK-0000DF-OV@dinsdale.python.org>
Message-ID: <4DB97836.3090004@netwok.org>

Hi,

> http://hg.python.org/cpython/rev/5185e1d91f3d
> user:        Vinay Sajip <vinay_sajip at yahoo.co.uk>
> summary:
>   Refined time test in test_logging.

> +ZERO = datetime.timedelta(0)
> +
> +class UTC(datetime.tzinfo):
> +    def utcoffset(self, dt):
> +        return ZERO
> +
> +    dst = utcoffset
> +
> +    def tzname(self, dt):
> +        return 'UTC'
> +
> +utc = UTC()

Any reason not to use datetime.timezone.utc here?
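
(For reference, a quick interactive sketch of the stdlib object I have
in mind, assuming Python 3.2:)

>>> import datetime
>>> datetime.timezone.utc.utcoffset(None)
datetime.timedelta(0)
>>> datetime.timezone.utc.tzname(None)
'UTC'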

Regards

From barry at python.org  Thu Apr 28 16:37:33 2011
From: barry at python.org (Barry Warsaw)
Date: Thu, 28 Apr 2011 10:37:33 -0400
Subject: [Python-Dev] the role of assert in the standard library ?
In-Reply-To: <ipb8qr$6jl$1@dough.gmane.org>
References: <BANLkTimtoxmyNg9sn7e27soPBdxQMZ-q2Q@mail.gmail.com>
	<ipb8qr$6jl$1@dough.gmane.org>
Message-ID: <20110428103733.5aefc6e0@neurotica.wooz.org>

On Apr 28, 2011, at 04:34 AM, Terry Reedy wrote:

>On 4/28/2011 3:54 AM, Tarek Ziad? wrote:
>> Hello
>>
>> I removed some assert calls in distutils some time ago because the
>> package was not behaving correctly when people were using Python with
>> the --optimize flag. In other words, assert became a full part of the
>> code logic and removing them via -O was changing the behavior.
>>
>> In my opinion assert should be avoided completely anywhere else than
>> in the tests. If this is a wrong statement, please let me know why :)

>My understanding is that assert can be used in production code but only to
>catch logic errors by testing supposed invariants or postconditions. It
>should not be used to test usage errors, including preconditions. In other
>words, assert presence or absence should not affect behavior unless the code
>has a bug.

I would agree.  Use asserts for "this can't possibly happen <wink>"
conditions.

-Barry

From orsenthil at gmail.com  Thu Apr 28 16:44:50 2011
From: orsenthil at gmail.com (Senthil Kumaran)
Date: Thu, 28 Apr 2011 22:44:50 +0800
Subject: [Python-Dev] [Python-checkins] cpython (2.7): Fix closes
 issue10761: tarfile.extractall failure when symlinked files are
In-Reply-To: <4DB97796.8010204@netwok.org>
References: <E1QFLhw-0004TQ-Qe@dinsdale.python.org>
	<4DB97796.8010204@netwok.org>
Message-ID: <20110428144450.GB2699@kevin>

On Thu, Apr 28, 2011 at 04:20:06PM +0200, Éric Araujo wrote:
> >          if hasattr(os, "symlink") and hasattr(os, "link"):
> >              # For systems that support symbolic and hard links.
> >              if tarinfo.issym():
> > +                if os.path.exists(targetpath):
> > +                    os.unlink(targetpath)
> 
> Is there a race condition here?

The lock to avoid race conditions (if you were thinking along those
lines) would usually be implemented at the higher level code which is
using extractall in threads.

Checking that no one else is accessing the file before unlinking may
not be suitable for the library method and of course, we cannot check
if someone is waiting to act on that file.

-- 
Senthil

From steve at pearwood.info  Thu Apr 28 16:58:08 2011
From: steve at pearwood.info (Steven D'Aprano)
Date: Fri, 29 Apr 2011 00:58:08 +1000
Subject: [Python-Dev] Not-a-Number (was PyObject_RichCompareBool
 identity shortcut)
In-Reply-To: <4DB927F4.3040206@dcs.gla.ac.uk>
References: <4DB7E3EA.3030208@avl.com>	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>	<87d3k79jvt.fsf@uwakimon.sk.tsukuba.ac.jp>	<BANLkTi=AusPRDsf2zKDGteZ5dGxs0EEuXw@mail.gmail.com>	<4DB90748.4030501@g.nevcal.com>	<BANLkTi=eAug-2n+MsQvSpaet5PM4NQDHSg@mail.gmail.com>	<4DB916DE.1050302@g.nevcal.com>	<BANLkTikGVfox3dXkO7B5f5iQbX5L8ypNgw@mail.gmail.com>
	<4DB927F4.3040206@dcs.gla.ac.uk>
Message-ID: <4DB98080.2060903@pearwood.info>

Mark Shannon wrote:
> Related to the discussion on "Not a Number" can I point out a few things 
> that have not been explicitly addressed so far.
> 
> The IEEE standard is about hardware and bit patterns, rather than types 
> and values so may not be entirely appropriate for high-level language
> like Python.

I would argue that the implementation of NANs is irrelevant. If NANs are 
useful in hardware floats -- and I think they are -- then they're just 
as equally useful as objects, or as strings in languages like REXX or 
Hypertalk where all data is stored as strings, or as quantum wave 
functions in some future quantum computer.


> NaN is *not* a number (the clue is in the name).
> Python treats it as if it were a number:
> 
>  >>> import numbers
>  >>> isinstance(nan, numbers.Number)
> True
> 
> Can be read as "'Not a Number' is a Number" ;)

I see your wink, but what do you make of these?

class NotAnObject(object):
     pass

nao = NotAnObject()
assert isinstance(nao, object)

class NotAType(object):
     pass

assert type(NotAType) is type



> NaN does not have to be a float or a Decimal.
> Perhaps it should have its own class.

Others have already pointed out this won't make any difference.

Fundamentally, the problem is that some containers bypass equality tests 
for identity tests. There may be good reasons for that shortcut, but it 
leads to problems with *any* object that does not define equality to be 
reflexive, not just NANs.


 >>> class Null:
...     def __eq__(self, other):
...             return False
...
 >>> null = Null()
 >>> null == null
False
 >>> [null] == [null]
True



> The default comparisons will then work as expected for collections.
> (No doubt, making NaN a new class will cause a whole new set of problems)
> 
> As pointed out by Meyer:
> NaN == NaN is False
> is no more logical than
> NaN != NaN is False

I don't agree with this argument. I think Meyer is completely mistaken 
there. The question of NAN equality is that of a vacuous truth, quite 
similar to the Present King of France:

http://en.wikipedia.org/wiki/Present_King_of_France

Meyer would have us accept that:

     The present King of France is a talking horse

and

     The present King of France is not a talking horse

are equally (pun not intended) valid. No, no they're not. I don't know 
much about who the King of France would be if France had a king, but I 
do know that he wouldn't be a talking horse.

Once you accept that NANs aren't equal to anything else, it becomes a 
matter of *practicality beats purity* to accept that they can't be equal 
to themselves either. A NAN doesn't represent a specific thing. It's a 
signal that your calculation has generated an indefinite, undefined, 
undetermined value. NANs aren't equal to anything. The fact that a NAN 
happens to have an existence as a bit-pattern at some location, or as a 
distinct object, is an implementation detail that is irrelevant. If you 
just happen by some fluke to compare a NAN to "itself", that shouldn't 
change the result of the comparison:

     The present King of France is the current male sovereign who
     rules France

is still false, even if you happen to write it like this:

     The present King of France is the present King of France


This might seem surprising to those who are used to reflexivity. Oh 
well. Just because reflexivity holds for actual things, doesn't mean it 
holds for, er, things that aren't things. NANs are things that aren't 
things.





> Although both NaN == NaN and NaN != NaN could arguably be a "maybe" 
> value, the all important reflexivity (x == x is True)  is effectively 
> part of the language.
> All collections rely on it and Python wouldn't be much use without 
> dicts, tuples and lists.

Perhaps they shouldn't rely on it. Identity tests are an implementation 
detail. But in any case, reflexivity is *not* a guarantee of Python. 
With rich comparisons, you can define __eq__ to do anything you like.





-- 
Steven


From fdrake at acm.org  Thu Apr 28 17:04:25 2011
From: fdrake at acm.org (Fred Drake)
Date: Thu, 28 Apr 2011 11:04:25 -0400
Subject: [Python-Dev] the role of assert in the standard library ?
In-Reply-To: <20110428103733.5aefc6e0@neurotica.wooz.org>
References: <BANLkTimtoxmyNg9sn7e27soPBdxQMZ-q2Q@mail.gmail.com>
	<ipb8qr$6jl$1@dough.gmane.org>
	<20110428103733.5aefc6e0@neurotica.wooz.org>
Message-ID: <BANLkTim1_+QjrR+fyuJWM16tD_A9QXd-=Q@mail.gmail.com>

On Thu, Apr 28, 2011 at 10:37 AM, Barry Warsaw <barry at python.org> wrote:
> I would agree.  Use asserts for "this can't possibly happen <wink>"
> conditions.

Maybe we should rename "assert" to "wink", just to be clear on the usage.  :-)


  -Fred

-- 
Fred L. Drake, Jr.    <fdrake at acm.org>
"Give me the luxuries of life and I will willingly do without the necessities."
   --Frank Lloyd Wright

From skip at pobox.com  Thu Apr 28 17:22:20 2011
From: skip at pobox.com (skip at pobox.com)
Date: Thu, 28 Apr 2011 10:22:20 -0500
Subject: [Python-Dev] the role of assert in the standard library ?
In-Reply-To: <20110428103733.5aefc6e0@neurotica.wooz.org>
References: <BANLkTimtoxmyNg9sn7e27soPBdxQMZ-q2Q@mail.gmail.com>
	<ipb8qr$6jl$1@dough.gmane.org>
	<20110428103733.5aefc6e0@neurotica.wooz.org>
Message-ID: <19897.34348.886773.133607@montanaro.dyndns.org>


    Barry> I would agree.  Use asserts for "this can't possibly happen
    Barry> <wink>" conditions.

Without looking, I suspect that's probably what the author thought he was
doing.

Skip

From barry at python.org  Thu Apr 28 17:26:29 2011
From: barry at python.org (Barry Warsaw)
Date: Thu, 28 Apr 2011 11:26:29 -0400
Subject: [Python-Dev] the role of assert in the standard library ?
In-Reply-To: <19897.34348.886773.133607@montanaro.dyndns.org>
References: <BANLkTimtoxmyNg9sn7e27soPBdxQMZ-q2Q@mail.gmail.com>
	<ipb8qr$6jl$1@dough.gmane.org>
	<20110428103733.5aefc6e0@neurotica.wooz.org>
	<19897.34348.886773.133607@montanaro.dyndns.org>
Message-ID: <20110428112629.7dd26254@neurotica.wooz.org>

On Apr 28, 2011, at 10:22 AM, skip at pobox.com wrote:

>    Barry> I would agree.  Use asserts for "this can't possibly happen
>    Barry> <wink>" conditions.
>
>Without looking, I suspect that's probably what the author thought he was
>doing.

BTW, I think it always helps to have a really good assert message, and/or a
leading comment to explain *why* that condition can't possibly happen.

-Barry

From barry at python.org  Thu Apr 28 17:27:08 2011
From: barry at python.org (Barry Warsaw)
Date: Thu, 28 Apr 2011 11:27:08 -0400
Subject: [Python-Dev] the role of assert in the standard library ?
In-Reply-To: <BANLkTim1_+QjrR+fyuJWM16tD_A9QXd-=Q@mail.gmail.com>
References: <BANLkTimtoxmyNg9sn7e27soPBdxQMZ-q2Q@mail.gmail.com>
	<ipb8qr$6jl$1@dough.gmane.org>
	<20110428103733.5aefc6e0@neurotica.wooz.org>
	<BANLkTim1_+QjrR+fyuJWM16tD_A9QXd-=Q@mail.gmail.com>
Message-ID: <20110428112708.106d9b22@neurotica.wooz.org>

On Apr 28, 2011, at 11:04 AM, Fred Drake wrote:

>On Thu, Apr 28, 2011 at 10:37 AM, Barry Warsaw <barry at python.org> wrote:
>> I would agree.  Use asserts for "this can't possibly happen <wink>"
>> conditions.
>
>Maybe we should rename "assert" to "wink", just to be clear on the usage.  :-)

Off to python-ideas for you! <wink>

-Barry

From solipsis at pitrou.net  Thu Apr 28 17:32:14 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Thu, 28 Apr 2011 17:32:14 +0200
Subject: [Python-Dev] [Python-checkins] cpython (2.7): Fix closes
 issue10761: tarfile.extractall failure when symlinked files are
References: <E1QFLhw-0004TQ-Qe@dinsdale.python.org>
	<4DB97796.8010204@netwok.org> <20110428144450.GB2699@kevin>
Message-ID: <20110428173214.19fe3445@pitrou.net>

On Thu, 28 Apr 2011 22:44:50 +0800
Senthil Kumaran <orsenthil at gmail.com> wrote:
> On Thu, Apr 28, 2011 at 04:20:06PM +0200, Éric Araujo wrote:
> > >          if hasattr(os, "symlink") and hasattr(os, "link"):
> > >              # For systems that support symbolic and hard links.
> > >              if tarinfo.issym():
> > > +                if os.path.exists(targetpath):
> > > +                    os.unlink(targetpath)
> > 
> > Is there a race condition here?
> 
> The lock to avoid race conditions (if you were thinking along those
> lines) would usually be implemented at the higher level code which is
> using extractall in threads.

A lock would only protect only against multi-threaded use of the
tarfile module, which is probably quite rare and therefore not a real
concern.
The kind of race condition which can happen here is if an attacker
creates "targetpath" between os.path.exists and os.unlink. Whether it
is an exploitable flaw would need a detailed analysis, of course.

Regards

Antoine.



From nadeem.vawda at gmail.com  Thu Apr 28 17:40:05 2011
From: nadeem.vawda at gmail.com (Nadeem Vawda)
Date: Thu, 28 Apr 2011 17:40:05 +0200
Subject: [Python-Dev] [Python-checkins] cpython (2.7): Fix closes
 issue10761: tarfile.extractall failure when symlinked files are
In-Reply-To: <20110428144450.GB2699@kevin>
References: <E1QFLhw-0004TQ-Qe@dinsdale.python.org>
	<4DB97796.8010204@netwok.org> <20110428144450.GB2699@kevin>
Message-ID: <BANLkTinW4ZhukNBuT+zvwpFop-HKiK6Spg@mail.gmail.com>

On Thu, Apr 28, 2011 at 4:44 PM, Senthil Kumaran <orsenthil at gmail.com> wrote:
> On Thu, Apr 28, 2011 at 04:20:06PM +0200, Éric Araujo wrote:
>> >          if hasattr(os, "symlink") and hasattr(os, "link"):
>> >              # For systems that support symbolic and hard links.
>> >              if tarinfo.issym():
>> > +                if os.path.exists(targetpath):
>> > +                    os.unlink(targetpath)
>>
>> Is there a race condition here?
>
> The lock to avoid race conditions (if you were thinking along those
> lines) would usually be implemented at the higher level code which is
> using extractall in threads.
>
> Checking that no one else is accessing the file before unlinking may
> not be suitable for the library method and of course, we cannot check
> if someone is waiting to act on that file.

I think Éric is referring to the possibility of another process creating or
deleting targetpath between the calls to os.path.exists() and os.unlink().
This would result in symlink() or unlink() raising an exception.

The deletion case could be handled like this:

             if tarinfo.issym():
+                try:
+                    os.unlink(targetpath)
+                except OSError as e:
+                    if e.errno != errno.ENOENT:
+                        raise
                 os.symlink(tarinfo.linkname, targetpath)

I'm not sure what the best way of handling the creation case is. The obvious
solution would be to try the above code in a loop, repeating until we succeed
(or fail for a different reason), but this would not be guaranteed to
terminate.
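
For illustration only, a bounded-retry sketch (the helper name is made
up, and this is not what the committed patch does):

import errno
import os

def _replace_symlink(linkname, targetpath, attempts=10):
    # Hypothetical sketch: retry a bounded number of times so that a
    # concurrent creator of targetpath cannot make us loop forever.
    for _ in range(attempts):
        try:
            os.unlink(targetpath)
        except OSError as e:
            if e.errno != errno.ENOENT:
                raise
        try:
            os.symlink(linkname, targetpath)
            return
        except OSError as e:
            if e.errno != errno.EEXIST:
                raise
    raise OSError(errno.EEXIST, "could not replace %r" % targetpath)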

Cheers,
Nadeem

From ziade.tarek at gmail.com  Thu Apr 28 17:45:28 2011
From: ziade.tarek at gmail.com (=?ISO-8859-1?Q?Tarek_Ziad=E9?=)
Date: Thu, 28 Apr 2011 17:45:28 +0200
Subject: [Python-Dev] the role of assert in the standard library ?
In-Reply-To: <20110428112629.7dd26254@neurotica.wooz.org>
References: <BANLkTimtoxmyNg9sn7e27soPBdxQMZ-q2Q@mail.gmail.com>
	<ipb8qr$6jl$1@dough.gmane.org>
	<20110428103733.5aefc6e0@neurotica.wooz.org>
	<19897.34348.886773.133607@montanaro.dyndns.org>
	<20110428112629.7dd26254@neurotica.wooz.org>
Message-ID: <BANLkTikCsGweatq1YPsOGbbc_ND1sFE6LQ@mail.gmail.com>

On Thu, Apr 28, 2011 at 5:26 PM, Barry Warsaw <barry at python.org> wrote:
> On Apr 28, 2011, at 10:22 AM, skip at pobox.com wrote:
>
>>     Barry> I would agree.  Use asserts for "this can't possibly happen
>>     Barry> <wink>" conditions.
>>
>>Without looking, I suspect that's probably what the author thought he was
>>doing.
>
> BTW, I think it always helps to have a really good assert message, and/or a
> leading comment to explain *why* that condition can't possibly happen.

why bother, it can't happen ;)

>
> -Barry
>



-- 
Tarek Ziadé | http://ziade.org

From robert.kern at gmail.com  Thu Apr 28 17:52:02 2011
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 28 Apr 2011 10:52:02 -0500
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <BANLkTim4y6BXKq_YxbDtExKPvCF2PDyTjQ@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>	<ip9km0$ppm$1@dough.gmane.org>
	<ip9oe7$hgb$1@dough.gmane.org>	<BANLkTimEzYbo24h1WwQBt9LjybSX9gUFwg@mail.gmail.com>	<ipanmc$jnu$1@dough.gmane.org>	<BANLkTikgDXag3BfLPoaiqXg0=bJiqKF0tA@mail.gmail.com>	<ipaq70$uu3$1@dough.gmane.org>
	<BANLkTim4y6BXKq_YxbDtExKPvCF2PDyTjQ@mail.gmail.com>
Message-ID: <ipc2f3$1gc$1@dough.gmane.org>

On 4/27/11 11:54 PM, Guido van Rossum wrote:
> On Wed, Apr 27, 2011 at 9:25 PM, Robert Kern<robert.kern at gmail.com>  wrote:
>> On 2011-04-27 23:01 , Guido van Rossum wrote:
>>> And I wouldn't want to change that. It sounds like NumPy wouldn't be
>>> much affected if we were to change this (which I'm not saying we
>>> would).
>>
>> Well, I didn't say that. If Python changed its behavior for (float('nan') ==
>> float('nan')), we'd have to seriously consider some changes.
>
> Ah, but I'm not proposing anything of the sort! float('nan') returns a
> new object each time and two NaNs that are not the same *object* will
> still follow the IEEE std. It's just when comparing a NaN-valued
> *object* to *itself* (i.e. the *same* object) that I would consider
> following the lead of Python's collections.

Ah, I see!

>> We do like to
>> keep *some* amount of correspondence with Python semantics. In particular,
>> we like our scalar types that match Python types to work as close to the
>> Python type as possible. We have the np.float64 type, which represents a C
>> double scalar and corresponds to a Python float. It is used when a single
>> item is indexed out of a float64 array. We even subclass from the Python
>> float type to help working with libraries that may not know about numpy:
>>
>> [~]
>> |5>  import numpy as np
>>
>> [~]
>> |6>  nan = np.array([1.0, 2.0, float('nan')])[2]
>>
>> [~]
>> |7>  nan == nan
>> False
>
> Yeah, this is where things might change, because it is the same
> *object* left and right.
>
>> [~]
>> |8>  type(nan)
>> numpy.float64
>>
>> [~]
>> |9>  type(nan).mro()
>> [numpy.float64,
>>   numpy.floating,
>>   numpy.inexact,
>>   numpy.number,
>>   numpy.generic,
>>   float,
>>   object]
>>
>>
>> If the Python float type changes behavior, we'd have to consider whether to
>> keep that for np.float64 or change it to match the usual C semantics used
>> elsewhere. So there *would* be a dilemma. Not necessarily the most
>> nerve-wracking one, but a dilemma nonetheless.
>
> Given what I just said, would it still be a dilemma? Maybe a smaller one?

Smaller, certainly. But now it's a trilemma. :-)

1. Have just np.float64 and np.complex128 scalars follow the Python float 
semantics since they subclass Python float and complex, respectively.
2. Have all np.float* and np.complex* scalars follow the Python float semantics.
3. Keep the current IEEE-754 semantics for all float scalar types.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
  that is made terrible by our own mad attempt to interpret it as though it had
  an underlying truth."
   -- Umberto Eco


From robert.kern at gmail.com  Thu Apr 28 18:01:28 2011
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 28 Apr 2011 11:01:28 -0500
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <BANLkTik9a1_QVMEs23LGN_Zvvy9_x4a0Jg@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>	<ip9km0$ppm$1@dough.gmane.org>
	<ip9oe7$hgb$1@dough.gmane.org>	<BANLkTimS=Kf8JpJc51_1JNWr3VjWWvS8fw@mail.gmail.com>	<BANLkTikmOD1exDeCf0FqmjbjKNeBzyQRMg@mail.gmail.com>	<ipaqm5$1h7$1@dough.gmane.org>
	<BANLkTik9a1_QVMEs23LGN_Zvvy9_x4a0Jg@mail.gmail.com>
Message-ID: <ipc30p$4sj$1@dough.gmane.org>

On 4/28/11 12:37 AM, Alexander Belopolsky wrote:
> On Thu, Apr 28, 2011 at 12:33 AM, Robert Kern<robert.kern at gmail.com>  wrote:
>> On 2011-04-27 23:24 , Guido van Rossum wrote:
> ..
>>> So do new masks get created when the outcome of an elementwise
>>> operation is a NaN?
>>
>> No.
>
> Yes.
>
>>>> from MA import array
>>>> print array([0])/array([0])
> [-- ]
>
> (I don't have numpy on this laptop, so the example is using Numeric,
> but I hope you guys did not change that while I was not looking:-)

This behavior is not what you think it is. Rather, some binary operations have 
been augmented with a domain of validity, and the results will be masked out 
when the domain is violated. Division is one of them, and division by zero will 
cause the result to be masked. You can produce NaNs in other ways that will not 
be masked in both numpy and old Numeric:

[~]
|4> minf = np.ma.array([1e300]) * np.ma.array([1e300])
Warning: overflow encountered in multiply

[~]
|5> minf
masked_array(data = [ inf],
              mask = False,
        fill_value = 1e+20)


[~]
|6> minf - minf
masked_array(data = [ nan],
              mask = False,
        fill_value = 1e+20)

[~]
|14> import MA

[~]
|15> minf = MA.array([1e300]) * MA.array([1e300])

[~]
|16> minf
array([              inf,])

[~]
|17> (minf - minf)[0]
nan

[~]
|25> (minf - minf)._mask is None
True


Numeric has a bug where it cannot print arrays with NaNs, so I just grabbed the 
element out instead of showing it. But I guarantee you that it is not masked.

Masked arrays are not a way to avoid NaNs arising from computations. NaN 
handling is an important part of computing with numpy.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
  that is made terrible by our own mad attempt to interpret it as though it had
  an underlying truth."
   -- Umberto Eco


From marks at dcs.gla.ac.uk  Thu Apr 28 18:04:29 2011
From: marks at dcs.gla.ac.uk (Mark Shannon)
Date: Thu, 28 Apr 2011 17:04:29 +0100
Subject: [Python-Dev] Not-a-Number (was PyObject_RichCompareBool
 identity shortcut)
In-Reply-To: <4DB98080.2060903@pearwood.info>
References: <4DB7E3EA.3030208@avl.com>	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>	<87d3k79jvt.fsf@uwakimon.sk.tsukuba.ac.jp>	<BANLkTi=AusPRDsf2zKDGteZ5dGxs0EEuXw@mail.gmail.com>	<4DB90748.4030501@g.nevcal.com>	<BANLkTi=eAug-2n+MsQvSpaet5PM4NQDHSg@mail.gmail.com>	<4DB916DE.1050302@g.nevcal.com>	<BANLkTikGVfox3dXkO7B5f5iQbX5L8ypNgw@mail.gmail.com>	<4DB927F4.3040206@dcs.gla.ac.uk>
	<4DB98080.2060903@pearwood.info>
Message-ID: <4DB9900D.9060805@dcs.gla.ac.uk>

Steven D'Aprano wrote:
> Mark Shannon wrote:
>> Related to the discussion on "Not a Number" can I point out a few things 
>> that have not been explicitly addressed so far.
>>
>> The IEEE standard is about hardware and bit patterns, rather than types 
>> and values so may not be entirely appropriate for high-level language
>> like Python.
> 
> I would argue that the implementation of NANs is irrelevant. If NANs are 
> useful in hardware floats -- and I think they are -- then they're just 
> as equally useful as objects, or as strings in languages like REXX or 
> Hypertalk where all data is stored as strings, or as quantum wave 
> functions in some future quantum computer.

So, indeed, it's OK if type(NaN) != type(0.0)?

> 
> 
>> NaN is *not* a number (the clue is in the name).
>> Python treats it as if it were a number:
>>
>>  >>> import numbers
>>  >>> isinstance(nan, numbers.Number)
>> True
>>
>> Can be read as "'Not a Number' is a Number" ;)
> 
> I see your wink, but what do you make of these?
> 
> class NotAnObject(object):
>      pass
> 
> nao = NotAnObject()
> assert isinstance(nao, object)

Trying to make something not an object in a language where everything is 
an object is bound to be problematic.

> 
> class NotAType(object):
>      pass
> 
> assert type(NotAType) is type
> 
> 
> 
>> NaN does not have to be a float or a Decimal.
>> Perhaps it should have its own class.
> 
> Others have already pointed out this won't make any difference.
> 
> Fundamentally, the problem is that some containers bypass equality tests 
> for identity tests. There may be good reasons for that shortcut, but it 
> leads to problems with *any* object that does not define equality to be 
> reflexive, not just NANs.
> 
> 
>  >>> class Null:
> ...     def __eq__(self, other):
> ...             return False
> ...
>  >>> null = Null()
>  >>> null == null
> False
>  >>> [null] == [null]
> True
> 

Just because you can do that, doesn't mean you should.
Equality should be reflexive; without that fundamental assumption, many
non-numeric algorithms fall apart.

> 
> 
>> The default comparisons will then work as expected for collections.
>> (No doubt, making NaN a new class will cause a whole new set of problems)
>>
>> As pointed out by Meyer:
>> NaN == NaN is False
>> is no more logical than
>> NaN != NaN is False
> 
> I don't agree with this argument. I think Meyer is completely mistaken 
> there. The question of NAN equality is that of a vacuous truth, quite 
> similar to the Present King of France:
> 
> http://en.wikipedia.org/wiki/Present_King_of_France
> 
> Meyer would have us accept that:
> 
>      The present King of France is a talking horse
> 
> and
> 
>      The present King of France is not a talking horse
> 
> are equally (pun not intended) valid. No, no they're not. I don't know 
> much about who the King of France would be if France had a king, but I 
> do know that he wouldn't be a talking horse.
> 
> Once you accept that NANs aren't equal to anything else, it becomes a 
> matter of *practicality beats purity* to accept that they can't be equal 

Not breaking a whole bunch of collections and algorithms has a certain 
practical appeal as well ;)

> to themselves either. A NAN doesn't represent a specific thing. It's a 
> signal that your calculation has generated an indefinite, undefined, 
> undetermined value. NANs aren't equal to anything. The fact that a NAN 
> happens to have an existence as a bit-pattern at some location, or as a 
> distinct object, is an implementation detail that is irrelevant. If you 
> just happen by some fluke to compare a NAN to "itself", that shouldn't 
> change the result of the comparison:
> 
>      The present King of France is the current male sovereign who
>      rules France
> 
> is still false, even if you happen to write it like this:
> 
>      The present King of France is the present King of France
> 

The problem with this argument is the present King of France does not 
exist, whereas NaN (as a Python object) does exist.

The present King of France argument only applies to non-existent things. 
Python objects do exist (as much as any computer language entity 
exists). So the expression "The present King of France" either raises an 
exception (non-existence) or evaluates to an object (existence).
In this case "the present King of France" doesn't exist and should raise 
a FifthRepublicException :)
inf / inf does not raise an exception, but evaluates to NaN, so NaN
exists. For objects (that exist):
(x is x) is True.
The present President of France is the present President of France,
regardless of who he or she may be.
> 
> This might seem surprising to those who are used to reflexivity. Oh 
> well. Just because reflexivity holds for actual things, doesn't mean it 
> holds for, er, things that aren't things. NANs are things that aren't 
> things.
A NaN is a thing that *is* a thing; it exists:
object.__repr__(float('nan'))
> 

Of course, if inf - inf and inf/inf raised exceptions,
then NaN wouldn't exist (as a Python object)
and the problem would just go away :)
After all, 0.0/0.0 already raises an exception, even though
IEEE defines 0.0/0.0 as NaN.
> 
>> Although both NaN == NaN and NaN != NaN could arguably be a "maybe" 
>> value, the all important reflexivity (x == x is True)  is effectively 
>> part of the language.
>> All collections rely on it and Python wouldn't be much use without 
>> dicts, tuples and lists.
> 
> Perhaps they shouldn't rely on it. Identity tests are an implementation 
> detail. But in any case, reflexivity is *not* a guarantee of Python. 
> With rich comparisons, you can define __eq__ to do anything you like.

And if you do define __eq__ to be non-reflexive then things will break.
Should an object that breaks so much (i.e. NaN in its current form) be in 
the standard library?
Perhaps we should just get rid of it?



From rob.cliffe at btinternet.com  Thu Apr 28 18:11:24 2011
From: rob.cliffe at btinternet.com (Rob Cliffe)
Date: Thu, 28 Apr 2011 17:11:24 +0100
Subject: [Python-Dev] Not-a-Number (was PyObject_RichCompareBool
 identity shortcut)
In-Reply-To: <4DB98080.2060903@pearwood.info>
References: <4DB7E3EA.3030208@avl.com>	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>	<87d3k79jvt.fsf@uwakimon.sk.tsukuba.ac.jp>	<BANLkTi=AusPRDsf2zKDGteZ5dGxs0EEuXw@mail.gmail.com>	<4DB90748.4030501@g.nevcal.com>	<BANLkTi=eAug-2n+MsQvSpaet5PM4NQDHSg@mail.gmail.com>	<4DB916DE.1050302@g.nevcal.com>	<BANLkTikGVfox3dXkO7B5f5iQbX5L8ypNgw@mail.gmail.com>	<4DB927F4.3040206@dcs.gla.ac.uk>
	<4DB98080.2060903@pearwood.info>
Message-ID: <4DB991AC.7020101@btinternet.com>


On 28/04/2011 15:58, Steven D'Aprano wrote:
> Fundamentally, the problem is that some containers bypass equality 
> tests for identity tests. There may be good reasons for that shortcut, 
> but it leads to problems with *any* object that does not define 
> equality to be reflexive, not just NANs.
I say you have that backwards.  It is a legitimate shortcut, and any 
object that (perversely) doesn't define equality to be reflexive leads 
(unsurprisingly) to problems with it (and with *anything else* that - 
very reasonably - assumes that identity implies equality).

>
> Mark Shannon wrote:
>> Although both NaN == NaN and NaN != NaN could arguably be a "maybe" 
>> value, the all important reflexivity (x == x is True)  is effectively 
>> part of the language.
>> All collections rely on it and Python wouldn't be much use without 
>> dicts, tuples and lists.
>
> Perhaps they shouldn't rely on it. Identity tests are an 
> implementation detail. But in any case, reflexivity is *not* a 
> guarantee of Python. With rich comparisons, you can define __eq__ to 
> do anything you like.
>
And you can write
     True = False
(at least in older versions of Python you could).  No language stops you 
from writing stupid programs.

In fact I would propose that the language should DEFINE the meaning of 
"==" to be True if its operands are identical, and only if they are not 
would it use the comparison operators, thus enforcing reflexivity.  
(Nothing stops you from writing your own non-reflexive __eq__ and 
calling it explicitly, and I think it is right that you should have to 
work harder and be more explicit if you want that behaviour.)
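
In effect, == would behave like this little sketch, which is what the
built-in containers already do internally:

def reflexive_eq(a, b):
    # Identity first, then the type's own comparison.
    return a is b or a == b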

Please, please, can we have a bit of common sense and perspective here.  
No-one (not even a mathematician) except someone from Wonderland would 
seriously want an object not equal to itself.

Regards
Rob Cliffe

From steve at pearwood.info  Thu Apr 28 18:33:03 2011
From: steve at pearwood.info (Steven D'Aprano)
Date: Fri, 29 Apr 2011 02:33:03 +1000
Subject: [Python-Dev] Not-a-Number (was PyObject_RichCompareBool
 identity shortcut)
In-Reply-To: <4DB9900D.9060805@dcs.gla.ac.uk>
References: <4DB7E3EA.3030208@avl.com>	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>	<87d3k79jvt.fsf@uwakimon.sk.tsukuba.ac.jp>	<BANLkTi=AusPRDsf2zKDGteZ5dGxs0EEuXw@mail.gmail.com>	<4DB90748.4030501@g.nevcal.com>	<BANLkTi=eAug-2n+MsQvSpaet5PM4NQDHSg@mail.gmail.com>	<4DB916DE.1050302@g.nevcal.com>	<BANLkTikGVfox3dXkO7B5f5iQbX5L8ypNgw@mail.gmail.com>	<4DB927F4.3040206@dcs.gla.ac.uk>	<4DB98080.2060903@pearwood.info>
	<4DB9900D.9060805@dcs.gla.ac.uk>
Message-ID: <4DB996BF.90604@pearwood.info>

Mark Shannon wrote:
> Steven D'Aprano wrote:
>> Mark Shannon wrote:
>>> Related to the discussion on "Not a Number" can I point out a few 
>>> things that have not been explicitly addressed so far.
>>>
>>> The IEEE standard is about hardware and bit patterns, rather than 
>>> types and values so may not be entirely appropriate for high-level 
>>> language
>>> like Python.
>>
>> I would argue that the implementation of NANs is irrelevant. If NANs 
>> are useful in hardware floats -- and I think they are -- then they're 
>> just as equally useful as objects, or as strings in languages like 
>> REXX or Hypertalk where all data is stored as strings, or as quantum 
>> wave functions in some future quantum computer.
> 
> So, indeed, it's OK if type(NaN) != type(0.0)?

Sure. But that just adds complexity without actually resolving anything.



>> Fundamentally, the problem is that some containers bypass equality 
>> tests for identity tests. There may be good reasons for that shortcut, 
>> but it leads to problems with *any* object that does not define 
>> equality to be reflexive, not just NANs.
[...]
> Just because you can do that, doesn't mean you should.
> Equality should be reflexive, without that fundamental assumption many 
> non-numeric algorithms fall apart.

So what? If I have a need for non-reflexivity in my application, why 
should I care that some other algorithm, which I'm not using, will fail?

Python supports non-reflexivity. If I take advantage of that feature, I 
can't guarantee that *other objects* will be smart enough to understand 
this. This is no different from any other property of my objects.



>>> The default comparisons will then work as expected for collections.
>>> (No doubt, making NaN a new class will cause a whole new set of 
>>> problems)
>>>
>>> As pointed out by Meyer:
>>> NaN == NaN is False
>>> is no more logical than
>>> NaN != NaN is False
>>
>> I don't agree with this argument. I think Meyer is completely mistaken 
>> there. The question of NAN equality is that of a vacuous truth, quite 
>> similar to the Present King of France:
>>
>> http://en.wikipedia.org/wiki/Present_King_of_France
[...]
> The problem with this argument is the present King of France does not 
> exist, whereas NaN (as a Python object) does exist.

NANs (as Python objects) exist in the same way as the present King of 
France exists as words. It's an implementation detail: we can't talk 
about the non-existent present King of France without using words, and 
we can't do calculations on non-existent/indeterminate values in Python 
without objects.

Words can represent things that don't exist, and so can bit-patterns or 
objects or any other symbol. We must be careful to avoid mistaking the 
symbol (the NAN bit-pattern or object) for the thing (the result of 
whatever calculation generated that NAN). The idea of equality we care 
about is equality of what the symbol represents, not the symbol itself.

The meaning of "spam and eggs" should not differ according to the 
typeface we write the words in. Likewise the number 42 should not differ 
according to how the int object is laid out, or whether the bit-pattern 
is little-endian or big-endian. What matters is the "thing" itself, 42, 
not the symbol: it will still be 42 even if we decided to write it in 
Roman numerals or base 13.

Likewise, what matters is the non-thingness of NANs, not the fact that 
the symbol for them has an existence as an object or a bit-pattern.



-- 
Steven

From guido at python.org  Thu Apr 28 18:55:37 2011
From: guido at python.org (Guido van Rossum)
Date: Thu, 28 Apr 2011 09:55:37 -0700
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <ipc30p$4sj$1@dough.gmane.org>
References: <4DB7E3EA.3030208@avl.com>
	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>
	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
	<ip9km0$ppm$1@dough.gmane.org> <ip9oe7$hgb$1@dough.gmane.org>
	<BANLkTimS=Kf8JpJc51_1JNWr3VjWWvS8fw@mail.gmail.com>
	<BANLkTikmOD1exDeCf0FqmjbjKNeBzyQRMg@mail.gmail.com>
	<ipaqm5$1h7$1@dough.gmane.org>
	<BANLkTik9a1_QVMEs23LGN_Zvvy9_x4a0Jg@mail.gmail.com>
	<ipc30p$4sj$1@dough.gmane.org>
Message-ID: <BANLkTi=wcAarA++N7ToMwvYd8BcbYKPWfQ@mail.gmail.com>

[This is a mega-reply, combining responses to several messages in this
thread. I may be repeating myself a bit, but I think I am being
consistent. :-)]


On Wed, Apr 27, 2011 at 10:12 PM, Nick Coghlan <ncoghlan at gmail.com> wrote:
> On Thu, Apr 28, 2011 at 2:54 PM, Guido van Rossum <guido at python.org> wrote:
>>> Well, I didn't say that. If Python changed its behavior for (float('nan') ==
>>> float('nan')), we'd have to seriously consider some changes.
>>
>> Ah, but I'm not proposing anything of the sort! float('nan') returns a
>> new object each time and two NaNs that are not the same *object* will
>> still follow the IEEE std. It's just when comparing a NaN-valued
>> *object* to *itself* (i.e. the *same* object) that I would consider
>> following the lead of Python's collections.
>
> The reason this possibility bothers me is that it doesn't mesh well
> with the "implementations are free to cache and reuse immutable
> objects" rule. Although, if the updated NaN semantics were explicit
> that identity was now considered part of the value of NaN objects
> (thus ruling out caching them at the implementation layer), I guess
> that objection would go away.

The rules for float could be expanded to disallow NaN caching.

But even if we didn't change any rules, reusing immutable objects
could currently make computations undefined, because container
comparisons use the "identity wins" rule. E.g. if we didn't change the
rule for nan==nan, but we did change float("nan") to always return a
specific singleton, comparisons like [float("nan")] == [float("nan")]
would change in outcome. (Note that not all NaNs could be the same
object, since there are multiple bit patterns meaning NaN; IIUC this
is different from Inf.)
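
For concreteness, the current CPython behaviour in question:

>>> nan = float('nan')
>>> nan == nan
False
>>> [nan] == [nan]                    # same object: identity shortcut wins
True
>>> [float('nan')] == [float('nan')]  # distinct objects: falls back to ==
False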

All this makes me realize that there would be another issue, one that
I wouldn't know how to deal with: a JITting interpreter could
translate code involving floats into machine code, at which point
object identity would be lost (presumably the machine code would use
IEEE value semantics for NaN).

This also reminds me that the current "identity wins" rules for
containers, combined with the "NaN==NaN is always False" for
non-container contexts, theoretically also might pose constraints on
the correctness of certain JIT optimizations. I don't know if PyPy
optimizes any code involving tuples or lists of floats, so I don't
know if it is a problem in practice, but it does seem to pose a
complex constraint in theory.

TBH Whatever Raymond may say, I have never been a fan of the "identity
wins" rules for containers given that we don't have a corresponding
rule requiring __eq__ to return True for x.__eq__(x).


On Wed, Apr 27, 2011 at 10:27 PM, Alexander Belopolsky
<alexander.belopolsky at gmail.com> wrote:
> Note that ctypes' floats already behave this way:
>
>>>> x = c_double(float('nan'))
>>>> x == x
> True

But ctypes floats are not numbers. I don't think this provides any
evidence (except of possibly a shortcut in the ctypes implementation
for == :-).

> Before we go down this path, I would like to discuss another
> peculiarity of NaNs:
>
>>>> float('nan') < 0
> False
>>>> float('nan') > 0
> False
>
> This property in my experience causes much more trouble than nan ==
> nan being false.  The problem is that common sorting or binary search
> algorithms may degenerate into infinite loops in the presence of nans.
>  This may even happen when searching for a finite value in a large
> array that contains a single nan.  Errors like this do happen in the
> wild and and after chasing a bug like this programmers tend to avoid
> nans at all costs.  Oftentimes this leads to using "magic"
> placeholders such as 1e300 for missing data.
>
> Since py3k has already made None < 0 an error, it may be reasonable
> for float('nan') < 0 to raise an error as well (probably ValueError
> rather than TypeError).  This will not make lists with nans sortable
> or searchable using binary search, but will make associated bugs
> easier to find.
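
(For concreteness, the degenerate behaviour being described looks
roughly like this on current CPython; the exact output depends on how
the sort's run detection happens to see the data:)

>>> sorted([3.0, float('nan'), 1.0, 2.0])
[3.0, nan, 1.0, 2.0]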

Hmm... It feels like a much bigger can of worms and I'm not at all
sure that it is going to work out any better than the current behavior
(which can be coarsely characterized as "tough shit, float + {NaN} do
not form a total ordering" :-). Remember when some string comparisons
would raise exceptions if "uncomparable" Unicode and non-Unicode
values were involved? That was a major pain and we gladly killed that
in Py3k. (Though it was for ==/!=, not for < etc.)

Basically I think the IEEE std has probably done a decent job of
defining how NaNs should behave, with the exception of object identity
-- because the IEEE std does not deal with objects, only with values.
The only other thing that could perhaps work would be to disallow NaN
from ever being created, instead always raising an exception if NaN
would be produced. Like we do with division by zero. But that would be
a *huge* incompatible change to Python's floating point capabilities
and I'm not interested in going there. The *only* point where I think
we might have a real problem is the discrepancy between individual NaN
comparisons and container comparisons involving NaN (which take
identity into account in a way that individual comparisons don't).


On Wed, Apr 27, 2011 at 10:53 PM, Alexander Belopolsky
<alexander.belopolsky at gmail.com> wrote:
> On Thu, Apr 28, 2011 at 12:24 AM, Guido van Rossum <guido at python.org> wrote:
>> So do new masks get created when the outcome of an elementwise
>> operation is a NaN?  Because that's the only reason why one should have
>> NaNs in one's data in the first place.
>
> If this is the case, why Python almost never produces NaNs as IEEE
> standard prescribes?
>
>>>> 0.0/0.0
> Traceback (most recent call last):
>  File "<stdin>", line 1, in <module>
> ZeroDivisionError: float division

Even the IEEE std, AFAIK, lets you separately control what happens on
zero division and on NaN-producing operations. Python has chosen to
always raise an exception on zero division, and I don't think this
violates the IEEE std.

>> -- not to indicate missing values!
>
> Sometimes you don't have a choice.  For example when you data comes
> from a database that uses NaNs for missing values.

I would choose to call that a bug in the database. It should use None, not NaN.


On Wed, Apr 27, 2011 at 11:07 PM, Greg Ewing
<greg.ewing at canterbury.ac.nz> wrote:
> Guido van Rossum wrote:
>>
>> Currently NaN is not violating
>> any language rules -- it is just violating users' intuition, in a much
>> worse way than Inf does.
>
> If it's to be an official language non-rule (by which I mean
> that types are officially allowed to compare non-reflexively)
> then any code assuming that identity implies equality for
> arbitrary objects is broken and should be fixed.

Only if there's a use case for passing it NaNs.


On Wed, Apr 27, 2011 at 11:51 PM, Alexander Belopolsky
<alexander.belopolsky at gmail.com> wrote:
> On Thu, Apr 28, 2011 at 2:20 AM, Glenn Linderman <v+python at g.nevcal.com> wrote:
> ..
>> In that bug, Nick, you mention that reflexive equality is something that
>> container classes rely on in their implementation.  Such reliance seems to
>> me to be a bug, or an inappropriate optimization, ..
>
> An alternative interpretation would be that it is a bug to use NaN
> values in lists.

This would be bad; the list shouldn't care what kind of objects can be
stored in it.

> It is certainly nonsensical to use NaNs as keys in
> dictionaries

But somehow it works, if you consider each NaN *object* as a different
value. :-)

> and that reportedly led Java designers to forgo the
> nonreflexivity of nans:
>
> """
> A "NaN" value is not equal to itself. However, a "NaN" Java "Float"
> object is equal to itself. The semantic is defined this way, because
> otherwise "NaN" Java "Float" objects cannot be retrieved from a hash
> table.
> """ - http://www.concentric.net/~ttwang/tech/javafloat.htm

That is exactly the change I am proposing (currently with a strength
of +0) for Python, because Python's containers (at least the built-in
ones) have already decided to follow this rule even if the float type
itself has not yet.

> With the status quo in Python, it may only make sense to store NaNs in
> array.array, but not in a list.

I do not see how this follows.


On Thu, Apr 28, 2011 at 12:57 AM, Nick Coghlan <ncoghlan at gmail.com> wrote:
> Because this assertion is an assertion about the behaviour of
> comparisons that violates IEEE754, while the assertions I list are all
> assertions about the behaviour of containers that can be made true
> *regardless* of IEEE754 by checking identity explicitly.
>
> The correct assertion under Python's current container semantics is:
>
>  if list(c1) == list(c2):  # Make ordering assumption explicit
>    assert all(x is y or x == y for x,y in zip(c1, c2))  # Enforce reflexivity

That does not apply to all containers and does not make much sense for
any containers except those we call sequences (although there are
different but similar rules for other categories of containers). And I
think you meant it backwards: the second line is actually the
(current) *definition* of sequence identity, it does not just follow
from sequence identity.

However, Python *used* to define sequence equality as plain
elementwise equality, meaning that if nan==nan is always False,
[nan]==[nan] would likewise be False.

Raymond strongly believes that containers must be allowed to use the
modified definition, I believe purely for performance reasons.
(Without this rule, a list or tuple could not even cut short being
compared to *itself*.) It seems you are in that camp too.

I think that if the rule for containers is really that important, we
should take the logical consequence and make a rule that a
well-behaved type defines __eq__ and __ne__ to let object identity
overrule whatever definition of value equality it has, and we should
change float and decimal to follow this rule. (The "well-behaved"
qualifier is intended to clarify that the language doesn't actually
try to enforce this rule, similar to the existing rule about
correspondence between __hash__ and __eq__.)
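
(A hypothetical sketch of what that rule would mean for a float-like
type; the class is made up purely for illustration:)

class IdentityAwareFloat(float):
    # Identity overrules the IEEE 754 "NaN != NaN" result, so
    # self == self is always True for a well-behaved __eq__.
    def __eq__(self, other):
        if self is other:
            return True
        return float.__eq__(self, other)

    def __ne__(self, other):
        if self is other:
            return False
        return float.__ne__(self, other)

    __hash__ = float.__hash__  # keep hashability when overriding __eq__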

> Meyer is a purist - sticking with the mathematical definition of
> equality is the sort of thing that fits his view of the world and what
> Eiffel should be, even if it hinders interoperability with other
> languages and tools. Python tends to be a bit more pragmatic about
> things, in particular when it comes to interoperability, so it makes
> sense to follow IEEE754 and the decimal specification at the
> individual comparison level.

So what *does* Eiffel do when comparing two NaNs from different sources?

I would say that in this case, Python's approach started out as naive,
not pragmatic -- I was (and still mostly am) clueless about all issues
numeric. Augmenting float/decimal equality to let object identity win
would be an example of pragmatic.

> However, we can contain the damage to some degree by specifying that
> containers should enforce reflexivity where they need it. This is
> already the case at the implementation level (collections.Sequence
> aside), it just needs to be pushed up to the language definition
> level.

I think that when objects are involved, the word reflexivity does not
convey the right intuition.

>> Can you give examples of algorithms that would break if one of your
>> invariants is violated, but would still work if the data contains
>> NaNs?
>
> Sure, anything that cares more about objects than it does about
> values. The invariants are about making containers behave like
> containers as far as possible, even in the face of recalcitrant types
> like IEEE754 floating point.

TBH I think it's more about being allowed to take various shortcuts in
the implementation than about some abstract behavioral property. The
abstract behavioral property doesn't matter that much, but assuming it
enables the optimization, and the optimization does matter. Another
example of pragmatics.


On Thu, Apr 28, 2011 at 8:52 AM, Robert Kern <robert.kern at gmail.com> wrote:
> Smaller, certainly. But now it's a trilemma. :-)
>
> 1. Have just np.float64 and np.complex128 scalars follow the Python float
> semantics since they subclass Python float and complex, respectively.
> 2. Have all np.float* and np.complex* scalars follow the Python float
> semantics.
> 3. Keep the current IEEE-754 semantics for all float scalar types.

*If* my proposal gets accepted, there will be a blanket rule that no
matter how exotic a type's __eq__ is defined, self.__eq__(self)
(i.e., __eq__ called with the same *object* argument) must return True
if the type's __eq__ is to be considered well-behaved; and Python
containers may assume (for the purpose of optimizing their own
comparison operations) that their elements have a well-behaved __eq__.

-- 
--Guido van Rossum (python.org/~guido)

From guido at python.org  Thu Apr 28 18:59:40 2011
From: guido at python.org (Guido van Rossum)
Date: Thu, 28 Apr 2011 09:59:40 -0700
Subject: [Python-Dev] the role of assert in the standard library ?
In-Reply-To: <BANLkTimtoxmyNg9sn7e27soPBdxQMZ-q2Q@mail.gmail.com>
References: <BANLkTimtoxmyNg9sn7e27soPBdxQMZ-q2Q@mail.gmail.com>
Message-ID: <BANLkTimOyCx-6C_wWzLombH0Z=jj6S3DbQ@mail.gmail.com>

On Thu, Apr 28, 2011 at 12:54 AM, Tarek Ziadé <ziade.tarek at gmail.com> wrote:
> In my opinion assert should be avoided completely anywhere else than
> in the tests. If this is a wrong statement, please let me know why :)

I would turn that around. The assert statement should not be used in
unit tests; unit tests should use self.assertXyzzy() always. In
regular code, assert should be about detecting buggy code. It should
not be used to test for error conditions in input data. (Both these
can be summarized as "if you still want the test to happen with -O,
don't use assert.)
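
A minimal sketch of that distinction (made-up code, not from the
stdlib):

import os

def _pick_bucket(key, buckets):
    # Internal invariant: callers guarantee at least one bucket.  A
    # failure here means our own code is buggy, so assert is fine; the
    # check simply disappears under -O.
    assert buckets, "internal error: no buckets configured"
    return buckets[hash(key) % len(buckets)]

def load_config(path):
    # Error condition in input data: raise a real exception, never
    # assert, or the check silently vanishes under -O.
    if not os.path.exists(path):
        raise IOError("config file not found: %r" % path)
    with open(path) as f:
        return f.read()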

-- 
--Guido van Rossum (python.org/~guido)

From steve at pearwood.info  Thu Apr 28 19:25:08 2011
From: steve at pearwood.info (Steven D'Aprano)
Date: Fri, 29 Apr 2011 03:25:08 +1000
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <BANLkTimoFKAKeG3iXQnBXi4x1xoLKPnkTA@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>	<87d3k79jvt.fsf@uwakimon.sk.tsukuba.ac.jp>	<BANLkTi=AusPRDsf2zKDGteZ5dGxs0EEuXw@mail.gmail.com>	<4DB90748.4030501@g.nevcal.com>
	<BANLkTimoFKAKeG3iXQnBXi4x1xoLKPnkTA@mail.gmail.com>
Message-ID: <4DB9A2F4.7020807@pearwood.info>

Alexander Belopolsky wrote:

> With the status quo in Python, it may only make sense to store NaNs in
> array.array, but not in a list.


That's a bit extreme. It only gets you into trouble if you reason like this:

 >>> a = b = [1, 2, 3, float('nan')]
 >>> if a == b:
...     for x,y in zip(a,b):
...             assert x==y
...
Traceback (most recent call last):
   File "<stdin>", line 3, in <module>
AssertionError


But it's perfectly fine to do this:

 >>> sum(a)
nan


exactly as expected. Prohibiting NANs from lists is massive overkill for 
a small (alleged) problem.

I know thousands of words have been spilled on this, including many by 
myself, but I really believe this discussion is mostly bike-shedding. 
Given the vehemence of some replies, and the volume of talk, anyone 
would think that you could hardly write a line of Python code without 
badly tripping over problems caused by NANs. The truth is, I think, that 
most people will never see one in real world code, and those who are 
least likely to come across them are the most likely to be categorically 
against them.

(I grant that Alexander is an exception -- I understand that he does do 
a lot of numeric work, and does come across NANs, and still doesn't like 
them one bit.)



-- 
Steven


From steve at pearwood.info  Thu Apr 28 19:26:48 2011
From: steve at pearwood.info (Steven D'Aprano)
Date: Fri, 29 Apr 2011 03:26:48 +1000
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <4DB972AB.6090302@btinternet.com>
References: <4DB7E3EA.3030208@avl.com>	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
	<4DB972AB.6090302@btinternet.com>
Message-ID: <4DB9A358.3070704@pearwood.info>

Rob Cliffe wrote:

> To me the idea of non-reflexive equality (an object not being equal to 
> itself) is abhorrent.  Nothing is more likely to put off new Python 
> users if they happen to run into it.

I believe that's a gross exaggeration. In any case, that's just your 
opinion, and Python is hardly the only language that supports (at least 
partially) NANs.

Besides, floats have all sorts of unintuitive properties that go against 
properties of real numbers, and new users manage to cope.

With floats, even ignoring NANs, you can't assume:

a*(b+c) == a*b + a*c
a+b+c == c+b+a
1.0/x*x == 1
x+y-x == y
x+1 > x

or many other properties of real numbers. In real code, the lack of 
reflexivity for NANs is just not that important. You can program for 
*years* without once accidentally stumbling over one, whereas you can't 
do the simplest floating point calculation without stubbing your toes on 
things like this:

 >>> 1.0/10
0.10000000000000001

Search the archives of the python-list at python.org mailing list. You will 
find regular questions from newbies similar to "Why doesn't Python 
calculate 1/10 correctly, is it broken?"

(Except that most of the time they don't *ask* if it's broken, they just 
declare that it is.)

Compared to that, which is concrete and obvious and frequent, NANs are 
usually rare and mild.

The fact is, NANs are useful. Less useful in Python, which goes out of 
its way to avoid generating them (a pity, in my opinion), but still useful.


> Basically it's deferring to a wart, of dubious value, in floating point 
> calculations and/or the IEEE754 standard, and allowing it to become a 
> monstrous carbuncle disfiguring the whole language.

A ridiculous over-reaction. How long have you been programming in 
Python? Months? Years? If the language was "disfigured" by a "monstrous 
carbuncle", you haven't noticed until now.


> I think implementations of equal/not-equal which are make equality 
> non-reflexive (and thus break "identity implies equality") should be 
> considered broken.

Then Python is broken by design, because by design *all* rich comparison 
methods can do anything.


> On 27/04/2011 15:53, Guido van Rossum wrote:
>> Maybe we should just call off the odd NaN comparison behavior?
> Right on, Guido.  (A pity that a lot of people don't seem to be listening.)

Oh we're listening. Some of us are just *disagreeing*.



-- 
Steven


From steve at pearwood.info  Thu Apr 28 19:41:25 2011
From: steve at pearwood.info (Steven D'Aprano)
Date: Fri, 29 Apr 2011 03:41:25 +1000
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <BANLkTi=wcAarA++N7ToMwvYd8BcbYKPWfQ@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>	<ip9km0$ppm$1@dough.gmane.org>
	<ip9oe7$hgb$1@dough.gmane.org>	<BANLkTimS=Kf8JpJc51_1JNWr3VjWWvS8fw@mail.gmail.com>	<BANLkTikmOD1exDeCf0FqmjbjKNeBzyQRMg@mail.gmail.com>	<ipaqm5$1h7$1@dough.gmane.org>	<BANLkTik9a1_QVMEs23LGN_Zvvy9_x4a0Jg@mail.gmail.com>	<ipc30p$4sj$1@dough.gmane.org>
	<BANLkTi=wcAarA++N7ToMwvYd8BcbYKPWfQ@mail.gmail.com>
Message-ID: <4DB9A6C5.6070804@pearwood.info>

Guido van Rossum wrote:

> *If* my proposal gets accepted, there will be a blanket rule that no
> matter how exotic a type's __eq__ is defined, self.__eq__(self)
> (i.e., __eq__ called with the same *object* argument) must return True
> if the type's __eq__ is to be considered well-behaved; and Python
> containers may assume (for the purpose of optimizing their own
> comparison operations) that their elements have a well-behaved __eq__.

I think that so long as "badly defined" objects are explicitly still 
permitted (with the understanding that they may behave badly in 
containers), and so long as NANs continue to be "badly behaved" in this 
sense, then I could live with that. It's really just formalizing the 
status quo as deliberate policy rather than an accident:

nan == nan will still return False

[nan] == [nan] will still return True.

Purists on both sides will hate it :)
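
(For the record, a quick session showing the status quo that would be 
formalised:

 >>> nan = float('nan')
 >>> nan == nan
 False
 >>> [nan] == [nan]
 True
 >>> nan in set([nan])
 True
)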



-- 
Steven

From steve at pearwood.info  Thu Apr 28 20:00:01 2011
From: steve at pearwood.info (Steven D'Aprano)
Date: Fri, 29 Apr 2011 04:00:01 +1000
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <4DB9A2F4.7020807@pearwood.info>
References: <4DB7E3EA.3030208@avl.com>	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>	<87d3k79jvt.fsf@uwakimon.sk.tsukuba.ac.jp>	<BANLkTi=AusPRDsf2zKDGteZ5dGxs0EEuXw@mail.gmail.com>	<4DB90748.4030501@g.nevcal.com>	<BANLkTimoFKAKeG3iXQnBXi4x1xoLKPnkTA@mail.gmail.com>
	<4DB9A2F4.7020807@pearwood.info>
Message-ID: <4DB9AB21.9060003@pearwood.info>

Steven D'Aprano wrote:

> I know thousands of words have been spilled on this, including many by 
> myself, but I really believe this discussion is mostly bike-shedding. 

Hmmm... on reflection, I think I may have been a bit unfair. In 
particular, I don't mean any slight on any of the people who have made 
intelligent, insightful posts, even if I disagree with them.




-- 
Steven


From glyph at twistedmatrix.com  Thu Apr 28 20:03:10 2011
From: glyph at twistedmatrix.com (Glyph Lefkowitz)
Date: Thu, 28 Apr 2011 14:03:10 -0400
Subject: [Python-Dev] the role of assert in the standard library ?
In-Reply-To: <BANLkTimOyCx-6C_wWzLombH0Z=jj6S3DbQ@mail.gmail.com>
References: <BANLkTimtoxmyNg9sn7e27soPBdxQMZ-q2Q@mail.gmail.com>
	<BANLkTimOyCx-6C_wWzLombH0Z=jj6S3DbQ@mail.gmail.com>
Message-ID: <220C4E56-7BF1-41A0-9C4C-16DDFFB86585@twistedmatrix.com>


On Apr 28, 2011, at 12:59 PM, Guido van Rossum wrote:

> On Thu, Apr 28, 2011 at 12:54 AM, Tarek Ziadé <ziade.tarek at gmail.com> wrote:
>> In my opinion assert should be avoided completely anywhere else than
>> in the tests. If this is a wrong statement, please let me know why :)
> 
> I would turn that around. The assert statement should not be used in
> unit tests; unit tests should use self.assertXyzzy() always. In
> regular code, assert should be about detecting buggy code. It should
> not be used to test for error conditions in input data. (Both these
> can be summarized as "if you still want the test to happen with -O,
> don't use assert.)

You're both right! :)  My take on "assert" is "don't use it, ever".

assert is supposed to be about conditions that never happen.  So there are a few cases where I might be tempted to use it:

If I use it to enforce a precondition, it's wrong because under -OO my preconditions won't be checked and my input might be invalid.

If I use it to enforce a postcondition, then my API's consumers have to occasionally handle this weird error, except it won't be checked under -OO so they won't be able to handle it consistently.

If I use it to try to make assertions about internal state during a computation, then I introduce an additional, untested (at the very least untested under -OO), probably undocumented (did I remember to say "and raises AssertionError when..." in its docstring?) code path where when this "bad" thing happens, I get an exception instead of a result.

If that's an important failure mode, then there ought to be a documented exception, which the computation's consumers can deal with.

If it really should "never happen", then I really should have just written some unit tests verifying that it doesn't happen in any case I can think of.  And I shouldn't be writing code to handle cases I can't come up with any way to exercise, because how do I know that it's going to do the right thing?  (If I had a dollar for every 'assert' message that didn't have the right number of arguments to its format string, etc.)

Also, when things that should "never happen" do actually happen in real life, is a random exception that interrupts the process actually an improvement over just continuing on with some potentially bad data?  In most cases, no, it really isn't, because by blowing up you've removed the ability of the user to take corrective action or do a workaround.  (In the cases where blowing up is better because you're about to do something destructive, again, a test seems in order.)

My Python code is very well documented, which means that there is sometimes a significant runtime overhead from docstrings.  That's really my only interest in -OO: reducing memory footprint of Python processes by dropping dozens of megabytes of library documentation from each process.  The fact that it changes the semantics of 'assert' is an unfortunate distraction.

So the only time I'd even consider using 'assert' is in a throwaway script which might be run once, that I'm not going to write any tests for and I'm not going to maintain, but I might care about just enough to want to blow up instead of calling 'os.unlink' if certain conditions are not met.

(But then every time I actually use it that way, I realize that I should have dealt with the error sanely and I probably have to go back and fix it anyway.)
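
(To make the -O point concrete, a tiny sketch -- hypothetical file and 
names, obviously:

    # cleanup.py
    import os

    def unlink_all(paths):
        # "precondition" checked only when asserts are enabled
        assert all(p.startswith('/tmp/') for p in paths), "refusing to touch non-temp files"
        for p in paths:
            os.unlink(p)

Run with "python cleanup.py", a bad path raises AssertionError; run with 
"python -O cleanup.py", the assert is compiled away and the unlink 
happens anyway.)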

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110428/8ed3d779/attachment.html>

From alexander.belopolsky at gmail.com  Thu Apr 28 20:09:21 2011
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Thu, 28 Apr 2011 14:09:21 -0400
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <4DB9A2F4.7020807@pearwood.info>
References: <4DB7E3EA.3030208@avl.com>
	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>
	<87d3k79jvt.fsf@uwakimon.sk.tsukuba.ac.jp>
	<BANLkTi=AusPRDsf2zKDGteZ5dGxs0EEuXw@mail.gmail.com>
	<4DB90748.4030501@g.nevcal.com>
	<BANLkTimoFKAKeG3iXQnBXi4x1xoLKPnkTA@mail.gmail.com>
	<4DB9A2F4.7020807@pearwood.info>
Message-ID: <BANLkTindeU1VxK2VR+VrkBMQ_wk-_00n=A@mail.gmail.com>

On Thu, Apr 28, 2011 at 1:25 PM, Steven D'Aprano <steve at pearwood.info> wrote:
..
> But it's perfectly fine to do this:
>
>>>> sum(a)
> nan
>

This use case reminded me of Kahan's

"""
Were there no way to get rid of NaNs, they would be as useless as
Indefinites on CRAYs; as soon as one were encountered, computation
would be best stopped rather than continued for an indefinite time to
an Indefinite conclusion.
""" http://www.cs.berkeley.edu/~wkahan/ieee754status/ieee754.ps

More often than not, you would want to sum non-NaN values instead.

..
> (I grant that Alexander is an exception -- I understand that he does do a
> lot of numeric work, and does come across NANs, and still doesn't like them
> one bit.)

I like NaNs for high-performance calculations, but once you wrap
floats individually in Python objects, performance is killed and you
are better off using None instead of NaN.

Python lists don't support element-wise operations and therefore there
is little gain from being able to write x + y in loops over list
elements instead of ieee_add(x, y) or add_or_none(x, y) with proper
definitions of these functions.  On the other hand, __eq__ gets
invoked implicitly in cases where you don't have access to the loop.  Your
only choice is to filter your data before invoking such operations.
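
(For concreteness, one plausible definition of the latter -- just a 
sketch, nothing standard:

    def add_or_none(x, y):
        # propagate None the way NaN propagates, instead of raising
        if x is None or y is None:
            return None
        return x + y
)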

From rob.cliffe at btinternet.com  Thu Apr 28 20:14:12 2011
From: rob.cliffe at btinternet.com (Rob Cliffe)
Date: Thu, 28 Apr 2011 19:14:12 +0100
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <4DB9A358.3070704@pearwood.info>
References: <4DB7E3EA.3030208@avl.com>	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>	<4DB972AB.6090302@btinternet.com>
	<4DB9A358.3070704@pearwood.info>
Message-ID: <4DB9AE74.6030909@btinternet.com>

On 28/04/2011 18:26, Steven D'Aprano wrote:
> Rob Cliffe wrote:
>
>> To me the idea of non-reflexive equality (an object not being equal 
>> to itself) is abhorrent.  Nothing is more likely to put off new 
>> Python users if they happen to run into it.
>
> I believe that's a gross exaggeration. In any case, that's just your 
> opinion, and Python is hardly the only language that supports (at 
> least partially) NANs.
>
> Besides, floats have all sorts of unintuitive properties that go 
> against properties of real numbers, and new users manage to cope.
>
> With floats, even ignoring NANs, you can't assume:
>
> a*(b+c) == a*b + a*c
> a+b+c == c+b+a
> 1.0/x*x == 1
> x+y-x == y
> x+1 > x
>
> or many other properties of real numbers. In real code, the lack of 
> reflexivity for NANs is just not that important. You can program for 
> *years* without once accidentally stumbling over one, whereas you 
> can't do the simplest floating point calculation without stubbing your 
> toes on things like this:
>
> >>> 1.0/10
> 0.10000000000000001
>
Of course, these are inevitable consequences of floating-point 
representation.  Inevitable in just about *any* language.
>
> The fact is, NANs are useful. Less useful in Python, which goes out of 
> its way to avoid generating them (a pity, in my opinion), but still 
> useful.
>
I am not arguing against the use of NANs.  Or even against different 
NANs not being equal to each other.
What I was arguing about was the behaviour of Python objects that 
represent NANs, specifically in allowing
     x == x
to be False, something which is *not* inevitable but a choice of 
language design or usage.

Rob Cliffe

From dasdasich at googlemail.com  Thu Apr 28 20:55:19 2011
From: dasdasich at googlemail.com (DasIch)
Date: Thu, 28 Apr 2011 20:55:19 +0200
Subject: [Python-Dev] Proposal for a common benchmark suite
Message-ID: <BANLkTik6rAqX28TrXgHmErK8jD5n_uquNQ@mail.gmail.com>

Hello,
As mentioned in my GSoC proposal, I'd like to announce which benchmarks
I'll use for the benchmark suite I will work on this summer.

As of now there are two benchmark suites (that I know of) which
receive some sort of attention, those are the ones developed as part
of the PyPy project[1] which is used for http://speed.pypy.org and the
one initially developed for Unladen Swallow which has been continued
by CPython[2]. The PyPy benchmarks contain a lot of interesting
benchmarks, some explicitly developed for that suite; the CPython
benchmarks have an extensive set of microbenchmarks in the pybench
package as well as the previously mentioned modifications made to the
Unladen Swallow benchmarks.

I'd like to "simply" merge both suites so that no changes are lost.
However, I'd like to leave out the waf benchmark, which is part of the
PyPy suite; its removal was proposed on pypy-dev because of its obvious
deficits[3]. It will be easier to add a better benchmark later than to
replace this one at a later point.

Unless there is a major issue with this plan I'd like to go forward with this.

.. [1]: https://bitbucket.org/pypy/benchmarks
.. [2]: http://hg.python.org/benchmarks
.. [3]: http://mailrepository.com/pypy-dev.codespeak.net/msg/3627509/

From janssen at parc.com  Thu Apr 28 20:54:09 2011
From: janssen at parc.com (Bill Janssen)
Date: Thu, 28 Apr 2011 11:54:09 PDT
Subject: [Python-Dev] Simple XML-RPC server over SSL/TLS
In-Reply-To: <4DB975BB.1040402@netwok.org>
References: <BANLkTinDGtWZsDPZ37U5_zqw9Aio-CpeXw@mail.gmail.com>
	<4DB975BB.1040402@netwok.org>
Message-ID: <27392.1304016849@parc.com>

Éric Araujo <merwok at netwok.org> wrote:

> Hi,
> 
> > But what I would like to know, is if is there any reason why XML-RPC can't
> > optionally work over TLS/SSL using Python's ssl module. I'll create a
> > ticket, and send a patch, but I was wondering if it was a reason why this
> > was not implemented.
> 
> I think there's no deeper reason than nobody thought about it.  The ssl
> module is new in 2.6 and 3.x, xmlrpc is an older module for an old
> technology *cough*, so feel free to open a bug report.  Patch guidelines
> are found at http://docs.python.org/devguide  Thanks in advance!

What he said.  I'm not a big fan of XMLRPC in the first place, so I
probably didn't even notice that there wasn't SSL support for it.

Go for it!

Bill

From tjreedy at udel.edu  Thu Apr 28 21:30:58 2011
From: tjreedy at udel.edu (Terry Reedy)
Date: Thu, 28 Apr 2011 15:30:58 -0400
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <BANLkTi=jHZmBVtih1eEyNyduTuoug87qQg@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>	<87d3k79jvt.fsf@uwakimon.sk.tsukuba.ac.jp>	<BANLkTi=AusPRDsf2zKDGteZ5dGxs0EEuXw@mail.gmail.com>	<4DB90748.4030501@g.nevcal.com>	<BANLkTi=eAug-2n+MsQvSpaet5PM4NQDHSg@mail.gmail.com>	<BANLkTikdix72o=46aOrr1Dh-WjXnFZ4auQ@mail.gmail.com>	<BANLkTik8gRUPt2jxSkbEy9GVo1nzdFT0dg@mail.gmail.com>	<BANLkTim92qa_6EpD-_UH8uK-TAamjpr8xg@mail.gmail.com>
	<BANLkTi=jHZmBVtih1eEyNyduTuoug87qQg@mail.gmail.com>
Message-ID: <ipcf9i$hg6$1@dough.gmane.org>

On 4/28/2011 6:11 AM, Nick Coghlan wrote:
> On Thu, Apr 28, 2011 at 6:30 PM, Alexander Belopolsky
> <alexander.belopolsky at gmail.com>  wrote:
>> On Thu, Apr 28, 2011 at 3:57 AM, Nick Coghlan<ncoghlan at gmail.com>  wrote:
>> ..
>>>> It is an interesting question of what "sane invariants" are.  Why you
>>>> consider the invariants that you listed essential while say
>>>>
>>>> if c1 == c2:
>>>>    assert all(x == y for x,y in zip(c1, c2))
>>>>
>>>> optional?
>>>
>>> Because this assertion is an assertion about the behaviour of
>>> comparisons that violates IEEE754, while the assertions I list are all
>>> assertions about the behaviour of containers that can be made true
>>> *regardless* of IEEE754 by checking identity explicitly.
>>>
>>
>> AFAIK, IEEE754 says nothing about comparison of containers, so my
>> invariant cannot violate it.  What you probably wanted to say is that
>> my invariant cannot be achieved in the presence of IEEE754 conforming
>> floats, but this observation by itself does not make my invariant less
>> important than yours.  It just makes yours easier to maintain.
>
> No, I meant what I said. Your assertion includes a direct comparison
> between values (the "x == y" part) which means that IEEE754 has a
> bearing on whether or not it is a valid assertion. Every single one of
> my stated invariants consists solely of relationships between
> containers, or between a container and its contents. This keeps them
> all out of the domain of IEEE754 since the *container implementers*
> get to decide whether or not to factor object identity into the
> management of the container contents.
>
> The core containment invariant is really only this one:
>
>      for x in c:
>          assert x in c
>
> That is, if we iterate over a container, all entries returned should
> be in the container. Hopefully it is non-controversial that this is a
> sane and reasonable invariant for a container *user* to expect.
>
> The comparison invariants follow from the definition of set equivalence as:
>
>    set1 == set2 iff all(x in set2 for x in set1) and all(y in set1 for y in set2)
>
> Again, notice that there is no comparison of items here - merely a
> consideration of the way items relate to containers.

I agree that the container (author) gets to define container equality. 
The definition should also be correctly documented.

5.9. Comparisons says "Tuples and lists are compared lexicographically 
using comparison of corresponding elements. This means that to compare 
equal, each element must compare equal and the two sequences must be of 
the same type and have the same length.". This, I believe is the same as 
what Hrvoje said "I would expect l1 == l2, where l1 and l2 are both 
lists, to be semantically equivalent to len(l1) == len(l2) and 
all(imap(operator.eq, l1, l2))."

But "Currently it isn't, and that was the motivation for this thread." 
In this case, I think the discrepancy should be fixed by changing the 
doc. Add 'be identical or ' before 'compare equal'.
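
Concretely, the discrepancy looks like this at the prompt (a quick 
sketch):

 >>> import operator
 >>> nan = float('nan')
 >>> l1, l2 = [nan], [nan]
 >>> l1 == l2
 True
 >>> len(l1) == len(l2) and all(map(operator.eq, l1, l2))
 False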

-- 
Terry Jan Reedy


From stefan_ml at behnel.de  Thu Apr 28 21:37:31 2011
From: stefan_ml at behnel.de (Stefan Behnel)
Date: Thu, 28 Apr 2011 21:37:31 +0200
Subject: [Python-Dev] Proposal for a common benchmark suite
In-Reply-To: <BANLkTik6rAqX28TrXgHmErK8jD5n_uquNQ@mail.gmail.com>
References: <BANLkTik6rAqX28TrXgHmErK8jD5n_uquNQ@mail.gmail.com>
Message-ID: <ipcfls$jti$1@dough.gmane.org>

DasIch, 28.04.2011 20:55:
> the CPython
> benchmarks have an extensive set of microbenchmarks in the pybench
> package

Try not to care too much about pybench. There is some value in it, but some 
of its microbenchmarks are also tied to CPython's interpreter behaviour. 
For example, the benchmarks for literals can easily be considered dead code 
by other Python implementations so that they may end up optimising the 
benchmarked code away completely, or at least partially. That makes a 
comparison of the results somewhat pointless.

Stefan


From raymond.hettinger at gmail.com  Thu Apr 28 21:51:29 2011
From: raymond.hettinger at gmail.com (Raymond Hettinger)
Date: Thu, 28 Apr 2011 12:51:29 -0700
Subject: [Python-Dev] Identity implies equality
Message-ID: <72A64C96-D7DC-453D-86A4-D7E4ED793025@gmail.com>

ISTM there is no right or wrong answer.
There is just a question of what is most useful.

AFAICT, the code for dictionaries (and therefore the code for sets)
has always had identity-implies-equality logic.  It makes dicts
blindingly fast for common cases.  It also confers some nice
properties like making it possible to retrieve a NaN that has
been stored as a key; otherwise, you could store it but not
look it up, pop it, or delete it (because the equality test would
always fail).  The logic also confers other nice-to-have
properties such as:  

*  d[k] = v; assert k in d   # assignment-implies-contains
*  assert all(k in d for k in d)  # all-members-are-members

These aren't essential invariants but they do provide
a pleasant programming environment and make it easier
to reason about programs.
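
(For example, the NaN-as-key case mentioned above -- same NaN object 
in both places; a *different* NaN object would not be found:

   >>> nan = float('nan')
   >>> d = {nan: 'value'}
   >>> d[nan]
   'value'
   >>> nan in d
   True
)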

Another place where identity-implies-equality logic
is explicit is in PyObject_RichCompareBool().  That lets
many other functions and methods work like
dicts and sets.  It speeds them up and confers
some nice-to-haves like:

*  mylist.append(obj) implies mylist.count(obj) > 0 
*  x = obj implies x == obj   # assignment really works

There may be lots of other code that implicitly
makes similar assumptions.  I don't know how you
could reliably find those and rip them out.

If identity-implies-equality does get ripped out,
I don't know what we would win.  It would make it
possible to do some cute NaN tricks, but I don't
think you can defend against the general problem
of funky objects being able to muck-up code that
looks correct.  You get oddities when an object
lies about its length.  You get oddities when an
object has a hash that doesn't match its equality
function.  The situation with NaNs and sorts is
a prime example:

   >>> sorted([1.2, 3.4, float('Nan'), -1.2, 
              float('Inf'), float('Nan')]) 
   [1.2, 3.4, nan, -1.2, inf, nan]

Personally, I think the status quo is fine
and that practicality is beating purity.
High quality programs are written every day.
Numeric programmers seem to have no problem
using NaNs as-is.  AFAICT, the only actual
problem in front of us is the OP's post where
he was able to surprise himself with some
NaN experiments at the interactive prompt.


Raymond

From tjreedy at udel.edu  Thu Apr 28 22:13:25 2011
From: tjreedy at udel.edu (Terry Reedy)
Date: Thu, 28 Apr 2011 16:13:25 -0400
Subject: [Python-Dev] Not-a-Number (was PyObject_RichCompareBool
	identity shortcut)
In-Reply-To: <4DB927F4.3040206@dcs.gla.ac.uk>
References: <4DB7E3EA.3030208@avl.com>	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>	<87d3k79jvt.fsf@uwakimon.sk.tsukuba.ac.jp>	<BANLkTi=AusPRDsf2zKDGteZ5dGxs0EEuXw@mail.gmail.com>	<4DB90748.4030501@g.nevcal.com>	<BANLkTi=eAug-2n+MsQvSpaet5PM4NQDHSg@mail.gmail.com>	<4DB916DE.1050302@g.nevcal.com>	<BANLkTikGVfox3dXkO7B5f5iQbX5L8ypNgw@mail.gmail.com>
	<4DB927F4.3040206@dcs.gla.ac.uk>
Message-ID: <ipchp6$1ba$1@dough.gmane.org>

On 4/28/2011 4:40 AM, Mark Shannon wrote:

> NaN is *not* a number (the clue is in the name).

The problem is that the committee itself did not believe or stay 
consistent with that. In the text of the draft, they apparently refer to 
NaN as an indefinite, unspecified *number*. Sort of like a random 
variable with a uniform pseudo* distribution over the reals (* 0 
everywhere with integral 1). Or a quantum particle present but smeared 
out over all space. And that apparently is their rationale for NaN != 
NaN: an unspecified number will equal another unspecified number with 
probability 0. The rationale for bool(NaN)==True is that an unspecified 
*number* will be 0 with probability 0. If NaN truly indicated an 
*absence* (like 0 and '') then bool(NaN) should be False.

I think the committee goofed -- badly. Statisticians used missing value 
indicators long before the committee existed. They had no problem 
thinking that the indicator, as an object, equaled itself. So one could 
write (and I often did through the 1980s) the equivalent of

for i,x in enumerate(datavec):
   if x == XMIS: # singleton missing value indicator for BMDP
     datavec[i] = default

(Statistics packages have no concept of identity different from equality.)

If statisticians had made XMIS != XMIS, that obvious code would not have 
worked, just as it does not work today in Python with NaN. Instead, the special case 
circumlocution of "if isXMIS(x):" would have been required, adding one 
more unnecessary function to the list of builtins.
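
The Python analogue today needs exactly that circumlocution (a quick 
sketch):

 >>> import math
 >>> datavec = [1.0, float('nan'), 3.0]
 >>> [x for x in datavec if x == float('nan')]
 []
 >>> [x for x in datavec if math.isnan(x)]
 [nan]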

NaN is, in its domain, the equivalent of None (== Not a Value), which 
also serves as an alternative to immediately raising an exception. But 
like XMIS, None==None. Also, bool(None) is correctly False for something 
that indicates absence.


> Python treats it as if it were a number:

As I said, so did the committee, and that was its mistake that we are 
more or less stuck with.

> NaN does not have to be a float or a Decimal.
> Perhaps it should have its own class.

Like None

> As pointed out by Meyer:
> NaN == NaN is False
> is no more logical than
> NaN != NaN is False

This is wrong if False/True are interpreted as probabilities 0 and 1.

> To summarise:
>
> NaN is required so that floating point operations on arrays and lists
> do not raise unwanted exceptions.

Like None.

> NaN is Not a Number (therefore should be neither a float nor a Decimal).
> Making it a new class would solve some of the problems discussed,
> but would create new problems instead.

Agreed, if we were starting fresh.

> Correct behaviour of collections is more important than IEEE conformance
> of NaN comparisons.

Also agreed.

-- 
Terry Jan Reedy


From _ at lvh.cc  Thu Apr 28 22:15:59 2011
From: _ at lvh.cc (Laurens Van Houtven)
Date: Thu, 28 Apr 2011 22:15:59 +0200
Subject: [Python-Dev] Identity implies equality
In-Reply-To: <72A64C96-D7DC-453D-86A4-D7E4ED793025@gmail.com>
References: <72A64C96-D7DC-453D-86A4-D7E4ED793025@gmail.com>
Message-ID: <BANLkTi=4UJMvvV9UZEcoj3h+4b1iVVya0A@mail.gmail.com>

On Thu, Apr 28, 2011 at 9:51 PM, Raymond Hettinger <
raymond.hettinger at gmail.com> wrote:

> Personally, I think the status quo is fine
> and that practicality is beating purity.
>

+1


>
> Raymond
>

cheers
lvh
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110428/8e868db1/attachment.html>

From mal at egenix.com  Thu Apr 28 22:23:54 2011
From: mal at egenix.com (M.-A. Lemburg)
Date: Thu, 28 Apr 2011 22:23:54 +0200
Subject: [Python-Dev] Proposal for a common benchmark suite
In-Reply-To: <ipcfls$jti$1@dough.gmane.org>
References: <BANLkTik6rAqX28TrXgHmErK8jD5n_uquNQ@mail.gmail.com>
	<ipcfls$jti$1@dough.gmane.org>
Message-ID: <4DB9CCDA.5060808@egenix.com>

Stefan Behnel wrote:
> DasIch, 28.04.2011 20:55:
>> the CPython
>> benchmarks have an extensive set of microbenchmarks in the pybench
>> package
> 
> Try not to care too much about pybench. There is some value in it, but
> some of its microbenchmarks are also tied to CPython's interpreter
> behaviour. For example, the benchmarks for literals can easily be
> considered dead code by other Python implementations so that they may
> end up optimising the benchmarked code away completely, or at least
> partially. That makes a comparison of the results somewhat pointless.

The point of the micro benchmarks in pybench is to be able to compare
them one-by-one, not by looking at the sum of the tests.

If one implementation optimizes away some parts, then the comparison
will show this fact very clearly - and that's the whole point.

Taking the sum of the micro benchmarks only has some meaning
as very rough indicator of improvement. That's why I wrote pybench:
to get a better, more detailed picture of what's happening,
rather than trying to find some way of measuring "average"
use.

This "average" is very different depending on where you look:
for some applications method calls may be very important,
for others, arithmetic operations, and yet others may have more
need for fast attribute lookup.

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Apr 28 2011)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________
2011-06-20: EuroPython 2011, Florence, Italy               53 days to go

::: Try our new mxODBC.Connect Python Database Interface for free ! ::::


   eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
    D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
           Registered at Amtsgericht Duesseldorf: HRB 46611
               http://www.egenix.com/company/contact/

From holger.krekel at gmail.com  Thu Apr 28 22:27:10 2011
From: holger.krekel at gmail.com (Holger Krekel)
Date: Thu, 28 Apr 2011 22:27:10 +0200
Subject: [Python-Dev] the role of assert in the standard library ?
In-Reply-To: <BANLkTimOyCx-6C_wWzLombH0Z=jj6S3DbQ@mail.gmail.com>
References: <BANLkTimtoxmyNg9sn7e27soPBdxQMZ-q2Q@mail.gmail.com>
	<BANLkTimOyCx-6C_wWzLombH0Z=jj6S3DbQ@mail.gmail.com>
Message-ID: <BANLkTiko_MkPkf1M7jm2cW7+UzXzVNQeqg@mail.gmail.com>

On Thu, Apr 28, 2011 at 6:59 PM, Guido van Rossum <guido at python.org> wrote:
> On Thu, Apr 28, 2011 at 12:54 AM, Tarek Ziadé <ziade.tarek at gmail.com> wrote:
>> In my opinion assert should be avoided completely anywhere else than
>> in the tests. If this is a wrong statement, please let me know why :)
>
> I would turn that around. The assert statement should not be used in
> unit tests; unit tests should use self.assertXyzzy() always.

FWIW this is only true for the unittest module/pkg policy for writing and
organising tests. There are other popular test frameworks like nose and pytest
which promote using plain asserts within writing unit tests and also allow to
write tests in functions.  And judging from my tutorials and other places many
people appreciate the ease of using asserts as compared to learning tons
of new methods. YMMV.

Holger

> regular code, assert should be about detecting buggy code. It should
> not be used to test for error conditions in input data. (Both these
> can be summarized as "if you still want the test to happen with -O,
> don't use assert.)
>
> --
> --Guido van Rossum (python.org/~guido)
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/holger.krekel%40gmail.com
>

From tjreedy at udel.edu  Thu Apr 28 22:48:48 2011
From: tjreedy at udel.edu (Terry Reedy)
Date: Thu, 28 Apr 2011 16:48:48 -0400
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <BANLkTi=wcAarA++N7ToMwvYd8BcbYKPWfQ@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>	<ip9km0$ppm$1@dough.gmane.org>
	<ip9oe7$hgb$1@dough.gmane.org>	<BANLkTimS=Kf8JpJc51_1JNWr3VjWWvS8fw@mail.gmail.com>	<BANLkTikmOD1exDeCf0FqmjbjKNeBzyQRMg@mail.gmail.com>	<ipaqm5$1h7$1@dough.gmane.org>	<BANLkTik9a1_QVMEs23LGN_Zvvy9_x4a0Jg@mail.gmail.com>	<ipc30p$4sj$1@dough.gmane.org>
	<BANLkTi=wcAarA++N7ToMwvYd8BcbYKPWfQ@mail.gmail.com>
Message-ID: <ipcjrg$dit$1@dough.gmane.org>

On 4/28/2011 12:55 PM, Guido van Rossum wrote:

> *If* my proposal gets accepted, there will be a blanket rule that no
> matter how exotic a type's __eq__ is defined, self.__eq__(self)
> (i.e., __eq__ called with the same *object* argument) must return True
> if the type's __eq__ is to be considered well-behaved;

This, to me, is a statement of the obvious ;-), but it should be stated 
in the docs.

Do you also propose to make NaNs at least this well-behaved or leave 
them ill-behaved?

 > and Python
> containers may assume (for the purpose of optimizing their own
> comparison operations) that their elements have a well-behaved __eq__.

This almost states the status quo of the implementation, and the doc 
needs to be updated correspondingly. I do not think we should let object 
ill-behavior infect containers, so that they also become ill-behaved 
(not equal to themselves).

-- 
Terry Jan Reedy


From tjreedy at udel.edu  Thu Apr 28 22:56:54 2011
From: tjreedy at udel.edu (Terry Reedy)
Date: Thu, 28 Apr 2011 16:56:54 -0400
Subject: [Python-Dev] the role of assert in the standard library ?
In-Reply-To: <19897.34348.886773.133607@montanaro.dyndns.org>
References: <BANLkTimtoxmyNg9sn7e27soPBdxQMZ-q2Q@mail.gmail.com>	<ipb8qr$6jl$1@dough.gmane.org>	<20110428103733.5aefc6e0@neurotica.wooz.org>
	<19897.34348.886773.133607@montanaro.dyndns.org>
Message-ID: <ipckam$gar$1@dough.gmane.org>

On 4/28/2011 11:22 AM, skip at pobox.com wrote:
>
>      Barry>  I would agree.  Use asserts for "this can't possibly happen
>      Barry>  <wink>" conditions.
>
> Without looking, I suspect that's probably what the author thought he was
> doing.

You wish: to repeat the example from threading:

    def __init__(self, group=None, target=None, name=None,
                  args=(), kwargs=None, verbose=None):
         assert group is None, "group argument must be None for now"

is something that can easily happen.

-- 
Terry Jan Reedy


From robert.kern at gmail.com  Thu Apr 28 23:08:51 2011
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 28 Apr 2011 16:08:51 -0500
Subject: [Python-Dev] the role of assert in the standard library ?
In-Reply-To: <BANLkTiko_MkPkf1M7jm2cW7+UzXzVNQeqg@mail.gmail.com>
References: <BANLkTimtoxmyNg9sn7e27soPBdxQMZ-q2Q@mail.gmail.com>	<BANLkTimOyCx-6C_wWzLombH0Z=jj6S3DbQ@mail.gmail.com>
	<BANLkTiko_MkPkf1M7jm2cW7+UzXzVNQeqg@mail.gmail.com>
Message-ID: <ipcl14$kdv$1@dough.gmane.org>

On 4/28/11 3:27 PM, Holger Krekel wrote:
> On Thu, Apr 28, 2011 at 6:59 PM, Guido van Rossum<guido at python.org>  wrote:
>> On Thu, Apr 28, 2011 at 12:54 AM, Tarek Ziadé<ziade.tarek at gmail.com>  wrote:
>>> In my opinion assert should be avoided completely anywhere else than
>>> in the tests. If this is a wrong statement, please let me know why :)
>>
>> I would turn that around. The assert statement should not be used in
>> unit tests; unit tests should use self.assertXyzzy() always.
>
> FWIW this is only true for the unittest module/pkg policy for writing and
> organising tests. There are other popular test frameworks like nose and pytest
> which promote using plain asserts within writing unit tests and also allow to
> write tests in functions.  And judging from my tutorials and other places many
> people appreciate the ease of using asserts as compared to learning tons
> of new methods. YMMV.
>
> Holger
>
>> regular code, assert should be about detecting buggy code. It should
>> not be used to test for error conditions in input data. (Both these
>> can be summarized as "if you still want the test to happen with -O,
>> don't use assert.)

Regardless of whether those frameworks encourage it, it's still the wrong thing 
to do for the reason that Guido states. Some bugs only show up under -O, so you 
ought to be running your test suite under -O, too.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
  that is made terrible by our own mad attempt to interpret it as though it had
  an underlying truth."
   -- Umberto Eco


From stefan_ml at behnel.de  Thu Apr 28 23:10:08 2011
From: stefan_ml at behnel.de (Stefan Behnel)
Date: Thu, 28 Apr 2011 23:10:08 +0200
Subject: [Python-Dev] Proposal for a common benchmark suite
In-Reply-To: <4DB9CCDA.5060808@egenix.com>
References: <BANLkTik6rAqX28TrXgHmErK8jD5n_uquNQ@mail.gmail.com>	<ipcfls$jti$1@dough.gmane.org>
	<4DB9CCDA.5060808@egenix.com>
Message-ID: <ipcl3g$ksq$1@dough.gmane.org>

M.-A. Lemburg, 28.04.2011 22:23:
> Stefan Behnel wrote:
>> DasIch, 28.04.2011 20:55:
>>> the CPython
>>> benchmarks have an extensive set of microbenchmarks in the pybench
>>> package
>>
>> Try not to care too much about pybench. There is some value in it, but
>> some of its microbenchmarks are also tied to CPython's interpreter
>> behaviour. For example, the benchmarks for literals can easily be
>> considered dead code by other Python implementations so that they may
>> end up optimising the benchmarked code away completely, or at least
>> partially. That makes a comparison of the results somewhat pointless.
>
> The point of the micro benchmarks in pybench is to be able to compare
> them one-by-one, not by looking at the sum of the tests.
>
> If one implementation optimizes away some parts, then the comparison
> will show this fact very clearly - and that's the whole point.
>
> Taking the sum of the micro benchmarks only has some meaning
> as very rough indicator of improvement. That's why I wrote pybench:
> to get a better, more detailed picture of what's happening,
> rather than trying to find some way of measuring "average"
> use.
>
> This "average" is very different depending on where you look:
> for some applications method calls may be very important,
> for others, arithmetic operations, and yet others may have more
> need for fast attribute lookup.

I wasn't talking about "averages" or "sums", and I also wasn't trying to 
put down pybench in general. As it stands, it makes sense as a benchmark 
for CPython.

However, I'm arguing that a substantial part of it does not make sense as a 
benchmark for PyPy and others. With Cython, I couldn't get some of the 
literal arithmetic benchmarks to run at all. The runner script simply bails 
out with an error when the benchmarks accidentally run faster than the 
initial empty loop. I imagine that PyPy would eventually even drop the loop 
itself, thus leaving nothing to compare. Does that tell us that PyPy is 
faster than Cython for arithmetic? I don't think it does.

When I see that a benchmark shows that one implementation runs in 100% less 
time than another, I simply go *shrug* and look for a better benchmark to 
compare the two.

Stefan


From guido at python.org  Thu Apr 28 23:36:44 2011
From: guido at python.org (Guido van Rossum)
Date: Thu, 28 Apr 2011 14:36:44 -0700
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <ipcjrg$dit$1@dough.gmane.org>
References: <4DB7E3EA.3030208@avl.com>
	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>
	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
	<ip9km0$ppm$1@dough.gmane.org> <ip9oe7$hgb$1@dough.gmane.org>
	<BANLkTimS=Kf8JpJc51_1JNWr3VjWWvS8fw@mail.gmail.com>
	<BANLkTikmOD1exDeCf0FqmjbjKNeBzyQRMg@mail.gmail.com>
	<ipaqm5$1h7$1@dough.gmane.org>
	<BANLkTik9a1_QVMEs23LGN_Zvvy9_x4a0Jg@mail.gmail.com>
	<ipc30p$4sj$1@dough.gmane.org>
	<BANLkTi=wcAarA++N7ToMwvYd8BcbYKPWfQ@mail.gmail.com>
	<ipcjrg$dit$1@dough.gmane.org>
Message-ID: <BANLkTi=BwFNEd+ru7kSXOzN7jeAagEpbxA@mail.gmail.com>

On Thu, Apr 28, 2011 at 1:48 PM, Terry Reedy <tjreedy at udel.edu> wrote:
> On 4/28/2011 12:55 PM, Guido van Rossum wrote:
>
>> *If* my proposal gets accepted, there will be a blanket rule that no
>> matter how exotic a type's __eq__ is defined, self.__eq__(self)
>> (i.e., __eq__ called with the same *object* argument) must return True
>> if the type's __eq__ is to be considered well-behaved;
>
> This, to me, is a statement of the obvious ;-), but it should be stated in
> the docs.
>
> Do you also propose to make NaNs at least this well-behaved or leave them
> ill-behaved?

As I said, my proposal is to consider this a bug of the same severity
as __hash__ and __eq__ disagreeing, and it would require float and
Decimal to be changed.

The more conservative folks are in favor of not changing anything
(except the abstract Sequence class), and solving things by
documentation only. In that case the exotic current behavior of should
not be considered a bug but merely unusual, and the behavior of
collections (assuming an object is always equal to itself, never mind
what its __eq__ says) documented as just that. There would not be any
mention of well-behaved nor a judgment that NaN is not well-behaved.

If my proposal is accepted, the definition of sequence comparison etc.
would actually become simpler, since it should not have to mention the
special-casing of object identity; instead it could mention the
assumption of items being well-behaved. Again, the relationship
between __eq__ and __hash__ would be the model here; and in fact a
"well-behaved" type would have both properties (__eq__ returns true ->
same __hash__, object identity -> __eq__ returns true). A type that is
not well-behaved has a bug. I do not want to declare the behavior of
NaN a bug.

>> and Python
>>
>> containers may assume (for the purpose of optimizing their own
>> comparison operations) that their elements have a well-behaved __eq__.
>
> This almost states the status quo of the implementation, and the doc needs
> to be updated correspondingly. I do not think we should let object
> ill-behavior infect containers, so that they also become ill-behaved (not
> equal to themselves).

There are other kinds of bad behavior that will still affect
containers. So we have no choice about containers containing
ill-behaved objects being (potentially) ill-behaved.

In some sense the primary issue at hand is whether "x == x returns
False" indicates that x has a bug, or not. If it is a bug, the current
float and Decimal types have that bug, and need to be fixed; and then
the current behavior of containers is "merely" an optimization which
may fail if there is a buggy item.

The alternative is that we continue to say that it is not a bug,
merely exotic, and that containers should test for identity before
equality, not just as an optimization, but as the very essence of
their semantics.

The third option would be to say that the optimization is wrong. But
nobody wants that, as it would require a container's __eq__ method to
always compare all items before returning True, even when comparing a
container to *itself*.
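
(For reference, the shortcut under discussion behaves roughly like 
this at the Python level -- a sketch of the intent, not the actual C 
code:

    def rich_compare_bool_eq(x, y):
        # PyObject_RichCompareBool(x, y, Py_EQ), approximately
        if x is y:
            return True    # identity wins; __eq__ is never called
        return bool(x == y)
)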

-- 
--Guido van Rossum (python.org/~guido)

From raymond.hettinger at gmail.com  Thu Apr 28 23:53:50 2011
From: raymond.hettinger at gmail.com (Raymond Hettinger)
Date: Thu, 28 Apr 2011 14:53:50 -0700
Subject: [Python-Dev] the role of assert in the standard library ?
In-Reply-To: <BANLkTiko_MkPkf1M7jm2cW7+UzXzVNQeqg@mail.gmail.com>
References: <BANLkTimtoxmyNg9sn7e27soPBdxQMZ-q2Q@mail.gmail.com>
	<BANLkTimOyCx-6C_wWzLombH0Z=jj6S3DbQ@mail.gmail.com>
	<BANLkTiko_MkPkf1M7jm2cW7+UzXzVNQeqg@mail.gmail.com>
Message-ID: <EA284630-4FC8-4BBB-9033-7AE553660489@gmail.com>


On Apr 28, 2011, at 1:27 PM, Holger Krekel wrote:

> On Thu, Apr 28, 2011 at 6:59 PM, Guido van Rossum <guido at python.org> wrote:
>> On Thu, Apr 28, 2011 at 12:54 AM, Tarek Ziadé <ziade.tarek at gmail.com> wrote:
>>> In my opinion assert should be avoided completely anywhere else than
>>> in the tests. If this is a wrong statement, please let me know why :)
>> 
>> I would turn that around. The assert statement should not be used in
>> unit tests; unit tests should use self.assertXyzzy() always.
> 
> FWIW this is only true for the unittest module/pkg policy for writing and
> organising tests. There are other popular test frameworks like nose and pytest
> which promote using plain asserts within writing unit tests and also allow to
> write tests in functions.  And judging from my tutorials and others places many
> people appreciate the ease of using asserts as compared to learning tons
> of new methods. YMMV.

I've also observed that people appreciate using asserts with nose.py and py.test.


Raymond

From guido at python.org  Fri Apr 29 00:07:10 2011
From: guido at python.org (Guido van Rossum)
Date: Thu, 28 Apr 2011 15:07:10 -0700
Subject: [Python-Dev] the role of assert in the standard library ?
In-Reply-To: <EA284630-4FC8-4BBB-9033-7AE553660489@gmail.com>
References: <BANLkTimtoxmyNg9sn7e27soPBdxQMZ-q2Q@mail.gmail.com>
	<BANLkTimOyCx-6C_wWzLombH0Z=jj6S3DbQ@mail.gmail.com>
	<BANLkTiko_MkPkf1M7jm2cW7+UzXzVNQeqg@mail.gmail.com>
	<EA284630-4FC8-4BBB-9033-7AE553660489@gmail.com>
Message-ID: <BANLkTi=_R19SXp1t1fwrO-o+wTTGJCOBcQ@mail.gmail.com>

On Thu, Apr 28, 2011 at 2:53 PM, Raymond Hettinger
<raymond.hettinger at gmail.com> wrote:
>
> On Apr 28, 2011, at 1:27 PM, Holger Krekel wrote:
>
>> On Thu, Apr 28, 2011 at 6:59 PM, Guido van Rossum <guido at python.org> wrote:
>>> On Thu, Apr 28, 2011 at 12:54 AM, Tarek Ziadé <ziade.tarek at gmail.com> wrote:
>>>> In my opinion assert should be avoided completely anywhere else than
>>>> in the tests. If this is a wrong statement, please let me know why :)
>>>
>>> I would turn that around. The assert statement should not be used in
>>> unit tests; unit tests should use self.assertXyzzy() always.
>>
>> FWIW this is only true for the unittest module/pkg policy for writing and
>> organising tests. There are other popular test frameworks like nose and pytest
>> which promote using plain asserts within writing unit tests and also allow to
>> write tests in functions.  And judging from my tutorials and other places many
>> people appreciate the ease of using asserts as compared to learning tons
>> of new methods. YMMV.
>
> I've also observed that people appreciate using asserts with nose.py and py.test.

They must not appreciate -O. :-)

-- 
--Guido van Rossum (python.org/~guido)

From robert.kern at gmail.com  Fri Apr 29 00:22:13 2011
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 28 Apr 2011 17:22:13 -0500
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <BANLkTi=wcAarA++N7ToMwvYd8BcbYKPWfQ@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>	<ip9km0$ppm$1@dough.gmane.org>
	<ip9oe7$hgb$1@dough.gmane.org>	<BANLkTimS=Kf8JpJc51_1JNWr3VjWWvS8fw@mail.gmail.com>	<BANLkTikmOD1exDeCf0FqmjbjKNeBzyQRMg@mail.gmail.com>	<ipaqm5$1h7$1@dough.gmane.org>	<BANLkTik9a1_QVMEs23LGN_Zvvy9_x4a0Jg@mail.gmail.com>	<ipc30p$4sj$1@dough.gmane.org>
	<BANLkTi=wcAarA++N7ToMwvYd8BcbYKPWfQ@mail.gmail.com>
Message-ID: <ipcpam$ceq$1@dough.gmane.org>

On 4/28/11 11:55 AM, Guido van Rossum wrote:

> On Thu, Apr 28, 2011 at 8:52 AM, Robert Kern<robert.kern at gmail.com>  wrote:
>> Smaller, certainly. But now it's a trilemma. :-)
>>
>> 1. Have just np.float64 and np.complex128 scalars follow the Python float
>> semantics since they subclass Python float and complex, respectively.
>> 2. Have all np.float* and np.complex* scalars follow the Python float
>> semantics.
>> 3. Keep the current IEEE-754 semantics for all float scalar types.
>
> *If* my proposal gets accepted, there will be a blanket rule that no
> matter how exotic a type's __eq__ is defined, self.__eq__(self)
> (i.e., __eq__ called with the same *object* argument) must return True
> if the type's __eq__ is to be considered well-behaved; and Python
> containers may assume (for the purpose of optimizing their own
> comparison operations) that their elements have a well-behaved __eq__.

*If* so, then we would just have to decide between #2 and #3.

With respect to this proposal, how does that interact with types that do not 
return bools for rich comparisons? For example, numpy arrays return bool arrays 
for comparisons. SQLAlchemy expressions return other SQLAlchemy expressions, 
etc. I realize that by being "not well-behaved" in this respect, we give up our 
rights to be proper elements in sortable, containment-checking containers. But 
in this and your followup message, you seem to be making a stronger statement 
that self.__eq__(self) returning anything but True would be a bug in all 
contexts.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
  that is made terrible by our own mad attempt to interpret it as though it had
  an underlying truth."
   -- Umberto Eco


From raymond.hettinger at gmail.com  Fri Apr 29 00:31:46 2011
From: raymond.hettinger at gmail.com (Raymond Hettinger)
Date: Thu, 28 Apr 2011 15:31:46 -0700
Subject: [Python-Dev] the role of assert in the standard library ?
In-Reply-To: <BANLkTi=_R19SXp1t1fwrO-o+wTTGJCOBcQ@mail.gmail.com>
References: <BANLkTimtoxmyNg9sn7e27soPBdxQMZ-q2Q@mail.gmail.com>
	<BANLkTimOyCx-6C_wWzLombH0Z=jj6S3DbQ@mail.gmail.com>
	<BANLkTiko_MkPkf1M7jm2cW7+UzXzVNQeqg@mail.gmail.com>
	<EA284630-4FC8-4BBB-9033-7AE553660489@gmail.com>
	<BANLkTi=_R19SXp1t1fwrO-o+wTTGJCOBcQ@mail.gmail.com>
Message-ID: <3A131195-A97F-4C3A-A28D-7DEBB930CD03@gmail.com>


On Apr 28, 2011, at 3:07 PM, Guido van Rossum wrote:

> On Thu, Apr 28, 2011 at 2:53 PM, Raymond Hettinger
> <raymond.hettinger at gmail.com> wrote:
>> 
>> On Apr 28, 2011, at 1:27 PM, Holger Krekel wrote:
>> 
>>> On Thu, Apr 28, 2011 at 6:59 PM, Guido van Rossum <guido at python.org> wrote:
>>>> On Thu, Apr 28, 2011 at 12:54 AM, Tarek Ziadé <ziade.tarek at gmail.com> wrote:
>>>>> In my opinion assert should be avoided completely anywhere else than
>>>>> in the tests. If this is a wrong statement, please let me know why :)
>>>> 
>>>> I would turn that around. The assert statement should not be used in
>>>> unit tests; unit tests should use self.assertXyzzy() always.
>>> 
>>> FWIW this is only true for the unittest module/pkg policy for writing and
>>> organising tests. There are other popular test frameworks like nose and pytest
>>> which promote using plain asserts within writing unit tests and also allow to
>>> write tests in functions.  And judging from my tutorials and other places many
>>> people appreciate the ease of using asserts as compared to learning tons
>>> of new methods. YMMV.
>> 
>> I've also observed that people appreciate using asserts with nose.py and py.test.
> 
> They must not appreciate -O. :-)

It might be nice if there were a pragma or module variable to selectively enable asserts for a given test module, so that -O would turn off asserts in the production code but leave them on in a test suite.

Raymond

From ncoghlan at gmail.com  Fri Apr 29 00:43:10 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Fri, 29 Apr 2011 08:43:10 +1000
Subject: [Python-Dev] Identity implies equality
In-Reply-To: <72A64C96-D7DC-453D-86A4-D7E4ED793025@gmail.com>
References: <72A64C96-D7DC-453D-86A4-D7E4ED793025@gmail.com>
Message-ID: <BANLkTin36UNvZ+C80omgi-s2Qzz3MU8Ldg@mail.gmail.com>

On Fri, Apr 29, 2011 at 5:51 AM, Raymond Hettinger
<raymond.hettinger at gmail.com> wrote:
> *  x = obj implies x == obj   # assignment really works

While I agree with your point of view regarding the status quo as a
useful, practical compromise, I need to call out that particular
example:

>>> nan = float('nan')
>>> x = nan
>>> x == nan
False
>>> x in locals().values()
True

Due to rich comparison and the freedom to implement non-reflexive
definitions of "equality", the assignment "x = obj" implies only that:
- x is obj
- x in locals().values()

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From guido at python.org  Fri Apr 29 00:55:06 2011
From: guido at python.org (Guido van Rossum)
Date: Thu, 28 Apr 2011 15:55:06 -0700
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <ipcpam$ceq$1@dough.gmane.org>
References: <4DB7E3EA.3030208@avl.com>
	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>
	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
	<ip9km0$ppm$1@dough.gmane.org> <ip9oe7$hgb$1@dough.gmane.org>
	<BANLkTimS=Kf8JpJc51_1JNWr3VjWWvS8fw@mail.gmail.com>
	<BANLkTikmOD1exDeCf0FqmjbjKNeBzyQRMg@mail.gmail.com>
	<ipaqm5$1h7$1@dough.gmane.org>
	<BANLkTik9a1_QVMEs23LGN_Zvvy9_x4a0Jg@mail.gmail.com>
	<ipc30p$4sj$1@dough.gmane.org>
	<BANLkTi=wcAarA++N7ToMwvYd8BcbYKPWfQ@mail.gmail.com>
	<ipcpam$ceq$1@dough.gmane.org>
Message-ID: <BANLkTi=G_Vzocf_4WO2YeZva2=k2M4SFwg@mail.gmail.com>

On Thu, Apr 28, 2011 at 3:22 PM, Robert Kern <robert.kern at gmail.com> wrote:
> On 4/28/11 11:55 AM, Guido van Rossum wrote:
>
>> On Thu, Apr 28, 2011 at 8:52 AM, Robert Kern<robert.kern at gmail.com>
>> wrote:
>>>
>>> Smaller, certainly. But now it's a trilemma. :-)
>>>
>>> 1. Have just np.float64 and np.complex128 scalars follow the Python float
>>> semantics since they subclass Python float and complex, respectively.
>>> 2. Have all np.float* and np.complex* scalars follow the Python float
>>> semantics.
>>> 3. Keep the current IEEE-754 semantics for all float scalar types.
>>
>> *If* my proposal gets accepted, there will be a blanket rule that no
>> matter how exotic a type's __eq__ is defined, self.__eq__(self)
>> (i.e., __eq__ called with the same *object* argument) must return True
>> if the type's __eq__ is to be considered well-behaved; and Python
>> containers may assume (for the purpose of optimizing their own
>> comparison operations) that their elements have a well-behaved __eq__.
>
> *If* so, then we would then just have to decide between #2 and #3.
>
> With respect to this proposal, how does that interact with types that do not
> return bools for rich comparisons? For example, numpy arrays return bool
> arrays for comparisons. SQLAlchemy expressions return other SQLAlchemy
> expressions, etc. I realize that by being "not well-behaved" in this
> respect, we give up our rights to be proper elements in sortable,
> containment-checking containers. But in this and your followup message, you
> seem to be making a stronger statement that self.__eq__(self) returning
> anything but True would be a bug in all contexts.

Sorry, we'll have to make an exception for those of course. This will
somewhat complicate the interpretation of well-behaved, because those
are *not* well-behaved as far as containers go (both dict key lookup
and list membership are affected) but it is not a bug -- however it is
a bug to put these in containers and then use container comparisons on
the container.

-- 
--Guido van Rossum (python.org/~guido)

From ncoghlan at gmail.com  Fri Apr 29 01:04:09 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Fri, 29 Apr 2011 09:04:09 +1000
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <BANLkTi=G_Vzocf_4WO2YeZva2=k2M4SFwg@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>
	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>
	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
	<ip9km0$ppm$1@dough.gmane.org> <ip9oe7$hgb$1@dough.gmane.org>
	<BANLkTimS=Kf8JpJc51_1JNWr3VjWWvS8fw@mail.gmail.com>
	<BANLkTikmOD1exDeCf0FqmjbjKNeBzyQRMg@mail.gmail.com>
	<ipaqm5$1h7$1@dough.gmane.org>
	<BANLkTik9a1_QVMEs23LGN_Zvvy9_x4a0Jg@mail.gmail.com>
	<ipc30p$4sj$1@dough.gmane.org>
	<BANLkTi=wcAarA++N7ToMwvYd8BcbYKPWfQ@mail.gmail.com>
	<ipcpam$ceq$1@dough.gmane.org>
	<BANLkTi=G_Vzocf_4WO2YeZva2=k2M4SFwg@mail.gmail.com>
Message-ID: <BANLkTi=OVfe-dq2sOhfWLau1mHUTBYTHiQ@mail.gmail.com>

On Fri, Apr 29, 2011 at 8:55 AM, Guido van Rossum <guido at python.org> wrote:
> Sorry, we'll have to make an exception for those of course. This will
> somewhat complicate the interpretation of well-behaved, because those
> are *not* well-behaved as far as containers go (both dict key lookup
> and list membership are affected) but it is not a bug -- however it is
> a bug to put these in containers and then use container comparisons on
> the container.

That's a point in favour of the status quo, though - with the burden
of enforcing reflexivity placed on the containers, types are free to
make use of rich comparisons to return more than just simple
True/False results.

I hadn't really thought about it that way before this discussion - it
is the identity checking behaviour of the builtin containers that lets
us sensibly handle cases like sets of NumPy arrays.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From benjamin at python.org  Fri Apr 29 01:09:59 2011
From: benjamin at python.org (Benjamin Peterson)
Date: Thu, 28 Apr 2011 18:09:59 -0500
Subject: [Python-Dev] the role of assert in the standard library ?
In-Reply-To: <3A131195-A97F-4C3A-A28D-7DEBB930CD03@gmail.com>
References: <BANLkTimtoxmyNg9sn7e27soPBdxQMZ-q2Q@mail.gmail.com>
	<BANLkTimOyCx-6C_wWzLombH0Z=jj6S3DbQ@mail.gmail.com>
	<BANLkTiko_MkPkf1M7jm2cW7+UzXzVNQeqg@mail.gmail.com>
	<EA284630-4FC8-4BBB-9033-7AE553660489@gmail.com>
	<BANLkTi=_R19SXp1t1fwrO-o+wTTGJCOBcQ@mail.gmail.com>
	<3A131195-A97F-4C3A-A28D-7DEBB930CD03@gmail.com>
Message-ID: <BANLkTimJTC-y+uNJERUOuKSKom+sjeCZSg@mail.gmail.com>

2011/4/28 Raymond Hettinger <raymond.hettinger at gmail.com>:
> It might be nice if there were a pragma or module variable to selectively enable asserts for a given test module, so that -O would turn off asserts in the production code but leave them on in a test suite.

from __future__ import perfect_code_so_no_asserts

:)



-- 
Regards,
Benjamin

From ncoghlan at gmail.com  Fri Apr 29 01:12:54 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Fri, 29 Apr 2011 09:12:54 +1000
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <BANLkTi=wcAarA++N7ToMwvYd8BcbYKPWfQ@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>
	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>
	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
	<ip9km0$ppm$1@dough.gmane.org> <ip9oe7$hgb$1@dough.gmane.org>
	<BANLkTimS=Kf8JpJc51_1JNWr3VjWWvS8fw@mail.gmail.com>
	<BANLkTikmOD1exDeCf0FqmjbjKNeBzyQRMg@mail.gmail.com>
	<ipaqm5$1h7$1@dough.gmane.org>
	<BANLkTik9a1_QVMEs23LGN_Zvvy9_x4a0Jg@mail.gmail.com>
	<ipc30p$4sj$1@dough.gmane.org>
	<BANLkTi=wcAarA++N7ToMwvYd8BcbYKPWfQ@mail.gmail.com>
Message-ID: <BANLkTinqqDRw_Dut70pHLh_5UF5LXEq02A@mail.gmail.com>

On Fri, Apr 29, 2011 at 2:55 AM, Guido van Rossum <guido at python.org> wrote:
> Raymond strongly believes that containers must be allowed to use the
> modified definition, I believe purely for performance reasons.
> (Without this rule, a list or tuple could not even cut short being
> compared to *itself*.) It seems you are in that camp too.

I'm a fan of the status quo, but not just for performance reasons -
there is quite a bit of set theory that breaks once you allow
non-reflexive equality*, so it makes sense to me to make it official
that containers should ignore any non-reflexivity they come across.

*To all the mathematicians in the audience yelling at their screens
that the very idea of "non-reflexive equality" is an oxymoron... yes,
I know :P

Cheers,
Nick.

P.S. It's hard to explain the slightly odd point of view that seeing
standard arithmetic constructed from Peano's Axioms and set theory can
give you on discussions like this. It's a seriously different (and
strange) way of thinking about the basic arithmetic constructs we
normally take for granted, though :)

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From guido at python.org  Fri Apr 29 01:13:58 2011
From: guido at python.org (Guido van Rossum)
Date: Thu, 28 Apr 2011 16:13:58 -0700
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <BANLkTi=OVfe-dq2sOhfWLau1mHUTBYTHiQ@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>
	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>
	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
	<ip9km0$ppm$1@dough.gmane.org> <ip9oe7$hgb$1@dough.gmane.org>
	<BANLkTimS=Kf8JpJc51_1JNWr3VjWWvS8fw@mail.gmail.com>
	<BANLkTikmOD1exDeCf0FqmjbjKNeBzyQRMg@mail.gmail.com>
	<ipaqm5$1h7$1@dough.gmane.org>
	<BANLkTik9a1_QVMEs23LGN_Zvvy9_x4a0Jg@mail.gmail.com>
	<ipc30p$4sj$1@dough.gmane.org>
	<BANLkTi=wcAarA++N7ToMwvYd8BcbYKPWfQ@mail.gmail.com>
	<ipcpam$ceq$1@dough.gmane.org>
	<BANLkTi=G_Vzocf_4WO2YeZva2=k2M4SFwg@mail.gmail.com>
	<BANLkTi=OVfe-dq2sOhfWLau1mHUTBYTHiQ@mail.gmail.com>
Message-ID: <BANLkTikufXV_Ooe=uNmr=rmvyv=AVezKtQ@mail.gmail.com>

On Thu, Apr 28, 2011 at 4:04 PM, Nick Coghlan <ncoghlan at gmail.com> wrote:
> On Fri, Apr 29, 2011 at 8:55 AM, Guido van Rossum <guido at python.org> wrote:
>> Sorry, we'll have to make an exception for those of course. This will
>> somewhat complicate the interpretation of well-behaved, because those
>> are *not* well-behaved as far as containers go (both dict key lookup
>> and list membership are affected) but it is not a bug -- however it is
>> a bug to put these in containers and then use container comparisons on
>> the container.
>
> That's a point in favour of the status quo, though - with the burden
> of enforcing reflexivity placed on the containers, types are free to
> make use of rich comparisons to return more than just simple
> True/False results.

Possibly. Though for types that *do* return True/False, NaN's behavior
can still be infuriating.

> I hadn't really thought about it that way before this discussion - it
> is the identity checking behaviour of the builtin containers that lets
> us sensibly handle cases like sets of NumPy arrays.

But do they? For non-empty arrays, __eq__ will always return something
that is considered true, so any hash collisions will cause false
positives. And look at this simple example:

>>> class C(list):
...   def __eq__(self, other):
...     if isinstance(other, C):
...       return [x == y for x, y in zip(self, other)]
...
>>> a = C([1,2,3])
>>> b = C([2,1,3])
>>> a == b
[False, False, True]
>>> x = [a, a]
>>> b in x
True

-- 
--Guido van Rossum (python.org/~guido)

From ncoghlan at gmail.com  Fri Apr 29 01:40:40 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Fri, 29 Apr 2011 09:40:40 +1000
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <BANLkTikufXV_Ooe=uNmr=rmvyv=AVezKtQ@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>
	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>
	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
	<ip9km0$ppm$1@dough.gmane.org> <ip9oe7$hgb$1@dough.gmane.org>
	<BANLkTimS=Kf8JpJc51_1JNWr3VjWWvS8fw@mail.gmail.com>
	<BANLkTikmOD1exDeCf0FqmjbjKNeBzyQRMg@mail.gmail.com>
	<ipaqm5$1h7$1@dough.gmane.org>
	<BANLkTik9a1_QVMEs23LGN_Zvvy9_x4a0Jg@mail.gmail.com>
	<ipc30p$4sj$1@dough.gmane.org>
	<BANLkTi=wcAarA++N7ToMwvYd8BcbYKPWfQ@mail.gmail.com>
	<ipcpam$ceq$1@dough.gmane.org>
	<BANLkTi=G_Vzocf_4WO2YeZva2=k2M4SFwg@mail.gmail.com>
	<BANLkTi=OVfe-dq2sOhfWLau1mHUTBYTHiQ@mail.gmail.com>
	<BANLkTikufXV_Ooe=uNmr=rmvyv=AVezKtQ@mail.gmail.com>
Message-ID: <BANLkTimOUJkHz1EPTmLb1OXeExz+8j0KZg@mail.gmail.com>

On Fri, Apr 29, 2011 at 9:13 AM, Guido van Rossum <guido at python.org> wrote:
>> I hadn't really thought about it that way before this discussion - it
>> is the identity checking behaviour of the builtin containers that lets
>> us sensibly handle cases like sets of NumPy arrays.
>
> But do they? For non-empty arrays, __eq__ will always return something
> that is considered true, so any hash collisions will cause false
> positives. And look at this simple example:
>
>>>> class C(list):
> ...   def __eq__(self, other):
> ...     if isinstance(other, C):
> ...       return [x == y for x, y in zip(self, other)]
> ...
>>>> a = C([1,2,3])
>>>> b = C([2,1,3])
>>>> a == b
> [False, False, True]
>>>> x = [a, a]
>>>> b in x
> True

Hmm, true. And things like count() and index() would still be
thoroughly broken for sequences. OK, so scratch that idea - there's
simply no sane way to handle such objects without using an
identity-based container that ignores equality definitions altogether.

Pondering the NaN problem further, I think we can relatively easily
argue that reflexive behaviour at the object level fits within the
scope of IEEE754.

1. IEEE754 is a value-based system, with a finite number of distinct
NaN payloads
2. Python is an object-based system. In addition to their payload, NaN
objects are further distinguished by their identity (infinite in
theory, in practice limited by available memory).
3. We can still technically be conformant with IEEE754 even if we say
that a given NaN object is equivalent to itself, but not to other NaN
objects with the same payload.

Unfortunately, this still doesn't play well with serialisation, which
assumes that the identity of float objects doesn't matter:

>>> import pickle
>>> nan = float('nan')
>>> x = [nan, nan]
>>> x[0] is x[1]
True
>>> y = pickle.loads(pickle.dumps(x))
>>> y
[nan, nan]
>>> y[0] is y[1]
False

Contrast that with the handling of lists, where identity is known to
be significant:

>>> x = [[]]*2
>>> x[0] is x[1]
True
>>> y = pickle.loads(pickle.dumps(x))
>>> y
[[], []]
>>> y[0] is y[1]
True

I'd say I've definitely come around to being +0 on the idea of making
the float() and decimal.Decimal() __eq__ definitions reflexive, but
doing so does have implications when it comes to the ability to
accurately save and restore application state. It isn't as simple as
just adding "if self is other: return True" to the respective __eq__
implementations.

Regards,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From guido at python.org  Fri Apr 29 01:47:13 2011
From: guido at python.org (Guido van Rossum)
Date: Thu, 28 Apr 2011 16:47:13 -0700
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <BANLkTimOUJkHz1EPTmLb1OXeExz+8j0KZg@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>
	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>
	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
	<ip9km0$ppm$1@dough.gmane.org> <ip9oe7$hgb$1@dough.gmane.org>
	<BANLkTimS=Kf8JpJc51_1JNWr3VjWWvS8fw@mail.gmail.com>
	<BANLkTikmOD1exDeCf0FqmjbjKNeBzyQRMg@mail.gmail.com>
	<ipaqm5$1h7$1@dough.gmane.org>
	<BANLkTik9a1_QVMEs23LGN_Zvvy9_x4a0Jg@mail.gmail.com>
	<ipc30p$4sj$1@dough.gmane.org>
	<BANLkTi=wcAarA++N7ToMwvYd8BcbYKPWfQ@mail.gmail.com>
	<ipcpam$ceq$1@dough.gmane.org>
	<BANLkTi=G_Vzocf_4WO2YeZva2=k2M4SFwg@mail.gmail.com>
	<BANLkTi=OVfe-dq2sOhfWLau1mHUTBYTHiQ@mail.gmail.com>
	<BANLkTikufXV_Ooe=uNmr=rmvyv=AVezKtQ@mail.gmail.com>
	<BANLkTimOUJkHz1EPTmLb1OXeExz+8j0KZg@mail.gmail.com>
Message-ID: <BANLkTi=2ekzkCndEufCAfsrmW2_9k1ji4w@mail.gmail.com>

On Thu, Apr 28, 2011 at 4:40 PM, Nick Coghlan <ncoghlan at gmail.com> wrote:
> Pondering the NaN problem further, I think we can relatively easily
> argue that reflexive behaviour at the object level fits within the
> scope of IEEE754.

Now we're talking. :-)

> 1. IEEE754 is a value-based system, with a finite number of distinct
> NaN payloads
> 2. Python is an object-based system. In addition to their payload, NaN
> objects are further distinguished by their identity (infinite in
> theory, in practice limited by available memory).
> 3. We can still technically be conformant with IEEE754 even if we say
> that a given NaN object is equivalent to itself, but not to other NaN
> objects with the same payload.
>
> Unfortunately, this still doesn't play well with serialisation, which
> assumes that the identity of float objects doesn't matter:
>
>>>> import pickle
>>>> nan = float('nan')
>>>> x = [nan, nan]
>>>> x[0] is x[1]
> True
>>>> y = pickle.loads(pickle.dumps(x))
>>>> y
> [nan, nan]
>>>> y[0] is y[1]
> False
>
> Contrast that with the handling of lists, where identity is known to
> be significant:
>
>>>> x = [[]]*2
>>>> x[0] is x[1]
> True
>>>> y = pickle.loads(pickle.dumps(x))
>>>> y
> [[], []]
>>>> y[0] is y[1]
> True

Probably wouldn't kill us if we fixed pickle to take object identity into
account for floats whose value is nan. (Fortunately for 3rd party
types pickle always preserves identity.)
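
A rough sketch of the idea (hypothetical, and it cheats by keeping the
NaN objects in a side table rather than encoding their identity in the
stream itself, so it's an illustration rather than a real fix):

import io, math, pickle

class NaNPickler(pickle.Pickler):
    # Give each distinct NaN object a persistent id so identity survives.
    def __init__(self, file, protocol=None):
        super().__init__(file, protocol)
        self._seen = {}     # id(nan) -> slot
        self.nans = []      # slot -> the NaN object itself

    def persistent_id(self, obj):
        if isinstance(obj, float) and math.isnan(obj):
            slot = self._seen.setdefault(id(obj), len(self.nans))
            if slot == len(self.nans):
                self.nans.append(obj)
            return ("nan", slot)
        return None

class NaNUnpickler(pickle.Unpickler):
    def __init__(self, file, nans):
        super().__init__(file)
        self._nans = nans

    def persistent_load(self, pid):
        kind, slot = pid
        return self._nans[slot]

nan = float('nan')
buf = io.BytesIO()
pickler = NaNPickler(buf)
pickler.dump([nan, nan])
buf.seek(0)
y = NaNUnpickler(buf, pickler.nans).load()
assert y[0] is y[1]    # identity preserved this time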

> I'd say I've definitely come around to being +0 on the idea of making
> the float() and decimal.Decimal() __eq__ definitions reflexive, but
> doing so does have implications when it comes to the ability to
> accurately save and restore application state. It isn't as simple as
> just adding "if self is other: return True" to the respective __eq__
> implementations.

But it seems pickle is *already* broken, so that can't really be an
argument against the proposed change, right?

-- 
--Guido van Rossum (python.org/~guido)

From alexander.belopolsky at gmail.com  Fri Apr 29 01:57:40 2011
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Thu, 28 Apr 2011 19:57:40 -0400
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <BANLkTi=2ekzkCndEufCAfsrmW2_9k1ji4w@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>
	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>
	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
	<ip9km0$ppm$1@dough.gmane.org> <ip9oe7$hgb$1@dough.gmane.org>
	<BANLkTimS=Kf8JpJc51_1JNWr3VjWWvS8fw@mail.gmail.com>
	<BANLkTikmOD1exDeCf0FqmjbjKNeBzyQRMg@mail.gmail.com>
	<ipaqm5$1h7$1@dough.gmane.org>
	<BANLkTik9a1_QVMEs23LGN_Zvvy9_x4a0Jg@mail.gmail.com>
	<ipc30p$4sj$1@dough.gmane.org>
	<BANLkTi=wcAarA++N7ToMwvYd8BcbYKPWfQ@mail.gmail.com>
	<ipcpam$ceq$1@dough.gmane.org>
	<BANLkTi=G_Vzocf_4WO2YeZva2=k2M4SFwg@mail.gmail.com>
	<BANLkTi=OVfe-dq2sOhfWLau1mHUTBYTHiQ@mail.gmail.com>
	<BANLkTikufXV_Ooe=uNmr=rmvyv=AVezKtQ@mail.gmail.com>
	<BANLkTimOUJkHz1EPTmLb1OXeExz+8j0KZg@mail.gmail.com>
	<BANLkTi=2ekzkCndEufCAfsrmW2_9k1ji4w@mail.gmail.com>
Message-ID: <BANLkTinW1=jKgY+r+z1AK-RwW42eg5S49g@mail.gmail.com>

On Thu, Apr 28, 2011 at 7:47 PM, Guido van Rossum <guido at python.org> wrote:
> On Thu, Apr 28, 2011 at 4:40 PM, Nick Coghlan <ncoghlan at gmail.com> wrote:
>> Pondering the NaN problem further, I think we can relatively easily
>> argue that reflexive behaviour at the object level fits within the
>> scope of IEEE754.
>
> Now we're talking. :-)
>

Note that Kahan is very critical of Java's approach, but NaN objects'
comparison is not on his list of Java warts:

http://www.cs.berkeley.edu/~wkahan/JAVAhurt.pdf

From v+python at g.nevcal.com  Fri Apr 29 02:24:35 2011
From: v+python at g.nevcal.com (Glenn Linderman)
Date: Thu, 28 Apr 2011 17:24:35 -0700
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <BANLkTimOUJkHz1EPTmLb1OXeExz+8j0KZg@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>	<ip9km0$ppm$1@dough.gmane.org>
	<ip9oe7$hgb$1@dough.gmane.org>	<BANLkTimS=Kf8JpJc51_1JNWr3VjWWvS8fw@mail.gmail.com>	<BANLkTikmOD1exDeCf0FqmjbjKNeBzyQRMg@mail.gmail.com>	<ipaqm5$1h7$1@dough.gmane.org>	<BANLkTik9a1_QVMEs23LGN_Zvvy9_x4a0Jg@mail.gmail.com>	<ipc30p$4sj$1@dough.gmane.org>	<BANLkTi=wcAarA++N7ToMwvYd8BcbYKPWfQ@mail.gmail.com>	<ipcpam$ceq$1@dough.gmane.org>	<BANLkTi=G_Vzocf_4WO2YeZva2=k2M4SFwg@mail.gmail.com>	<BANLkTi=OVfe-dq2sOhfWLau1mHUTBYTHiQ@mail.gmail.com>	<BANLkTikufXV_Ooe=uNmr=rmvyv=AVezKtQ@mail.gmail.com>
	<BANLkTimOUJkHz1EPTmLb1OXeExz+8j0KZg@mail.gmail.com>
Message-ID: <4DBA0543.1010603@g.nevcal.com>

On 4/28/2011 4:40 PM, Nick Coghlan wrote:
> Hmm, true. And things like count() and index() would still be
> thoroughly broken for sequences. OK, so scratch that idea - there's
> simply no sane way to handle such objects without using an
> identity-based container that ignores equality definitions altogether.


And the problem with that is that not all values are interned, to share 
a single identity per value, correct?

On the other hand, proliferation of float objects containing NaN 
"works", thus so would proliferation of non-float objects of the same 
value... but "works" would have a different meaning when there could be 
multiple identities of 6,981,433 in the same set.

But this does bring up an interesting enough point to cause me to rejoin 
the conversation:

Would it be reasonable to implement 3 types of containers:

1) using __eq__ (would not use identity comparison optimization)
2) using is (the case you describe above)
3) the status quo: is or __eq__

The first two would require an explicit constructor call because the 
syntax would be retained for case 3 for backward compatibility.

Heavy users of NaN and other similar values might find case 1 useful, 
although they would need to be careful with mappings and sets.

Heavy users of NumPy and other similar structures might find case 2 useful.


Offering the choice, and documenting the alternatives, may make a lot 
more programmers choose the proper comparison operations, and make them 
less likely to overlook or pooh-pooh the issue with the thought that it 
won't happen to their program anyway...
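
A rough sketch of what case 2 might look like (hypothetical - keyed
purely on object identity, ignoring __eq__ and __hash__ entirely):

class IdentitySet(object):
    def __init__(self, items=()):
        self._items = {}          # id(obj) -> obj (keeps the object alive)
        for item in items:
            self.add(item)
    def add(self, item):
        self._items[id(item)] = item
    def discard(self, item):
        self._items.pop(id(item), None)
    def __contains__(self, item):
        return id(item) in self._items
    def __len__(self):
        return len(self._items)
    def __iter__(self):
        return iter(self._items.values())

nan = float('nan')
s = IdentitySet([nan, float('nan')])
print(nan in s)             # True - the very same object
print(float('nan') in s)    # False - a different object, however "equal"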

From guido at python.org  Fri Apr 29 02:30:01 2011
From: guido at python.org (Guido van Rossum)
Date: Thu, 28 Apr 2011 17:30:01 -0700
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <4DBA0543.1010603@g.nevcal.com>
References: <4DB7E3EA.3030208@avl.com>
	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>
	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
	<ip9km0$ppm$1@dough.gmane.org> <ip9oe7$hgb$1@dough.gmane.org>
	<BANLkTimS=Kf8JpJc51_1JNWr3VjWWvS8fw@mail.gmail.com>
	<BANLkTikmOD1exDeCf0FqmjbjKNeBzyQRMg@mail.gmail.com>
	<ipaqm5$1h7$1@dough.gmane.org>
	<BANLkTik9a1_QVMEs23LGN_Zvvy9_x4a0Jg@mail.gmail.com>
	<ipc30p$4sj$1@dough.gmane.org>
	<BANLkTi=wcAarA++N7ToMwvYd8BcbYKPWfQ@mail.gmail.com>
	<ipcpam$ceq$1@dough.gmane.org>
	<BANLkTi=G_Vzocf_4WO2YeZva2=k2M4SFwg@mail.gmail.com>
	<BANLkTi=OVfe-dq2sOhfWLau1mHUTBYTHiQ@mail.gmail.com>
	<BANLkTikufXV_Ooe=uNmr=rmvyv=AVezKtQ@mail.gmail.com>
	<BANLkTimOUJkHz1EPTmLb1OXeExz+8j0KZg@mail.gmail.com>
	<4DBA0543.1010603@g.nevcal.com>
Message-ID: <BANLkTin++aOi3oin_7qhLEoOKy_=CGi9HQ@mail.gmail.com>

On Thu, Apr 28, 2011 at 5:24 PM, Glenn Linderman <v+python at g.nevcal.com> wrote:
> Would it be reasonable to implement 3 types of containers:

That's something for python-ideas. Occasionally containers that use
custom comparisons come in handy (e.g. case-insensitive dicts).
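
A minimal sketch of that kind of container (it only covers item access;
a full version would also need get(), update(), the constructor, etc.):

class CaseInsensitiveDict(dict):
    @staticmethod
    def _fold(key):
        return key.lower() if isinstance(key, str) else key
    def __setitem__(self, key, value):
        dict.__setitem__(self, self._fold(key), value)
    def __getitem__(self, key):
        return dict.__getitem__(self, self._fold(key))
    def __delitem__(self, key):
        dict.__delitem__(self, self._fold(key))
    def __contains__(self, key):
        return dict.__contains__(self, self._fold(key))

headers = CaseInsensitiveDict()
headers['Content-Type'] = 'text/plain'
print(headers['content-type'])    # text/plain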

-- 
--Guido van Rossum (python.org/~guido)

From steve at pearwood.info  Fri Apr 29 02:31:10 2011
From: steve at pearwood.info (Steven D'Aprano)
Date: Fri, 29 Apr 2011 10:31:10 +1000
Subject: [Python-Dev] Not-a-Number (was
 PyObject_RichCompareBool	identity shortcut)
In-Reply-To: <ipchp6$1ba$1@dough.gmane.org>
References: <4DB7E3EA.3030208@avl.com>	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>	<87d3k79jvt.fsf@uwakimon.sk.tsukuba.ac.jp>	<BANLkTi=AusPRDsf2zKDGteZ5dGxs0EEuXw@mail.gmail.com>	<4DB90748.4030501@g.nevcal.com>	<BANLkTi=eAug-2n+MsQvSpaet5PM4NQDHSg@mail.gmail.com>	<4DB916DE.1050302@g.nevcal.com>	<BANLkTikGVfox3dXkO7B5f5iQbX5L8ypNgw@mail.gmail.com>	<4DB927F4.3040206@dcs.gla.ac.uk>
	<ipchp6$1ba$1@dough.gmane.org>
Message-ID: <4DBA06CE.1080305@pearwood.info>

Terry Reedy wrote:

> I think the committee goofed -- badly. Statisticians used missing value 
> indicators long before the committee existed. They had no problem 
> thinking that the indicator, as an object, equaled itself. So one could 
> write (and I often did through the 1980s) the equivalent of
> 
> for i,x in enumerate(datavec):
>   if x == XMIS: # singleton missing value indicator for BMDP
>     datavec[i] = default

But NANs aren't missing values (although some people use them as such, 
which can be considered an abuse of the concept).

R distinguishes NANs from missing values: they have a built-in value 
NaN, and a separate built-in value NA which is the canonical missing 
value. R also treats comparisons of both special values as a missing value:

 > NA == NA
[1] NA
 > NaN == NaN
[1] NA

including reflexivity:

 > x = NA
 > x == x
[1] NA

which strikes me as the worst of both worlds, guaranteed to annoy those 
who want the IEEE behaviour where NANs compare unequal, those like Terry 
who expect missing values to compare equal to other missing values, and 
those who want reflexivity to be treated as an invariant no matter what.



>> NaN is Not a Number (therefore should be neither a float nor a Decimal).
>> Making it a new class would solve some of the problems discussed,
>> but would create new problems instead.
> 
> Agreed, if we were starting fresh.

I don't see that making NANs a separate class would make any practical 
difference whatsoever, but the point is moot since we're not starting 
fresh :)


>> Correct behaviour of collections is more important than IEEE conformance
>> of NaN comparisons.
> 
> Also agreed.

To be pedantic, the IEEE standard doesn't have anything to say about 
comparisons of lists of floats that might contain NANs. Given the 
current *documented* behaviour that list equality is based on object 
equality, the actual behaviour is surprising, but I don't think there is 
anything wrong with the idea of containers assuming that their elements 
are reflexive.





-- 
Steven

From robert.kern at gmail.com  Fri Apr 29 02:58:03 2011
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 28 Apr 2011 19:58:03 -0500
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <BANLkTikufXV_Ooe=uNmr=rmvyv=AVezKtQ@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>	<ip9km0$ppm$1@dough.gmane.org>
	<ip9oe7$hgb$1@dough.gmane.org>	<BANLkTimS=Kf8JpJc51_1JNWr3VjWWvS8fw@mail.gmail.com>	<BANLkTikmOD1exDeCf0FqmjbjKNeBzyQRMg@mail.gmail.com>	<ipaqm5$1h7$1@dough.gmane.org>	<BANLkTik9a1_QVMEs23LGN_Zvvy9_x4a0Jg@mail.gmail.com>	<ipc30p$4sj$1@dough.gmane.org>	<BANLkTi=wcAarA++N7ToMwvYd8BcbYKPWfQ@mail.gmail.com>	<ipcpam$ceq$1@dough.gmane.org>	<BANLkTi=G_Vzocf_4WO2YeZva2=k2M4SFwg@mail.gmail.com>	<BANLkTi=OVfe-dq2sOhfWLau1mHUTBYTHiQ@mail.gmail.com>
	<BANLkTikufXV_Ooe=uNmr=rmvyv=AVezKtQ@mail.gmail.com>
Message-ID: <ipd2es$m7s$1@dough.gmane.org>

On 4/28/11 6:13 PM, Guido van Rossum wrote:
> On Thu, Apr 28, 2011 at 4:04 PM, Nick Coghlan<ncoghlan at gmail.com>  wrote:
>> On Fri, Apr 29, 2011 at 8:55 AM, Guido van Rossum<guido at python.org>  wrote:
>>> Sorry, we'll have to make an exception for those of course. This will
>>> somewhat complicate the interpretation of well-behaved, because those
>>> are *not* well-behaved as far as containers go (both dict key lookup
>>> and list membership are affected) but it is not a bug -- however it is
>>> a bug to put these in containers and then use container comparisons on
>>> the container.
>>
>> That's a point in favour of the status quo, though - with the burden
>> of enforcing reflexivity placed on the containers, types are free to
>> make use of rich comparisons to return more than just simple
>> True/False results.
>
> Possibly. Though for types that *do* return True/False, NaN's behavior
> can still be infuriating.
>
>> I hadn't really thought about it that way before this discussion - it
>> is the identity checking behaviour of the builtin containers that lets
>> us sensibly handle cases like sets of NumPy arrays.
>
> But do they? For non-empty arrays, __eq__ will always return something
> that is considered true,

Actually, numpy.ndarray.__nonzero__() raises an exception. We've decided that 
there are no good conventions for deciding whether an array should be considered 
True or False that won't mislead people. It's quite astonishing how many people 
will just test "if x == y:" or "if x != y:" for a single set of inputs and 
"confirm" their guess as to the general rule from that.

But your point stands: numpy arrays cannot be members of sets or keys of dicts 
or orderable/"in-able" elements of lists.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
  that is made terrible by our own mad attempt to interpret it as though it had
  an underlying truth."
   -- Umberto Eco


From greg.ewing at canterbury.ac.nz  Fri Apr 29 03:08:05 2011
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Fri, 29 Apr 2011 13:08:05 +1200
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <BANLkTi=OVfe-dq2sOhfWLau1mHUTBYTHiQ@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>
	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>
	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>
	<ip9km0$ppm$1@dough.gmane.org> <ip9oe7$hgb$1@dough.gmane.org>
	<BANLkTimS=Kf8JpJc51_1JNWr3VjWWvS8fw@mail.gmail.com>
	<BANLkTikmOD1exDeCf0FqmjbjKNeBzyQRMg@mail.gmail.com>
	<ipaqm5$1h7$1@dough.gmane.org>
	<BANLkTik9a1_QVMEs23LGN_Zvvy9_x4a0Jg@mail.gmail.com>
	<ipc30p$4sj$1@dough.gmane.org>
	<BANLkTi=wcAarA++N7ToMwvYd8BcbYKPWfQ@mail.gmail.com>
	<ipcpam$ceq$1@dough.gmane.org>
	<BANLkTi=G_Vzocf_4WO2YeZva2=k2M4SFwg@mail.gmail.com>
	<BANLkTi=OVfe-dq2sOhfWLau1mHUTBYTHiQ@mail.gmail.com>
Message-ID: <4DBA0F75.5040002@canterbury.ac.nz>

Nick Coghlan wrote:

> I hadn't really thought about it that way before this discussion - it
> is the identity checking behaviour of the builtin containers that lets
> us sensibly handle cases like sets of NumPy arrays.

Except that it doesn't:

 >>> from numpy import array
 >>> a1 = array([1,2])
 >>> a2 = array([3,4])
 >>> s = set([a1, a2])
Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'numpy.ndarray'

Lists aren't trouble-free either:

 >>> lst = [a1, a2]
 >>> a2 in lst
Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
ValueError: The truth value of an array with more than one element is ambiguous. 
Use a.any() or a.all()

-- 
Greg

From ben+python at benfinney.id.au  Fri Apr 29 03:11:59 2011
From: ben+python at benfinney.id.au (Ben Finney)
Date: Fri, 29 Apr 2011 11:11:59 +1000
Subject: [Python-Dev] Not-a-Number
References: <4DB7E3EA.3030208@avl.com>
	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>
	<87d3k79jvt.fsf@uwakimon.sk.tsukuba.ac.jp>
	<BANLkTi=AusPRDsf2zKDGteZ5dGxs0EEuXw@mail.gmail.com>
	<4DB90748.4030501@g.nevcal.com>
	<BANLkTi=eAug-2n+MsQvSpaet5PM4NQDHSg@mail.gmail.com>
	<4DB916DE.1050302@g.nevcal.com>
	<BANLkTikGVfox3dXkO7B5f5iQbX5L8ypNgw@mail.gmail.com>
	<4DB927F4.3040206@dcs.gla.ac.uk> <ipchp6$1ba$1@dough.gmane.org>
Message-ID: <87wridkgio.fsf@benfinney.id.au>

Terry Reedy <tjreedy at udel.edu> writes:

> On 4/28/2011 4:40 AM, Mark Shannon wrote:
> > NaN does not have to be a float or a Decimal.
> > Perhaps it should have its own class.
>
> Like None

Would it make sense for “NaN” to be another instance of “NoneType”?

-- 
 \      “I am too firm in my consciousness of the marvelous to be ever |
  `\       fascinated by the mere supernatural …” --Joseph Conrad, _The |
_o__)                                                     Shadow-Line_ |
Ben Finney


From greg.ewing at canterbury.ac.nz  Fri Apr 29 03:18:55 2011
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Fri, 29 Apr 2011 13:18:55 +1200
Subject: [Python-Dev] Not-a-Number
In-Reply-To: <87wridkgio.fsf@benfinney.id.au>
References: <4DB7E3EA.3030208@avl.com>
	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>
	<87d3k79jvt.fsf@uwakimon.sk.tsukuba.ac.jp>
	<BANLkTi=AusPRDsf2zKDGteZ5dGxs0EEuXw@mail.gmail.com>
	<4DB90748.4030501@g.nevcal.com>
	<BANLkTi=eAug-2n+MsQvSpaet5PM4NQDHSg@mail.gmail.com>
	<4DB916DE.1050302@g.nevcal.com>
	<BANLkTikGVfox3dXkO7B5f5iQbX5L8ypNgw@mail.gmail.com>
	<4DB927F4.3040206@dcs.gla.ac.uk> <ipchp6$1ba$1@dough.gmane.org>
	<87wridkgio.fsf@benfinney.id.au>
Message-ID: <4DBA11FF.6080606@canterbury.ac.nz>

Taking a step back from all this, why does Python allow
NaNs to arise from computations *at all*?

+Inf and -Inf are arguably useful elements of the algebra,
yet Python insists on raising an exception for 1.0/0.0
instead of returning an infinity.

Why do this but not raise an exception for any operation
that produces a NaN?

-- 
Greg

From prologic at shortcircuit.net.au  Fri Apr 29 03:20:41 2011
From: prologic at shortcircuit.net.au (James Mills)
Date: Fri, 29 Apr 2011 11:20:41 +1000
Subject: [Python-Dev] Not-a-Number
In-Reply-To: <87wridkgio.fsf@benfinney.id.au>
References: <4DB7E3EA.3030208@avl.com>
	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>
	<87d3k79jvt.fsf@uwakimon.sk.tsukuba.ac.jp>
	<BANLkTi=AusPRDsf2zKDGteZ5dGxs0EEuXw@mail.gmail.com>
	<4DB90748.4030501@g.nevcal.com>
	<BANLkTi=eAug-2n+MsQvSpaet5PM4NQDHSg@mail.gmail.com>
	<4DB916DE.1050302@g.nevcal.com>
	<BANLkTikGVfox3dXkO7B5f5iQbX5L8ypNgw@mail.gmail.com>
	<4DB927F4.3040206@dcs.gla.ac.uk> <ipchp6$1ba$1@dough.gmane.org>
	<87wridkgio.fsf@benfinney.id.au>
Message-ID: <BANLkTi=xQz2Twh7LS2mu44J9CXOtmHSe+A@mail.gmail.com>

On Fri, Apr 29, 2011 at 11:11 AM, Ben Finney <ben+python at benfinney.id.au> wrote:
> Would it make sense for “NaN” to be another instance of “NoneType”?

This is fine IMHO as I (personally) find myself doing things like:

if x is None:
    ...

cheers
James

-- 
-- James Mills
--
-- "Problems are solved by method"

From steve at pearwood.info  Fri Apr 29 03:37:16 2011
From: steve at pearwood.info (Steven D'Aprano)
Date: Fri, 29 Apr 2011 11:37:16 +1000
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <BANLkTimOUJkHz1EPTmLb1OXeExz+8j0KZg@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>	<633DDCFF-D9E0-4F66-AB5B-B3672F9F1740@gmail.com>	<BANLkTi=BnJJOHYSpS5jZE7MmczrV3VvqzQ@mail.gmail.com>	<ip9km0$ppm$1@dough.gmane.org>
	<ip9oe7$hgb$1@dough.gmane.org>	<BANLkTimS=Kf8JpJc51_1JNWr3VjWWvS8fw@mail.gmail.com>	<BANLkTikmOD1exDeCf0FqmjbjKNeBzyQRMg@mail.gmail.com>	<ipaqm5$1h7$1@dough.gmane.org>	<BANLkTik9a1_QVMEs23LGN_Zvvy9_x4a0Jg@mail.gmail.com>	<ipc30p$4sj$1@dough.gmane.org>	<BANLkTi=wcAarA++N7ToMwvYd8BcbYKPWfQ@mail.gmail.com>	<ipcpam$ceq$1@dough.gmane.org>	<BANLkTi=G_Vzocf_4WO2YeZva2=k2M4SFwg@mail.gmail.com>	<BANLkTi=OVfe-dq2sOhfWLau1mHUTBYTHiQ@mail.gmail.com>	<BANLkTikufXV_Ooe=uNmr=rmvyv=AVezKtQ@mail.gmail.com>
	<BANLkTimOUJkHz1EPTmLb1OXeExz+8j0KZg@mail.gmail.com>
Message-ID: <4DBA164C.2020406@pearwood.info>

Nick Coghlan wrote:

> 1. IEEE754 is a value-based system, with a finite number of distinct
> NaN payloads
> 2. Python is an object-based system. In addition to their payload, NaN
> objects are further distinguished by their identity (infinite in
> theory, in practice limited by available memory).

I argue that's an implementation detail that makes no difference. NANs 
should compare unequal, including to themselves. That's the clear intention 
of IEEE-754. There's no exception made for "unless y is another name for 
x". If there were, NANs would be reflexive, and we wouldn't be having 
this discussion, but the non-reflexivity of NANs is intended behaviour.

The clear equivalent to object identity in value-languages is memory 
location. If you compare variable x to the same x, IEEE754 says you 
should get False.

Consider:

import math
import random

# Two distinct NANs are calculated somewhere...
x = float('nan')
y = float('nan')

# They find themselves in some data in some arbitrary place
seq = [1, 2, x, y]
random.shuffle(seq)

# And later x is compared to some arbitrary element in the data
if math.isnan(x):
    if x == seq[0]:
        print("Definitely not a NAN")


nan != x is an important invariant, breaking it just makes NANs more 
complicated and less useful. Tests will need to be written "if x == y 
and not math.isnan(x)" to avoid getting the wrong result for NANs.

I don't see what the problem is that we're trying to fix. If containers 
wish to define container equality as taking identity into account, good 
for the container. Document it and move on, but please don't touch floats.



-- 
Steven

From steve at pearwood.info  Fri Apr 29 03:44:10 2011
From: steve at pearwood.info (Steven D'Aprano)
Date: Fri, 29 Apr 2011 11:44:10 +1000
Subject: [Python-Dev] Not-a-Number
In-Reply-To: <4DBA11FF.6080606@canterbury.ac.nz>
References: <4DB7E3EA.3030208@avl.com>	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>	<87d3k79jvt.fsf@uwakimon.sk.tsukuba.ac.jp>	<BANLkTi=AusPRDsf2zKDGteZ5dGxs0EEuXw@mail.gmail.com>	<4DB90748.4030501@g.nevcal.com>	<BANLkTi=eAug-2n+MsQvSpaet5PM4NQDHSg@mail.gmail.com>	<4DB916DE.1050302@g.nevcal.com>	<BANLkTikGVfox3dXkO7B5f5iQbX5L8ypNgw@mail.gmail.com>	<4DB927F4.3040206@dcs.gla.ac.uk>
	<ipchp6$1ba$1@dough.gmane.org>	<87wridkgio.fsf@benfinney.id.au>
	<4DBA11FF.6080606@canterbury.ac.nz>
Message-ID: <4DBA17EA.8020401@pearwood.info>

Greg Ewing wrote:
> Taking a step back from all this, why does Python allow
> NaNs to arise from computations *at all*?

The real question should be, why does Python treat all NANs as 
signalling NANs instead of quiet NANs? I don't believe this helps anyone.

> +Inf and -Inf are arguably useful elements of the algebra,
> yet Python insists on raising an exception for 1.0./0.0
> instead of returning an infinity.

I would argue that Python is wrong to do so.

As I've mentioned a couple of times now, 20 years ago Apple felt that 
NANs and INFs weren't too complicated for non-programmers using 
Hypercard. There's no sign that Apple were wrong to expose NANs and INFs 
to users, no flood of Hypercard users confused by NAN inequality.



-- 
Steven

From robert.kern at gmail.com  Fri Apr 29 04:56:47 2011
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 28 Apr 2011 21:56:47 -0500
Subject: [Python-Dev] Not-a-Number
In-Reply-To: <4DBA17EA.8020401@pearwood.info>
References: <4DB7E3EA.3030208@avl.com>	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>	<87d3k79jvt.fsf@uwakimon.sk.tsukuba.ac.jp>	<BANLkTi=AusPRDsf2zKDGteZ5dGxs0EEuXw@mail.gmail.com>	<4DB90748.4030501@g.nevcal.com>	<BANLkTi=eAug-2n+MsQvSpaet5PM4NQDHSg@mail.gmail.com>	<4DB916DE.1050302@g.nevcal.com>	<BANLkTikGVfox3dXkO7B5f5iQbX5L8ypNgw@mail.gmail.com>	<4DB927F4.3040206@dcs.gla.ac.uk>	<ipchp6$1ba$1@dough.gmane.org>	<87wridkgio.fsf@benfinney.id.au>	<4DBA11FF.6080606@canterbury.ac.nz>
	<4DBA17EA.8020401@pearwood.info>
Message-ID: <ipd9df$j5e$1@dough.gmane.org>

On 4/28/11 8:44 PM, Steven D'Aprano wrote:
> Greg Ewing wrote:
>> Taking a step back from all this, why does Python allow
>> NaNs to arise from computations *at all*?
>
> The real question should be, why does Python treat all NANs as signalling NANs
> instead of quiet NANs? I don't believe this helps anyone.

Actually, Python treats all NaNs as quiet NaNs and never signalling NaNs.
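
A small illustration of the distinction (the ZeroDivisionError comes from
the division itself; once a NaN object exists, it propagates quietly):

>>> nan = float('nan')
>>> nan + 1.0, nan * 0.0       # existing NaNs propagate quietly
(nan, nan)
>>> nan == nan, nan != nan
(False, True)
>>> 0.0 / 0.0                  # no NaN is produced; Python raises instead
Traceback (most recent call last):
  ...
ZeroDivisionError: float division by zero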

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
  that is made terrible by our own mad attempt to interpret it as though it had
  an underlying truth."
   -- Umberto Eco


From stephen at xemacs.org  Fri Apr 29 06:24:38 2011
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Fri, 29 Apr 2011 13:24:38 +0900
Subject: [Python-Dev] the role of assert in the standard library ?
In-Reply-To: <BANLkTikCsGweatq1YPsOGbbc_ND1sFE6LQ@mail.gmail.com>
References: <BANLkTimtoxmyNg9sn7e27soPBdxQMZ-q2Q@mail.gmail.com>
	<ipb8qr$6jl$1@dough.gmane.org>
	<20110428103733.5aefc6e0@neurotica.wooz.org>
	<19897.34348.886773.133607@montanaro.dyndns.org>
	<20110428112629.7dd26254@neurotica.wooz.org>
	<BANLkTikCsGweatq1YPsOGbbc_ND1sFE6LQ@mail.gmail.com>
Message-ID: <874o5hadmh.fsf@uwakimon.sk.tsukuba.ac.jp>

Tarek Ziadé writes:
 > On Thu, Apr 28, 2011 at 5:26 PM, Barry Warsaw <barry at python.org> wrote:

 > > BTW, I think it always helps to have a really good assert message, and/or a
 > > leading comment to explain *why* that condition can't possibly happen.
 > 
 > why bother, it can't happen ;)

Except before breakfast, says the Red Queen.


From stephen at xemacs.org  Fri Apr 29 06:36:08 2011
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Fri, 29 Apr 2011 13:36:08 +0900
Subject: [Python-Dev] PyObject_RichCompareBool identity shortcut
In-Reply-To: <4DB9A2F4.7020807@pearwood.info>
References: <4DB7E3EA.3030208@avl.com>
	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>
	<87d3k79jvt.fsf@uwakimon.sk.tsukuba.ac.jp>
	<BANLkTi=AusPRDsf2zKDGteZ5dGxs0EEuXw@mail.gmail.com>
	<4DB90748.4030501@g.nevcal.com>
	<BANLkTimoFKAKeG3iXQnBXi4x1xoLKPnkTA@mail.gmail.com>
	<4DB9A2F4.7020807@pearwood.info>
Message-ID: <8739l1ad3b.fsf@uwakimon.sk.tsukuba.ac.jp>

Steven D'Aprano writes:

 > (I grant that Alexander is an exception -- I understand that he does do 
 > a lot of numeric work, and does come across NANs, and still doesn't like 
 > them one bit.)

I don't think I'd want anybody who *likes* NaNs to have a commit bit
at python.org.<shiver/>

From steve at pearwood.info  Fri Apr 29 07:28:23 2011
From: steve at pearwood.info (Steven D'Aprano)
Date: Fri, 29 Apr 2011 15:28:23 +1000
Subject: [Python-Dev] Not-a-Number
In-Reply-To: <ipd9df$j5e$1@dough.gmane.org>
References: <4DB7E3EA.3030208@avl.com>	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>	<87d3k79jvt.fsf@uwakimon.sk.tsukuba.ac.jp>	<BANLkTi=AusPRDsf2zKDGteZ5dGxs0EEuXw@mail.gmail.com>	<4DB90748.4030501@g.nevcal.com>	<BANLkTi=eAug-2n+MsQvSpaet5PM4NQDHSg@mail.gmail.com>	<4DB916DE.1050302@g.nevcal.com>	<BANLkTikGVfox3dXkO7B5f5iQbX5L8ypNgw@mail.gmail.com>	<4DB927F4.3040206@dcs.gla.ac.uk>	<ipchp6$1ba$1@dough.gmane.org>	<87wridkgio.fsf@benfinney.id.au>	<4DBA11FF.6080606@canterbury.ac.nz>	<4DBA17EA.8020401@pearwood.info>
	<ipd9df$j5e$1@dough.gmane.org>
Message-ID: <4DBA4C77.2020507@pearwood.info>

Robert Kern wrote:
> On 4/28/11 8:44 PM, Steven D'Aprano wrote:
>> Greg Ewing wrote:
>>> Taking a step back from all this, why does Python allow
>>> NaNs to arise from computations *at all*?
>>
>> The real question should be, why does Python treat all NANs as 
>> signalling NANs
>> instead of quiet NANs? I don't believe this helps anyone.
> 
> Actually, Python treats all NaNs as quiet NaNs and never signalling NaNs.

Sorry, did I get that backwards? I thought it was signalling NANs that 
cause a signal (in Python terms, an exception)?

If I do x = 0.0/0 I get an exception instead of a NAN. Hence a 
signalling NAN.



-- 
Steven




From eliben at gmail.com  Fri Apr 29 08:26:27 2011
From: eliben at gmail.com (Eli Bendersky)
Date: Fri, 29 Apr 2011 09:26:27 +0300
Subject: [Python-Dev] [Python-checkins] cpython (2.7): Fix closes
 issue10761: tarfile.extractall failure when symlinked files are
In-Reply-To: <20110428173214.19fe3445@pitrou.net>
References: <E1QFLhw-0004TQ-Qe@dinsdale.python.org>
	<4DB97796.8010204@netwok.org>
	<20110428144450.GB2699@kevin> <20110428173214.19fe3445@pitrou.net>
Message-ID: <BANLkTi=DrH=QEiVwoO=YZPoc_WP7YhMMZw@mail.gmail.com>

>> On Thu, Apr 28, 2011 at 04:20:06PM +0200, Éric Araujo wrote:
>> > >         if hasattr(os, "symlink") and hasattr(os, "link"):
>> > >             # For systems that support symbolic and hard links.
>> > >             if tarinfo.issym():
>> > > +                if os.path.exists(targetpath):
>> > > +                    os.unlink(targetpath)
>> >
>> > Is there a race condition here?
>>
>> The lock to avoid race conditions (if you were thinking along those
>> lines) would usually be implemented at the higher level code which is
>> using extractall in threads.
>
> A lock would only protect against multi-threaded use of the
> tarfile module, which is probably quite rare and therefore not a real
> concern.
> The kind of race condition which can happen here is if an attacker
> creates "targetpath" between os.path.exists and os.unlink. Whether it
> is an exploitable flaw would need a detailed analysis, of course.
>

Just out of curiosity, could you please elaborate on the potential
threat of this? If the "exists" condition is true, targetpath already
exists, so what use is there in overwriting it? If the condition is
false, unlink isn't executed, so no harm either. What am I missing?

Eli

From ben+python at benfinney.id.au  Fri Apr 29 08:29:49 2011
From: ben+python at benfinney.id.au (Ben Finney)
Date: Fri, 29 Apr 2011 16:29:49 +1000
Subject: [Python-Dev] Not-a-Number
References: <4DB7E3EA.3030208@avl.com>
	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>
	<87d3k79jvt.fsf@uwakimon.sk.tsukuba.ac.jp>
	<BANLkTi=AusPRDsf2zKDGteZ5dGxs0EEuXw@mail.gmail.com>
	<4DB90748.4030501@g.nevcal.com>
	<BANLkTi=eAug-2n+MsQvSpaet5PM4NQDHSg@mail.gmail.com>
	<4DB916DE.1050302@g.nevcal.com>
	<BANLkTikGVfox3dXkO7B5f5iQbX5L8ypNgw@mail.gmail.com>
	<4DB927F4.3040206@dcs.gla.ac.uk> <ipchp6$1ba$1@dough.gmane.org>
	<87wridkgio.fsf@benfinney.id.au> <4DBA11FF.6080606@canterbury.ac.nz>
	<4DBA17EA.8020401@pearwood.info> <ipd9df$j5e$1@dough.gmane.org>
	<4DBA4C77.2020507@pearwood.info>
Message-ID: <878vutk1sy.fsf@benfinney.id.au>

Steven D'Aprano <steve at pearwood.info> writes:

> Robert Kern wrote:
> > On 4/28/11 8:44 PM, Steven D'Aprano wrote:
> >> The real question should be, why does Python treat all NANs as
> >> signalling NANs instead of quiet NANs? I don't believe this helps
> >> anyone.
> >
> > Actually, Python treats all NaNs as quiet NaNs and never signalling NaNs.
>
> Sorry, did I get that backwards? I thought it was signalling NANs that
> cause a signal (in Python terms, an exception)?
>
> If I do x = 0.0/0 I get an exception instead of a NAN. Hence a
> signalling NAN.

Robert has interpreted your “treats all NaNs as signalling NaNs” to mean
“treats all objects that Python calls a NaN as signalling NaNs”, and is
pointing out that no, the objects that Python calls “NaN” are all quiet
NaNs.

You might be clearer if you distinguish between what Python calls a NaN
and what you call a NaN. It seems you're saying that some Python
exception objects (e.g. ZeroDivisionError objects) are what you call
NaNs, despite the fact that they're not what Python calls a NaN.

-- 
 \             “We can't depend for the long run on distinguishing one |
  `\         bitstream from another in order to figure out which rules |
_o__)               apply.” --Eben Moglen, _Anarchism Triumphant_, 1999 |
Ben Finney


From ncoghlan at gmail.com  Fri Apr 29 08:35:21 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Fri, 29 Apr 2011 16:35:21 +1000
Subject: [Python-Dev] Not-a-Number
In-Reply-To: <4DBA4C77.2020507@pearwood.info>
References: <4DB7E3EA.3030208@avl.com>
	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>
	<87d3k79jvt.fsf@uwakimon.sk.tsukuba.ac.jp>
	<BANLkTi=AusPRDsf2zKDGteZ5dGxs0EEuXw@mail.gmail.com>
	<4DB90748.4030501@g.nevcal.com>
	<BANLkTi=eAug-2n+MsQvSpaet5PM4NQDHSg@mail.gmail.com>
	<4DB916DE.1050302@g.nevcal.com>
	<BANLkTikGVfox3dXkO7B5f5iQbX5L8ypNgw@mail.gmail.com>
	<4DB927F4.3040206@dcs.gla.ac.uk> <ipchp6$1ba$1@dough.gmane.org>
	<87wridkgio.fsf@benfinney.id.au>
	<4DBA11FF.6080606@canterbury.ac.nz>
	<4DBA17EA.8020401@pearwood.info> <ipd9df$j5e$1@dough.gmane.org>
	<4DBA4C77.2020507@pearwood.info>
Message-ID: <BANLkTik+4+Z+gqscy4Adft5A42C4TCbkcg@mail.gmail.com>

On Fri, Apr 29, 2011 at 3:28 PM, Steven D'Aprano <steve at pearwood.info> wrote:
> Robert Kern wrote:
>> Actually, Python treats all NaNs as quiet NaNs and never signalling NaNs.
>
> Sorry, did I get that backwards? I thought it was signalling NANs that cause
> a signal (in Python terms, an exception)?
>
> If I do x = 0.0/0 I get an exception instead of a NAN. Hence a signalling
> NAN.

Aside from the divide-by-zero case, we treat NaNs as quiet NaNs. This
is largely due to the fact that float operations are delegated to the
underlying CPU, and SIGFPE is ignored by default. You can fiddle with
it either by building and using the fpectl module, or else by
switching to decimal.Decimal() instead (which offers much finer
control over signalling through its thread-local context information).
The latter is by far the preferable course, unless you're targeting
specific hardware with well-defined FPE behaviour.
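
For example, a sketch of that control using the decimal context traps:

from decimal import Decimal, getcontext, InvalidOperation, DivisionByZero

ctx = getcontext()
ctx.traps[InvalidOperation] = False   # behave like quiet NaNs
ctx.traps[DivisionByZero] = False
print(Decimal(0) / Decimal(0))        # NaN
print(Decimal(1) / Decimal(0))        # Infinity

ctx.traps[InvalidOperation] = True    # behave like a signalling NaN
try:
    Decimal(0) / Decimal(0)
except InvalidOperation:
    print("raised instead of returning NaN")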

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From fijall at gmail.com  Fri Apr 29 08:38:56 2011
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Fri, 29 Apr 2011 08:38:56 +0200
Subject: [Python-Dev] Proposal for a common benchmark suite
In-Reply-To: <ipcl3g$ksq$1@dough.gmane.org>
References: <BANLkTik6rAqX28TrXgHmErK8jD5n_uquNQ@mail.gmail.com>
	<ipcfls$jti$1@dough.gmane.org> <4DB9CCDA.5060808@egenix.com>
	<ipcl3g$ksq$1@dough.gmane.org>
Message-ID: <BANLkTik7k3NLo10=oueJ=RkXhORLvg9q6w@mail.gmail.com>

On Thu, Apr 28, 2011 at 11:10 PM, Stefan Behnel <stefan_ml at behnel.de> wrote:
> M.-A. Lemburg, 28.04.2011 22:23:
>>
>> Stefan Behnel wrote:
>>>
>>> DasIch, 28.04.2011 20:55:
>>>>
>>>> the CPython
>>>> benchmarks have an extensive set of microbenchmarks in the pybench
>>>> package
>>>
>>> Try not to care too much about pybench. There is some value in it, but
>>> some of its microbenchmarks are also tied to CPython's interpreter
>>> behaviour. For example, the benchmarks for literals can easily be
>>> considered dead code by other Python implementations so that they may
>>> end up optimising the benchmarked code away completely, or at least
>>> partially. That makes a comparison of the results somewhat pointless.
>>
>> The point of the micro benchmarks in pybench is to be able to compare
>> them one-by-one, not by looking at the sum of the tests.
>>
>> If one implementation optimizes away some parts, then the comparison
>> will show this fact very clearly - and that's the whole point.
>>
>> Taking the sum of the micro benchmarks only has some meaning
>> as very rough indicator of improvement. That's why I wrote pybench:
>> to get a better, more details picture of what's happening,
>> rather than trying to find some way of measuring "average"
>> use.
>>
>> This "average" is very different depending on where you look:
>> for some applications method calls may be very important,
>> for others, arithmetic operations, and yet others may have more
>> need for fast attribute lookup.
>
> I wasn't talking about "averages" or "sums", and I also wasn't trying to put
> down pybench in general. As it stands, it makes sense as a benchmark for
> CPython.
>
> However, I'm arguing that a substantial part of it does not make sense as a
> benchmark for PyPy and others. With Cython, I couldn't get some of the
> literal arithmetic benchmarks to run at all. The runner script simply bails
> out with an error when the benchmarks accidentally run faster than the
> initial empty loop. I imagine that PyPy would eventually even drop the loop
> itself, thus leaving nothing to compare. Does that tell us that PyPy is
> faster than Cython for arithmetic? I don't think it does.
>
> When I see that a benchmark shows that one implementation runs in 100% less
> time than another, I simply go *shrug* and look for a better benchmark to
> compare the two.

I second here what Stefan says. This sort of benchmark might be
useful for CPython, but it's not particularly useful for PyPy or
for comparisons (or any other implementation which tries harder to
optimize stuff away). For example, a method call in PyPy would be
inlined and completely removed if the method is empty, so such a
benchmark does not measure method call overhead at all. That's why we
settled on medium-to-large examples where it's more of an average of
possible scenarios than just one.
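
To illustrate, a hypothetical sketch (not taken from pybench) of the kind
of microbenchmark that falls apart under such optimisation:

import timeit

class Dummy(object):
    def method(self):
        pass   # empty - a tracing JIT can inline and remove the call entirely

d = Dummy()
# On CPython this mostly measures call overhead; on an optimising
# implementation the loop body can collapse to nothing, so the two
# numbers aren't comparing the same work.
print(timeit.timeit(d.method, number=1000000))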

From steve at pearwood.info  Fri Apr 29 08:49:49 2011
From: steve at pearwood.info (Steven D'Aprano)
Date: Fri, 29 Apr 2011 16:49:49 +1000
Subject: [Python-Dev] Not-a-Number
In-Reply-To: <878vutk1sy.fsf@benfinney.id.au>
References: <4DB7E3EA.3030208@avl.com>	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>	<87d3k79jvt.fsf@uwakimon.sk.tsukuba.ac.jp>	<BANLkTi=AusPRDsf2zKDGteZ5dGxs0EEuXw@mail.gmail.com>	<4DB90748.4030501@g.nevcal.com>	<BANLkTi=eAug-2n+MsQvSpaet5PM4NQDHSg@mail.gmail.com>	<4DB916DE.1050302@g.nevcal.com>	<BANLkTikGVfox3dXkO7B5f5iQbX5L8ypNgw@mail.gmail.com>	<4DB927F4.3040206@dcs.gla.ac.uk>
	<ipchp6$1ba$1@dough.gmane.org>	<87wridkgio.fsf@benfinney.id.au>
	<4DBA11FF.6080606@canterbury.ac.nz>	<4DBA17EA.8020401@pearwood.info>
	<ipd9df$j5e$1@dough.gmane.org>	<4DBA4C77.2020507@pearwood.info>
	<878vutk1sy.fsf@benfinney.id.au>
Message-ID: <4DBA5F8D.60404@pearwood.info>

Ben Finney wrote:
> Steven D'Aprano <steve at pearwood.info> writes:
> 
>> Robert Kern wrote:
>>> On 4/28/11 8:44 PM, Steven D'Aprano wrote:
>>>> The real question should be, why does Python treat all NANs as
>>>> signalling NANs instead of quiet NANs? I don't believe this helps
>>>> anyone.
>>> Actually, Python treats all NaNs as quiet NaNs and never signalling NaNs.
>> Sorry, did I get that backwards? I thought it was signalling NANs that
>> cause a signal (in Python terms, an exception)?
>>
>> If I do x = 0.0/0 I get an exception instead of a NAN. Hence a
>> signalling NAN.
> 
> Robert has interpreted your ?treats all NaNs as signalling NaNs? to mean
> ?treats all objects that Python calls a NaN as signalling NaNs?, and is
> pointing out that no, the objects that Python calls ?NaN? are all quiet
> NaNs.

I'm sorry for my lack of clarity. I'm referring to functions which 
potentially produce NANs, not the exceptions themselves. A calculation 
which might have produced a (quiet) NAN as the result instead raises an 
exception (which I'm treating as equivalent to a signal).




-- 
Steven


From ncoghlan at gmail.com  Fri Apr 29 08:52:01 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Fri, 29 Apr 2011 16:52:01 +1000
Subject: [Python-Dev] [Python-checkins] cpython (2.7): Fix closes
 issue10761: tarfile.extractall failure when symlinked files are
In-Reply-To: <BANLkTi=DrH=QEiVwoO=YZPoc_WP7YhMMZw@mail.gmail.com>
References: <E1QFLhw-0004TQ-Qe@dinsdale.python.org>
	<4DB97796.8010204@netwok.org> <20110428144450.GB2699@kevin>
	<20110428173214.19fe3445@pitrou.net>
	<BANLkTi=DrH=QEiVwoO=YZPoc_WP7YhMMZw@mail.gmail.com>
Message-ID: <BANLkTin1V1MwPxh8i=6543qqV30Gg7FU-g@mail.gmail.com>

On Fri, Apr 29, 2011 at 4:26 PM, Eli Bendersky <eliben at gmail.com> wrote:
>>> On Thu, Apr 28, 2011 at 04:20:06PM +0200, Éric Araujo wrote:
>> The kind of race condition which can happen here is if an attacker
>> creates "targetpath" between os.path.exists and os.unlink. Whether it
>> is an exploitable flaw would need a detailed analysis, of course.
>>
>
> Just out of curiosity, could you please elaborate on the potential
> threat of this? If the "exists" condition is true, targetpath already
>> exists, so what use is there in overwriting it? If the condition is
> false, unlink isn't executed, so no harm either. What am I missing?

That's the "detailed analysis" part. What happens if other code
deletes the path, and the unlink() call subsequently fails despite the
successful exists() check? Hence why exception checking (as Nadeem
posted) is typically the only right way to do things that access an
external environment that supports multiple concurrent processes.
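
A sketch of that style (not necessarily Nadeem's exact patch - just the
usual EAFP shape for this situation):

import errno
import os

def remove_if_exists(targetpath):
    try:
        os.unlink(targetpath)
    except OSError as e:
        # Only "no such file" is expected; anything else is a real error.
        if e.errno != errno.ENOENT:
            raise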

For this kind of case, denial-of-service (i.e. an externally induced
program crash) is likely to be the limit of the damage, so the threat
isn't severe. Still worth avoiding the risk, though.

Really tricky cases can lead to all sorts of fun and games, like
manipulating programs that were granted elevated privileges into
executing malicious code that was put in place using only user
privileges (combining "sudo" and its ilk with "python" without passing
-E and -s is an unfortunately-less-than-tricky way sysadmins can shoot
themselves in the foot on that front).

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From stephen at xemacs.org  Fri Apr 29 09:10:15 2011
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Fri, 29 Apr 2011 16:10:15 +0900
Subject: [Python-Dev] Not-a-Number (was
	PyObject_RichCompareBool	identity shortcut)
In-Reply-To: <ipchp6$1ba$1@dough.gmane.org>
References: <4DB7E3EA.3030208@avl.com>
	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>
	<87d3k79jvt.fsf@uwakimon.sk.tsukuba.ac.jp>
	<BANLkTi=AusPRDsf2zKDGteZ5dGxs0EEuXw@mail.gmail.com>
	<4DB90748.4030501@g.nevcal.com>
	<BANLkTi=eAug-2n+MsQvSpaet5PM4NQDHSg@mail.gmail.com>
	<4DB916DE.1050302@g.nevcal.com>
	<BANLkTikGVfox3dXkO7B5f5iQbX5L8ypNgw@mail.gmail.com>
	<4DB927F4.3040206@dcs.gla.ac.uk> <ipchp6$1ba$1@dough.gmane.org>
Message-ID: <871v0la5yg.fsf@uwakimon.sk.tsukuba.ac.jp>

Terry Reedy writes:

 > > Python treats it as if it were a number:
 > 
 > As I said, so did the committee, and that was its mistake that we are 
 > more or less stuck with.

The committee didn't really have a choice.  You could ask that they
call NaNs something else, but some bit pattern is going to appear in
the result register after each computation, and further operations may
(try to) use that bit pattern.  Seems reasonable to me to apply duck-
typing and call those patterns "numbers" for the purpose of IEEE 754,
and to define them in such a way that operating on them produces a
non-NaN only when *all* numbers (including infinity) produce the same
non-NaN.

The alternative is to raise an exception whenever a NaN would be
generated (but something is still going to appear in the register; I
don't know any number that should be put there, do you?)  That is
excessively punishing to Python users and programmers, though, since
Python handles exceptions by terminating the computation.  (Kahan
points out that signaling NaNs are essentially never used for this
reason.)

Other aspects of NaN behavior may be a mistake.  But it's not clear to
me, even after all the discussion in this thread.

From holger.krekel at gmail.com  Fri Apr 29 09:55:57 2011
From: holger.krekel at gmail.com (Holger Krekel)
Date: Fri, 29 Apr 2011 09:55:57 +0200
Subject: [Python-Dev] the role of assert in the standard library ?
In-Reply-To: <3A131195-A97F-4C3A-A28D-7DEBB930CD03@gmail.com>
References: <BANLkTimtoxmyNg9sn7e27soPBdxQMZ-q2Q@mail.gmail.com>
	<BANLkTimOyCx-6C_wWzLombH0Z=jj6S3DbQ@mail.gmail.com>
	<BANLkTiko_MkPkf1M7jm2cW7+UzXzVNQeqg@mail.gmail.com>
	<EA284630-4FC8-4BBB-9033-7AE553660489@gmail.com>
	<BANLkTi=_R19SXp1t1fwrO-o+wTTGJCOBcQ@mail.gmail.com>
	<3A131195-A97F-4C3A-A28D-7DEBB930CD03@gmail.com>
Message-ID: <BANLkTim4-aLU392nWoy1uV+139fZ8vucBg@mail.gmail.com>

On Fri, Apr 29, 2011 at 12:31 AM, Raymond Hettinger
<raymond.hettinger at gmail.com> wrote:
>
> On Apr 28, 2011, at 3:07 PM, Guido van Rossum wrote:
>
>> On Thu, Apr 28, 2011 at 2:53 PM, Raymond Hettinger
>> <raymond.hettinger at gmail.com> wrote:
>>>
>>> On Apr 28, 2011, at 1:27 PM, Holger Krekel wrote:
>>>
>>>> On Thu, Apr 28, 2011 at 6:59 PM, Guido van Rossum <guido at python.org> wrote:
>>>>> On Thu, Apr 28, 2011 at 12:54 AM, Tarek Ziadé <ziade.tarek at gmail.com> wrote:
>>>>>> In my opinion assert should be avoided completely anywhere else than
>>>>>> in the tests. If this is a wrong statement, please let me know why :)
>>>>>
>>>>> I would turn that around. The assert statement should not be used in
>>>>> unit tests; unit tests should use self.assertXyzzy() always.
>>>>
>>>> FWIW this is only true for the unittest module/pkg policy for writing and
>>>> organising tests. There are other popular test frameworks like nose and pytest
>>>> which promote using plain asserts within writing unit tests and also allow to
>>>> write tests in functions.  And judging from my tutorials and other places many
>>>> people appreciate the ease of using asserts as compared to learning tons
>>>> of new methods. YMMV.
>>>
>>> I've also observed that people appreciate using asserts with nose.py and py.test.
>>
>> They must not appreciate -O. :-)
>
> It might be nice if there were a pragma or module variable to selectively enable asserts for a given test module so that -O would turn off asserts in the production code but leave them on in a test_suite.

A way to tell Python "if you are going to compile this module/path,
don't turn off asserts, no matter what" would be great.  Then
nose/py.test and whichever tools/apps could fine-tune the handling of
asserts.   (This would probably be better than marking things inline
for those use cases).  Then testing with "-O" would work nicely as
well which would be appreciated :)
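
In the meantime, a tiny sketch of the usual workaround - a helper that -O
cannot strip because it's an ordinary function call rather than an assert
statement:

def check(condition, msg=""):
    if not condition:
        raise AssertionError(msg)

check(2 + 2 == 4, "arithmetic is broken")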

best,
holger


> Raymond

From eliben at gmail.com  Fri Apr 29 10:02:51 2011
From: eliben at gmail.com (Eli Bendersky)
Date: Fri, 29 Apr 2011 11:02:51 +0300
Subject: [Python-Dev] [Python-checkins] cpython (2.7): Fix closes
 issue10761: tarfile.extractall failure when symlinked files are
In-Reply-To: <BANLkTin1V1MwPxh8i=6543qqV30Gg7FU-g@mail.gmail.com>
References: <E1QFLhw-0004TQ-Qe@dinsdale.python.org>
	<4DB97796.8010204@netwok.org>
	<20110428144450.GB2699@kevin> <20110428173214.19fe3445@pitrou.net>
	<BANLkTi=DrH=QEiVwoO=YZPoc_WP7YhMMZw@mail.gmail.com>
	<BANLkTin1V1MwPxh8i=6543qqV30Gg7FU-g@mail.gmail.com>
Message-ID: <BANLkTiknQdW_LTiYzWFzyJud0+DjqYMFuQ@mail.gmail.com>

On Fri, Apr 29, 2011 at 09:52, Nick Coghlan <ncoghlan at gmail.com> wrote:
> On Fri, Apr 29, 2011 at 4:26 PM, Eli Bendersky <eliben at gmail.com> wrote:
>>>> On Thu, Apr 28, 2011 at 04:20:06PM +0200, Éric Araujo wrote:
>>> The kind of race condition which can happen here is if an attacker
>>> creates "targetpath" between os.path.exists and os.unlink. Whether it
>>> is an exploitable flaw would need a detailed analysis, of course.
>>>
>>
>> Just out of curiosity, could you please elaborate on the potential
>> threat of this? If the "exists" condition is true, targetpath already
>> exists, so what use is there in overwriting it? If the condition is
>> false, unlink isn't executed, so no harm either. What am I missing?
>
> That's the "detailed analysis" part. What happens if other code
> deletes the path, and the unlink() call subsequently fails despite the
> successful exists() check? Hence why exception checking (as Nadeem
> posted) is typically the only right way to do things that access an
> external environment that supports multiple concurrent processes.
>

I completely understand this "other code/thread deletes the path
between exists() and unlink()" case - it indeed is a race condition
waiting to happen. What I didn't understand was Antoine's example of
"attacker creates targetpath between os.path.exists and os.unlink",
and was asking for a more detailed example, since I'm not really
experienced with security-oriented thinking.

Eli

From fijall at gmail.com  Fri Apr 29 10:12:54 2011
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Fri, 29 Apr 2011 10:12:54 +0200
Subject: [Python-Dev] the role of assert in the standard library ?
In-Reply-To: <BANLkTim4-aLU392nWoy1uV+139fZ8vucBg@mail.gmail.com>
References: <BANLkTimtoxmyNg9sn7e27soPBdxQMZ-q2Q@mail.gmail.com>
	<BANLkTimOyCx-6C_wWzLombH0Z=jj6S3DbQ@mail.gmail.com>
	<BANLkTiko_MkPkf1M7jm2cW7+UzXzVNQeqg@mail.gmail.com>
	<EA284630-4FC8-4BBB-9033-7AE553660489@gmail.com>
	<BANLkTi=_R19SXp1t1fwrO-o+wTTGJCOBcQ@mail.gmail.com>
	<3A131195-A97F-4C3A-A28D-7DEBB930CD03@gmail.com>
	<BANLkTim4-aLU392nWoy1uV+139fZ8vucBg@mail.gmail.com>
Message-ID: <BANLkTin3Tki+2U8z1EVSDbwps4SO=vKWvA@mail.gmail.com>

On Fri, Apr 29, 2011 at 9:55 AM, Holger Krekel <holger.krekel at gmail.com> wrote:
> On Fri, Apr 29, 2011 at 12:31 AM, Raymond Hettinger
> <raymond.hettinger at gmail.com> wrote:
>>
>> On Apr 28, 2011, at 3:07 PM, Guido van Rossum wrote:
>>
>>> On Thu, Apr 28, 2011 at 2:53 PM, Raymond Hettinger
>>> <raymond.hettinger at gmail.com> wrote:
>>>>
>>>> On Apr 28, 2011, at 1:27 PM, Holger Krekel wrote:
>>>>
>>>>> On Thu, Apr 28, 2011 at 6:59 PM, Guido van Rossum <guido at python.org> wrote:
>>>>>> On Thu, Apr 28, 2011 at 12:54 AM, Tarek Ziadé <ziade.tarek at gmail.com> wrote:
>>>>>>> In my opinion assert should be avoided completely anywhere else than
>>>>>>> in the tests. If this is a wrong statement, please let me know why :)
>>>>>>
>>>>>> I would turn that around. The assert statement should not be used in
>>>>>> unit tests; unit tests should use self.assertXyzzy() always.
>>>>>
>>>>> FWIW this is only true for the unittest module/pkg policy for writing and
>>>>> organising tests. There are other popular test frameworks like nose and pytest
>>>>> which promote using plain asserts within writing unit tests and also allow to
>>>>> write tests in functions.  And judging from my tutorials and other places many
>>>>> people appreciate the ease of using asserts as compared to learning tons
>>>>> of new methods. YMMV.
>>>>
>>>> I've also observed that people appreciate using asserts with nose.py and py.test.
>>>
>>> They must not appreciate -O. :-)
>>
>> It might be nice if there were a pragma or module variable to selectively enable asserts for a given test module so that -O would turn off asserts in the production code but leave them on in a test_suite.
>
> A way to tell Python "if you are going to compile this module/path,
> don't turn off asserts, no matter what" would be great.  Then
> nose/py.test and whichever tools/apps could fine-tune the handling of
> asserts.   (This would probably be better than marking things inline
> for those use cases).  Then testing with "-O" would work nicely as
> well which would be appreciated :)
>
> best,
> holger
>
>
>> Raymond
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/fijall%40gmail.com
>

Any reason why -O behavior regarding removing asserts should not be
changed? Or should I go to python-ideas?

From marks at dcs.gla.ac.uk  Fri Apr 29 11:03:34 2011
From: marks at dcs.gla.ac.uk (Mark Shannon)
Date: Fri, 29 Apr 2011 10:03:34 +0100
Subject: [Python-Dev] Proposal for a common benchmark suite
In-Reply-To: <BANLkTik7k3NLo10=oueJ=RkXhORLvg9q6w@mail.gmail.com>
References: <BANLkTik6rAqX28TrXgHmErK8jD5n_uquNQ@mail.gmail.com>	<ipcfls$jti$1@dough.gmane.org>
	<4DB9CCDA.5060808@egenix.com>	<ipcl3g$ksq$1@dough.gmane.org>
	<BANLkTik7k3NLo10=oueJ=RkXhORLvg9q6w@mail.gmail.com>
Message-ID: <4DBA7EE6.1000108@dcs.gla.ac.uk>

Maciej Fijalkowski wrote:
> On Thu, Apr 28, 2011 at 11:10 PM, Stefan Behnel <stefan_ml at behnel.de> wrote:
>> M.-A. Lemburg, 28.04.2011 22:23:
>>> Stefan Behnel wrote:
>>>> DasIch, 28.04.2011 20:55:
>>>>> the CPython
>>>>> benchmarks have an extensive set of microbenchmarks in the pybench
>>>>> package
>>>> Try not to care too much about pybench. There is some value in it, but
>>>> some of its microbenchmarks are also tied to CPython's interpreter
>>>> behaviour. For example, the benchmarks for literals can easily be
>>>> considered dead code by other Python implementations so that they may
>>>> end up optimising the benchmarked code away completely, or at least
>>>> partially. That makes a comparison of the results somewhat pointless.
>>> The point of the micro benchmarks in pybench is to be able to compare
>>> them one-by-one, not by looking at the sum of the tests.
>>>
>>> If one implementation optimizes away some parts, then the comparison
>>> will show this fact very clearly - and that's the whole point.
>>>
>>> Taking the sum of the micro benchmarks only has some meaning
>>> as very rough indicator of improvement. That's why I wrote pybench:
>>> to get a better, more details picture of what's happening,
>>> rather than trying to find some way of measuring "average"
>>> use.
>>>
>>> This "average" is very different depending on where you look:
>>> for some applications method calls may be very important,
>>> for others, arithmetic operations, and yet others may have more
>>> need for fast attribute lookup.
>> I wasn't talking about "averages" or "sums", and I also wasn't trying to put
>> down pybench in general. As it stands, it makes sense as a benchmark for
>> CPython.
>>
>> However, I'm arguing that a substantial part of it does not make sense as a
>> benchmark for PyPy and others. With Cython, I couldn't get some of the
>> literal arithmetic benchmarks to run at all. The runner script simply bails
>> out with an error when the benchmarks accidentally run faster than the
>> initial empty loop. I imagine that PyPy would eventually even drop the loop
>> itself, thus leaving nothing to compare. Does that tell us that PyPy is
>> faster than Cython for arithmetic? I don't think it does.
>>
>> When I see that a benchmark shows that one implementation runs in 100% less
>> time than another, I simply go *shrug* and look for a better benchmark to
>> compare the two.
> 
> I second here what Stefan says. This sort of benchmarks might be
> useful for CPython, but they're not particularly useful for PyPy or
> for comparisons (or any other implementation which tries harder to
> optimize stuff away). For example a method call in PyPy would be
> inlined and completely removed if method is empty, which does not
> measure method call overhead at all. That's why we settled on
> medium-to-large examples where it's more of an average of possible
> scenarios than just one.

If CPython were to start incorporating any specialising optimisations,
pybench wouldn't be much use for CPython.
The Unladen Swallow folks didn't like pybench as a benchmark.

From nadeem.vawda at gmail.com  Fri Apr 29 11:24:48 2011
From: nadeem.vawda at gmail.com (Nadeem Vawda)
Date: Fri, 29 Apr 2011 11:24:48 +0200
Subject: [Python-Dev] [Python-checkins] cpython (2.7): Fix closes
 issue10761: tarfile.extractall failure when symlinked files are
In-Reply-To: <BANLkTiknQdW_LTiYzWFzyJud0+DjqYMFuQ@mail.gmail.com>
References: <E1QFLhw-0004TQ-Qe@dinsdale.python.org>
	<4DB97796.8010204@netwok.org> <20110428144450.GB2699@kevin>
	<20110428173214.19fe3445@pitrou.net>
	<BANLkTi=DrH=QEiVwoO=YZPoc_WP7YhMMZw@mail.gmail.com>
	<BANLkTin1V1MwPxh8i=6543qqV30Gg7FU-g@mail.gmail.com>
	<BANLkTiknQdW_LTiYzWFzyJud0+DjqYMFuQ@mail.gmail.com>
Message-ID: <BANLkTinY27g6c3K_oqmWgQrdryTzhe0F-w@mail.gmail.com>

On Fri, Apr 29, 2011 at 10:02 AM, Eli Bendersky <eliben at gmail.com> wrote:
> I completely understand this "other code/thread deletes the path
> between exists() and unlink()" case - it indeed is a race condition
> waiting to happen. What I didn't understand was Antoine's example of
> "attacker creates targetpath between os.path.exists and os.unlink",
> and was asking for a more detailed example, since I'm not really
> experienced with security-oriented thinking.

If targetpath is created after the os.path.exists() check, then os.unlink()
will not be called, so os.symlink() will raise an exception when it sees that
targetpath already exists.

On Thu, Apr 28, 2011 at 5:44 PM, Antoine Pitrou <solipsis at pitrou.net> wrote:
> On Thu, 28 Apr 2011 17:40:05 +0200
> Nadeem Vawda <nadeem.vawda at gmail.com> wrote:
>>
>> The deletion case could be handled like this:
>>
>>              if tarinfo.issym():
>> +                try:
>> +                    os.unlink(targetpath)
>> +                except OSError as e:
>> +                    if e.errno != errno.ENOENT:
>> +                        raise
>>                  os.symlink(tarinfo.linkname, targetpath)
>
> Someone can still create "targetpath" between the calls to unlink() and
> symlink() though.

Like I said, that code only handles the case of targetpath being deleted.
I can't think of a similarly easy fix for the creation case. You could solve
that particular form of the problem with something like:

    if tarinfo.issym():
        while True:
            try:
                os.symlink(tarinfo.linkname, targetpath)
            except OSError as e:
                if e.errno != errno.EEXIST:
                    raise
            else:
                break
            try:
                os.unlink(targetpath)
            except OSError as e:
                if e.errno != errno.ENOENT:
                    raise

... but that would open up another DOS vulnerability - if an attacker manages
to keep re-creating targetpath in the window between unlink() and symlink(),
the loop would never terminate. Also, the code is rather ugly ;-)
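
(For what it's worth, the usual way to sidestep both races is to create
the link under a temporary name and atomically rename it over
targetpath; a rough sketch for illustration only, not part of the patch
above:)

    import os

    def replace_with_symlink(linkname, targetpath):
        # POSIX only: os.rename() replaces an existing entry atomically,
        # so there is no window in which targetpath is missing or stale.
        tmppath = '%s.%d.tmp' % (targetpath, os.getpid())
        os.symlink(linkname, tmppath)
        try:
            os.rename(tmppath, targetpath)
        except OSError:
            os.unlink(tmppath)
            raise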

Cheers,
Nadeem

From mal at egenix.com  Fri Apr 29 12:04:23 2011
From: mal at egenix.com (M.-A. Lemburg)
Date: Fri, 29 Apr 2011 12:04:23 +0200
Subject: [Python-Dev] Proposal for a common benchmark suite
In-Reply-To: <4DBA7EE6.1000108@dcs.gla.ac.uk>
References: <BANLkTik6rAqX28TrXgHmErK8jD5n_uquNQ@mail.gmail.com>	<ipcfls$jti$1@dough.gmane.org>	<4DB9CCDA.5060808@egenix.com>	<ipcl3g$ksq$1@dough.gmane.org>	<BANLkTik7k3NLo10=oueJ=RkXhORLvg9q6w@mail.gmail.com>
	<4DBA7EE6.1000108@dcs.gla.ac.uk>
Message-ID: <4DBA8D27.1070602@egenix.com>

Mark Shannon wrote:
> Maciej Fijalkowski wrote:
>> On Thu, Apr 28, 2011 at 11:10 PM, Stefan Behnel <stefan_ml at behnel.de>
>> wrote:
>>> M.-A. Lemburg, 28.04.2011 22:23:
>>>> Stefan Behnel wrote:
>>>>> DasIch, 28.04.2011 20:55:
>>>>>> the CPython
>>>>>> benchmarks have an extensive set of microbenchmarks in the pybench
>>>>>> package
>>>>> Try not to care too much about pybench. There is some value in it, but
>>>>> some of its microbenchmarks are also tied to CPython's interpreter
>>>>> behaviour. For example, the benchmarks for literals can easily be
>>>>> considered dead code by other Python implementations so that they may
>>>>> end up optimising the benchmarked code away completely, or at least
>>>>> partially. That makes a comparison of the results somewhat pointless.
>>>> The point of the micro benchmarks in pybench is to be able to compare
>>>> them one-by-one, not by looking at the sum of the tests.
>>>>
>>>> If one implementation optimizes away some parts, then the comparison
>>>> will show this fact very clearly - and that's the whole point.
>>>>
>>>> Taking the sum of the micro benchmarks only has some meaning
>>>> as very rough indicator of improvement. That's why I wrote pybench:
>>>> to get a better, more details picture of what's happening,
>>>> rather than trying to find some way of measuring "average"
>>>> use.
>>>>
>>>> This "average" is very different depending on where you look:
>>>> for some applications method calls may be very important,
>>>> for others, arithmetic operations, and yet others may have more
>>>> need for fast attribute lookup.
>>> I wasn't talking about "averages" or "sums", and I also wasn't trying
>>> to put
>>> down pybench in general. As it stands, it makes sense as a benchmark for
>>> CPython.
>>>
>>> However, I'm arguing that a substantial part of it does not make
>>> sense as a
>>> benchmark for PyPy and others. With Cython, I couldn't get some of the
>>> literal arithmetic benchmarks to run at all. The runner script simply
>>> bails
>>> out with an error when the benchmarks accidentally run faster than the
>>> initial empty loop. I imagine that PyPy would eventually even drop
>>> the loop
>>> itself, thus leaving nothing to compare. Does that tell us that PyPy is
>>> faster than Cython for arithmetic? I don't think it does.
>>>
>>> When I see that a benchmark shows that one implementation runs in
>>> 100% less
>>> time than another, I simply go *shrug* and look for a better
>>> benchmark to
>>> compare the two.
>>
>> I second here what Stefan says. This sort of benchmarks might be
>> useful for CPython, but they're not particularly useful for PyPy or
>> for comparisons (or any other implementation which tries harder to
>> optimize stuff away). For example a method call in PyPy would be
>> inlined and completely removed if method is empty, which does not
>> measure method call overhead at all. That's why we settled on
>> medium-to-large examples where it's more of an average of possible
>> scenarios than just one.
> 
> If CPython were to start incorporating any specialising optimisations,
> pybench wouldn't be much use for CPython.
> The Unladen Swallow folks didn't like pybench as a benchmark.

This is all true, but I think there's a general misunderstanding
of what pybench is.

I wrote pybench in 1997 when I was working on optimizing the
Python 1.5 implementation for use in a web application server.

At the time, we had pystone, and that was a really poor benchmark
for determining whether certain optimizations in the Python VM
and compiler made sense.

pybench was then improved and extended over the course of
several years, and added to Python 2.5 in 2006.

The benchmark is written as a framework for micro-benchmarks,
based on the assumption of a non-optimizing (byte code)
compiler.

As such, it may or may not work with an optimizing compiler.
The calibration part would likely have to be disabled for
an optimizing compiler (run with -C 0), and a new set of
benchmark tests would have to be added: ones which test
the Python implementation at a higher level than the
existing tests.

That last part is something people tend to forget: pybench
is not a monolithic application with a predefined and
fixed set of tests. It's a framework that can be extended
as needed.

All you have to do is add a new module with test classes
and import it in Setup.py.
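
(A rough sketch of such an extension module -- the attribute and method
names follow the conventions of the bundled test modules and should be
checked against pybench's own sources, so treat them as assumptions
rather than a recipe:)

    # Hypothetical DictTests.py, imported from Setup.py next to the
    # bundled test modules.
    from pybench import Test

    class DictUpdate(Test):

        version = 2.0      # bump whenever the workload changes
        operations = 3     # operations performed per round in test()
        rounds = 80000

        def test(self):
            for i in range(self.rounds):
                d = {}
                d.update(a=1)
                d.update(b=2)
                d.update(c=3)

        def calibrate(self):
            # Same loop overhead as test(), without the measured work.
            for i in range(self.rounds):
                pass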

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Apr 29 2011)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________
2011-06-20: EuroPython 2011, Florence, Italy               52 days to go

::: Try our new mxODBC.Connect Python Database Interface for free ! ::::


   eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
    D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
           Registered at Amtsgericht Duesseldorf: HRB 46611
               http://www.egenix.com/company/contact/

From fuzzyman at voidspace.org.uk  Fri Apr 29 12:22:23 2011
From: fuzzyman at voidspace.org.uk (Michael Foord)
Date: Fri, 29 Apr 2011 11:22:23 +0100
Subject: [Python-Dev] Proposal for a common benchmark suite
In-Reply-To: <4DBA8D27.1070602@egenix.com>
References: <BANLkTik6rAqX28TrXgHmErK8jD5n_uquNQ@mail.gmail.com>	<ipcfls$jti$1@dough.gmane.org>	<4DB9CCDA.5060808@egenix.com>	<ipcl3g$ksq$1@dough.gmane.org>	<BANLkTik7k3NLo10=oueJ=RkXhORLvg9q6w@mail.gmail.com>	<4DBA7EE6.1000108@dcs.gla.ac.uk>
	<4DBA8D27.1070602@egenix.com>
Message-ID: <4DBA915F.2060001@voidspace.org.uk>

On 29/04/2011 11:04, M.-A. Lemburg wrote:
> Mark Shannon wrote:
>> Maciej Fijalkowski wrote:
>>> On Thu, Apr 28, 2011 at 11:10 PM, Stefan Behnel<stefan_ml at behnel.de>
>>> wrote:
>>>> M.-A. Lemburg, 28.04.2011 22:23:
>>>>> Stefan Behnel wrote:
>>>>>> DasIch, 28.04.2011 20:55:
>>>>>>> the CPython
>>>>>>> benchmarks have an extensive set of microbenchmarks in the pybench
>>>>>>> package
>>>>>> Try not to care too much about pybench. There is some value in it, but
>>>>>> some of its microbenchmarks are also tied to CPython's interpreter
>>>>>> behaviour. For example, the benchmarks for literals can easily be
>>>>>> considered dead code by other Python implementations so that they may
>>>>>> end up optimising the benchmarked code away completely, or at least
>>>>>> partially. That makes a comparison of the results somewhat pointless.
>>>>> The point of the micro benchmarks in pybench is to be able to compare
>>>>> them one-by-one, not by looking at the sum of the tests.
>>>>>
>>>>> If one implementation optimizes away some parts, then the comparison
>>>>> will show this fact very clearly - and that's the whole point.
>>>>>
>>>>> Taking the sum of the micro benchmarks only has some meaning
>>>>> as very rough indicator of improvement. That's why I wrote pybench:
>>>>> to get a better, more details picture of what's happening,
>>>>> rather than trying to find some way of measuring "average"
>>>>> use.
>>>>>
>>>>> This "average" is very different depending on where you look:
>>>>> for some applications method calls may be very important,
>>>>> for others, arithmetic operations, and yet others may have more
>>>>> need for fast attribute lookup.
>>>> I wasn't talking about "averages" or "sums", and I also wasn't trying
>>>> to put
>>>> down pybench in general. As it stands, it makes sense as a benchmark for
>>>> CPython.
>>>>
>>>> However, I'm arguing that a substantial part of it does not make
>>>> sense as a
>>>> benchmark for PyPy and others. With Cython, I couldn't get some of the
>>>> literal arithmetic benchmarks to run at all. The runner script simply
>>>> bails
>>>> out with an error when the benchmarks accidentally run faster than the
>>>> initial empty loop. I imagine that PyPy would eventually even drop
>>>> the loop
>>>> itself, thus leaving nothing to compare. Does that tell us that PyPy is
>>>> faster than Cython for arithmetic? I don't think it does.
>>>>
>>>> When I see that a benchmark shows that one implementation runs in
>>>> 100% less
>>>> time than another, I simply go *shrug* and look for a better
>>>> benchmark to
>>>> compare the two.
>>> I second here what Stefan says. This sort of benchmarks might be
>>> useful for CPython, but they're not particularly useful for PyPy or
>>> for comparisons (or any other implementation which tries harder to
>>> optimize stuff away). For example a method call in PyPy would be
>>> inlined and completely removed if method is empty, which does not
>>> measure method call overhead at all. That's why we settled on
>>> medium-to-large examples where it's more of an average of possible
>>> scenarios than just one.
>> If CPython were to start incorporating any specialising optimisations,
>> pybench wouldn't be much use for CPython.
>> The Unladen Swallow folks didn't like pybench as a benchmark.
> This is all true, but I think there's a general misunderstanding
> of what pybench is.

pybench proved useful for IronPython. It certainly highlighted some 
performance problems with some of the basic operations it measures.

All the best,

Michael Foord

> I wrote pybench in 1997 when I was working on optimizing the
> Python 1.5 implementation for use in an web application server.
>
> At the time, we had pystone and that was a really poor benchmark
> for determining of whether certain optimizations in the Python VM
> and compiler made sense or not.
>
> pybench was then improved and extended over the course of
> several years and then added to Python 2.5 in 2006.
>
> The benchmark is written as framework for micro benchmarks
> based on the assumption of a non-optimizing (byte code)
> compiler.
>
> As such it may or may not work with an optimizing compiler.
> The calibration part would likely have to be disabled for
> an optimizing compiler (run with -C 0) and a new set of
> benchmark tests would have to be added; one which tests
> the Python implementation at a higher level than the
> existing tests.
>
> That last part is something people tend to forget: pybench
> is not a monolithic application with a predefined and
> fixed set of tests. It's a framework that can be extended
> as needed.
>
> All you have to do is add a new module with test classes
> and import it in Setup.py.
>


-- 
http://www.voidspace.org.uk/

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing http://www.sqlite.org/different.html


From flub at devork.be  Fri Apr 29 12:41:16 2011
From: flub at devork.be (Floris Bruynooghe)
Date: Fri, 29 Apr 2011 11:41:16 +0100
Subject: [Python-Dev] the role of assert in the standard library ?
In-Reply-To: <BANLkTi=_R19SXp1t1fwrO-o+wTTGJCOBcQ@mail.gmail.com>
References: <BANLkTimtoxmyNg9sn7e27soPBdxQMZ-q2Q@mail.gmail.com>
	<BANLkTimOyCx-6C_wWzLombH0Z=jj6S3DbQ@mail.gmail.com>
	<BANLkTiko_MkPkf1M7jm2cW7+UzXzVNQeqg@mail.gmail.com>
	<EA284630-4FC8-4BBB-9033-7AE553660489@gmail.com>
	<BANLkTi=_R19SXp1t1fwrO-o+wTTGJCOBcQ@mail.gmail.com>
Message-ID: <BANLkTikcUoOZwNwHHM4+8pm4A0oAmmx2xA@mail.gmail.com>

On 28 April 2011 23:07, Guido van Rossum <guido at python.org> wrote:
> On Thu, Apr 28, 2011 at 2:53 PM, Raymond Hettinger
> <raymond.hettinger at gmail.com> wrote:
>>
>> On Apr 28, 2011, at 1:27 PM, Holger Krekel wrote:
>>
>>> On Thu, Apr 28, 2011 at 6:59 PM, Guido van Rossum <guido at python.org> wrote:
>>>>> On Thu, Apr 28, 2011 at 12:54 AM, Tarek Ziadé <ziade.tarek at gmail.com> wrote:
>>>>> In my opinion assert should be avoided completely anywhere else than
>>>>> in the tests. If this is a wrong statement, please let me know why :)
>>>>
>>>> I would turn that around. The assert statement should not be used in
>>>> unit tests; unit tests should use self.assertXyzzy() always.
>>>
>>> FWIW this is only true for the unittest module/pkg policy for writing and
>>> organising tests. There are other popular test frameworks like nose and pytest
>>> which promote using plain asserts within writing unit tests and also allow to
>>> write tests in functions. And judging from my tutorials and others places many
>>> people appreciate the ease of using asserts as compared to learning tons
>>> of new methods. YMMV.
>>
>> I've also observed that people appreciate using asserts with nose.py and py.test.
>
> They must not appreciate -O. :-)

Personally I'd love to get rid of all of -O's meanings apart from
setting __debug__ to False.  Then you can write a strip tool which
could strip all docstrings, or just unused docstrings (an improvement
over -O), and any "dead" code resulting from setting __debug__ to
either True or False.  The last thing to do is place assert statements
inside an if __debug__ block.

That way you could use the strip tool on the modules under test but
not on the test modules.
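
(For illustration -- assuming CPython's current constant-folding of
__debug__, which already drops such blocks under -O, a source-level
strip tool could remove them statically as well:)

    def process(records):
        if __debug__:
            # Compiled away entirely under -O; a strip tool could delete
            # this block from the source too.
            assert all('id' in r for r in records), "record without an id"
        return [r['id'] for r in records]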

Regards
Floris

PS: I actually wrote some prototype code for such a strip tool last
year but never finished it off, so I'm pretty sure most of this is
possible.

-- 
Debian GNU/Linux -- The Power of Freedom
www.debian.org | www.gnu.org | www.kernel.org

From g.brandl at gmx.net  Fri Apr 29 13:44:53 2011
From: g.brandl at gmx.net (Georg Brandl)
Date: Fri, 29 Apr 2011 13:44:53 +0200
Subject: [Python-Dev] Socket servers in the test suite
In-Reply-To: <loom.20110427T230704-75@post.gmane.org>
References: <loom.20110427T230704-75@post.gmane.org>
Message-ID: <ipe8bp$cje$1@dough.gmane.org>

On 27.04.2011 23:23, Vinay Sajip wrote:
> I've been recently trying to improve the test coverage for the logging package,
> and have got to a not unreasonable point:
> 
> logging/__init__.py 99% (96%)
> logging/config.py 89% (85%)
> logging/handlers.py 60% (54%)
> 
> where the figures in parentheses include branch coverage measurements.

BTW, didn't we agree not to put "pragma" comments into the stdlib code?

Georg



From ncoghlan at gmail.com  Fri Apr 29 14:13:18 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Fri, 29 Apr 2011 22:13:18 +1000
Subject: [Python-Dev] Socket servers in the test suite
In-Reply-To: <ipe8bp$cje$1@dough.gmane.org>
References: <loom.20110427T230704-75@post.gmane.org>
	<ipe8bp$cje$1@dough.gmane.org>
Message-ID: <BANLkTim4qEXWbY1f02T4TJYPVx9h=zPstA@mail.gmail.com>

On Fri, Apr 29, 2011 at 9:44 PM, Georg Brandl <g.brandl at gmx.net> wrote:
> On 27.04.2011 23:23, Vinay Sajip wrote:
>> I've been recently trying to improve the test coverage for the logging package,
>> and have got to a not unreasonable point:
>>
>> logging/__init__.py 99% (96%)
>> logging/config.py 89% (85%)
>> logging/handlers.py 60% (54%)
>>
>> where the figures in parentheses include branch coverage measurements.
>
> BTW, didn't we agree not to put "pragma" comments into the stdlib code?

I think some folks objected, but since they're essential to keeping
track of progress in code coverage improvement efforts, there wasn't a
consensus to leave them out. The pragmas themselves are easy enough to
grep for, so it isn't like they don't leave a record of which lines
may not be getting tested.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From dasdasich at googlemail.com  Fri Apr 29 14:29:46 2011
From: dasdasich at googlemail.com (DasIch)
Date: Fri, 29 Apr 2011 14:29:46 +0200
Subject: [Python-Dev] Proposal for a common benchmark suite
In-Reply-To: <4DBA8D27.1070602@egenix.com>
References: <BANLkTik6rAqX28TrXgHmErK8jD5n_uquNQ@mail.gmail.com>
	<ipcfls$jti$1@dough.gmane.org> <4DB9CCDA.5060808@egenix.com>
	<ipcl3g$ksq$1@dough.gmane.org>
	<BANLkTik7k3NLo10=oueJ=RkXhORLvg9q6w@mail.gmail.com>
	<4DBA7EE6.1000108@dcs.gla.ac.uk> <4DBA8D27.1070602@egenix.com>
Message-ID: <BANLkTikTexQyXAP42DU=OYZJkaf8-e1K6Q@mail.gmail.com>

Given those facts I think including pybench is a mistake. It does not
allow for a fair or meaningful comparison between implementations,
which is one of the things the suite is supposed to be used for in the
future.

This easily leads to misinterpretation of the results from this
particular benchmark, and it negatively affects the performance data
as a whole.

The same applies to several Unladen Swallow microbenchmarks such as
bm_call_method_*, bm_call_simple and bm_unpack_sequence.

From mal at egenix.com  Fri Apr 29 14:37:29 2011
From: mal at egenix.com (M.-A. Lemburg)
Date: Fri, 29 Apr 2011 14:37:29 +0200
Subject: [Python-Dev] Proposal for a common benchmark suite
In-Reply-To: <BANLkTikTexQyXAP42DU=OYZJkaf8-e1K6Q@mail.gmail.com>
References: <BANLkTik6rAqX28TrXgHmErK8jD5n_uquNQ@mail.gmail.com>	<ipcfls$jti$1@dough.gmane.org>
	<4DB9CCDA.5060808@egenix.com>	<ipcl3g$ksq$1@dough.gmane.org>	<BANLkTik7k3NLo10=oueJ=RkXhORLvg9q6w@mail.gmail.com>	<4DBA7EE6.1000108@dcs.gla.ac.uk>
	<4DBA8D27.1070602@egenix.com>
	<BANLkTikTexQyXAP42DU=OYZJkaf8-e1K6Q@mail.gmail.com>
Message-ID: <4DBAB109.7080309@egenix.com>

DasIch wrote:
> Given those facts I think including pybench is a mistake. It does not
> allow for a fair or meaningful comparison between implementations
> which is one of the things the suite is supposed to be used for in the
> future.
> 
> This easily leads to misinterpretation of the results from this
> particular benchmark and it negatively affects the performance data as
> a whole.
> 
> The same applies to several Unladen Swallow microbenchmarks such as
> bm_call_method_*, bm_call_simple and bm_unpack_sequence.

I don't think we should exclude any implementation-specific
benchmarks from a common suite.

They will not necessarily allow for comparisons between
implementations, but will provide important information
about the progress made in optimizing a particular
implementation.

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Apr 29 2011)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________
2011-06-20: EuroPython 2011, Florence, Italy               52 days to go

::: Try our new mxODBC.Connect Python Database Interface for free ! ::::


   eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
    D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
           Registered at Amtsgericht Duesseldorf: HRB 46611
               http://www.egenix.com/company/contact/

From solipsis at pitrou.net  Fri Apr 29 14:48:00 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Fri, 29 Apr 2011 14:48:00 +0200
Subject: [Python-Dev] Proposal for a common benchmark suite
References: <BANLkTik6rAqX28TrXgHmErK8jD5n_uquNQ@mail.gmail.com>
	<ipcfls$jti$1@dough.gmane.org> <4DB9CCDA.5060808@egenix.com>
	<ipcl3g$ksq$1@dough.gmane.org>
	<BANLkTik7k3NLo10=oueJ=RkXhORLvg9q6w@mail.gmail.com>
	<4DBA7EE6.1000108@dcs.gla.ac.uk> <4DBA8D27.1070602@egenix.com>
	<BANLkTikTexQyXAP42DU=OYZJkaf8-e1K6Q@mail.gmail.com>
Message-ID: <20110429144800.3b8c80cb@pitrou.net>

On Fri, 29 Apr 2011 14:29:46 +0200
DasIch <dasdasich at googlemail.com> wrote:
> Given those facts I think including pybench is a mistake. It does not
> allow for a fair or meaningful comparison between implementations
> which is one of the things the suite is supposed to be used for in the
> future.

"Including" is quite vague. pybench is "included" in the suite of
benchmarks at hg.python.org, but that doesn't mean it is given any
particular importance: you can select whichever benchmarks you want to
run when "perf.py" is executed (there are even several predefined
benchmark groups, none of which pybench is a member of, IIRC).

Regards

Antoine.



From ben+python at benfinney.id.au  Fri Apr 29 14:57:34 2011
From: ben+python at benfinney.id.au (Ben Finney)
Date: Fri, 29 Apr 2011 22:57:34 +1000
Subject: [Python-Dev] Not-a-Number
References: <4DB7E3EA.3030208@avl.com>
	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>
	<87d3k79jvt.fsf@uwakimon.sk.tsukuba.ac.jp>
	<BANLkTi=AusPRDsf2zKDGteZ5dGxs0EEuXw@mail.gmail.com>
	<4DB90748.4030501@g.nevcal.com>
	<BANLkTi=eAug-2n+MsQvSpaet5PM4NQDHSg@mail.gmail.com>
	<4DB916DE.1050302@g.nevcal.com>
	<BANLkTikGVfox3dXkO7B5f5iQbX5L8ypNgw@mail.gmail.com>
	<4DB927F4.3040206@dcs.gla.ac.uk> <ipchp6$1ba$1@dough.gmane.org>
	<87wridkgio.fsf@benfinney.id.au> <4DBA11FF.6080606@canterbury.ac.nz>
	<4DBA17EA.8020401@pearwood.info> <ipd9df$j5e$1@dough.gmane.org>
	<4DBA4C77.2020507@pearwood.info> <878vutk1sy.fsf@benfinney.id.au>
	<4DBA5F8D.60404@pearwood.info>
Message-ID: <87vcxxi5a9.fsf@benfinney.id.au>

Steven D'Aprano <steve at pearwood.info> writes:

> I'm sorry for my lack of clarity. I'm referring to functions which
> potentially produce NANs, not the exceptions themselves. A calculation
> which might have produced a (quiet) NAN as the result instead raises
> an exception (which I'm treating as equivalent to a signal).

Yes, it produces a Python exception, which is not a Python NaN.

If you want to talk about “signalling NaNs”, you'll have to distinguish
that (every time!) so you're not misunderstood as referring to a Python
NaN object.

-- 
 \     “It's my belief we developed language because of our deep inner |
  `\                  need to complain.” —Jane Wagner, via Lily Tomlin |
_o__)                                                                  |
Ben Finney


From starsareblueandfaraway at gmail.com  Fri Apr 29 16:27:46 2011
From: starsareblueandfaraway at gmail.com (Roy Hyunjin Han)
Date: Fri, 29 Apr 2011 10:27:46 -0400
Subject: [Python-Dev] What if replacing items in a dictionary returns the
	new dictionary?
Message-ID: <BANLkTin8sB+85CicRtqkbrgtN7--Ujh3jQ@mail.gmail.com>

It would be convenient if replacing items in a dictionary returns the
new dictionary, in a manner analogous to str.replace().  What do you
think?
::

    # Current behavior
    x = {'key1': 1}
    x.update(key1=3) == None
    x == {'key1': 3} # Original variable has changed

    # Possible behavior
    x = {'key1': 1}
    x.replace(key1=3) == {'key1': 3}
    x == {'key1': 1} # Original variable is unchanged

From marks at dcs.gla.ac.uk  Fri Apr 29 16:36:01 2011
From: marks at dcs.gla.ac.uk (Mark Shannon)
Date: Fri, 29 Apr 2011 15:36:01 +0100
Subject: [Python-Dev] What if replacing items in a dictionary returns
 the	new dictionary?
In-Reply-To: <BANLkTin8sB+85CicRtqkbrgtN7--Ujh3jQ@mail.gmail.com>
References: <BANLkTin8sB+85CicRtqkbrgtN7--Ujh3jQ@mail.gmail.com>
Message-ID: <4DBACCD1.3050803@dcs.gla.ac.uk>


Roy Hyunjin Han wrote:
> It would be convenient if replacing items in a dictionary returns the
> new dictionary, in a manner analogous to str.replace().  What do you
> think?
> ::
> 
>     # Current behavior
>     x = {'key1': 1}
>     x.update(key1=3) == None
>     x == {'key1': 3} # Original variable has changed
> 
>     # Possible behavior
>     x = {'key1': 1}
>     x.replace(key1=3) == {'key1': 3}
>     x == {'key1': 1} # Original variable is unchanged
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/marks%40dcs.gla.ac.uk
> 

Could you please post this to python-ideas, rather than python-dev?
Python-dev is about aspects of the implementation,
not significant language changes.

Mark.

From rdmurray at bitdance.com  Fri Apr 29 16:43:16 2011
From: rdmurray at bitdance.com (R. David Murray)
Date: Fri, 29 Apr 2011 10:43:16 -0400
Subject: [Python-Dev] What if replacing items in a dictionary returns
	the new dictionary?
In-Reply-To: <BANLkTin8sB+85CicRtqkbrgtN7--Ujh3jQ@mail.gmail.com>
References: <BANLkTin8sB+85CicRtqkbrgtN7--Ujh3jQ@mail.gmail.com>
Message-ID: <20110429144316.F1F07250CAF@mailhost.webabinitio.net>

On Fri, 29 Apr 2011 10:27:46 -0400, Roy Hyunjin Han <starsareblueandfaraway at gmail.com> wrote:
> It would be convenient if replacing items in a dictionary returns the
> new dictionary, in a manner analogous to str.replace().  What do you
> think?

This belongs on python-ideas, but the short answer is no.  The
general language design principle (as I understand it) is that
mutable objects do not return themselves upon mutation, while
immutable objects do return the new object.

--
R. David Murray           http://www.bitdance.com

From phd at phdru.name  Fri Apr 29 16:34:06 2011
From: phd at phdru.name (Oleg Broytman)
Date: Fri, 29 Apr 2011 18:34:06 +0400
Subject: [Python-Dev] What if replacing items in a dictionary returns
 the new dictionary?
In-Reply-To: <BANLkTin8sB+85CicRtqkbrgtN7--Ujh3jQ@mail.gmail.com>
References: <BANLkTin8sB+85CicRtqkbrgtN7--Ujh3jQ@mail.gmail.com>
Message-ID: <20110429143406.GA441@iskra.aviel.ru>

Hi! This seems like a question for the python-ideas mailing list, not for python-dev.

On Fri, Apr 29, 2011 at 10:27:46AM -0400, Roy Hyunjin Han wrote:
> It would be convenient if replacing items in a dictionary returns the
> new dictionary, in a manner analogous to str.replace().  What do you
> think?
> ::
> 
>     # Current behavior
>     x = {'key1': 1}
>     x.update(key1=3) == None
>     x == {'key1': 3} # Original variable has changed
> 
>     # Possible behavior
>     x = {'key1': 1}
>     x.replace(key1=3) == {'key1': 3}
>     x == {'key1': 1} # Original variable is unchanged

   You can implement this in your own subclass of dict, no?

Oleg.
-- 
     Oleg Broytman            http://phdru.name/            phd at phdru.name
           Programmers don't die, they just GOSUB without RETURN.

From starsareblueandfaraway at gmail.com  Fri Apr 29 16:59:26 2011
From: starsareblueandfaraway at gmail.com (Roy Hyunjin Han)
Date: Fri, 29 Apr 2011 10:59:26 -0400
Subject: [Python-Dev] What if replacing items in a dictionary returns
 the new dictionary?
In-Reply-To: <20110429144316.F1F07250CAF@mailhost.webabinitio.net>
References: <BANLkTin8sB+85CicRtqkbrgtN7--Ujh3jQ@mail.gmail.com>
	<20110429144316.F1F07250CAF@mailhost.webabinitio.net>
Message-ID: <BANLkTinzAXEA7C5GJLREVrPS7s_avtH=bQ@mail.gmail.com>

2011/4/29 R. David Murray <rdmurray at bitdance.com>:
> 2011/4/29 Roy Hyunjin Han <starsareblueandfaraway at gmail.com>:
>> It would be convenient if replacing items in a dictionary returns the
>> new dictionary, in a manner analogous to str.replace()
>
> This belongs on python-ideas, but the short answer is no.  The
> general language design principle (as I understand it) is that
> mutable objects do not return themselves upon mutation, while
> immutable objects do return the new object.

Thanks for the responses.  Sorry for the mispost, I'll post things
like this on python-ideas from now on.

RHH

From starsareblueandfaraway at gmail.com  Fri Apr 29 17:05:35 2011
From: starsareblueandfaraway at gmail.com (Roy Hyunjin Han)
Date: Fri, 29 Apr 2011 11:05:35 -0400
Subject: [Python-Dev] What if replacing items in a dictionary returns
 the new dictionary?
In-Reply-To: <20110429143406.GA441@iskra.aviel.ru>
References: <BANLkTin8sB+85CicRtqkbrgtN7--Ujh3jQ@mail.gmail.com>
	<20110429143406.GA441@iskra.aviel.ru>
Message-ID: <BANLkTikt4ue3NYBzna3p=GbNr6J6zEtGDA@mail.gmail.com>

> You can implement this in your own subclass of dict, no?

Yes, I just thought it would be convenient to have in the language
itself, but the responses to my post seem to indicate that [not
returning the updated object] is an intended language feature for
mutable types like dict or list.

class ReplaceableDict(dict):
    def replace(self, **kw):
        'Works for replacing string-based keys'
        return dict(self.items() + kw.items())
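
(Note that the items() concatenation above relies on Python 2 returning
lists; on Python 3, dict.items() returns views that can't be added.  A
sketch that works on both, for string keys only:)

    class ReplaceableDict(dict):
        def replace(self, **kw):
            # dict(mapping, **kw) copies the mapping and then applies the
            # keyword overrides, leaving the original dict untouched.
            return dict(self, **kw)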

From robert.kern at gmail.com  Fri Apr 29 17:31:00 2011
From: robert.kern at gmail.com (Robert Kern)
Date: Fri, 29 Apr 2011 10:31:00 -0500
Subject: [Python-Dev] Not-a-Number
In-Reply-To: <BANLkTik+4+Z+gqscy4Adft5A42C4TCbkcg@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>	<87d3k79jvt.fsf@uwakimon.sk.tsukuba.ac.jp>	<BANLkTi=AusPRDsf2zKDGteZ5dGxs0EEuXw@mail.gmail.com>	<4DB90748.4030501@g.nevcal.com>	<BANLkTi=eAug-2n+MsQvSpaet5PM4NQDHSg@mail.gmail.com>	<4DB916DE.1050302@g.nevcal.com>	<BANLkTikGVfox3dXkO7B5f5iQbX5L8ypNgw@mail.gmail.com>	<4DB927F4.3040206@dcs.gla.ac.uk>
	<ipchp6$1ba$1@dough.gmane.org>	<87wridkgio.fsf@benfinney.id.au>	<4DBA11FF.6080606@canterbury.ac.nz>	<4DBA17EA.8020401@pearwood.info>
	<ipd9df$j5e$1@dough.gmane.org>	<4DBA4C77.2020507@pearwood.info>
	<BANLkTik+4+Z+gqscy4Adft5A42C4TCbkcg@mail.gmail.com>
Message-ID: <ipeljl$v9o$1@dough.gmane.org>

On 4/29/11 1:35 AM, Nick Coghlan wrote:
> On Fri, Apr 29, 2011 at 3:28 PM, Steven D'Aprano<steve at pearwood.info>  wrote:
>> Robert Kern wrote:
>>> Actually, Python treats all NaNs as quiet NaNs and never signalling NaNs.
>>
>> Sorry, did I get that backwards? I thought it was signalling NANs that cause
>> a signal (in Python terms, an exception)?
>>
>> If I do x = 0.0/0 I get an exception instead of a NAN. Hence a signalling
>> NAN.
>
> Aside from the divide-by-zero case, we treat NaNs as quiet NaNs.

And in fact, 0.0/0.0 is covered by the more general rule that x/0.0 raises 
ZeroDivisionError, not a rule that converts IEEE-754 INVALID exceptions into 
Python exceptions. Other operations that produce a NaN and issue an IEEE-754 
INVALID signal do not raise a Python exception.

But that's not the difference between signalling NaNs and quiet NaNs. A
signalling NaN is one that issues an INVALID signal when it is used as an
*input* to an operation; the distinction is not about whether a signal is
issued when a NaN is the *output* of an operation.
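
(Concretely, a quiet NaN just propagates through CPython float arithmetic
without raising anything:)

    >>> nan = float('nan')
    >>> nan + 1.0, nan * 0.0, nan == nan
    (nan, nan, False)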

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
  that is made terrible by our own mad attempt to interpret it as though it had
  an underlying truth."
   -- Umberto Eco


From status at bugs.python.org  Fri Apr 29 18:07:22 2011
From: status at bugs.python.org (Python tracker)
Date: Fri, 29 Apr 2011 18:07:22 +0200 (CEST)
Subject: [Python-Dev] Summary of Python tracker Issues
Message-ID: <20110429160722.203BE1D15C@psf.upfronthosting.co.za>


ACTIVITY SUMMARY (2011-04-22 - 2011-04-29)
Python tracker at http://bugs.python.org/

To view or respond to any of the issues listed below, click on the issue.
Do NOT respond to this message.

Issues counts and deltas:
  open    2760 ( +8)
  closed 20976 (+39)
  total  23736 (+47)

Open issues with patches: 1201 


Issues opened (33)
==================

#3561: Windows installer should add Python and Scripts directories to
http://bugs.python.org/issue3561  reopened by ncoghlan

#10912: PyObject_RichCompare differs in behaviour from PyObject_RichCo
http://bugs.python.org/issue10912  reopened by ncoghlan

#10914: Python sub-interpreter test
http://bugs.python.org/issue10914  reopened by pitrou

#11895: pybench prep_times calculation error
http://bugs.python.org/issue11895  reopened by jcea

#11908: Weird `slice.stop or sys.maxint`
http://bugs.python.org/issue11908  opened by cool-RR

#11909: Doctest sees directives in strings when it should only see the
http://bugs.python.org/issue11909  opened by Devin Jeanpierre

#11910: test_heapq C tests are not skipped when _heapq is missing
http://bugs.python.org/issue11910  opened by ezio.melotti

#11912: Python shouldn't use the mprotect() system call
http://bugs.python.org/issue11912  opened by breun

#11913: sdist should allow for README.rst
http://bugs.python.org/issue11913  opened by ingy

#11914: pydoc modules/help('modules') crash in dirs with unreadable su
http://bugs.python.org/issue11914  opened by okopnik

#11916: A few errnos from OSX
http://bugs.python.org/issue11916  opened by pcarrier

#11920: ctypes: Strange bitfield structure sizing issue
http://bugs.python.org/issue11920  opened by Steve.Thompson

#11921: distutils2 should be able to compile an Extension based on the
http://bugs.python.org/issue11921  opened by dholth

#11924: Pickle and copyreg modules don't document the interface
http://bugs.python.org/issue11924  opened by jcea

#11925: test_ttk_guionly.test_traversal() failed on "x86 Windows7 3.x"
http://bugs.python.org/issue11925  opened by haypo

#11927: SMTP_SSL doesn't use port 465 by default
http://bugs.python.org/issue11927  opened by pitrou

#11928: fail on filename with space at the end
http://bugs.python.org/issue11928  opened by techtonik

#11930: Remove time.accept2dyear
http://bugs.python.org/issue11930  opened by belopolsky

#11931: Regular expression documentation patch
http://bugs.python.org/issue11931  opened by Retro

#11933: newer() function in dep_util.py mixes up new vs. old files due
http://bugs.python.org/issue11933  opened by jsjgruber

#11934: build with --prefix=/dev/null and zlib enabled in Modules/Setu
http://bugs.python.org/issue11934  opened by ysj.ray

#11935: MMDF/MBOX mailbox need utime
http://bugs.python.org/issue11935  opened by sdaoden

#11937: Interix support
http://bugs.python.org/issue11937  opened by mduft

#11939: Implement stat.st_dev and os.path.samefile on windows
http://bugs.python.org/issue11939  opened by amaury.forgeotdarc

#11941: Support st_atim, st_mtim and st_ctim attributes in os.stat_res
http://bugs.python.org/issue11941  opened by Arfrever

#11943: Add TLS-SRP (RFC 5054) support to ssl, _ssl, http, and urllib
http://bugs.python.org/issue11943  opened by sqs

#11944: Function call with * and generator hide exception raised by ge
http://bugs.python.org/issue11944  opened by falsetru

#11945: Adopt and document consistent semantics for handling NaN value
http://bugs.python.org/issue11945  opened by ncoghlan

#11948: Tutorial/Modules - small fix to better clarify the modules sea
http://bugs.python.org/issue11948  opened by sandro.tosi

#11949: Make float('nan') unorderable
http://bugs.python.org/issue11949  opened by belopolsky

#11950: logger use dict for loggers instead of WeakValueDictionary
http://bugs.python.org/issue11950  opened by mmarkk

#11953: Missing WSA* error codes
http://bugs.python.org/issue11953  opened by pitrou

#11954: 3.3 - 'make test' fails
http://bugs.python.org/issue11954  opened by Jason.Vas.Dias



Most recent 15 issues with no replies (15)
==========================================

#11950: logger use dict for loggers instead of WeakValueDictionary
http://bugs.python.org/issue11950

#11935: MMDF/MBOX mailbox need utime
http://bugs.python.org/issue11935

#11934: build with --prefix=/dev/null and zlib enabled in Modules/Setu
http://bugs.python.org/issue11934

#11924: Pickle and copyreg modules don't document the interface
http://bugs.python.org/issue11924

#11916: A few errnos from OSX
http://bugs.python.org/issue11916

#11909: Doctest sees directives in strings when it should only see the
http://bugs.python.org/issue11909

#11898: Sending binary data with a POST request in httplib can cause U
http://bugs.python.org/issue11898

#11894: test_multiprocessing failure on "AMD64 OpenIndiana 3.x": KeyEr
http://bugs.python.org/issue11894

#11893: Obsolete SSLFakeFile in smtplib?
http://bugs.python.org/issue11893

#11887: unittest fails on comparing str with bytes if python has the -
http://bugs.python.org/issue11887

#11870: test_3_join_in_forked_from_thread() of test_threading hangs 1 
http://bugs.python.org/issue11870

#11869: Include information about the bug tracker Rietveld code review
http://bugs.python.org/issue11869

#11866: race condition in threading._newname()
http://bugs.python.org/issue11866

#11838: IDLE: make interactive code savable as a runnable script
http://bugs.python.org/issue11838

#11836: multiprocessing.queues.SimpleQueue is undocumented
http://bugs.python.org/issue11836



Most recent 15 issues waiting for review (15)
=============================================

#11949: Make float('nan') unorderable
http://bugs.python.org/issue11949

#11948: Tutorial/Modules - small fix to better clarify the modules sea
http://bugs.python.org/issue11948

#11943: Add TLS-SRP (RFC 5054) support to ssl, _ssl, http, and urllib
http://bugs.python.org/issue11943

#11937: Interix support
http://bugs.python.org/issue11937

#11935: MMDF/MBOX mailbox need utime
http://bugs.python.org/issue11935

#11931: Regular expression documentation patch
http://bugs.python.org/issue11931

#11930: Remove time.accept2dyear
http://bugs.python.org/issue11930

#11927: SMTP_SSL doesn't use port 465 by default
http://bugs.python.org/issue11927

#11916: A few errnos from OSX
http://bugs.python.org/issue11916

#11910: test_heapq C tests are not skipped when _heapq is missing
http://bugs.python.org/issue11910

#11909: Doctest sees directives in strings when it should only see the
http://bugs.python.org/issue11909

#11898: Sending binary data with a POST request in httplib can cause U
http://bugs.python.org/issue11898

#11895: pybench prep_times calculation error
http://bugs.python.org/issue11895

#11887: unittest fails on comparing str with bytes if python has the -
http://bugs.python.org/issue11887

#11883: Call connect() before sending an email with smtplib
http://bugs.python.org/issue11883



Top 10 most discussed issues (10)
=================================

#11954: 3.3 - 'make test' fails
http://bugs.python.org/issue11954  16 msgs

#10914: Python sub-interpreter test
http://bugs.python.org/issue10914  13 msgs

#11945: Adopt and document consistent semantics for handling NaN value
http://bugs.python.org/issue11945  11 msgs

#3526: Customized malloc implementation on SunOS and AIX
http://bugs.python.org/issue3526   9 msgs

#9614: _pickle is not entirely 64-bit safe
http://bugs.python.org/issue9614   7 msgs

#11849: glibc allocator doesn't release all free()ed memory
http://bugs.python.org/issue11849   7 msgs

#11912: Python shouldn't use the mprotect() system call
http://bugs.python.org/issue11912   7 msgs

#11930: Remove time.accept2dyear
http://bugs.python.org/issue11930   7 msgs

#11933: newer() function in dep_util.py mixes up new vs. old files due
http://bugs.python.org/issue11933   7 msgs

#9390: Error in sys.excepthook on windows when redirecting output of 
http://bugs.python.org/issue9390   6 msgs



Issues closed (39)
==================

#2736: datetime needs an "epoch" method
http://bugs.python.org/issue2736  closed by belopolsky

#6780: startswith error message is incomplete
http://bugs.python.org/issue6780  closed by ezio.melotti

#8326: Cannot import name SemLock on Ubuntu
http://bugs.python.org/issue8326  closed by barry

#10517: test_concurrent_futures crashes with "--with-pydebug" on RHEL5
http://bugs.python.org/issue10517  closed by pitrou

#10632: multiprocessing generates a fatal error
http://bugs.python.org/issue10632  closed by jnoller

#10761: tarfile.extractall fails to overwrite symlinks
http://bugs.python.org/issue10761  closed by python-dev

#11005: Assertion error on RLock._acquire_restore
http://bugs.python.org/issue11005  closed by haypo

#11236: getpass.getpass does not respond to ctrl-c or ctrl-z
http://bugs.python.org/issue11236  closed by orsenthil

#11324: ConfigParser(interpolation=None) doesn't work
http://bugs.python.org/issue11324  closed by python-dev

#11332: Increase logging/__init__.py coverage to 97%
http://bugs.python.org/issue11332  closed by vinay.sajip

#11382: some posix module functions unnecessarily release the GIL
http://bugs.python.org/issue11382  closed by pitrou

#11670: configparser read_file now iterates over f, docs still say it 
http://bugs.python.org/issue11670  closed by python-dev

#11786: ConfigParser.[Raw]ConfigParser optionxform()
http://bugs.python.org/issue11786  closed by python-dev

#11811: ssl.get_server_certificate() does not work for IPv6 addresses
http://bugs.python.org/issue11811  closed by pitrou

#11832: Add option to pause regrtest to attach a debugger
http://bugs.python.org/issue11832  closed by brian.curtin

#11856: Optimize parsing of JSON numbers
http://bugs.python.org/issue11856  closed by pitrou

#11858: configparser.ExtendedInterpolation and section case
http://bugs.python.org/issue11858  closed by python-dev

#11860: reference 2.3 has text that runs past the page
http://bugs.python.org/issue11860  closed by terry.reedy

#11884: Argparse calls ngettext but doesn't import it
http://bugs.python.org/issue11884  closed by eric.araujo

#11901: Docs for sys.hexversion should give the algorithm
http://bugs.python.org/issue11901  closed by r.david.murray

#11907: SysLogHandler can't send long messages
http://bugs.python.org/issue11907  closed by vinay.sajip

#11911: Interlink Python versions docs
http://bugs.python.org/issue11911  closed by ezio.melotti

#11915: test_ctypes hangs in sandbox
http://bugs.python.org/issue11915  closed by Arfrever

#11917: Test Error
http://bugs.python.org/issue11917  closed by Abbaszadeh

#11918: Drop OS/2 and VMS support in Python 3.3
http://bugs.python.org/issue11918  closed by haypo

#11919: test_imp failures
http://bugs.python.org/issue11919  closed by pitrou

#11922: Add General Index to Windows .chm help file Contents
http://bugs.python.org/issue11922  closed by terry.reedy

#11923: gcc: unrecognized option '-n32'
http://bugs.python.org/issue11923  closed by paulg_ca

#11926: help("keywords") returns incomplete list of keywords
http://bugs.python.org/issue11926  closed by ezio.melotti

#11929: Improve usage of PEP8 in Docs/includes/*
http://bugs.python.org/issue11929  closed by rhettinger

#11932: Email multipart boundary detection fails on a wrapped header
http://bugs.python.org/issue11932  closed by davidstrauss

#11936: plistlib.writePlistToBytes does not exist on 2.6  (osx) and do
http://bugs.python.org/issue11936  closed by ned.deily

#11938: duplicated test name in getattr_static's test case
http://bugs.python.org/issue11938  closed by ezio.melotti

#11940: Howto/Advocacy - update the link to John Ousterhout paper
http://bugs.python.org/issue11940  closed by rhettinger

#11942: Fix signature of Py_AddPendingCall
http://bugs.python.org/issue11942  closed by ezio.melotti

#11946: 2.7.1 'test_commands' build test fails
http://bugs.python.org/issue11946  closed by r.david.murray

#11947: re.IGNORECASE does not match literal "_" (underscore)
http://bugs.python.org/issue11947  closed by ezio.melotti

#11951: Mac OSX IDLE 3.2 does not allow entering text into toolbar win
http://bugs.python.org/issue11951  closed by ned.deily

#11952: typo in multiprocessing documentation: __main__ method should 
http://bugs.python.org/issue11952  closed by ezio.melotti

From vinay_sajip at yahoo.co.uk  Fri Apr 29 18:09:40 2011
From: vinay_sajip at yahoo.co.uk (Vinay Sajip)
Date: Fri, 29 Apr 2011 16:09:40 +0000 (UTC)
Subject: [Python-Dev] Socket servers in the test suite
References: <loom.20110427T230704-75@post.gmane.org>
	<ipe8bp$cje$1@dough.gmane.org>
	<BANLkTim4qEXWbY1f02T4TJYPVx9h=zPstA@mail.gmail.com>
Message-ID: <loom.20110429T175003-77@post.gmane.org>

[Georg]
> > BTW, didn't we agree not to put "pragma" comments into the stdlib code?

I'd be grateful for a link to the prior discussion - it must have passed me by
originally, and I searched python-dev on gmane but couldn't find any threads
about this.

[Nick] 
> I think some folks objected, but since they're essential to keeping
> track of progress in code coverage improvement efforts, there wasn't a
> consensus to leave them out. The pragmas themselves are easy enough to
> grep for, so it isn't like they don't leave a record of which lines
> may not be getting tested.

Yes - in theory the pragmas can give a false idea about coverage, but in
practice they help increase the signal-to-noise ratio. As maintainer of a
module, one would only be kidding oneself by adding pragmas willy-nilly. The
coverage reports are up-front about telling you how many lines were excluded,
both in the summary HTML pages and the drill-down HTML pages for individual
modules.
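
(For reference, the pragmas being discussed are coverage.py exclusion
comments; a made-up example:)

    def read_config(path):
        try:
            with open(path) as f:
                return f.read()
        except IOError:  # pragma: no cover
            # Excluded from the coverage report: awkward to provoke in tests.
            return ""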

BTW, is there a public place somewhere showing stdlib coverage statistics? I
looked on the buildbot pages as the likeliest home for them, but perhaps I
missed them.

Regards,


Vinay Sajip


From alexander.belopolsky at gmail.com  Fri Apr 29 18:35:55 2011
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Fri, 29 Apr 2011 12:35:55 -0400
Subject: [Python-Dev] Not-a-Number
In-Reply-To: <ipeljl$v9o$1@dough.gmane.org>
References: <4DB7E3EA.3030208@avl.com>
	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>
	<87d3k79jvt.fsf@uwakimon.sk.tsukuba.ac.jp>
	<BANLkTi=AusPRDsf2zKDGteZ5dGxs0EEuXw@mail.gmail.com>
	<4DB90748.4030501@g.nevcal.com>
	<BANLkTi=eAug-2n+MsQvSpaet5PM4NQDHSg@mail.gmail.com>
	<4DB916DE.1050302@g.nevcal.com>
	<BANLkTikGVfox3dXkO7B5f5iQbX5L8ypNgw@mail.gmail.com>
	<4DB927F4.3040206@dcs.gla.ac.uk> <ipchp6$1ba$1@dough.gmane.org>
	<87wridkgio.fsf@benfinney.id.au>
	<4DBA11FF.6080606@canterbury.ac.nz>
	<4DBA17EA.8020401@pearwood.info> <ipd9df$j5e$1@dough.gmane.org>
	<4DBA4C77.2020507@pearwood.info>
	<BANLkTik+4+Z+gqscy4Adft5A42C4TCbkcg@mail.gmail.com>
	<ipeljl$v9o$1@dough.gmane.org>
Message-ID: <BANLkTinMKgmFJpu7C1Aa74Eh-dQdRsujxg@mail.gmail.com>

On Fri, Apr 29, 2011 at 11:31 AM, Robert Kern <robert.kern at gmail.com> wrote:
..
> And in fact, 0.0/0.0 is covered by the more general rule that x/0.0 raises
> ZeroDivisionError, not a rule that converts IEEE-754 INVALID exceptions into
> Python exceptions.

It is unfortunate that the official text of IEEE-754 is not freely
available, and as a result a lot of the discussion in this thread is based
on imperfect information.

I find Kahan's "Lecture Notes on the Status of IEEE Standard 754 for
Binary Floating-Point Arithmetic" [1] a reasonable reference in the
absence of the official text.   According to Kahan's notes, INVALID
operation is defined as follows:

"""
Exception: INVALID operation.

Signaled by the raising of the INVALID flag whenever an operation's
operands lie outside its domain, this exception's default, delivered
only because any other real or infinite value would most likely cause
worse confusion, is NaN, which means “Not a Number.” IEEE 754
specifies that seven invalid arithmetic operations shall deliver a NaN
unless they are trapped:

    real √(Negative), 0*∞, 0.0/0.0, ∞/∞,
    REMAINDER(Anything, 0.0), REMAINDER(∞, Anything),
    ∞ - ∞ when signs agree (but ∞ + ∞ = ∞ when signs agree).

Conversion from floating-point to other formats can be INVALID too, if
their limits are violated, even if no NaN can be delivered.
"""

In contrast, Kahan describes DIVIDE by ZERO exception as "a misnomer
perpetrated for historical reasons. A better name for this exception
is 'Infinite result computed Exactly from Finite operands.'"

> Other operations that produce a NaN and issue an IEEE-754
> INVALID signal do not raise a Python exception.

Some do:

>>> math.sqrt(-1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: math domain error

I think the only exceptions are the operations involving infinity.  The
likely rationale is that since infinity is not produced by Python
arithmetic, those who use inf are likely to expect inf*0 etc. to
produce nan.

The following seems to be an oversight:

>>> 1e300 * 1e300
inf

compared to

>>> 1e300 ** 2
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
OverflowError: (34, 'Result too large')


[1] http://www.cs.berkeley.edu/~wkahan/ieee754status/ieee754.ps

From guido at python.org  Fri Apr 29 19:11:35 2011
From: guido at python.org (Guido van Rossum)
Date: Fri, 29 Apr 2011 10:11:35 -0700
Subject: [Python-Dev] Not-a-Number (was PyObject_RichCompareBool
	identity shortcut)
In-Reply-To: <871v0la5yg.fsf@uwakimon.sk.tsukuba.ac.jp>
References: <4DB7E3EA.3030208@avl.com>
	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>
	<87d3k79jvt.fsf@uwakimon.sk.tsukuba.ac.jp>
	<BANLkTi=AusPRDsf2zKDGteZ5dGxs0EEuXw@mail.gmail.com>
	<4DB90748.4030501@g.nevcal.com>
	<BANLkTi=eAug-2n+MsQvSpaet5PM4NQDHSg@mail.gmail.com>
	<4DB916DE.1050302@g.nevcal.com>
	<BANLkTikGVfox3dXkO7B5f5iQbX5L8ypNgw@mail.gmail.com>
	<4DB927F4.3040206@dcs.gla.ac.uk> <ipchp6$1ba$1@dough.gmane.org>
	<871v0la5yg.fsf@uwakimon.sk.tsukuba.ac.jp>
Message-ID: <BANLkTi=v98ZLbqTGSBED-MdE4V4X6JoTdg@mail.gmail.com>

On Fri, Apr 29, 2011 at 12:10 AM, Stephen J. Turnbull
<stephen at xemacs.org> wrote:
> Other aspects of NaN behavior may be a mistake.  But it's not clear to
> me, even after all the discussion in this thread.

ISTM that the current behavior of NaN (never mind the identity issue)
helps numeric experts write better code. For naive users, however, it
causes puzzlement if they ever run into it.

Decimal, for that reason, has a context that lets one specify
different behaviors when a NaN is produced. Would it make sense to add
a float context that also lets one specify what should happen? That
could include returning Inf for 1.0/0.0 (for experts), or raising
exceptions when NaNs are produced (for the numerically naive like
myself).

I could see a downside too, e.g. the correctness of code that
passingly uses floats might be affected by the context settings.
There's also the question of whether the float context should affect
int operations; floats vs. ints is another can of worms since (in
Python 3) we attempt to tie them together through 1/2 == 0.5, but ints
have a much larger range than floats.
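
(As an illustrative sketch of the decimal context behaviour referred to
above: with the InvalidOperation trap disabled you get a quiet NaN, and
with it enabled -- the default -- you get an exception instead:)

    from decimal import Decimal, InvalidOperation, localcontext

    with localcontext() as ctx:
        ctx.traps[InvalidOperation] = False
        print(Decimal(0) / Decimal(0))   # prints NaN, no exception

    Decimal(0) / Decimal(0)   # raises an InvalidOperation subclass by default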

-- 
--Guido van Rossum (python.org/~guido)

From alexander.belopolsky at gmail.com  Fri Apr 29 19:35:41 2011
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Fri, 29 Apr 2011 13:35:41 -0400
Subject: [Python-Dev] Not-a-Number (was PyObject_RichCompareBool
	identity shortcut)
In-Reply-To: <BANLkTi=v98ZLbqTGSBED-MdE4V4X6JoTdg@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>
	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>
	<87d3k79jvt.fsf@uwakimon.sk.tsukuba.ac.jp>
	<BANLkTi=AusPRDsf2zKDGteZ5dGxs0EEuXw@mail.gmail.com>
	<4DB90748.4030501@g.nevcal.com>
	<BANLkTi=eAug-2n+MsQvSpaet5PM4NQDHSg@mail.gmail.com>
	<4DB916DE.1050302@g.nevcal.com>
	<BANLkTikGVfox3dXkO7B5f5iQbX5L8ypNgw@mail.gmail.com>
	<4DB927F4.3040206@dcs.gla.ac.uk> <ipchp6$1ba$1@dough.gmane.org>
	<871v0la5yg.fsf@uwakimon.sk.tsukuba.ac.jp>
	<BANLkTi=v98ZLbqTGSBED-MdE4V4X6JoTdg@mail.gmail.com>
Message-ID: <BANLkTi=S=nnrVFijFT1nXfe8QO7T1_1NGw@mail.gmail.com>

On Fri, Apr 29, 2011 at 1:11 PM, Guido van Rossum <guido at python.org> wrote:
> Would it make sense to add
> a float context that also lets one specify what should happen? That
> could include returning Inf for 1.0/0.0 (for experts), or raising
> exceptions when NaNs are produced (for the numerically naive like
> myself).

ISTM, this is approaching py4k territory.  Adding contexts will not
solve the backward compatibility problem unless you introduce a "quirks"
context that would preserve the current warts and make it the default.

For what it's worth, I think the next major version of Python should
use decimal as its main floating point type and leave binary floats to
numerical experts.

From tjreedy at udel.edu  Fri Apr 29 21:11:48 2011
From: tjreedy at udel.edu (Terry Reedy)
Date: Fri, 29 Apr 2011 15:11:48 -0400
Subject: [Python-Dev] Socket servers in the test suite
In-Reply-To: <loom.20110429T175003-77@post.gmane.org>
References: <loom.20110427T230704-75@post.gmane.org>	<ipe8bp$cje$1@dough.gmane.org>	<BANLkTim4qEXWbY1f02T4TJYPVx9h=zPstA@mail.gmail.com>
	<loom.20110429T175003-77@post.gmane.org>
Message-ID: <ipf2hl$f24$1@dough.gmane.org>

On 4/29/2011 12:09 PM, Vinay Sajip wrote:

> BTW, is there a public place somewhere showing stdlib coverage statistics? I
> looked on the buildbot pages as the likeliest home for them, but perhaps I
> missed them.

http://docs.python.org/devguide/coverage.html
has a link to
http://coverage.livinglogic.de/

-- 
Terry Jan Reedy


From tjreedy at udel.edu  Fri Apr 29 21:18:54 2011
From: tjreedy at udel.edu (Terry Reedy)
Date: Fri, 29 Apr 2011 15:18:54 -0400
Subject: [Python-Dev] Socket servers in the test suite
In-Reply-To: <ipf2hl$f24$1@dough.gmane.org>
References: <loom.20110427T230704-75@post.gmane.org>	<ipe8bp$cje$1@dough.gmane.org>	<BANLkTim4qEXWbY1f02T4TJYPVx9h=zPstA@mail.gmail.com>	<loom.20110429T175003-77@post.gmane.org>
	<ipf2hl$f24$1@dough.gmane.org>
Message-ID: <ipf2uv$hdh$1@dough.gmane.org>

On 4/29/2011 3:11 PM, Terry Reedy wrote:
> On 4/29/2011 12:09 PM, Vinay Sajip wrote:
>
>> BTW, is there a public place somewhere showing stdlib coverage
>> statistics? I
>> looked on the buildbot pages as the likeliest home for them, but
>> perhaps I
>> missed them.
>
> http://docs.python.org/devguide/coverage.html
> has a link to
> http://coverage.livinglogic.de/

which, however, currently has nothing for *.py.
Perhaps a glitch/bug, as there used to be such.
Anyone who knows the page owner might ask about this.

-- 
Terry Jan Reedy


From michael at voidspace.org.uk  Fri Apr 29 22:03:49 2011
From: michael at voidspace.org.uk (Michael Foord)
Date: Fri, 29 Apr 2011 21:03:49 +0100
Subject: [Python-Dev] Fwd: viewVC shows traceback on non utf-8 module markup
Message-ID: <4DBB19A5.4010409@voidspace.org.uk>

I know that the svn repo is now for legacy purposes only, but I doubt it 
is intended that the online source browser should raise exceptions.

(See report below.)

All the best,

Michael

-------- Original Message --------
Subject: 	viewVC shows traceback on non utf-8 module markup
Date: 	Thu, 28 Apr 2011 17:47:12 +0900
From: 	Ikkei Shimomura <ikkei.shimomura at gmail.com>
To: 	webmaster at python.org



Hi,
here is a report: I found that some module markup pages show a traceback,

like this:
http://svn.python.org/view/python/trunk/Lib/heapq.py?view=markup

>  UnicodeDecodeError: 'utf8' codec can't decode byte 0xe7 in position 1428: invalid continuation byte

I do not know about latin-1 encoding; this is just a note of what I
found at that position: it's "\xe7"

>  ... by Fran\xe7o  ...

and as I read the traceback, viewvc and pygments assume utf-8
encoding; it's hard-coded.

Of the other modules which use a non-utf-8 encoding (found with
grep -r coding\: *.py), inspect and pydoc were OK; tarfile and shlex were not.
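
One possible workaround on the viewer side (just a sketch, not what
viewvc actually does) would be to fall back to latin-1 when utf-8 fails:

    def read_source(path):
        # Try utf-8 first; fall back to latin-1 so bytes like 0xe7 in
        # old source files render instead of crashing the markup view.
        data = open(path, 'rb').read()
        try:
            return data.decode('utf-8')
        except UnicodeDecodeError:
            return data.decode('latin-1')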
inspect, pydoc were ok. tarfile, shlex were not.



Excuse me for my broken English.
----
Ikkei Shimomura

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110429/f3c93590/attachment.html>

From vinay_sajip at yahoo.co.uk  Fri Apr 29 23:33:47 2011
From: vinay_sajip at yahoo.co.uk (Vinay Sajip)
Date: Fri, 29 Apr 2011 21:33:47 +0000 (UTC)
Subject: [Python-Dev] Socket servers in the test suite
References: <loom.20110427T230704-75@post.gmane.org>	<ipe8bp$cje$1@dough.gmane.org>	<BANLkTim4qEXWbY1f02T4TJYPVx9h=zPstA@mail.gmail.com>	<loom.20110429T175003-77@post.gmane.org>
	<ipf2hl$f24$1@dough.gmane.org> <ipf2uv$hdh$1@dough.gmane.org>
Message-ID: <loom.20110429T233205-225@post.gmane.org>

Terry Reedy <tjreedy <at> udel.edu> writes:


> > http://coverage.livinglogic.de/
> 
> which, however, currently has nothing for *.py.
> Perhaps a glitch/bug, as there used to be such.
> Anyone who knows the page owner might ask about this.
> 

Thanks for the pointer, nevertheless, Terry.

Regards,

Vinay Sajip


From ethan at stoneleaf.us  Fri Apr 29 23:53:56 2011
From: ethan at stoneleaf.us (Ethan Furman)
Date: Fri, 29 Apr 2011 14:53:56 -0700
Subject: [Python-Dev] python and super
In-Reply-To: <BANLkTincGnYrc48jzQfNrc_HFidohJXvwA@mail.gmail.com>
References: <BANLkTimr4yWCCfgdm2KcWbpP-XcYP2SANw@mail.gmail.com>	<70EF2C52-1D92-4351-884E-52AF76BAAC6D@mac.com>	<4DA70ACA.4070204@voidspace.org.uk>	<20110414153503.F125B3A4063@sparrow.telecommunity.com>	<825E6CD5-8673-463B-92AE-59677C327C0A@gmail.com>	<4DA71C63.3030809@voidspace.org.uk>	<8A4A58EF-70F7-4F2F-8564-AE8611713986@mac.com>
	<BANLkTincGnYrc48jzQfNrc_HFidohJXvwA@mail.gmail.com>
Message-ID: <4DBB3374.9060104@stoneleaf.us>

Ricardo Kirkner wrote:
> I'll give you the example I came upon:
> 
> I have a TestCase class, which inherits from both Django's TestCase
> and from some custom TestCases that act as mixin classes. So I have
> something like
> 
> class MyTestCase(TestCase, Mixin1, Mixin2):
>    ...
> 
> now django's TestCase class inherits from unittest2.TestCase, which we
> found was not calling super. Even if this is a bug and should be fixed
> in unittest2, this is an example where I, as a consumer of django,
> shouldn't have to be worried about how django's TestCase class is
> implemented.

I have to disagree -- anytime you are using somebody else's code you 
need to be aware of what it's supposed to do -- especially when playing 
with multiple inheritance.

This response to the decorator I wrote for this situation may be helpful:

Carl Banks wrote (on Python-List):
 > The problem is that he was doing mixins wrong.  Way wrong.
 >
 > Here is my advice on mixins:
 >
 > Mixins should almost always be listed first in the bases.  (The only
 > exception is to work around a technicality.  Otherwise mixins go
 > first.)
 >
 > If a mixin defines __init__, it should always accept self, *args and
 > **kwargs (and no other arguments), and pass those on to
 > super().__init__.  Same deal with any other function that different
 > sister classes might define in varied ways (such as __call__).
 >
 > A mixin should not accept arguments in __init__.  Instead, it should
 > burden the derived class to accept arguments on its behalf, and set
 > attributes before calling super().__init__, which the mixin can
 > access.
 >
 > If you insist on a mixin that accepts arguments in __init__, then it
 > should pop them off kwargs.  Avoid using positional arguments,
 > and never use named arguments.  Always go through args and kwargs.
 >
 > If mixins follow these rules, they'll be reasonably safe to use on a
 > variety of classes.  (Maybe even safe enough to use in Django
 > classes.)
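
A minimal sketch of a mixin following those rules (the class names are
made up for illustration):

    class LoggingMixin:
        # Takes only *args/**kwargs and cooperatively calls super().
        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)
            print('initialised', type(self).__name__)

    class Base:
        def __init__(self, value):
            self.value = value

    class Widget(LoggingMixin, Base):  # mixin listed first
        def __init__(self, value):
            # The derived class accepts the argument and passes it along.
            super().__init__(value)

    Widget(42)  # prints "initialised Widget" and sets .value to 42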

~Ethan~

From robert.kern at gmail.com  Fri Apr 29 22:54:52 2011
From: robert.kern at gmail.com (Robert Kern)
Date: Fri, 29 Apr 2011 15:54:52 -0500
Subject: [Python-Dev] Not-a-Number
In-Reply-To: <BANLkTinMKgmFJpu7C1Aa74Eh-dQdRsujxg@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>
	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>
	<87d3k79jvt.fsf@uwakimon.sk.tsukuba.ac.jp>
	<BANLkTi=AusPRDsf2zKDGteZ5dGxs0EEuXw@mail.gmail.com>
	<4DB90748.4030501@g.nevcal.com>
	<BANLkTi=eAug-2n+MsQvSpaet5PM4NQDHSg@mail.gmail.com>
	<4DB916DE.1050302@g.nevcal.com>
	<BANLkTikGVfox3dXkO7B5f5iQbX5L8ypNgw@mail.gmail.com>
	<4DB927F4.3040206@dcs.gla.ac.uk> <ipchp6$1ba$1@dough.gmane.org>
	<87wridkgio.fsf@benfinney.id.au> <4DBA11FF.6080606@canterbury.ac.nz>
	<4DBA17EA.8020401@pearwood.info> <ipd9df$j5e$1@dough.gmane.org>
	<4DBA4C77.2020507@pearwood.info>
	<BANLkTik+4+Z+gqscy4Adft5A42C4TCbkcg@mail.gmail.com>
	<ipeljl$v9o$1@dough.gmane.org>
	<BANLkTinMKgmFJpu7C1Aa74Eh-dQdRsujxg@mail.gmail.com>
Message-ID: <BANLkTimXeXntPzKAeiYOqeQJM_wbGgUrSg@mail.gmail.com>

On Fri, Apr 29, 2011 at 11:35, Alexander Belopolsky
<alexander.belopolsky at gmail.com> wrote:
> On Fri, Apr 29, 2011 at 11:31 AM, Robert Kern <robert.kern at gmail.com> wrote:
> ..
>> And in fact, 0.0/0.0 is covered by the more general rule that x/0.0 raises
>> ZeroDivisionError, not a rule that converts IEEE-754 INVALID exceptions into
>> Python exceptions.
>
> It is unfortunate that the official text of IEEE-754 is not freely
> available and as a result a lot of discussion in this thread is based
> on imperfect information.
>
> I find Kahan's "Lecture Notes on the Status of IEEE Standard 754 for
> Binary Floating-Point Arithmetic" [1] a reasonable reference in the
> absence of the official text.  According to Kahan's notes, INVALID
> operation is defined as follows:
>
> """
> Exception: INVALID operation.
>
> Signaled by the raising of the INVALID flag whenever an operation's
> operands lie outside its domain, this exception's default, delivered
> only because any other real or infinite value would most likely cause
> worse confusion, is NaN, which means "Not a Number."  IEEE 754
> specifies that seven invalid arithmetic operations shall deliver a NaN
> unless they are trapped:
>
>    real √(Negative) , 0*∞ , 0.0/0.0 , ∞/∞,
>    REMAINDER(Anything, 0.0) , REMAINDER( ∞, Anything) ,
>    ∞ - ∞ when signs agree (but ∞ + ∞ = ∞ when signs agree).
>
> Conversion from floating-point to other formats can be INVALID too, if
> their limits are violated, even if no NaN can be delivered.
> """
>
> In contrast, Kahan describes DIVIDE by ZERO exception as "a misnomer
> perpetrated for historical reasons. A better name for this exception
> is 'Infinite result computed Exactly from Finite operands.'"

Nonetheless, the reason that *Python* raises a ZeroDivisionError is
because it checks that the divisor is 0.0, not because 0.0/0.0 would
issue an INVALID signal. I didn't mean that 0.0/0.0 is a "Division by
Zero" error as defined in IEEE-754. This is another area where Python
ignores the INVALID signal and does its own thing.

>> Other operations that produce a NaN and issue an IEEE-754
>> INVALID signal do not raise a Python exception.
>
> Some do:
>
>>>> math.sqrt(-1)
> Traceback (most recent call last):
>  File "<stdin>", line 1, in <module>
> ValueError: math domain error

Right. Elsewhere I gave a more exhaustive list including this one. The
other is int(nan), though that becomes a Python exception for a more
fundamental reason (there is no integer value that can represent it)
than that the IEEE-754 standard specifies that the operation should
signal INVALID. Arithmetic operations on signalling NaNs don't raise
an exception either.

These are the minority *exceptions* to the majority of cases where
operations on Python floats that would issue an INVALID signal do not
raise Python exceptions. If you want to lump all of the inf-related
cases together, that's fine, but arithmetic operations on signalling
NaNs and comparisons with NaNs form two more groups of INVALID
operations that do not raise Python exceptions.
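
To make the split concrete (a quick sketch of behaviour as observed in
current CPython, not an exhaustive list):

    import math

    nan, inf = float('nan'), float('inf')

    # INVALID-signalling operations that *do* raise in Python:
    #   0.0 / 0.0        -> ZeroDivisionError (explicit divisor check)
    #   math.sqrt(-1.0)  -> ValueError: math domain error
    #   int(nan)         -> ValueError (no integer can represent it)

    # INVALID-signalling operations that just return a quiet NaN:
    print(inf - inf)   # nan
    print(inf * 0.0)   # nan
    print(nan < 1.0)   # False: unordered comparison, no exception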

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From greg.ewing at canterbury.ac.nz  Sat Apr 30 01:50:47 2011
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Sat, 30 Apr 2011 11:50:47 +1200
Subject: [Python-Dev] Not-a-Number
In-Reply-To: <4DBA4C77.2020507@pearwood.info>
References: <4DB7E3EA.3030208@avl.com>
	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>
	<87d3k79jvt.fsf@uwakimon.sk.tsukuba.ac.jp>
	<BANLkTi=AusPRDsf2zKDGteZ5dGxs0EEuXw@mail.gmail.com>
	<4DB90748.4030501@g.nevcal.com>
	<BANLkTi=eAug-2n+MsQvSpaet5PM4NQDHSg@mail.gmail.com>
	<4DB916DE.1050302@g.nevcal.com>
	<BANLkTikGVfox3dXkO7B5f5iQbX5L8ypNgw@mail.gmail.com>
	<4DB927F4.3040206@dcs.gla.ac.uk> <ipchp6$1ba$1@dough.gmane.org>
	<87wridkgio.fsf@benfinney.id.au> <4DBA11FF.6080606@canterbury.ac.nz>
	<4DBA17EA.8020401@pearwood.info> <ipd9df$j5e$1@dough.gmane.org>
	<4DBA4C77.2020507@pearwood.info>
Message-ID: <4DBB4ED7.1040003@canterbury.ac.nz>

Steven D'Aprano wrote:

> If I do x = 0.0/0 I get an exception instead of a NAN.

But the exception you get is ZeroDivisionError, so I think
Python is catching this before you get to the stage of
producing a NaN.
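
A quick check of that (the exact message wording varies between
versions):

    >>> 0.0 / 0.0                      # the divisor check fires first
    Traceback (most recent call last):
      ...
    ZeroDivisionError: float division by zero
    >>> float('inf') / float('inf')    # divisor non-zero, IEEE result flows through
    nan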

-- 
Greg

From dickinsm at gmail.com  Sat Apr 30 09:02:33 2011
From: dickinsm at gmail.com (Mark Dickinson)
Date: Sat, 30 Apr 2011 08:02:33 +0100
Subject: [Python-Dev] Not-a-Number
In-Reply-To: <4DBA11FF.6080606@canterbury.ac.nz>
References: <4DB7E3EA.3030208@avl.com>
	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>
	<87d3k79jvt.fsf@uwakimon.sk.tsukuba.ac.jp>
	<BANLkTi=AusPRDsf2zKDGteZ5dGxs0EEuXw@mail.gmail.com>
	<4DB90748.4030501@g.nevcal.com>
	<BANLkTi=eAug-2n+MsQvSpaet5PM4NQDHSg@mail.gmail.com>
	<4DB916DE.1050302@g.nevcal.com>
	<BANLkTikGVfox3dXkO7B5f5iQbX5L8ypNgw@mail.gmail.com>
	<4DB927F4.3040206@dcs.gla.ac.uk> <ipchp6$1ba$1@dough.gmane.org>
	<87wridkgio.fsf@benfinney.id.au>
	<4DBA11FF.6080606@canterbury.ac.nz>
Message-ID: <BANLkTi=638RcatGeB2F3nj8_BRGdj6xphQ@mail.gmail.com>

On Fri, Apr 29, 2011 at 2:18 AM, Greg Ewing <greg.ewing at canterbury.ac.nz> wrote:
> Taking a step back from all this, why does Python allow
> NaNs to arise from computations *at all*?

History, I think.  There's a c.l.p. message from Tim Peters somewhere
saying something along the lines that he'd love to make (e.g.,) 1e300
* 1e300 raise an exception instead of producing an infinity, but dare
not for fear of the resulting outcry from people who use the current
behaviour.  Apologies if I've misrepresented what he actually
said---I'm failing to find the exact message at the moment.

If it weren't for backwards compatibility, I'd love to see Python
raise exceptions instead of producing IEEE special values:  IOW, to
act as though the divide-by-zero, overflow and invalid_operation FP
signals all produce an exception.  As a bonus, perhaps there could be
a mode that allowed 'nonstop' arithmetic, under which infinities and
nans were produced as per IEEE 754:

    with math.non_stop_arithmetic():
        ...

But this is python-ideas territory.
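
For what it's worth, NumPy already offers something along these lines
for its own operations, which could serve as a model (this sketch uses
numpy's errstate, not any existing math API):

    import numpy as np

    # 'stop' mode: turn the IEEE signals into Python exceptions.
    with np.errstate(divide='raise', over='raise', invalid='raise'):
        try:
            np.float64(1.0) / np.float64(0.0)
        except FloatingPointError as e:
            print('raised:', e)

    # 'nonstop' mode: let infinities and NaNs flow through silently.
    with np.errstate(divide='ignore', invalid='ignore'):
        print(np.float64(0.0) / np.float64(0.0))   # nan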

Mark

From solipsis at pitrou.net  Sat Apr 30 12:37:22 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sat, 30 Apr 2011 12:37:22 +0200
Subject: [Python-Dev] Not-a-Number
References: <4DB7E3EA.3030208@avl.com>
	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>
	<87d3k79jvt.fsf@uwakimon.sk.tsukuba.ac.jp>
	<BANLkTi=AusPRDsf2zKDGteZ5dGxs0EEuXw@mail.gmail.com>
	<4DB90748.4030501@g.nevcal.com>
	<BANLkTi=eAug-2n+MsQvSpaet5PM4NQDHSg@mail.gmail.com>
	<4DB916DE.1050302@g.nevcal.com>
	<BANLkTikGVfox3dXkO7B5f5iQbX5L8ypNgw@mail.gmail.com>
	<4DB927F4.3040206@dcs.gla.ac.uk> <ipchp6$1ba$1@dough.gmane.org>
	<87wridkgio.fsf@benfinney.id.au>
	<4DBA11FF.6080606@canterbury.ac.nz>
	<BANLkTi=638RcatGeB2F3nj8_BRGdj6xphQ@mail.gmail.com>
Message-ID: <20110430123722.5ef9df15@pitrou.net>

On Sat, 30 Apr 2011 08:02:33 +0100
Mark Dickinson <dickinsm at gmail.com> wrote:
> On Fri, Apr 29, 2011 at 2:18 AM, Greg Ewing <greg.ewing at canterbury.ac.nz> wrote:
> > Taking a step back from all this, why does Python allow
> > NaNs to arise from computations *at all*?
> 
> History, I think.  There's a c.l.p. message from Tim Peters somewhere
> saying something along the lines that he'd love to make (e.g.,) 1e300
> * 1e300 raise an exception instead of producing an infinity, but dare
> not for fear of the resulting outcry from people who use the current
> behaviour.  Apologies if I've misrepresented what he actually
> said---I'm failing to find the exact message at the moment.
> 
> If it weren't for backwards compatibility, I'd love to see Python
> raise exceptions instead of producing IEEE special values:  IOW, to
> act as though the divide-by-zero, overflow and invalid_operation FP
> signals all produce an exception.  As a bonus, perhaps there could be
> a mode that allowed 'nonstop' arithmetic, under which infinities and
> nans were produced as per IEEE 754:
> 
>     with math.non_stop_arithmetic():
>         ...
> 
> But this is python-ideas territory.

I would much prefer this idea to the idea of making NaNs
non-orderable.  It would break code, but at least it would break
in less unexpected and annoying ways.

Regards

Antoine.



From victor.stinner at haypocalc.com  Sat Apr 30 14:06:33 2011
From: victor.stinner at haypocalc.com (Victor Stinner)
Date: Sat, 30 Apr 2011 14:06:33 +0200
Subject: [Python-Dev] [Python-checkins] cpython: PyGILState_Ensure(),
 PyGILState_Release(), PyGILState_GetThisThreadState() are
In-Reply-To: <BANLkTik7RtkXY5_EHgAUqSm6kkqpaAC=Qg@mail.gmail.com>
References: <E1QEqA7-0003Yx-9k@dinsdale.python.org>
	<BANLkTik7RtkXY5_EHgAUqSm6kkqpaAC=Qg@mail.gmail.com>
Message-ID: <1304165193.6598.4.camel@marge>

On Wednesday 27 April 2011 at 20:18 -0400, Jim Jewett wrote:
> Would it be a problem to make them available a no-ops?
> 
> On 4/26/11, victor.stinner <python-checkins at python.org> wrote:
> > http://hg.python.org/cpython/rev/75503c26a17f
> > changeset:   69584:75503c26a17f
> > user:        Victor Stinner <victor.stinner at haypocalc.com>
> > date:        Tue Apr 26 23:34:58 2011 +0200
> > summary:
> >   PyGILState_Ensure(), PyGILState_Release(), PyGILState_GetThisThreadState()
> > are
> > not available if Python is compiled without threads.

Oh, I realized that PyGILState_STATE may also be included only if Python
is compiled with threads.

--

PyGILState_Ensure() and PyGILState_Release() could be no-ops, yes; it
would simplify the usage of these functions. For example:

#ifdef WITH_THREAD
        PyGILState_STATE gil;
#endif
        fprintf(stderr, "object  : ");
#ifdef WITH_THREAD
        gil = PyGILState_Ensure();
#endif
        (void)PyObject_Print(op, stderr, 0);
#ifdef WITH_THREAD
        PyGILState_Release(gil);
#endif

--

Even without threads, a Python process has a PyThreadState structure, so
PyGILState_GetThisThreadState() can be patched to work even if Python is
compiled without threads.

--

Would you like to work on such a patch? Or at least open an issue?

Victor


From tim.peters at gmail.com  Sat Apr 30 16:17:28 2011
From: tim.peters at gmail.com (Tim Peters)
Date: Sat, 30 Apr 2011 10:17:28 -0400
Subject: [Python-Dev] Not-a-Number
In-Reply-To: <BANLkTi=638RcatGeB2F3nj8_BRGdj6xphQ@mail.gmail.com>
References: <4DB7E3EA.3030208@avl.com>
	<BANLkTik6Fr0e=5PLNTu4x=CT+v12tt3Tsg@mail.gmail.com>
	<87d3k79jvt.fsf@uwakimon.sk.tsukuba.ac.jp>
	<BANLkTi=AusPRDsf2zKDGteZ5dGxs0EEuXw@mail.gmail.com>
	<4DB90748.4030501@g.nevcal.com>
	<BANLkTi=eAug-2n+MsQvSpaet5PM4NQDHSg@mail.gmail.com>
	<4DB916DE.1050302@g.nevcal.com>
	<BANLkTikGVfox3dXkO7B5f5iQbX5L8ypNgw@mail.gmail.com>
	<4DB927F4.3040206@dcs.gla.ac.uk> <ipchp6$1ba$1@dough.gmane.org>
	<87wridkgio.fsf@benfinney.id.au> <4DBA11FF.6080606@canterbury.ac.nz>
	<BANLkTi=638RcatGeB2F3nj8_BRGdj6xphQ@mail.gmail.com>
Message-ID: <BANLkTi=DTEPAhVtEqacvoQY0+UKLyRbAJg@mail.gmail.com>

[Greg Ewing]
>> Taking a step back from all this, why does Python allow
>> NaNs to arise from computations *at all*?

[Mark Dickinson]
> History, I think.  There's a c.l.p. message from Tim Peters somewhere
> saying something along the lines that he'd love to make (e.g.,) 1e300
> * 1e300 raise an exception instead of producing an infinity, but dare
> not for fear of the resulting outcry from people who use the current
> behaviour.  Apologies if I've misrepresented what he actually
> said---I'm failing to find the exact message at the moment.
>
> If it weren't for backwards compatibility, I'd love to see Python
> raise exceptions instead of producing IEEE special values:  IOW, to
> act as though the divide-by-zero, overflow and invalid_operation FP
> signals all produce an exception.

Exactly.  It's impossible to create a NaN from "normal" inputs without
triggering div-by-0 or invalid_operation, and if overflow were also
enabled it would likewise be impossible to create an infinity from
normal inputs.  So, 20 years ago, that's how I arranged Kendall Square
Research's default numeric environment:  enabled those three exception
traps by default, and left the underflow and inexact exception traps
disabled by default.  It's not just "naive" users initially baffled by
NaNs and infinities; most of KSR's customers were heavy duty number
crunchers, and they didn't know what to make of them at first either.

But experts do find them very useful (after climbing the 754 learning
curve), so there was also a simple function call (from all the
languages we supported - C, C++, FORTRAN and Pascal), to establish the
754 default all-traps-disabled mode:

> As a bonus, perhaps there could be a mode that allowed 'nonstop'
> arithmetic, under which infinities and nans were produced as per IEEE 754:
>
>     with math.non_stop_arithmetic():
>         ...
>
> But this is python-ideas territory.

All of which is just moving toward the numeric environment 754 was
aiming for from the start:  complete user control over which exception
traps are and aren't currently enabled.  The only quibble I had with
that vision was its baffle-99%-of-users requirement that they _all_ be
disabled by default.

As Kahan wrote, it's called "an exception" because no matter _what_
you do, someone will take exception to your policy ;-)  That's why
user control is crucial in a 754 environment.  He wanted even more
control than 754 recommends (in particular, he wanted the user to be
able to specify _which_ value was returned when an exception
triggered; e.g., in some apps it may well be more useful for overflow
to produce a NaN than an infinity, or to return the largest normal
value with the correct sign).

Unfortunately, the hardware and academic types who created 754 had no
grasp of how difficult it is to materialize their vision in software,
and especially not of how very difficult it is to backstitch a
pleasant wholly conforming environment into an existing language.  As
a result, I'm afraid the bulk of 754's features are still viewed as
"a nuisance" by a vast majority of users :-(

From techtonik at gmail.com  Sat Apr 30 16:53:12 2011
From: techtonik at gmail.com (anatoly techtonik)
Date: Sat, 30 Apr 2011 17:53:12 +0300
Subject: [Python-Dev] Issue Tracker
In-Reply-To: <20110329013756.99EB8D64A7@kimball.webabinitio.net>
References: <4D90EA06.3030003@stoneleaf.us>
	<AANLkTikK=4Js-4Z2NRgmkhhkfKX_CufXTi3E0A2MhTPe@mail.gmail.com>
	<20110328223112.76482a9d@pitrou.net>
	<20110329013756.99EB8D64A7@kimball.webabinitio.net>
Message-ID: <BANLkTi=ppYhHd4hAHMGeByTN1aUcBF2WNg@mail.gmail.com>

On Tue, Mar 29, 2011 at 4:37 AM, R. David Murray <rdmurray at bitdance.com> wrote:
>
> The hardest part is debugging the TAL when you make a mistake, but
> even that isn't a whole lot worse than any other templating language.

How much worse is it, percentage-wise, than the Django templating language?
--
anatoly t.

From merwok at netwok.org  Sat Apr 30 17:57:47 2011
From: merwok at netwok.org (=?UTF-8?Q?=C3=89ric_Araujo?=)
Date: Sat, 30 Apr 2011 17:57:47 +0200
Subject: [Python-Dev] Socket servers in the test suite
In-Reply-To: <loom.20110429T175003-77@post.gmane.org>
References: "\"<loom.20110427T230704-75@post.gmane.org>"
	<ipe8bp$cje$1@dough.gmane.org>"
	<BANLkTim4qEXWbY1f02T4TJYPVx9h=zPstA@mail.gmail.com>
	<loom.20110429T175003-77@post.gmane.org>
Message-ID: <4695083000b3da82a1281e25fa44f1f0@netwok.org>

 Hi,

 On 29/04/2011 18:09, Vinay Sajip wrote:
> [Georg]
>>> BTW, didn't we agree not to put "pragma" comments into the stdlib 
>>> code?
> I'd be grateful for a link to the prior discussion - it must have 
> passed me by
> originally, and I searched python-dev on gmane but couldn't find any 
> threads
> about this.

 I remember only this: http://bugs.python.org/issue11572#msg131139

 Regards

From adrian3 at gmail.com  Sat Apr 30 04:13:33 2011
From: adrian3 at gmail.com (Adrian Johnston)
Date: Fri, 29 Apr 2011 19:13:33 -0700
Subject: [Python-Dev] running/stepping python backwards
Message-ID: <BANLkTinSQtdpOVKn0GhH4=cP6NnhGgOD0A@mail.gmail.com>

This may seem like an odd question, but I'm intrigued by the idea of using
Python as a data definition language with "undo" support.



If I were to try and instrument the Python interpreter to be able to step
backwards, would that be an unduly difficult or inefficient thing to do?


(Please reply to me directly.)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20110429/f719052f/attachment.html>