From benjamin at python.org  Sun Apr  1 00:53:58 2012
From: benjamin at python.org (Benjamin Peterson)
Date: Sat, 31 Mar 2012 18:53:58 -0400
Subject: [Python-Dev] Issue 14417: consequences of new dict runtime error
In-Reply-To: <CAP7+vJJ43-cJO7JR-PrKp_p5smhZtcmN1UE0DKbbpeSxbnzg2g@mail.gmail.com>
References: <20120329195825.843352500E9@webabinitio.net>
	<CAP7+vJ+nb7X+9bs=WP8Rf6797BxEhkaPhpn4d7_ZtsBc0NQ9jg@mail.gmail.com>
	<20120329203103.95A4B2500E9@webabinitio.net>
	<20120329204815.D7AC32500E9@webabinitio.net>
	<CAP7+vJJjDKBtBxTd3GXOMvNvHWn9mE4BJhoFsyrP9aE1=nbhVg@mail.gmail.com>
	<CADiSq7eLpqXu+wk6j2Qs66bv-XaOYC5_Q+xfSiWaDHXoPQeyLA@mail.gmail.com>
	<20120331174533.E0E612500E9@webabinitio.net>
	<CAP7+vJJ43-cJO7JR-PrKp_p5smhZtcmN1UE0DKbbpeSxbnzg2g@mail.gmail.com>
Message-ID: <CAPZV6o8E-e8suO=hCqiJVDn=F1KC8mktyiMc6iSSBj=gqrfKPA@mail.gmail.com>

2012/3/31 Guido van Rossum <guido at python.org>:
> Try reducing sys.setcheckinterval().

setcheckinterval() is a no-op since the New-GIL. sys.setswitchinterval
has superseded it.
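
For example, a minimal sketch of the replacement (note that the new API
takes a duration in seconds rather than a bytecode count):

    import sys

    # Old API: bytecode-count based; ignored (a no-op) under the new GIL.
    sys.setcheckinterval(100)

    # New API: request a thread switch roughly every 1 ms (in seconds).
    sys.setswitchinterval(0.001)
    print(sys.getswitchinterval())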



-- 
Regards,
Benjamin

From larry at hastings.org  Sun Apr  1 02:07:28 2012
From: larry at hastings.org (Larry Hastings)
Date: Sun, 01 Apr 2012 02:07:28 +0200
Subject: [Python-Dev] Use QueryPerformanceCounter() for time.monotonic()
 and/or time.highres()?
In-Reply-To: <CAMpsgwbVoUqLJObF8cw-r9BZOq9_iTr9W0yr7HTJHeEZgymw8Q@mail.gmail.com>
References: <CAMpsgwZmAr8GXAW653X0RFD-sFqOpG-M4AYrukei9P6mTXSoPQ@mail.gmail.com>
	<CAP7+vJ+enZ+WmubymwL=whD_0EiqSqOiVxmzBnvuAd3tfLc+=w@mail.gmail.com>
	<CAP7+vJ+1u_tUtr70+0L9wt7xmj9sHPv1=6=aQMkhWxqfhD1hWQ@mail.gmail.com>
	<CAP7+vJKuva5q=Y14pKTqbMijsVbFmH=vOi6cPsZ01XE3F9g+9A@mail.gmail.com>
	<CAMpsgwbVoUqLJObF8cw-r9BZOq9_iTr9W0yr7HTJHeEZgymw8Q@mail.gmail.com>
Message-ID: <4F779C40.9050601@hastings.org>


On 03/31/2012 12:47 AM, Victor Stinner wrote:
>> Can you go into more detail about QPC()'s issues?
> Yes, see the PEP:
> http://www.python.org/dev/peps/pep-0418/#windows-queryperformancecounter

FYI, Victor, the PEP is slightly incomplete.  Not that this is your 
fault--you've done your homework.  But I've actually lived through it.  
I was a professional Win32 developer for 15 years, and I attempted to 
write a game on Windows back in the early-mid 2000s.

On Windows XP, QPC /usually/ uses the ACPI timer in my experience, but 
sometimes uses RDTSC.  Both of these had troubles.

With TSC, there's the clock skew between the two cores that they claim 
was fixed in SP2.  (You could also sidestep this problem by setting core 
affinity to the same core for all your threads that were going to 
examine the time.)  But there's another problem: the TSC frequency 
actually *does* change when SpeedStep kicks in.  I know someone who 
complained bitterly about running Half-Life 2 on their shiny new laptop, 
and when it'd overheat SpeedStep would knock down the processor speed 
and the game's logic update rate would drop in half and now Gordon was 
running through molasses.

With the ACPI timer, that's where you saw the 
leap-forwards-by-several-seconds-under-heavy-load problem (the cited 
MSKB article KB274323).  That only happened with a specific south bridge 
chipset, which was Pentium-III-only.  I never heard about anyone 
experiencing that problem--personally I had good experiences with that 
timer.  The downside of the ACPI timer is that it's slow to read; it 
took just over a microsecond in my experiments.  (timeGetTime was 20x 
faster.  I don't know how long GetTickCount takes.)
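
For what it's worth, the per-call overhead is easy to ballpark from
Python itself -- a rough sketch, with the caveat that interpreter
overhead dominates the raw C-level numbers (on Windows, time.clock()
is the QPC-based one of the two):

    import time

    def call_cost(clock, n=100000):
        """Rough per-call cost of a clock function, in microseconds."""
        start = time.time()
        for _ in range(n):
            clock()
        return (time.time() - start) / n * 1e6

    print("time.time : %.2f us/call" % call_cost(time.time))
    print("time.clock: %.2f us/call" % call_cost(time.clock))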

The documentation warnings about timeBeginPeriod are ancient, like
Windows 95 era.
timeBeginPeriod(1) with confidence that you'll get consistent 1ms 
resolution.  Likewise with calling into winmm--it shipped with every OS 
3.3 supports.  It's just not a big deal and you don't need to mention it 
in the PEP.

I had a hypothetical idea for a hybrid software clock for games that 
would poll all possible sources of time (RDTSC, QPC, GetTickCount, 
timeGetTime) and do its level best to create a high-quality synthetic 
time.  Like, if QPC jumped forward by a huge amount, and that jump 
wasn't corroborated by the other time functions, it'd throw that delta 
away completely.  It'd also notice if QPC's frequency had changed due to 
SpeedStep and recalibrate.  And it'd handle rollover of timeGetTime().  
Of course, part of the problem is that calling all these clocks is 
slow.  Another is that if QPC is implemented using RDTSC and RDTSC has 
problems you're kind of out of options--your best clock at that point 
only has 1ms accuracy.  Anyway I never wound up getting this to work--my 
attempts were all full of nasty heuristics and the code turned into 
hash.  Maybe someone smarter than me could figure out how to get it to work.
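
For the curious, a rough, hypothetical sketch of the shape of such a
thing (platform-neutral; the clock choices, the tolerance and the whole
fusion rule here are made up for illustration):

    import time

    class SyntheticClock(object):
        """Fuse a preferred fine-grained clock with coarser reference
        clocks, discarding jumps that the references don't corroborate."""

        def __init__(self, primary, references, tolerance=0.1):
            self.primary = primary          # e.g. a QPC-style callable
            self.references = references    # e.g. [time.time]
            self.tolerance = tolerance      # max disagreement, in seconds
            self._last_primary = primary()
            self._last_refs = [ref() for ref in references]
            self._now = 0.0

        def now(self):
            p = self.primary()
            refs = [ref() for ref in self.references]
            delta = p - self._last_primary
            ref_deltas = sorted(r - last
                                for r, last in zip(refs, self._last_refs))
            median = ref_deltas[len(ref_deltas) // 2]
            # If the primary clock jumped and no reference corroborates
            # it, fall back to the coarser reference delta for this step.
            if abs(delta - median) > self.tolerance:
                delta = median
            self._last_primary = p
            self._last_refs = refs
            self._now += max(delta, 0.0)    # never run backwards
            return self._now

    # Hypothetical usage: time.clock() as the fine source (QPC on
    # Windows), time.time() as the sanity check.
    clock = SyntheticClock(time.clock, [time.time])
    print(clock.now())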

Sorry that this is incomplete / dashed off, but I'm still on vacation 
and it's been a few years since I did Windows timing stuff.  And I gotta 
go to bed--going to Madurodam in the morning!


//arry/

From ncoghlan at gmail.com  Sun Apr  1 03:41:23 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 1 Apr 2012 11:41:23 +1000
Subject: [Python-Dev] Issue 14417: consequences of new dict runtime error
In-Reply-To: <CAPZV6o8E-e8suO=hCqiJVDn=F1KC8mktyiMc6iSSBj=gqrfKPA@mail.gmail.com>
References: <20120329195825.843352500E9@webabinitio.net>
	<CAP7+vJ+nb7X+9bs=WP8Rf6797BxEhkaPhpn4d7_ZtsBc0NQ9jg@mail.gmail.com>
	<20120329203103.95A4B2500E9@webabinitio.net>
	<20120329204815.D7AC32500E9@webabinitio.net>
	<CAP7+vJJjDKBtBxTd3GXOMvNvHWn9mE4BJhoFsyrP9aE1=nbhVg@mail.gmail.com>
	<CADiSq7eLpqXu+wk6j2Qs66bv-XaOYC5_Q+xfSiWaDHXoPQeyLA@mail.gmail.com>
	<20120331174533.E0E612500E9@webabinitio.net>
	<CAP7+vJJ43-cJO7JR-PrKp_p5smhZtcmN1UE0DKbbpeSxbnzg2g@mail.gmail.com>
	<CAPZV6o8E-e8suO=hCqiJVDn=F1KC8mktyiMc6iSSBj=gqrfKPA@mail.gmail.com>
Message-ID: <CADiSq7dbiA9Sndk1cLgQ-3gRuKGHCEoQTEjy1KSc30opnEtkNw@mail.gmail.com>

On Apr 1, 2012 8:54 AM, "Benjamin Peterson" <benjamin at python.org> wrote:
>
> 2012/3/31 Guido van Rossum <guido at python.org>:
> > Try reducing sys.setcheckinterval().
>
> setcheckinterval() is a no-op since the New-GIL. sys.setswitchinterval
> has superseded it

Ah, that's at least one thing wrong with my initial attempt - I was still
thinking in terms of "number of bytecodes executed". Old habits die hard :)

--
Sent from my phone, thus the relative brevity :)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20120401/2124fd21/attachment.html>

From victor.stinner at gmail.com  Sun Apr  1 03:56:27 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Sun, 1 Apr 2012 03:56:27 +0200
Subject: [Python-Dev] Use QueryPerformanceCounter() for time.monotonic()
 and/or time.highres()?
In-Reply-To: <4F779C40.9050601@hastings.org>
References: <CAMpsgwZmAr8GXAW653X0RFD-sFqOpG-M4AYrukei9P6mTXSoPQ@mail.gmail.com>
	<CAP7+vJ+enZ+WmubymwL=whD_0EiqSqOiVxmzBnvuAd3tfLc+=w@mail.gmail.com>
	<CAP7+vJ+1u_tUtr70+0L9wt7xmj9sHPv1=6=aQMkhWxqfhD1hWQ@mail.gmail.com>
	<CAP7+vJKuva5q=Y14pKTqbMijsVbFmH=vOi6cPsZ01XE3F9g+9A@mail.gmail.com>
	<CAMpsgwbVoUqLJObF8cw-r9BZOq9_iTr9W0yr7HTJHeEZgymw8Q@mail.gmail.com>
	<4F779C40.9050601@hastings.org>
Message-ID: <CAMpsgwbTUdACDCiDE04Qg6zu4WR1omZ20W6=YrbGbdd8REBZzw@mail.gmail.com>

> FYI, Victor, the PEP is slightly incomplete.

Sure. What should be added to the PEP?

> But there's another problem: the TSC frequency actually *does*
> change when SpeedStep kicks in.  I know someone who complained bitterly
> about running Half-Life 2 on their shiny new laptop, and when it'd overheat
> SpeedStep would knock down the processor speed and the game's logic update
> rate would drop in half and now Gordon was running through molasses.

Yes, I already changed the PEP to not use QPC anymore for
time.monotonic() because it has too many issues.

I didn't mention the CPU frequency change issue in the PEP because I
failed to find recent information about this issue. Is it an old bug
or does it still occur with Windows Vista or Seven? Do Windows Vista
and Seven still use the TSC, or do they prefer other hardware clocks
like ACPI PMT or HPET?

Last info that I found: "Historically, the TSC increased with every
internal processor clock cycle, but now the rate is usually constant
(even if the processor changes frequency) and usually equals the
maximum processor frequency. The instruction RDTSC can be used to read
this counter."

> The documentation warnings about timeBeginPeriod is ancient, like Windows 95 era.

Which warning? The power consumption issue mentioned in the PEP?

> Likewise with calling into winmm--it shipped with every OS 3.3
> supports.  It's just not a big deal and you don't need to mention it in the
> PEP.

I mentioned that the function requires the winmm library because a
static or dynamic link to a library can be an issue, especially if we
use this function in the Python core. clock_gettime(CLOCK_REALTIME) is not
used on Linux for _PyTime_gettimeofday() because it requires linking
Python to the rt (real-time) library, but it is used by time.time() (I
changed it recently).
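
For reference, the call in question can also be reached from pure
Python through ctypes -- a Linux-only sketch, with the clock constants
copied from the system headers and the librt lookup assumed:

    import ctypes
    import ctypes.util

    CLOCK_REALTIME = 0     # values from <linux/time.h>
    CLOCK_MONOTONIC = 1

    class timespec(ctypes.Structure):
        _fields_ = [("tv_sec", ctypes.c_long), ("tv_nsec", ctypes.c_long)]

    librt = ctypes.CDLL(ctypes.util.find_library("rt"), use_errno=True)

    def clock_gettime(clk_id):
        ts = timespec()
        if librt.clock_gettime(clk_id, ctypes.byref(ts)) != 0:
            raise OSError(ctypes.get_errno(), "clock_gettime failed")
        return ts.tv_sec + ts.tv_nsec * 1e-9

    print(clock_gettime(CLOCK_MONOTONIC))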

Victor

From victor.stinner at gmail.com  Sun Apr  1 04:37:00 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Sun, 1 Apr 2012 04:37:00 +0200
Subject: [Python-Dev] Use QueryPerformanceCounter() for time.monotonic()
 and/or time.highres()?
In-Reply-To: <CAMpsgwY3uakpEhLm4uO5f3H6fAosavq6OG55fbvBdMus6EUj7w@mail.gmail.com>
References: <CAMpsgwZmAr8GXAW653X0RFD-sFqOpG-M4AYrukei9P6mTXSoPQ@mail.gmail.com>
	<CAP7+vJ+enZ+WmubymwL=whD_0EiqSqOiVxmzBnvuAd3tfLc+=w@mail.gmail.com>
	<CAP7+vJ+1u_tUtr70+0L9wt7xmj9sHPv1=6=aQMkhWxqfhD1hWQ@mail.gmail.com>
	<CAP7+vJKuva5q=Y14pKTqbMijsVbFmH=vOi6cPsZ01XE3F9g+9A@mail.gmail.com>
	<CAMpsgwbVoUqLJObF8cw-r9BZOq9_iTr9W0yr7HTJHeEZgymw8Q@mail.gmail.com>
	<CAP7+vJJ6tftkazM4suWW0-+ApOmCRxUR+MSfZT7Ns-EPVHbvxw@mail.gmail.com>
	<CAMpsgwasJOB4pYHHEfM1Wv+eBQmCD0n8tzo6d40WEL5CM7=WaA@mail.gmail.com>
	<CAP7+vJKWDNeAKV+_c--XWDrjohvmWCCV3jfd8iaKDD_YSQqxeg@mail.gmail.com>
	<CAMpsgwY3uakpEhLm4uO5f3H6fAosavq6OG55fbvBdMus6EUj7w@mail.gmail.com>
Message-ID: <CAMpsgwatGnMYrKEOs-GPd6fv-BKFBcL7T0-oMXuKH_CR_xRBgw@mail.gmail.com>

> If we provide a way to check if the monotonic clock is monotonic (or
> not), I agree to drop the flag from time.monotonic(fallback=True) and
> always fallback. I was never a fan of the "truly monotonic clock".
>
> time.clock_info('monotonic')['is_monotonic'] is a good candidate to
> store this information.

I updated the PEP to add time.get_clock_info() and to drop the
fallback parameter of time.monotonic() (which now always falls back).

Because "monotonic" word cannot define time.monotonic() anymore, I
suggest to rename the time.monotonic() function to time.steady(). So
we would have:

- time.steady() may or may not be monotonic, but it is as steady as possible.
- time.get_clock_info('steady')['is_monotonic'] which looks less
surprising than time.get_clock_info('monotonic')['is_monotonic']

It doesn't follow the C++ steady_clock definition, but it looks like
the Boost library doesn't follow the C++ definition... (it uses
CLOCK_MONOTONIC on Linux)

By the way, I now prefer to use CLOCK_MONOTONIC instead of
CLOCK_MONOTONIC_RAW on Linux. It is what I need in practice. If the
hardware clock is a little bit too fast or too slow, NTP adjusts its
rate so a delta of two timestamps is really a number of seconds. It's
not yet written explicitly in the PEP, but the unit of
time.monotonic/time.steady and time.highres is a second.

Victor

From ncoghlan at gmail.com  Sun Apr  1 05:33:36 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 1 Apr 2012 13:33:36 +1000
Subject: [Python-Dev] [Python-checkins] cpython: Issue #14435: Add
	Misc/NEWS and Misc/ACKS
In-Reply-To: <E1SDy4R-0005Q0-Dp@dinsdale.python.org>
References: <E1SDy4R-0005Q0-Dp@dinsdale.python.org>
Message-ID: <CADiSq7cn5c2BRg0fzQED_p8YcnOPbTNkCYOGgY29fxGydBBHrg@mail.gmail.com>

On Sat, Mar 31, 2012 at 11:10 PM, kristjan.jonsson
<python-checkins at python.org> wrote:
> diff --git a/Misc/ACKS b/Misc/ACKS
> --- a/Misc/ACKS
> +++ b/Misc/ACKS
> @@ -507,6 +507,7 @@
>  Richard Jones
>  Irmen de Jong
>  Lucas de Jonge
> +Kristján Valur Jónsson
>  Jens B. Jorgensen
>  John Jorgensen
>  Sijin Joseph

*blinks*

This must have been one of those cases where everyone assumed your
name was already there and never thought to check...

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From guido at python.org  Sun Apr  1 05:46:04 2012
From: guido at python.org (Guido van Rossum)
Date: Sat, 31 Mar 2012 20:46:04 -0700
Subject: [Python-Dev] Use QueryPerformanceCounter() for time.monotonic()
 and/or time.highres()?
In-Reply-To: <CAMpsgwatGnMYrKEOs-GPd6fv-BKFBcL7T0-oMXuKH_CR_xRBgw@mail.gmail.com>
References: <CAMpsgwZmAr8GXAW653X0RFD-sFqOpG-M4AYrukei9P6mTXSoPQ@mail.gmail.com>
	<CAP7+vJ+enZ+WmubymwL=whD_0EiqSqOiVxmzBnvuAd3tfLc+=w@mail.gmail.com>
	<CAP7+vJ+1u_tUtr70+0L9wt7xmj9sHPv1=6=aQMkhWxqfhD1hWQ@mail.gmail.com>
	<CAP7+vJKuva5q=Y14pKTqbMijsVbFmH=vOi6cPsZ01XE3F9g+9A@mail.gmail.com>
	<CAMpsgwbVoUqLJObF8cw-r9BZOq9_iTr9W0yr7HTJHeEZgymw8Q@mail.gmail.com>
	<CAP7+vJJ6tftkazM4suWW0-+ApOmCRxUR+MSfZT7Ns-EPVHbvxw@mail.gmail.com>
	<CAMpsgwasJOB4pYHHEfM1Wv+eBQmCD0n8tzo6d40WEL5CM7=WaA@mail.gmail.com>
	<CAP7+vJKWDNeAKV+_c--XWDrjohvmWCCV3jfd8iaKDD_YSQqxeg@mail.gmail.com>
	<CAMpsgwY3uakpEhLm4uO5f3H6fAosavq6OG55fbvBdMus6EUj7w@mail.gmail.com>
	<CAMpsgwatGnMYrKEOs-GPd6fv-BKFBcL7T0-oMXuKH_CR_xRBgw@mail.gmail.com>
Message-ID: <CAP7+vJK4HQmK-Cwc9x1-kDBcj=T0vxM0tBth1-qOkKY1HrWNcg@mail.gmail.com>

On Sat, Mar 31, 2012 at 7:37 PM, Victor Stinner
<victor.stinner at gmail.com> wrote:
>> If we provide a way to check if the monotonic clock is monotonic (or
>> not), I agree to drop the flag from time.monotonic(fallback=True) and
>> always fallback. I was never a fan of the "truly monotonic clock".
>>
>> time.clock_info('monotonic')['is_monotonic'] is a good candidate to
>> store this information.
>
> I updated the PEP to add time.get_clock_info() and to drop the
> fallback parameter of time.monotonic() (which now always falls back).
>
> Because "monotonic" word cannot define time.monotonic() anymore, I
> suggest to rename the time.monotonic() function to time.steady(). So
> we would have:
>
> - time.steady() may or may not be monotonic, but it is as steady as possible.
> - time.get_clock_info('steady')['is_monotonic'] which looks less
> surprising than time.get_clock_info('monotonic')['is_monotonic']
>
> It doesn't follow the C++ steady_clock definition, but it looks like
> the Boost library doesn't follow the C++ definition... (it uses
> CLOCK_MONOTONIC on Linux)
>
> By the way, I now prefer to use CLOCK_MONOTONIC instead of
> CLOCK_MONOTONIC_RAW on Linux. It is what I need in practice. If the
> hardware clock is a little bit too fast or too slow, NTP adjusts its
> rate so a delta of two timestamps is really a number of seconds. It's
> not yet written explicitly in the PEP, but the unit of
> time.monotonic/time.steady and time.highres is a second.

Hmm... I believe NTP can also slew the clock to deal with leap seconds
(which the POSIX standard requires must be ignored). That is, when a
leap second is inserted, the clock is supposed to hold its value for one
second. What actually happens is that for some time around the leap
second (I've heard maybe a day), the clock is slowed down slightly.
I'm guessing that this affects CLOCK_MONOTONIC but not
CLOCK_MONOTONIC_RAW. Personally I'd rather use the latter -- if I want
to be synchronous with wall clock time, I can just use time.time().

-- 
--Guido van Rossum (python.org/~guido)

From kristjan at ccpgames.com  Sun Apr  1 14:31:57 2012
From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=)
Date: Sun, 1 Apr 2012 12:31:57 +0000
Subject: [Python-Dev] [Python-checkins] cpython: Issue #14435: Add
	Misc/NEWS and	Misc/ACKS
In-Reply-To: <CADiSq7cn5c2BRg0fzQED_p8YcnOPbTNkCYOGgY29fxGydBBHrg@mail.gmail.com>
References: <E1SDy4R-0005Q0-Dp@dinsdale.python.org>,
	<CADiSq7cn5c2BRg0fzQED_p8YcnOPbTNkCYOGgY29fxGydBBHrg@mail.gmail.com>
Message-ID: <EFE3877620384242A686D52278B7CCD3382C99@RKV-IT-EXCH104.ccp.ad.local>

Wishing to cause minimal disruption, I actually read http://docs.python.org/devguide/committing.html where this file is mentioned as part of the commit checklist.  Never knew it existed before.
K

________________________________________
Frá: python-checkins-bounces+kristjan=ccpgames.com at python.org [python-checkins-bounces+kristjan=ccpgames.com at python.org] fyrir hönd Nick Coghlan [ncoghlan at gmail.com]
Sent: 1. apríl 2012 03:33
To: python-dev at python.org
Cc: python-checkins at python.org
Efni: Re: [Python-checkins] cpython: Issue #14435: Add Misc/NEWS and    Misc/ACKS

On Sat, Mar 31, 2012 at 11:10 PM, kristjan.jonsson
<python-checkins at python.org> wrote:
> diff --git a/Misc/ACKS b/Misc/ACKS
> --- a/Misc/ACKS
> +++ b/Misc/ACKS
> @@ -507,6 +507,7 @@
>  Richard Jones
>  Irmen de Jong
>  Lucas de Jonge
> +Kristján Valur Jónsson
>  Jens B. Jorgensen
>  John Jorgensen
>  Sijin Joseph

*blinks*

This must have been one of those cases where everyone assumed your
name was already there and never thought to check...

Cheers,
Nick.

--
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
_______________________________________________
Python-checkins mailing list
Python-checkins at python.org
http://mail.python.org/mailman/listinfo/python-checkins

From regebro at gmail.com  Sun Apr  1 15:16:44 2012
From: regebro at gmail.com (Lennart Regebro)
Date: Sun, 1 Apr 2012 15:16:44 +0200
Subject: [Python-Dev] datetime module and pytz with dateutil
In-Reply-To: <jl7ldl$9vm$1@dough.gmane.org>
References: <CAL3CFcWLgZ+G++it4HYsQRACTOg9YfGH9yMjzTAH8DwWH2nqFQ@mail.gmail.com>
	<jl42es$jj9$1@dough.gmane.org>
	<CAL0kPAV8o7L0Ar1hu2Yi=t-qNCZ7-iTCXRNQMot-mV0EAKTBWg@mail.gmail.com>
	<F48B0C27-EFCD-4A7B-9CCD-80973772005A@voidspace.org.uk>
	<jl7ldl$9vm$1@dough.gmane.org>
Message-ID: <CAL0kPAV2GRfyqac+ZGx2-UKw4eXrFjJh8h+cd28uMKGWGze7YA@mail.gmail.com>

On Sat, Mar 31, 2012 at 21:20, Terry Reedy <tjreedy at udel.edu> wrote:
> The Windows installer, by default, installs tcl/tk while Python on other
> systems uses the system install. Why can't we do the same for the Olson
> database?

The problem is that it needs updating.
We could include pytz, but it would be useless on Windows, unless you
also separately install the Olson database. But including it and
updating it is not Python's job and should not be.

//Lennart

From regebro at gmail.com  Sun Apr  1 15:17:34 2012
From: regebro at gmail.com (Lennart Regebro)
Date: Sun, 1 Apr 2012 15:17:34 +0200
Subject: [Python-Dev] PEP 418: Add monotonic clock
In-Reply-To: <CANF4RMn1T=dZrFOw_wNxDtFOHLhnx+ZKeT2VKFFxdsO34Op5_g@mail.gmail.com>
References: <CAMpsgwZ6QzbfHSj6vQjsNf4mHOF1GsLH11oCLUjQ0nVT9ayhEQ@mail.gmail.com>
	<4F710870.9090602@scottdial.com>
	<CAMpsgwbLBQ_wJ0nUVZTAWn=1ATva3U70GQn=EZyRCVYOBkZnww@mail.gmail.com>
	<4F72003F.6000606@voidspace.org.uk> <4F725D02.1080706@gmail.com>
	<4F72B258.10306@scottdial.com>
	<CAMpsgwbaVwKdazvFaf1DE0_X-qjAj-pO7mMD=FHa2CqZ+THGXA@mail.gmail.com>
	<4F72DBDE.6040003@scottdial.com>
	<CAMpsgwYYH0cRZ+Dff-izF6yM8wC8JkyqBvWGMj-cvHtTQwqFTg@mail.gmail.com>
	<CAL0kPAWHedc0a57unVvWNCLuT5oP_3u_o0dL+jGL9X3XtoqrUQ@mail.gmail.com>
	<CAMpsgwZ7+6ONeY=0jdDtzf2XbA=SmhYYXCoH4PzmEkQ_bPzQbA@mail.gmail.com>
	<CAL0kPAWRCZjbMBjMnODh_W1+QX6LXVJD9k-3xXoOsgFyjx1BoQ@mail.gmail.com>
	<CAL0kPAUiwm1w=7tJmG1QbF7vnvGeMzmo3UNUD7Z1oPCAEnARhg@mail.gmail.com>
	<CAP7+vJKkCQ0yqaSzXUnwfjU83LWiGBMQfCvBycf9ZamCRVyNWg@mail.gmail.com>
	<4F764F3B.2020306@pearwood.info>
	<CAL0kPAUWKRtZPERKNxUMZRqRzNFtS6afi9cV56pYiHV4FCAsaw@mail.gmail.com>
	<CANF4RMn1T=dZrFOw_wNxDtFOHLhnx+ZKeT2VKFFxdsO34Op5_g@mail.gmail.com>
Message-ID: <CAL0kPAVyfm1roSPr=6E9Yj8uJC=PFtR0zENJ0P7kbKbsZMV3Gw@mail.gmail.com>

On Sat, Mar 31, 2012 at 11:50, Nadeem Vawda <nadeem.vawda at gmail.com> wrote:
> Out of the big synonym list Guido posted, I rather like time.stopwatch() - it
> makes it more explicit that the purpose of the function is to measure intervals,
> rather than identifying absolute points in time.

I guess it's the least bad.

//Lennart

From animelovin at gmail.com  Sun Apr  1 17:49:44 2012
From: animelovin at gmail.com (Etienne Robillard)
Date: Sun, 01 Apr 2012 11:49:44 -0400
Subject: [Python-Dev] Issue 14417: consequences of new dict runtime error
In-Reply-To: <20120330192545.1CC4A2500E9@webabinitio.net>
References: <20120329195825.843352500E9@webabinitio.net>
	<CAP7+vJ+nb7X+9bs=WP8Rf6797BxEhkaPhpn4d7_ZtsBc0NQ9jg@mail.gmail.com>
	<20120329203103.95A4B2500E9@webabinitio.net>
	<jl2ih5$mtk$1@dough.gmane.org>
	<20120329220755.377052500E9@webabinitio.net>
	<4F75A510.7080401@gmail.com>
	<CAP7+vJ+jEro8gPU_3-i4kpFsL+NEGnBfHya3EwQbXrH2-RkPGw@mail.gmail.com>
	<4F75D501.2050400@gmail.com> <jl4kvo$fm9$1@dough.gmane.org>
	<4F75DA77.6040305@gmail.com> <jl4md6$324$1@dough.gmane.org>
	<4F75F4D2.4050706@gmail.com> <4F75FA16.6040704@stoneleaf.us>
	<4F7601D4.1080706@gmail.com>
	<CAP7+vJKOZLH2FZWv60q9+qqKgDQRoe-DqyhePy6hU-iJ9pumJQ@mail.gmail.com>
	<4F7605E0.6070503@gmail.com>
	<20120330192545.1CC4A2500E9@webabinitio.net>
Message-ID: <4F787918.5020501@gmail.com>

On 03/30/2012 03:25 PM, R. David Murray wrote:
> On Fri, 30 Mar 2012 15:13:36 -0400, Etienne Robillard<animelovin at gmail.com>  wrote:
>> So far I was only attempting to verify whether this is related to
>> PEP-416 or not. If this is indeed related PEP 416, then I must obviously
>> attest that I must still understand why a immutable dict would prevent
>> this bug or not...
>
> OK, that seems to be the source of your confusion, then.  This has
> nothing to do with PEP-416.
>
> We are talking about issue Issue 14417 (like it says in the subject),
> which in turn is a reaction to the fix for issue 14205.
>
> --David
>

Don't be so naive, David. This issue is more likely related to immutable 
dicts whether you like it or not, otherwise there would be no need to 
patch python 3.3 and include a new dict proxy type without exposing it 
fully.

And secondly, this is not only speculation but my humble understanding
of the interdependencies, which seem to be related to the inclusion of a
new dict proxy (immutable) mapper invariably affecting code that expects
a mutable dictionary lookup to succeed whenever the dict structure has
changed by (i.e.) overriding __hash__.

Now if you don't mind, I don't mind reminding you how cell phones can
be unsafe over extended periods of time for your health and brain, so I
really don't recommend their use for cloud-based platforms requiring
more advanced thread locking mechanisms than what is included in
traditional CPython using standard (mutable) dicts.

Regards,
Etienne

From tjreedy at udel.edu  Sun Apr  1 23:29:27 2012
From: tjreedy at udel.edu (Terry Reedy)
Date: Sun, 01 Apr 2012 17:29:27 -0400
Subject: [Python-Dev] datetime module and pytz with dateutil
In-Reply-To: <CAL0kPAV2GRfyqac+ZGx2-UKw4eXrFjJh8h+cd28uMKGWGze7YA@mail.gmail.com>
References: <CAL3CFcWLgZ+G++it4HYsQRACTOg9YfGH9yMjzTAH8DwWH2nqFQ@mail.gmail.com>
	<jl42es$jj9$1@dough.gmane.org>
	<CAL0kPAV8o7L0Ar1hu2Yi=t-qNCZ7-iTCXRNQMot-mV0EAKTBWg@mail.gmail.com>
	<F48B0C27-EFCD-4A7B-9CCD-80973772005A@voidspace.org.uk>
	<jl7ldl$9vm$1@dough.gmane.org>
	<CAL0kPAV2GRfyqac+ZGx2-UKw4eXrFjJh8h+cd28uMKGWGze7YA@mail.gmail.com>
Message-ID: <jlahc1$1an$1@dough.gmane.org>

On 4/1/2012 9:16 AM, Lennart Regebro wrote:
> On Sat, Mar 31, 2012 at 21:20, Terry Reedy<tjreedy at udel.edu>  wrote:
>> The Windows installer, by default, installs tcl/tk while Python on other
>> systems uses the system install. Why can't we do the same for the Olson
>> database?
>
> The problem is that it needs updating.
> We could include pytz, but it would be useless on Windows, unless you
> also separately install the Olson database. But including it and
> updating it is not Python's job and should not be.

My main point is that I (as a Windows user) do not think that 
difficulties with Windows should stop inclusion of a useful module. On 
import, pytz should check for database accessibility and raise an 
exception if not, possibly referring to a manual section on how to make it 
accessible.

-- 
Terry Jan Reedy


From brian at python.org  Sun Apr  1 23:46:01 2012
From: brian at python.org (Brian Curtin)
Date: Sun, 1 Apr 2012 16:46:01 -0500
Subject: [Python-Dev] .{git,bzr}ignore in cpython HG repo
In-Reply-To: <4F75CA7E.7030204@redhat.com>
References: <4F75CA7E.7030204@redhat.com>
Message-ID: <CAD+XWwpj3G38LsyNWA3K3Gw-649p2J676WxH_YaqYZ=y=+BmjA@mail.gmail.com>

On Fri, Mar 30, 2012 at 10:00, Matěj Cepl <mcepl at redhat.com> wrote:
> Why does HG cpython repo contains .{bzr,git}ignore at all?
> IMHO, all .*ignore files should be strictly repository dependent and they
> should not be mixed together.

For what reason? Are the git or bzr files causing issues on HG?

From guido at python.org  Mon Apr  2 00:52:03 2012
From: guido at python.org (Guido van Rossum)
Date: Sun, 1 Apr 2012 15:52:03 -0700
Subject: [Python-Dev] .{git,bzr}ignore in cpython HG repo
In-Reply-To: <4F75CA7E.7030204@redhat.com>
References: <4F75CA7E.7030204@redhat.com>
Message-ID: <CAP7+vJLRN2D3+pLxcHV+APzrcoWA9bNS3NfN6zpzpDK2LQRPgg@mail.gmail.com>

On Fri, Mar 30, 2012 at 8:00 AM, Matěj Cepl <mcepl at redhat.com> wrote:
> Why does HG cpython repo contains .{bzr,git}ignore at all?

So that when people switch between repo software the set of ignored
files remains constant. While the "official" repo isn't going to
switch any time soon, various developers for various reasons prefer
different repo software and the tools for copying repos work well
enough that people actually do this, for various workflow purposes. As
long as patches eventually find their way back into the central Hg
repo I have no problem in it.

> IMHO, all .*ignore files should be strictly repository dependent and they
> should not be mixed together.

No, because then everybody who copies a repo to a different tool would
have to start over from scratch.

> It is even worse, that (understandably) .{bzr,git}ignore are apparently
> poorly maintained, so in order to get an equivalent of .hgignore in
> .gitignore, one has to apply the attached patch.

Please file a bug to get this reviewed and checked in.

-- 
--Guido van Rossum (python.org/~guido)

From mcepl at redhat.com  Mon Apr  2 00:31:24 2012
From: mcepl at redhat.com (=?UTF-8?B?TWF0xJtqIENlcGw=?=)
Date: Mon, 02 Apr 2012 00:31:24 +0200
Subject: [Python-Dev] .{git,bzr}ignore in cpython HG repo
In-Reply-To: <CAD+XWwpj3G38LsyNWA3K3Gw-649p2J676WxH_YaqYZ=y=+BmjA@mail.gmail.com>
References: <4F75CA7E.7030204@redhat.com>
	<CAD+XWwpj3G38LsyNWA3K3Gw-649p2J676WxH_YaqYZ=y=+BmjA@mail.gmail.com>
Message-ID: <4F78D73C.4000204@redhat.com>

On 1.4.2012 23:46, Brian Curtin wrote:
> For what reason? Are the git or bzr files causing issues on HG?

No, but wrong .gitignore causes issues with git repo obtained via 
hg-fast-import. If it is meant as an intentional sabotage of using git 
(and bzr) for cpython, then that's the only explanation I can 
understand, otherwise it doesn't make sense to me why these files are in 
HG repository at all.

Matěj
-- 
http://www.ceplovi.cz/matej/, Jabber: mcepl<at>ceplovi.cz
GPG Finger: 89EF 4BC6 288A BF43 1BAB  25C3 E09F EF25 D964 84AC

Somewhere at the edge of the Bell curve was the girl for me.
     -- Based on http://xkcd.com/314/


From tseaver at palladion.com  Mon Apr  2 02:32:47 2012
From: tseaver at palladion.com (Tres Seaver)
Date: Sun, 01 Apr 2012 20:32:47 -0400
Subject: [Python-Dev] .{git,bzr}ignore in cpython HG repo
In-Reply-To: <4F78D73C.4000204@redhat.com>
References: <4F75CA7E.7030204@redhat.com>
	<CAD+XWwpj3G38LsyNWA3K3Gw-649p2J676WxH_YaqYZ=y=+BmjA@mail.gmail.com>
	<4F78D73C.4000204@redhat.com>
Message-ID: <jlas3k$24l$1@dough.gmane.org>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On 04/01/2012 06:31 PM, Matěj Cepl wrote:
> On 1.4.2012 23:46, Brian Curtin wrote:
>> For what reason? Are the git or bzr files causing issues on HG?
> 
> No, but wrong .gitignore causes issues with git repo obtained via 
> hg-fast-import. If it is meant as an intentional sabotage of using git
>  (and bzr) for cpython, then that's the only explanation I can 
> understand, otherwise it doesn't make sense to me why these files are
> in HG repository at all.

Hanlon's Razor, paraphrased:  "Never attribute to malice that which can
be adequately explained by [bitrot]."

Actually, the Goethe quote from [1] is even more apropos:
"[M]isunderstandings and neglect create more confusion in this world than
trickery and malice. At any rate, the last two are certainly much less
frequent."

[1] http://en.wikipedia.org/wiki/Hanlon's_razor




Tres.
- -- 
===================================================================
Tres Seaver          +1 540-429-0999          tseaver at palladion.com
Palladion Software   "Excellence by Design"    http://palladion.com
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.10 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iEYEARECAAYFAk94868ACgkQ+gerLs4ltQ505gCghFGqdB6KUMExjxAxjkb1vGu2
/GMAn3k/wNqphKwancGHWageYGpefzTB
=KJrm
-----END PGP SIGNATURE-----


From cs at zip.com.au  Mon Apr  2 02:43:27 2012
From: cs at zip.com.au (Cameron Simpson)
Date: Mon, 2 Apr 2012 10:43:27 +1000
Subject: [Python-Dev] an alternative to embedding policy in PEP 418 (was:
 PEP 418: Add monotonic clock)
In-Reply-To: <CAMpsgwZ7+6ONeY=0jdDtzf2XbA=SmhYYXCoH4PzmEkQ_bPzQbA@mail.gmail.com>
References: <CAMpsgwZ7+6ONeY=0jdDtzf2XbA=SmhYYXCoH4PzmEkQ_bPzQbA@mail.gmail.com>
Message-ID: <20120402004327.GA18861@cskk.homeip.net>

On 28Mar2012 23:40, Victor Stinner <victor.stinner at gmail.com> wrote:
| > Does this primarily give a high resolution clock, or primarily a
| > monotonic clock? That's not clear from either the name, or the PEP.
| 
| I expect a better resolution from time.monotonic() than time.time(). I
| don't have exact numbers right now, but I began to document each OS
| clock in the PEP.

I wish to raise an alternative to these set-in-stone policy-in-the-library
choices, and an alternative to any proposal that does fallback in a function
whose name suggests otherwise.

Off in another thread on PEP 418 I suggested a cleaner approach to
offering clocks to the user: let the user ask!

My (just two!) posts on this are here:

  http://www.mail-archive.com/python-dev at python.org/msg66174.html
  http://www.mail-archive.com/python-dev at python.org/msg66179.html

The second post is more important as it fleshes out my reasons for
considering this approach better.

I've just finished sketching out a skeleton here:

  https://bitbucket.org/cameron_simpson/css/src/fb476fcdcfce/lib/python/cs/clockutils.py

In short:

  - take Victor's hard work on system clocks and classify them by
    feature set

  - tabulate access to them in a list of clock objects

  - base access class goes like this (example user call):

      # get a clock object - often a singleton under the hood
      T = get_clock(T_MONOTONIC|T_HIRES) or get_clock(T_STEADY|T_HIRES)
      # what kind of clock did I get?
      print T.flags
      # get the current time
      now = T.now

  - offer monotonic() and/or steady() etc as convenience functions
    calling get_clock() in a fashion like the above example

  - don't try to guess the user's use case ahead of time

This removes policy from the library functions and makes it both simple
and obvious in the user's calling code, and also makes it possible for
the user to inspect the clock and find out what quality/flavour of clock
they got.

Please have a glance through the code, especially the top and bottom bits;
it is only 89 lines long and includes (presently) just a simple object for
time.time() and (importantly for the bikeshedding) an example synthetic
clock to give a monotonic caching clock from another non-monotonic clock
(default, again, time.time() in this prototype).

Suitably fleshed out with access to the various system clocks, this can
offer all the current bikeshedding in a simple interface and without
constraining user choices to "what we thought of, or what we thought
likely".

Cheers,
-- 
Cameron Simpson <cs at zip.com.au> DoD#743
http://www.cskk.ezoshosting.com/cs/

Availability: Samples Q1/97
              Volume  H2/97
So, it's vapor right now, but if you want to sell vapor in 1997 you
better had damn fast vapor then...
        - Burkhard Neidecker-Lutz on the DEC Alpha 21264, October 1996

From brian at python.org  Mon Apr  2 02:44:00 2012
From: brian at python.org (Brian Curtin)
Date: Sun, 1 Apr 2012 19:44:00 -0500
Subject: [Python-Dev] .{git,bzr}ignore in cpython HG repo
In-Reply-To: <4F78D73C.4000204@redhat.com>
References: <4F75CA7E.7030204@redhat.com>
	<CAD+XWwpj3G38LsyNWA3K3Gw-649p2J676WxH_YaqYZ=y=+BmjA@mail.gmail.com>
	<4F78D73C.4000204@redhat.com>
Message-ID: <CAD+XWwowBpgCEFBcqvhfT1CWh1JomtegQ1+Lbt=mAq95-XHLmQ@mail.gmail.com>

On Sun, Apr 1, 2012 at 17:31, Matěj Cepl <mcepl at redhat.com> wrote:
> On 1.4.2012 23:46, Brian Curtin wrote:
>>
>> For what reason? Are the git or bzr files causing issues on HG?
>
>
> No, but wrong .gitignore causes issues with git repo obtained via
> hg-fast-import. If it is meant as an intentional sabotage of using git (and
> bzr) for cpython, then that's the only explanation I can understand,
> otherwise it doesn't make sense to me why these files are in HG repository
> at all.

Then you won't understand. Sometimes things get out of date when they
aren't used or maintained.

You're welcome to fix the problem if you're a Git user, as suggested earlier.

From stephen at xemacs.org  Mon Apr  2 04:18:04 2012
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Mon, 2 Apr 2012 11:18:04 +0900
Subject: [Python-Dev] Use QueryPerformanceCounter() for time.monotonic()
 and/or time.highres()?
In-Reply-To: <CAP7+vJJ6tftkazM4suWW0-+ApOmCRxUR+MSfZT7Ns-EPVHbvxw@mail.gmail.com>
References: <CAMpsgwZmAr8GXAW653X0RFD-sFqOpG-M4AYrukei9P6mTXSoPQ@mail.gmail.com>
	<CAP7+vJ+enZ+WmubymwL=whD_0EiqSqOiVxmzBnvuAd3tfLc+=w@mail.gmail.com>
	<CAP7+vJ+1u_tUtr70+0L9wt7xmj9sHPv1=6=aQMkhWxqfhD1hWQ@mail.gmail.com>
	<CAP7+vJKuva5q=Y14pKTqbMijsVbFmH=vOi6cPsZ01XE3F9g+9A@mail.gmail.com>
	<CAMpsgwbVoUqLJObF8cw-r9BZOq9_iTr9W0yr7HTJHeEZgymw8Q@mail.gmail.com>
	<CAP7+vJJ6tftkazM4suWW0-+ApOmCRxUR+MSfZT7Ns-EPVHbvxw@mail.gmail.com>
Message-ID: <CAL_0O1-sifGYAZi4uDqRyLYXD5148wEsmro1-7CcKQLne9pYjw@mail.gmail.com>

On Sat, Mar 31, 2012 at 8:46 AM, Guido van Rossum <guido at python.org> wrote:
> Given the amount of disagreement I sense, I think we'll need to wait
> for more people to chime in.

I currently can't imagine why I *personally* would want anything
better than what we currently call time.time.  For that reason, I like
Cameron's proposal best.  If and when I have a use case, I'll be able
to query the system for the clock that has the best combination of
desirable properties.  Admittedly, by then the answer probably will be
"time.time."<wink/>

From ncoghlan at gmail.com  Mon Apr  2 04:24:27 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 2 Apr 2012 12:24:27 +1000
Subject: [Python-Dev] .{git,bzr}ignore in cpython HG repo
In-Reply-To: <4F78D73C.4000204@redhat.com>
References: <4F75CA7E.7030204@redhat.com>
	<CAD+XWwpj3G38LsyNWA3K3Gw-649p2J676WxH_YaqYZ=y=+BmjA@mail.gmail.com>
	<4F78D73C.4000204@redhat.com>
Message-ID: <CADiSq7cG9+hgk4UufpPf-Jo7g=5a8TTL_UYeDG7P8DbhQmqMFQ@mail.gmail.com>

On Mon, Apr 2, 2012 at 8:31 AM, Matěj Cepl <mcepl at redhat.com> wrote:
> On 1.4.2012 23:46, Brian Curtin wrote:
>>
>> For what reason? Are the git or bzr files causing issues on HG?
>
>
> No, but wrong .gitignore causes issues with git repo obtained via
> hg-fast-import. If it is meant as an intentional sabotage of using git (and
> bzr) for cpython, then that's the only explanation I can understand,
> otherwise it doesn't make sense to me why these files are in HG repository
> at all.

As Guido explained, the bzr and git ignore files are there to allow
Git and Bzr users to collaborate on them, updating a standard copy
when the .hgignore entries change (which doesn't happen very often).

If they get outdated (or otherwise contain erroneous entries), then
the appropriate response is to either update them directly (core
developers that use a different DVCS for their local workflow), or
raise a tracker issue pointing out that they have become stale
(everyone else that uses a different DVCS for their local workflow).

Regards,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From scott+python-dev at scottdial.com  Mon Apr  2 05:26:48 2012
From: scott+python-dev at scottdial.com (Scott Dial)
Date: Sun, 01 Apr 2012 23:26:48 -0400
Subject: [Python-Dev] .{git,bzr}ignore in cpython HG repo
In-Reply-To: <4F75CA7E.7030204@redhat.com>
References: <4F75CA7E.7030204@redhat.com>
Message-ID: <4F791C78.50606@scottdial.com>

On 3/30/2012 11:00 AM, Matěj Cepl wrote:
> It is even worse, that (understandably) .{bzr,git}ignore are apparently
> poorly maintained, so in order to get an equivalent of .hgignore in
> .gitignore, one has to apply the attached patch.

Create an issue on the bug tracker. In the meantime, you can either
commit the change to your clone, or you can put your ignores into
.git/info/exclude. No reason to be so sore about it, since Git lets you
have your own ignore file without requiring it be a tracked file.

-- 
Scott Dial
scott at scottdial.com

From georg at python.org  Mon Apr  2 07:43:52 2012
From: georg at python.org (Georg Brandl)
Date: Mon, 02 Apr 2012 07:43:52 +0200
Subject: [Python-Dev] [RELEASED] Python 3.3.0 alpha 1
Message-ID: <4F793C98.8030504@python.org>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On behalf of the Python development team, I'm happy to announce the
second alpha release of Python 3.3.0.

This is a preview release, and its use is not recommended in
production settings.

Python 3.3 includes a range of improvements of the 3.x series, as well
as easier porting between 2.x and 3.x.  Major new features and changes
in the 3.3 release series are:

* PEP 380, Syntax for Delegating to a Subgenerator ("yield from")
* PEP 393, Flexible String Representation (doing away with the
  distinction between "wide" and "narrow" Unicode builds)
* PEP 409, Suppressing Exception Context
* PEP 3151, Reworking the OS and IO exception hierarchy
* A C implementation of the "decimal" module, with up to 80x speedup
  for decimal-heavy applications
* The new "packaging" module, building upon the "distribute" and
  "distutils2" projects and deprecating "distutils"
* The new "lzma" module with LZMA/XZ support
* PEP 3155, Qualified name for classes and functions
* PEP 414, explicit Unicode literals to help with porting
* The new "faulthandler" module that helps diagnosing crashes
* Wrappers for many more POSIX functions in the "os" and "signal"
  modules, as well as other useful functions such as "sendfile()"
* Hash randomization, introduced in earlier bugfix releases, is now
  switched on by default.

For a more extensive list of changes in 3.3.0, see

    http://docs.python.org/3.3/whatsnew/3.3.html (*)

To download Python 3.3.0 visit:

    http://www.python.org/download/releases/3.3.0/

Please consider trying Python 3.3.0 with your code and reporting any bugs
you may notice to:

    http://bugs.python.org/


Enjoy!

(*) Please note that this document is usually finalized late in the release
    cycle and therefore may have stubs and missing entries at this point.

- --
Georg Brandl, Release Manager
georg at python.org
(on behalf of the entire python-dev team and 3.3's contributors)
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.19 (GNU/Linux)

iEYEARECAAYFAk95PJgACgkQN9GcIYhpnLCN1QCfeYWp+QbPGYhaLSxc4YKnlE/F
zU8An2q5qzvjL0qaxqaYleFGoGKPzzJu
=qo4v
-----END PGP SIGNATURE-----

From georg at python.org  Mon Apr  2 07:55:41 2012
From: georg at python.org (Georg Brandl)
Date: Mon, 02 Apr 2012 07:55:41 +0200
Subject: [Python-Dev] [RELEASED] Python 3.3.0 alpha 2
Message-ID: <4F793F5D.3040808@python.org>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

- -----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On behalf of the Python development team, I'm happy to announce the
second alpha release of Python 3.3.0.

This is a preview release, and its use is not recommended in
production settings.

Python 3.3 includes a range of improvements of the 3.x series, as well
as easier porting between 2.x and 3.x.  Major new features and changes
in the 3.3 release series are:

* PEP 380, Syntax for Delegating to a Subgenerator ("yield from")
* PEP 393, Flexible String Representation (doing away with the
  distinction between "wide" and "narrow" Unicode builds)
* PEP 409, Suppressing Exception Context
* PEP 3151, Reworking the OS and IO exception hierarchy
* A C implementation of the "decimal" module, with up to 80x speedup
  for decimal-heavy applications
* The new "packaging" module, building upon the "distribute" and
  "distutils2" projects and deprecating "distutils"
* The new "lzma" module with LZMA/XZ support
* PEP 3155, Qualified name for classes and functions
* PEP 414, explicit Unicode literals to help with porting
* The new "faulthandler" module that helps diagnosing crashes
* Wrappers for many more POSIX functions in the "os" and "signal"
  modules, as well as other useful functions such as "sendfile()"
* Hash randomization, introduced in earlier bugfix releases, is now
  switched on by default.

For a more extensive list of changes in 3.3.0, see

    http://docs.python.org/3.3/whatsnew/3.3.html (*)

To download Python 3.3.0 visit:

    http://www.python.org/download/releases/3.3.0/

Please consider trying Python 3.3.0 with your code and reporting any bugs
you may notice to:

    http://bugs.python.org/


Enjoy!

(*) Please note that this document is usually finalized late in the release
    cycle and therefore may have stubs and missing entries at this point.

- - --
Georg Brandl, Release Manager
georg at python.org
(on behalf of the entire python-dev team and 3.3's contributors)
- -----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.19 (GNU/Linux)

iEYEARECAAYFAk95PJgACgkQN9GcIYhpnLCN1QCfeYWp+QbPGYhaLSxc4YKnlE/F
zU8An2q5qzvjL0qaxqaYleFGoGKPzzJu
=qo4v
- -----END PGP SIGNATURE-----
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.19 (GNU/Linux)

iEYEARECAAYFAk95P10ACgkQN9GcIYhpnLBo8QCePW2BuTqXSmtVl6/Yae1HWrGB
UFgAn0ytSqd70vq58gj9N8xUtKC+BJ2D
=3DA/
-----END PGP SIGNATURE-----

From martin at v.loewis.de  Mon Apr  2 08:03:34 2012
From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=)
Date: Mon, 02 Apr 2012 08:03:34 +0200
Subject: [Python-Dev] .{git,bzr}ignore in cpython HG repo
In-Reply-To: <4F78D73C.4000204@redhat.com>
References: <4F75CA7E.7030204@redhat.com>	<CAD+XWwpj3G38LsyNWA3K3Gw-649p2J676WxH_YaqYZ=y=+BmjA@mail.gmail.com>
	<4F78D73C.4000204@redhat.com>
Message-ID: <4F794136.2060301@v.loewis.de>

Am 02.04.2012 00:31, schrieb Matěj Cepl:
> On 1.4.2012 23:46, Brian Curtin wrote:
>> For what reason? Are the git or bzr files causing issues on HG?
> 
> No, but wrong .gitignore causes issues with git repo obtained via
> hg-fast-import. If it is meant as an intentional sabotage of using git
> (and bzr) for cpython, then that's the only explanation I can
> understand, otherwise it doesn't make sense to me why these files are in
> HG repository at all.
>

Sabotage, most certainly.

Regards,
Martin

From mail at timgolden.me.uk  Mon Apr  2 09:44:06 2012
From: mail at timgolden.me.uk (Tim Golden)
Date: Mon, 02 Apr 2012 08:44:06 +0100
Subject: [Python-Dev] .{git,bzr}ignore in cpython HG repo
In-Reply-To: <4F794136.2060301@v.loewis.de>
References: <4F75CA7E.7030204@redhat.com>	<CAD+XWwpj3G38LsyNWA3K3Gw-649p2J676WxH_YaqYZ=y=+BmjA@mail.gmail.com>
	<4F78D73C.4000204@redhat.com> <4F794136.2060301@v.loewis.de>
Message-ID: <4F7958C6.9020609@timgolden.me.uk>

On 02/04/2012 07:03, "Martin v. Löwis" wrote:
> Am 02.04.2012 00:31, schrieb Matěj Cepl:
>> On 1.4.2012 23:46, Brian Curtin wrote:
>>> For what reason? Are the git or bzr files causing issues on HG?
>>
>> No, but wrong .gitignore causes issues with git repo obtained via
>> hg-fast-import. If it is meant as an intentional sabotage of using git
>> (and bzr) for cpython, then that's the only explanation I can
>> understand, otherwise it doesn't make sense to me why these files are in
>> HG repository at all.
>>
>
> Sabotage, most certainly.

I had to laugh. It's the deadpan delivery.

TJG

From sam.partington at gmail.com  Mon Apr  2 11:31:53 2012
From: sam.partington at gmail.com (Sam Partington)
Date: Mon, 2 Apr 2012 10:31:53 +0100
Subject: [Python-Dev] Use QueryPerformanceCounter() for time.monotonic()
 and/or time.highres()?
In-Reply-To: <CAP7+vJ+enZ+WmubymwL=whD_0EiqSqOiVxmzBnvuAd3tfLc+=w@mail.gmail.com>
References: <CAMpsgwZmAr8GXAW653X0RFD-sFqOpG-M4AYrukei9P6mTXSoPQ@mail.gmail.com>
	<CAP7+vJ+enZ+WmubymwL=whD_0EiqSqOiVxmzBnvuAd3tfLc+=w@mail.gmail.com>
Message-ID: <CABuPkmRU8JyS3co2LEC-fGpfU4N2=kpP7J83h9SbpdfsyU2kRQ@mail.gmail.com>

On 30 March 2012 21:52, Guido van Rossum <guido at python.org> wrote:
> Oh dear. I really want to say that 15 ms is good enough. Some possible
> exceptions I can think of:
>
> - Profiling. But this really wants to measure CPU time anyways, and it
> already uses a variety of hacks and heuristics to pick the best timer,
> so I don't really care.

That depends on what you're profiling.  If you're profiling CPU bound
algorithms then yes CPU time is better. But if you're trying to
profile/measure hardware device/comms performance for example then CPU
time is of no interest.  There, on Windows the 15ms resolution of
time.time makes it useless.  For some reason I always forget this and
sit looking at trace outs for 5 minutes wondering why everything takes
either 0, 15, or 30ms.

For nearly all my use cases I'm not terribly interested in
monotonicity, or stability in suspend/resume states, so I won't give my
opinions on those (though I can see they are good things and can well
imagine needing them one day); I just want an easy way of getting at
least microsecond resolution cross-platform.

I don't mind particularly what you call it but FWIW 'highres' seems a
bit odd to me.  It seems that highres is likely to seem lowres one
day, and then you need to add higherres() and then
evenhigherthanthatres().

I would go with microtime(), or nanotime() -- it doesn't make any
promises about anything other than the resolution.

Sam

From mcepl at redhat.com  Mon Apr  2 12:25:49 2012
From: mcepl at redhat.com (Matej Cepl)
Date: Mon, 02 Apr 2012 12:25:49 +0200
Subject: [Python-Dev] .{git,bzr}ignore in cpython HG repo
In-Reply-To: <4F791C78.50606@scottdial.com>
References: <4F75CA7E.7030204@redhat.com> <4F791C78.50606@scottdial.com>
Message-ID: <4F797EAD.7030702@redhat.com>

On 2.4.2012 05:26, Scott Dial wrote:
> Create an issue on the bug tracker. In the meantime, you can either
> commit the change to your clone, or you can put your ignores into
> .git/info/exclude. No reason to be so sore about it, since Git lets you
> have your own ignore file without requiring it be a tracked file.

And yes, I am sorry for the tone of my original post. The fact that I
didn't understand the reason doesn't excuse me.

Matěj


From mcepl at redhat.com  Mon Apr  2 12:26:31 2012
From: mcepl at redhat.com (=?UTF-8?B?TWF0xJtqIENlcGw=?=)
Date: Mon, 02 Apr 2012 12:26:31 +0200
Subject: [Python-Dev] .{git,bzr}ignore in cpython HG repo
In-Reply-To: <CAP7+vJLRN2D3+pLxcHV+APzrcoWA9bNS3NfN6zpzpDK2LQRPgg@mail.gmail.com>
References: <4F75CA7E.7030204@redhat.com>
	<CAP7+vJLRN2D3+pLxcHV+APzrcoWA9bNS3NfN6zpzpDK2LQRPgg@mail.gmail.com>
Message-ID: <4F797ED7.8000003@redhat.com>

On 2.4.2012 00:52, Guido van Rossum wrote:
> Please file a bug to get this reviewed and checked in.

OK, I don't agree with the reasoning, but I willingly submit to BDFL ;)

http://bugs.python.org/issue14472

Matěj

From victor.stinner at gmail.com  Mon Apr  2 13:37:46 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Mon, 2 Apr 2012 13:37:46 +0200
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
 (was: PEP 418: Add monotonic clock)
In-Reply-To: <20120402004327.GA18861@cskk.homeip.net>
References: <CAMpsgwZ7+6ONeY=0jdDtzf2XbA=SmhYYXCoH4PzmEkQ_bPzQbA@mail.gmail.com>
	<20120402004327.GA18861@cskk.homeip.net>
Message-ID: <CAMpsgwYnTYyu=XXfysgq5bTeVY5UAxNimOpVz3qzbsOyPcYMDg@mail.gmail.com>

> I've just finished sketching out a skeleton here:
>
>   https://bitbucket.org/cameron_simpson/css/src/fb476fcdcfce/lib/python/cs/clockutils.py

get_clock() returns None if no clock has the requested flags, whereas
I expected an exception (LookupError or NotImplementedError?).

get_clock() doesn't remember if a clock works or not (if it raises an
OSError) and does not fall back to the next clock on error. See the
pseudo-code in PEP 418.

The idea of flags attached to each clock is interesting, but I don't
like the need for different lists of clocks. Should I use
MONOTONIC_CLOCKS or HIRES_CLOCKS when I would like a monotonic and
high-resolution clock? It would be simpler to have only one global and
*private* list.

If you have only one list of clocks, how do you sort the list to get
QueryPerformanceCounter when the user asks for highres and
GetTickCount when the user asks for monotonic? The "if clock.flags &
flags == flags:" test in get_clock() is maybe not enough. I suppose
that we would have the following flags for Windows functions:

QueryPerformanceCounter.flags = T_HIRES
GetTickCount.flags = T_MONOTONIC | T_STEADY

(or maybe QueryPerformanceCounter.flags = T_HIRES | T_MONOTONIC ?)

monotonic_clock() should maybe try to get a clock using the following
list of conditions:
 - T_MONOTONIC | T_STEADY
 - T_MONOTONIC | T_HIGHRES
 - T_MONOTONIC

The T_HIGHRES flag is unclear, even in the PEP. According to the PEP,
any monotonic clock is considered a "high-resolution" clock. Do you
agree? So we would have:

GetTickCount.flags = T_MONOTONIC | T_STEADY | T_HIGHRES

Even if GetTickCount has only an accuracy of 15 ms :-/

Could you please give the list of flags for each clock listed in the
PEP? Only clocks used for time.time, time.monotonic and time.highres
(not process and thread clocks nor QueryUnbiasedInterruptTime).

>       # get a clock object - often a singleton under the hood
>       T = get_clock(T_MONOTONIC|T_HIRES) or get_clock(T_STEADY|T_HIRES)
>       # what kind of clock did I get?
>       print T.flags
>       # get the current time
>       now = T.now

The API looks much more complex than the API proposed in PEP 418 just
to get the time. You have to call a function to get a function, and
then call the function, instead of just calling a function directly.

Instead of returning an object with a now() method, I would prefer to
directly get the function that returns the time, and another function
that returns the clock's "metadata".

> This removes policy from the library functions and makes it both simple
> and obvious in the user's calling code, and also makes it possible for
> the user to inspect the clock and find out what quality/flavour of clock
> they got.

I'm not sure that users correctly understand the differences between
all these clocks and are able to use your API correctly. How should I
combine these 3 flags (T_HIRES, T_MONOTONIC and T_STEADY)? Can I use
any combination?

Which flags are "portable"? Or should I always use an explicit
fallback to ensure getting a clock on any platform?

Could you please update your code according to my remarks? I will try
to integrate it into the PEP. A PEP should list all alternatives!

Victor

From solipsis at pitrou.net  Mon Apr  2 13:50:48 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Mon, 2 Apr 2012 13:50:48 +0200
Subject: [Python-Dev] .{git,bzr}ignore in cpython HG repo
References: <4F75CA7E.7030204@redhat.com>
	<CAD+XWwpj3G38LsyNWA3K3Gw-649p2J676WxH_YaqYZ=y=+BmjA@mail.gmail.com>
	<4F78D73C.4000204@redhat.com>
	<CAD+XWwowBpgCEFBcqvhfT1CWh1JomtegQ1+Lbt=mAq95-XHLmQ@mail.gmail.com>
Message-ID: <20120402135048.6ef7d87d@pitrou.net>

On Sun, 1 Apr 2012 19:44:00 -0500
Brian Curtin <brian at python.org> wrote:

> On Sun, Apr 1, 2012 at 17:31, Matěj Cepl <mcepl at redhat.com> wrote:
> > On 1.4.2012 23:46, Brian Curtin wrote:
> >>
> >> For what reason? Are the git or bzr files causing issues on HG?
> >
> >
> > No, but wrong .gitignore causes issues with git repo obtained via
> > hg-fast-import. If it is meant as an intentional sabotage of using git (and
> > bzr) for cpython, then that's the only explanation I can understand,
> > otherwise it doesn't make sense to me why these files are in HG repository
> > at all.
> 
> Then you won't understand. Sometimes things get out of date when they
> aren't used or maintained.
> 
> You're welcome to fix the problem if you're a Git user, as suggested earlier.

That said, these files will always be outdated, so we might as well
remove them so that at least git / bzr users don't get confused.

Regards

Antoine.



From stefan_ml at behnel.de  Mon Apr  2 14:54:21 2012
From: stefan_ml at behnel.de (Stefan Behnel)
Date: Mon, 02 Apr 2012 14:54:21 +0200
Subject: [Python-Dev] .{git,bzr}ignore in cpython HG repo
In-Reply-To: <20120402135048.6ef7d87d@pitrou.net>
References: <4F75CA7E.7030204@redhat.com>
	<CAD+XWwpj3G38LsyNWA3K3Gw-649p2J676WxH_YaqYZ=y=+BmjA@mail.gmail.com>
	<4F78D73C.4000204@redhat.com>
	<CAD+XWwowBpgCEFBcqvhfT1CWh1JomtegQ1+Lbt=mAq95-XHLmQ@mail.gmail.com>
	<20120402135048.6ef7d87d@pitrou.net>
Message-ID: <jlc7ht$409$1@dough.gmane.org>

Antoine Pitrou, 02.04.2012 13:50:
> On Sun, 1 Apr 2012 19:44:00 -0500
> Brian Curtin wrote:
>> On Sun, Apr 1, 2012 at 17:31, Matěj Cepl wrote:
>>> On 1.4.2012 23:46, Brian Curtin wrote:
>>>> For what reason? Are the git or bzr files causing issues on HG?
>>>
>>>
>>> No, but wrong .gitignore causes issues with git repo obtained via
>>> hg-fast-import. If it is meant as an intentional sabotage of using git (and
>>> bzr) for cpython, then that's the only explanation I can understand,
>>> otherwise it doesn't make sense to me why these files are in HG repository
>>> at all.
>>
>> Then you won't understand. Sometimes things get out of date when they
>> aren't used or maintained.
>>
>> You're welcome to fix the problem if you're a Git user, as suggested earlier.
> 
> That said, these files will always be outdated, so we might as well
> remove them so that at least git / bzr users don't get confused.

How often is anything added to the .hgignore file? I doubt that these files
will "sufficiently always" be outdated to be unhelpful.

Stefan


From guido at python.org  Mon Apr  2 16:57:10 2012
From: guido at python.org (Guido van Rossum)
Date: Mon, 2 Apr 2012 07:57:10 -0700
Subject: [Python-Dev] Use QueryPerformanceCounter() for time.monotonic()
 and/or time.highres()?
In-Reply-To: <CABuPkmRU8JyS3co2LEC-fGpfU4N2=kpP7J83h9SbpdfsyU2kRQ@mail.gmail.com>
References: <CAMpsgwZmAr8GXAW653X0RFD-sFqOpG-M4AYrukei9P6mTXSoPQ@mail.gmail.com>
	<CAP7+vJ+enZ+WmubymwL=whD_0EiqSqOiVxmzBnvuAd3tfLc+=w@mail.gmail.com>
	<CABuPkmRU8JyS3co2LEC-fGpfU4N2=kpP7J83h9SbpdfsyU2kRQ@mail.gmail.com>
Message-ID: <CAP7+vJ+zvSNg_j3nsvMKznSLhNMO5eWV612SEkg+D+W-=cckBw@mail.gmail.com>

On Mon, Apr 2, 2012 at 2:31 AM, Sam Partington <sam.partington at gmail.com> wrote:
> On 30 March 2012 21:52, Guido van Rossum <guido at python.org> wrote:
>> Oh dear. I really want to say that 15 ms is good enough. Some possible
>> exceptions I can think of:
>>
>> - Profiling. But this really wants to measure CPU time anyways, and it
>> already uses a variety of hacks and heuristics to pick the best timer,
>> so I don't really care.
>
> That depends on what you're profiling.  If you're profiling CPU bound
> algorithms then yes CPU time is better. But if you're trying to
> profile/measure hardware device/comms performance for example then CPU
> time is of no interest.  There, on Windows the 15ms resolution of
> time.time makes it useless.  For some reason I always forget this and
> sit looking at trace outs for 5 minutes wondering why everything takes
> either 0, 15, or 30ms.
>
> For nearly all my use cases I'm not terribly interested in
> monotonicity, or stability in suspend/resume states, so I won't give my
> opinions on those (though I can see they are good things and can well
> imagine needing them one day); I just want an easy way of getting at
> least microsecond resolution cross-platform.
>
> I don't mind particularly what you call it but FWIW 'highres' seems a
> bit odd to me.  It seems that highres is likely to seem lowres one
> day, and then you need to add higherres() and then
> evenhigherthanthatres().
>
> I would go with microtime(), or nanotime() -- it doesn't make any
> promises about anything other than the resolution.

You're being altogether too reasonable about it. :-) People keep
asking for a clock that has nanosecond precision and yet will tell
time accurately for centuries, without ever skipping forward or
backward...

-- 
--Guido van Rossum (python.org/~guido)

From rosuav at gmail.com  Mon Apr  2 17:03:15 2012
From: rosuav at gmail.com (Chris Angelico)
Date: Tue, 3 Apr 2012 01:03:15 +1000
Subject: [Python-Dev] .{git,bzr}ignore in cpython HG repo
In-Reply-To: <20120402135048.6ef7d87d@pitrou.net>
References: <4F75CA7E.7030204@redhat.com>
	<CAD+XWwpj3G38LsyNWA3K3Gw-649p2J676WxH_YaqYZ=y=+BmjA@mail.gmail.com>
	<4F78D73C.4000204@redhat.com>
	<CAD+XWwowBpgCEFBcqvhfT1CWh1JomtegQ1+Lbt=mAq95-XHLmQ@mail.gmail.com>
	<20120402135048.6ef7d87d@pitrou.net>
Message-ID: <CAPTjJmonkVcUT6LUFKJTAGV_1Xf3jhWbGcQMmbr=9VeBqTqtag@mail.gmail.com>

On Mon, Apr 2, 2012 at 9:50 PM, Antoine Pitrou <solipsis at pitrou.net> wrote:
> That said, these files will always be outdated, so we might as well
> remove them so that at least git / bzr users don't get confused.

Apologies for what may be a stupid suggestion, but is it possible to
write a script that generates .gitignore and .bzrignore from
.hgignore? That ought to solve the problem - take the former two out
of the repository, and everyone who wants to use git or bzr can simply
generate them on requirement.

Chris Angelico

From lukas.lueg at googlemail.com  Mon Apr  2 17:57:58 2012
From: lukas.lueg at googlemail.com (Lukas Lueg)
Date: Mon, 2 Apr 2012 17:57:58 +0200
Subject: [Python-Dev] Use QueryPerformanceCounter() for time.monotonic()
 and/or time.highres()?
In-Reply-To: <CAP7+vJ+zvSNg_j3nsvMKznSLhNMO5eWV612SEkg+D+W-=cckBw@mail.gmail.com>
References: <CAMpsgwZmAr8GXAW653X0RFD-sFqOpG-M4AYrukei9P6mTXSoPQ@mail.gmail.com>
	<CAP7+vJ+enZ+WmubymwL=whD_0EiqSqOiVxmzBnvuAd3tfLc+=w@mail.gmail.com>
	<CABuPkmRU8JyS3co2LEC-fGpfU4N2=kpP7J83h9SbpdfsyU2kRQ@mail.gmail.com>
	<CAP7+vJ+zvSNg_j3nsvMKznSLhNMO5eWV612SEkg+D+W-=cckBw@mail.gmail.com>
Message-ID: <CAJF-kYmkQA_7g8biv+P8yTWa4JHON0mY6_zLT-4sdy1ceCe3Qg@mail.gmail.com>

At least on some versions of Windows (e.g. XP) the
QueryPerformanceCounter()-API is more or less only a stub around a
call to RDTSC which in turn varies in frequency on (at least) Intel
Pentium 4, Pentium M and Xeon processors (bound to the current clock
frequencies).

From kristjan at ccpgames.com  Mon Apr  2 19:39:05 2012
From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=)
Date: Mon, 2 Apr 2012 17:39:05 +0000
Subject: [Python-Dev] Use QueryPerformanceCounter() for time.monotonic()
 and/or time.highres()?
In-Reply-To: <20120330214319.GA3106@cskk.homeip.net>
References: <CAP7+vJ+enZ+WmubymwL=whD_0EiqSqOiVxmzBnvuAd3tfLc+=w@mail.gmail.com>
	<20120330214319.GA3106@cskk.homeip.net>
Message-ID: <EFE3877620384242A686D52278B7CCD3383C16@RKV-IT-EXCH104.ccp.ad.local>



> -----Original Message-----
> From: python-dev-bounces+kristjan=ccpgames.com at python.org
> [mailto:python-dev-bounces+kristjan=ccpgames.com at python.org] On
> Behalf Of Cameron Simpson
> Sent: 30. mars 2012 21:43
> There seem to be a few competing features for clocks that people want:
> 
>   - monotonic - never going backward at all
>   - high resolution
>   - no steps
> 
 "no steps" is something unquantifiable.  All time has steps in it.  What you mean here is no 'noise'.  And this is also never actually achievable.
A clock that ticks forwards, but sometimes stops some and then  ticks some more, is simply a clock with a lower resolution on average than what can be observed for certain time periods.

It befuddles me somewhat how complicated you are making all of this.
Simply provide the best high resolution, non-backwards ticking, performance timer that the platform provides, and don't try to make promises about unquantifiable things such as 'steps'.
You can do this simply using QPC on windows and enforcing the forward ticking using a static local.
Simply promise that this is a forward ticking clock with the highest resolution and lowest noise available for the platform and make no other guarantees, other than perhaps suggesting that this might not be used reliably for benchmarking on older os/hardware platforms.

K




From guido at python.org  Mon Apr  2 19:42:50 2012
From: guido at python.org (Guido van Rossum)
Date: Mon, 2 Apr 2012 10:42:50 -0700
Subject: [Python-Dev] Use QueryPerformanceCounter() for time.monotonic()
 and/or time.highres()?
In-Reply-To: <EFE3877620384242A686D52278B7CCD3383C16@RKV-IT-EXCH104.ccp.ad.local>
References: <CAP7+vJ+enZ+WmubymwL=whD_0EiqSqOiVxmzBnvuAd3tfLc+=w@mail.gmail.com>
	<20120330214319.GA3106@cskk.homeip.net>
	<EFE3877620384242A686D52278B7CCD3383C16@RKV-IT-EXCH104.ccp.ad.local>
Message-ID: <CAP7+vJLKfpzPRYkX4LZd1QBpqkuSiaFcpboTFdcNdyAVuwU=zw@mail.gmail.com>

You seem to have missed the episode where I explained that caching the
last value in order to avoid going backwards doesn't work -- at least
not if the cached value is internal to the API implementation.

2012/4/2 Kristján Valur Jónsson <kristjan at ccpgames.com>:
>
>
>> -----Original Message-----
>> From: python-dev-bounces+kristjan=ccpgames.com at python.org
>> [mailto:python-dev-bounces+kristjan=ccpgames.com at python.org] On
>> Behalf Of Cameron Simpson
>> Sent: 30. mars 2012 21:43
>> There seem to be a few competing features for clocks that people want:
>>
>>   - monotonic - never going backward at all
>>   - high resolution
>>   - no steps
>>
> "no steps" is something unquantifiable.  All time has steps in it.  What you mean here is no 'noise'.  And this is also never actually achievable.
> A clock that ticks forwards, but sometimes stops some and then ticks some more, is simply a clock with a lower resolution on average than what can be observed for certain time periods.
>
> It befuddles me somewhat how complicated you are making all of this.
> Simply provide the best high resolution, non-backwards ticking, performance timer that the platform provides, and don't try to make promises about unquantifiable things such as 'steps'.
> You can do this simply using QPC on windows and enforcing the forward ticking using a static local.
> Simply promise that this is a forward ticking clock with the highest resolution and lowest noise available for the platform and make no other guarantees, other than perhaps suggesting that this might not be used reliably for benchmarking on older os/hardware platforms.
>
> K
>
>
>



-- 
--Guido van Rossum (python.org/~guido)

From v+python at g.nevcal.com  Mon Apr  2 19:44:35 2012
From: v+python at g.nevcal.com (Glenn Linderman)
Date: Mon, 02 Apr 2012 10:44:35 -0700
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
In-Reply-To: <CAMpsgwYnTYyu=XXfysgq5bTeVY5UAxNimOpVz3qzbsOyPcYMDg@mail.gmail.com>
References: <CAMpsgwZ7+6ONeY=0jdDtzf2XbA=SmhYYXCoH4PzmEkQ_bPzQbA@mail.gmail.com>
	<20120402004327.GA18861@cskk.homeip.net>
	<CAMpsgwYnTYyu=XXfysgq5bTeVY5UAxNimOpVz3qzbsOyPcYMDg@mail.gmail.com>
Message-ID: <4F79E583.5020704@g.nevcal.com>

On 4/2/2012 4:37 AM, Victor Stinner wrote:
> The API looks much more complex than the API proposed in PEP 418 just
> to get the time. You have to call a function to get a function, and
> then call the function, instead of just calling a function directly.
>
> Instead of returning an object with a now() method, I would prefer to
> get directly the function getting time, and another function to get
> "metadata" of the clock.

If there are more than two clocks, with different characteristics, no 
API is going to be both simple to use and fast to call.

If there are more than two clocks, with different characteristics, then 
having an API to get the right API to call to get a time seems very 
natural to me.

One thing I don't like about the idea of fallback being buried under 
some API is that the efficiency of that API on each call must be less 
than the efficiency of directly calling an API to get a single clock's 
time.  For frequently called high resolution clocks, this is more 
burdensome than infrequently called clocks.... yet those seem to be the 
ones for which fallbacks are proposed, because of potential unavailability.

Having properties on each of various different clock functions is 
cumbersome... the user code must know about each clock, how to obtain 
the properties, and then how to choose one for use... And how will one 
be chosen for use? Under the assumption that all return some sort of 
timestamp and take no parameters, a local name will be assigned to the 
clock of interest:

if ...:
    myTime = os.monotonous
elif ...:
    myTime = os.evenhigherres
...
elif ...:
    myTime = time.time

so that myTime can be used throughout.  Cameron's API hides all the names 
of the clocks, and instead offers to do the conditional logic for you, 
and the resultant API returned can be directly assigned to myTime, and 
the logic for choosing a clock deals only with the properties of the 
clock, not the names of the APIs, which is a nice abstraction.  There 
would not even be a need to document the actual names of the APIs for 
each individual clock, except that probably some folks would want to 
directly code them, especially if they are not interested in 
cross-platform work.

The only thing I'm not so sure about: can the properties be described by 
flags?  Might it not be better to have an API that allows specification 
of minimum resolution, in terms of fractional seconds? Perhaps other 
properties suffice as flags, but perhaps not resolution.

From glyph at twistedmatrix.com  Mon Apr  2 20:29:28 2012
From: glyph at twistedmatrix.com (Glyph Lefkowitz)
Date: Mon, 2 Apr 2012 11:29:28 -0700
Subject: [Python-Dev] Use QueryPerformanceCounter() for time.monotonic()
	and/or time.highres()?
In-Reply-To: <EFE3877620384242A686D52278B7CCD3383C16@RKV-IT-EXCH104.ccp.ad.local>
References: <CAP7+vJ+enZ+WmubymwL=whD_0EiqSqOiVxmzBnvuAd3tfLc+=w@mail.gmail.com>
	<20120330214319.GA3106@cskk.homeip.net>
	<EFE3877620384242A686D52278B7CCD3383C16@RKV-IT-EXCH104.ccp.ad.local>
Message-ID: <FA504509-8311-47A3-983D-486BE9D9D73C@twistedmatrix.com>


On Apr 2, 2012, at 10:39 AM, Kristján Valur Jónsson wrote:

> "no steps" is something unquantifiable.  All time has steps in it.

"No steps" means something very specific when referring to time APIs.  As I recently explained here: <http://article.gmane.org/gmane.comp.python.devel/131487/>.

-glyph



From scott+python-dev at scottdial.com  Mon Apr  2 22:17:24 2012
From: scott+python-dev at scottdial.com (Scott Dial)
Date: Mon, 02 Apr 2012 16:17:24 -0400
Subject: [Python-Dev] .{git,bzr}ignore in cpython HG repo
In-Reply-To: <CAPTjJmonkVcUT6LUFKJTAGV_1Xf3jhWbGcQMmbr=9VeBqTqtag@mail.gmail.com>
References: <4F75CA7E.7030204@redhat.com>
	<CAD+XWwpj3G38LsyNWA3K3Gw-649p2J676WxH_YaqYZ=y=+BmjA@mail.gmail.com>
	<4F78D73C.4000204@redhat.com>
	<CAD+XWwowBpgCEFBcqvhfT1CWh1JomtegQ1+Lbt=mAq95-XHLmQ@mail.gmail.com>
	<20120402135048.6ef7d87d@pitrou.net>
	<CAPTjJmonkVcUT6LUFKJTAGV_1Xf3jhWbGcQMmbr=9VeBqTqtag@mail.gmail.com>
Message-ID: <4F7A0954.2060203@scottdial.com>

On 4/2/2012 11:03 AM, Chris Angelico wrote:
> On Mon, Apr 2, 2012 at 9:50 PM, Antoine Pitrou <solipsis at pitrou.net> wrote:
>> That said, these files will always be outdated, so we might as well
>> remove them so that at least git / bzr users don't get confused.
> 
> Apologies for what may be a stupid suggestion, but is it possible to
> write a script that generates .gitignore and .bzrignore from
> .hgignore? That ought to solve the problem - take the former two out
> of the repository, and everyone who wants to use git or bzr can simply
> generate them on requirement.

In general, Hg's ignore files are more expressive (regex and globbing)
than Git's ignore files (globbing only). Our .hgignore file has regex
rules, but if someone was so inclined, they could expand those rules
based on their current HEAD.

I do not know if such a tool already exists in the wild.
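
A minimal glob-only converter might look something like this (untested
sketch; it assumes the usual "syntax: glob" / "syntax: regexp" sections
and simply skips the regexp rules rather than translating them):

    # sketch only: copy the glob-syntax rules from .hgignore to .gitignore,
    # dropping the regexp section, which Git cannot express directly
    def hgignore_to_gitignore(src=".hgignore", dst=".gitignore"):
        mode = "regexp"              # Mercurial's default syntax
        globs = []
        with open(src) as f:
            for line in f:
                line = line.strip()
                if line.startswith("syntax:"):
                    mode = line.split(":", 1)[1].strip()
                elif line and not line.startswith("#") and mode == "glob":
                    globs.append(line)
        with open(dst, "w") as f:
            f.write("# generated from .hgignore -- do not edit\n")
            f.write("\n".join(globs) + "\n")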

-- 
Scott Dial
scott at scottdial.com

From cs at zip.com.au  Mon Apr  2 23:38:43 2012
From: cs at zip.com.au (Cameron Simpson)
Date: Tue, 3 Apr 2012 07:38:43 +1000
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
 (was: PEP 418: Add monotonic clock)
In-Reply-To: <CAMpsgwYnTYyu=XXfysgq5bTeVY5UAxNimOpVz3qzbsOyPcYMDg@mail.gmail.com>
References: <CAMpsgwYnTYyu=XXfysgq5bTeVY5UAxNimOpVz3qzbsOyPcYMDg@mail.gmail.com>
Message-ID: <20120402213843.GA8530@cskk.homeip.net>

On 02Apr2012 13:37, Victor Stinner <victor.stinner at gmail.com> wrote:
| > I've just finished sketching out a skeleton here:
| > https://bitbucket.org/cameron_simpson/css/src/fb476fcdcfce/lib/python/cs/clockutils.py
| 
| get_clock() returns None if no clock has the requested flags, whereas
| I expected an exception (LookupError or NotImplementError?).

That is deliberate. People can easily write fallback like this:

  clock = get_clock(T_MONOTONIC|T_HIRES) or get_clock(T_MONOTONIC)

With exceptions one gets a complicated try/except/else chain that is
much harder to read. With a second fallback the try/except gets even
worse.
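
For contrast, the exception-raising flavour would read something like
this (just a sketch, assuming a variant of get_clock() that raised
LookupError instead of returning None):

  try:
      clock = get_clock(T_MONOTONIC|T_HIRES)
  except LookupError:
      try:
          clock = get_clock(T_MONOTONIC)
      except LookupError:
          raise RuntimeError("no suitable clocks on offer on this platform")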

If one wants an exception it is easy to follow up with:

  if not clock:
    raise RuntimeError("no suitable clocks on offer on this platform")

| get_clock() doesn't remember if a clock works or not (if it raises an
| OSError) and does not fallback to the next clock on error. See
| "pseudo-codes" in the PEP 418.

I presume the available clocks are all deduced from the platform. Your
pseudo code checks for OSError at fetch-the-clock time. I expect that
to occur once when the module is loaded, purely to populate the table
of available platform clocks.

If you are concerned about clocks being available/unavailable at
different times (unplugging the GPS peripheral? just guessing here)
that will have to raise OSError during the now() call (assuming the
clock even exposes the failure; IMO it should when now() is called).

| The idea of flags attached to each clock is interesting, but I don't
| like the need of different list of clocks.

There's no need, just quality of implementation for the monotonic()/hires()
convenience calls, which express the (hoped to be common) policy of what
clock to offer for each.

We've just had pages upon pages of discussion about what clock to offer
for the rather bald monotonic() (et al) calls. The ordering of the
MONTONIC_CLOCKS list would express the result of that discussion,
in that the "better" clocks come first.

| Should I use
| MONTONIC_CLOCKS or HIRES_CLOCKS when I would like a monotonic and
| high-resolution clock?

Note that you don't need to provide a clock list at all; get_clock()
will use ALL_CLOCKS by default, and hires() and monotonic() should each
have their own default list.

I'll put in monotonic() and monotonic_clock(clocklist=MONOTONIC_CLOCKS)
into the skeleton to make this clear; I see I've omitted them.

Regarding the choice itself: as the _caller_ (not the library author),
you must decide what you want most. You're already planning offering
monotonic() and hires() calls without my proposal! Taking your query "Should
I use MONTONIC_CLOCKS or HIRES_CLOCKS when I would like a monotonic and
high-resolution clock" is _already_ a problem. Of course you must call
monotonic() or hires() first under the current scheme, and must answer this
question anyway. Do you prefer hires? Use it first! No preference? Then the
question does not matter.

If I, as the caller, have a preference then it is obvious what to use.
If I do not have a preference then I can just call get_clock() with both
flags and then arbitrarily fall back to hires() or monotonic() if that
does not work.

| It would be simpler to have only one global and
| *private* list.

No. No no no no no!

The whole point is to let the user be _able_ to control the choices to a
fair degree without platform special knowledge. The lists are
deliberately _optional_ parameters and anyway hidden in the hires() and
monotonic() convenience functions; the user does not need to care about
them. But the picky user may! The lists align exactly one to one with
the feature flags, so there is no special knowledge present here that is
not already implicit in publishing the feature flags.

| If you have only one list of clocks, how do sort the list to get
| QueryPerformanceCounter when the user asks for highres and
| GetTickCount when the user asks for monotonic?

This is exactly why there are supposed to be different lists.
You have just argued against your objection above.

| The "if clock.flags &
| flags == flags:" test in get_clock() is maybe not enough. I suppose
| that we would have the following flags for Windows functions:
| 
| QueryPerformanceCounter.flags = T_HIRES
| GetTickCount.flags = T_MONOTONIC | T_STEADY
| 
| (or maybe QueryPerformanceCounter.flags = T_HIRES | T_MONOTONIC ?)

Obviously these depend on the clock characteristics. Is
QueryPerformanceCounter monotonic?

| monotonic_clock() should maybe try to get a clock using the following
| list of conditions:
|  - T_MONOTONIC | T_STEADY
|  - T_MONOTONIC | T_HIGHRES
|  - T_MONOTONIC

Sure, seems reasonable. That is library internal policy _for the convenience
monotonic() function_.

| The T_HIGHRES flag in unclear, even in the PEP. According to the PEP,
| any monotonic clock is considered as a "high-resolution" clock. Do you
| agree?

Not particularly. I can easily imagine a clock with one second resolution
which was monotonic. I would not expect it to have the T_HIRES flag.
Example: a synthetic monotonic clock based on a V7 UNIX time() call.

But, if it _happens_ that all the monotonic clocks are also hires, so be
it. That would be an empirical outcome, not policy.

| So we would have:
| 
| GetTickCount.flags = T_MONOTONIC | T_STEADY | T_HIGHRES
| 
| Even if GetTickCount has only an accuracy of 15 ms :-/

T_HIGHRES is a quality call, surely? If 15ms is too sloppy for "high
resolution", then it should _not_ have the T_HIRES flag.

| Can list please give the list of flags of each clocks listed in the
| PEP? Only clocks used for time.time, time.monotonic and time.highres
| (not process and thread clocks nor QueryUnbiasedInterruptTime).
| 
| >      # get a clock object - often a singleton under the hood
| >      T = get_clock(T_MONOTONIC|T_HIRES) or get_clock(T_STEADY|T_HIRES)
| >      # what kind of clock did I get?
| >      print T.flags
| >      # get the current time
| >      now = T.now
| 
| The API looks much more complex than the API proposed in PEP 418 just
| to get the time. You have to call a function to get a function, and
| then call the function, instead of just calling a function directly.

One could have a flat interface as in the PEP, but then the results are
not inspectable; the user cannot find out what specific clock, or even
kind of clock, was used for the result returned.

Unless you want to subclass float for the return values.  You could return
instances of a float with meta information pointing at the clock used to
provide it. I'm -0.5 on that idea.

Another advantage of returning a clock object is that it avoids the
difficulty of switching implementations behind the user's back, an issue
raised in the discussion and rightly rejected as a bad occurrence.

If the user is handed a clock object, _they_ keep the "current clock in
use" state by by having the object reference.

| Instead of returning an object with a now() method, I would prefer to
| get directly the function getting time, and another function to get
| "metadata" of the clock.

Then they're disconnected. How do I know the get-metadata call accesses
the clock I just used? Only by having library internal global state.

I agree some people probably want the flat "get me the time" call, and have
no real objection to such existing. But I strongly object to not giving
the user control over what they use, and the current API offers no
control.

| > This removes policy from the library functions and makes it both simple
| > and obvious in the user's calling code, and also makes it possible for
| > the user to inspect the clock and find out what quality/flavour of clock
| > they got.
| 
| I'm not sure that users understand correctly differences between all
| these clocks and are able to use your API correctly. How should I
| combinese these 3 flags (T_HIRES, T_MONOTONIC and T_STEADY)? Can I use
| any combinaison?

Of course. Just as with web searches, too many flags may get you an
empty result on some platforms, hence the need to fall back. But the
_nature_ of the fallback should be in the user's hands. The hires() et
al calls can of course offer convenient presupplied fallback according
to the preconceptions of the library authors, hopefully well tuned to
common users' needs. But it should not be the only mode offered, because
you don't know the user's needs.

| Which flags are "portable"? Or should I always use an explicit
| fallback to ensure getting a clock on any platform?

All the flags are portable, but if the platform doesn't supply a clock
with the requested flags, even if there's only one flag, the correct
result is "None" for the clock offered.

Note you can supply no flags!

You can always fall all the way back to 0 for the flags; in the skeleton
provided that will get you UNIXClock, which is a wrapper for the
existing time.time(). In fact, I'll make the flags parameter also
optional for get_clock(), defaulting to 0, to make that easy. That
becomes your totally portable call:-)

| Could you please update your code according to my remarks? I will try
| to integrate it into the PEP. A PEP should list all alternatives!

Surely.

The only updates I can see are to provide the flat interface
(instead of via clock-object indirection) and the missing hires_clock()
and monotonic_clock() convenience methods.

I'll do that. Followup post shortly with new code URL.
Would you propose other specific additions?

Cheers,
-- 
Cameron Simpson <cs at zip.com.au> DoD#743
http://www.cskk.ezoshosting.com/cs/

Whatever is not nailed down is mine.  What I can pry loose is not nailed
down. - Collis P. Huntingdon

From ncoghlan at gmail.com  Mon Apr  2 23:40:28 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 3 Apr 2012 07:40:28 +1000
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
In-Reply-To: <4F79E583.5020704@g.nevcal.com>
References: <CAMpsgwZ7+6ONeY=0jdDtzf2XbA=SmhYYXCoH4PzmEkQ_bPzQbA@mail.gmail.com>
	<20120402004327.GA18861@cskk.homeip.net>
	<CAMpsgwYnTYyu=XXfysgq5bTeVY5UAxNimOpVz3qzbsOyPcYMDg@mail.gmail.com>
	<4F79E583.5020704@g.nevcal.com>
Message-ID: <CADiSq7cKr4gqc8MygShzCS9hbofTySoOVDzob0BBypVPt3M8pA@mail.gmail.com>

On Tue, Apr 3, 2012 at 3:44 AM, Glenn Linderman <v+python at g.nevcal.com> wrote:
> One thing I don't like about the idea of fallback being buried under some
> API is that the efficiency of that API on each call must be less than the
> efficiency of directly calling an API to get a single clock's time.

No, that's a misunderstanding of the fallback mechanism. The fallback
happens when the time module is initialised, not on every call. Once
the appropriate clock has been selected during module initialisation,
it is invoked directly at call time.
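
A rough sketch of the idea (illustrative only, not the actual module
code, and assuming the PEP's time.monotonic() name lands):

    import time

    # the decision is made once, when the module is initialised
    if hasattr(time, "monotonic"):
        _clock = time.monotonic
    else:
        _clock = time.time             # last-resort fallback

    # every later call pays only the cost of the selected clock
    start = _clock()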

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From ncoghlan at gmail.com  Mon Apr  2 23:43:20 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 3 Apr 2012 07:43:20 +1000
Subject: [Python-Dev] .{git,bzr}ignore in cpython HG repo
In-Reply-To: <20120402135048.6ef7d87d@pitrou.net>
References: <4F75CA7E.7030204@redhat.com>
	<CAD+XWwpj3G38LsyNWA3K3Gw-649p2J676WxH_YaqYZ=y=+BmjA@mail.gmail.com>
	<4F78D73C.4000204@redhat.com>
	<CAD+XWwowBpgCEFBcqvhfT1CWh1JomtegQ1+Lbt=mAq95-XHLmQ@mail.gmail.com>
	<20120402135048.6ef7d87d@pitrou.net>
Message-ID: <CADiSq7fsn1aduDoT-5xfJjwo3VGVvPkwg6ji2U91HCZGhNsbfQ@mail.gmail.com>

On Mon, Apr 2, 2012 at 9:50 PM, Antoine Pitrou <solipsis at pitrou.net> wrote:
> That said, these files will always be outdated, so we might as well
> remove them so that at least git / bzr users don't get confused.

Given that they were originally *added* by core devs that are (or
were) using git/bzr for their own local development, I don't think
it's quite that simple.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From cs at zip.com.au  Mon Apr  2 23:51:57 2012
From: cs at zip.com.au (Cameron Simpson)
Date: Tue, 3 Apr 2012 07:51:57 +1000
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
 (was: PEP 418: Add monotonic clock)
In-Reply-To: <20120402213843.GA8530@cskk.homeip.net>
References: <20120402213843.GA8530@cskk.homeip.net>
Message-ID: <20120402215157.GA18668@cskk.homeip.net>

On 03Apr2012 07:38, I wrote:
| On 02Apr2012 13:37, Victor Stinner <victor.stinner at gmail.com> wrote:
| | Could you please update your code according to my remarks? I will try
| | to integrate it into the PEP. A PEP should list all alternatives!

New code here:
  https://bitbucket.org/cameron_simpson/css/src/91848af8663b/lib/python/cs/clockutils.py

Diff:
  https://bitbucket.org/cameron_simpson/css/changeset/91848af8663b

Changelog: updates based on suggestions from Victor Stinner: "flat" API
calls to get time directly, make now() a method instead of a property,
default flags for get_clock(), adjust hr_clock() to hires_clock() for
consistency.

Cheers,
-- 
Cameron Simpson <cs at zip.com.au> DoD#743
http://www.cskk.ezoshosting.com/cs/

Q: How does a hacker fix a function which doesn't work for all of the elements in its domain?
A: He changes the domain.
- Rich Wareham <rjw57 at hermes.cam.ac.uk>

From v+python at g.nevcal.com  Mon Apr  2 23:59:02 2012
From: v+python at g.nevcal.com (Glenn Linderman)
Date: Mon, 02 Apr 2012 14:59:02 -0700
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
In-Reply-To: <CADiSq7cKr4gqc8MygShzCS9hbofTySoOVDzob0BBypVPt3M8pA@mail.gmail.com>
References: <CAMpsgwZ7+6ONeY=0jdDtzf2XbA=SmhYYXCoH4PzmEkQ_bPzQbA@mail.gmail.com>
	<20120402004327.GA18861@cskk.homeip.net>
	<CAMpsgwYnTYyu=XXfysgq5bTeVY5UAxNimOpVz3qzbsOyPcYMDg@mail.gmail.com>
	<4F79E583.5020704@g.nevcal.com>
	<CADiSq7cKr4gqc8MygShzCS9hbofTySoOVDzob0BBypVPt3M8pA@mail.gmail.com>
Message-ID: <4F7A2126.1080300@g.nevcal.com>

On 4/2/2012 2:40 PM, Nick Coghlan wrote:
> On Tue, Apr 3, 2012 at 3:44 AM, Glenn Linderman<v+python at g.nevcal.com>  wrote:
>> >  One thing I don't like about the idea of fallback being buried under some
>> >  API is that the efficiency of that API on each call must be less than the
>> >  efficiency of directly calling an API to get a single clock's time.
> No, that's a misunderstanding of the fallback mechanism. The fallback
> happens when the time module is initialised, not on every call. Once
> the appropriate clock has been selected during module initialisation,
> it is invoked directly at call time.
Nick,

I would hope that is how the fallback mechanism would be coded, but I'm 
pretty sure I've seen other comments in this thread that implied 
otherwise.  But please don't ask me to find them, this thread is huge.

Glenn

From cs at zip.com.au  Tue Apr  3 00:03:32 2012
From: cs at zip.com.au (Cameron Simpson)
Date: Tue, 3 Apr 2012 08:03:32 +1000
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
In-Reply-To: <4F79E583.5020704@g.nevcal.com>
References: <4F79E583.5020704@g.nevcal.com>
Message-ID: <20120402220332.GA18959@cskk.homeip.net>

On 02Apr2012 10:44, Glenn Linderman <v+python at g.nevcal.com> wrote:
| On 4/2/2012 4:37 AM, Victor Stinner wrote:
| > The API looks much more complex than the API proposed in PEP 418 just
| > to get the time. You have to call a function to get a function, and
| > then call the function, instead of just calling a function directly.
| >
| > Instead of returning an object with a now() method, I would prefer to
| > get directly the function getting time, and another function to get
| > "metadata" of the clock.
| 
| If there are more than two clocks, with different characteristics, no 
| API is going to be both simple to use and fast to call.
| 
| If there are more than two clocks, with different characteristics, then 
| having an API to get the right API to call to get a time seems very 
| natural to me.

It is, though Victor's point about offering the very easy to use API is
valid. The new code has the "flat" monotonic() et al calls as well.

| One thing I don't like about the idea of fallback being buried under 
| some API is that the efficiency of that API on each call must be less 
| than the efficiency of directly calling an API to get a single clock's 
| time.  For frequently called high resolution clocks, this is more 
| burdensome than infrequently called clocks.... yet those seem to be the 
| ones for which fallbacks are proposed, because of potential unavailability.

I hadn't thought about that, but it isn't actually a big deal. The
overhead isn't zero, but in order to always use the _same_ clock to
return hires() (for example) the library has to cache the clock lookup
anyway. Current clockutils.py skeleton here:

  https://bitbucket.org/cameron_simpson/css/src/91848af8663b/lib/python/cs/clockutils.py

does so.

| The only thing I'm not so sure about: can the properties be described by 
| flags?  Might it not be better to have an API that allows specification 
| of minimum resolution, in terms of fractional seconds? Perhaps other 
| properties suffice as flags, but perhaps not resolution.

It sounds nice, but there are some difficulties.

Firstly, the (currently just 3) flags were chosen to map to the three
features sought (in various combinations) for clocks. Once you start
requesting precision (a totally reasonable desire, BTW) you also want to
request degree of slew (since "steady" is a tunable term) and so forth.
And what about clocks that have variable precision? (I'm imagining here a
clock which really is a float, and for large times (in the far future)
can't return the same resolution because of the size of a float.)

The concern is valid though. I could imagine beefing up the clock object
metadata with .epoch (can be None!), precision (function of float width
versus clock return value epsilon), epsilon (your fraction of a second
parameter). Of course, for some clocks any of these might be None.

Then the truly concerned user iterates over the available clocks
with the desired coarse flags, inspecting each closely for precision
or whatever. Easy enough to tweak get_clock() to take an optional
all_clocks=False parameter to return all matching clocks in an iterable
instead of (the first match or None). Or the user could reach directly
for one of the clock lists.

cheers,
-- 
Cameron Simpson <cs at zip.com.au> DoD#743
http://www.cskk.ezoshosting.com/cs/

Craft, n.  A fool's substitute for brains.      - The Devil's Dictionary

From cs at zip.com.au  Tue Apr  3 00:05:36 2012
From: cs at zip.com.au (Cameron Simpson)
Date: Tue, 3 Apr 2012 08:05:36 +1000
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
 (was: PEP 418: Add monotonic clock)
In-Reply-To: <20120402215157.GA18668@cskk.homeip.net>
References: <20120402215157.GA18668@cskk.homeip.net>
Message-ID: <20120402220536.GA22629@cskk.homeip.net>

On 03Apr2012 07:51, I wrote:
| Changelog: updates based on suggestions from Victor Stinner: "flat" API
| calls to get time directly, make now() a method instead of a property,
| default flags for get_clock(), adjust hr_clock() to hires_clock(0 for
| consistency.

BTW, I'd also happily change T_HIRES to HIRES and so forth. They're hard to
type and read at present. The prefix is a hangover from old C coding habits,
with no namespaces.
-- 
Cameron Simpson <cs at zip.com.au> DoD#743
http://www.cskk.ezoshosting.com/cs/

If you don't live on the edge, you're taking up too much space. - t-shirt

From cs at zip.com.au  Tue Apr  3 00:44:24 2012
From: cs at zip.com.au (Cameron Simpson)
Date: Tue, 3 Apr 2012 08:44:24 +1000
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
 (was: PEP 418: Add monotonic clock)
In-Reply-To: <20120402213843.GA8530@cskk.homeip.net>
References: <20120402213843.GA8530@cskk.homeip.net>
Message-ID: <20120402224424.GA1763@cskk.homeip.net>

On 03Apr2012 07:38, I wrote:
| On 02Apr2012 13:37, Victor Stinner <victor.stinner at gmail.com> wrote:
| | Should I use
| | MONTONIC_CLOCKS or HIRES_CLOCKS when I would like a monotonic and
| | high-resolution clock?
| 
| Note that you don't need to provide a clock list at all; get_clock(0
| will use ALL_CLOCKS by default, and hires() and monotonic() should each
| have their own default list.
[...]
| | It would be simpler to have only one global and
| | *private* list.
[...]
| The whole point is to let the user be _able_ to control the choices to a
| fair degree without platform special knowledge.

On some reflection I may lean a little more Victor's way here:

I am still very much of the opinion that there should be multiple clock lists
so that hires() can offer the better hires clocks first and so forth.

However, perhaps I misunderstood and he was asking if he needed to name
a list to get a hires clock etc. The intent is that he does not need to,
thanks to the convenience functions.

Accordingly, maybe the list names needn't be published, as they may complicate
the published interface even though they're one to one with the flags.

It would certainly up the ante slightly if we added more
flags some time later. (For example, I think any synthetic clocks
such as the caching example in the skeleton should probably have a
SYNTHETIC flag. You might never ask for it, but you should be able to
check for it.)

(I personally suspect some of the OS clocks are themselves synthetic,
but no matter...)

The flip side of this of course is that if the list names are private then
the get_clock() and hires() etc functions almost mandatorily need the
optional all_clocks=False parameter mooted in a sibling post; the really
picky user needs a way to iterate over the available clocks to make a
fine-grained decision. One example would be to ask for monotonic clocks but
omit synthetic ones (there's a synthetic clock in the skeleton though I
don't particularly expect one in reality - that really is better in a
broader "*utils" module); I also do NOT want to get into complicated
parameters to say these flags but not _those_ flags and so forth for other
metadata.

And again, an external module offering synthetic clocks could easily want to
be able to fetch the existing list and augment it with its own, then use
that with the get_clock() interfaces.

So in short I think:

  - there should be, internally at least, multiple lists for quality of
    returned result

  - there should be a way to iterate over the available clocks, probably
    via an all_clocks parameter instead of a public list name

Cheers,
-- 
Cameron Simpson <cs at zip.com.au> DoD#743
http://www.cskk.ezoshosting.com/cs/

There is hopeful symbolism in the fact that flags do not wave in a vacuum.
        - Arthur C. Clarke

From solipsis at pitrou.net  Tue Apr  3 00:44:32 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Tue, 3 Apr 2012 00:44:32 +0200
Subject: [Python-Dev] .{git,bzr}ignore in cpython HG repo
In-Reply-To: <CADiSq7fsn1aduDoT-5xfJjwo3VGVvPkwg6ji2U91HCZGhNsbfQ@mail.gmail.com>
References: <4F75CA7E.7030204@redhat.com>
	<CAD+XWwpj3G38LsyNWA3K3Gw-649p2J676WxH_YaqYZ=y=+BmjA@mail.gmail.com>
	<4F78D73C.4000204@redhat.com>
	<CAD+XWwowBpgCEFBcqvhfT1CWh1JomtegQ1+Lbt=mAq95-XHLmQ@mail.gmail.com>
	<20120402135048.6ef7d87d@pitrou.net>
	<CADiSq7fsn1aduDoT-5xfJjwo3VGVvPkwg6ji2U91HCZGhNsbfQ@mail.gmail.com>
Message-ID: <20120403004432.561db865@pitrou.net>

On Tue, 3 Apr 2012 07:43:20 +1000
Nick Coghlan <ncoghlan at gmail.com> wrote:

> On Mon, Apr 2, 2012 at 9:50 PM, Antoine Pitrou <solipsis at pitrou.net> wrote:
> > That said, these files will always be outdated, so we might as well
> > remove them so that at least git / bzr users don't get confused.
> 
> Given that they were originally *added* by core devs that are (or
> were) using git/bzr for their own local development, I don't think
> it's quite that simple.

Wasn't it back when SVN was still our official VCS, though?
I don't think Barry still uses bzr, and who ever used git to manage
their patches against the CPython repo?

cheers

Antoine.

From rdmurray at bitdance.com  Mon Apr  2 17:58:51 2012
From: rdmurray at bitdance.com (R. David Murray)
Date: Mon, 02 Apr 2012 11:58:51 -0400
Subject: [Python-Dev] .{git,bzr}ignore in cpython HG repo
In-Reply-To: <20120403004432.561db865@pitrou.net>
References: <4F75CA7E.7030204@redhat.com>
	<CAD+XWwpj3G38LsyNWA3K3Gw-649p2J676WxH_YaqYZ=y=+BmjA@mail.gmail.com>
	<4F78D73C.4000204@redhat.com>
	<CAD+XWwowBpgCEFBcqvhfT1CWh1JomtegQ1+Lbt=mAq95-XHLmQ@mail.gmail.com>
	<20120402135048.6ef7d87d@pitrou.net>
	<CADiSq7fsn1aduDoT-5xfJjwo3VGVvPkwg6ji2U91HCZGhNsbfQ@mail.gmail.com>
	<20120403004432.561db865@pitrou.net>
Message-ID: <20120402235849.2D44A2500E3@webabinitio.net>

On Tue, 03 Apr 2012 00:44:32 +0200, Antoine Pitrou <solipsis at pitrou.net> wrote:
> On Tue, 3 Apr 2012 07:43:20 +1000
> Nick Coghlan <ncoghlan at gmail.com> wrote:
> 
> > On Mon, Apr 2, 2012 at 9:50 PM, Antoine Pitrou <solipsis at pitrou.net> wrote:
> > > That said, these files will always be outdated, so we might as well
> > > remove them so that at least git / bzr users don't get confused.
> > 
> > Given that they were originally *added* by core devs that are (or
> > were) using git/bzr for their own local development, I don't think
> > it's quite that simple.
> 
> Wasn't it back when SVN was still our official VCS, though?
> I don't think Barry still uses bzr, and who ever used git to manage
> their patches against the CPython repo?

That's my memory, too.

I have to laugh at the claim that Barry doesn't use bzr.  (But yeah,
I know what you mean, I think he does use hg now for cpython development.)

I think Benjamin was the one who used git, but I'm probably
misremembering.

--David

From cs at zip.com.au  Tue Apr  3 02:18:46 2012
From: cs at zip.com.au (Cameron Simpson)
Date: Tue, 3 Apr 2012 10:18:46 +1000
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
In-Reply-To: <4F7A2126.1080300@g.nevcal.com>
References: <4F7A2126.1080300@g.nevcal.com>
Message-ID: <20120403001845.GA23375@cskk.homeip.net>

On 02Apr2012 14:59, Glenn Linderman <v+python at g.nevcal.com> wrote:
| On 4/2/2012 2:40 PM, Nick Coghlan wrote:
| > On Tue, Apr 3, 2012 at 3:44 AM, Glenn Linderman<v+python at g.nevcal.com>  wrote:
| >> >  One thing I don't like about the idea of fallback being buried under some
| >> >  API is that the efficiency of that API on each call must be less than the
| >> >  efficiency of directly calling an API to get a single clock's time.
| > No, that's a misunderstanding of the fallback mechanism. The fallback
| > happens when the time module is initialised, not on every call. Once
| > the appropriate clock has been selected during module initialisation,
| > it is invoked directly at call time.
| 
| I would hope that is how the fallback mechanism would be coded, but I'm 
| pretty sure I've seen other comments in this thread that implied 
| otherwise.  But please don't ask me to find them, this thread is huge.

The idea of falling back to different clocks on the fly on different
calls got a bit of a rejection, I thought. A recipe for clock
inconsistency, whatever the failings of the current clock.

Cheers,
-- 
Cameron Simpson <cs at zip.com.au> DoD#743
http://www.cskk.ezoshosting.com/cs/

We need a taxonomy for 'printing-that-is-no-longer-printing.'
- overhead by WIRED at the Intelligent Printing conference Oct2006

From brian at python.org  Tue Apr  3 04:12:04 2012
From: brian at python.org (Brian Curtin)
Date: Mon, 2 Apr 2012 21:12:04 -0500
Subject: [Python-Dev] Preparation for VS2010 - MSDN for Windows build slaves,
	core devs
Message-ID: <CAD+XWwq0BHBFttUgrN+O_bF=wpz-_YA0HX_OJU79pB_70HkJSw@mail.gmail.com>

Hi all,

If you are a running a build slave or otherwise have an MSDN account
for development work, please check that your MSDN subscription is
still in effect. If the subscription expired, please let me know in
private what your subscriber ID is along with the email address you
use for the account.

Eventually we're switching to VS2010 so each slave will need to have
that version of the compiler installed.

Thanks

From regebro at gmail.com  Tue Apr  3 07:51:57 2012
From: regebro at gmail.com (Lennart Regebro)
Date: Tue, 3 Apr 2012 07:51:57 +0200
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
 (was: PEP 418: Add monotonic clock)
In-Reply-To: <20120402213843.GA8530@cskk.homeip.net>
References: <CAMpsgwYnTYyu=XXfysgq5bTeVY5UAxNimOpVz3qzbsOyPcYMDg@mail.gmail.com>
	<20120402213843.GA8530@cskk.homeip.net>
Message-ID: <CAL0kPAV0gOkVCE9-7ux7ZEWC3bPxB3+AYMgu4BxzR6jXeyf97A@mail.gmail.com>

I like the aim of letting the user control what clock it gets, but I
find this API pretty horrible:

>   clock = get_clock(T_MONOTONIC|T_HIRES) or get_clock(T_MONOTONIC)

Just my 2 groszy.

//Lennart

From cs at zip.com.au  Tue Apr  3 08:03:18 2012
From: cs at zip.com.au (Cameron Simpson)
Date: Tue, 3 Apr 2012 16:03:18 +1000
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
 (was: PEP 418: Add monotonic clock)
In-Reply-To: <CAL0kPAV0gOkVCE9-7ux7ZEWC3bPxB3+AYMgu4BxzR6jXeyf97A@mail.gmail.com>
References: <CAL0kPAV0gOkVCE9-7ux7ZEWC3bPxB3+AYMgu4BxzR6jXeyf97A@mail.gmail.com>
Message-ID: <20120403060317.GA31001@cskk.homeip.net>

On 03Apr2012 07:51, Lennart Regebro <regebro at gmail.com> wrote:
| I like the aim of letting the user control what clock it get, but I
| find this API pretty horrible:
| 
| >   clock = get_clock(T_MONOTONIC|T_HIRES) or get_clock(T_MONOTONIC)

FWIW, the leading "T_" is now gone, so it would now read:

  clock = get_clock(MONOTONIC|HIRES) or get_clock(MONOTONIC)

If the symbol names are not the horribleness, can you qualify what API
you would like more?
-- 
Cameron Simpson <cs at zip.com.au> DoD#743
http://www.cskk.ezoshosting.com/cs/

We had the experience, but missed the meaning.  - T.S. Eliot

From breamoreboy at yahoo.co.uk  Tue Apr  3 10:03:44 2012
From: breamoreboy at yahoo.co.uk (Mark Lawrence)
Date: Tue, 03 Apr 2012 09:03:44 +0100
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
 (was: PEP 418: Add monotonic clock)
In-Reply-To: <20120403060317.GA31001@cskk.homeip.net>
References: <CAL0kPAV0gOkVCE9-7ux7ZEWC3bPxB3+AYMgu4BxzR6jXeyf97A@mail.gmail.com>
	<20120403060317.GA31001@cskk.homeip.net>
Message-ID: <jleau6$2dt$1@dough.gmane.org>

On 03/04/2012 07:03, Cameron Simpson wrote:
> On 03Apr2012 07:51, Lennart Regebro<regebro at gmail.com>  wrote:
> | I like the aim of letting the user control what clock it get, but I
> | find this API pretty horrible:
> |
> |>    clock = get_clock(T_MONOTONIC|T_HIRES) or get_clock(T_MONOTONIC)
>
> FWIW, the leading "T_" is now gone, so it would now read:
>
>    clock = get_clock(MONOTONIC|HIRES) or get_clock(MONOTONIC)
>
> If the symbol names are not the horribleness, can you qualify what API
> you would like more?

I reckon the API is ok given that you don't have to supply the flags, 
correct?

A small point but I'm with (I think) Terry Reedy and Steven D'Aprano in 
that hires is an English word, could you please substitute highres and 
HIGHRES, thanks.

-- 
Cheers.

Mark Lawrence.


From cs at zip.com.au  Tue Apr  3 10:43:05 2012
From: cs at zip.com.au (Cameron Simpson)
Date: Tue, 3 Apr 2012 18:43:05 +1000
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
 (was: PEP 418: Add monotonic clock)
In-Reply-To: <jleau6$2dt$1@dough.gmane.org>
References: <jleau6$2dt$1@dough.gmane.org>
Message-ID: <20120403084305.GA19441@cskk.homeip.net>

On 03Apr2012 09:03, Mark Lawrence <breamoreboy at yahoo.co.uk> wrote:
| On 03/04/2012 07:03, Cameron Simpson wrote:
| > On 03Apr2012 07:51, Lennart Regebro<regebro at gmail.com>  wrote:
| > | I like the aim of letting the user control what clock it get, but I
| > | find this API pretty horrible:
| > |
| > |>    clock = get_clock(T_MONOTONIC|T_HIRES) or get_clock(T_MONOTONIC)
| >
| > FWIW, the leading "T_" is now gone, so it would now read:
| >
| >    clock = get_clock(MONOTONIC|HIRES) or get_clock(MONOTONIC)
| >
| > If the symbol names are not the horribleness, can you qualify what API
| > you would like more?
| 
| I reckon the API is ok given that you don't have to supply the flags, 
| correct?

That's right. And if the monotonic() or monotonic_clock() functions
(or the hires* versions if suitable) do what you want you don't even
need that. You only need the "or" style to choose your own fallback
according to your own criteria.

| A small point but I'm with (I think) Terry Reedy and Steven D'Aprano in 
| that hires is an English word, could you please substitute highres and 
| HIGHRES, thanks.

I have the same issue and would be happy to do it. Victor et al, how do
you feel about this? People have been saying "hires" throughout the
threads I think, but I for one would be slightly happier with "highres".

Cheers,
-- 
Cameron Simpson <cs at zip.com.au> DoD#743
http://www.cskk.ezoshosting.com/cs/

I bested him in an Open Season of scouring-people's-postings-looking-for-
spelling-errors.        - kevin at rotag.mi.org (Kevin Darcy)

From kristjan at ccpgames.com  Tue Apr  3 11:44:53 2012
From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=)
Date: Tue, 3 Apr 2012 09:44:53 +0000
Subject: [Python-Dev] Use QueryPerformanceCounter() for time.monotonic()
 and/or time.highres()?
In-Reply-To: <CAP7+vJLKfpzPRYkX4LZd1QBpqkuSiaFcpboTFdcNdyAVuwU=zw@mail.gmail.com>
References: <CAP7+vJ+enZ+WmubymwL=whD_0EiqSqOiVxmzBnvuAd3tfLc+=w@mail.gmail.com>
	<20120330214319.GA3106@cskk.homeip.net>
	<EFE3877620384242A686D52278B7CCD3383C16@RKV-IT-EXCH104.ccp.ad.local>
	<CAP7+vJLKfpzPRYkX4LZd1QBpqkuSiaFcpboTFdcNdyAVuwU=zw@mail.gmail.com>
Message-ID: <EFE3877620384242A686D52278B7CCD3384477@RKV-IT-EXCH104.ccp.ad.local>



> -----Original Message-----
> From: gvanrossum at gmail.com [mailto:gvanrossum at gmail.com] On Behalf
> Of Guido van Rossum
> Sent: 2. apríl 2012 17:43
> To: Kristján Valur Jónsson
> Cc: Cameron Simpson; Python Dev
> Subject: Re: [Python-Dev] Use QueryPerformanceCounter() for
> time.monotonic() and/or time.highres()?
> 
> You seem to have missed the episode where I explained that caching the last
> value in order to avoid going backwards doesn't work -- at least not if the
> cached value is internal to the API implementation.
> 
Yes, and I can't find it by briefly searching my mail.  I haven't had the energy to follow every bit of this discussion because it has become completely insane.

Of course we cannot promise not moving backwards, since there is a 64 bit wraparound some years in the future.  Otherwise, evidence contradicts your claim.
Here's actual code from production:

BOOL WINAPI QueryPerformanceCounterCCP( LARGE_INTEGER* li )
{
	static LARGE_INTEGER last = {0};
	BOOL ok = QueryPerformanceCounter(li);
	if( !ok )
	{
		return FALSE;
	}

	if( li->QuadPart > last.QuadPart )
	{
		last = *li;
	}
	else
	{
		*li = last;
	}
	return TRUE;
}

This has been running for many years on an incredible array of hardware and operating systems.  However, we mostly don't do this caching anymore, this code is a rudiment.  In all other places, a straight QPC is good enough for our purposes.  Even negative delta values of time are usually  harmless on the application level.  A curiosity, but harmless.  I am offering empirical evidence here from hundreds of thousands of computers over six years: For timing and benchmarking, QPC is good enough, and will only be as precise as the hardware and operating system permits, which in practice is good enough.
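
The same trick in Python terms is just this (a rough sketch; like the C
version above it is not safe against concurrent callers):

    import time

    def never_backwards(clock_func):
        # wrap a raw clock so that observed values never decrease
        last = 0.0
        def clock():
            nonlocal last
            value = clock_func()
            if value > last:
                last = value       # remember the new maximum
            return last            # anything <= last is clamped to it
        return clock

    wallclock = never_backwards(time.time)    # illustrative use only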

Which is why I am flabbergasted by all of this bikeshedding.  My original submission (http://bugs.python.org/issue10278) is merely a suggestion to provide a standardised clock function, useful for measuring the delta-t to the best abilities of the platform.  This is incredibly useful in many areas and necessary because time.clock() currently means different things on different operating systems.
There is no need to try to overspecify this to become something which it never can be.  If someone wants a real time clock with no time slew to control a radio telescope, he better write his own interface to an atomic clock.  What he definitely shouldn't be doing is using a built in timer on an old computer with an old operating system.

Also, since you objected to the original suggestion of time.wallclock(), here is the definition from Wikipedia:  http://en.wikipedia.org/wiki/Wall_clock_time . I actually never read this before, but it agrees with my original definition of relative passage of time in the real world.  I got the term myself from using profiling tools, which measure program execution in cpu time or wallclock time.

Cheers,
Kristján



From victor.stinner at gmail.com  Tue Apr  3 13:26:12 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Tue, 3 Apr 2012 13:26:12 +0200
Subject: [Python-Dev] PEP 418: rename time.monotonic() to time.steady()?
Message-ID: <CAMpsgwaw3suzoMdFD1opPoD2=0XSSrOze31VXJb=0skxnDkxRg@mail.gmail.com>

Hi,

I would to rename time.monotonic() to time.steady() in the PEP 418 for
the following reasons:

 - time.steady() may fall back to the system clock, which is not
monotonic, it's strange to have to check for
time.get_clock_info('monotonic')['is_monotonic']
 - time.steady() uses GetTickCount() instead of
QueryPerformanceCounter() whereas both are monotonic, but
QueryPerformanceCounter() is not steady

Python steady clock will be different than the C++ definition.
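
For reference, the "is_monotonic" check mentioned above currently reads
like this (draft spelling from the PEP, which may still change):

    import time

    info = time.get_clock_info('monotonic')      # draft API from the PEP
    clock_is_really_monotonic = info['is_monotonic']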

You may argue that time.steady() is not always steady: it may fall back
to the system clock, which is adjusted by NTP and can jump
backward/forward with a delta greater than 1 hour. In practice, there
is only one operating system that does not provide a monotonic clock:
GNU/Hurd.

I hesitate to add "is_steady" to time.get_clock_info(), but a boolean
is not very useful, it would be better to have a number.

Arguments for time.monotonic() name:

 - Users are looking for the "monotonic" name
 - Most of the time, time.monotonic() is a monotonic clock

--

On Linux, we might use CLOCK_MONOTONIC for time.steady() and
CLOCK_MONOTONIC_RAW for time.highres(). The NTP daemon on Linux
uses a reliable clock to adjust the CLOCK_MONOTONIC frequency, and so
CLOCK_MONOTONIC is steady but it may go backward over a short period,
whereas CLOCK_MONOTONIC_RAW cannot go backward and so may fit
time.highres() requirements more closely.

Currently, CLOCK_MONOTONIC is used for time.highres() and
time.steady() in the PEP.

--

NTP on Linux should only slew CLOCK_MONOTONIC, not step it. But it
looks like there was a bug in the Linux kernel 2.6.31: CLOCK_MONOTONIC
goes backward sometimes. Bug introduced in 2.6.31 by (August 14,
2009):
https://github.com/torvalds/linux/commit/0a54419836254a27baecd9037103171bcbabaf67
and fixed in the kernel 2.6.32 by (November 16, 2009):
https://github.com/torvalds/linux/commit/0696b711e4be45fa104c12329f617beb29c03f78

Someone had the bug:
http://stackoverflow.com/questions/3657289/linux-clock-gettimeclock-monotonic-strange-non-monotonic-behavior

Victor
PS: I already changed time.monotonic() to time.steady() in the PEP :-p

From victor.stinner at gmail.com  Tue Apr  3 13:42:51 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Tue, 3 Apr 2012 13:42:51 +0200
Subject: [Python-Dev] Use QueryPerformanceCounter() for time.monotonic()
 and/or time.highres()?
In-Reply-To: <EFE3877620384242A686D52278B7CCD3384477@RKV-IT-EXCH104.ccp.ad.local>
References: <CAP7+vJ+enZ+WmubymwL=whD_0EiqSqOiVxmzBnvuAd3tfLc+=w@mail.gmail.com>
	<20120330214319.GA3106@cskk.homeip.net>
	<EFE3877620384242A686D52278B7CCD3383C16@RKV-IT-EXCH104.ccp.ad.local>
	<CAP7+vJLKfpzPRYkX4LZd1QBpqkuSiaFcpboTFdcNdyAVuwU=zw@mail.gmail.com>
	<EFE3877620384242A686D52278B7CCD3384477@RKV-IT-EXCH104.ccp.ad.local>
Message-ID: <CAMpsgwah_rUsFrigAi3XtYAtCkFh6BbMR27K6zJxB=oJYkOOPQ@mail.gmail.com>

>> You seem to have missed the episode where I explained that caching the last
>> value in order to avoid going backwards doesn't work -- at least not if the
>> cached value is internal to the API implementation.
>>
> Yes, and I can't find it by briefly searching my mail.  I haven't had the energy to follow every bit of this discussion because it has become completely insane.

I'm trying to complete the PEP, but I didn't add this part yet.

> Of course we cannot promise not moving backwards, since there is a 64 bit wraparound some years in the future.

Some years? I computed 584.5 years, so it should not occur in
practice. 32-bit wraparound is a common issue which occurs in practice
on Windows (49.7 days wraparound), and I propose a workaround in the
PEP (already implemented in the related issue).
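
The workaround amounts to something like this (a Python sketch; the real
patch is written in C and differs in detail):

    _last_ticks = 0
    _n_overflows = 0

    def ticks_to_monotonic(raw_ticks):
        # raw_ticks: a GetTickCount()-style value that wraps at 2**32 ms
        global _last_ticks, _n_overflows
        if raw_ticks < _last_ticks:     # the 32-bit counter wrapped around
            _n_overflows += 1
        _last_ticks = raw_ticks
        return (_n_overflows << 32) + raw_ticks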

> Here's actual code from production:
>
> BOOL WINAPI QueryPerformanceCounterCCP( LARGE_INTEGER* li )
> {
>        static LARGE_INTEGER last = {0};
>        BOOL ok = QueryPerformanceCounter(li);
>        if( !ok )
>        {
>                return FALSE;
>        }

Did you already see it failing in practice? Python ignores the return
value and only uses the counter value.

> Even negative delta values of time are usually harmless on the application level.
> A curiosity, but harmless.

It depends on your use case. For a scheduler or to implement a timeout,
it does matter. For a benchmark, it's not an issue because you usually
repeat a test at least 3 times. Most advanced benchmarking tools give
a confidence factor to check whether the benchmark ran fine or not.

> I am offering empirical evidence here from hundreds of thousands of computers
> over six years: For timing and benchmarking, QPC is good enough, and will only
> be as precise as the hardware and operating system permits, which in practice
> is good enough.

The PEP also contains several proofs that QPC is not steady,
especially on virtual machines.

Victor

From mal at egenix.com  Tue Apr  3 14:26:25 2012
From: mal at egenix.com (M.-A. Lemburg)
Date: Tue, 03 Apr 2012 14:26:25 +0200
Subject: [Python-Dev] Use QueryPerformanceCounter() for time.monotonic()
 and/or time.highres()?
In-Reply-To: <CAMpsgwah_rUsFrigAi3XtYAtCkFh6BbMR27K6zJxB=oJYkOOPQ@mail.gmail.com>
References: <CAP7+vJ+enZ+WmubymwL=whD_0EiqSqOiVxmzBnvuAd3tfLc+=w@mail.gmail.com>
	<20120330214319.GA3106@cskk.homeip.net>
	<EFE3877620384242A686D52278B7CCD3383C16@RKV-IT-EXCH104.ccp.ad.local>
	<CAP7+vJLKfpzPRYkX4LZd1QBpqkuSiaFcpboTFdcNdyAVuwU=zw@mail.gmail.com>
	<EFE3877620384242A686D52278B7CCD3384477@RKV-IT-EXCH104.ccp.ad.local>
	<CAMpsgwah_rUsFrigAi3XtYAtCkFh6BbMR27K6zJxB=oJYkOOPQ@mail.gmail.com>
Message-ID: <4F7AEC71.2090208@egenix.com>

Victor Stinner wrote:
>>> You seem to have missed the episode where I explained that caching the last
>>> value in order to avoid going backwards doesn't work -- at least not if the
>>> cached value is internal to the API implementation.
>>>
>> Yes, and I can't find it by briefly searching my mail.  I haven't had the energy to follow every bit of this discussion because it has become completely insane.
> 
> I'm trying to complete the PEP, but I didn't add this part yet.
> 
>> Of course we cannot promise not moving backwards, since there is a 64 bit wraparound some years in the future.
> 
> Some years? I computed 584.5 years, so it should not occur in
> practice. 32-bit wraparound is a common issue which occurs in practice
> on Windows (49.7 days wraparound), and I propose a workaround in the
> PEP (already implemented in the related issue).
> 
>> Here's actual code from production:
>>
>> BOOL WINAPI QueryPerformanceCounterCCP( LARGE_INTEGER* li )
>> {
>>        static LARGE_INTEGER last = {0};
>>        BOOL ok = QueryPerformanceCounter(li);
>>        if( !ok )
>>        {
>>                return FALSE;
>>        }
> 
> Did you already see it failing in practice? Python ignores the return
> value and only uses the counter value.
> 
>> Even negative delta values of time are usually  harmless on the application level.
>>  A curiosity, but harmless.
> 
> It depends on your usecase. For a scheduler or to implement a timeout,
> it does matter. For a benchmark, it's not an issue because you usually
> repeat a test at least 3 times. Most advanced benchmarked tools gives
> a confidence factor to check if the benchmark ran fine or not.
> 
>>  I am offering empirical evidence here from hundreds of thousands of computers
>> over six years: For timing and benchmarking, QPC is good enough, and will only
>> be as precise as the hardware and operating system permits, which in practice
>> is good enough.
> 
> The PEP also contains several proofs that QPC is not steady,
> especially on virtual machines.

I'm not sure I understand what you are after here, Victor. For benchmarks
it really doesn't matter if one or two runs fail due to the timer having
a problem: you just repeat the run and ignore the false results (you
have such issues in all empirical studies). You're making things
needlessly complicated here.

Regarding the approach of trying to cover all timing requirements with
a single time.steady() API, I'm not convinced that this is a good
approach. Different applications have different needs, so it's
better to provide interfaces to what the OS has to offer and
let the application decide what's best.

If an application wants a monotonic clock, it should use
time.monotonic(). If the OS doesn't provide one, you get an AttributeError
and can revert to some other function, depending on your needs.
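
A minimal sketch of that fallback, assuming only that time.monotonic() may
be missing on some platforms:

import time

try:
    _clock = time.monotonic      # preferred when the OS provides it
except AttributeError:
    _clock = time.time           # the application picks its own fallback

def elapsed(start):
    """Seconds elapsed since `start`, measured with the chosen clock."""
    return _clock() - start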

Having a time.steady() API that makes this decision for you is not
going to make your application more portable, since the choice
will inevitably be wrong in some cases (e.g. falling back from
CLOCK_MONOTONIC to time.time()).

BTW: You might also want to take a look at the systimes.py module
in pybench. We've been through discussions related to
benchmark timing in 2006 already and that module summarizes
the best practice outcome :-)

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Apr 03 2012)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________
2012-04-03: Python Meeting Duesseldorf                             today

::: Try our new mxODBC.Connect Python Database Interface for free ! ::::


   eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
    D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
           Registered at Amtsgericht Duesseldorf: HRB 46611
               http://www.egenix.com/company/contact/

From regebro at gmail.com  Tue Apr  3 16:09:28 2012
From: regebro at gmail.com (Lennart Regebro)
Date: Tue, 3 Apr 2012 16:09:28 +0200
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
 (was: PEP 418: Add monotonic clock)
In-Reply-To: <20120403060317.GA31001@cskk.homeip.net>
References: <CAL0kPAV0gOkVCE9-7ux7ZEWC3bPxB3+AYMgu4BxzR6jXeyf97A@mail.gmail.com>
	<20120403060317.GA31001@cskk.homeip.net>
Message-ID: <CAL0kPAXZ1AxqeVYZen9vzUYxKDG-hr1YFZ+vRRMLTN=hJDfKPQ@mail.gmail.com>

On Tue, Apr 3, 2012 at 08:03, Cameron Simpson <cs at zip.com.au> wrote:
>  clock = get_clock(MONOTONIC|HIRES) or get_clock(MONOTONIC)
>
> If the symbol names are not the horribleness, can you qualify what API
> you would like more?

Well, get_clock(monotonic=True, highres=True) would be a vast
improvement over get_clock(MONOTONIC|HIRES). I also think it should
raise an error if not found. The clarity and ease of use of the API is
much more important than how much you can do in one line.

//Lennart

From regebro at gmail.com  Tue Apr  3 16:13:44 2012
From: regebro at gmail.com (Lennart Regebro)
Date: Tue, 3 Apr 2012 16:13:44 +0200
Subject: [Python-Dev] PEP 418: rename time.monotonic() to time.steady()?
In-Reply-To: <CAMpsgwaw3suzoMdFD1opPoD2=0XSSrOze31VXJb=0skxnDkxRg@mail.gmail.com>
References: <CAMpsgwaw3suzoMdFD1opPoD2=0XSSrOze31VXJb=0skxnDkxRg@mail.gmail.com>
Message-ID: <CAL0kPAVLOAuM=m0dTg=wWevtrGLnQaL+STT0vFfEFC-xbUKDdA@mail.gmail.com>

On Tue, Apr 3, 2012 at 13:26, Victor Stinner <victor.stinner at gmail.com> wrote:
> Hi,
>
> I would like to rename time.monotonic() to time.steady() in the PEP 418 for
> the following reasons:
>
>  - time.steady() may fall back to the system clock, which is not
> monotonic; it's strange to have to check for
> time.get_clock_info('monotonic')['is_monotonic']
>  - time.steady() uses GetTickCount() instead of
> QueryPerformanceCounter() whereas both are monotonic, but
> QueryPerformanceCounter() is not steady

Wait, what?
I thought we already decided, several days ago, that "steady" was a
*terrible* name, and that monotonic should *not* fall back to the
system clock.

It seems that we are going in circles with this. Now we are back to
where we started. Now we have a time.steady() which may not be steady
and a time.highres() which may not be high resolution.

//Lennart

From kristjan at ccpgames.com  Tue Apr  3 16:27:59 2012
From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=)
Date: Tue, 3 Apr 2012 14:27:59 +0000
Subject: [Python-Dev] PEP 418: rename time.monotonic() to time.steady()?
In-Reply-To: <CAL0kPAVLOAuM=m0dTg=wWevtrGLnQaL+STT0vFfEFC-xbUKDdA@mail.gmail.com>
References: <CAMpsgwaw3suzoMdFD1opPoD2=0XSSrOze31VXJb=0skxnDkxRg@mail.gmail.com>
	<CAL0kPAVLOAuM=m0dTg=wWevtrGLnQaL+STT0vFfEFC-xbUKDdA@mail.gmail.com>
Message-ID: <EFE3877620384242A686D52278B7CCD3384DB6@RKV-IT-EXCH104.ccp.ad.local>



> -----Original Message-----
> From: python-dev-bounces+kristjan=ccpgames.com at python.org
> [mailto:python-dev-bounces+kristjan=ccpgames.com at python.org] On
> Behalf Of Lennart Regebro
> Sent: 3. apríl 2012 14:14
> To: Victor Stinner
> Cc: Python Dev
> Subject: Re: [Python-Dev] PEP 418: rename time.monotonic() to
> time.steady()?
> 
> On Tue, Apr 3, 2012 at 13:26, Victor Stinner <victor.stinner at gmail.com>
> wrote:
> > Hi,
> >
> > I would like to rename time.monotonic() to time.steady() in the PEP 418 for
> > the following reasons:
> >
> >  - time.steady() may fall back to the system clock, which is not
> > monotonic; it's strange to have to check for
> > time.get_clock_info('monotonic')['is_monotonic']
> >  - time.steady() uses GetTickCount() instead of
> > QueryPerformanceCounter() whereas both are monotonic, but
> > QueryPerformanceCounter() is not steady
> 
> Wait, what?
> I already thought we, several days ago, decided that "steady" was a
> *terrible* name, and that monotonic should *not* fall back to the system
> clock.
> 
> It seems that we are going in circles with this. Now we are back to where we
> started. Now we have a time.steady() which may not be steady and a
> time.highres() which may not be high resolution.

There is no such thing as steady time.  I think we are trying to solve a non-existent problem here.
K


From anacrolix at gmail.com  Tue Apr  3 16:42:37 2012
From: anacrolix at gmail.com (Matt Joiner)
Date: Tue, 3 Apr 2012 22:42:37 +0800
Subject: [Python-Dev] PEP 418: rename time.monotonic() to time.steady()?
In-Reply-To: <CAL0kPAVLOAuM=m0dTg=wWevtrGLnQaL+STT0vFfEFC-xbUKDdA@mail.gmail.com>
References: <CAMpsgwaw3suzoMdFD1opPoD2=0XSSrOze31VXJb=0skxnDkxRg@mail.gmail.com>
	<CAL0kPAVLOAuM=m0dTg=wWevtrGLnQaL+STT0vFfEFC-xbUKDdA@mail.gmail.com>
Message-ID: <CAB4yi1N1vCsA5TpSL_GVcnE1WNjqzJnnTAb3M0-YUjWtRFyOZw@mail.gmail.com>

The discussion has completely degenerated. There are several different
clocks here, and several different agendas.

From rdmurray at bitdance.com  Tue Apr  3 09:12:57 2012
From: rdmurray at bitdance.com (R. David Murray)
Date: Tue, 03 Apr 2012 03:12:57 -0400
Subject: [Python-Dev] PEP 418: rename time.monotonic() to time.steady()?
In-Reply-To: <CAB4yi1N1vCsA5TpSL_GVcnE1WNjqzJnnTAb3M0-YUjWtRFyOZw@mail.gmail.com>
References: <CAMpsgwaw3suzoMdFD1opPoD2=0XSSrOze31VXJb=0skxnDkxRg@mail.gmail.com>
	<CAL0kPAVLOAuM=m0dTg=wWevtrGLnQaL+STT0vFfEFC-xbUKDdA@mail.gmail.com>
	<CAB4yi1N1vCsA5TpSL_GVcnE1WNjqzJnnTAb3M0-YUjWtRFyOZw@mail.gmail.com>
Message-ID: <20120403151249.7C9552500E3@webabinitio.net>

On Tue, 03 Apr 2012 22:42:37 +0800, Matt Joiner <anacrolix at gmail.com> wrote:
> The discussion has completely degenerated. There are several different
> clocks here, and several different agendas.

It's probably time to do a reset.  Read Victor's PEP, and help
him edit it so that it accurately reflects the various arguments.

Then we can bikeshed some more based on the language in the PEP :)

--David

From kristjan at ccpgames.com  Tue Apr  3 16:22:13 2012
From: kristjan at ccpgames.com (=?utf-8?B?S3Jpc3Rqw6FuIFZhbHVyIErDs25zc29u?=)
Date: Tue, 3 Apr 2012 14:22:13 +0000
Subject: [Python-Dev] Use QueryPerformanceCounter() for time.monotonic()
 and/or time.highres()?
In-Reply-To: <CAMpsgwah_rUsFrigAi3XtYAtCkFh6BbMR27K6zJxB=oJYkOOPQ@mail.gmail.com>
References: <CAP7+vJ+enZ+WmubymwL=whD_0EiqSqOiVxmzBnvuAd3tfLc+=w@mail.gmail.com>
	<20120330214319.GA3106@cskk.homeip.net>
	<EFE3877620384242A686D52278B7CCD3383C16@RKV-IT-EXCH104.ccp.ad.local>
	<CAP7+vJLKfpzPRYkX4LZd1QBpqkuSiaFcpboTFdcNdyAVuwU=zw@mail.gmail.com>
	<EFE3877620384242A686D52278B7CCD3384477@RKV-IT-EXCH104.ccp.ad.local>
	<CAMpsgwah_rUsFrigAi3XtYAtCkFh6BbMR27K6zJxB=oJYkOOPQ@mail.gmail.com>
Message-ID: <EFE3877620384242A686D52278B7CCD3384D8D@RKV-IT-EXCH104.ccp.ad.local>



> -----Original Message-----
> From: python-dev-bounces+kristjan=ccpgames.com at python.org
> Some years? I computed 584.5 years, so it should not occur in practice.

Funny that you mention it. "should not occur in practice" is exactly my point.

> > Here's actual code from production:
> >
> > BOOL WINAPI QueryPerformanceCounterCCP( LARGE_INTEGER* li ) {
> >        static LARGE_INTEGER last = {0};
> >        BOOL ok = QueryPerformanceCounter(li);
> >        if( !ok )
> >        {
> >                return FALSE;
> >        }
> 
> Did you already see it failing in practice? Python ignores the return value and
> only uses the counter value.
No, actually not.  But we always check return codes.  Always.

> 
> > Even negative delta values of time are usually harmless on the application
> > level.
> > A curiosity, but harmless.
> 
> It depends on your use case. For a scheduler or to implement a timeout, it
> does matter. 
Does it?
now = time.wallclock()
if job.due_time <= now:
	job.do_it()

So what if you get an early timeout?  Timeouts aren't guaranteed to wait _at least_ the specified time, but rather to wait _at most_ the specified time.

> 
> The PEP also contains several proofs that QPC is not steady, especially on
> virtual machines.
What does "steady" mean? 
Sampled time on a computer will always differ from some ideal time measured by an atomic clock.  On a virtual machine, the probability distribution of the error function can be different than on a "real" machine, but saying it is different is not enough.  You have to quantify it somehow.  And unless there is some "accepted" error PDF, there is no way to say that some platforms are ok, and others not.

And what does it matter?  A virtual machine is just another platform, one where we are providing a counter as good as that platform can provide.  "Caveat emptor: Don't expect reliable benchmarks or smoothly running time on a virtual machine.  The wallclock() function will contain some undetermined error depending on the quality of your platform."

I think you are simply overcomplicating the problem and trying to promise too much, without even being able to properly quantify that promise.  Just relax the specification and all will be well.

K

From ethan at stoneleaf.us  Tue Apr  3 18:07:05 2012
From: ethan at stoneleaf.us (Ethan Furman)
Date: Tue, 03 Apr 2012 09:07:05 -0700
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
 (was: PEP 418: Add monotonic clock)
In-Reply-To: <CAL0kPAXZ1AxqeVYZen9vzUYxKDG-hr1YFZ+vRRMLTN=hJDfKPQ@mail.gmail.com>
References: <CAL0kPAV0gOkVCE9-7ux7ZEWC3bPxB3+AYMgu4BxzR6jXeyf97A@mail.gmail.com>	<20120403060317.GA31001@cskk.homeip.net>
	<CAL0kPAXZ1AxqeVYZen9vzUYxKDG-hr1YFZ+vRRMLTN=hJDfKPQ@mail.gmail.com>
Message-ID: <4F7B2029.8010707@stoneleaf.us>

Lennart Regebro wrote:
> On Tue, Apr 3, 2012 at 08:03, Cameron Simpson <cs at zip.com.au> wrote:
>>  clock = get_clock(MONOTONIC|HIRES) or get_clock(MONOTONIC)
>>
>> If the symbol names are not the horribleness, can you qualify what API
>> you would like more?
> 
> Well, get_clock(monotonic=True, highres=True) would be a vast
> improvement over get_clock(MONOTONIC|HIRES).

Allowing get_clock(True, True)?  Ick.  My nomination would be
get_clock(MONOTONIC, HIGHRES) -- easier on the eyes with no |.

> I also think it should
> raise an error if not found. The clarity and ease of use of the API is
> much more important than how much you can do in one line.

What's unclear about returning None if no clocks match?

Cheers,
~Ethan~

From barry at python.org  Tue Apr  3 18:25:16 2012
From: barry at python.org (Barry Warsaw)
Date: Tue, 3 Apr 2012 10:25:16 -0600
Subject: [Python-Dev] .{git,bzr}ignore in cpython HG repo
In-Reply-To: <20120403004432.561db865@pitrou.net>
References: <4F75CA7E.7030204@redhat.com>
	<CAD+XWwpj3G38LsyNWA3K3Gw-649p2J676WxH_YaqYZ=y=+BmjA@mail.gmail.com>
	<4F78D73C.4000204@redhat.com>
	<CAD+XWwowBpgCEFBcqvhfT1CWh1JomtegQ1+Lbt=mAq95-XHLmQ@mail.gmail.com>
	<20120402135048.6ef7d87d@pitrou.net>
	<CADiSq7fsn1aduDoT-5xfJjwo3VGVvPkwg6ji2U91HCZGhNsbfQ@mail.gmail.com>
	<20120403004432.561db865@pitrou.net>
Message-ID: <20120403102516.11c461b5@resist.wooz.org>

On Apr 03, 2012, at 12:44 AM, Antoine Pitrou wrote:

>I don't think Barry still uses bzr, and who ever used git to manage their
>patches against the CPython repo?

I still use bzr, but not currently for Python development.  I just use the
standard hg repo.  I'd like to go back to it though once the bzr-hg plugin can
handle multiple branches in a single repo.

-Barry


From andrew.svetlov at gmail.com  Tue Apr  3 21:59:01 2012
From: andrew.svetlov at gmail.com (Andrew Svetlov)
Date: Tue, 3 Apr 2012 22:59:01 +0300
Subject: [Python-Dev] Remove of w9xopen
Message-ID: <CAL3CFcWcBO9Zp6TDyQCZEwhLoCw4jx3OT_rwGH3BHmoraz_YLA@mail.gmail.com>

I filed the issue http://bugs.python.org/issue14470 for removing
w9xopen from subprocess, as Python 3.3 has declared the end of support
for Windows 2000 and the Win9x family.
But, as I see, the VC project for building w9xopen is still present.
Should we remove it as well?

-- 
Thanks,
Andrew Svetlov

From brian at python.org  Tue Apr  3 22:08:11 2012
From: brian at python.org (Brian Curtin)
Date: Tue, 3 Apr 2012 15:08:11 -0500
Subject: [Python-Dev] Remove of w9xopen
In-Reply-To: <CAL3CFcWcBO9Zp6TDyQCZEwhLoCw4jx3OT_rwGH3BHmoraz_YLA@mail.gmail.com>
References: <CAL3CFcWcBO9Zp6TDyQCZEwhLoCw4jx3OT_rwGH3BHmoraz_YLA@mail.gmail.com>
Message-ID: <CAD+XWwq_S6d=mh8eXMM8Bd9kPucAni6+0dntfsWOB9WenSvdqw@mail.gmail.com>

On Tue, Apr 3, 2012 at 14:59, Andrew Svetlov <andrew.svetlov at gmail.com> wrote:
> I filed the issue http://bugs.python.org/issue14470 for removing
> w9xopen from subprocess as python 3.3 has declaration about finishing
> support of Windows 2000 and Win9x family.
> But, as I see, VC project for building w9xopen is still present.
> Should we remove it as well?

Please leave it in for the time being. Feel free to assign the issue
to me and I'll take care of it once we've officially transitioned to
VS2010.

From andrew.svetlov at gmail.com  Tue Apr  3 22:17:58 2012
From: andrew.svetlov at gmail.com (Andrew Svetlov)
Date: Tue, 3 Apr 2012 23:17:58 +0300
Subject: [Python-Dev] Remove of w9xopen
In-Reply-To: <CAD+XWwq_S6d=mh8eXMM8Bd9kPucAni6+0dntfsWOB9WenSvdqw@mail.gmail.com>
References: <CAL3CFcWcBO9Zp6TDyQCZEwhLoCw4jx3OT_rwGH3BHmoraz_YLA@mail.gmail.com>
	<CAD+XWwq_S6d=mh8eXMM8Bd9kPucAni6+0dntfsWOB9WenSvdqw@mail.gmail.com>
Message-ID: <CAL3CFcXfkTXEzuVXVXoCE=jT7cvEvgWfJYrsLGX0qFKyRJRz=g@mail.gmail.com>

Done. Thanks.

On Tue, Apr 3, 2012 at 11:08 PM, Brian Curtin <brian at python.org> wrote:
> On Tue, Apr 3, 2012 at 14:59, Andrew Svetlov <andrew.svetlov at gmail.com> wrote:
>> I filed the issue http://bugs.python.org/issue14470 for removing
>> w9xopen from subprocess as python 3.3 has declaration about finishing
>> support of Windows 2000 and Win9x family.
>> But, as I see, VC project for building w9xopen is still present.
>> Should we remove it as well?
>
> Please leave it in for the time being. Feel free to assign the issue
> to me and I'll take care of it once we've officially transitioned to
> VS2010.



-- 
Thanks,
Andrew Svetlov

From victor.stinner at gmail.com  Tue Apr  3 23:14:15 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Tue, 3 Apr 2012 23:14:15 +0200
Subject: [Python-Dev] PEP 418: rename time.monotonic() to time.steady()?
In-Reply-To: <CAL0kPAVLOAuM=m0dTg=wWevtrGLnQaL+STT0vFfEFC-xbUKDdA@mail.gmail.com>
References: <CAMpsgwaw3suzoMdFD1opPoD2=0XSSrOze31VXJb=0skxnDkxRg@mail.gmail.com>
	<CAL0kPAVLOAuM=m0dTg=wWevtrGLnQaL+STT0vFfEFC-xbUKDdA@mail.gmail.com>
Message-ID: <CAMpsgwb6H4PB1MGTLvooYa8mk27QrSGi2j=X6YBSvN7k1OUQQw@mail.gmail.com>

> Wait, what?
> I already thought we, several days ago, decided that "steady" was a
> *terrible* name, and that monotonic should *not* fall back to the
> system clock.

Here is a copy of a more recent email from Guido:
http://mail.python.org/pipermail/python-dev/2012-March/118322.html
"Anyway, the more I think about it, the more I believe these functions
should have very loose guarantees, and instead just cater to common
use cases -- availability of a timer with minimal fuss is usually more
important than the guarantees. So forget the idea about one version
that falls back to time.time() and another that doesn't -- just always
fall back to time.time(), which is (almost) always better than
failing.

Then we can design a separate inquiry API (doesn't have to be complex
as long as it's extensible -- a dict or object with a few predefined
keys or attributes sounds good enough) for apps that want to know more
about how the timer they're using is actually implemented."

I added time.get_clock_info() so the user can check if the clock is
monotonic or not.
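
Roughly like this -- the exact names of the returned fields are still being
settled, so treat it as a sketch of the intended usage:

import time

info = time.get_clock_info('monotonic')
# The returned info describes the underlying clock: implementation,
# resolution, and a monotonic/is_monotonic flag the caller can branch on.
print(info)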

Victor

From cs at zip.com.au  Tue Apr  3 23:42:55 2012
From: cs at zip.com.au (Cameron Simpson)
Date: Wed, 4 Apr 2012 07:42:55 +1000
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
 (was: PEP 418: Add monotonic clock)
In-Reply-To: <4F7B2029.8010707@stoneleaf.us>
References: <4F7B2029.8010707@stoneleaf.us>
Message-ID: <20120403214255.GA2847@cskk.homeip.net>

On 03Apr2012 09:07, Ethan Furman <ethan at stoneleaf.us> wrote:
| Lennart Regebro wrote:
| > On Tue, Apr 3, 2012 at 08:03, Cameron Simpson <cs at zip.com.au> wrote:
| >>  clock = get_clock(MONOTONIC|HIRES) or get_clock(MONOTONIC)
| >>
| >> If the symbol names are not the horribleness, can you qualify what API
| >> you would like more?
| > 
| > Well, get_clock(monotonic=True, highres=True) would be a vast
| > improvement over get_clock(MONOTONIC|HIRES).
| 
| Allowing get_clock(True, True)?  Ick.  My nomination would be
| get_clock(MONOTONIC, HIGHRES) -- easier on the eyes with no |.

get_clock already has two arguments - you can optionally hand it a clock
list - that's used by monotonic_clock() and hires_clock().

Have a quick glance at:

  https://bitbucket.org/cameron_simpson/css/src/tip/lib/python/cs/clockutils.py

(I finally found out how to point at the latest revision on BitBucket;
it's not obvious from the web interface itself.)

| > I also think it should
| > raise an error if not found. The clarity and ease of use of the API is
| > much more important than how much you can do in one line.

How much you can do _clearly_ in one line is a useful metric.

| What's unclear about returning None if no clocks match?

The return of None is very deliberate. I _want_ user specified fallback
to be concise and easy. The example:

  clock = get_clock(MONOTONIC|HIRES) or get_clock(MONOTONIC)

seems to satisfy both these criteria to my eye. Raising an exception
makes user fallback a royal PITA, with a horrible try/except cascade
needed.

Exceptions are all very well when there is just one thing to do: parse
this or fail, divide this by that or fail. In fact they're the very
image of "do this one thing or FAIL". They are not such a good match for
"do this thing or that thing or this other thing".

When you want a simple linear cascade of choices, Python's short circuiting
"or" operator is a very useful thing. Having an obsession with exceptions is
IMO unhealthy.
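
For contrast, here is a small self-contained sketch of what the same
fallback looks like when the lookup raises instead of returning None (the
flag values and the always-failing get_clock() below are made up purely for
illustration):

MONOTONIC, HIRES = 1, 2      # placeholder flag values

def get_clock(flags):
    # stand-in for a hypothetical exception-raising variant of the API
    raise LookupError("no clock with flags %r" % flags)

# the cascade an exception-raising get_clock() forces on the caller,
# compared with the one-line "or" chain above:
try:
    clock = get_clock(MONOTONIC | HIRES)
except LookupError:
    try:
        clock = get_clock(MONOTONIC)
    except LookupError:
        clock = None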

Cheers,
-- 
Cameron Simpson <cs at zip.com.au> DoD#743
http://www.cskk.ezoshosting.com/cs/

Because of its special customs, crossposting between alt.peeves and normal
newsgroups is discouraged.      - Cameron Spitzer

From fijall at gmail.com  Tue Apr  3 23:47:01 2012
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Tue, 3 Apr 2012 23:47:01 +0200
Subject: [Python-Dev] Issue 14417: consequences of new dict runtime error
In-Reply-To: <20120331174533.E0E612500E9@webabinitio.net>
References: <20120329195825.843352500E9@webabinitio.net>
	<CAP7+vJ+nb7X+9bs=WP8Rf6797BxEhkaPhpn4d7_ZtsBc0NQ9jg@mail.gmail.com>
	<20120329203103.95A4B2500E9@webabinitio.net>
	<20120329204815.D7AC32500E9@webabinitio.net>
	<CAP7+vJJjDKBtBxTd3GXOMvNvHWn9mE4BJhoFsyrP9aE1=nbhVg@mail.gmail.com>
	<CADiSq7eLpqXu+wk6j2Qs66bv-XaOYC5_Q+xfSiWaDHXoPQeyLA@mail.gmail.com>
	<20120331174533.E0E612500E9@webabinitio.net>
Message-ID: <CAK5idxSREp9oSjq90RYakTfTuykNCrZxgqumbcH+H90474epqw@mail.gmail.com>

On Sat, Mar 31, 2012 at 7:45 PM, R. David Murray <rdmurray at bitdance.com>wrote:

> On Sun, 01 Apr 2012 03:03:13 +1000, Nick Coghlan <ncoghlan at gmail.com>
> wrote:
> > On Sun, Apr 1, 2012 at 2:09 AM, Guido van Rossum <guido at python.org>
> wrote:
> > > Here's a different puzzle. Has anyone written a demo yet that provokes
> > > this RuntimeError, without cheating? (Cheating would be to mutate the
> > > dict from *inside* the __eq__ or __hash__ method.) If you're serious
> > > about revisiting this, I'd like to see at least one example of a
> > > program that is broken by the change. Otherwise I think the status quo
> > > in the 3.3 repo should prevail -- I don't want to be stymied by
> > > superstition.
> >
> > I attached an attempt to *deliberately* break the new behaviour to the
> > tracker issue. It isn't actually breaking for me, so I'd like other
> > folks to look at it to see if I missed something in my implementation,
> > of if it's just genuinely that hard to induce the necessary bad timing
> > of a preemptive thread switch.
>
> Thanks, Nick.  It looks reasonable to me, but I've only given it a quick
> look so far (I'll try to think about it more deeply later today).
>
> If it is indeed hard to provoke, then I'm fine with leaving the
> RuntimeError as a signal that the application needs to add some locking.
> My concern was that we'd have working production code that would start
> breaking.  If it takes a *lot* of threads or a *lot* of mutation to
> trigger it, then it is going to be a lot less likely to happen anyway,
> since such programs are going to be much more careful about locking
> anyway.
>
> --David
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/fijall%40gmail.com
>

Hm

I might be missing something, but if you have multiple threads accessing a
dict, this program already raises RuntimeError:
http://paste.pocoo.org/show/575776/. Under PyPy you'll get RuntimeError
during iteration in slightly more obscure cases than changing the size.
As far as I understood, if you're mutating while iterating, you *can* get a
RuntimeError.

This does not even have a custom __eq__ or __hash__. Are you never
iterating over dicts?

Cheers,
fijal

From cs at zip.com.au  Tue Apr  3 23:53:51 2012
From: cs at zip.com.au (Cameron Simpson)
Date: Wed, 4 Apr 2012 07:53:51 +1000
Subject: [Python-Dev] PEP 418: rename time.monotonic() to time.steady()?
In-Reply-To: <CAMpsgwaw3suzoMdFD1opPoD2=0XSSrOze31VXJb=0skxnDkxRg@mail.gmail.com>
References: <CAMpsgwaw3suzoMdFD1opPoD2=0XSSrOze31VXJb=0skxnDkxRg@mail.gmail.com>
Message-ID: <20120403215351.GA5000@cskk.homeip.net>

On 03Apr2012 13:26, Victor Stinner <victor.stinner at gmail.com> wrote:
| I would like to rename time.monotonic() to time.steady() in the PEP 418 for
| the following reasons:
| 
|  - time.steady() may fall back to the system clock, which is not
| monotonic; it's strange to have to check for
| time.get_clock_info('monotonic')['is_monotonic']
|  - time.steady() uses GetTickCount() instead of
| QueryPerformanceCounter() whereas both are monotonic, but
| QueryPerformanceCounter() is not steady
| 
| Python steady clock will be different than the C++ definition.
| 
| You may argue that time.steady() is not always steady: it may fallback
| to the system clock which is adjusted by NTP and can jump
| backward/forward with a delta greater than 1 hour.

An HOUR ?!?!?

I have to say I'm -100 on any proposal where time.monotonic() returns
non-monotonic time and likewise for time.steady() returning unsteady
time.

| In practice, there
| is only one operating system that does not provide a monotonic clock:
| GNU/Hurd.

I'd have thought practically any early UNIX falls into this category.
And any number of other niche things. (Yes I know Python doesn't run on
everything anyway.) Are we only considering Linux/Mac/Windows, and
only recent versions of those?

What's the status of Java and Jython?

| I hesitate to add "is_steady" to time.get_clock_info(), but a boolean
| is not very useful, it would be better to have a number.
| 
| Arguments for time.monotonic() name:
| 
|  - Users are looking for the "monotonic" name
|  - Most of the time, time.monotonic() is a monotonic clock

Again, here, I'm -100 on "most". If I ask for monotonic, it is because I
need one. Give me monotonic or give me death! (Well, an exception or
at any rate something unusable like None.)

[...]
| PS: I already changed time.monotonic() to time.steady() in the PEP :-p

Sigh. They're different things! For all that "steady" is a slightly
vague term, steady and hires and monotonic are independent concepts. Of
course a lot of high quality clocks will embody hires and ideally steady
or monotonic.

This kind of offer-just-one-thing embedded policy is why I feel the API
needs more user control and a policy-free interface, with monotonic() et
al providing handy prepackaged policy for the common uses.

If you can provide monotonic (for example, on Linux as you outline),
why _not_ offer it? Offering steady() provides no way for the user to
ask for higher guarantees.

Cheers,
-- 
Cameron Simpson <cs at zip.com.au> DoD#743
http://www.cskk.ezoshosting.com/cs/

But in our enthusiasm, we could not resist a radical overhaul of the
system, in which all of its major weaknesses have been exposed, analyzed,
and replaced with new weaknesses.       - Bruce Leverett

From guido at python.org  Wed Apr  4 00:17:02 2012
From: guido at python.org (Guido van Rossum)
Date: Tue, 3 Apr 2012 15:17:02 -0700
Subject: [Python-Dev] Issue 14417: consequences of new dict runtime error
In-Reply-To: <CAK5idxSREp9oSjq90RYakTfTuykNCrZxgqumbcH+H90474epqw@mail.gmail.com>
References: <20120329195825.843352500E9@webabinitio.net>
	<CAP7+vJ+nb7X+9bs=WP8Rf6797BxEhkaPhpn4d7_ZtsBc0NQ9jg@mail.gmail.com>
	<20120329203103.95A4B2500E9@webabinitio.net>
	<20120329204815.D7AC32500E9@webabinitio.net>
	<CAP7+vJJjDKBtBxTd3GXOMvNvHWn9mE4BJhoFsyrP9aE1=nbhVg@mail.gmail.com>
	<CADiSq7eLpqXu+wk6j2Qs66bv-XaOYC5_Q+xfSiWaDHXoPQeyLA@mail.gmail.com>
	<20120331174533.E0E612500E9@webabinitio.net>
	<CAK5idxSREp9oSjq90RYakTfTuykNCrZxgqumbcH+H90474epqw@mail.gmail.com>
Message-ID: <CAP7+vJJkJ71=-M=y7v+aOO1A+bVh8hZ=10RBuRdey2wjj-LvhQ@mail.gmail.com>

I'm confused. Are you saying that that program always raised
RuntimeError, or that it started raising RuntimeError with the new
behavior (3.3 alpha 2)?

On Tue, Apr 3, 2012 at 2:47 PM, Maciej Fijalkowski <fijall at gmail.com> wrote:
> On Sat, Mar 31, 2012 at 7:45 PM, R. David Murray <rdmurray at bitdance.com>
> wrote:
>>
>> On Sun, 01 Apr 2012 03:03:13 +1000, Nick Coghlan <ncoghlan at gmail.com>
>> wrote:
>> > On Sun, Apr 1, 2012 at 2:09 AM, Guido van Rossum <guido at python.org>
>> > wrote:
>> > > Here's a different puzzle. Has anyone written a demo yet that provokes
>> > > this RuntimeError, without cheating? (Cheating would be to mutate the
>> > > dict from *inside* the __eq__ or __hash__ method.) If you're serious
>> > > about revisiting this, I'd like to see at least one example of a
>> > > program that is broken by the change. Otherwise I think the status quo
>> > > in the 3.3 repo should prevail -- I don't want to be stymied by
>> > > superstition.
>> >
>> > I attached an attempt to *deliberately* break the new behaviour to the
>> > tracker issue. It isn't actually breaking for me, so I'd like other
>> > folks to look at it to see if I missed something in my implementation,
>> > of if it's just genuinely that hard to induce the necessary bad timing
>> > of a preemptive thread switch.
>>
>> Thanks, Nick.  It looks reasonable to me, but I've only given it a quick
>> look so far (I'll try to think about it more deeply later today).
>>
>> If it is indeed hard to provoke, then I'm fine with leaving the
>> RuntimeError as a signal that the application needs to add some locking.
>> My concern was that we'd have working production code that would start
>> breaking.  If it takes a *lot* of threads or a *lot* of mutation to
>> trigger it, then it is going to be a lot less likely to happen anyway,
>> since such programs are going to be much more careful about locking
>> anyway.
>>
>> --David
>> _______________________________________________
>> Python-Dev mailing list
>> Python-Dev at python.org
>> http://mail.python.org/mailman/listinfo/python-dev
>> Unsubscribe:
>> http://mail.python.org/mailman/options/python-dev/fijall%40gmail.com
>
>
> Hm
>
> I might be missing something, but if you have multiple threads accessing a
> dict, already this program: http://paste.pocoo.org/show/575776/ raises
> RuntimeError. You'll get slightly more obscure cases than changing a size
> raise RuntimeError during iteration under PyPy. As far as I understood, if
> you're mutating while iterating, you *can* get a runtime error.
>
> This does not even have a custom __eq__ or __hash__. Are you never iterating
> over dicts?
>
> Cheers,
> fijal



-- 
--Guido van Rossum (python.org/~guido)

From ethan at stoneleaf.us  Wed Apr  4 00:08:29 2012
From: ethan at stoneleaf.us (Ethan Furman)
Date: Tue, 03 Apr 2012 15:08:29 -0700
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
 (was: PEP 418: Add monotonic clock)
In-Reply-To: <20120403214255.GA2847@cskk.homeip.net>
References: <4F7B2029.8010707@stoneleaf.us>
	<20120403214255.GA2847@cskk.homeip.net>
Message-ID: <4F7B74DD.7070608@stoneleaf.us>

Cameron Simpson wrote:
> get_clock already has two arguments - you can optionally hand it a clock
> list - that's used by monotonic_clock() and hires_clock().

def get_clock(*flags, *, clocklist=None):
     ''' Return a Clock based on the supplied `flags`.
         The returned clock shall have all the requested flags.
         If no clock matches, return None.
     '''
     wanted = 0
     for flag in flags:
         wanted |= flag
     if clocklist is None:
         clocklist = ALL_CLOCKS
     for clock in clocklist:
         if clock.flags & wanted == wanted:
             return clock.factory()
     return None

Would need to make *flags change to the other *_clock functions.


> Have a quick glance at:
> 
>   https://bitbucket.org/cameron_simpson/css/src/tip/lib/python/cs/clockutils.py

Thanks.


> The return of None is very deliberate. I _want_ user specified fallback
> to be concise and easy. The example:
> 
>   clock = get_clock(MONOTONIC|HIRES) or get_clock(MONOTONIC)

Which would become:

clock = get_clock(MONOTONIC, HIGHRES) or get_clock(MONOTONIC)

+1 to returning None


> Exceptions are all very well when there is just one thing to do: parse
> this or fail, divide this by that or fail. If fact they're the very
> image of "do this one thing or FAIL". They are not such a good match for do
> this thing or that thing or this other thing.
> 
> When you want a simple linear cascade of choices, Python's short circuiting
> "or" operator is a very useful thing. Having an obsession with exceptions is
> IMO unhealthy.

Another +1.

~Ethan~

From guido at python.org  Wed Apr  4 00:19:29 2012
From: guido at python.org (Guido van Rossum)
Date: Tue, 3 Apr 2012 15:19:29 -0700
Subject: [Python-Dev] Issue 14417: consequences of new dict runtime error
In-Reply-To: <CAP7+vJJkJ71=-M=y7v+aOO1A+bVh8hZ=10RBuRdey2wjj-LvhQ@mail.gmail.com>
References: <20120329195825.843352500E9@webabinitio.net>
	<CAP7+vJ+nb7X+9bs=WP8Rf6797BxEhkaPhpn4d7_ZtsBc0NQ9jg@mail.gmail.com>
	<20120329203103.95A4B2500E9@webabinitio.net>
	<20120329204815.D7AC32500E9@webabinitio.net>
	<CAP7+vJJjDKBtBxTd3GXOMvNvHWn9mE4BJhoFsyrP9aE1=nbhVg@mail.gmail.com>
	<CADiSq7eLpqXu+wk6j2Qs66bv-XaOYC5_Q+xfSiWaDHXoPQeyLA@mail.gmail.com>
	<20120331174533.E0E612500E9@webabinitio.net>
	<CAK5idxSREp9oSjq90RYakTfTuykNCrZxgqumbcH+H90474epqw@mail.gmail.com>
	<CAP7+vJJkJ71=-M=y7v+aOO1A+bVh8hZ=10RBuRdey2wjj-LvhQ@mail.gmail.com>
Message-ID: <CAP7+vJK-j3Yk7Jkeki6BP7hDmBaW2GT=hHYzt+DGVtvFoAt1Pg@mail.gmail.com>

Never mind, I got it. This always raised RuntimeError. I see this
should be considered support in favor of keeping the change, since
sharing dicts between threads without locking is already fraught with
RuntimeErrors.

At the same time, has anyone looked at my small patch (added to the
issue) that restores the retry loop without recursion?

On Tue, Apr 3, 2012 at 3:17 PM, Guido van Rossum <guido at python.org> wrote:
> I'm confused. Are you saying that that program always raised
> RuntimeError, or that it started raising RuntimeError with the new
> behavior (3.3 alpha 2)?
>
> On Tue, Apr 3, 2012 at 2:47 PM, Maciej Fijalkowski <fijall at gmail.com> wrote:
>> On Sat, Mar 31, 2012 at 7:45 PM, R. David Murray <rdmurray at bitdance.com>
>> wrote:
>>>
>>> On Sun, 01 Apr 2012 03:03:13 +1000, Nick Coghlan <ncoghlan at gmail.com>
>>> wrote:
>>> > On Sun, Apr 1, 2012 at 2:09 AM, Guido van Rossum <guido at python.org>
>>> > wrote:
>>> > > Here's a different puzzle. Has anyone written a demo yet that provokes
>>> > > this RuntimeError, without cheating? (Cheating would be to mutate the
>>> > > dict from *inside* the __eq__ or __hash__ method.) If you're serious
>>> > > about revisiting this, I'd like to see at least one example of a
>>> > > program that is broken by the change. Otherwise I think the status quo
>>> > > in the 3.3 repo should prevail -- I don't want to be stymied by
>>> > > superstition.
>>> >
>>> > I attached an attempt to *deliberately* break the new behaviour to the
>>> > tracker issue. It isn't actually breaking for me, so I'd like other
>>> > folks to look at it to see if I missed something in my implementation,
>>> > of if it's just genuinely that hard to induce the necessary bad timing
>>> > of a preemptive thread switch.
>>>
>>> Thanks, Nick.  It looks reasonable to me, but I've only given it a quick
>>> look so far (I'll try to think about it more deeply later today).
>>>
>>> If it is indeed hard to provoke, then I'm fine with leaving the
>>> RuntimeError as a signal that the application needs to add some locking.
>>> My concern was that we'd have working production code that would start
>>> breaking.  If it takes a *lot* of threads or a *lot* of mutation to
>>> trigger it, then it is going to be a lot less likely to happen anyway,
>>> since such programs are going to be much more careful about locking
>>> anyway.
>>>
>>> --David
>>> _______________________________________________
>>> Python-Dev mailing list
>>> Python-Dev at python.org
>>> http://mail.python.org/mailman/listinfo/python-dev
>>> Unsubscribe:
>>> http://mail.python.org/mailman/options/python-dev/fijall%40gmail.com
>>
>>
>> Hm
>>
>> I might be missing something, but if you have multiple threads accessing a
>> dict, already this program: http://paste.pocoo.org/show/575776/ raises
>> RuntimeError. You'll get slightly more obscure cases than changing a size
>> raise RuntimeError during iteration under PyPy. As far as I understood, if
>> you're mutating while iterating, you *can* get a runtime error.
>>
>> This does not even have a custom __eq__ or __hash__. Are you never iterating
>> over dicts?
>>
>> Cheers,
>> fijal
>
>
>
> --
> --Guido van Rossum (python.org/~guido)



-- 
--Guido van Rossum (python.org/~guido)

From ethan at stoneleaf.us  Wed Apr  4 00:10:45 2012
From: ethan at stoneleaf.us (Ethan Furman)
Date: Tue, 03 Apr 2012 15:10:45 -0700
Subject: [Python-Dev] PEP 418: rename time.monotonic() to time.steady()?
In-Reply-To: <20120403215351.GA5000@cskk.homeip.net>
References: <CAMpsgwaw3suzoMdFD1opPoD2=0XSSrOze31VXJb=0skxnDkxRg@mail.gmail.com>
	<20120403215351.GA5000@cskk.homeip.net>
Message-ID: <4F7B7565.3030801@stoneleaf.us>

Cameron Simpson wrote:
> Sigh. They're different things! For all that "steady" is a slightly
> vague term, steady and hires and monotonic are independent concepts. Of
> course a lot of high quality clocks will embody hires and ideally steady
> or monotonic.
> 
> This kind of offer-just-one-thing embedded policy is why I feel the API
> needs more user control and a polciy free interface, with montonic() et
> al providing handy prepackaged policy for the common uses.

+1

~Ethan~

From wickedgrey at gmail.com  Wed Apr  4 00:58:50 2012
From: wickedgrey at gmail.com (Eli Stevens (Gmail))
Date: Tue, 3 Apr 2012 15:58:50 -0700
Subject: [Python-Dev] Issue 11734: Add half-float (16-bit) support to struct
	module
Message-ID: <CADa34LBaVCLVTzFSio81sz+zu_0MLvJbsa6CaAi_8W-Hdf9S8Q@mail.gmail.com>

Hello,

I worked on a patch to support half-floats about a year ago, and the
impression I got from the python-dev list was that there wasn't anyone
with objections to the patch, and from the reviewers was that it was
ready for inclusion, but it never moved beyond that stage (I should
have pushed it harder, I suspect).  Is there still time to get it in
the 3.3 cycle?  The corresponding patch for NumPy has been accepted,
and IIRC, is present in the 1.6 release.

http://bugs.python.org/issue11734

The issue has links to the various discussions surrounding the patch
for context.

What should happen next?

Thanks,
Eli

From cs at zip.com.au  Wed Apr  4 01:31:20 2012
From: cs at zip.com.au (Cameron Simpson)
Date: Wed, 4 Apr 2012 09:31:20 +1000
Subject: [Python-Dev] PEP 418: rename time.monotonic() to time.steady()?
In-Reply-To: <20120403215351.GA5000@cskk.homeip.net>
References: <20120403215351.GA5000@cskk.homeip.net>
Message-ID: <20120403233120.GA17641@cskk.homeip.net>

[ Returning at more leisure... ]

On 04Apr2012 07:53, I wrote:
| On 03Apr2012 13:26, Victor Stinner <victor.stinner at gmail.com> wrote:
| | I would like to rename time.monotonic() to time.steady() in the PEP 418 for
| | the following reasons:
| |  - time.steady() may fall back to the system clock, which is not
| | monotonic; it's strange to have to check for
| | time.get_clock_info('monotonic')['is_monotonic']

This I agree with. You should never need to do that.

| |  - time.steady() uses GetTickCount() instead of
| | QueryPerformanceCounter() whereas both are monotonic, but
| | QueryPerformanceCounter() is not steady

This is an example of where I think my pick-a-clock API can help people;
we should in some fashion offer all or most of the system clocks. Of
course monotonic() or steady() stould itself pick one, whatever people
agree is the best choice for those modes. But we may as well offer the
rest if it is easy; not with their own functions - that would be
platform specific - but findable.

| | Python steady clock will be different than the C++ definition.

[ BTW, you've a typo in here:
  http://www.python.org/dev/peps/pep-0418/#id22
  with the word "Specifiction", however apt that exciting new word may
  seem:-)
]

You say "will be different", and since the C++ may not be adjusted maybe that
is reasonable, but there's no Python definition in the PEP for "steady" at
present. Of course, people are still bickering, but perhaps you should whack
one in as a reference for the bickering.

| | You may argue that time.steady() is not always steady: it may fallback
| | to the system clock which is adjusted by NTP and can jump
| | backward/forward with a delta greater than 1 hour.
| 
| An HOUR ?!?!?

I'd like to apologise for my shrill tone here.

I still think a clock that stepped by an hour is grotesquely
non-steady. (Why an hour, BTW?  I'd hope it is not related to any
timezone summer/winter localtime presentation shift notions.)

I think Kristján Valur Jónsson is on point when he says "There is
no such thing as steady time", but the notion is very attractive. If
you're going to return a "steady" clock you should be able to find out
how steady that is, for example in maximum step size (adjustment in
alignment with "real time") in seconds. I think if I got 3600 from such
a query I'd decide it was not steady enough and choose not to rely on
it. (Or print all results output in blinking red text:-)

Cheers,
-- 
Cameron Simpson <cs at zip.com.au> DoD#743
http://www.cskk.ezoshosting.com/cs/

186,282 miles per second - Not just a good idea, It's the Law!

From cs at zip.com.au  Wed Apr  4 01:38:24 2012
From: cs at zip.com.au (Cameron Simpson)
Date: Wed, 4 Apr 2012 09:38:24 +1000
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
 (was: PEP 418: Add monotonic clock)
In-Reply-To: <4F7B74DD.7070608@stoneleaf.us>
References: <4F7B74DD.7070608@stoneleaf.us>
Message-ID: <20120403233824.GA19668@cskk.homeip.net>

On 03Apr2012 15:08, Ethan Furman <ethan at stoneleaf.us> wrote:
| Cameron Simpson wrote:
| > get_clock already has two arguments - you can optionally hand it a clock
| > list - that's used by monotonic_clock() and hires_clock().
| 
| def get_clock(*flags, *, clocklist=None):

I presume that bare "*," is a typo. Both my python2 and python3 commands
reject it.

[...]
|      wanted = 0
|      for flag in flags:
|          wanted |= flag
[...]

I could do this. I think I'm -0 on it, because it doesn't seem more
expressive to my eye than the straight make-a-bitmask "|" form.
Other opinions?

| Would need to make *flags change to the other *_clock functions.

Yep.

| > The return of None is very deliberate. I _want_ user specified fallback
| > to be concise and easy. The example:
| >   clock = get_clock(MONOTONIC|HIRES) or get_clock(MONOTONIC)
| 
| Which would become:
| clock = get_clock(MONOTONIC, HIGHRES) or get_clock(MONOTONIC)
| 
| +1 to returning None
| 
| > Exceptions are all very well when there is just one thing to do: parse
| > this or fail, divide this by that or fail. If fact they're the very
| > image of "do this one thing or FAIL". They are not such a good match for do
| > this thing or that thing or this other thing.

Another thought that occurred in the shower was that get_clock() et al
are inquiry functions, and returning None is very sensible there.

monotonic() et al are direct use functions, which should raise an exception
if unavailable so that code like:

  t0 = monotonic()
  .......
  t1 = monotonic()

does not become littered with checks for special values like None.

I consider this an additional reason to return None from get_clock().

Cheers,
-- 
Cameron Simpson <cs at zip.com.au> DoD#743
http://www.cskk.ezoshosting.com/cs/

DON'T DRINK SOAP! DILUTE DILUTE! OK!
        - on the label of Dr. Bronner's Castile Soap

From victor.stinner at gmail.com  Wed Apr  4 01:45:27 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Wed, 4 Apr 2012 01:45:27 +0200
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
 (was: PEP 418: Add monotonic clock)
In-Reply-To: <20120402213843.GA8530@cskk.homeip.net>
References: <CAMpsgwYnTYyu=XXfysgq5bTeVY5UAxNimOpVz3qzbsOyPcYMDg@mail.gmail.com>
	<20120402213843.GA8530@cskk.homeip.net>
Message-ID: <CAMpsgwbibQ_03kJGvYBL5GOagk-ONJjg5xz2h1tt-tUbtLwPxQ@mail.gmail.com>

> | get_clock() returns None if no clock has the requested flags, whereas
> | I expected an exception (LookupError or NotImplementError?).
>
> That is deliberate. People can easily write fallback like this:
>
>  clock = get_clock(T_MONOTONIC|T_HIRES) or get_clock(T_MONOTONIC)

Why not pass a list of flag sets? Example:

haypo_steady = get_clock(MONOTONIC|STEADY, STEADY, MONOTONIC, REALTIME)
# try to get a monotonic and steady clock,
# or fallback to a steady clock,
# or fallback to a monotonic clock,
# or fallback to the system clock

haypo_perf_counter = get_clock(HIGHRES, MONOTONIC|STEADY, STEADY,
MONOTONIC, REALTIME)
# try to get a high-resolution clock
# or fallback to a monotonic and steady clock,
# or fallback to a steady clock,
# or fallback to a monotonic clock,
# or fallback to the system clock

On Windows, haypo_steady should give GetTickCount (MONOTONIC|STEADY)
and haypo_perf_counter should give QueryPerformanceCounter
(MONOTONIC|HIGHRES).
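
A rough self-contained sketch of that calling convention (the flag values,
the Clock record and the registry below are made up for illustration; a real
implementation would build the registry from what the platform offers):

from collections import namedtuple

MONOTONIC, STEADY, HIGHRES, REALTIME = 1, 2, 4, 8
Clock = namedtuple('Clock', 'name flags')

ALL_CLOCKS = [
    Clock('GetTickCount', MONOTONIC | STEADY),
    Clock('QueryPerformanceCounter', MONOTONIC | HIGHRES),
    Clock('system clock', REALTIME),
]

def get_clock(*flag_sets, clocks=ALL_CLOCKS):
    # Try each requested flag set in order; return the first clock that
    # has all the flags of the current set, or None if nothing matches.
    for wanted in flag_sets:
        for clock in clocks:
            if clock.flags & wanted == wanted:
                return clock
    return None

print(get_clock(MONOTONIC | STEADY, STEADY, MONOTONIC, REALTIME).name)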

Hmm, I'm not sure that haypo_highres uses the same clocks as
time.perf_counter() in the PEP.

> If one wants an exception it is easy to follow up with:
>
> if not clock:
>     raise RuntimeError("no suitable clocks on offer on this platform")

And if you don't read the doc carefully and forget the test, you get a
"NoneType object is not callable" error.

> | get_clock() doesn't remember if a clock works or not (if it raises an
> | OSError) and does not fallback to the next clock on error. See
> | "pseudo-codes" in the PEP 418.
>
> I presume the available clocks are all deduced from the platform. Your
> pseudo code checks for OSError at fetch-the-clock time. I expect that
> to occur once when the module is loaded, purely to populate the table
> of avaiable platform clocks.

It's better to avoid unnecessary system calls at startup (when the
time module is loaded), but you may defer the creation of the clock
list, or at least of the flags of each clock.
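
Deferring that probing could look like the following sketch (the candidate
names are just the obvious ones from the time module; nothing here is meant
as the PEP's implementation):

import time

_clocks = None   # probed lazily on first use: no system calls at import

def _probe_clocks():
    # Call each candidate clock once and keep only the ones that work on
    # this platform.
    working = {}
    for name in ('monotonic', 'perf_counter', 'time'):
        func = getattr(time, name, None)
        if func is None:
            continue                 # not provided on this platform
        try:
            func()
        except OSError:
            continue                 # provided but unusable here
        working[name] = func
    return working

def available_clocks():
    global _clocks
    if _clocks is None:
        _clocks = _probe_clocks()
    return _clocks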

> Note that you don't need to provide a clock list at all; get_clock()
> will use ALL_CLOCKS by default, and hires() and monotonic() should each
> have their own default list.

Having both a list of clocks and a function is maybe redundant. Why not
provide only a function?

> Regarding the choice itself: as the _caller_ (not the library author),
> you must decide what you want most. You're already planning offering
> monotonic() and hires() calls without my proposal!

My PEP starts with use cases: it proposes one clock per use case.
There is no "If you need a monotonic, steady and high-resolution clock
..." use case.

The "highres" name was confusing, I just replaced it with
time.perf_counter() (thanks Antoine for the name!).
time.perf_counter() should be used for benchmarking and profiling.

> Taking your query "Should
> I use MONTONIC_CLOCKS or HIRES_CLOCKS when I would like a monotonic and
> high-resolution clock" is _already_ a problem. Of course you must call
> monotonic() or hires() first under the current scheme, and must answer this
> question anyway. Do you prefer hires? Use it first! No preference? Then the
> question does not matter.

I mean having to choose the flags *and* the list of clocks is hard. I
would prefer to only have to choose flags or only the list of clocks.
The example was maybe not the best one.

> | If you have only one list of clocks, how do sort the list to get
> | QueryPerformanceCounter when the user asks for highres and
> | GetTickCount when the user asks for monotonic?
>
> This is exactly why there are supposed to be different lists.
> You have just argued against your objection above.

You can solve this issue with only one list of clocks if you use the
right set of flags.

> | So we would have:
> |
> | GetTickCount.flags = T_MONOTONIC | T_STEADY | T_HIGHRES
> |
> | Even if GetTickCount has only an accuracy of 15 ms :-/
>
> T_HIGHRES is a quality call, surely? If 15ms is too sloppy for a "high
> resolution, the is should _not_ have the T_HIRES flag.

So what is the minimum resolution and/or accuracy of the HIGHRES flag?

> | Could you please update your code according to my remarks? I will try
> | to integrate it into the PEP. A PEP should list all alternatives!
>
> Surely.
>
> The only updates I can see are to provide the flat interface
> (instead of via clock-object indirection) and the missing hires_clock()
> and monotonic_clock() convenience methods.

A full implementation would help to decide which API is the best one.
"Full" implementation:

 - define all convenience functions
 - define all lists of clocks
 - define flags of all clocks listed in the PEP 418: clocks used in
the pseudo-code of time.steady and time.perf_counter, and maybe also
time.time

Victor

From ncoghlan at gmail.com  Wed Apr  4 01:46:57 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 4 Apr 2012 09:46:57 +1000
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
 (was: PEP 418: Add monotonic clock)
In-Reply-To: <20120403233824.GA19668@cskk.homeip.net>
References: <4F7B74DD.7070608@stoneleaf.us>
	<20120403233824.GA19668@cskk.homeip.net>
Message-ID: <CADiSq7e9HPs5PvmpEXdmdJnUXLCOswTTGMNDMZZm66xfn+AiwA@mail.gmail.com>

On Wed, Apr 4, 2012 at 9:38 AM, Cameron Simpson <cs at zip.com.au> wrote:
> I could do this. I think I'm -0 on it, because it doesn't seem more
> expressive to my eye than the straight make-a-bitmask "|" form.
> Other opinions?

Yes. I've been mostly staying out of the PEP 418 clock discussion
(there are enough oars in there already), but numeric flags are
unnecessarily hard to debug. Use strings as your constants unless
there's a compelling reason not to.

Seeing "('MONOTONIC', 'HIGHRES')" in a debugger or exception message
is a lot more informative than seeing "3".

Regards,
Nick.

-- 
Nick Coghlan  |  ncoghlan at gmail.com  |  Brisbane, Australia

From mathieu.desnoyers at efficios.com  Wed Apr  4 01:37:56 2012
From: mathieu.desnoyers at efficios.com (Mathieu Desnoyers)
Date: Tue, 3 Apr 2012 19:37:56 -0400
Subject: [Python-Dev] Scalability micro-conference topic proposals (LPC2012)
Message-ID: <20120403233756.GT26915@Krystal>

Hi,

We are organizing a micro-conference on scaling both upwards (many
cores) and downwards (low footprint, energy efficiency) that targets
all layers of the software stack. Our intent is to bring together
application, libraries and kernel developers to discuss the scalability
issues they currently face, and get exposure for the ongoing work on
scalability infrastructure.

Suggestions of topics are welcome. If you would like to present, please
let us know: we have lightning-talk slots and a few 30-minute slots
available. Presentations should be oriented towards stimulating
discussion over currently faced scalability problems and/or work in
progress in the area of scalability.

The micro-conference will be held between August 29-31, at LinuxCon
North America 2012, in San Diego.

   http://www.linuxplumbersconf.org/2012/

The Scaling Micro-Conference page is available at:

   http://wiki.linuxplumbersconf.org/2012:scaling

Best Regards,

Mathieu Desnoyers & Paul E. McKenney

-- 
Mathieu Desnoyers
Operating System Efficiency R&D Consultant
EfficiOS Inc.
http://www.efficios.com

From thomas.spura at googlemail.com  Mon Apr  2 14:58:16 2012
From: thomas.spura at googlemail.com (Thomas Spura)
Date: Mon, 2 Apr 2012 14:58:16 +0200
Subject: [Python-Dev] .{git,bzr}ignore in cpython HG repo
In-Reply-To: <jlc7ht$409$1@dough.gmane.org>
References: <4F75CA7E.7030204@redhat.com>
	<CAD+XWwpj3G38LsyNWA3K3Gw-649p2J676WxH_YaqYZ=y=+BmjA@mail.gmail.com>
	<4F78D73C.4000204@redhat.com>
	<CAD+XWwowBpgCEFBcqvhfT1CWh1JomtegQ1+Lbt=mAq95-XHLmQ@mail.gmail.com>
	<20120402135048.6ef7d87d@pitrou.net> <jlc7ht$409$1@dough.gmane.org>
Message-ID: <CAE4GLst2+A6=1uSKMturOeC6_7fmt_51yU+MB7-K4Edv0RTycQ@mail.gmail.com>

On Mon, Apr 2, 2012 at 2:54 PM, Stefan Behnel <stefan_ml at behnel.de> wrote:
> Antoine Pitrou, 02.04.2012 13:50:
>> On Sun, 1 Apr 2012 19:44:00 -0500
>> Brian Curtin wrote:
>>> On Sun, Apr 1, 2012 at 17:31, Matěj Cepl wrote:
>>>> On 1.4.2012 23:46, Brian Curtin wrote:
>>>>> For what reason? Are the git or bzr files causing issues on HG?
>>>>
>>>>
>>>> No, but wrong .gitignore causes issues with git repo obtained via
>>>> hg-fast-import. If it is meant as an intentional sabotage of using git (and
>>>> bzr) for cpython, then that's the only explanation I can understand,
>>>> otherwise it doesn't make sense to me why these files are in HG repository
>>>> at all.
>>>
>>> Then you won't understand. Sometimes things get out of date when they
>>> aren't used or maintained.
>>>
>>> You're welcome to fix the problem if you're a Git user, as suggested earlier.
>>
>> That said, these files will always be outdated, so we might as well
>> remove them so that at least git / bzr users don't get confused.
>
> How often is anything added to the .hgignore file? I doubt that these files
> will "sufficiently always" be outdated to be unhelpful.

How about using symlinks and only using a common syntax in .hgignore
that git also understands?

Greetings,
   Tom

From steve at pearwood.info  Wed Apr  4 01:53:31 2012
From: steve at pearwood.info (Steven D'Aprano)
Date: Wed, 04 Apr 2012 09:53:31 +1000
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
In-Reply-To: <CAL0kPAXZ1AxqeVYZen9vzUYxKDG-hr1YFZ+vRRMLTN=hJDfKPQ@mail.gmail.com>
References: <CAL0kPAV0gOkVCE9-7ux7ZEWC3bPxB3+AYMgu4BxzR6jXeyf97A@mail.gmail.com>	<20120403060317.GA31001@cskk.homeip.net>
	<CAL0kPAXZ1AxqeVYZen9vzUYxKDG-hr1YFZ+vRRMLTN=hJDfKPQ@mail.gmail.com>
Message-ID: <4F7B8D7B.6070806@pearwood.info>

Lennart Regebro wrote:
> On Tue, Apr 3, 2012 at 08:03, Cameron Simpson <cs at zip.com.au> wrote:
>>  clock = get_clock(MONOTONIC|HIRES) or get_clock(MONOTONIC)
>>
>> If the symbol names are not the horribleness, can you qualify what API
>> you would like more?
> 
> Well, get_clock(monotonic=True, highres=True) would be a vast
> improvement over get_clock(MONOTONIC|HIRES). I also think it should
> raise an error if not found. The clarity and ease of use of the API is
> much more important than how much you can do in one line.

That's a matter of opinion. I'm not particularly fond of this get_clock idea, 
but of the two examples given, I much prefer the first of these:

get_clock(MONOTONIC|HIRES)
get_clock(monotonic=True, highres=True)

and not just because it is shorter. The API is crying out for enum arguments, 
not a series of named flags.

But frankly I think this get_clock API sucks. At some earlier part of this 
thread, somebody listed three or four potential characteristics of clocks. If 
we offer these as parameters to get_clock(), that means there's eight or 
sixteen different clocks that the user can potentially ask for. Do we really 
offer sixteen different clocks? Or even eight? I doubt it -- there's probably 
only two or three. So the majority of potential clocks don't exist.

With get_clock, discoverability is hurt. How does the caller know what clocks 
are available? How can she look for documentation for them?

A simple, obvious, discoverable API is best. If we offer three clocks, we have 
three named functions. If some of these clocks aren't available on some 
platform, and we can't emulate them, then simply don't have that named 
function available on that platform. That's easy to discover: trying to use 
that clock will give a NameError or AttributeError, and the caller can then 
fall back on an alternative, or fail, whichever is appropriate.



-- 
Steven


From victor.stinner at gmail.com  Wed Apr  4 02:02:12 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Wed, 4 Apr 2012 02:02:12 +0200
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
In-Reply-To: <4F7B8D7B.6070806@pearwood.info>
References: <CAL0kPAV0gOkVCE9-7ux7ZEWC3bPxB3+AYMgu4BxzR6jXeyf97A@mail.gmail.com>
	<20120403060317.GA31001@cskk.homeip.net>
	<CAL0kPAXZ1AxqeVYZen9vzUYxKDG-hr1YFZ+vRRMLTN=hJDfKPQ@mail.gmail.com>
	<4F7B8D7B.6070806@pearwood.info>
Message-ID: <CAMpsgwatK3C69eXVsNh99AKjxaxWXHTZNfwYd=wCfi5jjqeQxg@mail.gmail.com>

> Lennart Regebro wrote:
>> Well, get_clock(monotonic=True, highres=True) would be a vast
>> improvement over get_clock(MONOTONIC|HIRES).

I don't like this keyword API because you have to use a magic
marker (True). Why True? What happens if I call
get_clock(monotonic=False) or get_clock(monotonic="yes")?

Victor

From greg.ewing at canterbury.ac.nz  Wed Apr  4 02:04:50 2012
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Wed, 04 Apr 2012 12:04:50 +1200
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
 (was: PEP 418: Add monotonic clock)
In-Reply-To: <20120403084305.GA19441@cskk.homeip.net>
References: <jleau6$2dt$1@dough.gmane.org>
	<20120403084305.GA19441@cskk.homeip.net>
Message-ID: <4F7B9022.60504@canterbury.ac.nz>

Cameron Simpson wrote:
> People have been saying "hires" throughout the
> threads I think, but I for one would be slightly happier with "highres".

hirez?

-- 
Greg

From breamoreboy at yahoo.co.uk  Wed Apr  4 02:06:15 2012
From: breamoreboy at yahoo.co.uk (Mark Lawrence)
Date: Wed, 04 Apr 2012 01:06:15 +0100
Subject: [Python-Dev] PEP 418: rename time.monotonic() to time.steady()?
In-Reply-To: <20120403233120.GA17641@cskk.homeip.net>
References: <20120403215351.GA5000@cskk.homeip.net>
	<20120403233120.GA17641@cskk.homeip.net>
Message-ID: <jlg39d$246$1@dough.gmane.org>

On 04/04/2012 00:31, Cameron Simpson wrote:
> [ Returning at more leisure... ]
> I think Kristján Valur Jónsson is on point when he says "There is
> no such thing as steady time", but the notion is very attractive. If
> you're going to return a "steady" clock you should be able to find out
> how steady that is, for example in maximum step size (adjustment in
> alignment with "real time") in seconds. I think if I got 3600 from such
> a query I'd decide it was not steady enough and choose not to rely on
> it. (Or print all results output in blinking red text:-)
>
> Cheers,

IIRC time.steady() has been rejected umpteen times.  Someone (apologies, 
it's a long thread :) suggested time.steadier() [or time.steadiest() ?] 
implying that it's the best that can be done.  I'd go with that unless 
somebody has a better option, but there have been so many suggested that I 
might even have to read the PEP :)

The more that this thread runs, the more I get the impression that we're 
trying to make mountains out of electrons.  Having said that, I know 
that the conservative nature of Python development is the best approach 
here wrt the API.  Rather short term pain and long term gain than vice 
versa.

Just my 2p worth.

-- 
Cheers.

Mark Lawrence.


From breamoreboy at yahoo.co.uk  Wed Apr  4 02:18:13 2012
From: breamoreboy at yahoo.co.uk (Mark Lawrence)
Date: Wed, 04 Apr 2012 01:18:13 +0100
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
 (was: PEP 418: Add monotonic clock)
In-Reply-To: <4F7B9022.60504@canterbury.ac.nz>
References: <jleau6$2dt$1@dough.gmane.org>
	<20120403084305.GA19441@cskk.homeip.net>
	<4F7B9022.60504@canterbury.ac.nz>
Message-ID: <jlg3vq$58t$1@dough.gmane.org>

On 04/04/2012 01:04, Greg Ewing wrote:
> Cameron Simpson wrote:
>> People have been saying "hires" throughout the
>> threads I think, but I for one would be slightly happier with "highres".
>
> hirez?
>

IMHO still too easy to read as hires.  Or is it?  Bah I'm going to bed 
and will think about it, night all.

-- 
Cheers.

Mark Lawrence.


From steve at pearwood.info  Wed Apr  4 02:33:53 2012
From: steve at pearwood.info (Steven D'Aprano)
Date: Wed, 04 Apr 2012 10:33:53 +1000
Subject: [Python-Dev] PEP 418 is too divisive and confusing and should be
	postponed
Message-ID: <4F7B96F1.6020906@pearwood.info>

Judging by the hundreds of emails regarding PEP 418, the disagreements about 
APIs, namings, and even what characteristics clocks should have, I believe 
that the suggestion is too divisive (and confusing!) to be accepted or 
rejected at this time.

Everyone has a different opinion, everyone believes their opinion holds for 
the majority, and it isn't practical for anyone to read the entire discussion.

I propose for Python 3.3:

1) the os module should expose lightweight wrappers around whatever clocks the 
operating system provides;

2) the time module should NOT provide any further clocks other than the 
existing time() and clock() functions (but see point 4 below);

3) we postpone PEP 418 until there is some real-world experience with using 
the os clocks from Python and we can develop a consensus of what is actually 
needed rather than what people think we need (i.e. probably in 3.4);

4) if the standard library has need for a "use the best clock available, for 
some definition of best, and fall back to time() if not" clock, then the time 
module should do the simplest thing that could possibly work, flagged as a 
private function:

try:
     from os import bestclock as _bestclock
except ImportError:
     _bestclock = time

This can always be promoted to a public function later, if necessary.

Python has worked pretty well without high res and monotonic clocks for 20 
years. Let's not rush into a suboptimal design based on who can outlast 
everyone else in this discussion.



-- 
Steven

From ncoghlan at gmail.com  Wed Apr  4 02:40:17 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 4 Apr 2012 10:40:17 +1000
Subject: [Python-Dev] PEP 418 is too divisive and confusing and should
	be postponed
In-Reply-To: <4F7B96F1.6020906@pearwood.info>
References: <4F7B96F1.6020906@pearwood.info>
Message-ID: <CADiSq7cQ38qtH4BY9qQmp1da1uZTHLs0PjkZY8NWUSyBVjB-UA@mail.gmail.com>

On Wed, Apr 4, 2012 at 10:33 AM, Steven D'Aprano <steve at pearwood.info> wrote:
> Python has worked pretty well without high res and monotonic clocks for 20
> years. Let's not rush into a suboptimal design based on who can outlast
> everyone else in this discussion.

+1

FWIW, I'd be fine with underscore prefixes on *any* additions to the
relevant module APIs for 3.3.

Cheers,
Nick.

-- 
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From cs at zip.com.au  Wed Apr  4 02:52:08 2012
From: cs at zip.com.au (Cameron Simpson)
Date: Wed, 4 Apr 2012 10:52:08 +1000
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
In-Reply-To: <4F7B8D7B.6070806@pearwood.info>
References: <4F7B8D7B.6070806@pearwood.info>
Message-ID: <20120404005208.GA27839@cskk.homeip.net>

On 04Apr2012 09:53, Steven D'Aprano <steve at pearwood.info> wrote:
| Lennart Regebro wrote:
| > On Tue, Apr 3, 2012 at 08:03, Cameron Simpson <cs at zip.com.au> wrote:
| >>  clock = get_clock(MONOTONIC|HIRES) or get_clock(MONOTONIC)
| >> If the symbol names are not the horribleness, can you qualify what API
| >> you would like more?
| > 
| > Well, get_clock(monotonic=True, highres=True) would be a vast
| > improvement over get_clock(MONOTONIC|HIRES).[...]
| 
| That's a matter of opinion. I'm not particularly fond of this get_clock idea, 
| but of the two examples given, I much prefer the first of these:
| 
| get_clock(MONOTONIC|HIRES)
| get_clock(monotonic=True, highres=True)
| 
| and not just because it is shorter. The API is crying out for enum arguments, 
| not a series of named flags.

Enums would be ok with me. I went with a bitmask because it is natural
to me and very simple. But anything symbolically expressive will do.

| But frankly I think this get_clock API sucks. At some earlier part of this 
| thread, somebody listed three or four potential characteristics of clocks. If 
| we offer these as parameters to get_clock(), that means there's eight or 
| sixteen different clocks that the user can potentially ask for. Do we really 
| offer sixteen different clocks? Or even eight? I doubt it -- there's probably 
| only two or three. So the majority of potential clocks don't exist.

That's not the point. I think we should offer all the platform system clocks,
suitably described. That there are up to 8 or 16 flag combinations is
irrelevant; no user is going to try them all. A user will have requirements
for their clock. They ask for them either blandly via get_clock() or (for
example considering monotonic most important) via monotonic_clock(). In the
latter case, the supported clocks can be considered in a more apt order via a
different internal clock list.

| With get_clock, discoverability is hurt.

No, because the other calls still exist. (In my proposal. I see Victor's
characterised this as either/or in the PEP, never my intent.)

| How does the caller know what clocks 
| are available?

I would definitely want either:

  - the module clock lists available via public names, for example as in
    my sample clockutils.py code (ALL_CLOCKS, MONOTONIC_CLOCKS etc) or
    via some map (eg clocks['monotonic']).

  - a get_clocks() function to return matching clocks, like get_clock()
    but not stopping on the first match

  - an all_clocks=False parameter to get_clock() to get an iterable of
    the suitable clocks

| How can she look for documentation for them?

There is good text in the PEP. That could be easily moved into the
module doco in a "clocks" section. Since my clocks proposal wraps clocks
in an object, they _can_ have nice class names and good docstrings and
more metadata in the object (possibilities including .epoch, .precision,
.is_steady() methods, .os_clock_name (eg "QueryPerformanceCounter"), etc).
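
For illustration only -- this is not the actual clockutils.py code, just 
the rough shape of such a wrapper plus a get_clocks() style enumerator 
(the flag values and names here are invented):

MONOTONIC, HIRES, STEADY = 1, 2, 4      # invented flag values
ALL_CLOCKS = []                         # would be populated per platform

class Clock(object):
    """One wrapped OS clock plus its metadata (sketch)."""
    def __init__(self, func, flags, os_clock_name, precision, epoch=None):
        self._func = func
        self.flags = flags                    # e.g. MONOTONIC | HIRES
        self.os_clock_name = os_clock_name    # e.g. "QueryPerformanceCounter"
        self.precision = precision            # in seconds
        self.epoch = epoch                    # None if the epoch is arbitrary

    def is_steady(self):
        return bool(self.flags & STEADY)

    def now(self):
        return self._func()

def get_clocks(flags=0, clocks=None):
    """Yield every known clock matching the requested flags."""
    for clock in (ALL_CLOCKS if clocks is None else clocks):
        if clock.flags & flags == flags:
            yield clock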

| A simple, obvious, discoverable API is best. If we offer three clocks, we have 
| three named functions. If some of these clocks aren't available on some 
| platform, and we can't emulate them, then simply don't have that named 
| function available on that platform. That's easy to discover: trying to use 
| that clock will give a NameError or AttributeError, and the caller can then 
| fall back on an alternative, or fail, whichever is appropriate.

And I hate this. Because many platforms offer several OS clocks. The
time module SHOULD NOT dictate what clocks you get to play with, and you
should not need to have platform specific knowledge to look for a clock
with your desired characteristics.

If you just want monotonic() and trust the module authors' policy
decisions you can go with monotonic(), have it do AttributeError if
unavailable and never worry about discoverability or the inspectable
object layer. Many will probably be happy with that.

But without get_clock() or something like it, there is no
discoverability and no ability for a user to decide their own clock
choice policy.
-- 
Cameron Simpson <cs at zip.com.au> DoD#743
http://www.cskk.ezoshosting.com/cs/

Your modesty is typically human, so I will overlook it. - a Klingon

From victor.stinner at gmail.com  Wed Apr  4 03:28:34 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Wed, 04 Apr 2012 03:28:34 +0200
Subject: [Python-Dev] PEP 418 is too divisive and confusing and should
 be postponed
In-Reply-To: <4F7B96F1.6020906@pearwood.info>
References: <4F7B96F1.6020906@pearwood.info>
Message-ID: <4F7BA3C2.4050705@gmail.com>

On 04/04/2012 02:33, Steven D'Aprano wrote:
> Judging by the hundreds of emails regarding PEP 418, the disagreements
> about APIs, namings, and even what characteristics clocks should have, I
> believe that the suggestion is too divisive (and confusing!) to be
> accepted or rejected at this time.

Oh, I just "rewrote" the PEP before reading your email. Sorry for the 
noise with this PEP :-) I just read again all emails related to this PEP 
to complete the PEP. The PEP should now list all proposed API designs. I 
hope that I did not forget anything.

I failed to propose a consistent and clear API because I (and Guido!) 
wanted to fall back to the system clock. Falling back to the system clock 
is a problem when you have to define the function in the documentation 
or if you don't want to use the system clock (but do something else) if 
no monotonic clock is available.

So I rewrote the PEP to simplify it:

  * Don't fall back to the system clock: time.monotonic() is always 
monotonic (cannot go backward), but it is not always available. You have 
to write a classic try/except ImportError (see the sketch after this 
list), which has a nice advantage: your program will work on Python 
older than 3.3 ;-)

  * Remove the time.perf_counter() function (it was called 
time.highres() before). The "highres" notion was confusing. I only wrote 
the function to expose QueryPerformanceCounter (which was already 
accessible through time.clock() on Windows). The function was not well 
defined. Another PEP should be written, or at least the subject should 
be discussed, after PEP 418 (monotonic clock).

  * Rename time.steady() to time.monotonic(), again :-)
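
A sketch of the classic fallback mentioned in the first point above (this 
is not code from the PEP, just the pattern):

try:
    from time import monotonic
except ImportError:
    # Python < 3.3, or no monotonic clock on this platform: the *caller*
    # explicitly decides to fall back to the system clock.
    from time import time as monotonic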

> Everyone has a different opinion, everyone believes their opinion holds
> for the majority, and it isn't practical for anyone to read the entire
> discussion.

I read most emails and I can say that:

  * There is a need for a monotonic clock
  * Most people prefer to handle the fallback explicitly if no monotonic 
clock is available
  * Most people don't want to call the new function "steady" because it 
stands for something different

> I propose for Python 3.3:
>
> 1) the os module should expose lightweight wrappers around whatever
> clocks the operating system provides;

Python 3.3 already has time.clock_gettime() and time.clock_getres() with 
CLOCK_REALTIME, CLOCK_MONOTONIC, CLOCK_MONOTONIC_RAW, CLOCK_HIGHRES.

mach_absolute_time() and GetTickCount/GetTickCount64 are not available yet.

> 3) we postpone PEP 418 until there is some real-world experience with
> using the os clocks from Python and we can develop a consensus of what
> is actually needed rather than what people think we need (i.e. probably
> in 3.4);

Many applications already implement their own "monotonic" clock. Some 
libraries also provide such a clock for Python. On UNIX, it always uses 
clock_gettime(MONOTONIC). On Windows, it's sometimes GetTickCount, 
sometimes QueryPerformanceCounter. On Mac OS X, it's always 
mach_absolute_time(). I didn't find a library supporting Solaris.

> 4) if the standard library has need for a "use the best clock available,
> for some definition of best, and fall back to time() if not" clock, then
> the time module should do the simplest thing that could possible work,
> flagged as a private function:

In the last version of my PEP, time.monotonic() is simply defined as "a 
monotonic clock (cannot go backward)". There is no more "... best ..." 
in its definition.

Victor

From yselivanov.ml at gmail.com  Wed Apr  4 03:30:08 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Tue, 3 Apr 2012 21:30:08 -0400
Subject: [Python-Dev] PEP 418 is too divisive and confusing and should
	be postponed
In-Reply-To: <4F7B96F1.6020906@pearwood.info>
References: <4F7B96F1.6020906@pearwood.info>
Message-ID: <539B1A3E-2220-4BE0-94EA-D8EEE57AE8D6@gmail.com>

On 2012-04-03, at 8:33 PM, Steven D'Aprano wrote:

> 1) the os module should expose lightweight wrappers around whatever clocks the operating system provides;

+1.  That should make it flexible enough to those who really need it.

> 2) the time module should NOT provide any further clocks other than the existing time() and clock() functions (but see point 4 below);
> 
> 3) we postpone PEP 418 until there is some real-world experience with using the os clocks from Python and we can develop a consensus of what is actually needed rather than what people think we need (i.e. probably in 3.4);
> 
> 4) if the standard library has need for a "use the best clock available, for some definition of best, and fall back to time() if not" clock, then the time module should do the simplest thing that could possible work, flagged as a private function:

+1 on overall idea too.

-
Yury

From anacrolix at gmail.com  Wed Apr  4 04:05:38 2012
From: anacrolix at gmail.com (Matt Joiner)
Date: Wed, 4 Apr 2012 10:05:38 +0800
Subject: [Python-Dev] PEP 418 is too divisive and confusing and should
	be postponed
In-Reply-To: <4F7BA3C2.4050705@gmail.com>
References: <4F7B96F1.6020906@pearwood.info>
	<4F7BA3C2.4050705@gmail.com>
Message-ID: <CAB4yi1NqB36ATOh76qhkvU3a7Nr9CcBLrd+d3RKie8u=pBgLLg@mail.gmail.com>

Finally! We've come full circle.

+1 for monotonic as just described by Victor.

From anacrolix at gmail.com  Wed Apr  4 04:09:40 2012
From: anacrolix at gmail.com (Matt Joiner)
Date: Wed, 4 Apr 2012 10:09:40 +0800
Subject: [Python-Dev] PEP 418 is too divisive and confusing and should
	be postponed
In-Reply-To: <CAB4yi1NqB36ATOh76qhkvU3a7Nr9CcBLrd+d3RKie8u=pBgLLg@mail.gmail.com>
References: <4F7B96F1.6020906@pearwood.info> <4F7BA3C2.4050705@gmail.com>
	<CAB4yi1NqB36ATOh76qhkvU3a7Nr9CcBLrd+d3RKie8u=pBgLLg@mail.gmail.com>
Message-ID: <CAB4yi1MQfy0mVCxJgnTpL16nJyb5Z4Q7ZuWd-0j2JYgMn93YsA@mail.gmail.com>

Lock it in before the paint dries.
On Apr 4, 2012 10:05 AM, "Matt Joiner" <anacrolix at gmail.com> wrote:

> Finally! We've come full circle.
>
> +1 for monotonic as just described by Victor.
>

From cs at zip.com.au  Wed Apr  4 04:46:56 2012
From: cs at zip.com.au (Cameron Simpson)
Date: Wed, 4 Apr 2012 12:46:56 +1000
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
 (was: PEP 418: Add monotonic clock)
In-Reply-To: <CAMpsgwbibQ_03kJGvYBL5GOagk-ONJjg5xz2h1tt-tUbtLwPxQ@mail.gmail.com>
References: <CAMpsgwbibQ_03kJGvYBL5GOagk-ONJjg5xz2h1tt-tUbtLwPxQ@mail.gmail.com>
Message-ID: <20120404024656.GA30247@cskk.homeip.net>

On 04Apr2012 01:45, Victor Stinner <victor.stinner at gmail.com> wrote:
| > | get_clock() returns None if no clock has the requested flags, whereas
| > | I expected an exception (LookupError or NotImplementError?).
| >
| > That is deliberate. People can easily write fallback like this:
| >
| >   clock = get_clock(T_MONOTONIC|T_HIRES) or get_clock(T_MONOTONIC)
|
| Why not pass a list of flag sets? Example:
|
| haypo_steady = get_clock(MONOTONIC|STEADY, STEADY, MONOTONIC, REALTIME)
| # try to get a monotonic and steady clock,
| # or fallback to a steady clock,
| # or fallback to a monotonic clock,
| # or fallback to the system clock

That's interesting. Ethan Furman suggested multiple arguments to be
combined, whereas yours bundles multiple search criteria in one call.

While it uses a bitmask as mine does, this may get cumbersome if we went
with Nick's "use strings!" suggestion.
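
As a sketch of what such a cascading get_clock() might look like (the flag 
values and the registry here are invented purely for illustration):

MONOTONIC, STEADY, HIGHRES, REALTIME = 1, 2, 4, 8   # invented flag values
_CLOCKS = []   # (flags, clock_callable) pairs, registered per platform

def get_clock(*flag_sets):
    # Try each requested flag set in order; return the first registered
    # clock satisfying every flag in that set, or None if nothing matches.
    for wanted in flag_sets:
        for flags, clock in _CLOCKS:
            if flags & wanted == wanted:
                return clock
    return None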

| haypo_perf_counter = get_clock(HIGHRES, MONOTONIC|STEADY, STEADY,
| MONOTONIC, REALTIME)
| # try to get a high-resolution clock
| # or fallback to a monotonic and steady clock,
| # or fallback to a steady clock,
| # or fallback to a monotonic clock,
| # or fallback to the system clock
|
| On Windows, haypo_steady should give GetTickCount (MONOTONIC|STEADY)
| and haypo_perf_counter should give QueryPerformanceCounter
| (MONOTONIC|HIGHRES).

Sounds ok to me. I am not familiar with the Windows counters and am
happy to take your word for it.

| Hum, I'm not sure that haypo_highres uses the same clocks as
| time.perf_counter() in the PEP.
|
| > If one wants an exception it is easy to follow up with:
| >   if not clock:
| >       raise RuntimeError("no suitable clocks on offer on this platform")
|
| And if you don't read the doc carefully and forget the test, you get a
| "NoneType object is not callable" error.

Excellent! An exception either way! Win win!

| > | get_clock() doesn't remember if a clock works or not (if it raises an
| > | OSError) and does not fallback to the next clock on error. See
| > | "pseudo-codes" in the PEP 418.
| >
| > I presume the available clocks are all deduced from the platform. Your
| > pseudo code checks for OSError at fetch-the-clock time. I expect that
| > to occur once when the module is loaded, purely to populate the table
| > of available platform clocks.
|
| It's better to avoid unnecessary system calls at startup (when the
| time module is loaded), but you may defer the creation of the clock
| list, or at least of the flags of each clock.

Yes indeed. I think this should be deferred until use.

| > Note that you don't need to provide a clock list at all; get_clock()
| > will use ALL_CLOCKS by default, and hires() and monotonic() should each
| > have their own default list.
|
| A list of clocks and a function are maybe redundant. Why not only
| providing a function?

Only because the function currently returns only one clock.
The picky user may want to peruse all the clocks, inspecting metadata
other than the coarse flag requirements (precision etc).

There should be a way to enumerate the available clock implementations;
in my other recent post I suggest either lists (as current), a
get_clocks() function, or a mode parameter to get_clock() such as
_all_clocks, defaulting to False.

| > Regarding the choice itself: as the _caller_ (not the library author),
| > you must decide what you want most. You're already planning offering
| > monotonic() and hires() calls without my proposal!
|
| My PEP starts with use cases: it proposes one clock per use case.
| There is no "If you need a monotonic, steady and high-resolution clock
| ..." use case.

Yes, but this is my exact objection to the "just provide hires() and
steady() and/or monotonic()" API; the discussion to date is littered
with "I can't imagine wanting to do X" style remarks. We should not be
trying to enumerate the use case space exhaustively. I'm entirely in
favour of your list of use cases and the approach of providing hires() et
al to cover the use cases thought to be common. But I feel we really _must_
provide a way for the user with a not-thought-of use case to make an
arbitrary decision.

get_clock() provides a simple cut at the "gimme a suitable clock"
approach, with the lists or other "get me an enumeration of the
available clocks" mechanism for totally ad hoc perusal if the need
arises.

This is also my perhaps unstated concern with Guido's "the more I think about
it, the more I believe these functions should have very loose guarantees, and
instead just cater to common use cases -- availability of a timer with
minimal fuss is usually more important than the guarantees"
http://www.mail-archive.com/python-dev at python.org/msg66173.html

The easy-to-use hires() etc must make very loose guarantees or they will
be useless too often. That looseness is fine in some ways - it provides
availability on many platforms (all?) and discourages the user from
hoping for too much and thus writing fragile code. But it also PREVENTS
the user from obtaining a really good clock if it is available (where
"good" means their partiuclar weirdo feature requirements).

So I think there should be both - the easy and simple calls, and a
mechanism for providing all clocks so the user can choose with arbitrary
criteria and fallback.

| The "highres" name was confusing, I just replaced it with
| time.perf_counter() (thanks Antoine for the name!).
| time.perf_counter() should be used for benchmarking and profiling.

I've been wondering: do we distinguish between clocks and counters? In my
mind a clock or timer has a very linear relationship with "real time",
the wall clock. A counter, by comparison, may measure CPU cycles or
kernel timeslice ticks or python opcode counts or any number of other
time-like resource consumption things.

I've been presuming we're concerned here with "clocks" and not counters.

| > Taking your query "Should
| > I use MONOTONIC_CLOCKS or HIRES_CLOCKS when I would like a monotonic and
| > high-resolution clock" is _already_ a problem. Of course you must call
| > monotonic() or hires() first under the current scheme, and must answer this
| > question anyway. Do you prefer hires? Use it first! No preference? Then the
| > question does not matter.
|
| I mean having to choose the flags *and* the list of clocks is hard. I
| would prefer to only have to choose flags or only the list of clocks.
| The example was maybe not the best one.

Yah; I think I made a followup post where I realised you may have meant
this.

The use of default arguments is meant to make it easy to use flags
and/or lists or even neither (which for get_clock() at least would
always get you a clock because a wrapper for time.time() is always
provided). In my mind, usually just flags of course.

| > | If you have only one list of clocks, how do sort the list to get
| > | QueryPerformanceCounter when the user asks for highres and
| > | GetTickCount when the user asks for monotonic?
| >
| > This is exactly why there are supposed to be different lists.
| > You have just argued against your objection above.
|
| You can solve this issue with only one list of clocks if you use the
| right set of flags.

No you can't, not in general. If there are multiple clocks honouring
those flags you only ever get the first one in the list. The point of the
MONOTONIC_CLOCKS list etc is that the lists may be differently ordered
to provide quality of clock within that domain. Suppose I ask for
steady_clock(MONOTONIC); I would probably prefer a more-steady clock
over a more-precise/hires clock. And the converse when asking for
hires_clock(). So different list orders, at least in principle.

If it turns out empirically that this isn't the case then all the names
can in fact refer to the same list. But offering only one list _name_
prevents offering these nuances when other platforms/clocks come in the
future.
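
Purely for illustration (the clock objects below are placeholders), the 
lists might just be the same clocks in different preference orders:

qpc_clock, tick_clock, system_clock = object(), object(), object()  # placeholders

STEADY_CLOCKS = [tick_clock, qpc_clock, system_clock]   # most steady first
HIRES_CLOCKS  = [qpc_clock, tick_clock, system_clock]   # best resolution first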

| > | So we would have:
| > |
| > | GetTickCount.flags = T_MONOTONIC | T_STEADY | T_HIGHRES
| > |
| > | Even if GetTickCount has only an accuracy of 15 ms :-/
| >
| > T_HIGHRES is a quality call, surely? If 15ms is too sloppy for "high
| > resolution", then it should _not_ have the T_HIRES flag.
|
| So what is the minimum resolution and/or accuracy of the HIGHRES flag?

No idea. But you must in principle have one in mind to offer the hires()
call at all in the PEP, or be prepared to merely offer the most hires
clock available regardless. In this latter case you would always mark
that clock as having the HIRES flag and the problem is solved. It would
be good to have metadata to indicate how hires a particular clock is.

| > | Could you please update your code according to my remarks? I will try
| > | to integrate it into the PEP. A PEP should list all alternatives!
| >
| > Surely.
| >
| > The only updates I can see are to provide the flat interface
| > (instead of via clock-object indirection) and the missing hires_clock()
| > and monotonic_clock() convenience methods.
|
| A full implementation would help to decide which API is the best one.
| "Full" implementation:
|
|  - define all convenience functions
|  - define all lists of clocks

Ok. My current code is here, BTW:
  https://bitbucket.org/cameron_simpson/css/src/tip/lib/python/cs/clockutils.py
(Finally found a revision independent URL on bitbucket.)

|  - define flags of all clocks listed in the PEP 418: clocks used in
| the pseudo-code of time.steady and time.perf_counter, and maybe also
| time.time

I'll make one. It will take a little while. Will post again when ready. At
present the code compiles and runs (albeit with no platform specific
clocks:-) This table may require fictitious code. Should still compile
I guess...

Cheers,
--
Cameron Simpson <cs at zip.com.au> DoD#743
http://www.cskk.ezoshosting.com/cs/

The right to be heard does not include the right to be taken seriously.
        - Hubert Horatio Humphrey

From steve at pearwood.info  Wed Apr  4 10:09:40 2012
From: steve at pearwood.info (Steven D'Aprano)
Date: Wed, 4 Apr 2012 18:09:40 +1000
Subject: [Python-Dev] PEP 418 is too divisive and confusing and should
	be postponed
In-Reply-To: <4F7BA3C2.4050705@gmail.com>
References: <4F7B96F1.6020906@pearwood.info> <4F7BA3C2.4050705@gmail.com>
Message-ID: <20120404080940.GA19862@ando>

On Wed, Apr 04, 2012 at 03:28:34AM +0200, Victor Stinner wrote:
> On 04/04/2012 02:33, Steven D'Aprano wrote:
> >Judging by the hundreds of emails regarding PEP 418, the disagreements
> >about APIs, namings, and even what characteristics clocks should have, I
> >believe that the suggestion is too divisive (and confusing!) to be
> >accepted or rejected at this time.
> 
> Oh, I just "rewrote" the PEP before reading your email. Sorry for the 
> noise with this PEP :-) I just read again all emails related to this PEP 
> to complete the PEP. The PEP should now list all proposed API designs. I 
> hope that I did not forget anything.

I think the PEP is a good, important PEP, and thank you for being the 
PEP's champion. But in my opinion, this is too big to rush it and risk 
locking in a sub-standard API for the next decade or two.


> >I propose for Python 3.3:
> >
> >1) the os module should expose lightweight wrappers around whatever
> >clocks the operating system provides;
> 
> Python 3.3 has already time.clock_gettime() and time.clock_getres() with 
> CLOCK_REALTIME, CLOCK_MONOTONIC, CLOCK_MONOTONIC_RAW, CLOCK_HIGHRES.

Why does it already have these things when the PEP is not accepted? 

(This is not a rhetorical question, perhaps there is a good reason why 
these have been added independently of the PEP.)

If I remember correctly, Guido earlier mentioned that he only wanted to 
see one or two (I forget which) new clocks, and I see in 3.3.0a1 there 
are already at least five new clocks:

monotonic or clock_gettime(CLOCK_MONOTONIC)  # Are these the same thing?
wallclock
clock_gettime(CLOCK_PROCESS_CPUTIME_ID)
clock_gettime(CLOCK_REALTIME)
clock_gettime(CLOCK_THREAD_CPUTIME_ID)

plus the old ways, time.time and time.clock. (Neither of which seems 
to have a clock-id.)


> mach_absolute_time() and GetTickCount/GetTick64 are not available yet.

That will make potentially 10 different clocks in the time module.


It may be that, eventually, Python should support all these ten 
different clocks. (Personally, I doubt that the average Python 
programmer cares about the difference between time() and clock(), let 
alone another eight more.) But there's no rush. I think we should start 
by supporting OS-specific clocks in the os module, and then once we have 
some best-practice idioms, we can promote some of them to the time 
module.


-- 
Steven

From p.f.moore at gmail.com  Wed Apr  4 10:21:21 2012
From: p.f.moore at gmail.com (Paul Moore)
Date: Wed, 4 Apr 2012 09:21:21 +0100
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
 (was: PEP 418: Add monotonic clock)
In-Reply-To: <CACac1F_T6_P2Bi=CbSQwJ2Udvdh33C6oPKPgAUT76n1g3bdtRg@mail.gmail.com>
References: <jleau6$2dt$1@dough.gmane.org>
	<20120403084305.GA19441@cskk.homeip.net>
	<4F7B9022.60504@canterbury.ac.nz>
	<CACac1F_T6_P2Bi=CbSQwJ2Udvdh33C6oPKPgAUT76n1g3bdtRg@mail.gmail.com>
Message-ID: <CACac1F_SQF_vjvb+P70R2DR06WyE7MM9Cwv_2eo43=gZzcstBA@mail.gmail.com>

(Sorry, should have sent to the list).

On 4 April 2012 01:04, Greg Ewing <greg.ewing at canterbury.ac.nz> wrote:
> Cameron Simpson wrote:
>>
>> People have been saying "hires" throughout the
>> threads I think, but I for one would be slightly happier with "highres".
>
>
> hirez?

What's wrong with high_resolution?
Paul

From solipsis at pitrou.net  Wed Apr  4 11:46:31 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Wed, 4 Apr 2012 11:46:31 +0200
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
References: <CAL0kPAV0gOkVCE9-7ux7ZEWC3bPxB3+AYMgu4BxzR6jXeyf97A@mail.gmail.com>
	<20120403060317.GA31001@cskk.homeip.net>
	<CAL0kPAXZ1AxqeVYZen9vzUYxKDG-hr1YFZ+vRRMLTN=hJDfKPQ@mail.gmail.com>
	<4F7B8D7B.6070806@pearwood.info>
	<CAMpsgwatK3C69eXVsNh99AKjxaxWXHTZNfwYd=wCfi5jjqeQxg@mail.gmail.com>
Message-ID: <20120404114631.69e8a8ff@pitrou.net>

On Wed, 4 Apr 2012 02:02:12 +0200
Victor Stinner <victor.stinner at gmail.com> wrote:
> > Lennart Regebro wrote:
> >> Well, get_clock(monotonic=True, highres=True) would be a vast
> >> improvement over get_clock(MONOTONIC|HIRES).
> 
> I don't like this keyword API because you have to use a magically
> marker (True). Why True? What happens if I call
> get_clock(monotonic=False) or get_clock(monotonic="yes")?

Since when are booleans magical? Has this thread gone totally insane?

Regards

Antoine.



From solipsis at pitrou.net  Wed Apr  4 11:52:24 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Wed, 4 Apr 2012 11:52:24 +0200
Subject: [Python-Dev] PEP 418 is too divisive and confusing and should
	be postponed
References: <4F7B96F1.6020906@pearwood.info> <4F7BA3C2.4050705@gmail.com>
	<20120404080940.GA19862@ando>
Message-ID: <20120404115224.2313b111@pitrou.net>

On Wed, 4 Apr 2012 18:09:40 +1000
Steven D'Aprano <steve at pearwood.info> wrote:
> > Python 3.3 has already time.clock_gettime() and time.clock_getres() with 
> > CLOCK_REALTIME, CLOCK_MONOTONIC, CLOCK_MONOTONIC_RAW, CLOCK_HIGHRES.
> 
> Why does it already have these things when the PEP is not accepted? 
> 
> (This is not a rhetorical question, perhaps there is a good reason why 
> these have been added independently of the PEP.)

Because these are thin (low-level) wrappers around the corresponding
POSIX APIs, so there is no reason not to add them.

I know you were asking for such wrappers to be in the "os" module, but
my understanding is that time-related functions should preferably go
into the "time" module. "os" is already full of very diverse stuff, and
documentation-wise it is better if time-related functions end up in a
time-related module. Otherwise we'll end up having to cross-link
manually, which is always cumbersome (for us) and less practical (for
the reader).

Regards

Antoine.



From victor.stinner at gmail.com  Wed Apr  4 13:04:13 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Wed, 4 Apr 2012 13:04:13 +0200
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
In-Reply-To: <20120404114631.69e8a8ff@pitrou.net>
References: <CAL0kPAV0gOkVCE9-7ux7ZEWC3bPxB3+AYMgu4BxzR6jXeyf97A@mail.gmail.com>
	<20120403060317.GA31001@cskk.homeip.net>
	<CAL0kPAXZ1AxqeVYZen9vzUYxKDG-hr1YFZ+vRRMLTN=hJDfKPQ@mail.gmail.com>
	<4F7B8D7B.6070806@pearwood.info>
	<CAMpsgwatK3C69eXVsNh99AKjxaxWXHTZNfwYd=wCfi5jjqeQxg@mail.gmail.com>
	<20120404114631.69e8a8ff@pitrou.net>
Message-ID: <CAMpsgwa7evfoGbNKi9UzpFzEs=+nEw+zKEMY5y9aJG4ubwUb4g@mail.gmail.com>

2012/4/4 Antoine Pitrou <solipsis at pitrou.net>:
> On Wed, 4 Apr 2012 02:02:12 +0200
> Victor Stinner <victor.stinner at gmail.com> wrote:
>> > Lennart Regebro wrote:
>> >> Well, get_clock(monotonic=True, highres=True) would be a vast
>> >> improvement over get_clock(MONOTONIC|HIRES).
>>
>> I don't like this keyword API because you have to use a magically
>> marker (True). Why True? What happens if I call
>> get_clock(monotonic=False) or get_clock(monotonic="yes")?
>
> Since when are booleans magical? Has this thread gone totally insane?

It depends on whether the option supports other values. But as I understood,
the keyword value must always be True.

Victor

From victor.stinner at gmail.com  Wed Apr  4 13:09:46 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Wed, 4 Apr 2012 13:09:46 +0200
Subject: [Python-Dev] PEP 418 is too divisive and confusing and should
	be postponed
In-Reply-To: <20120404115224.2313b111@pitrou.net>
References: <4F7B96F1.6020906@pearwood.info> <4F7BA3C2.4050705@gmail.com>
	<20120404080940.GA19862@ando> <20120404115224.2313b111@pitrou.net>
Message-ID: <CAMpsgwa8C24iad8cPrgn0_ENmnyzdb67NQkAWnD-DrsqHcJ4Uw@mail.gmail.com>

>> Why does it already have these things when the PEP is not accepted?
>> ...
>> (This is not a rhetorical question, perhaps there is a good reason why
>> these have been added independently of the PEP.)

time.clock_gettime() & friends were added by issue #10278. The
function was added before someone asked me to write a PEP. The need
for a PEP came later, when the time.wallclock() and time.monotonic()
functions were added.

> Because these are thin (low-level) wrappers around the corresponding
> POSIX APIs, so there is no reason not to add them.

time.clock_gettime() can be used for purposes other than a monotonic
clock. For example, CLOCK_THREAD_CPUTIME_ID is the only available
way to get the "Thread-specific CPU-time clock". It also gives
access to CLOCK_MONOTONIC_RAW which is not used by the
time.monotonic() function proposed in the PEP.

Victor

From victor.stinner at gmail.com  Wed Apr  4 13:24:11 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Wed, 4 Apr 2012 13:24:11 +0200
Subject: [Python-Dev] PEP 418 is too divisive and confusing and should
	be postponed
In-Reply-To: <4F7BA3C2.4050705@gmail.com>
References: <4F7B96F1.6020906@pearwood.info>
	<4F7BA3C2.4050705@gmail.com>
Message-ID: <CAMpsgwbZKgEzbZmsVBeM1vNfjRUxEdGH=FAybvf3HgNOsRsnRA@mail.gmail.com>

> I failed to propose a consistent and clear API because I (and Guido!) wanted
> to fall back to the system clock. Falling back to the system clock is a
> problem when you have to define the function in the documentation or if you
> don't want to use the system clock (but do something else) if no monotonic
> clock is available.

Well, it was not only Guido and me.

Nick Coghlan wrote:
"However, I think Victor's right to point out that the *standard
library* needs to have a fallback to maintain backwards compatibility
if time.monotonic() isn't available, and it seems silly to implement
the same fallback logic in every module where we manipulate timeouts."
and
"Since duplicating that logic in every module that handles timeouts
would be silly, it makes sense to provide an obvious way to do it in
the time module."

Michael Foord wrote:
"It is this always-having-to-manually-fallback-depending-on-os that I
was hoping your new functionality would avoid. Is time.try_monotonic()
suitable for this usecase?"

The following functions / libraries fall back to the system clock if
no monotonic clock is available:
 - QElapsedTimer class of the Qt library
 - g_get_monotonic_time() of the glib library
 - monotonic_clock library
 - AbsoluteTime.now (third-party Ruby library),
"AbsoluteTime.monotonic?" tells if AbsoluteTime.now is monotonic

Extract of the glib doc: "Otherwise, we make a best effort that
probably involves returning the wall clock time (with at least
microsecond accuracy, subject to the limitations of the OS kernel)."

--

Only the python-monotonic-time library fails with an OSError if no monotonic
clock is available.

Java's System.nanoTime() makes few guarantees: "Returns the current value
of the most precise available system timer, in nanoseconds. This
method can only be used to measure elapsed time and is not related to
any other notion of system or wall-clock time. The value returned
represents nanoseconds since some fixed but arbitrary time (perhaps in
the future, so values may be negative)." I don't even know if it is
monotonic, steady or has a high resolution.

Note: Boost.Chrono.high_resolution_clock falls back to the system
clock if no steady clock is available. (But the high-resolution clock
idea was deferred, it's something different than a monotonic or steady
clock.)

Victor

From rosuav at gmail.com  Wed Apr  4 13:41:44 2012
From: rosuav at gmail.com (Chris Angelico)
Date: Wed, 4 Apr 2012 21:41:44 +1000
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
In-Reply-To: <CAMpsgwa7evfoGbNKi9UzpFzEs=+nEw+zKEMY5y9aJG4ubwUb4g@mail.gmail.com>
References: <CAL0kPAV0gOkVCE9-7ux7ZEWC3bPxB3+AYMgu4BxzR6jXeyf97A@mail.gmail.com>
	<20120403060317.GA31001@cskk.homeip.net>
	<CAL0kPAXZ1AxqeVYZen9vzUYxKDG-hr1YFZ+vRRMLTN=hJDfKPQ@mail.gmail.com>
	<4F7B8D7B.6070806@pearwood.info>
	<CAMpsgwatK3C69eXVsNh99AKjxaxWXHTZNfwYd=wCfi5jjqeQxg@mail.gmail.com>
	<20120404114631.69e8a8ff@pitrou.net>
	<CAMpsgwa7evfoGbNKi9UzpFzEs=+nEw+zKEMY5y9aJG4ubwUb4g@mail.gmail.com>
Message-ID: <CAPTjJmpkKNHN88v6TKLiR86vkc59g9Gd2KsCx=o5CKjx4v1VCA@mail.gmail.com>

On Wed, Apr 4, 2012 at 9:04 PM, Victor Stinner <victor.stinner at gmail.com> wrote:
> 2012/4/4 Antoine Pitrou <solipsis at pitrou.net>:
>> On Wed, 4 Apr 2012 02:02:12 +0200
>> Victor Stinner <victor.stinner at gmail.com> wrote:
>>> > Lennart Regebro wrote:
>>> >> Well, get_clock(monotonic=True, highres=True) would be a vast
>>> >> improvement over get_clock(MONOTONIC|HIRES).
>>>
>>> I don't like this keyword API because you have to use a magically
>>> marker (True). Why True? What happens if I call
>>> get_clock(monotonic=False) or get_clock(monotonic="yes")?
>>
>> Since when are booleans magical? Has this thread gone totally insane?
>
> It depends on whether the option supports other values. But as I understood,
> the keyword value must always be True.

If I were looking at that in documentation, my automatic guess would
be that the only thing that matters is whether the argument
compares-as-true or not. So get_clock(monotonic="yes") would be the
same as =True, and =False wouldn't be. And get_clock(monotonic="No,
you idiot, I want one that ISN'T") would... be stupid. But it'd still
function :)

Chris Angelico

From brett at python.org  Wed Apr  4 15:57:55 2012
From: brett at python.org (Brett Cannon)
Date: Wed, 4 Apr 2012 09:57:55 -0400
Subject: [Python-Dev] .{git,bzr}ignore in cpython HG repo
In-Reply-To: <CAE4GLst2+A6=1uSKMturOeC6_7fmt_51yU+MB7-K4Edv0RTycQ@mail.gmail.com>
References: <4F75CA7E.7030204@redhat.com>
	<CAD+XWwpj3G38LsyNWA3K3Gw-649p2J676WxH_YaqYZ=y=+BmjA@mail.gmail.com>
	<4F78D73C.4000204@redhat.com>
	<CAD+XWwowBpgCEFBcqvhfT1CWh1JomtegQ1+Lbt=mAq95-XHLmQ@mail.gmail.com>
	<20120402135048.6ef7d87d@pitrou.net> <jlc7ht$409$1@dough.gmane.org>
	<CAE4GLst2+A6=1uSKMturOeC6_7fmt_51yU+MB7-K4Edv0RTycQ@mail.gmail.com>
Message-ID: <CAP1=2W5eMMfckVjmn4D33YJ-N69=Bk9mabJ1SRDwZx4xY8CPwg@mail.gmail.com>

On Mon, Apr 2, 2012 at 08:58, Thomas Spura <thomas.spura at googlemail.com>wrote:

> On Mon, Apr 2, 2012 at 2:54 PM, Stefan Behnel <stefan_ml at behnel.de> wrote:
> > Antoine Pitrou, 02.04.2012 13:50:
> >> On Sun, 1 Apr 2012 19:44:00 -0500
> >> Brian Curtin wrote:
> >>>> On Sun, Apr 1, 2012 at 17:31, Matěj Cepl wrote:
> >>>> On 1.4.2012 23:46, Brian Curtin wrote:
> >>>>> For what reason? Are the git or bzr files causing issues on HG?
> >>>>
> >>>>
> >>>> No, but wrong .gitignore causes issues with git repo obtained via
> >>>> hg-fast-import. If it is meant as an intentional sabotage of using
> git (and
> >>>> bzr) for cpython, then that's the only explanation I can understand,
> >>>> otherwise it doesn't make sense to me why these files are in HG
> repository
> >>>> at all.
> >>>
> >>> Then you won't understand. Sometimes things get out of date when they
> >>> aren't used or maintained.
> >>>
> >>> You're welcome to fix the problem if you're a Git user, as suggested
> earlier.
> >>
> >> That said, these files will always be outdated, so we might as well
> >> remove them so that at least git / bzr users don't get confused.
> >
> > How often is anything added to the .hgignore file? I doubt that these
> files
> > will "sufficiently always" be outdated to be unhelpful.
>
> How about using symlinks and only using a common syntax in .hgignore
> that git also understands?
>

Because .hgignore has a more expressive syntax. We shouldn't hobble our hg
repo or make it messy just for the sake of git.

From regebro at gmail.com  Wed Apr  4 17:30:26 2012
From: regebro at gmail.com (Lennart Regebro)
Date: Wed, 4 Apr 2012 17:30:26 +0200
Subject: [Python-Dev] PEP 418: rename time.monotonic() to time.steady()?
In-Reply-To: <CAMpsgwb6H4PB1MGTLvooYa8mk27QrSGi2j=X6YBSvN7k1OUQQw@mail.gmail.com>
References: <CAMpsgwaw3suzoMdFD1opPoD2=0XSSrOze31VXJb=0skxnDkxRg@mail.gmail.com>
	<CAL0kPAVLOAuM=m0dTg=wWevtrGLnQaL+STT0vFfEFC-xbUKDdA@mail.gmail.com>
	<CAMpsgwb6H4PB1MGTLvooYa8mk27QrSGi2j=X6YBSvN7k1OUQQw@mail.gmail.com>
Message-ID: <CAL0kPAW-G7J4FBoFodJsTNfmEMOSMOvjkFBgSy1KApwW1Anr6w@mail.gmail.com>

On Tue, Apr 3, 2012 at 23:14, Victor Stinner <victor.stinner at gmail.com> wrote:
>> Wait, what?
>> I already thought we, several days ago, decided that "steady" was a
>> *terrible* name, and that monotonic should *not* fall back to the
>> system clock.
>
> Copy of a more recent Guido's email:
> http://mail.python.org/pipermail/python-dev/2012-March/118322.html
> "Anyway, the more I think about it, the more I believe these functions
> should have very loose guarantees, and instead just cater to common
> use cases -- availability of a timer with minimal fuss is usually more
> important than the guarantees. So forget the idea about one version
> that falls back to time.time() and another that doesn't -- just always
> fall back to time.time(), which is (almost) always better than
> failing.

I disagree with this, mainly for the reason that there aren't any good
names for these functions. "hopefully_monotonic()" doesn't really cut
it for me. :-)
I also don't see how it's hard to guarantee that monotonic() is monotonic.

//Lennart

From solipsis at pitrou.net  Wed Apr  4 17:33:21 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Wed, 4 Apr 2012 17:33:21 +0200
Subject: [Python-Dev] PEP 418: rename time.monotonic() to time.steady()?
References: <CAMpsgwaw3suzoMdFD1opPoD2=0XSSrOze31VXJb=0skxnDkxRg@mail.gmail.com>
	<CAL0kPAVLOAuM=m0dTg=wWevtrGLnQaL+STT0vFfEFC-xbUKDdA@mail.gmail.com>
	<CAMpsgwb6H4PB1MGTLvooYa8mk27QrSGi2j=X6YBSvN7k1OUQQw@mail.gmail.com>
	<CAL0kPAW-G7J4FBoFodJsTNfmEMOSMOvjkFBgSy1KApwW1Anr6w@mail.gmail.com>
Message-ID: <20120404173321.6406034c@pitrou.net>

On Wed, 4 Apr 2012 17:30:26 +0200
Lennart Regebro <regebro at gmail.com> wrote:
> > Copy of a more recent Guido's email:
> > http://mail.python.org/pipermail/python-dev/2012-March/118322.html
> > "Anyway, the more I think about it, the more I believe these functions
> > should have very loose guarantees, and instead just cater to common
> > use cases -- availability of a timer with minimal fuss is usually more
> > important than the guarantees. So forget the idea about one version
> > that falls back to time.time() and another that doesn't -- just always
> > fall back to time.time(), which is (almost) always better than
> > failing.
> 
> I disagree with this, mainly for the reason that there isn't any good
> names for these functions. "hopefully_monotonic()" doesn't really cut
> it for me. :-)

monotonic(fallback=False) doesn't look horrible to me (assuming a
default value of False for the `fallback` parameter).
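
A sketch of that signature (purely illustrative, not what the PEP 
specifies), leaning on the clock_gettime() wrapper that 3.3 already has 
on POSIX:

import time

def monotonic(fallback=False):
    # sketch: use the OS monotonic clock where the wrapper exists
    if hasattr(time, "clock_gettime") and hasattr(time, "CLOCK_MONOTONIC"):
        return time.clock_gettime(time.CLOCK_MONOTONIC)
    if fallback:
        return time.time()
    raise NotImplementedError("no monotonic clock available")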

> I also don't see how it's hard to guarantee that monotonic() is monotonic.

I think we are speaking about a system-wide monotonic clock (i.e., you
can compare values between processes). Otherwise it's probably quite
easy indeed.

Regards

Antoine.



From regebro at gmail.com  Wed Apr  4 17:41:18 2012
From: regebro at gmail.com (Lennart Regebro)
Date: Wed, 4 Apr 2012 17:41:18 +0200
Subject: [Python-Dev] PEP 418 is too divisive and confusing and should
	be postponed
In-Reply-To: <CAMpsgwbZKgEzbZmsVBeM1vNfjRUxEdGH=FAybvf3HgNOsRsnRA@mail.gmail.com>
References: <4F7B96F1.6020906@pearwood.info> <4F7BA3C2.4050705@gmail.com>
	<CAMpsgwbZKgEzbZmsVBeM1vNfjRUxEdGH=FAybvf3HgNOsRsnRA@mail.gmail.com>
Message-ID: <CAL0kPAVuPoYEo=hqTaryi1YNqhpF-8ngJAGrBwaQ4N3Y-M34dA@mail.gmail.com>

I am fine with the PEP as it is now (2012-04-04 15:34 GMT).

A question:

Since the only monotonic clock that can be adjusted by NTP is Linux'
CLOCK_MONOTONIC, if we avoid it, then time.monotonic() would always
give a clock that isn't adjusted by NTP. That would however mean we
wouldn't support monotonic clocks on systems that run a Linux that is
older than mid-2008. Is this generally seen as a problem?

//Lennart

From regebro at gmail.com  Wed Apr  4 17:44:29 2012
From: regebro at gmail.com (Lennart Regebro)
Date: Wed, 4 Apr 2012 17:44:29 +0200
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
In-Reply-To: <CAMpsgwa7evfoGbNKi9UzpFzEs=+nEw+zKEMY5y9aJG4ubwUb4g@mail.gmail.com>
References: <CAL0kPAV0gOkVCE9-7ux7ZEWC3bPxB3+AYMgu4BxzR6jXeyf97A@mail.gmail.com>
	<20120403060317.GA31001@cskk.homeip.net>
	<CAL0kPAXZ1AxqeVYZen9vzUYxKDG-hr1YFZ+vRRMLTN=hJDfKPQ@mail.gmail.com>
	<4F7B8D7B.6070806@pearwood.info>
	<CAMpsgwatK3C69eXVsNh99AKjxaxWXHTZNfwYd=wCfi5jjqeQxg@mail.gmail.com>
	<20120404114631.69e8a8ff@pitrou.net>
	<CAMpsgwa7evfoGbNKi9UzpFzEs=+nEw+zKEMY5y9aJG4ubwUb4g@mail.gmail.com>
Message-ID: <CAL0kPAU7CTFf_r4vfBtyO5At91OeBm7bom3XvqJt4grRH4tstg@mail.gmail.com>

On Wed, Apr 4, 2012 at 13:04, Victor Stinner <victor.stinner at gmail.com> wrote:
> It depends if the option supports other values. But as I understood,
> the keyword value must always be True.

Or False, obviously, which would also be the default.

//Lennart

From regebro at gmail.com  Wed Apr  4 17:47:16 2012
From: regebro at gmail.com (Lennart Regebro)
Date: Wed, 4 Apr 2012 17:47:16 +0200
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
 (was: PEP 418: Add monotonic clock)
In-Reply-To: <4F7B2029.8010707@stoneleaf.us>
References: <CAL0kPAV0gOkVCE9-7ux7ZEWC3bPxB3+AYMgu4BxzR6jXeyf97A@mail.gmail.com>
	<20120403060317.GA31001@cskk.homeip.net>
	<CAL0kPAXZ1AxqeVYZen9vzUYxKDG-hr1YFZ+vRRMLTN=hJDfKPQ@mail.gmail.com>
	<4F7B2029.8010707@stoneleaf.us>
Message-ID: <CAL0kPAWWtk2KWWkBmVHMwCBXGC1k_bjG0wEbkjOt9+DDniTVYw@mail.gmail.com>

On Tue, Apr 3, 2012 at 18:07, Ethan Furman <ethan at stoneleaf.us> wrote:
> What's unclear about returning None if no clocks match?

Nothing, but having to check error values returned from functions is not
what you typically do in Python. Usually, Python functions that fail
raise an error. Please don't force Python users to write pseudo-C code
in Python.

//Lennart

From yselivanov.ml at gmail.com  Wed Apr  4 18:29:29 2012
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Wed, 4 Apr 2012 12:29:29 -0400
Subject: [Python-Dev] PEP 418 is too divisive and confusing and should
	be postponed
In-Reply-To: <4F7BA3C2.4050705@gmail.com>
References: <4F7B96F1.6020906@pearwood.info> <4F7BA3C2.4050705@gmail.com>
Message-ID: <FA560C42-BDBD-46B0-B31D-80FAE744D55C@gmail.com>

On 2012-04-03, at 9:28 PM, Victor Stinner wrote:

> In the last version of my PEP, time.monotonic() is simply defined as "a monotonic clock (cannot go backward)". There is no more "... best ..." in its definition.

I like the last version of the PEP ;)

-
Yury

From ethan at stoneleaf.us  Wed Apr  4 18:18:51 2012
From: ethan at stoneleaf.us (Ethan Furman)
Date: Wed, 04 Apr 2012 09:18:51 -0700
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
 (was: PEP 418: Add monotonic clock)
In-Reply-To: <CAL0kPAWWtk2KWWkBmVHMwCBXGC1k_bjG0wEbkjOt9+DDniTVYw@mail.gmail.com>
References: <CAL0kPAV0gOkVCE9-7ux7ZEWC3bPxB3+AYMgu4BxzR6jXeyf97A@mail.gmail.com>
	<20120403060317.GA31001@cskk.homeip.net>
	<CAL0kPAXZ1AxqeVYZen9vzUYxKDG-hr1YFZ+vRRMLTN=hJDfKPQ@mail.gmail.com>
	<4F7B2029.8010707@stoneleaf.us>
	<CAL0kPAWWtk2KWWkBmVHMwCBXGC1k_bjG0wEbkjOt9+DDniTVYw@mail.gmail.com>
Message-ID: <4F7C746B.3070108@stoneleaf.us>

Lennart Regebro wrote:
> On Tue, Apr 3, 2012 at 18:07, Ethan Furman <ethan at stoneleaf.us> wrote:
>> What's unclear about returning None if no clocks match?
> 
> Nothing, but having to check error values on return functions are not
> what you typically do in Python. Usually, Python functions that fail
> raise an error. Please don't force Python users to write pseudo-C code
> in Python.

You mean like the dict.get() function?

--> repr({}.get('missing'))
'None'

Plus, the failure mode is based on intent:  if the intent is "Give a clock 
no matter what", then yes, an exception when that's not possible is the 
way to go.

But if the intent is "Give me a clock that matches these criteria" then 
returning None is perfectly reasonable.

~Ethan~

From phd at phdru.name  Wed Apr  4 19:44:49 2012
From: phd at phdru.name (Oleg Broytman)
Date: Wed, 4 Apr 2012 21:44:49 +0400
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
 (was: PEP 418: Add monotonic clock)
In-Reply-To: <CAL0kPAWWtk2KWWkBmVHMwCBXGC1k_bjG0wEbkjOt9+DDniTVYw@mail.gmail.com>
References: <CAL0kPAV0gOkVCE9-7ux7ZEWC3bPxB3+AYMgu4BxzR6jXeyf97A@mail.gmail.com>
	<20120403060317.GA31001@cskk.homeip.net>
	<CAL0kPAXZ1AxqeVYZen9vzUYxKDG-hr1YFZ+vRRMLTN=hJDfKPQ@mail.gmail.com>
	<4F7B2029.8010707@stoneleaf.us>
	<CAL0kPAWWtk2KWWkBmVHMwCBXGC1k_bjG0wEbkjOt9+DDniTVYw@mail.gmail.com>
Message-ID: <20120404174449.GB25288@iskra.aviel.ru>

On Wed, Apr 04, 2012 at 05:47:16PM +0200, Lennart Regebro wrote:
> On Tue, Apr 3, 2012 at 18:07, Ethan Furman <ethan at stoneleaf.us> wrote:
> > What's unclear about returning None if no clocks match?
> 
> Nothing, but having to check error values on return functions are not
> what you typically do in Python. Usually, Python functions that fail
> raise an error.

   Absolutely. "Errors should never pass silently."

> Please don't force Python users to write pseudo-C code in Python.

   +1. The Pythonic equivalent of "get_clock(THIS) or get_clock(THAT)" is

for flag in (THIS, THAT):
    try:
        clock = get_clock(flag)
    except:
        pass
    else:
        break
else:
    raise ValueError('Cannot get clock, tried THIS and THAT')

Oleg.
-- 
     Oleg Broytman            http://phdru.name/            phd at phdru.name
           Programmers don't die, they just GOSUB without RETURN.

From g.brandl at gmx.net  Wed Apr  4 19:47:10 2012
From: g.brandl at gmx.net (Georg Brandl)
Date: Wed, 04 Apr 2012 19:47:10 +0200
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
 (was: PEP 418: Add monotonic clock)
In-Reply-To: <4F7C746B.3070108@stoneleaf.us>
References: <CAL0kPAV0gOkVCE9-7ux7ZEWC3bPxB3+AYMgu4BxzR6jXeyf97A@mail.gmail.com>
	<20120403060317.GA31001@cskk.homeip.net>
	<CAL0kPAXZ1AxqeVYZen9vzUYxKDG-hr1YFZ+vRRMLTN=hJDfKPQ@mail.gmail.com>
	<4F7B2029.8010707@stoneleaf.us>
	<CAL0kPAWWtk2KWWkBmVHMwCBXGC1k_bjG0wEbkjOt9+DDniTVYw@mail.gmail.com>
	<4F7C746B.3070108@stoneleaf.us>
Message-ID: <jli1ej$3v4$1@dough.gmane.org>

Am 04.04.2012 18:18, schrieb Ethan Furman:
> Lennart Regebro wrote:
>> On Tue, Apr 3, 2012 at 18:07, Ethan Furman <ethan at stoneleaf.us> wrote:
>>> What's unclear about returning None if no clocks match?
>> 
>> Nothing, but having to check error values on return functions are not
>> what you typically do in Python. Usually, Python functions that fail
>> raise an error. Please don't force Python users to write pseudo-C code
>> in Python.
> 
> You mean like the dict.get() function?
> 
> --> repr({}.get('missing'))
> 'None'

Strawman: this is not a failure.

Georg


From ethan at stoneleaf.us  Wed Apr  4 20:06:50 2012
From: ethan at stoneleaf.us (Ethan Furman)
Date: Wed, 04 Apr 2012 11:06:50 -0700
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
 (was: PEP 418: Add monotonic clock)
In-Reply-To: <jli1ej$3v4$1@dough.gmane.org>
References: <CAL0kPAV0gOkVCE9-7ux7ZEWC3bPxB3+AYMgu4BxzR6jXeyf97A@mail.gmail.com>	<20120403060317.GA31001@cskk.homeip.net>	<CAL0kPAXZ1AxqeVYZen9vzUYxKDG-hr1YFZ+vRRMLTN=hJDfKPQ@mail.gmail.com>	<4F7B2029.8010707@stoneleaf.us>	<CAL0kPAWWtk2KWWkBmVHMwCBXGC1k_bjG0wEbkjOt9+DDniTVYw@mail.gmail.com>	<4F7C746B.3070108@stoneleaf.us>
	<jli1ej$3v4$1@dough.gmane.org>
Message-ID: <4F7C8DBA.4020206@stoneleaf.us>

Georg Brandl wrote:
> Am 04.04.2012 18:18, schrieb Ethan Furman:
>> Lennart Regebro wrote:
>>> On Tue, Apr 3, 2012 at 18:07, Ethan Furman <ethan at stoneleaf.us> wrote:
>>>> What's unclear about returning None if no clocks match?
>>> Nothing, but having to check error values on return functions are not
>>> what you typically do in Python. Usually, Python functions that fail
>>> raise an error. Please don't force Python users to write pseudo-C code
>>> in Python.
>> You mean like the dict.get() function?
>>
>> --> repr({}.get('missing'))
>> 'None'
> 
> Strawman: this is not a failure.

Also not a very good example -- if 'missing' was there with a value of 
None the two situations could not be distinguished with the one call.

At any rate, the point is that there is nothing inherently wrong or 
unPythonic about a function returning None instead of raising an exception.

~Ethan~

From ethan at stoneleaf.us  Wed Apr  4 20:03:02 2012
From: ethan at stoneleaf.us (Ethan Furman)
Date: Wed, 04 Apr 2012 11:03:02 -0700
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
 (was: PEP 418: Add monotonic clock)
In-Reply-To: <20120404174449.GB25288@iskra.aviel.ru>
References: <CAL0kPAV0gOkVCE9-7ux7ZEWC3bPxB3+AYMgu4BxzR6jXeyf97A@mail.gmail.com>	<20120403060317.GA31001@cskk.homeip.net>	<CAL0kPAXZ1AxqeVYZen9vzUYxKDG-hr1YFZ+vRRMLTN=hJDfKPQ@mail.gmail.com>	<4F7B2029.8010707@stoneleaf.us>	<CAL0kPAWWtk2KWWkBmVHMwCBXGC1k_bjG0wEbkjOt9+DDniTVYw@mail.gmail.com>
	<20120404174449.GB25288@iskra.aviel.ru>
Message-ID: <4F7C8CD6.7090308@stoneleaf.us>

Oleg Broytman wrote:
> On Wed, Apr 04, 2012 at 05:47:16PM +0200, Lennart Regebro wrote:
>> On Tue, Apr 3, 2012 at 18:07, Ethan Furman <ethan at stoneleaf.us> wrote:
>>> What's unclear about returning None if no clocks match?
>> Nothing, but having to check error values on return functions are not
>> what you typically do in Python. Usually, Python functions that fail
>> raise an error.
> 
>    Absolutely. "Errors should never pass silently."

Again, what's the /intent/?  Having no matching clocks does not have to 
be an error.


>> Please don't force Python users to write pseudo-C code in Python.
> 
>    +1. Pythonic equivalent of "get_clock(THIS) or get_clok(THAT)" is
> 
> for flag in (THIS, THAT):
>     try:
>         clock = get_clock(flag)
>     except:
>         pass
>     else:
>         break
> else:
>     raise ValueError('Cannot get clock, tried THIS and THAT')


Wow -- you'd rather write nine lines of code instead of three?

clock = get_clock(THIS) or get_clock(THAT)
if clock is None:
     raise ValueError('Cannot get clock, tried THIS and THAT')

~Ethan~

From phd at phdru.name  Wed Apr  4 21:24:36 2012
From: phd at phdru.name (Oleg Broytman)
Date: Wed, 4 Apr 2012 23:24:36 +0400
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
 (was: PEP 418: Add monotonic clock)
In-Reply-To: <4F7C8CD6.7090308@stoneleaf.us>
References: <CAL0kPAV0gOkVCE9-7ux7ZEWC3bPxB3+AYMgu4BxzR6jXeyf97A@mail.gmail.com>
	<20120403060317.GA31001@cskk.homeip.net>
	<CAL0kPAXZ1AxqeVYZen9vzUYxKDG-hr1YFZ+vRRMLTN=hJDfKPQ@mail.gmail.com>
	<4F7B2029.8010707@stoneleaf.us>
	<CAL0kPAWWtk2KWWkBmVHMwCBXGC1k_bjG0wEbkjOt9+DDniTVYw@mail.gmail.com>
	<20120404174449.GB25288@iskra.aviel.ru>
	<4F7C8CD6.7090308@stoneleaf.us>
Message-ID: <20120404192436.GB27384@iskra.aviel.ru>

On Wed, Apr 04, 2012 at 11:03:02AM -0700, Ethan Furman wrote:
> Oleg Broytman wrote:
> >   . Pythonic equivalent of "get_clock(THIS) or get_clock(THAT)" is
> >
> >for flag in (THIS, THAT):
> >    try:
> >        clock = get_clock(flag)
> >    except:
> >        pass
> >    else:
> >        break
> >else:
> >    raise ValueError('Cannot get clock, tried THIS and THAT')
> 
> 
> Wow -- you'd rather write nine lines of code instead of three?
> 
> clock = get_clock(THIS) or get_clock(THAT)
> if clock is None:
>     raise ValueError('Cannot get clock, tried THIS and THAT')

   Yes - to force people to write the last two lines. Without forcing
most programmers will skip them.

Oleg.
-- 
     Oleg Broytman            http://phdru.name/            phd at phdru.name
           Programmers don't die, they just GOSUB without RETURN.

From ethan at stoneleaf.us  Wed Apr  4 21:52:00 2012
From: ethan at stoneleaf.us (Ethan Furman)
Date: Wed, 04 Apr 2012 12:52:00 -0700
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
 (was: PEP 418: Add monotonic clock)
In-Reply-To: <20120404192436.GB27384@iskra.aviel.ru>
References: <CAL0kPAV0gOkVCE9-7ux7ZEWC3bPxB3+AYMgu4BxzR6jXeyf97A@mail.gmail.com>	<20120403060317.GA31001@cskk.homeip.net>	<CAL0kPAXZ1AxqeVYZen9vzUYxKDG-hr1YFZ+vRRMLTN=hJDfKPQ@mail.gmail.com>	<4F7B2029.8010707@stoneleaf.us>	<CAL0kPAWWtk2KWWkBmVHMwCBXGC1k_bjG0wEbkjOt9+DDniTVYw@mail.gmail.com>	<20120404174449.GB25288@iskra.aviel.ru>	<4F7C8CD6.7090308@stoneleaf.us>
	<20120404192436.GB27384@iskra.aviel.ru>
Message-ID: <4F7CA660.60205@stoneleaf.us>

Oleg Broytman wrote:
> On Wed, Apr 04, 2012 at 11:03:02AM -0700, Ethan Furman wrote:
>> Oleg Broytman wrote:
>>>   . Pythonic equivalent of "get_clock(THIS) or get_clock(THAT)" is
>>>
>>> for flag in (THIS, THAT):
>>>    try:
>>>        clock = get_clock(flag)
>>>    except:
>>>        pass
>>>    else:
>>>        break
>>> else:
>>>    raise ValueError('Cannot get clock, tried THIS and THAT')
>>
>> Wow -- you'd rather write nine lines of code instead of three?
>>
>> clock = get_clock(THIS) or get_clock(THAT)
>> if clock is None:
>>     raise ValueError('Cannot get clock, tried THIS and THAT')
> 
>    Yes - to force people to write the last two lines. Without forcing
> most programmers will skip them.

Forced?  I do not use Python to be forced to use one style of 
programming over another.

And it's not like returning None will allow some clock calls to work but 
not others -- as soon as they try to use it, it will raise an exception.

~Ethan~

From cs at zip.com.au  Thu Apr  5 00:06:45 2012
From: cs at zip.com.au (Cameron Simpson)
Date: Thu, 5 Apr 2012 08:06:45 +1000
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
 (was: PEP 418: Add monotonic clock)
In-Reply-To: <jli1ej$3v4$1@dough.gmane.org>
References: <jli1ej$3v4$1@dough.gmane.org>
Message-ID: <20120404220645.GA14094@cskk.homeip.net>

On 04Apr2012 19:47, Georg Brandl <g.brandl at gmx.net> wrote:
| Am 04.04.2012 18:18, schrieb Ethan Furman:
| > Lennart Regebro wrote:
| >> On Tue, Apr 3, 2012 at 18:07, Ethan Furman <ethan at stoneleaf.us> wrote:
| >>> What's unclear about returning None if no clocks match?
| >> 
| >> Nothing, but having to check error values on return functions are not
| >> what you typically do in Python. Usually, Python functions that fail
| >> raise an error. Please don't force Python users to write pseudo-C code
| >> in Python.
| > 
| > You mean like the dict.get() function?
| > 
| > --> repr({}.get('missing'))
| > 'None'
| 
| Strawman: this is not a failure.

And neither is get_clock() returning None. get_clock() is an inquiry
function, and None is a legitimate response when no clock is
satisfactory, just as it is for dict.get() when the key is absent.

Conversely, monotonic() ("gimme the time!") and indeed time() should
raise an exception if there is no clock. They're, for want of a word,
"live" functions you would routinely embed in a calculation.

So not so much a straw man as a relevant illuminating example.
-- 
Cameron Simpson <cs at zip.com.au> DoD#743
http://www.cskk.ezoshosting.com/cs/

A crash reduces
your expensive computer
to a simple stone.
- Haiku Error Messages http://www.salonmagazine.com/21st/chal/1998/02/10chal2.html

From steve at pearwood.info  Thu Apr  5 00:50:54 2012
From: steve at pearwood.info (Steven D'Aprano)
Date: Thu, 05 Apr 2012 08:50:54 +1000
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
In-Reply-To: <20120404192436.GB27384@iskra.aviel.ru>
References: <CAL0kPAV0gOkVCE9-7ux7ZEWC3bPxB3+AYMgu4BxzR6jXeyf97A@mail.gmail.com>	<20120403060317.GA31001@cskk.homeip.net>	<CAL0kPAXZ1AxqeVYZen9vzUYxKDG-hr1YFZ+vRRMLTN=hJDfKPQ@mail.gmail.com>	<4F7B2029.8010707@stoneleaf.us>	<CAL0kPAWWtk2KWWkBmVHMwCBXGC1k_bjG0wEbkjOt9+DDniTVYw@mail.gmail.com>	<20120404174449.GB25288@iskra.aviel.ru>	<4F7C8CD6.7090308@stoneleaf.us>
	<20120404192436.GB27384@iskra.aviel.ru>
Message-ID: <4F7CD04E.7030303@pearwood.info>

Oleg Broytman wrote:
> On Wed, Apr 04, 2012 at 11:03:02AM -0700, Ethan Furman wrote:
>> Oleg Broytman wrote:
>>>   . Pythonic equivalent of "get_clock(THIS) or get_clock(THAT)" is
>>>
>>> for flag in (THIS, THAT):
>>>    try:
>>>        clock = get_clock(flag)
>>>    except:
>>>        pass
>>>    else:
>>>        break
>>> else:
>>>    raise ValueError('Cannot get clock, tried THIS and THAT')
>>
>> Wow -- you'd rather write nine lines of code instead of three?
>>
>> clock = get_clock(THIS) or get_clock(THAT)
>> if clock is None:
>>     raise ValueError('Cannot get clock, tried THIS and THAT')
> 
>    Yes - to force people to write the last two lines. Without forcing
> most programmers will skip them.

You're not my real Dad! You can't tell me what to do!

*wink*

This level of paternalism is unnecessary. It's not your job to "force" 
programmers to do anything. If people skip the test for None, they will get an 
exception as soon as they try to use None as a clock, and then they will 
fix their broken code.

Although I don't like the get_clock() API, I don't think this argument against 
it is a good one. Exceptions are the *usual* error-handling mechanism in 
Python, but they are not the *only* mechanism; there are others, and it is 
perfectly okay to use non-exception based failures when appropriate. This is 
one such example.

"Return None on failure" is how re.match() and re.search() work, and it is a 
good design for when you have multiple fallbacks on failure.

result = re.match(spam, s) or re.match(ham, s) or re.match(eggs, s)
if result is None:
     raise ValueError('could not find spam, ham or eggs')


This is a *much* better design than nested tries:

try:
     result = re.match(spam, s)
except ValueError:
     try:
         result = re.match(ham, s)
     except ValueError:
         try:
             result = re.match(eggs, s)
         except ValueError:
             raise ValueError('could not find spam, ham or eggs')


Wow. Now *that* is ugly code. There's nothing elegant or Pythonic about being 
forced to write that out of a misplaced sense of purity.


-- 
Steven


From phd at phdru.name  Thu Apr  5 01:05:03 2012
From: phd at phdru.name (Oleg Broytman)
Date: Thu, 5 Apr 2012 03:05:03 +0400
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
 (was: PEP 418: Add monotonic clock)
In-Reply-To: <4F7CA660.60205@stoneleaf.us>
References: <CAL0kPAV0gOkVCE9-7ux7ZEWC3bPxB3+AYMgu4BxzR6jXeyf97A@mail.gmail.com>
	<20120403060317.GA31001@cskk.homeip.net>
	<CAL0kPAXZ1AxqeVYZen9vzUYxKDG-hr1YFZ+vRRMLTN=hJDfKPQ@mail.gmail.com>
	<4F7B2029.8010707@stoneleaf.us>
	<CAL0kPAWWtk2KWWkBmVHMwCBXGC1k_bjG0wEbkjOt9+DDniTVYw@mail.gmail.com>
	<20120404174449.GB25288@iskra.aviel.ru>
	<4F7C8CD6.7090308@stoneleaf.us>
	<20120404192436.GB27384@iskra.aviel.ru>
	<4F7CA660.60205@stoneleaf.us>
Message-ID: <20120404230503.GB314@iskra.aviel.ru>

On Wed, Apr 04, 2012 at 12:52:00PM -0700, Ethan Furman wrote:
> Oleg Broytman wrote:
> >On Wed, Apr 04, 2012 at 11:03:02AM -0700, Ethan Furman wrote:
> >>Oleg Broytman wrote:
> >>>  . Pythonic equivalent of "get_clock(THIS) or get_clock(THAT)" is
> >>>
> >>>for flag in (THIS, THAT):
> >>>   try:
> >>>       clock = get_clock(flag)
> >>>   except:
> >>>       pass
> >>>   else:
> >>>       break
> >>>else:
> >>>   raise ValueError('Cannot get clock, tried THIS and THAT')
> >>
> >>Wow -- you'd rather write nine lines of code instead of three?
> >>
> >>clock = get_clock(THIS) or get_clock(THAT)
> >>if clock is None:
> >>    raise ValueError('Cannot get clock, tried THIS and THAT')
> >
> >   Yes - to force people to write the last two lines. Without forcing
> >most programmers will skip them.
> 
> Forced?  I do not use Python to be forced to use one style of
> programming over another.

   Then it's strange that you are using Python, with its strict syntax
(case-sensitivity, forced indentation), ubiquitous exceptions, limited
lambda syntax and absence of code blocks (read: forced functions),
etc.

> And it's not like returning None will allow some clock calls to work
> but not others -- as soon as they try to use it, it will raise an
> exception.

   There is a philosophical distinction between EAFP and LBYL. I am
mostly a proponent of LBYL.
   Well, I partially retreat. "Errors should never pass silently.
Unless explicitly silenced." get_clock(FLAG, on_error=None) could return
None.
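   A minimal sketch of what such a signature could look like. The string
flags and the tiny clock registry below are invented purely for
illustration; they stand in for whatever lookup a real get_clock() would do:

import time

_SENTINEL = object()
_CLOCKS = {'system': time.time}   # toy registry standing in for the real lookup

def get_clock(flags, on_error=_SENTINEL):
    clock = _CLOCKS.get(flags)
    if clock is None:
        if on_error is _SENTINEL:
            raise ValueError('no clock matches %r' % (flags,))
        return on_error           # the caller explicitly silenced the error
    return clock

clock = get_clock('monotonic', on_error=None) or get_clock('system')

   Callers who do not pass on_error still get an exception, so by default
the error does not pass silently.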

Oleg.
-- 
     Oleg Broytman            http://phdru.name/            phd at phdru.name
           Programmers don't die, they just GOSUB without RETURN.

From victor.stinner at gmail.com  Thu Apr  5 01:10:43 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Thu, 5 Apr 2012 01:10:43 +0200
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
In-Reply-To: <CAL0kPAU7CTFf_r4vfBtyO5At91OeBm7bom3XvqJt4grRH4tstg@mail.gmail.com>
References: <CAL0kPAV0gOkVCE9-7ux7ZEWC3bPxB3+AYMgu4BxzR6jXeyf97A@mail.gmail.com>
	<20120403060317.GA31001@cskk.homeip.net>
	<CAL0kPAXZ1AxqeVYZen9vzUYxKDG-hr1YFZ+vRRMLTN=hJDfKPQ@mail.gmail.com>
	<4F7B8D7B.6070806@pearwood.info>
	<CAMpsgwatK3C69eXVsNh99AKjxaxWXHTZNfwYd=wCfi5jjqeQxg@mail.gmail.com>
	<20120404114631.69e8a8ff@pitrou.net>
	<CAMpsgwa7evfoGbNKi9UzpFzEs=+nEw+zKEMY5y9aJG4ubwUb4g@mail.gmail.com>
	<CAL0kPAU7CTFf_r4vfBtyO5At91OeBm7bom3XvqJt4grRH4tstg@mail.gmail.com>
Message-ID: <CAMpsgwbjvWi6uSJ22F-xrqbZyzZ-ynU6gyUJfjrQLmHm9mW4vA@mail.gmail.com>

2012/4/4 Lennart Regebro <regebro at gmail.com>:
> On Wed, Apr 4, 2012 at 13:04, Victor Stinner <victor.stinner at gmail.com> wrote:
>> It depends if the option supports other values. But as I understood,
>> the keyword value must always be True.
>
> Or False, obviously. Which would also be default.

Ok for the default, but what happens if the caller sets an option to
False? Does get_clock(monotonic=False) return a non-monotonic clock?
(I guess no, but it may be confusing.)

Victor

From cs at zip.com.au  Thu Apr  5 01:14:55 2012
From: cs at zip.com.au (Cameron Simpson)
Date: Thu, 5 Apr 2012 09:14:55 +1000
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
In-Reply-To: <4F7CD04E.7030303@pearwood.info>
References: <4F7CD04E.7030303@pearwood.info>
Message-ID: <20120404231455.GA23478@cskk.homeip.net>

On 05Apr2012 08:50, Steven D'Aprano <steve at pearwood.info> wrote:
| Although I don't like the get_clock() API, I don't think this argument against 
| it is a good one.

Just to divert briefly; you said in another post you didn't like the API
and (also/because?) it didn't help discoverability.

My core objective was to allow users to query for clocks, and ideally
enumerate and inspect all clocks. Without the caller having platform
specific knowledge.

Allowing for the sake of discussion that this is desirable, what would
you propose as an API instead of get_clock() (and its friend, get_clocks()
for enumeration, that I should stuff into the code)?

Cheers,
-- 
Cameron Simpson <cs at zip.com.au> DoD#743
http://www.cskk.ezoshosting.com/cs/

Q: How many user support people does it take to change a light bulb?
A: We have an exact copy of the light bulb here and it seems to be
   working fine.  Can you tell me what kind of system you have?

From victor.stinner at gmail.com  Thu Apr  5 01:28:18 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Thu, 5 Apr 2012 01:28:18 +0200
Subject: [Python-Dev] PEP 418 is too divisive and confusing and should
	be postponed
In-Reply-To: <4F7BA3C2.4050705@gmail.com>
References: <4F7B96F1.6020906@pearwood.info>
	<4F7BA3C2.4050705@gmail.com>
Message-ID: <CAMpsgwbC7HgWZ+4-PDJfXD+=iD+h7NcDMo-pPVW_8fKwziM2ww@mail.gmail.com>

> I failed to propose a consistent and clear API because I (and Guido!) wanted
> to fallback to the system clock. Falling back to the system clock is a
> problem when you have to define the function in the documentation or if you
> don't want to use the system clock (but do something else) if no monotonic
> clock is available.

More details on why it's hard to define such a function and why I dropped
it from the PEP.

If someone wants to propose such a function ("monotonic or
fallback to system" clock) again, two issues should be solved:

 - name of the function
 - description of the function

At least, "monotonic" and "steady" are not acceptable names for
such a function, even if the function has an optional "strict=False" or
"fallback=True" parameter. By the way, someone complained that having
a boolean parameter requires creating a new function if you want to
call it without an argument (using a lambda function, functools.partial,
or anything else). Example:

get_time = lambda: try_monotonic(fallback=True)
t = get_time()

The description would give only the weakest guarantees.

If the function doesn't promise anything (or only a few weak
properties), it is harder to decide which clock you need for your
use case: time.clock(), time.time(), time.monotonic(), time.<name of
the monotonic-or-fallback function>, ...
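
For illustration only, here is a rough sketch of the kind of function
being discussed. try_monotonic() is just a name used in this thread, not
something in the PEP, and the sketch assumes time.monotonic() may be
missing or may fail on some platforms:

import time

def try_monotonic(fallback=True):
    # Use the monotonic clock when we can; otherwise, if the caller
    # allows it, fall back to the (possibly adjusted) system clock.
    try:
        return time.monotonic()
    except (AttributeError, OSError):
        if not fallback:
            raise
        return time.time()

The hard part is not writing such a function, but naming it and
documenting what it actually guarantees.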

Victor

From roundup-admin at psf.upfronthosting.co.za  Thu Apr  5 03:29:47 2012
From: roundup-admin at psf.upfronthosting.co.za (Python tracker)
Date: Thu, 05 Apr 2012 01:29:47 +0000
Subject: [Python-Dev] Failed issue tracker submission
Message-ID: <20120405012947.871341CA97@psf.upfronthosting.co.za>


An unexpected error occurred during the processing
of your message. The tracker administrator is being
notified.
-------------- next part --------------
Return-Path: <python-dev at python.org>
X-Original-To: report at bugs.python.org
Delivered-To: roundup+tracker at psf.upfronthosting.co.za
Received: from mail.python.org (mail.python.org [82.94.164.166])
	by psf.upfronthosting.co.za (Postfix) with ESMTPS id 18F1D1CA8A
	for <report at bugs.python.org>; Thu,  5 Apr 2012 03:29:47 +0200 (CEST)
Received: from albatross.python.org (localhost [127.0.0.1])
	by mail.python.org (Postfix) with ESMTP id 3VNRKk5ffZzMDH
	for <report at bugs.python.org>; Thu,  5 Apr 2012 03:29:46 +0200 (CEST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=python.org; s=200901;
	t=1333589386; bh=mxZYtp1PNBk7+3KkQ2c0Ir4UpBYOh1g7JxosMonW0bg=;
	h=Date:Message-Id:Content-Type:MIME-Version:
	 Content-Transfer-Encoding:From:To:Subject;
	b=dkAMMMzcIiS6RnMbU2X82n0soHqci8LQJeJZtQefi9I0bhSh9IbG/qrBrdjTdG3sE
	 QOAeN4ttDr5vy83SY7pcXGH4sXVAlMGHAwKPUcOYRFIHKzKoy/gNwOfPRRIdxDg3C0
	 Go6dQtIJi0j/uS4EI9o7oEJHVczuzJLkdGRcA6ik=
Received: from localhost (HELO mail.python.org) (127.0.0.1)
  by albatross.python.org with SMTP; 05 Apr 2012 03:29:46 +0200
Received: from dinsdale.python.org (svn.python.org [IPv6:2001:888:2000:d::a4])
	(using TLSv1 with cipher AES256-SHA (256/256 bits))
	(No client certificate requested)
	by mail.python.org (Postfix) with ESMTPS
	for <report at bugs.python.org>; Thu,  5 Apr 2012 03:29:46 +0200 (CEST)
Received: from localhost
	([127.0.0.1] helo=dinsdale.python.org ident=hg)
	by dinsdale.python.org with esmtp (Exim 4.72)
	(envelope-from <python-dev at python.org>)
	id 1SFbWE-0008LO-KB
	for report at bugs.python.org; Thu, 05 Apr 2012 03:29:46 +0200
Date: Thu, 05 Apr 2012 03:29:46 +0200
Message-Id: <E1SFbWE-0008LO-KB at dinsdale.python.org>
Content-Type: text/plain; charset="utf8"
MIME-Version: 1.0
Content-Transfer-Encoding: base64
From: python-dev at python.org
To: report at bugs.python.org
Subject: [issue14490]

TmV3IGNoYW5nZXNldCA2MmRkZTVkZDQ3NWUgYnkgUiBEYXZpZCBNdXJyYXkgaW4gYnJhbmNoICcz
LjInOgojMTQ0OTAsICMxNDQ5MTogYWRkICdzdW5kcnknLXN0eWxlIGltcG9ydCB0ZXN0cyBmb3Ig
VG9vbHMvc2NyaXB0cy4KaHR0cDovL2hnLnB5dGhvbi5vcmcvY3B5dGhvbi9yZXYvNjJkZGU1ZGQ0
NzVlCgoKTmV3IGNoYW5nZXNldCA2OTZjYjUyNDMyMmEgYnkgUiBEYXZpZCBNdXJyYXkgaW4gYnJh
bmNoICdkZWZhdWx0JzoKTWVyZ2UgIzE0NDkwLCAjMTQ0OTE6IGFkZCAnc3VuZHJ5Jy1zdHlsZSBp
bXBvcnQgdGVzdHMgZm9yIFRvb2xzL3NjcmlwdHMuCmh0dHA6Ly9oZy5weXRob24ub3JnL2NweXRo
b24vcmV2LzY5NmNiNTI0MzIyYQo=

From greg.ewing at canterbury.ac.nz  Thu Apr  5 00:45:49 2012
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Thu, 05 Apr 2012 10:45:49 +1200
Subject: [Python-Dev] PEP 418 is too divisive and confusing and should
 be postponed
In-Reply-To: <CAL0kPAVuPoYEo=hqTaryi1YNqhpF-8ngJAGrBwaQ4N3Y-M34dA@mail.gmail.com>
References: <4F7B96F1.6020906@pearwood.info> <4F7BA3C2.4050705@gmail.com>
	<CAMpsgwbZKgEzbZmsVBeM1vNfjRUxEdGH=FAybvf3HgNOsRsnRA@mail.gmail.com>
	<CAL0kPAVuPoYEo=hqTaryi1YNqhpF-8ngJAGrBwaQ4N3Y-M34dA@mail.gmail.com>
Message-ID: <4F7CCF1D.2010600@canterbury.ac.nz>

Lennart Regebro wrote:
> Since the only monotonic clock that can be adjusted by NTP is Linux'
> CLOCK_MONOTONIC, if we avoid it, then time.monotonic() would always
> give a clock that isn't adjusted by NTP.

I thought we decided that NTP adjustment isn't an issue, because
it's always gradual.

-- 
Greg


From rdmurray at bitdance.com  Thu Apr  5 04:17:45 2012
From: rdmurray at bitdance.com (R. David Murray)
Date: Wed, 04 Apr 2012 22:17:45 -0400
Subject: [Python-Dev] Failed issue tracker submission
In-Reply-To: <20120405012947.871341CA97@psf.upfronthosting.co.za>
References: <20120405012947.871341CA97@psf.upfronthosting.co.za>
Message-ID: <20120405021741.73FA3250603@webabinitio.net>

On Thu, 05 Apr 2012 01:29:47 -0000, Python tracker <roundup-admin at psf.upfronthosting.co.za> wrote:
> 
> An unexpected error occurred during the processing
> of your message. The tracker administrator is being
> notified.

Since the bounce message went here, I'm posting this here for those who
are curious what caused it.

It was triggered by my committing a patch with two issue numbers in the
commit message.  This triggered a DB lock problem on the roundup end,
from the xapian indexer:

Traceback (most recent call last):
  File "/home/roundup/lib/python2.5/site-packages/roundup/mailgw.py", line 1395,
  in handle_Message
    return self.handle_message(message)
  File "/home/roundup/lib/python2.5/site-packages/roundup/mailgw.py", line 1451,
  in handle_message
    return self._handle_message(message)
  File "/home/roundup/lib/python2.5/site-packages/roundup/mailgw.py", line 1529,
  in _handle_message
    parsed_message.create_msg()
  File "/home/roundup/lib/python2.5/site-packages/roundup/mailgw.py", line 1105,
  in create_msg
    messageid=messageid, inreplyto=inreplyto, **self.msg_props)
  File "/home/roundup/lib/python2.5/site-
  packages/roundup/backends/rdbms_common.py", line 2958, in create
    content, mime_type)
  File "/home/roundup/lib/python2.5/site-
  packages/roundup/backends/indexer_xapian.py", line 59, in add_text
    database = self._get_database()
  File "/home/roundup/lib/python2.5/site-
  packages/roundup/backends/indexer_xapian.py", line 21, in _get_database
    return xapian.WritableDatabase(index, xapian.DB_CREATE_OR_OPEN)
  File "/usr/lib/python2.6/dist-packages/xapian/__init__.py", line 4059, in
  __init__
    _xapian.WritableDatabase_swiginit(self,_xapian.new_WritableDatabase(*args))
DatabaseLockError: Unable to get write lock on /home/roundup/trackers/tracker/db
/text-index: already locked

The Xapian index is new since the server upgrade, so it is possible this will
always happen when more than one issue number is mentioned.  Or it could
be a random timing thing.  Presumably it could also occur during normal
web submissions if they happen to arrive at the same time, which is
a little bit worrisome.

If anyone has any Xapian experience and would be willing to help out with
debugging this and/or some indexing issues, please let me know :)

--David

From pje at telecommunity.com  Thu Apr  5 04:23:51 2012
From: pje at telecommunity.com (PJ Eby)
Date: Wed, 4 Apr 2012 22:23:51 -0400
Subject: [Python-Dev] PEP 418 is too divisive and confusing and should
	be postponed
In-Reply-To: <CAMpsgwbC7HgWZ+4-PDJfXD+=iD+h7NcDMo-pPVW_8fKwziM2ww@mail.gmail.com>
References: <4F7B96F1.6020906@pearwood.info> <4F7BA3C2.4050705@gmail.com>
	<CAMpsgwbC7HgWZ+4-PDJfXD+=iD+h7NcDMo-pPVW_8fKwziM2ww@mail.gmail.com>
Message-ID: <CALeMXf6vzd56QAmssvr5c7xQ+trP0-Cu4E5L793UNyH3Gu+hSw@mail.gmail.com>

On Apr 4, 2012 7:28 PM, "Victor Stinner" <victor.stinner at gmail.com> wrote:
>
> More details why it's hard to define such function and why I dropped
> it from the PEP.
>
> If someone wants to propose again such function ("monotonic or
> fallback to system" clock), two issues should be solved:
>
>  - name of the function
>  - description of the function

Maybe I missed it, but did anyone ever give a reason why the fallback
couldn't be to Steven D'Aprano's monotonic wrapper algorithm over the
system clock?  (Given a suitable minimum delta.)  That function appeared to
me to provide a sufficiently monotonic clock for timeout purposes, if
nothing else.
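
(For readers who missed that earlier post, the wrapper in question is
roughly of the following shape. This is a sketch from memory, not Steven's
exact code, and the minimum delta value is an arbitrary placeholder:)

import time

class ForcedMonotonic:
    # Wrap a clock that may step backwards so that reads never decrease.
    def __init__(self, clock=time.time, min_delta=1e-6):
        self.clock = clock
        self.min_delta = min_delta
        self.last = clock()

    def __call__(self):
        now = self.clock()
        if now <= self.last:
            # The underlying clock stood still or stepped backwards:
            # advance by the minimum delta instead of following it.
            now = self.last + self.min_delta
        self.last = now
        return now

monotonicish = ForcedMonotonic()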
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20120404/c6f32d24/attachment.html>

From cs at zip.com.au  Thu Apr  5 05:41:02 2012
From: cs at zip.com.au (Cameron Simpson)
Date: Thu, 5 Apr 2012 13:41:02 +1000
Subject: [Python-Dev] PEP 418 is too divisive and confusing and should
	be postponed
In-Reply-To: <CALeMXf6vzd56QAmssvr5c7xQ+trP0-Cu4E5L793UNyH3Gu+hSw@mail.gmail.com>
References: <CALeMXf6vzd56QAmssvr5c7xQ+trP0-Cu4E5L793UNyH3Gu+hSw@mail.gmail.com>
Message-ID: <20120405034102.GA28103@cskk.homeip.net>

On 04Apr2012 22:23, PJ Eby <pje at telecommunity.com> wrote:
| On Apr 4, 2012 7:28 PM, "Victor Stinner" <victor.stinner at gmail.com> wrote:
| > More details why it's hard to define such function and why I dropped
| > it from the PEP.
| >
| > If someone wants to propose again such function ("monotonic or
| > fallback to system" clock), two issues should be solved:
| >
| >  - name of the function
| >  - description of the function
| 
| Maybe I missed it, but did anyone ever give a reason why the fallback
| couldn't be to Steven D'Aprano's monotonic wrapper algorithm over the
| system clock?  (Given a suitable minimum delta.)  That function appeared to
| me to provide a sufficiently monotonic clock for timeout purposes, if
| nothing else.

It was pointed out (by Nick Coghlan I think?) that if the system clock
stepped backwards then a timeout would be extended by at least that
long. For example, code that waited (by polling the synthetic clock)
for 1s could easily wait an hour if the system clock stepped back that
far. Probably undesirable.

I think synthetic clocks are an extra task; they will all have side
effects of one kind or another.

A system monotonic clock, by contrast, may have access to some clock
hardware that doesn't step when the "main" system clock gets adjusted,
and can stay monotonic. A synthetic clock without such access can't
behave as nicely.

If synthetic clocks get handed out as a fallback there should be some way
for the user to know, or a big glaring negative guarantee in the docs:
on platforms without a system monotonic clock you might get a clock
with weird (but monotonic!) behaviours if you use the fallback.

Cheers,
-- 
Cameron Simpson <cs at zip.com.au> DoD#743
http://www.cskk.ezoshosting.com/cs/

Tachyon: A gluon that's not completely dry.

From ncoghlan at gmail.com  Thu Apr  5 07:14:42 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 5 Apr 2012 15:14:42 +1000
Subject: [Python-Dev] PEP 418 is too divisive and confusing and should
	be postponed
In-Reply-To: <20120405034102.GA28103@cskk.homeip.net>
References: <CALeMXf6vzd56QAmssvr5c7xQ+trP0-Cu4E5L793UNyH3Gu+hSw@mail.gmail.com>
	<20120405034102.GA28103@cskk.homeip.net>
Message-ID: <CADiSq7cUw1hypB-6Cg97d+-CEpqUYuQHZ56w9YFOjuL_Z_SHLQ@mail.gmail.com>

On Thu, Apr 5, 2012 at 1:41 PM, Cameron Simpson <cs at zip.com.au> wrote:
> It was pointed out (by Nick Coghlan I think?) that if the system clock
> stepped backwards then a timeout would be extended by at least that
> long.

Guido pointed it out (it was in a reply to me, though).

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From regebro at gmail.com  Thu Apr  5 10:07:12 2012
From: regebro at gmail.com (Lennart Regebro)
Date: Thu, 5 Apr 2012 10:07:12 +0200
Subject: [Python-Dev] PEP 418 is too divisive and confusing and should
	be postponed
In-Reply-To: <4F7CCF1D.2010600@canterbury.ac.nz>
References: <4F7B96F1.6020906@pearwood.info> <4F7BA3C2.4050705@gmail.com>
	<CAMpsgwbZKgEzbZmsVBeM1vNfjRUxEdGH=FAybvf3HgNOsRsnRA@mail.gmail.com>
	<CAL0kPAVuPoYEo=hqTaryi1YNqhpF-8ngJAGrBwaQ4N3Y-M34dA@mail.gmail.com>
	<4F7CCF1D.2010600@canterbury.ac.nz>
Message-ID: <CAL0kPAU0Zp5YwH3J+9KKqQ2r7QZo15o=VrqvViaVsNx7j4kQDw@mail.gmail.com>

On Thu, Apr 5, 2012 at 00:45, Greg Ewing <greg.ewing at canterbury.ac.nz> wrote:
> Lennart Regebro wrote:
>>
>> Since the only monotonic clock that can be adjusted by NTP is Linux'
>> CLOCK_MONOTONIC, if we avoid it, then time.monotonic() would always
>> give a clock that isn't adjusted by NTP.
>
> I thought we decided that NTP adjustment isn't an issue, because
> it's always gradual.

Well, in timings it is an issue, but perhaps not a big one. :-)
In any case, which one we use will not change the API, so if it is
decided it is an issue, we can always move to CLOCK_MONOTONIC_RAW in
the future, once Linux < 2.6.26 (or whatever it was) is deemed
unsupported.

//Lennart

From regebro at gmail.com  Thu Apr  5 10:21:09 2012
From: regebro at gmail.com (Lennart Regebro)
Date: Thu, 5 Apr 2012 10:21:09 +0200
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
In-Reply-To: <CAMpsgwbjvWi6uSJ22F-xrqbZyzZ-ynU6gyUJfjrQLmHm9mW4vA@mail.gmail.com>
References: <CAL0kPAV0gOkVCE9-7ux7ZEWC3bPxB3+AYMgu4BxzR6jXeyf97A@mail.gmail.com>
	<20120403060317.GA31001@cskk.homeip.net>
	<CAL0kPAXZ1AxqeVYZen9vzUYxKDG-hr1YFZ+vRRMLTN=hJDfKPQ@mail.gmail.com>
	<4F7B8D7B.6070806@pearwood.info>
	<CAMpsgwatK3C69eXVsNh99AKjxaxWXHTZNfwYd=wCfi5jjqeQxg@mail.gmail.com>
	<20120404114631.69e8a8ff@pitrou.net>
	<CAMpsgwa7evfoGbNKi9UzpFzEs=+nEw+zKEMY5y9aJG4ubwUb4g@mail.gmail.com>
	<CAL0kPAU7CTFf_r4vfBtyO5At91OeBm7bom3XvqJt4grRH4tstg@mail.gmail.com>
	<CAMpsgwbjvWi6uSJ22F-xrqbZyzZ-ynU6gyUJfjrQLmHm9mW4vA@mail.gmail.com>
Message-ID: <CAL0kPAUCaAYa-RsaN5Q2H_j+NT+9q4fFwDXLimg6wxuapYpnSg@mail.gmail.com>

On Thu, Apr 5, 2012 at 01:10, Victor Stinner <victor.stinner at gmail.com> wrote:
> 2012/4/4 Lennart Regebro <regebro at gmail.com>:
>> On Wed, Apr 4, 2012 at 13:04, Victor Stinner <victor.stinner at gmail.com> wrote:
>>> It depends if the option supports other values. But as I understood,
>>> the keyword value must always be True.
>>
>> Or False, obviously. Which would also be default.
>
> Ok for the default, but what happens if the caller sets an option to
> False? Does get_clock(monotonic=False) return a non-monotonic clock?
> (I guess no, but it may be confusing.)

Good point, but the same goes for using flags. If you don't pass in
the MONOTONIC flag, what happens? Only reading the documentation will
tell you. If anything, this is an indication that the
get_clock() API isn't ideal in any incarnation.

//Lennart

From solipsis at pitrou.net  Thu Apr  5 12:21:02 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Thu, 5 Apr 2012 12:21:02 +0200
Subject: [Python-Dev] cpython: Issue #3033: Add displayof parameter to
	tkinter font.
References: <E1SFjCO-0002h7-St@dinsdale.python.org>
Message-ID: <20120405122102.7dd6ef8f@pitrou.net>

On Thu, 05 Apr 2012 11:41:48 +0200
andrew.svetlov <python-checkins at python.org> wrote:
> http://hg.python.org/cpython/rev/774c2afa6665
> changeset:   76115:774c2afa6665
> user:        Andrew Svetlov <andrew.svetlov at gmail.com>
> date:        Thu Apr 05 12:41:20 2012 +0300
> summary:
>   Issue #3033: Add displayof parameter to tkinter font.
> Patch by Guilherme Polo.

Aren't there any docs?

Regards

Antoine.



From victor.stinner at gmail.com  Thu Apr  5 12:32:45 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Thu, 5 Apr 2012 12:32:45 +0200
Subject: [Python-Dev] PEP 418 is too divisive and confusing and should
	be postponed
In-Reply-To: <CAL0kPAU0Zp5YwH3J+9KKqQ2r7QZo15o=VrqvViaVsNx7j4kQDw@mail.gmail.com>
References: <4F7B96F1.6020906@pearwood.info> <4F7BA3C2.4050705@gmail.com>
	<CAMpsgwbZKgEzbZmsVBeM1vNfjRUxEdGH=FAybvf3HgNOsRsnRA@mail.gmail.com>
	<CAL0kPAVuPoYEo=hqTaryi1YNqhpF-8ngJAGrBwaQ4N3Y-M34dA@mail.gmail.com>
	<4F7CCF1D.2010600@canterbury.ac.nz>
	<CAL0kPAU0Zp5YwH3J+9KKqQ2r7QZo15o=VrqvViaVsNx7j4kQDw@mail.gmail.com>
Message-ID: <CAMpsgwaPtm_M3wVmpqGUwmBEmbT8V=qFs1GU0-Fak=+Ws6JjYQ@mail.gmail.com>

>>> Since the only monotonic clock that can be adjusted by NTP is Linux'
>>> CLOCK_MONOTONIC, if we avoid it, then time.monotonic() would always
>>> give a clock that isn't adjusted by NTP.
>>
>> I thought we decided that NTP adjustment isn't an issue, because
>> it's always gradual.
>
> Well, in timings it is an issue, but perhaps not a big one. :-)
> In any case, which one we use will not change the API, so if it is
> decided it is an issue, we can always move to CLOCK_MONOTONIC_RAW in
> the future, once Linux < 2.6.26 (or whatever it was) is deemed
> unsupported.

I prefer to use CLOCK_MONOTONIC, not because it is also available for
older Linux kernels, but because it is more reliable. Even if the
underlying clock source is unstable (unstable frequency), a delta of
two reads of the CLOCK_MONOTONIC clock is a result in *seconds*,
whereas CLOCK_MONOTONIC_RAW may use a unit a little bit bigger or
smaller than a second. The unit of time.monotonic() is the second, as
written in its documentation.

Linux is the OS providing the most reliable monotonic clock, so why
would you use a less reliable monotonic clock instead?

NTP doesn't step CLOCK_MONOTONIC, it only slews it.
http://www.python.org/dev/peps/pep-0418/#ntp-adjustment
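
(For anyone who wants to experiment before time.monotonic() lands, reading
CLOCK_MONOTONIC directly looks roughly like the ctypes sketch below. It is
Linux-only; the constants are the values from <linux/time.h>, and
clock_gettime() is assumed to live in librt:)

import ctypes, ctypes.util

CLOCK_MONOTONIC = 1
CLOCK_MONOTONIC_RAW = 4   # newer kernels only

class timespec(ctypes.Structure):
    _fields_ = [('tv_sec', ctypes.c_long), ('tv_nsec', ctypes.c_long)]

librt = ctypes.CDLL(ctypes.util.find_library('rt'), use_errno=True)

def monotonic(clock_id=CLOCK_MONOTONIC):
    ts = timespec()
    if librt.clock_gettime(clock_id, ctypes.byref(ts)) != 0:
        raise OSError(ctypes.get_errno(), 'clock_gettime() failed')
    return ts.tv_sec + ts.tv_nsec * 1e-9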

Victor

From victor.stinner at gmail.com  Thu Apr  5 12:34:27 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Thu, 5 Apr 2012 12:34:27 +0200
Subject: [Python-Dev] PEP 418 is too divisive and confusing and should
	be postponed
In-Reply-To: <CALeMXf6vzd56QAmssvr5c7xQ+trP0-Cu4E5L793UNyH3Gu+hSw@mail.gmail.com>
References: <4F7B96F1.6020906@pearwood.info> <4F7BA3C2.4050705@gmail.com>
	<CAMpsgwbC7HgWZ+4-PDJfXD+=iD+h7NcDMo-pPVW_8fKwziM2ww@mail.gmail.com>
	<CALeMXf6vzd56QAmssvr5c7xQ+trP0-Cu4E5L793UNyH3Gu+hSw@mail.gmail.com>
Message-ID: <CAMpsgwbwW81epcdv66GLv4dXLmYLfT6VU8VE85hv_tUKcuPv5Q@mail.gmail.com>

2012/4/5 PJ Eby <pje at telecommunity.com>:
>> More details why it's hard to define such function and why I dropped
>> it from the PEP.
>>
>> If someone wants to propose again such function ("monotonic or
>> fallback to system" clock), two issues should be solved:
>>
>>  - name of the function
>>  - description of the function
>
> Maybe I missed it, but did anyone ever give a reason why the fallback
> couldn't be to Steven D'Aprano's monotonic wrapper algorithm over the system
> clock? (Given a suitable minimum delta.) That function appeared to me to
> provide a sufficiently monotonic clock for timeout purposes, if nothing
> else.


Did you read the following section of the PEP?
http://www.python.org/dev/peps/pep-0418/#working-around-operating-system-bugs

Did I miss something? If yes, could you write a patch for the PEP please?

Victor

From andrew.svetlov at gmail.com  Thu Apr  5 13:52:56 2012
From: andrew.svetlov at gmail.com (Andrew Svetlov)
Date: Thu, 5 Apr 2012 14:52:56 +0300
Subject: [Python-Dev] cpython: Issue #3033: Add displayof parameter to
 tkinter font.
In-Reply-To: <20120405122102.7dd6ef8f@pitrou.net>
References: <E1SFjCO-0002h7-St@dinsdale.python.org>
	<20120405122102.7dd6ef8f@pitrou.net>
Message-ID: <CAL3CFcVYeiSbWFV4EUm3s6MDqZ-A-Vm4E38ZN2rVupRVJ926-g@mail.gmail.com>

Maybe you will be surprised, but tkinter.rst has no comprehensive docs
for any tkinter class.
I would like to get it fixed but definitely cannot do it myself. My very
poor English is the main obstacle to writing narrative
documentation.

On Thu, Apr 5, 2012 at 1:21 PM, Antoine Pitrou <solipsis at pitrou.net> wrote:
> On Thu, 05 Apr 2012 11:41:48 +0200
> andrew.svetlov <python-checkins at python.org> wrote:
>> http://hg.python.org/cpython/rev/774c2afa6665
>> changeset:   76115:774c2afa6665
>> user:        Andrew Svetlov <andrew.svetlov at gmail.com>
>> date:        Thu Apr 05 12:41:20 2012 +0300
>> summary:
>>   Issue #3033: Add displayof parameter to tkinter font.
>> Patch by Guilherme Polo.
>
> Aren't there any docs?
>
> Regards
>
> Antoine.
>
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/andrew.svetlov%40gmail.com



-- 
Thanks,
Andrew Svetlov

From kristjan at ccpgames.com  Thu Apr  5 13:58:38 2012
From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=)
Date: Thu, 5 Apr 2012 11:58:38 +0000
Subject: [Python-Dev] FS: [issue9141] Allow objects to decide if they can be
 collected by GC
In-Reply-To: <1333626928.49.0.909415069769.issue9141@psf.upfronthosting.co.za>
References: <1278070212.7.0.406390843902.issue9141@psf.upfronthosting.co.za>,
	<1333626928.49.0.909415069769.issue9141@psf.upfronthosting.co.za>
Message-ID: <EFE3877620384242A686D52278B7CCD3386DC6@RKV-IT-EXCH104.ccp.ad.local>

Hi there. Antoine Pitrou suggested that I float this on python-dev again.  The new patch should
1) be much simpler and less hacky
2) remove the special case code for PyGenObject from gcmodule.c
K

________________________________________
From: Kristján Valur Jónsson [report at bugs.python.org]
Sent: 5 April 2012 11:55
To: Kristján Valur Jónsson
Subject: [issue9141] Allow objects to decide if they can be collected by GC

Kristján Valur Jónsson <kristjan at ccpgames.com> added the comment:

Here is a completely new patch.  This approach uses the already existing tp_is_gc enquiry slot to signal garbage collection.
The patch modifies the generator object to use this new mechanism.
The patch keeps the old PyGen_NeedsFinalizing() API, but this can now go away, unless people think it might be used in extension modules.

(Why do we always expose all those internal APIs from the DLL? I wonder.)

----------
Added file: http://bugs.python.org/file25131/ob_is_gc.patch

_______________________________________
Python tracker <report at bugs.python.org>
<http://bugs.python.org/issue9141>
_______________________________________

From stephen at xemacs.org  Thu Apr  5 15:06:38 2012
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Thu, 5 Apr 2012 22:06:38 +0900
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
 (was: PEP 418: Add monotonic clock)
In-Reply-To: <20120404230503.GB314@iskra.aviel.ru>
References: <CAL0kPAV0gOkVCE9-7ux7ZEWC3bPxB3+AYMgu4BxzR6jXeyf97A@mail.gmail.com>
	<20120403060317.GA31001@cskk.homeip.net>
	<CAL0kPAXZ1AxqeVYZen9vzUYxKDG-hr1YFZ+vRRMLTN=hJDfKPQ@mail.gmail.com>
	<4F7B2029.8010707@stoneleaf.us>
	<CAL0kPAWWtk2KWWkBmVHMwCBXGC1k_bjG0wEbkjOt9+DDniTVYw@mail.gmail.com>
	<20120404174449.GB25288@iskra.aviel.ru>
	<4F7C8CD6.7090308@stoneleaf.us>
	<20120404192436.GB27384@iskra.aviel.ru>
	<4F7CA660.60205@stoneleaf.us> <20120404230503.GB314@iskra.aviel.ru>
Message-ID: <CAL_0O18KqdYTDhj5xx2FPmNfiFGj7Esie-BVEa_exWKejg7F8Q@mail.gmail.com>

On Thu, Apr 5, 2012 at 8:05 AM, Oleg Broytman <phd at phdru.name> wrote:
>    Well, I am partially retreat. "Errors should never pass silently.
> Unless explicitly silenced." get_clock(FLAG, on_error=None) could return
> None.

I still don't see what's erroneous about returning None when asked for
an object that is documented to possibly not exist, ever, in some
implementations.  Isn't that precisely why None exists?

From phd at phdru.name  Thu Apr  5 15:34:11 2012
From: phd at phdru.name (Oleg Broytman)
Date: Thu, 5 Apr 2012 17:34:11 +0400
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
 (was: PEP 418: Add monotonic clock)
In-Reply-To: <CAL_0O18KqdYTDhj5xx2FPmNfiFGj7Esie-BVEa_exWKejg7F8Q@mail.gmail.com>
References: <20120403060317.GA31001@cskk.homeip.net>
	<CAL0kPAXZ1AxqeVYZen9vzUYxKDG-hr1YFZ+vRRMLTN=hJDfKPQ@mail.gmail.com>
	<4F7B2029.8010707@stoneleaf.us>
	<CAL0kPAWWtk2KWWkBmVHMwCBXGC1k_bjG0wEbkjOt9+DDniTVYw@mail.gmail.com>
	<20120404174449.GB25288@iskra.aviel.ru>
	<4F7C8CD6.7090308@stoneleaf.us>
	<20120404192436.GB27384@iskra.aviel.ru>
	<4F7CA660.60205@stoneleaf.us> <20120404230503.GB314@iskra.aviel.ru>
	<CAL_0O18KqdYTDhj5xx2FPmNfiFGj7Esie-BVEa_exWKejg7F8Q@mail.gmail.com>
Message-ID: <20120405133411.GC17105@iskra.aviel.ru>

On Thu, Apr 05, 2012 at 10:06:38PM +0900, Stephen J. Turnbull wrote:
> On Thu, Apr 5, 2012 at 8:05 AM, Oleg Broytman <phd at phdru.name> wrote:
> >    Well, I am partially retreat. "Errors should never pass silently.
> > Unless explicitly silenced." get_clock(FLAG, on_error=None) could return
> > None.
> 
> I still don't see what's erroneous about returning None when asked for
> an object that is documented to possibly not exist, ever, in some
> implementations.  Isn't that precisely why None exists?

   Why doesn't open() return None for a non-existing file? or
socket.gethostbyname() for a non-existing name?

Oleg.
-- 
     Oleg Broytman            http://phdru.name/            phd at phdru.name
           Programmers don't die, they just GOSUB without RETURN.

From rdmurray at bitdance.com  Thu Apr  5 16:06:58 2012
From: rdmurray at bitdance.com (R. David Murray)
Date: Thu, 05 Apr 2012 10:06:58 -0400
Subject: [Python-Dev] cpython: Issue #3033: Add displayof parameter to
	tkinter font.
In-Reply-To: <CAL3CFcVYeiSbWFV4EUm3s6MDqZ-A-Vm4E38ZN2rVupRVJ926-g@mail.gmail.com>
References: <E1SFjCO-0002h7-St@dinsdale.python.org>
	<20120405122102.7dd6ef8f@pitrou.net>
	<CAL3CFcVYeiSbWFV4EUm3s6MDqZ-A-Vm4E38ZN2rVupRVJ926-g@mail.gmail.com>
Message-ID: <20120405140649.C2CA4250603@webabinitio.net>

(reformatted to remove topposting)

On Thu, 05 Apr 2012 14:52:56 +0300, Andrew Svetlov <andrew.svetlov at gmail.com> wrote:
> On Thu, Apr 5, 2012 at 1:21 PM, Antoine Pitrou <solipsis at pitrou.net> wrote:
> > On Thu, 05 Apr 2012 11:41:48 +0200
> > andrew.svetlov <python-checkins at python.org> wrote:
> >> http://hg.python.org/cpython/rev/774c2afa6665
> >> changeset:   76115:774c2afa6665
> >> user:        Andrew Svetlov <andrew.svetlov at gmail.com>
> >> date:        Thu Apr 05 12:41:20 2012 +0300
> >> summary:
> >>   Issue #3033: Add displayof parameter to tkinter font.
> >> Patch by Guilherme Polo.
> >
> > Aren't there any docs?
>
> Maybe you will be surprised, but tkinter.rst has no comprehensive docs
> for any tkinter class.
> I like to get it fixed but definitely cannot do it myself. My very
> poor English is the main objection for writing narrative
> documentation.

One way to approach this problem would be to draft some rough docs that
try to capture the functionality without worrying about English content
or style.  Then you could post the rough draft somewhere, and ask for
someone from the docs mailing list to edit it.  My thought would be that
whoever took on the task would then do a rewrite, asking you questions
to fill in any details that aren't clear from the rough draft.

Thank you, by the way, for all the work you are doing.

--David

From andrew.svetlov at gmail.com  Thu Apr  5 16:34:07 2012
From: andrew.svetlov at gmail.com (Andrew Svetlov)
Date: Thu, 5 Apr 2012 17:34:07 +0300
Subject: [Python-Dev] cpython: Issue #3033: Add displayof parameter to
 tkinter font.
In-Reply-To: <20120405140649.C2CA4250603@webabinitio.net>
References: <E1SFjCO-0002h7-St@dinsdale.python.org>
	<20120405122102.7dd6ef8f@pitrou.net>
	<CAL3CFcVYeiSbWFV4EUm3s6MDqZ-A-Vm4E38ZN2rVupRVJ926-g@mail.gmail.com>
	<20120405140649.C2CA4250603@webabinitio.net>
Message-ID: <CAL3CFcXV3qgnSt54a5PPdELWt7r2bpmsEbCF-NktmA40jWXKXg@mail.gmail.com>

Thank you, David.
Is a separate repo clone located at hg.python.org good enough? Or maybe
there is a better way to do it?

On Thu, Apr 5, 2012 at 5:06 PM, R. David Murray <rdmurray at bitdance.com> wrote:
> (reformatted to remove topposting)
>
> On Thu, 05 Apr 2012 14:52:56 +0300, Andrew Svetlov <andrew.svetlov at gmail.com> wrote:
>> On Thu, Apr 5, 2012 at 1:21 PM, Antoine Pitrou <solipsis at pitrou.net> wrote:
>> > On Thu, 05 Apr 2012 11:41:48 +0200
>> > andrew.svetlov <python-checkins at python.org> wrote:
>> >> http://hg.python.org/cpython/rev/774c2afa6665
>> >> changeset:   76115:774c2afa6665
>> >> user:        Andrew Svetlov <andrew.svetlov at gmail.com>
>> >> date:        Thu Apr 05 12:41:20 2012 +0300
>> >> summary:
>> >>   Issue #3033: Add displayof parameter to tkinter font.
>> >> Patch by Guilherme Polo.
>> >
>> > Aren't there any docs?
>>
>> Maybe you will be surprised, but tkinter.rst has no comprehensive docs
>> for any tkinter class.
>> I like to get it fixed but definitely cannot do it myself. My very
>> poor English is the main objection for writing narrative
>> documentation.
>
> One way to approach this problem would be to draft some rough docs that
> try to capture the functionality without worrying about English content
> or style.  Then you could post the rough draft somewhere, and ask for
> someone from the docs mailing list to edit it.  My thought would be that
> whoever took on the task would then do a rewrite, asking you questions
> to fill in any details that aren't clear from the rough draft.
>
> Thank, you, by the way, for all the work you are doing.
>
> --David
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/andrew.svetlov%40gmail.com
>



-- 
Thanks,
Andrew Svetlov

From stephen at xemacs.org  Thu Apr  5 16:45:06 2012
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Thu, 5 Apr 2012 23:45:06 +0900
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
 (was: PEP 418: Add monotonic clock)
In-Reply-To: <20120405133411.GC17105@iskra.aviel.ru>
References: <20120403060317.GA31001@cskk.homeip.net>
	<CAL0kPAXZ1AxqeVYZen9vzUYxKDG-hr1YFZ+vRRMLTN=hJDfKPQ@mail.gmail.com>
	<4F7B2029.8010707@stoneleaf.us>
	<CAL0kPAWWtk2KWWkBmVHMwCBXGC1k_bjG0wEbkjOt9+DDniTVYw@mail.gmail.com>
	<20120404174449.GB25288@iskra.aviel.ru>
	<4F7C8CD6.7090308@stoneleaf.us>
	<20120404192436.GB27384@iskra.aviel.ru>
	<4F7CA660.60205@stoneleaf.us> <20120404230503.GB314@iskra.aviel.ru>
	<CAL_0O18KqdYTDhj5xx2FPmNfiFGj7Esie-BVEa_exWKejg7F8Q@mail.gmail.com>
	<20120405133411.GC17105@iskra.aviel.ru>
Message-ID: <CAL_0O19C6iDmQE2utpf_Zby9JudjB7uf5iWz8ga0gdENtLb-rA@mail.gmail.com>

On Thu, Apr 5, 2012 at 10:34 PM, Oleg Broytman <phd at phdru.name> wrote:

>    Why doesn't open() return None for a non-existing file? or
> socket.gethostbyname() for a non-existing name?

That's not an answer to my question, because those calls have very
important use cases where the user knows the object exists (and in
fact in some cases open() will create it for him), so that failure to
exist is indeed a (user) error (such as a misspelling).  I find it
hard to imagine use cases where "file = open(thisfile) or
open(thatfile)" makes sense.  Not even for the case where thisfile ==
'script.pyc' and thatfile == 'script.py'.

The point of the proposed get_clock(), OTOH, is to ask if an object
with certain characteristics exists, and the fact that it returns the
clock rather than True if found is a matter of practical convenience.
Precisely because "clock = get_clock(best) or get_clock(better) or
get_clock(acceptable)" does make sense.

From phd at phdru.name  Thu Apr  5 17:22:17 2012
From: phd at phdru.name (Oleg Broytman)
Date: Thu, 5 Apr 2012 19:22:17 +0400
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
 (was: PEP 418: Add monotonic clock)
In-Reply-To: <CAL_0O19C6iDmQE2utpf_Zby9JudjB7uf5iWz8ga0gdENtLb-rA@mail.gmail.com>
References: <4F7B2029.8010707@stoneleaf.us>
	<CAL0kPAWWtk2KWWkBmVHMwCBXGC1k_bjG0wEbkjOt9+DDniTVYw@mail.gmail.com>
	<20120404174449.GB25288@iskra.aviel.ru>
	<4F7C8CD6.7090308@stoneleaf.us>
	<20120404192436.GB27384@iskra.aviel.ru>
	<4F7CA660.60205@stoneleaf.us> <20120404230503.GB314@iskra.aviel.ru>
	<CAL_0O18KqdYTDhj5xx2FPmNfiFGj7Esie-BVEa_exWKejg7F8Q@mail.gmail.com>
	<20120405133411.GC17105@iskra.aviel.ru>
	<CAL_0O19C6iDmQE2utpf_Zby9JudjB7uf5iWz8ga0gdENtLb-rA@mail.gmail.com>
Message-ID: <20120405152217.GA22311@iskra.aviel.ru>

On Thu, Apr 05, 2012 at 11:45:06PM +0900, Stephen J. Turnbull wrote:
> On Thu, Apr 5, 2012 at 10:34 PM, Oleg Broytman <phd at phdru.name> wrote:
> >    Why doesn't open() return None for a non-existing file? or
> > socket.gethostbyname() for a non-existing name?
> 
> That's not an answer to my question, because those calls have very
> important use cases where the user knows the object exists (and in
> fact in some cases open() will create it for him), so that failure to
> exist is indeed a (user) error (such as a misspelling).  I find it
> hard to imagine use cases where "file = open(thisfile) or
> open(thatfile)" makes sense.  Not even for the case where thisfile ==
> 'script.pyc' and thatfile == 'script.py'.

   Counterexamples - any configuration file: a program looks for its config
at $HOME and not finding it there looks in /etc. So
    config = open('~/.someprogram.config') or open('/etc/someprogram/config')
would make sense. The absence of any of these files is not an error at
all - the program just starts with the default configuration. So if the
resulting config in the code above were None, it would still be
ok. But Python doesn't allow that.
   Some configuration files are constructed by combining a number of
user-defined and system-defined files. E.g., the mailcap database. It
should be something like
    combined_database = [db for db in (
        open('/etc/mailcap'),
        open('/usr/etc/mailcap'),
        open('/usr/local/etc/mailcap'),
        open('~/.mailcap'),
    ) if db]
But no - open() raises IOError rather than returning None. And I think that is
the right way. Those who want to write code similar to the examples
above can explicitly suppress exceptions by writing wrappers, as sketched below.
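   Such a wrapper is only a few lines. A sketch (open_or_none() is an
invented name, and IOError is the pre-3.3 spelling of the relevant
exception):

import os.path

def open_or_none(path, mode='r'):
    # Return an open file object, or None if the file cannot be opened.
    try:
        return open(os.path.expanduser(path), mode)
    except IOError:
        return None

config = (open_or_none('~/.someprogram.config')
          or open_or_none('/etc/someprogram/config'))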

> The point of the proposed get_clock(), OTOH, is to ask if an object
> with certain characteristics exists, and the fact that it returns the
> clock rather than True if found is a matter of practical convenience.
> Precisely because "clock = get_clock(best) or get_clock(better) or
> get_clock(acceptable)" does make sense.

Oleg.
-- 
     Oleg Broytman            http://phdru.name/            phd at phdru.name
           Programmers don't die, they just GOSUB without RETURN.

From rdmurray at bitdance.com  Thu Apr  5 17:29:58 2012
From: rdmurray at bitdance.com (R. David Murray)
Date: Thu, 05 Apr 2012 11:29:58 -0400
Subject: [Python-Dev] cpython: Issue #3033: Add displayof parameter to
	tkinter font.
In-Reply-To: <CAL3CFcXV3qgnSt54a5PPdELWt7r2bpmsEbCF-NktmA40jWXKXg@mail.gmail.com>
References: <E1SFjCO-0002h7-St@dinsdale.python.org>
	<20120405122102.7dd6ef8f@pitrou.net>
	<CAL3CFcVYeiSbWFV4EUm3s6MDqZ-A-Vm4E38ZN2rVupRVJ926-g@mail.gmail.com>
	<20120405140649.C2CA4250603@webabinitio.net>
	<CAL3CFcXV3qgnSt54a5PPdELWt7r2bpmsEbCF-NktmA40jWXKXg@mail.gmail.com>
Message-ID: <20120405152949.0348D250603@webabinitio.net>

On Thu, 05 Apr 2012 17:34:07 +0300, Andrew Svetlov <andrew.svetlov at gmail.com> wrote:
> Thank you, David.
> Is separate repo clone located at hg.python.org good enough? Or maybe
> there are better way to do it?

That sounds like a good plan to me.

--David

From rdmurray at bitdance.com  Thu Apr  5 17:38:13 2012
From: rdmurray at bitdance.com (R. David Murray)
Date: Thu, 05 Apr 2012 11:38:13 -0400
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
	(was: PEP 418: Add monotonic clock)
In-Reply-To: <20120405152217.GA22311@iskra.aviel.ru>
References: <4F7B2029.8010707@stoneleaf.us>
	<CAL0kPAWWtk2KWWkBmVHMwCBXGC1k_bjG0wEbkjOt9+DDniTVYw@mail.gmail.com>
	<20120404174449.GB25288@iskra.aviel.ru>
	<4F7C8CD6.7090308@stoneleaf.us>
	<20120404192436.GB27384@iskra.aviel.ru>
	<4F7CA660.60205@stoneleaf.us> <20120404230503.GB314@iskra.aviel.ru>
	<CAL_0O18KqdYTDhj5xx2FPmNfiFGj7Esie-BVEa_exWKejg7F8Q@mail.gmail.com>
	<20120405133411.GC17105@iskra.aviel.ru>
	<CAL_0O19C6iDmQE2utpf_Zby9JudjB7uf5iWz8ga0gdENtLb-rA@mail.gmail.com>
	<20120405152217.GA22311@iskra.aviel.ru>
Message-ID: <20120405153804.5A49F250603@webabinitio.net>

On Thu, 05 Apr 2012 19:22:17 +0400, Oleg Broytman <phd at phdru.name> wrote:
> On Thu, Apr 05, 2012 at 11:45:06PM +0900, Stephen J. Turnbull wrote:
> > On Thu, Apr 5, 2012 at 10:34 PM, Oleg Broytman <phd at phdru.name> wrote:
> > >    Why doesn't open() return None for a non-existing file? or
> > > socket.gethostbyname() for a non-existing name?
> > 
> > That's not an answer to my question, because those calls have very
> > important use cases where the user knows the object exists (and in
> > fact in some cases open() will create it for him), so that failure to
> > exist is indeed a (user) error (such as a misspelling).  I find it
> > hard to imagine use cases where "file = open(thisfile) or
> > open(thatfile)" makes sense.  Not even for the case where thisfile ==
> > 'script.pyc' and thatfile == 'script.py'.
> 
>    Counterexamples - any configuration file: a program looks for its config
> at $HOME and not finding it there looks in /etc. So
>     config = open('~/.someprogram.config') or open('/etc/someprogram/config')
> would make sense. The absence of any of these files is not an error at
> all - the program just starts with default configuration. So if the
> resulting config in the code above would be None - it's still would be
> ok. But Python doesn't allow that.
>    Some configuration files are constructed by combining a number of
> user-defined and system-defined files. E.g., the mailcap database. It
> should be something like
>     combined_database = [db for db in (
>         open('/etc/mailcap'),
>         open('/usr/etc/mailcap'),
>         open('/usr/local/etc/mailcap'),
>         open('~/.mailcap'),
>     ) if db]
> But no way - open() raises IOError, not return None. And I think it is
> the right way. Those who want to write the code similar to the examples
> above - explicitly suppress exceptions by writing wrappers.

Ah, but the actual code in the mimetypes module (whose list is even
longer) looks like this:

    for file in files:
        if os.path.isfile(file):
            db.read(file)

That is, Python provides a query function that doesn't raise an error.

Do you really think we need to add a third clock function (the query
function) that just returns True or False?  Maybe we do, if actually
creating the clock could raise an error even if it exists, as is the case
for 'open'.
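
Purely to illustrate the shape such a query function could take -- the
names and the toy clock registry below are invented for this example and
are not part of the PEP:

import time

_KNOWN_CLOCKS = {'system': time.time, 'cpu': time.clock}

def has_clock(name):
    # Query: True if get_clock(name) would succeed; never raises.
    return name in _KNOWN_CLOCKS

def get_clock(name):
    try:
        return _KNOWN_CLOCKS[name]
    except KeyError:
        raise ValueError('no such clock: %r' % (name,))

# the mimetypes idiom, applied to clocks: keep only the ones that exist
available = [name for name in ('system', 'monotonic', 'cpu') if has_clock(name)]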

(But unless I'm confused none of this has anything to do with Victor's
PEP as currently proposed :)

--David

From ethan at stoneleaf.us  Thu Apr  5 17:32:22 2012
From: ethan at stoneleaf.us (Ethan Furman)
Date: Thu, 05 Apr 2012 08:32:22 -0700
Subject: [Python-Dev] PEP 418 is too divisive and confusing and should
 be postponed
In-Reply-To: <20120405034102.GA28103@cskk.homeip.net>
References: <CALeMXf6vzd56QAmssvr5c7xQ+trP0-Cu4E5L793UNyH3Gu+hSw@mail.gmail.com>
	<20120405034102.GA28103@cskk.homeip.net>
Message-ID: <4F7DBB06.40801@stoneleaf.us>

Cameron Simpson wrote:
> On 04Apr2012 22:23, PJ Eby <pje at telecommunity.com> wrote:
> | On Apr 4, 2012 7:28 PM, "Victor Stinner" <victor.stinner at gmail.com> wrote:
> | > More details why it's hard to define such function and why I dropped
> | > it from the PEP.
> | >
> | > If someone wants to propose again such function ("monotonic or
> | > fallback to system" clock), two issues should be solved:
> | >
> | >  - name of the function
> | >  - description of the function
> | 
> | Maybe I missed it, but did anyone ever give a reason why the fallback
> | couldn't be to Steven D'Aprano's monotonic wrapper algorithm over the
> | system clock?  (Given a suitable minimum delta.)  That function appeared to
> | me to provide a sufficiently monotonic clock for timeout purposes, if
> | nothing else.
> 
> It was pointed out (by Nick Coghlan I think?) that if the system clock
> stepped backwards then a timeout would be extended by at least that
> long. For example, code that waited (by polling the synthetic clock)
> for 1s could easily wait an hour if the system clock stepped back that
> far. Probably undesirable.

Steven D'Aprano's synthetic clock is able to partially avoid that 
situation -- worst case is a timeout of double what you asked for -- so 
10 seconds instead of 5 (which is much better than 3600!).

~Ethan~

From phd at phdru.name  Thu Apr  5 18:01:48 2012
From: phd at phdru.name (Oleg Broytman)
Date: Thu, 5 Apr 2012 20:01:48 +0400
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
 (was: PEP 418: Add monotonic clock)
In-Reply-To: <20120405152217.GA22311@iskra.aviel.ru>
References: <CAL0kPAWWtk2KWWkBmVHMwCBXGC1k_bjG0wEbkjOt9+DDniTVYw@mail.gmail.com>
	<20120404174449.GB25288@iskra.aviel.ru>
	<4F7C8CD6.7090308@stoneleaf.us>
	<20120404192436.GB27384@iskra.aviel.ru>
	<4F7CA660.60205@stoneleaf.us> <20120404230503.GB314@iskra.aviel.ru>
	<CAL_0O18KqdYTDhj5xx2FPmNfiFGj7Esie-BVEa_exWKejg7F8Q@mail.gmail.com>
	<20120405133411.GC17105@iskra.aviel.ru>
	<CAL_0O19C6iDmQE2utpf_Zby9JudjB7uf5iWz8ga0gdENtLb-rA@mail.gmail.com>
	<20120405152217.GA22311@iskra.aviel.ru>
Message-ID: <20120405160148.GB22311@iskra.aviel.ru>

On Thu, Apr 05, 2012 at 07:22:17PM +0400, Oleg Broytman wrote:
> On Thu, Apr 05, 2012 at 11:45:06PM +0900, Stephen J. Turnbull wrote:
> > find it
> > hard to imagine use cases where "file = open(thisfile) or
> > open(thatfile)" makes sense.  Not even for the case where thisfile ==
> > 'script.pyc' and thatfile == 'script.py'.
> 
>    Counterexamples - any configuration file: a program looks for its config
> at $HOME and not finding it there looks in /etc. So
>     config = open('~/.someprogram.config') or open('/etc/someprogram/config')
> would make sense.

   A counterexample with gethostbyname - a list of proxies. It's not an
error if some or even all proxies in the list are down - one just
connects to the first that's up. So a chain like
    proxy_addr = gethostbyname(FIRST) or gethostbyname(SECOND)
would make sense.

Oleg.
-- 
     Oleg Broytman            http://phdru.name/            phd at phdru.name
           Programmers don't die, they just GOSUB without RETURN.

From phd at phdru.name  Thu Apr  5 18:02:59 2012
From: phd at phdru.name (Oleg Broytman)
Date: Thu, 5 Apr 2012 20:02:59 +0400
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
 (was: PEP 418: Add monotonic clock)
In-Reply-To: <20120405153804.5A49F250603@webabinitio.net>
References: <20120404174449.GB25288@iskra.aviel.ru>
	<4F7C8CD6.7090308@stoneleaf.us>
	<20120404192436.GB27384@iskra.aviel.ru>
	<4F7CA660.60205@stoneleaf.us> <20120404230503.GB314@iskra.aviel.ru>
	<CAL_0O18KqdYTDhj5xx2FPmNfiFGj7Esie-BVEa_exWKejg7F8Q@mail.gmail.com>
	<20120405133411.GC17105@iskra.aviel.ru>
	<CAL_0O19C6iDmQE2utpf_Zby9JudjB7uf5iWz8ga0gdENtLb-rA@mail.gmail.com>
	<20120405152217.GA22311@iskra.aviel.ru>
	<20120405153804.5A49F250603@webabinitio.net>
Message-ID: <20120405160259.GC22311@iskra.aviel.ru>

On Thu, Apr 05, 2012 at 11:38:13AM -0400, R. David Murray wrote:
> Do you really think we need to add a third clock function (the query
> function) that just returns True or False?  Maybe we do, if actually
> creating the clock could raise an error even if it exists, as is the case
> for 'open'.

   Maybe we do. It depends on the usage patterns.

Oleg.
-- 
     Oleg Broytman            http://phdru.name/            phd at phdru.name
           Programmers don't die, they just GOSUB without RETURN.

From solipsis at pitrou.net  Thu Apr  5 17:59:15 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Thu, 5 Apr 2012 17:59:15 +0200
Subject: [Python-Dev] PEP 418 is too divisive and confusing and should
	be postponed
References: <CALeMXf6vzd56QAmssvr5c7xQ+trP0-Cu4E5L793UNyH3Gu+hSw@mail.gmail.com>
	<20120405034102.GA28103@cskk.homeip.net>
	<4F7DBB06.40801@stoneleaf.us>
Message-ID: <20120405175915.72e697dc@pitrou.net>

On Thu, 05 Apr 2012 08:32:22 -0700
Ethan Furman <ethan at stoneleaf.us> wrote:
> 
> Steven D'Aprano's synthetic clock is able to partially avoid that 
> situation -- worst case is a timeout of double what you asked for -- so 
> 10 seconds instead of 5 (which is much better than 3600!).

The remaining issue is that the clock is not system-wide, it's
interpreter-specific.

Regards

Antoine.



From pje at telecommunity.com  Thu Apr  5 18:41:46 2012
From: pje at telecommunity.com (PJ Eby)
Date: Thu, 5 Apr 2012 12:41:46 -0400
Subject: [Python-Dev] PEP 418 is too divisive and confusing and should
	be postponed
In-Reply-To: <20120405034102.GA28103@cskk.homeip.net>
References: <CALeMXf6vzd56QAmssvr5c7xQ+trP0-Cu4E5L793UNyH3Gu+hSw@mail.gmail.com>
	<20120405034102.GA28103@cskk.homeip.net>
Message-ID: <CALeMXf7=2E4kY9tfxL4ZLU5W6VR1XR2qHzjLrkUDvcFKSTy_Tw@mail.gmail.com>

On Wed, Apr 4, 2012 at 11:41 PM, Cameron Simpson <cs at zip.com.au> wrote:

> On 04Apr2012 22:23, PJ Eby <pje at telecommunity.com> wrote:
> | On Apr 4, 2012 7:28 PM, "Victor Stinner" <victor.stinner at gmail.com>
> wrote:
> | > More details why it's hard to define such function and why I dropped
> | > it from the PEP.
> | >
> | > If someone wants to propose again such function ("monotonic or
> | > fallback to system" clock), two issues should be solved:
> | >
> | >  - name of the function
> | >  - description of the function
> |
> | Maybe I missed it, but did anyone ever give a reason why the fallback
> | couldn't be to Steven D'Aprano's monotonic wrapper algorithm over the
> | system clock?  (Given a suitable minimum delta.)  That function appeared
> to
> | me to provide a sufficiently monotonic clock for timeout purposes, if
> | nothing else.
>
> It was pointed out (by Nick Coglan I think?) that if the system clock
> stepped backwards then a timeout would be extended by at least that
> long. For example, code that waited (by polling the synthetic clock)
> for 1s could easily wait an hour if the system clock stepped back that
> far. Probaby undesirable.
>

Steven D'Aprano's algorithm doesn't do that.  If the system clock steps
backwards, the synthetic clock still steps forward by a specified minimum
delta.  The amount of time that a timeout would be extended is a function
of the polling frequency, not of the presence or absence of backward steps
in the underlying clock.
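
A minimal sketch of that kind of wrapper (illustrative only; the helper
name and the minimum delta are placeholders):

    import time

    _last = None
    _MIN_DELTA = 1e-6  # placeholder minimum forward step per poll

    def clamped_time():
        # Never return less than the previous poll; on a backward system
        # clock step, advance by _MIN_DELTA instead, so a polled timeout
        # is stretched by at most one _MIN_DELTA per poll rather than by
        # the size of the backward step.
        global _last
        now = time.time()
        if _last is not None and now <= _last:
            now = _last + _MIN_DELTA
        _last = now
        return now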

From pje at telecommunity.com  Thu Apr  5 18:48:13 2012
From: pje at telecommunity.com (PJ Eby)
Date: Thu, 5 Apr 2012 12:48:13 -0400
Subject: [Python-Dev] PEP 418 is too divisive and confusing and should
	be postponed
In-Reply-To: <CAMpsgwbwW81epcdv66GLv4dXLmYLfT6VU8VE85hv_tUKcuPv5Q@mail.gmail.com>
References: <4F7B96F1.6020906@pearwood.info> <4F7BA3C2.4050705@gmail.com>
	<CAMpsgwbC7HgWZ+4-PDJfXD+=iD+h7NcDMo-pPVW_8fKwziM2ww@mail.gmail.com>
	<CALeMXf6vzd56QAmssvr5c7xQ+trP0-Cu4E5L793UNyH3Gu+hSw@mail.gmail.com>
	<CAMpsgwbwW81epcdv66GLv4dXLmYLfT6VU8VE85hv_tUKcuPv5Q@mail.gmail.com>
Message-ID: <CALeMXf68n_OigrgbXuGZe=b1d0F+oQESeCtZsWrEDDJbY3pTbA@mail.gmail.com>

On Thu, Apr 5, 2012 at 6:34 AM, Victor Stinner <victor.stinner at gmail.com>wrote:

> 2012/4/5 PJ Eby <pje at telecommunity.com>:
> >> More details why it's hard to define such function and why I dropped
> >> it from the PEP.
> >>
> >> If someone wants to propose again such function ("monotonic or
> >> fallback to system" clock), two issues should be solved:
> >>
> >>  - name of the function
> >>  - description of the function
> >
> > Maybe I missed it, but did anyone ever give a reason why the fallback
> > couldn't be to Steven D'Aprano's monotonic wrapper algorithm over the
> system
> > clock?  (Given a suitable minimum delta.)  That function appeared to me
> to
> > provide a sufficiently monotonic clock for timeout purposes, if nothing
> > else.
>
>
> Did you read the following section of the PEP?
>
> http://www.python.org/dev/peps/pep-0418/#working-around-operating-system-bugs
>
> Did I miss something? If yes, could you write a patch for the PEP please?
>

What's missing is that if you're using a monotonic clock for timeouts, then
a monotonically-adjusted system clock can do that, subject to the polling
frequency -- it does not break just because the system clock is set
backwards; it simply loses time proportional to the frequency with which it
is polled.

For timeout purposes in a single process, such a clock is useful.  It just
isn't suitable for benchmarks, or for interprocess coordination.

From guido at python.org  Thu Apr  5 18:56:19 2012
From: guido at python.org (Guido van Rossum)
Date: Thu, 5 Apr 2012 09:56:19 -0700
Subject: [Python-Dev] PEP 418 is too divisive and confusing and should
	be postponed
In-Reply-To: <CALeMXf68n_OigrgbXuGZe=b1d0F+oQESeCtZsWrEDDJbY3pTbA@mail.gmail.com>
References: <4F7B96F1.6020906@pearwood.info> <4F7BA3C2.4050705@gmail.com>
	<CAMpsgwbC7HgWZ+4-PDJfXD+=iD+h7NcDMo-pPVW_8fKwziM2ww@mail.gmail.com>
	<CALeMXf6vzd56QAmssvr5c7xQ+trP0-Cu4E5L793UNyH3Gu+hSw@mail.gmail.com>
	<CAMpsgwbwW81epcdv66GLv4dXLmYLfT6VU8VE85hv_tUKcuPv5Q@mail.gmail.com>
	<CALeMXf68n_OigrgbXuGZe=b1d0F+oQESeCtZsWrEDDJbY3pTbA@mail.gmail.com>
Message-ID: <CAP7+vJ+9dTw7w9+iEG_P8tBBSr0w=4OLmrMt=DuYQ1F5h0E+ug@mail.gmail.com>

On Thu, Apr 5, 2012 at 9:48 AM, PJ Eby <pje at telecommunity.com> wrote:
>
>
> On Thu, Apr 5, 2012 at 6:34 AM, Victor Stinner <victor.stinner at gmail.com>
> wrote:
>>
>> 2012/4/5 PJ Eby <pje at telecommunity.com>:
>> >> More details why it's hard to define such function and why I dropped
>> >> it from the PEP.
>> >>
>> >> If someone wants to propose again such function ("monotonic or
>> >> fallback to system" clock), two issues should be solved:
>> >>
>> >>  - name of the function
>> >>  - description of the function
>> >
>> > Maybe I missed it, but did anyone ever give a reason why the fallback
>> > couldn't be to Steven D'Aprano's monotonic wrapper algorithm over the
>> > system
>> > clock?  (Given a suitable minimum delta.)  That function appeared to me
>> > to
>> > provide a sufficiently monotonic clock for timeout purposes, if nothing
>> > else.
>>
>>
>> Did you read the following section of the PEP?
>>
>> http://www.python.org/dev/peps/pep-0418/#working-around-operating-system-bugs
>>
>> Did I miss something? If yes, could you write a patch for the PEP please?
>
>
> What's missing is that if you're using a monotonic clock for timeouts, then
> a monotonically-adjusted system clock can do that, subject to the polling
> frequency -- it does not break just because the system clock is set
> backwards; it simply loses time proportional to the frequency with which it
> is polled.

Depending on the polling frequency sounds like a bad idea, since you
can't know that you're the only user of the clock. Also depending on
the use case, too short a timeout may be worse than too long a
timeout. E.g. imagine hitting a website that usually takes 2 seconds
to respond, and setting a timeout to e.g. 4 seconds to bail. If the
timeout somehow gets reduced to 1 second it will appear as if the
website died, whereas if the timeout was increased to 1 hour, nothing
bad would happen unless the website *actually* started having truly
bad response times.

> For timeout purposes in a single process, such a clock is useful.  It just
> isn't suitable for benchmarks, or for interprocess coordination.

I think it would be better if the proposed algorithm (or whatever
algorithm to "fix" timeouts) was implemented by the
application/library code using the timeout (or provided as a separate
library function), rather than by the clock, since the clock can't
know what fallback behavior the app/lib needs.

-- 
--Guido van Rossum (python.org/~guido)

From solipsis at pitrou.net  Thu Apr  5 18:59:07 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Thu, 5 Apr 2012 18:59:07 +0200
Subject: [Python-Dev] PEP 418 is too divisive and confusing and should
	be postponed
References: <4F7B96F1.6020906@pearwood.info> <4F7BA3C2.4050705@gmail.com>
	<CAMpsgwbC7HgWZ+4-PDJfXD+=iD+h7NcDMo-pPVW_8fKwziM2ww@mail.gmail.com>
	<CALeMXf6vzd56QAmssvr5c7xQ+trP0-Cu4E5L793UNyH3Gu+hSw@mail.gmail.com>
	<CAMpsgwbwW81epcdv66GLv4dXLmYLfT6VU8VE85hv_tUKcuPv5Q@mail.gmail.com>
	<CALeMXf68n_OigrgbXuGZe=b1d0F+oQESeCtZsWrEDDJbY3pTbA@mail.gmail.com>
	<CAP7+vJ+9dTw7w9+iEG_P8tBBSr0w=4OLmrMt=DuYQ1F5h0E+ug@mail.gmail.com>
Message-ID: <20120405185907.445541f7@pitrou.net>

On Thu, 5 Apr 2012 09:56:19 -0700
Guido van Rossum <guido at python.org> wrote:
> 
> > For timeout purposes in a single process, such a clock is useful.  It just
> > isn't suitable for benchmarks, or for interprocess coordination.
> 
> I think it would be better if the proposed algorithm (or whatever
> algorithm to "fix" timeouts) was implemented by the
> application/library code using the timeout (or provided as a separate
> library function), rather than by the clock, since the clock can't
> know what fallback behavior the app/lib needs.

Agreed with providing it as a separate library function.

Regards

Antoine.



From tjreedy at udel.edu  Thu Apr  5 19:19:59 2012
From: tjreedy at udel.edu (Terry Reedy)
Date: Thu, 05 Apr 2012 13:19:59 -0400
Subject: [Python-Dev] cpython: Issue #3033: Add displayof parameter to
	tkinter font.
In-Reply-To: <20120405140649.C2CA4250603@webabinitio.net>
References: <E1SFjCO-0002h7-St@dinsdale.python.org>
	<20120405122102.7dd6ef8f@pitrou.net>
	<CAL3CFcVYeiSbWFV4EUm3s6MDqZ-A-Vm4E38ZN2rVupRVJ926-g@mail.gmail.com>
	<20120405140649.C2CA4250603@webabinitio.net>
Message-ID: <jlkk8g$pj6$1@dough.gmane.org>

On 4/5/2012 10:06 AM, R. David Murray wrote:
> (reformatted to remove topposting)
>
> On Thu, 05 Apr 2012 14:52:56 +0300, Andrew Svetlov<andrew.svetlov at gmail.com>  wrote:
>> On Thu, Apr 5, 2012 at 1:21 PM, Antoine Pitrou<solipsis at pitrou.net>  wrote:

>>> Aren't there any docs?
>>
>> Maybe you will be surprised, but tkinter.rst has no comprehensive docs
>> for any tkinter class.

There are doc strings to be updated. See below.

>> I like to get it fixed but definitely cannot do it myself. My very
>> poor English is the main objection for writing narrative
>> documentation.
>
> One way to approach this problem would be to draft some rough docs that
> try to capture the functionality without worrying about English content
> or style.  Then you could post the rough draft somewhere, and ask for
> someone from the docs mailing list to edit it.  My thought would be that
> whoever took on the task would then do a rewrite, asking you questions
> to fill in any details that aren't clear from the rough draft.
>
> Thank, you, by the way, for all the work you are doing.

I have been hoping to work on a proper tkinter doc. I discovered some 
time ago through the pydoc server (not currently working for me, see 
http://bugs.python.org/issue14512)
that there are doc strings for (most) everything. I have been meaning to
ask whether there is a way to build a draft doc from the doc strings. 
The first major editing job, given output like I saw in the browser, 
would be to remove the constant duplication of entries for inherited 
methods. Some widgets inherit perhaps a hundred methods and only add or 
override a couple. I guess the next question is whether a draft doc 
could be built *without* pulling in inherited methods.

-- 
Terry Jan Reedy


From victor.stinner at gmail.com  Thu Apr  5 19:21:58 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Thu, 5 Apr 2012 19:21:58 +0200
Subject: [Python-Dev] PEP 418 is too divisive and confusing and should
	be postponed
In-Reply-To: <20120405185907.445541f7@pitrou.net>
References: <4F7B96F1.6020906@pearwood.info> <4F7BA3C2.4050705@gmail.com>
	<CAMpsgwbC7HgWZ+4-PDJfXD+=iD+h7NcDMo-pPVW_8fKwziM2ww@mail.gmail.com>
	<CALeMXf6vzd56QAmssvr5c7xQ+trP0-Cu4E5L793UNyH3Gu+hSw@mail.gmail.com>
	<CAMpsgwbwW81epcdv66GLv4dXLmYLfT6VU8VE85hv_tUKcuPv5Q@mail.gmail.com>
	<CALeMXf68n_OigrgbXuGZe=b1d0F+oQESeCtZsWrEDDJbY3pTbA@mail.gmail.com>
	<CAP7+vJ+9dTw7w9+iEG_P8tBBSr0w=4OLmrMt=DuYQ1F5h0E+ug@mail.gmail.com>
	<20120405185907.445541f7@pitrou.net>
Message-ID: <CAMpsgwY7SPG3zfqb+6k8xkTpdsXkp4SVrMz6hKF7Sn5oY4bemg@mail.gmail.com>

>> > For timeout purposes in a single process, such a clock is useful.  It just
>> > isn't suitable for benchmarks, or for interprocess coordination.
>>
>> I think it would be better if the proposed algorithm (or whatever
>> algorithm to "fix" timeouts) was implemented by the
>> application/library code using the timeout (or provided as a separate
>> library function), rather than by the clock, since the clock can't
>> know what fallback behavior the app/lib needs.
>
> Agreed with providing it as a separate library function.

I changed time.monotonic() to not fall back to the system clock exactly
for this reason: Python cannot guess what the developer expects, or
how the developer will use the clock.

Instead of implementing your own clock in your application, it may be
easier to patch your OS? I suppose that you are running on GNU/Hurd,
because I have not yet found another OS that doesn't provide a monotonic clock :-)

If you are using an OS that doesn't provide a monotonic clock, do you
really need to implement your own in your application?

Victor

From ethan at stoneleaf.us  Thu Apr  5 20:04:16 2012
From: ethan at stoneleaf.us (Ethan Furman)
Date: Thu, 05 Apr 2012 11:04:16 -0700
Subject: [Python-Dev] PEP 418 is too divisive and confusing and should
 be postponed
In-Reply-To: <CAMpsgwY7SPG3zfqb+6k8xkTpdsXkp4SVrMz6hKF7Sn5oY4bemg@mail.gmail.com>
References: <4F7B96F1.6020906@pearwood.info>
	<4F7BA3C2.4050705@gmail.com>	<CAMpsgwbC7HgWZ+4-PDJfXD+=iD+h7NcDMo-pPVW_8fKwziM2ww@mail.gmail.com>	<CALeMXf6vzd56QAmssvr5c7xQ+trP0-Cu4E5L793UNyH3Gu+hSw@mail.gmail.com>	<CAMpsgwbwW81epcdv66GLv4dXLmYLfT6VU8VE85hv_tUKcuPv5Q@mail.gmail.com>	<CALeMXf68n_OigrgbXuGZe=b1d0F+oQESeCtZsWrEDDJbY3pTbA@mail.gmail.com>	<CAP7+vJ+9dTw7w9+iEG_P8tBBSr0w=4OLmrMt=DuYQ1F5h0E+ug@mail.gmail.com>	<20120405185907.445541f7@pitrou.net>
	<CAMpsgwY7SPG3zfqb+6k8xkTpdsXkp4SVrMz6hKF7Sn5oY4bemg@mail.gmail.com>
Message-ID: <4F7DDEA0.2030402@stoneleaf.us>

Victor Stinner wrote:
> I changed time.monotonic() to not fall back to the system clock exactly
> for this reason: Python cannot guess what the developer expects, or
> how the developer will use the clock.

Which is exactly why I like Cameron Simpson's approach to selecting a 
clock -- let the developer/user decide what kind of clock they need, and 
ask for one that matches their criteria.

~Ethan~

From pje at telecommunity.com  Thu Apr  5 21:38:52 2012
From: pje at telecommunity.com (PJ Eby)
Date: Thu, 5 Apr 2012 15:38:52 -0400
Subject: [Python-Dev] PEP 418 is too divisive and confusing and should
	be postponed
In-Reply-To: <CAP7+vJ+9dTw7w9+iEG_P8tBBSr0w=4OLmrMt=DuYQ1F5h0E+ug@mail.gmail.com>
References: <4F7B96F1.6020906@pearwood.info> <4F7BA3C2.4050705@gmail.com>
	<CAMpsgwbC7HgWZ+4-PDJfXD+=iD+h7NcDMo-pPVW_8fKwziM2ww@mail.gmail.com>
	<CALeMXf6vzd56QAmssvr5c7xQ+trP0-Cu4E5L793UNyH3Gu+hSw@mail.gmail.com>
	<CAMpsgwbwW81epcdv66GLv4dXLmYLfT6VU8VE85hv_tUKcuPv5Q@mail.gmail.com>
	<CALeMXf68n_OigrgbXuGZe=b1d0F+oQESeCtZsWrEDDJbY3pTbA@mail.gmail.com>
	<CAP7+vJ+9dTw7w9+iEG_P8tBBSr0w=4OLmrMt=DuYQ1F5h0E+ug@mail.gmail.com>
Message-ID: <CALeMXf7D6BJX1Uq8ocMN-Lysm9wwz+78vHbB-p4VB1q-yXhg-Q@mail.gmail.com>

On Thu, Apr 5, 2012 at 12:56 PM, Guido van Rossum <guido at python.org> wrote:

> Depending on the polling frequency sounds like a bad idea, since you
> can't know that you're the only user of the clock. Also depending on
> the use case, too short a timeout may be worse than too long a
> timeout.


Given a small enough delta, the timeout won't be too short. (Steven's
original code sample, I believe, used either 0 or 1 as a delta, but it could
be as small a fraction as will add correctly in the datatype used.)   And
the worst-case polling interval is the length of the timeout itself, meaning
you can't end up with more than double your intended timeout.

In the opposite scenario, where the clock is polled in a tight loop, as
long as Python doesn't sample the raw clock so often that the summed
fractional deltas exceed the real clock's rate, the timeout won't be
shortened by any appreciable amount.  In fact, this can be guaranteed by
measuring time as a (raw, increment) tuple, where the increment can be an
arbitrarily-large integer.  Each new time value is greater than the one
before, yet the real component remains untouched.  With this approach, the
timeout can only be delayed for however long the system clock *stops*, and
the timeout can only be shortened by the system clock skipping ahead.
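
For concreteness, a minimal sketch of that (raw, increment) idea (purely
illustrative; the name is a placeholder):

    import time

    _last = (0.0, 0)  # (raw system time, tie-breaking increment)

    def tuple_clock():
        # Each returned tuple compares strictly greater than the previous
        # one, while the raw component is never inflated: ties and
        # backward steps only bump the integer increment.
        global _last
        raw = time.time()
        last_raw, last_inc = _last
        if raw > last_raw:
            _last = (raw, 0)
        else:
            _last = (last_raw, last_inc + 1)
        return _last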

Okay, having thought that out, I now agree that there are too many fine
points to make this cover enough of the use cases without needing
parameters.

Or more to the point, "If the implementation is hard to explain, it's a bad
idea."  ;-)

From zooko at zooko.com  Thu Apr  5 21:39:17 2012
From: zooko at zooko.com (Zooko Wilcox-O'Hearn)
Date: Thu, 5 Apr 2012 13:39:17 -0600
Subject: [Python-Dev] PEP 418 is too divisive and confusing and should
	be postponed
In-Reply-To: <4F7BA3C2.4050705@gmail.com>
References: <4F7B96F1.6020906@pearwood.info>
	<4F7BA3C2.4050705@gmail.com>
Message-ID: <CANdZDc5q7T22abPwoyTLSiEFyF5qUbrO9T_pL8mYM3dytcvwVw@mail.gmail.com>

Folks:

Good job, Victor Stinner on baking the accumulated knowledge of this
thread into PEP 418. Even though I'm very interested in the topic, I
haven't been able to digest the whole thread(s) on the list and
understand what the current collective understanding is. The detailed
PEP document helps a lot.

I think there are still some mistakes, either in our collective
understanding as reflected by the PEP, or in my own head.

For starters, I still don't understand the first, most basic thing:
what do people mean when they say "monotonic clock"? I don't
understand the current text of PEP 418 with regard to the definition
of that word.

Allow me to resort to an analogy. There is an infinitely long,
perfectly straight and flat racetrack. There is a flag that gets
dragged along it at a constant rate, with the label "REAL TIME" on the
flag. There are some runners, each with a different label on their
chest:

Runner A: a helicopter hovers over Runner A. Occasionally it picks him
up and plops him down right next to the flag. Also, he wears a headset
and listens to instructions from his coach to run a little faster or
slower, as necessary, to remain abreast of the flag.

Runner B: a helicopter hovers over Runner B. If he is behind the flag,
it will pick him up and plop him down right next to the flag. However,
if he is ahead of the flag it will not pick him up.

Runner C: no helicopter ever picks up Runner C, but he does wear a
headset and listens to instructions from his coach to run a little
faster or a little slower. His coach tells him to run a little faster
if he is behind the flag or run a little slower if he is in front of
the flag, with the goal of eventually having him right next to the
flag.

Runner D: like Runner C, he never gets picked up, but he listens to
instructions to run a little faster or a little slower. However,
instead of telling him to run faster in order to catch up to the flag,
or to run slower in order to "catch down" to the flag, his coach
instead tells him to run a little faster if he is moving slower than
the flag is moving, and to run a little slower if he is moving faster
than the flag is moving. Note that this is very different from Runner
C, in that it is not intended to cause him to eventually be right next
to the flag, and indeed if it is done right it guarantees that he will
*never* be right next to the flag, although he will be moving just as
fast as the flag is moving.

Runner E: no helicopter, no headset. He just proceeds at his own pace,
blissfully unaware of the exhortations of others.

Now: which ones of these five runners do you call "monotonic"? Which
ones do you call "steady"?

Regards,

Zooko

From ethan at stoneleaf.us  Thu Apr  5 20:56:00 2012
From: ethan at stoneleaf.us (Ethan Furman)
Date: Thu, 05 Apr 2012 11:56:00 -0700
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
 (was: PEP 418: Add monotonic clock)
In-Reply-To: <20120404230503.GB314@iskra.aviel.ru>
References: <CAL0kPAV0gOkVCE9-7ux7ZEWC3bPxB3+AYMgu4BxzR6jXeyf97A@mail.gmail.com>	<20120403060317.GA31001@cskk.homeip.net>	<CAL0kPAXZ1AxqeVYZen9vzUYxKDG-hr1YFZ+vRRMLTN=hJDfKPQ@mail.gmail.com>	<4F7B2029.8010707@stoneleaf.us>	<CAL0kPAWWtk2KWWkBmVHMwCBXGC1k_bjG0wEbkjOt9+DDniTVYw@mail.gmail.com>	<20120404174449.GB25288@iskra.aviel.ru>	<4F7C8CD6.7090308@stoneleaf.us>	<20120404192436.GB27384@iskra.aviel.ru>	<4F7CA660.60205@stoneleaf.us>
	<20120404230503.GB314@iskra.aviel.ru>
Message-ID: <4F7DEAC0.2030207@stoneleaf.us>

Oleg Broytman wrote:
> On Wed, Apr 04, 2012 at 12:52:00PM -0700, Ethan Furman wrote:
>> Forced?  I do not use Python to be forced to use one style of
>> programming over another.
> 
>    Then it's strange you are using Python with its strict syntax
> (case-sensitivity, forced indents), ubiquitous exceptions, limited
> syntax of lambdas and absence of code blocks (read - forced functions),
> etc.

I come from assembly -- 'a' and 'A' are *not* the same.

indents -- I already used them; finding a language that gave them the 
same importance I did was incredible.

exceptions -- Python uses them, true, but I don't have to in my own code 
(I do, but that's beside the point).

lambdas -- they work just fine for my needs.

etc.


>> And it's not like returning None will allow some clock calls to work
>> but not others -- as soon as they try to use it, it will raise an
>> exception.
> 
>    There is a philosophical distinction between EAFP and LBYL. I am
> mostly proponent of LBYL.
>    Well, I am partially retreat. "Errors should never pass silently.
> Unless explicitly silenced." get_clock(FLAG, on_error=None) could return
> None.

It's only an error if it's documented that way and, more importantly, 
thought of that way.  The re module is a good example: if it can't find 
what you're looking for it returns None -- it does *not* raise a 
NotFound exception.

I see get_clock() the same way:  I need a clock that does xyz... None? 
Okay, there isn't one.

~Ethan~

From phd at phdru.name  Thu Apr  5 22:15:08 2012
From: phd at phdru.name (Oleg Broytman)
Date: Fri, 6 Apr 2012 00:15:08 +0400
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
 (was: PEP 418: Add monotonic clock)
In-Reply-To: <4F7DEAC0.2030207@stoneleaf.us>
References: <20120403060317.GA31001@cskk.homeip.net>
	<CAL0kPAXZ1AxqeVYZen9vzUYxKDG-hr1YFZ+vRRMLTN=hJDfKPQ@mail.gmail.com>
	<4F7B2029.8010707@stoneleaf.us>
	<CAL0kPAWWtk2KWWkBmVHMwCBXGC1k_bjG0wEbkjOt9+DDniTVYw@mail.gmail.com>
	<20120404174449.GB25288@iskra.aviel.ru>
	<4F7C8CD6.7090308@stoneleaf.us>
	<20120404192436.GB27384@iskra.aviel.ru>
	<4F7CA660.60205@stoneleaf.us> <20120404230503.GB314@iskra.aviel.ru>
	<4F7DEAC0.2030207@stoneleaf.us>
Message-ID: <20120405201508.GA29577@iskra.aviel.ru>

On Thu, Apr 05, 2012 at 11:56:00AM -0700, Ethan Furman wrote:
> It's only an error if it's documented that way and, more
> importantly, thought of that way.  The re module is a good example:
> if it can't find what you're looking for it returns None -- it does
> *not* raise a NotFound exception.

   But open() raises IOError. ''.find('a') returns -1 but ''.index('a')
raises ValueError.
   So we can argue in circles both ways, there are too many arguments
pro and contra. Python is just too inconsistent to be consistently
argued over. ;-)

Oleg.
-- 
     Oleg Broytman            http://phdru.name/            phd at phdru.name
           Programmers don't die, they just GOSUB without RETURN.

From andrew.svetlov at gmail.com  Thu Apr  5 22:16:54 2012
From: andrew.svetlov at gmail.com (Andrew Svetlov)
Date: Thu, 5 Apr 2012 23:16:54 +0300
Subject: [Python-Dev] cpython: Issue #3033: Add displayof parameter to
 tkinter font.
In-Reply-To: <jlkk8g$pj6$1@dough.gmane.org>
References: <E1SFjCO-0002h7-St@dinsdale.python.org>
	<20120405122102.7dd6ef8f@pitrou.net>
	<CAL3CFcVYeiSbWFV4EUm3s6MDqZ-A-Vm4E38ZN2rVupRVJ926-g@mail.gmail.com>
	<20120405140649.C2CA4250603@webabinitio.net>
	<jlkk8g$pj6$1@dough.gmane.org>
Message-ID: <CAL3CFcV3iOSHW2SnhRHkRBOr8GfFJYDat4JhR803xQ+A8S8HWg@mail.gmail.com>

On Thu, Apr 5, 2012 at 8:19 PM, Terry Reedy <tjreedy at udel.edu> wrote:
>
> I have been hoping to work on a proper tkinter doc. I discovered some time
> ago through the pydoc server (not currently working for me, see
> http://bugs.python.org/issue14512)
> that there are doc strings for (most) everything. I have been meaning to ask
> whether there is a way to build a draft doc from the doc strings.

I'll do it. Frankly speaking, I don't like doing it -- 'hate' is a better
word for my feelings about it.
But somebody needs to make the tkinter docs consistent.
I doubt that work can be done well enough by a script --
the docstrings would have to be reformatted according
to Sphinx markup.

Please, after I finish the initial transform, help me make this part
of the documentation as good as the Python docs should be.
I think the excellent narrative Python documentation plays a big part
in Python's wide adoption.
I remember the docs for 1.5 -- not bad, but nothing like today's docs.

Also, please help me set up an hg clone for the tk documentation on
hg.python.org.
I tried:
andrew at tiktaalik2 ~/projects> hg clone ssh://hg at hg.python.org/cpython
ssh://hg at hg.python.org/sandbox/tkdocs
repo created, public URL is http://hg.python.org/sandbox/tkdocs
abort: clone from remote to remote not supported
http://hg.python.org/sandbox/tkdocs is empty.

Looks like I don't know Mercurial well enough to do it.
Once I have an online clone of cpython that is available to me, visible
to everyone, and write-accessible to Python committers
(I hope you will push updates as well), I will start.




-- 
Thanks,
Andrew Svetlov

From ethan at stoneleaf.us  Thu Apr  5 22:49:11 2012
From: ethan at stoneleaf.us (Ethan Furman)
Date: Thu, 05 Apr 2012 13:49:11 -0700
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
 (was: PEP 418: Add monotonic clock)
In-Reply-To: <20120405201508.GA29577@iskra.aviel.ru>
References: <20120403060317.GA31001@cskk.homeip.net>	<CAL0kPAXZ1AxqeVYZen9vzUYxKDG-hr1YFZ+vRRMLTN=hJDfKPQ@mail.gmail.com>	<4F7B2029.8010707@stoneleaf.us>	<CAL0kPAWWtk2KWWkBmVHMwCBXGC1k_bjG0wEbkjOt9+DDniTVYw@mail.gmail.com>	<20120404174449.GB25288@iskra.aviel.ru>	<4F7C8CD6.7090308@stoneleaf.us>	<20120404192436.GB27384@iskra.aviel.ru>	<4F7CA660.60205@stoneleaf.us>
	<20120404230503.GB314@iskra.aviel.ru>	<4F7DEAC0.2030207@stoneleaf.us>
	<20120405201508.GA29577@iskra.aviel.ru>
Message-ID: <4F7E0547.9060808@stoneleaf.us>

Oleg Broytman wrote:
> On Thu, Apr 05, 2012 at 11:56:00AM -0700, Ethan Furman wrote:
>> It's only an error if it's documented that way and, more
>> importantly, thought of that way.  The re module is a good example:
>> if it can't find what you're looking for it returns None -- it does
>> *not* raise a NotFound exception.
> 
>    But open() raises IOError. ''.find('a') returns -1 but ''.index('a')
> raises ValueError.
>    So we can argue in circles both ways, there are too many arguments
> pro and contra. Python is just too inconsistent to be consistently
> argued over. ;-)

Indeed -- I think we have reached an agreement!  Now if you'll just 
agree that returning None in this case is better... ;)

~Ethan~

From cs at zip.com.au  Fri Apr  6 00:05:56 2012
From: cs at zip.com.au (Cameron Simpson)
Date: Fri, 6 Apr 2012 08:05:56 +1000
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
 (was: PEP 418: Add monotonic clock)
In-Reply-To: <20120404230503.GB314@iskra.aviel.ru>
References: <20120404230503.GB314@iskra.aviel.ru>
Message-ID: <20120405220555.GA10777@cskk.homeip.net>

On 05Apr2012 03:05, Oleg Broytman <phd at phdru.name> wrote:
| On Wed, Apr 04, 2012 at 12:52:00PM -0700, Ethan Furman wrote:
| > Forced?  I do not use Python to be forced to use one style of
| > programming over another.
| 
|    Then it's strange you are using Python with its strict syntax
| (case-sensitivity, forced indents), ubiquitous exceptions, limited
| syntax of lambdas and absence of code blocks (read - forced functions),
| etc.

But exceptions are NOT ubiquitous, nor should they be. They're a very
popular and often apt way to handle certain circumstances, that's all.
-- 
Cameron Simpson <cs at zip.com.au> DoD#743
http://www.cskk.ezoshosting.com/cs/

On the one hand I knew that programs could have a compelling and deep logical
beauty, on the other hand I was forced to admit that most programs are
presented in a way fit for mechanical execution, but even if of any beauty at
all, totally unfit for human appreciation.      - Edsger W. Dijkstra

From cs at zip.com.au  Fri Apr  6 00:08:25 2012
From: cs at zip.com.au (Cameron Simpson)
Date: Fri, 6 Apr 2012 08:08:25 +1000
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
 (was: PEP 418: Add monotonic clock)
In-Reply-To: <20120405201508.GA29577@iskra.aviel.ru>
References: <20120405201508.GA29577@iskra.aviel.ru>
Message-ID: <20120405220824.GA11617@cskk.homeip.net>

On 06Apr2012 00:15, Oleg Broytman <phd at phdru.name> wrote:
|    So we can argue in circles both ways, there are too many arguments
| pro and contra. Python is just too inconsistent to be consistently
| argued over. ;-)

Bah! I think these threads demonstrate that we can consistently argue
over Python for weeks per topic, sometimes months and years.
-- 
Cameron Simpson <cs at zip.com.au> DoD#743
http://www.cskk.ezoshosting.com/cs/

Sam Jones <samjones at leo.unm.edu> on the Nine Types of User:

Frying Pan/Fire Tactician - "It didn't work with the data set we had, so I
                             fed in my aunt's recipe for key lime pie."
Advantages:     Will usually fix error.
Disadvantages:  'Fix' is defined VERY loosely here.
Symptoms:       A tendancy to delete lines that get errors instead of fixing
                them.
Real Case:      One user complained that their program executed, but didn't
                do anything.  The scon looked at it for twenty minutes before
                realizing that they'd commented out EVERY LINE.  The user
                said, "Well, that was the only way I could get it to compile."

From cs at zip.com.au  Fri Apr  6 00:17:58 2012
From: cs at zip.com.au (Cameron Simpson)
Date: Fri, 6 Apr 2012 08:17:58 +1000
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
In-Reply-To: <CAL0kPAUCaAYa-RsaN5Q2H_j+NT+9q4fFwDXLimg6wxuapYpnSg@mail.gmail.com>
References: <CAL0kPAUCaAYa-RsaN5Q2H_j+NT+9q4fFwDXLimg6wxuapYpnSg@mail.gmail.com>
Message-ID: <20120405221758.GA12229@cskk.homeip.net>

On 05Apr2012 10:21, Lennart Regebro <regebro at gmail.com> wrote:
| On Thu, Apr 5, 2012 at 01:10, Victor Stinner <victor.stinner at gmail.com> wrote:
| > Ok for the default, but what happens if the caller sets an option to
| > False? Does get_clock(monotonic=False) return a non-monotonic clock?
| > (I guess no, but it may be confusing.)

This is where the bitmap approach can be less confusing - the docstring
says "The returned clock shall have all the requested flags". It is at
least very predictable.

| Good point, but the same goes for using flags.

Only notionally. With a keyword argument the (lazy, non-doc-reading)
caller can imagine the default is None, and True and False specify
concrete positive and negative requirements. Not the case with a
bitmask, which only has two states per feature, not three (or
arbitrarily many, depending how nasty one wants to be - I could play
devil's advocate and ask for monotonic=0.7 and demand a competitive
evaluation of relative merits:-)

| If you don't pass in
| the MONOTONIC flag, what happens? Only reading the documentation will
| tell you.

Gah! ALL functions are like that! How often do we see questions about
max() or split() etc that a close reading of the docs would obviate?

| As such this, if anything, is an indication that the
| get_clock() API isn't ideal in any incarnation.

It's not meant to be ideal. I find that word almost useless in its
overuse. get_clock() is meant to be _very_ _often_ _useful_ and easy to
use for expressing simple fallback when the PEP418 monotonic() et al
calls don't fit.

For the truly arbitrary case the caller needs to be able to enumerate
all the available clocks and make their own totally ad hoc decision. My
current example code offers both public clock list names and get_clocks()
(just like get_clock() in signature, but returning all matches instead
of just the first one).
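
A rough sketch of the bitmask convention under discussion (get_clock(),
the flag names and the registry are all placeholders for the proposal,
not an existing API):

    import time

    MONOTONIC, HIGHRES, STEADY = 1, 2, 4   # placeholder feature flags

    # Toy registry of (advertised flags, callable) pairs; first match wins.
    _CLOCKS = [
        (0, time.time),
        (MONOTONIC | HIGHRES, getattr(time, "monotonic", time.time)),
    ]

    def get_clock(flags=0):
        # "The returned clock shall have all the requested flags";
        # return None when nothing matches.
        for clock_flags, clock in _CLOCKS:
            if (flags & ~clock_flags) == 0:
                return clock
        return None

    clock = get_clock(MONOTONIC) or time.time   # caller-chosen fallback
    # The keyword form debated above would spell the same request as
    # get_clock(monotonic=True), with an omitted keyword meaning "don't care".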

Cheers,
-- 
Cameron Simpson <cs at zip.com.au> DoD#743
http://www.cskk.ezoshosting.com/cs/

From the EXUP mailing list  <exup-brotherhood at Majordomo.net> ...
Wayne Girdlestone <WayneG at mega.co.za>:
WG> Let's say there are no Yamaha's or Kawa's in the world.
Stevey Racer <ssturm at co.la.ca.us>:
SR> sriw - so then you are saying that Revelations (from the Bible) has come
SR> true and Hell is now on Earth.
WG> Your choice for you new bike is either a new '98 fuel injected SRAD, or a
WG> new '98 Fireblade.
SR> sriw -The devil's minions - full of temptation but never fulfilling their
SR> promise.

From cs at zip.com.au  Fri Apr  6 00:24:03 2012
From: cs at zip.com.au (Cameron Simpson)
Date: Fri, 6 Apr 2012 08:24:03 +1000
Subject: [Python-Dev] PEP 418 is too divisive and confusing and should
	be postponed
In-Reply-To: <CAP7+vJ+9dTw7w9+iEG_P8tBBSr0w=4OLmrMt=DuYQ1F5h0E+ug@mail.gmail.com>
References: <CAP7+vJ+9dTw7w9+iEG_P8tBBSr0w=4OLmrMt=DuYQ1F5h0E+ug@mail.gmail.com>
Message-ID: <20120405222403.GA14243@cskk.homeip.net>

On 05Apr2012 09:56, Guido van Rossum <guido at python.org> wrote:
| On Thu, Apr 5, 2012 at 9:48 AM, PJ Eby <pje at telecommunity.com> wrote:
| > What's missing is that if you're using a monotonic clock for timeouts, then
| > a monotonically-adjusted system clock can do that, subject to the polling
| > frequency -- it does not break just because the system clock is set
| > backwards; it simply loses time proportional to the frequency with which it
| > is polled.
| 
| Depending on the polling frequency sounds like a bad idea, since you
| can't know that you're the only user of the clock.

You can if you're handed a shiny new "clock" object in some way, with a
not-a-singleton guarantee. Of course, such a clock is immediately
_less_ reliable for synchronisation with other clock users :-)

| Also depending on
| the use case, too short a timeout may be worse than too long a
| timeout. [...]
|
| > For timeout purposes in a single process, such a clock is useful. ?It just
| > isn't suitable for benchmarks, or for interprocess coordination.
| 
| I think it would be better if the proposed algorithm (or whatever
| algorithm to "fix" timeouts) was implemented by the
| application/library code using the timeout (or provided as a separate
| library function), rather than by the clock, since the clock can't
| know what fallback behavior the app/lib needs.

Absolutely. I for one would be happy with a clocktools module or
something offering a bunch of synthetic clocks. Especially if they were
compatible in API with whatever clock objects the core time module
clocks used, so that a user _could_ add them into the pick-a-clock
decision easily.

Cheers,
-- 
Cameron Simpson <cs at zip.com.au> DoD#743
http://www.cskk.ezoshosting.com/cs/

I'd be careful who you call smart or not smart. Smart isn't knowing how to
save six bytes. Smart is knowing WHEN. - Peter Cherna, Amiga O.S. Development

From victor.stinner at gmail.com  Fri Apr  6 00:27:16 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Fri, 06 Apr 2012 00:27:16 +0200
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
In-Reply-To: <20120405221758.GA12229@cskk.homeip.net>
References: <CAL0kPAUCaAYa-RsaN5Q2H_j+NT+9q4fFwDXLimg6wxuapYpnSg@mail.gmail.com>
	<20120405221758.GA12229@cskk.homeip.net>
Message-ID: <4F7E1C44.5040005@gmail.com>

On 06/04/2012 00:17, Cameron Simpson wrote:
> This is where the bitmap approach can be less confusing - the docstring
> says "The returned clock shall have all the requested flags". It is at
> least very predictable.

By the way, I removed ("deferred") the time.highres() function from the 
PEP, and I try to avoid the term "steady" because no OS clock respects
the definition of "steady" (especially in corner cases such as system
suspend/resume). So which flags do you want to support? (only "monotonic"?)

Basically, get_clock("monotonic") should give time.monotonic() whereas 
get_clock() gives time.time()?

Victor

From cs at zip.com.au  Fri Apr  6 00:34:57 2012
From: cs at zip.com.au (Cameron Simpson)
Date: Fri, 6 Apr 2012 08:34:57 +1000
Subject: [Python-Dev] PEP 418 is too divisive and confusing and should
	be postponed
In-Reply-To: <CANdZDc5q7T22abPwoyTLSiEFyF5qUbrO9T_pL8mYM3dytcvwVw@mail.gmail.com>
References: <CANdZDc5q7T22abPwoyTLSiEFyF5qUbrO9T_pL8mYM3dytcvwVw@mail.gmail.com>
Message-ID: <20120405223457.GA15346@cskk.homeip.net>

On 05Apr2012 13:39, Zooko Wilcox-O'Hearn <zooko at zooko.com> wrote:
| Good job, Victor Stinner on baking the accumulated knowledge of this
| thread into PEP 418. Even though I'm very interested in the topic, I
| haven't been able to digest the whole thread(s) on the list and
| understand what the current collective understanding is.

There isn't a collective understanding :-) That's why all the noise!

| The detailed
| PEP document helps a lot.

Yes indeed, though like all of us I think it could (always) use more
detail on my pet concerns.

| I think there are still some mistakes, either in our collective
| understanding as reflected by the PEP, or in my own head.
| 
| For starters, I still don't understand the first, most basic thing:
| what do people mean when they say "monotonic clock"? I don't
| understand the current text of PEP 418 with regard to the definition
| of that word.

A monotonic clock never returns t0 > t1 for t0, t1 being two adjacent
polls of the clock. On its own it says nothing about steadiness or
correlation with real world time.

_Quality of implementation_ says that the monotonic() call should try to
return a clock that is monotonic and _also_ steady and preferably
precise. How these things are balanced is a matter of policy.

| Allow me to resort to an analogy. There is an infinitely long,
| perfectly straight and flat racetrack. There is a flag that gets
| dragged along it at a constant rate, with the label "REAL TIME" on the
| flag. There are some runners, each with a different label on their
| chest:
| 
| Runner A: a helicopter hovers over Runner A. Occasionally it picks him
| up and plops him down right next to the flag. Also, he wears a headset
| and listens to instructions from his coach to run a little faster or
| slower, as necessary, to remain abreast of the flag.

If he always runs forwards, he is monotonic. Not very steady when the
helicopter comes to play.

| Runner B: a helicopter hovers over Runner B. If he is behind the flag,
| it will pick him up and plop him down right next to the flag. However,
| if he is ahead of the flag it will not pick him up.

Seems like runner A without instruction. Monotonic. Not very steady.

| Runner C: no helicopter ever picks up Runner C, but he does wear a
| headset and listens to instructions from his coach to run a little
| faster or a little slower. His coach tells him to run a little faster
| if he is behind the flag or run a little slower if he is in front of
| the flag, with the goal of eventually having him right next to the
| flag.

If he always runs forward, monotonic. And steady.

| Runner D: like Runner C, he never gets picked up, but he listens to
| instructions to run a little faster or a little slower. However,
| instead of telling him to run faster in order to catch up to the flag,
| or to run slower in order to "catch down" to the flag, his coach
| instead tells him to run a little faster if he is moving slower than
| the flag is moving, and to run a little slower if he is moving faster
| than the flag is moving. Note that this is very different from Runner
| C, in that it is not intended to cause him to eventually be right next
| to the flag, and indeed if it is done right it guarantees that he will
| *never* be right next to the flag, although he will be moving just as
| fast as the flag is moving.
| 
| Runner E: no helicopter, no headset. He just proceeds at his own pace,
| blissfully unaware of the exhortations of others.
| 
| Now: which ones of these five runners do you call "monotonic"? Which
| ones do you call "steady"?

If they all run forwards, they're all monotonic.

If their coach or helicopter can move them _backwards_ they're not
monotonic. If the helicopter can move them an arbitrary (but matching
the game plan) distance, they're not steady. Otherwise they are steady,
if the runner's speed is always sufficiently close to the flag speed
(this threshold and the criteria for measuring it are subject to debate,
forming policy).

And "high resolution" has its own flavours, though generally less
contentious.

Cheers,
-- 
Cameron Simpson <cs at zip.com.au> DoD#743
http://www.cskk.ezoshosting.com/cs/

If you don't shoot the fish in your barrel, your barrel will soon be
full of fish. - Tim Mefford

From cs at zip.com.au  Fri Apr  6 00:51:54 2012
From: cs at zip.com.au (Cameron Simpson)
Date: Fri, 6 Apr 2012 08:51:54 +1000
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
In-Reply-To: <4F7E1C44.5040005@gmail.com>
References: <4F7E1C44.5040005@gmail.com>
Message-ID: <20120405225154.GA17794@cskk.homeip.net>

On 06Apr2012 00:27, Victor Stinner <victor.stinner at gmail.com> wrote:
| Le 06/04/2012 00:17, Cameron Simpson a ?crit :
| > This is where the bitmap approach can be less confusing - the docstring
| > says "The returned clock shall have all the requested flags". It is at
| > least very predictable.
| 
| By the way, I removed ("deferred") the time.highres() function from the 
| PEP,

Chuckle; was not the whole PEP for a high res clock?

| and I try to avoid the term "steady" because no OS clock respect 
| the definition of "steady" (especially in corner cases as system 
| suspend/resume).

I can think of definitions of "steady" that I personally would accept,
and they would allow suspend/resume to be concealed (I guess I
would usually want - purely myself here - a clock representing system
run time; I'd go for time.time() for wall clock).

| So which flags do you want to support? (only "monotonic"?)

I'd stay with my current list, with metadata in the clock objects
indicating what _flavour_ of "steady" or "high res" they present.

| Basically, get_clock("monotonic") should give time.monotonic() whereas 

If time.monotonic() never falls back to a non-monotonic source, yes.

| get_clock() gives time.time()?

Might in theory give something better, but time.time() would always be a
valid result if nothing else seemed better to the module author. I imagine
in practice that time.time() might always use the "best" clock absent
special requirements. So you'd probably get what particular clock used to
implement time.time(), yes. (I realise this has interesting implications
for the list orders; time.time() would come _first_, but providing feature
flags to get_clock() can cause it not to be chosen when it doesn't match.)

This is a reason why I think we should present (even privately only) all the
system clocks for a platform. Then you _can_ still offer highres() and
steady() with detailed qualifications in the docs as to what
considerations went into accepting a clock as highres or steady, and
therefore why some users may find them unsatisfactory, i.e. under what
sort of circumstances/requirements they may not suit.

Any of the monotonic()/highres()/steady() calls represent policy decisions by
the module author; it is just that monotonic() is easier to qualify than
the others: "never goes backwards in return value". Even though VMs and
system suspend can add depth to the arguments.

It _is_ useful for people to be able to reach for highres() or steady()
a lot of the time; they do, though, need to be able to decide if that's
sensible.

Cheers,
-- 
Cameron Simpson <cs at zip.com.au> DoD#743
http://www.cskk.ezoshosting.com/cs/

I thought back to other headaches from my past and sneered at their
ineffectiveness.        - Harry Harrison

From ericsnowcurrently at gmail.com  Fri Apr  6 01:09:42 2012
From: ericsnowcurrently at gmail.com (Eric Snow)
Date: Thu, 5 Apr 2012 17:09:42 -0600
Subject: [Python-Dev] PEP 418 is too divisive and confusing and should
	be postponed
In-Reply-To: <CAP7+vJ+9dTw7w9+iEG_P8tBBSr0w=4OLmrMt=DuYQ1F5h0E+ug@mail.gmail.com>
References: <4F7B96F1.6020906@pearwood.info> <4F7BA3C2.4050705@gmail.com>
	<CAMpsgwbC7HgWZ+4-PDJfXD+=iD+h7NcDMo-pPVW_8fKwziM2ww@mail.gmail.com>
	<CALeMXf6vzd56QAmssvr5c7xQ+trP0-Cu4E5L793UNyH3Gu+hSw@mail.gmail.com>
	<CAMpsgwbwW81epcdv66GLv4dXLmYLfT6VU8VE85hv_tUKcuPv5Q@mail.gmail.com>
	<CALeMXf68n_OigrgbXuGZe=b1d0F+oQESeCtZsWrEDDJbY3pTbA@mail.gmail.com>
	<CAP7+vJ+9dTw7w9+iEG_P8tBBSr0w=4OLmrMt=DuYQ1F5h0E+ug@mail.gmail.com>
Message-ID: <CALFfu7BYUyPJVFyMCZs8EopcEq2pjR+xjoUhwEFd-caQxRb-dA@mail.gmail.com>

On Apr 5, 2012 11:01 AM, "Guido van Rossum" <guido at python.org> wrote:
> I think it would be better if the proposed algorithm (or whatever
> algorithm to "fix" timeouts) was implemented by the
> application/library code using the timeout (or provided as a separate
> library function), rather than by the clock, since the clock can't
> know what fallback behavior the app/lib needs.

+1

-eric

From cs at zip.com.au  Fri Apr  6 02:48:12 2012
From: cs at zip.com.au (Cameron Simpson)
Date: Fri, 6 Apr 2012 10:48:12 +1000
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
In-Reply-To: <20120405225154.GA17794@cskk.homeip.net>
References: <20120405225154.GA17794@cskk.homeip.net>
Message-ID: <20120406004812.GA32055@cskk.homeip.net>

On 06Apr2012 08:51, I wrote:
| On 06Apr2012 00:27, Victor Stinner <victor.stinner at gmail.com> wrote:
| | By the way, I removed ("deferred") the time.highres() function from the 
| | PEP,
| 
| Chuckle; was not the whole PEP for a high res clock?

Gah. I see it was for monotonic, not high res. Sorry.

[...]
| I can think of definitions of "steady" that I personally would accept,
| and they would allow suspend/resume to be concealed (I guess I
| would usually want - purely myself here - a clock representing system
| run time; I'd go for time.time() for wall clock).
| 
| | So which flags do you want to support? (only "monotonic"?)
| 
| I'd stay with my current list, with metadata in the clock objects
| indicating what _flavour_ of "steady" or "high res" they present.

On reflection, since I have historically presumed time.time() on UNIX
mapped to "man 2 time", I clearly think that time.time() is wall clock
time, and may jump when the sysadmin notices it is incorrect (of course
this is often mediated by NTP, which in turn is usually mediated by
some ntpd using adjtime(), which slews instead of jumping). But it might
jump. (I'm intending to jump a wayward VM today, in fact :-)

So I guess I expect time.time() to be only usually steady. And usually
monotonic. So having neither flag. Do I want a WALLCLOCK flag? Meaning a
clock that is supposed to be real world time (did I see REALTIME in one
of your examples?), and may be almost arbitrarily corrected to match real
world time if it is wrong. Maybe. +0 on that I think.

Basicly I'm distinguishing here between a clock used for timestamps, for
example in log entries, and a clock used for measuring elapsed system
run time, for example in benchmarking. I would want to log entries to
match what a clock on the wall should say.

So I think I'm _still_ for the three original flags I suggested
(monotonic, high res, steady) and expect time.time() to not necessarily
meet any of them. But to meet a hypothetical WALLCLOCK flag.

Regarding UNIX time(2) (or POSIX time(3)), POSIX says:

  The time() function shall return the value of time in seconds since
  the Epoch.

and the epoch is a date. So UNIX time() should be a wall clock.
Python "help(time.time)" says:

  Return the current time in seconds since the Epoch.

So I think it should also be a wall clock by that same reasoning.

Cheers,
-- 
Cameron Simpson <cs at zip.com.au> DoD#743
http://www.cskk.ezoshosting.com/cs/

Uh, this is only temporary...unless it works.   - Red Green

From greg.ewing at canterbury.ac.nz  Fri Apr  6 03:14:11 2012
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Fri, 06 Apr 2012 13:14:11 +1200
Subject: [Python-Dev] PEP 418 is too divisive and confusing and should
 be postponed
In-Reply-To: <20120405223457.GA15346@cskk.homeip.net>
References: <CANdZDc5q7T22abPwoyTLSiEFyF5qUbrO9T_pL8mYM3dytcvwVw@mail.gmail.com>
	<20120405223457.GA15346@cskk.homeip.net>
Message-ID: <4F7E4363.8090409@canterbury.ac.nz>

Cameron Simpson wrote:

> A monotonic clock never returns t0 > t1 for t0, t1 being two adjacent
> polls of the clock. On its own it says nothing about steadiness or
> correlation with real world time.

No, no, no.

This is the strict mathematical meaning of the word "monotonic",
but the way it's used in relation to OS clocks, it seems to
mean rather more than that.

A clock whose only guarantee is that it never goes backwards
is next to useless. For things like benchmarks and timeouts,
the important thing about a clock is that it *keeps going forward*
at a reasonably constant rate. On the other hand, it can have
an arbitrary starting point and doesn't have to be related
to any external time standard.

I'm assuming this is what Linux et al mean when they talk
about a "monotonic clock", because anything else doesn't make
sense.

So if we're going to use the term "monotonic" at all, I think we
should explicitly define it as having this meaning, i.e.
both mathematically monotonic and steady. Failure to be clear
about this has caused a huge amount of confusion in this thread
so far.

-- 
Greg

From cs at zip.com.au  Fri Apr  6 03:50:17 2012
From: cs at zip.com.au (Cameron Simpson)
Date: Fri, 6 Apr 2012 11:50:17 +1000
Subject: [Python-Dev] PEP 418 is too divisive and confusing and should
	be postponed
In-Reply-To: <4F7E4363.8090409@canterbury.ac.nz>
References: <4F7E4363.8090409@canterbury.ac.nz>
Message-ID: <20120406015017.GA24126@cskk.homeip.net>

On 06Apr2012 13:14, Greg Ewing <greg.ewing at canterbury.ac.nz> wrote:
| Cameron Simpson wrote:
| > A monotonic clock never returns t0 > t1 for t0, t1 being two adjacent
| > polls of the clock. On its own it says nothing about steadiness or
| > correlation with real world time.
|
| No, no, no.
| This is the strict mathematical meaning of the word "monotonic",
| but the way it's used in relation to OS clocks, it seems to
| mean rather more than that.

It's like my next paragraph didn't exist...

  _Quality of implementation_ says that the monotonic() call should try to
  return a clock that is monotonic and _also_ steady and preferably precise.
  How these things are balanced is a matter of policy.

To forestall things, right at the bottom of this I'm going to say I agree
with you about objectives, but not about terminology. I maintain that
"monotonic" still means what I said, and that it is the combination of
the word with "clock" that brings in your other criteria.

| A clock whose only guarantee is that it never goes backwards
| is next to useless. For things like benchmarks and timeouts,
| the important thing about a clock is that it *keeps going forward*
| at a reasonably constant rate. [...]
| I'm assuming this is what Linux et al mean when they talk
| about a "monotonic clock", because anything else doesn't make
| sense.

This is why I say there's no global understanding. Why assume?
On a Linux box, "man 3 clock_getres" says (sorry, this is a bit wordy):

   All  implementations  support the system-wide real-time clock, which
   is identified by CLOCK_REALTIME.  Its time represents seconds and
   nanoseconds  since the Epoch.  When its time is changed, timers for
   a relative interval are unaffected, but timers for an absolute point
   in time  are affected.

   More  clocks may be implemented.  The interpretation of the
   corresponding time values and the effect on timers is unspecified.

   Sufficiently recent versions of glibc and the Linux kernel support
   the following clocks:

   CLOCK_REALTIME
      System-wide real-time clock.  Setting this clock requires
      appropriate privileges.

   CLOCK_MONOTONIC
      Clock that cannot be set and  represents  monotonic  time since
      some unspecified starting point.

   CLOCK_MONOTONIC_RAW (since Linux 2.6.28; Linux-specific)
      Similar  to  CLOCK_MONOTONIC, but provides access to a raw hard-
      ware-based time that is not subject to NTP adjustments.

   CLOCK_PROCESS_CPUTIME_ID
      High-resolution per-process timer from the CPU.

   CLOCK_THREAD_CPUTIME_ID
      Thread-specific CPU-time clock.

The first paragraph is very clear about real time (wall clock, and what
time.time() does, being "seconds since the epoch"). The CLOCK_MONOTONIC*
modes clearly imply steadiness.

"man 3p clock_getres" (POSIX clock_getres) is even more verbose and general.

| So if we're going to use the term "monotonic" at all, I think we
| should explicitly define it as having this meaning, i.e.
| both mathematically monotonic and steady.

I think if it is too "unsteady" then it is not "time".

So I actually side with you in the requirement for a "clock", but
monotonic alone does not mean that. Quality of implementation _may_ mean
we don't offer something abjectly erratic.

| Failure to be clear
| about this has caused a huge amount of confusion in this thread
| so far.

And burdening the word "monotonic" itself with exciting new meanings
doesn't help. "monotonic clock", sure, _that_ has additional
connotations, but not just the word monotonic.

Cheers,
-- 
Cameron Simpson <cs at zip.com.au> DoD#743
http://www.cskk.ezoshosting.com/cs/

We had the experience, but missed the meaning.  - T.S. Eliot

From steve at pearwood.info  Fri Apr  6 04:23:07 2012
From: steve at pearwood.info (Steven D'Aprano)
Date: Fri, 06 Apr 2012 12:23:07 +1000
Subject: [Python-Dev] PEP 418 is too divisive and confusing and should
 be postponed
In-Reply-To: <4F7E4363.8090409@canterbury.ac.nz>
References: <CANdZDc5q7T22abPwoyTLSiEFyF5qUbrO9T_pL8mYM3dytcvwVw@mail.gmail.com>	<20120405223457.GA15346@cskk.homeip.net>
	<4F7E4363.8090409@canterbury.ac.nz>
Message-ID: <4F7E538B.4030201@pearwood.info>

Greg Ewing wrote:
> Cameron Simpson wrote:
> 
>> A monotonic clock never returns t0 > t1 for t0, t1 being two adjacent
>> polls of the clock. On its own it says nothing about steadiness or
>> correlation with real world time.
> 
> No, no, no.
> 
> This is the strict mathematical meaning of the word "monotonic",
> but the way it's used in relation to OS clocks, it seems to
> mean rather more than that.
> 
> A clock whose only guarantee is that it never goes backwards
> is next to useless. For things like benchmarks and timeouts,
> the important thing about a clock is that it *keeps going forward*

That would be a *strictly* monotonic clock, as opposed to merely monotonic.

And yes, a merely monotonic clock could just return a constant value, forever:

9, 9, 9, 9, 9, ...

and yes, such a thing would be useless.

Various people have suggested caching the last value of time() and re-using it 
if the new value is in the past. This will give a monotonic clock, but since 
it can give constant timestamps for an indefinite period, its usefulness is 
limited.
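
For concreteness, a minimal sketch of that caching approach (assuming plain 
time.time() as the underlying source):

from time import time

_last = 0.0

def cached_time():
    # Return time(), but never less than any previously returned value.
    global _last
    now = time()
    if now > _last:
        _last = now
    return _last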

I earlier put forward an alternate implementation which gives no more than one 
such constant tick in a row. If you know the hardware resolution of the clock, 
you can even avoid that single constant tick by always advancing the timestamp 
by that minimum resolution:

from time import time

_prev = _prev_raw = 0.0
_res = 1e-9  # nanosecond resolution

def monotonic():
    global _prev, _prev_raw
    raw = time()
    # Advance by at least one resolution step, even if time() stood
    # still or jumped backwards since the previous call.
    delta = max(_res, raw - _prev_raw)
    _prev_raw = raw
    _prev += delta
    return _prev

Even if time() jumps backwards, or stays constant, monotonic() here will be 
strictly monotonic.
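
As a quick hypothetical usage of the sketch above:

start = monotonic()
# ... some work, during which time() may be stepped backwards by NTP ...
elapsed = monotonic() - start   # never negative, unlike a plain time() difference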



-- 
Steven

From stephen at xemacs.org  Fri Apr  6 04:57:20 2012
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Fri, 6 Apr 2012 11:57:20 +0900
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
 (was: PEP 418: Add monotonic clock)
In-Reply-To: <20120405152217.GA22311@iskra.aviel.ru>
References: <4F7B2029.8010707@stoneleaf.us>
	<CAL0kPAWWtk2KWWkBmVHMwCBXGC1k_bjG0wEbkjOt9+DDniTVYw@mail.gmail.com>
	<20120404174449.GB25288@iskra.aviel.ru>
	<4F7C8CD6.7090308@stoneleaf.us>
	<20120404192436.GB27384@iskra.aviel.ru>
	<4F7CA660.60205@stoneleaf.us> <20120404230503.GB314@iskra.aviel.ru>
	<CAL_0O18KqdYTDhj5xx2FPmNfiFGj7Esie-BVEa_exWKejg7F8Q@mail.gmail.com>
	<20120405133411.GC17105@iskra.aviel.ru>
	<CAL_0O19C6iDmQE2utpf_Zby9JudjB7uf5iWz8ga0gdENtLb-rA@mail.gmail.com>
	<20120405152217.GA22311@iskra.aviel.ru>
Message-ID: <CAL_0O18j+nRWQ5FrBkVeMYCLFZo50MtuaBAib5fki6BtPLvn5Q@mail.gmail.com>

On Fri, Apr 6, 2012 at 12:22 AM, Oleg Broytman <phd at phdru.name> wrote:
> On Thu, Apr 05, 2012 at 11:45:06PM +0900, Stephen J. Turnbull wrote:
>> On Thu, Apr 5, 2012 at 10:34 PM, Oleg Broytman <phd at phdru.name> wrote:
>> > Why doesn't open() return None for a non-existing file? or
>> > socket.gethostbyname() for a non-existing name?
>>
>> That's not an answer to my question, because those calls have very
>> important use cases

Note, implicit existential quantifier.

> Counterexamples

Not an argument against an existential quantifier.

> But Python doesn't allow [use of conditional constructs when opening a series of files, one must trap exceptions].

True.  Python needs to make a choice, and the existence of important
cases where the user knows that the object (file) exists makes it
plausible that the user would prefer an Exception.  Also, open() is
intended to be a fairly thin wrapper over the OS facility, and often
the OS terms a missing file an "error".

I might have chosen to implement a 'None' return if I had designed
open(), but I can't get too upset about raising an Exception as it
actually does.

What I want to know is why you're willing to assert that absence of a
clock of a particular configuration is an Exception, when that absence
is clearly documented to be a common case?  I don't find your analogies
to be plausible.  They seem to come down to "sometimes in Python we've
made choices that impose extra work on some use cases, so we should
impose extra work on this use case too."  But that surely isn't what
you mean.

From zooko at zooko.com  Fri Apr  6 05:07:12 2012
From: zooko at zooko.com (Zooko Wilcox-O'Hearn)
Date: Thu, 5 Apr 2012 21:07:12 -0600
Subject: [Python-Dev] this is why we shouldn't call it a "monotonic clock"
 (was: PEP 418 is too divisive and confusing and should be postponed)
Message-ID: <CANdZDc527cvHGbQGnNgPBHu673sDi1YOBS3wkUbOejwYQ8BBow@mail.gmail.com>

On Thu, Apr 5, 2012 at 7:14 PM, Greg Ewing <greg.ewing at canterbury.ac.nz> wrote:
>
> This is the strict mathematical meaning of the word "monotonic", but the way it's used in relation to OS clocks, it seems to mean rather more than that.

Yep. As far as I can tell, nobody has a use for an unsteady, monotonic clock.

There seem to be two groups of people:

1. Those who think that "monotonic clock" means a clock that never
goes backwards. These people are in the majority. After all, that's
what the word "monotonic" means ? . However, a clock which guarantees
*only* this is useless.

2. Those who think that "monotonic clock" means a clock that never
jumps, and that runs at a rate approximating the rate of real time.
This is a very useful kind of clock to have! It is what C++ now calls
a "steady clock". It is what all the major operating systems provide.

The people in class 1 are more correct, technically, and far more
numerous, but the concept from 1 is a useless concept that should be
forgotten.

So before proceeding, we should mutually agree that we have no
interest in implementing a clock of type 1. It wouldn't serve anyone's
use case (correct me if I'm wrong!) and the major operating systems
don't offer such a thing anyway.

Then, if we all agree to stop thinking about that first concept, then
we need to agree whether we're all going to use the word "monotonic
clock" to refer to the second concept, or if we're going to use a
different word (such as "steady clock") to refer to the second
concept. I would prefer the latter, as it will relieve us of the need
to repeatedly explain to newcomers: "That word doesn't mean what you
think it means.".

The main reason to use the word "monotonic clock" to refer to the
second concept is that POSIX does so, but since Mac OS X, Solaris,
Windows, and C++ have all avoided following POSIX's mistake, I think
Python should too.

Regards,

Zooko

[1] http://mathworld.wolfram.com/MonotonicSequence.html

From cs at zip.com.au  Fri Apr  6 05:36:20 2012
From: cs at zip.com.au (Cameron Simpson)
Date: Fri, 6 Apr 2012 13:36:20 +1000
Subject: [Python-Dev] this is why we shouldn't call it a "monotonic
 clock" (was: PEP 418 is too divisive and confusing and should be postponed)
In-Reply-To: <CANdZDc527cvHGbQGnNgPBHu673sDi1YOBS3wkUbOejwYQ8BBow@mail.gmail.com>
References: <CANdZDc527cvHGbQGnNgPBHu673sDi1YOBS3wkUbOejwYQ8BBow@mail.gmail.com>
Message-ID: <20120406033619.GA9531@cskk.homeip.net>

On 05Apr2012 21:07, Zooko Wilcox-O'Hearn <zooko at zooko.com> wrote:
| On Thu, Apr 5, 2012 at 7:14 PM, Greg Ewing <greg.ewing at canterbury.ac.nz> wrote:
| > This is the strict mathematical meaning of the word "monotonic",
| > but the way it's used in relation to OS clocks, it seems to mean rather
| > more than that.
| 
| Yep. As far as I can tell, nobody has a use for an unsteady, monotonic clock.

Well, not for a wildly unsteady monotonic clock.

| There seem to be two groups of people:
| 1. Those who think that "monotonic clock" means a clock that never
| goes backwards.

I will always fall into this category.

| These people are in the majority. After all, that's
| what the word "monotonic" means [1]. However, a clock which guarantees
| *only* this is useless.

Sure. I wouldn't have much use for a clock that was only monotonic, but
the word "clock" implies a bit more all on its own, so I am undisturbed.

| 2. Those who think that "monotonic clock" means a clock that never
| jumps, and that runs at a rate approximating the rate of real time.

If they're calling it "monotonic" on that basis alone, they are
wrong. Pure and simple.

| This is a very useful kind of clock to have! It is what C++ now calls
| a "steady clock". It is what all the major operating systems provide.

Sure. So _call_ it a steady clock!

| The people in class 1 are more correct, technically, and far more
| numerous, but the concept from 1 is a useless concept that should be
| forgotten.
| 
| So before proceeding, we should mutually agree that we have no
| interest in implementing a clock of type 1. It wouldn't serve anyone's
| use case (correct me if I'm wrong!) and the major operating systems
| don't offer such a thing anyway.

Bah! They are not disjoint sets of clocks!

Linux' CLOCK_MONOTONIC_RAW is both type 1 and type 2.

| Then, if we all agree to stop thinking about that first concept, then
| we need to agree whether we're all going to use the word "monotonic
| clock" to refer to the second concept,

No.

| or if we're going to use a
| different word (such as "steady clock") to refer to the second
| concept. I would prefer the latter, as it will relieve us of the need
| to repeatedly explain to newcomers: "That word doesn't mean what you
| think it means.".

Yes. Resorting to The Princess Bride to resolve bad terminology is only
funny in a movie, and should be a Big Clue that the term is either being
misused or too badly understood.

| The main reason to use the word "monotonic clock" to refer to the
| second concept is that POSIX does so, but since Mac OS X, Solaris,
| Windows, and C++ have all avoided following POSIX's mistake, I think
| Python should too.

No. If it is not monotonic, DO NOT CALL IT monotonic. Call it steady,
perhaps, if it _is_ steady (within some threshold of course).

But CLOCK_MONOTONIC_RAW is type 1 and 2, and is thus a "steady
monotonic" clock. Probably a good choice for both.

We can argue about what characteristics a useful clock has.
And we can argue about what characteristics the various OS clocks
possess.

But please DO NOT invent a new and misleading meaning for a well defined
word. "monotonic" alone is such a word, and means just one thing.
"monotonic clock" means _more_, but isn't always a requirement; "steady
clock" seems more commonly wanted.

Except of course that some participants say "steady clock" is a
nonsensical term.
-- 
Cameron Simpson <cs at zip.com.au> DoD#743
http://www.cskk.ezoshosting.com/cs/

Politics, n. Strife of interests masquerading as a contest of principles.
- Ambrose Bierce, _The_Devil's_Dictionary_

From steve at pearwood.info  Fri Apr  6 06:31:22 2012
From: steve at pearwood.info (Steven D'Aprano)
Date: Fri, 06 Apr 2012 14:31:22 +1000
Subject: [Python-Dev] this is why we shouldn't call it a "monotonic
 clock" (was: PEP 418 is too divisive and confusing and should be postponed)
In-Reply-To: <20120406033619.GA9531@cskk.homeip.net>
References: <CANdZDc527cvHGbQGnNgPBHu673sDi1YOBS3wkUbOejwYQ8BBow@mail.gmail.com>
	<20120406033619.GA9531@cskk.homeip.net>
Message-ID: <4F7E719A.2090107@pearwood.info>

Cameron Simpson wrote:

> | The main reason to use the word "monotonic clock" to refer to the
> | second concept is that POSIX does so, but since Mac OS X, Solaris,
> | Windows, and C++ have all avoided following POSIX's mistake, I think
> | Python should too.
> 
> No. If it is not monotonic, DO NOT CALL IT monotonic. Call it steady,
> perhaps, if it _is_ steady (within some threshold of course).

Um, steady is a stronger promise than monotonic. This is a monotonic sequence:

1, 2, 99, 100, 101, 102, 103, 199, 200, 201, 999

But it isn't steady, because it jumps forward.

Here is a non-monotonic sequence:

1, 2, 3, 4, 5, 6, 7, 2, 3, 4, 5, 6, 7, 8

This isn't steady either, because it jumps backwards.

To be steady, it MUST also be monotonic. If you think that it is appropriate 
to call a non-monotonic clock "steady", then I think you should tell us what 
you mean by "steady but not monotonic".



-- 
Steven

From cs at zip.com.au  Fri Apr  6 07:19:45 2012
From: cs at zip.com.au (Cameron Simpson)
Date: Fri, 6 Apr 2012 15:19:45 +1000
Subject: [Python-Dev] this is why we shouldn't call it a "monotonic
 clock" (was: PEP 418 is too divisive and confusing and should be postponed)
In-Reply-To: <4F7E719A.2090107@pearwood.info>
References: <4F7E719A.2090107@pearwood.info>
Message-ID: <20120406051945.GA20040@cskk.homeip.net>

On 06Apr2012 14:31, Steven D'Aprano <steve at pearwood.info> wrote:
| Cameron Simpson wrote:
| > | The main reason to use the word "monotonic clock" to refer to the
| > | second concept is that POSIX does so, but since Mac OS X, Solaris,
| > | Windows, and C++ have all avoided following POSIX's mistake, I think
| > | Python should too.
| > 
| > No. If it is not monotonic, DO NOT CALL IT monotonic. Call it steady,
| > perhaps, if it _is_ steady (within some threshold of course).
| 
| Um, steady is a stronger promise than monotonic. This is a monotonic sequence:
| 
| 1, 2, 99, 100, 101, 102, 103, 199, 200, 201, 999
| 
| But it isn't steady, because it jumps forward.

Sure.

| Here is a non-monotonic sequence:
| 
| 1, 2, 3, 4, 5, 6, 7, 2, 3, 4, 5, 6, 7, 8
| 
| This isn't steady either, because it jumps backwards.
| 
| To be steady, it MUST also be monotonic. If you think that it is appropriate 
| to call a non-monotonic clock "steady", then I think you should tell us what 
| you mean by "steady but not monotonic".

I took steady to mean "never jumps more than x", for "x" being "small",
and allowing small negatives. If steady implies monotonic and people
agree that that is so, I'm happy too, and happy that steady is a better
aspiration than merely monotonic.
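
A sketch of the two readings as code (hypothetical helper functions;
"steady" here is the looser sense that tolerates small negative steps):

def is_monotonic(samples):
    # Strict mathematical sense: successive samples never decrease.
    return all(b >= a for a, b in zip(samples, samples[1:]))

def is_steady(samples, x):
    # Looser sense: no consecutive step larger than x in either direction.
    return all(abs(b - a) <= x for a, b in zip(samples, samples[1:]))

print(is_monotonic([1, 2, 99, 100, 101]))      # True, despite a big jump forward
print(is_monotonic([1, 2, 3, 2.999, 4]))       # False, a tiny step backwards
print(is_steady([1, 2, 3, 2.999, 4], x=1.5))   # True under the looser reading
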
-- 
Cameron Simpson <cs at zip.com.au> DoD#743
http://www.cskk.ezoshosting.com/cs/

I understand a fury in your words, but not your words.  - William Shakespeare

From glyph at twistedmatrix.com  Fri Apr  6 08:39:44 2012
From: glyph at twistedmatrix.com (Glyph Lefkowitz)
Date: Thu, 5 Apr 2012 23:39:44 -0700
Subject: [Python-Dev] this is why we shouldn't call it a "monotonic
	clock" (was: PEP 418 is too divisive and confusing and should
	be postponed)
In-Reply-To: <CANdZDc527cvHGbQGnNgPBHu673sDi1YOBS3wkUbOejwYQ8BBow@mail.gmail.com>
References: <CANdZDc527cvHGbQGnNgPBHu673sDi1YOBS3wkUbOejwYQ8BBow@mail.gmail.com>
Message-ID: <5741767C-F616-490F-917E-0801DA64BE47@twistedmatrix.com>


On Apr 5, 2012, at 8:07 PM, Zooko Wilcox-O'Hearn wrote:

> On Thu, Apr 5, 2012 at 7:14 PM, Greg Ewing <greg.ewing at canterbury.ac.nz> wrote:
>> 
>> This is the strict mathematical meaning of the word "monotonic", but the way it's used in relation to OS clocks, it seems to mean rather more than that.
> 
> Yep. As far as I can tell, nobody has a use for an unsteady, monotonic clock.
> 
> There seem to be two groups of people:
> 
> 1. Those who think that "monotonic clock" means a clock that never
> goes backwards. These people are in the majority. After all, that's
> what the word "monotonic" means [1]. However, a clock which guarantees
> *only* this is useless.

While this is a popular view on this list and in this discussion, it is also a view that seems to contradict quite a lot that has been written on the subject, and seems contrary to the usual jargon when referring to clocks.

> 2. Those who think that "monotonic clock" means a clock that never
> jumps, and that runs at a rate approximating the rate of real time.
> This is a very useful kind of clock to have! It is what C++ now calls
> a "steady clock". It is what all the major operating systems provide.

All clocks run at a rate approximating the rate of real time.  That is very close to the definition of the word "clock" in this context.  All clocks have flaws in that approximation, and really those flaws are the whole point of access to distinct clock APIs.  Different applications can cope with different flaws.

There seems to be a persistent desire in this discussion to specify and define these flaws out of existence, where this API really should instead be embracing the flaws and classifying them.  (Victor is doing a truly amazing job with the PEP in that regard; it's already the first web search hit on every search engine I've tried for more than half of these terms.)

Steadiness, in the C++ sense, applies to most OS clocks that are given the label of "monotonic" only during the run of a single program on a single computer, and only while that computer is running at some close approximation of full power.

As soon as you close your laptop lid, the property of steadiness with respect to real local time goes away; the clock stops ticking forward, and only resumes when the lid is opened again.  The thing I'd like to draw attention to here is that when you get one of these clocks, you *do not* get a parallel facility that allows you to identify whether a suspend has happened (or, for that matter, when the wall clock has stepped).  Or at least, nobody's proposed one for Python.  I proposed one for Twisted, <http://twistedmatrix.com/trac/ticket/2424#comment:26>, but you need an event loop for that, because you need to be able to register interest in that event.
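
A poor man's approximation, for what it's worth, is to poll two clocks and compare their deltas after the fact; a sketch, assuming the time.monotonic name proposed in the PEP is available:

import time

def watch_for_gaps(threshold=2.0):
    # Print a warning whenever wall-clock time advances much more than the
    # interval clock did, which usually indicates a suspend or a step of
    # the system clock. Polling only notices the gap after it happened.
    last_wall, last_mono = time.time(), time.monotonic()
    while True:
        time.sleep(1.0)
        wall, mono = time.time(), time.monotonic()
        if (wall - last_wall) - (mono - last_mono) > threshold:
            print("suspend or clock step detected")
        last_wall, last_mono = wall, mono

This is nowhere near a real notification API, of course: you cannot register interest in the event, you can only notice it afterwards.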

I believe that the fact that these clocks are only semi-steady, or only steady with respect to certain kinds of time, is why the term "monotonic clock" remains so popular, despite the fact that mathematical monotonicity is not actually their most useful property.  While these OS-provided clocks have other useful properties, they only have those properties under specific conditions which you cannot necessarily detect and you definitely cannot enforce.  But they all remain monotonic in the mathematical sense (modulo hardware and OS bugs), so it is the term "monotonic" which comes to label all their other, more useful, but less reliable properties.

> The people in class 1 are more correct, technically, and far more
> numerous, but the concept from 1 is a useless concept that should be
> forgotten.

Technically correct; the best kind of correct!

The people in class 1 are only more correct if you accept that mis-applying jargon from one field (mathematics) to replace generally-accepted terminology in another field (software clocks) is the right thing to do.  I think it's better to learn the local jargon and try to apply it consistently.  If you search around the web for the phrase "monotonic clock", it's applied in a sense closest to the one you mean on thousands and thousands of web pages.  "steady clock" generally applies with reference to C++, and even then is often found in phrases like "is_steady indicates whether this clock is a monotonic clock".

Software developers "mis"-apply mathematical terms like "isomorphic", "orthogonal", "incidental", "tangential", and "reflexive" all the time.  Physicists and mathematicians also disagree on the subtleties of the same terms.  Context is everything.

> So before proceeding, we should mutually agree that we have no
> interest in implementing a clock of type 1. It wouldn't serve anyone's
> use case (correct me if I'm wrong!) and the major operating systems
> don't offer such a thing anyway.

+1.

> Then, if we all agree to stop thinking about that first concept, then
> we need to agree whether we're all going to use the word "monotonic
> clock" to refer to the second concept, or if we're going to use a
> different word (such as "steady clock") to refer to the second
> concept. I would prefer the latter, as it will relieve us of the need
> to repeatedly explain to newcomers: "That word doesn't mean what you
> think it means.".

I don't think anything can (or should) relieve that need.

I am somewhat sympathetic to your preference for "steady" as a better overall term.  It does express the actually-desired property of the clock, even if that property isn't always present; steadiness is not a property that one can be tempted to synthesize, so it removes the temptation to cloud the discussion with that.  Ultimately I don't prefer it, because I think its provenance is less venerable than "monotonic", just because I have a bit more respect for the POSIX committee than the C++ one :-).

However, whatever choice we make in terminology, the documentation for this API must stress what it actually does, and what guarantee it actually provides.  In that sense, my preferred term for this would be the "time.zfnrg_lfj_lpqq(ZFNRG_TIME | ZFNRG_SEMI_STEADY | ZFNRG_SEE_DOCUMENTATION)".

> The main reason to use the word "monotonic clock" to refer to the
> second concept is that POSIX does so, but since Mac OS X, Solaris,
> Windows, and C++ have all avoided following POSIX's mistake, I think
> Python should too.

Do you just mean that the APIs don't have "monotonic" in the name?  They all use different words, which strikes me as more of a failure than a success, in the realm of making mistakes about communicating things :).

> Regards,
> 
> Zooko
> 
> [1] http://mathworld.wolfram.com/MonotonicSequence.html

From greg.ewing at canterbury.ac.nz  Fri Apr  6 09:59:20 2012
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Fri, 06 Apr 2012 19:59:20 +1200
Subject: [Python-Dev] PEP 418 is too divisive and confusing and should
	be postponed
In-Reply-To: <20120406015017.GA24126@cskk.homeip.net>
References: <4F7E4363.8090409@canterbury.ac.nz>
	<20120406015017.GA24126@cskk.homeip.net>
Message-ID: <4F7EA258.1080009@canterbury.ac.nz>

Cameron Simpson wrote:

> I maintain that
> "monotonic" still means what I said, and that it is the combination of
> the word with "clock" that brings in your other criteria.

I'm not trying to redefine the word "monotonic" in general.
All I'm saying is that *if* the PEP is going to talk about
a "monotonic clock" or "monotonic time", it should clearly
define what this means, because it's not obvious to everyone
that it implies something more than the mathematical meaning.

Alternatively, don't use the word "monotonic" at all, and
find a better term.

>    CLOCK_MONOTONIC
>       Clock that cannot be set and  represents  monotonic  time since
>       some unspecified starting point.

Which doesn't help very much, because it talks about "monotonic
time" without saying what that means. Googling for that phrase
doesn't seem to turn up anything very useful. Apparently we're
supposed to just know.

-- 
Greg

From greg.ewing at canterbury.ac.nz  Fri Apr  6 10:04:08 2012
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Fri, 06 Apr 2012 20:04:08 +1200
Subject: [Python-Dev] PEP 418 is too divisive and confusing and should
 be postponed
In-Reply-To: <4F7E538B.4030201@pearwood.info>
References: <CANdZDc5q7T22abPwoyTLSiEFyF5qUbrO9T_pL8mYM3dytcvwVw@mail.gmail.com>
	<20120405223457.GA15346@cskk.homeip.net>
	<4F7E4363.8090409@canterbury.ac.nz>
	<4F7E538B.4030201@pearwood.info>
Message-ID: <4F7EA378.2010606@canterbury.ac.nz>

Steven D'Aprano wrote:
> Greg Ewing wrote:
> 
>> the important thing about a clock is that it *keeps going forward*
> 
> That would be a *strictly* monotonic clock, as opposed to merely monotonic.

Well, yes, but even that's not enough -- it needs to go forward
at a reasonably constant rate, otherwise it's just as useless.
If it had enough resolution, it could go forward by one femtosecond
every hour for a while and still call itself strictly monotonic...

-- 
Greg

From stephen at xemacs.org  Fri Apr  6 10:37:33 2012
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Fri, 6 Apr 2012 17:37:33 +0900
Subject: [Python-Dev] this is why we shouldn't call it a "monotonic
 clock" (was: PEP 418 is too divisive and confusing and should be postponed)
In-Reply-To: <5741767C-F616-490F-917E-0801DA64BE47@twistedmatrix.com>
References: <CANdZDc527cvHGbQGnNgPBHu673sDi1YOBS3wkUbOejwYQ8BBow@mail.gmail.com>
	<5741767C-F616-490F-917E-0801DA64BE47@twistedmatrix.com>
Message-ID: <CAL_0O1-ZXS2MmKLEaWNCAUuc9BT1J9xUHq0t9M=ievWhco_QTw@mail.gmail.com>

On Fri, Apr 6, 2012 at 3:39 PM, Glyph Lefkowitz <glyph at twistedmatrix.com> wrote:

> There seems to be a persistent desire in this discussion to specify and
> define these flaws out of existence, where this API really should instead be
> embracing the flaws and classifying them.

That seems to be precisely what Cameron is advocating.

> I think it's better to learn the local jargon and try to apply it
> consistently.  If you search around the web for the phrase "monotonic
> clock", it's applied in a sense closest to the one you mean on thousands and
> thousands of web pages.

But is "a sense" the *same* sense on all of those pages?  If not, then
some people are going to be upset by anything we label a "monotonic"
clock, because it will suffer from some flaw that's unacceptable in
their applications for "monotonic" clocks.

From steve at pearwood.info  Fri Apr  6 12:12:50 2012
From: steve at pearwood.info (Steven D'Aprano)
Date: Fri, 06 Apr 2012 20:12:50 +1000
Subject: [Python-Dev] this is why we shouldn't call it a
 "monotonic	clock" (was: PEP 418 is too divisive and confusing and should	be
 postponed)
In-Reply-To: <5741767C-F616-490F-917E-0801DA64BE47@twistedmatrix.com>
References: <CANdZDc527cvHGbQGnNgPBHu673sDi1YOBS3wkUbOejwYQ8BBow@mail.gmail.com>
	<5741767C-F616-490F-917E-0801DA64BE47@twistedmatrix.com>
Message-ID: <4F7EC1A2.4050501@pearwood.info>

Glyph Lefkowitz wrote:
> On Apr 5, 2012, at 8:07 PM, Zooko Wilcox-O'Hearn wrote:

>> 2. Those who think that "monotonic clock" means a clock that never jumps,
>> and that runs at a rate approximating the rate of real time. This is a
>> very useful kind of clock to have! It is what C++ now calls a "steady
>> clock". It is what all the major operating systems provide.
> 
> All clocks run at a rate approximating the rate of real time.  That is very
> close to the definition of the word "clock" in this context.  All clocks
> have flaws in that approximation, and really those flaws are the whole
> point of access to distinct clock APIs.  Different applications can cope
> with different flaws.

I think that this is incorrect.

py> time.clock(); time.sleep(10); time.clock()
0.41
0.41




-- 
Steven


From steve at pearwood.info  Fri Apr  6 12:25:20 2012
From: steve at pearwood.info (Steven D'Aprano)
Date: Fri, 06 Apr 2012 20:25:20 +1000
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
In-Reply-To: <20120404231455.GA23478@cskk.homeip.net>
References: <4F7CD04E.7030303@pearwood.info>
	<20120404231455.GA23478@cskk.homeip.net>
Message-ID: <4F7EC490.4030000@pearwood.info>

Cameron Simpson wrote:
> On 05Apr2012 08:50, Steven D'Aprano <steve at pearwood.info> wrote:
> | Although I don't like the get_clock() API, I don't think this argument against 
> | it is a good one.
> 
> Just to divert briefly; you said in another post you didn't like the API
> and (also/because?) it didn't help discoverability.
> 
> My core objective was to allow users to query for clocks, and ideally
> enumerate and inspect all clocks. Without the caller having platform
> specific knowledge.

Clocks *are* platform specific -- not just in their availability, but also in 
the fine details of their semantics and behaviour. I don't think we can or 
should try to gloss over this. If people are making decisions about timers 
without knowledge of what their platform supports, they're probably making 
poor decisions. Even the venerable time.time() and time.clock() differ between 
Linux and Windows.


> Allowing for the sake of discussion that this is desirable, what would
> you propose as an API instead of get_clock() (and its friend, get_clocks()
> for enumeration, that I should stuff into the code).

The old ways are the best. We don't have math.get_trig() and math.get_trigs() 
functions for querying trigonometric functions, we just expose the functions 
directly.

I think the way to enumerate and inspect all clocks is with the tried and true 
Python introspection tools that people use on all other functions:

* use dir(time) to see a list of names available in the module
* use help(time) to read their help
* read the Fine Manual to find out more
* use try... except... to detect the existence of a clock

There's nothing special about clocks that needs anything more than this.
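
For instance, the usual feature-detection idiom covers the "does this clock 
exist here?" question with no new machinery at all (a sketch; time.monotonic 
is the name proposed in PEP 418 and may be absent on a given platform):

import time

try:
    interval_clock = time.monotonic   # proposed in PEP 418; may not exist
except AttributeError:
    interval_clock = time.time        # fall back to the wall clock

t0 = interval_clock()
# ... work ...
elapsed = interval_clock() - t0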

get_clock() looks like a factory function, but it actually isn't. It just 
selects from a small number of pre-existing clocks. We should just expose 
those pre-existing clocks directly. I don't see any advantage in adding that 
extra level of indirection or the addition of all this complexity:

* a function get_clock() to select a clock
* a function get_clocks() to enumerate all the clocks
* another function for querying the properties of a clock

All those functions accomplish is to increase the complexity of the API, the 
documentation and the implementation. It's one more special case for the user 
to learn:

"To find out what functions are available, use dir(module), except for clocks, 
where you have to use time.get_clocks()."

Blah.

Another problem with get_clock() -- it will be an attractive nuisance for the 
sort of person who cares about symmetry and completeness. You will have a 
steady trickle of "feature requests" from users who are surprised that not 
every combination of features is supported. Out of the eight or sixteen or 
thirty-two potential clocks that get_clock() tempts the user with, only three 
or five will actually exist.

The only advantage of get_clock is that you don't need to know the *name* of a 
platform clock in order to use it, you can describe it with a series of flags 
or enums. But in practice, that's not an advantage, that's actually a 
disadvantage. Consider:

"Which clock should I use for such-and-such a task, foo or bar?"

versus

"Which clock should I use for such-and-such a task, get_clock(spam, eggs, 
cheese) or get_clock(ham, eggs, truffles)?"

The mere mechanics of talking about these clocks will suffer because they 
aren't named.



-- 
Steven

From p.f.moore at gmail.com  Fri Apr  6 13:21:29 2012
From: p.f.moore at gmail.com (Paul Moore)
Date: Fri, 6 Apr 2012 12:21:29 +0100
Subject: [Python-Dev] PEP 418 is too divisive and confusing and should
	be postponed
In-Reply-To: <20120406015017.GA24126@cskk.homeip.net>
References: <4F7E4363.8090409@canterbury.ac.nz>
	<20120406015017.GA24126@cskk.homeip.net>
Message-ID: <CACac1F8kFcE=01_ivBdZBB+zMPrgb2Z8dbWa5u6XGLpZsuRBxw@mail.gmail.com>

On 6 April 2012 02:50, Cameron Simpson <cs at zip.com.au> wrote:
(Quoted from the Linux manpage)

> All  implementations  support the system-wide real-time clock, which
>   is identified by CLOCK_REALTIME.  Its time represents seconds and
>   nanoseconds  since the Epoch.  When its time is changed, timers for
>   a relative interval are unaffected, but timers for an absolute point
>   in time  are affected.
>

This made me think. They make a distinction between "timers for a relative
interval" and "timers for an absolute point in time".

But that's not right - *all* of the clock calls we are talking about here
return a *single* number. Interpreting that as an absolute time needs an
epoch. Or to put it another way, clock values are always meaningless
without context - whereas clock *differences* are what actually carry
meaning (in terms of a duration).
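
To make that concrete, the only safe pattern with such a value is to subtract
two readings taken from the same clock; a sketch, using the time.monotonic
name proposed in the PEP (an assumption, since the final name is still under
discussion):

import time

t0 = time.monotonic()             # the value of t0 on its own means nothing
total = sum(range(10**6))         # some work to time
elapsed = time.monotonic() - t0   # only the difference carries meaning
print("took %.6f seconds" % elapsed)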

On that basis, I'd say that

- A clock that doesn't get adjusted or slewed is a tick counter (which
technically doesn't have any relationship to time, although the tick
frequency can be used to convert to seconds, but see the next entry)
- A clock that gets slewed but not adjusted is a seconds counter (or at
least, the nearest approximation the system can provide - presumably better
than a tick counter)
- A clock that gets adjusted is not an interval timer at all, but an
absolute timer (which therefore shouldn't really be used for benchmarking
or timeouts)

It seems to me that what *I* would most often need are the second two of
these (to at least as high a precision as my app needs, which may vary but
"to the highest precision possible" would do :-)) I'd be happy for a
seconds counter to fall back to a tick counter converted to seconds using
its frequency - slewing is simply an accuracy improvement process, as far
as I can see.

It seems to me that the current time.time() and time.wallclock() are the
right names for my "absolute timer" and "seconds timer" above. Whether
their implementations match my definitions I'm not sure, but that's what
I'd hope. One thing I would expect is that time.wallclock() would never go
backwards (so differences are always positive). The various other debates
about monotonic, steady, etc, seem to me to be only relevant for specialist
uses that I don't care about.

As regards suspension, if I'm timing intervals and the system suspends, I'd
be happy to say all bets are off. Similarly with timeouts. If I cared, I'd
simply make sure the system didn't suspend :-)

As far as comparability between different threads or processes are
concerned, I would expect absolute time (time.time) to be the same across
threads or processes (but wouldn't generally write apps that were affected
if it weren't - at least by small amounts), but I wouldn't expect
time.wallclock values obtained in different threads or processes to be
comparable (mostly because I can't think of a case where I'd compare them).
Where VMs or multiple machines are involved, I wouldn't even expect
absolute time to match (but that's the job of NTP, and if time.time follows
NTP, there's no reason why there would be an issue even there).

Summary: I'm happy with time.time and time.wallclock. The rest of this
debate doesn't matter for my usecases (and I suspect many other people's in
practice).

[Update, after I downloaded and installed 3.3a2] Bah, looks like
time.wallclock is gone. (Actually, looks like it was documented but not
implemented in 3.3a1!). Actually, the docs and the implementation don't
match - clock_gettime is documented as available, but it's not (at least on
Windows). I still prefer time.wallclock() as described above and in the
3.3a1 documentation. I thought I knew what was going on, but now I'm
confused. My comments above still stand, though.

Paul.

From murman at gmail.com  Fri Apr  6 13:31:45 2012
From: murman at gmail.com (Michael Urman)
Date: Fri, 6 Apr 2012 06:31:45 -0500
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
 (was: PEP 418: Add monotonic clock)
In-Reply-To: <CAL_0O18j+nRWQ5FrBkVeMYCLFZo50MtuaBAib5fki6BtPLvn5Q@mail.gmail.com>
References: <4F7B2029.8010707@stoneleaf.us>
	<CAL0kPAWWtk2KWWkBmVHMwCBXGC1k_bjG0wEbkjOt9+DDniTVYw@mail.gmail.com>
	<20120404174449.GB25288@iskra.aviel.ru>
	<4F7C8CD6.7090308@stoneleaf.us>
	<20120404192436.GB27384@iskra.aviel.ru>
	<4F7CA660.60205@stoneleaf.us> <20120404230503.GB314@iskra.aviel.ru>
	<CAL_0O18KqdYTDhj5xx2FPmNfiFGj7Esie-BVEa_exWKejg7F8Q@mail.gmail.com>
	<20120405133411.GC17105@iskra.aviel.ru>
	<CAL_0O19C6iDmQE2utpf_Zby9JudjB7uf5iWz8ga0gdENtLb-rA@mail.gmail.com>
	<20120405152217.GA22311@iskra.aviel.ru>
	<CAL_0O18j+nRWQ5FrBkVeMYCLFZo50MtuaBAib5fki6BtPLvn5Q@mail.gmail.com>
Message-ID: <CAOpBPYXV5g3=V7GYm9E-B2KVbXTxNSR3reoutNocdnFKN026Lw@mail.gmail.com>

On Thu, Apr 5, 2012 at 21:57, Stephen J. Turnbull <stephen at xemacs.org> wrote:
> I might have chosen to implement a 'None' return if I had designed
> open(), but I can't get too upset about raising an Exception as it
> actually does.

One fundamental difference is that there are many reasons one might
fail to open a file. It may not exist. It may not have permissions
allowing the request. It may be locked. If open() returned None, this
information would have to be retrievable through another function.
However since it raises an exception, that information is already
wrapped up in the exception object, should you choose to catch it, and
likely to be logged otherwise.

In the case of the clocks, I'm assuming the only reason you would fail
to get a clock is because it isn't provided by hardware and/or OS. You
don't have to worry about transient scenarios on multi-user systems
where another user has locked the clock. Thus the exception cannot
tell you anything more than None tells you. (Of course, if my
assumption is wrong, I'm not sure whether my reasoning still applies.)

-- 
Michael Urman

From p.f.moore at gmail.com  Fri Apr  6 13:55:52 2012
From: p.f.moore at gmail.com (Paul Moore)
Date: Fri, 6 Apr 2012 12:55:52 +0100
Subject: [Python-Dev] this is why we shouldn't call it a "monotonic
 clock" (was: PEP 418 is too divisive and confusing and should be postponed)
In-Reply-To: <4F7EC1A2.4050501@pearwood.info>
References: <CANdZDc527cvHGbQGnNgPBHu673sDi1YOBS3wkUbOejwYQ8BBow@mail.gmail.com>
	<5741767C-F616-490F-917E-0801DA64BE47@twistedmatrix.com>
	<4F7EC1A2.4050501@pearwood.info>
Message-ID: <CACac1F8cbYx6jviFJLkSPFFJv3k4quPheqPR0JcR2_jQ==cVGg@mail.gmail.com>

On 6 April 2012 11:12, Steven D'Aprano <steve at pearwood.info> wrote:

> Glyph Lefkowitz wrote:
>
>> On Apr 5, 2012, at 8:07 PM, Zooko Wilcox-O'Hearn wrote:
>>
>
>  2. Those who think that "monotonic clock" means a clock that never jumps,
>>> and that runs at a rate approximating the rate of real time. This is a
>>> very useful kind of clock to have! It is what C++ now calls a "steady
>>> clock". It is what all the major operating systems provide.
>>>
>>
>> All clocks run at a rate approximating the rate of real time.  That is
>> very
>> close to the definition of the word "clock" in this context.  All clocks
>> have flaws in that approximation, and really those flaws are the whole
>> point of access to distinct clock APIs.  Different applications can cope
>> with different flaws.
>>
>
> I think that this is incorrect.
>
> py> time.clock(); time.sleep(10); time.clock()
> 0.41
> 0.41
>

Blame Python's use of CPU time in clock() on Unix for that. On Windows:

>>> time.clock(); time.sleep(10); time.clock()
14.879754156329385
24.879591008462793

That's a backward compatibility issue, though - I'd be arguing that
time.clock() is the best name for "normally the right clock for interval,
benchmark or timeout uses as long as you don't care about oddities like
suspend" otherwise. Given that this name is taken, I'd argue for
time.wallclock. I'm not familiar enough with the terminology to know what
to expect from terms like monotonic, steady, raw and the like.

Paul.

From phd at phdru.name  Fri Apr  6 15:40:03 2012
From: phd at phdru.name (Oleg Broytman)
Date: Fri, 6 Apr 2012 17:40:03 +0400
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
In-Reply-To: <CAL_0O18j+nRWQ5FrBkVeMYCLFZo50MtuaBAib5fki6BtPLvn5Q@mail.gmail.com>
References: <20120404174449.GB25288@iskra.aviel.ru>
	<4F7C8CD6.7090308@stoneleaf.us>
	<20120404192436.GB27384@iskra.aviel.ru>
	<4F7CA660.60205@stoneleaf.us> <20120404230503.GB314@iskra.aviel.ru>
	<CAL_0O18KqdYTDhj5xx2FPmNfiFGj7Esie-BVEa_exWKejg7F8Q@mail.gmail.com>
	<20120405133411.GC17105@iskra.aviel.ru>
	<CAL_0O19C6iDmQE2utpf_Zby9JudjB7uf5iWz8ga0gdENtLb-rA@mail.gmail.com>
	<20120405152217.GA22311@iskra.aviel.ru>
	<CAL_0O18j+nRWQ5FrBkVeMYCLFZo50MtuaBAib5fki6BtPLvn5Q@mail.gmail.com>
Message-ID: <20120406134003.GA25372@iskra.aviel.ru>

On Fri, Apr 06, 2012 at 11:57:20AM +0900, "Stephen J. Turnbull" <stephen at xemacs.org> wrote:
> What I want to know is why you're willing to assert that absence of a
> clock of a particular configuration is an Exception, when that absence
> is clearly documented to be a common case?

   An error or not an error depends on how people will use the API. I
usually don't like error codes -- people tend to ignore them or check
lazily. If some library were to do

    (get_clock(THIS) or get_clock(THAT)).clock()

I want to get a clearly defined and documented clock-related error, not
some vague "AttributeError: 'NoneType' object has no attribute 'clock'".

Oleg.
-- 
     Oleg Broytman            http://phdru.name/            phd at phdru.name
           Programmers don't die, they just GOSUB without RETURN.

From kristjan at ccpgames.com  Fri Apr  6 15:27:12 2012
From: kristjan at ccpgames.com (Kristján Valur Jónsson)
Date: Fri, 6 Apr 2012 13:27:12 +0000
Subject: [Python-Dev] Pep 393 and debugging
Message-ID: <EFE3877620384242A686D52278B7CCD3387CBB@RKV-IT-EXCH104.ccp.ad.local>

I just had my first fun with PEP 393 strings and debuggers.  Trying to debug a deadlocked python program, I'm trying to figure out the callstack of the thread in the debugger.

I ended up with something like:

(char*)&((PyASCIIObject*)(tstate->frame->f_code->co_filename))[1]

while previously, it was sufficient to do

(PyUnicodeObject*)(tstate->frame->f_code->co_filename)

Obviously this won't work for non-ASCII objects.

I wonder if there is a way to make this situation easier?  Perhaps for "debug" builds, we can store some debug information in the frame object, e.g. utf8 encoding of the filename and function?

K

From benjamin at python.org  Fri Apr  6 16:26:40 2012
From: benjamin at python.org (Benjamin Peterson)
Date: Fri, 6 Apr 2012 10:26:40 -0400
Subject: [Python-Dev] Pep 393 and debugging
In-Reply-To: <EFE3877620384242A686D52278B7CCD3387CBB@RKV-IT-EXCH104.ccp.ad.local>
References: <EFE3877620384242A686D52278B7CCD3387CBB@RKV-IT-EXCH104.ccp.ad.local>
Message-ID: <CAPZV6o_MxbNsNrA4ubz0ece+4HU8nZtYK-Mn=xRUG2uK-_4_3w@mail.gmail.com>

2012/4/6 Kristján Valur Jónsson <kristjan at ccpgames.com>:
> I wonder if there is a way to make this situation easier?  Perhaps for
> "debug" builds, we can store some debug information in the frame object,
> e.g. utf8 encoding of the filename and function?

Have you tried using the cpython gdb plugin? It should repr these
things for you.



-- 
Regards,
Benjamin

From victor.stinner at gmail.com  Fri Apr  6 16:32:13 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Fri, 6 Apr 2012 16:32:13 +0200
Subject: [Python-Dev] this is why we shouldn't call it a "monotonic
 clock" (was: PEP 418 is too divisive and confusing and should be postponed)
In-Reply-To: <CANdZDc527cvHGbQGnNgPBHu673sDi1YOBS3wkUbOejwYQ8BBow@mail.gmail.com>
References: <CANdZDc527cvHGbQGnNgPBHu673sDi1YOBS3wkUbOejwYQ8BBow@mail.gmail.com>
Message-ID: <CAMpsgwZ1SiTcE_HHVnEweXPE2bqnowac+nK5DTJnR3jwkHw9MQ@mail.gmail.com>

> 2. Those who think that "monotonic clock" means a clock that never
> jumps, and that runs at a rate approximating the rate of real time.
> This is a very useful kind of clock to have! It is what C++ now calls
> a "steady clock". It is what all the major operating systems provide.

Python cannot give such a guarantee. An extract from the documentation of
the time.monotonic() function proposed in PEP 418:

"The elapsed time may or may not include time the system spends in
sleep or hibernation; this depends on the operating system."

The C++ Timeout Specification uses the following definition: "Objects
of class steady_clock represent clocks for which values of time_point
advance at a steady rate relative to real time. That is, the clock may
not be adjusted."

The proposed time.monotonic() doesn't respect this definition because
CLOCK_MONOTONIC *is* adjusted (slewed) on Linux.

We might provide a steady clock, but it would be less portable than
the monotonic clock. I'm not sure that we need such a clock; which use
case requires a steady rather than a monotonic clock? On Linux, I now
prefer CLOCK_MONOTONIC (monotonic) over CLOCK_MONOTONIC_RAW
(monotonic and steady as defined by C++) *because* its frequency is
adjusted.

Victor

From ethan at stoneleaf.us  Fri Apr  6 16:40:06 2012
From: ethan at stoneleaf.us (Ethan Furman)
Date: Fri, 06 Apr 2012 07:40:06 -0700
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
In-Reply-To: <20120406134003.GA25372@iskra.aviel.ru>
References: <20120404174449.GB25288@iskra.aviel.ru>	<4F7C8CD6.7090308@stoneleaf.us>	<20120404192436.GB27384@iskra.aviel.ru>	<4F7CA660.60205@stoneleaf.us>
	<20120404230503.GB314@iskra.aviel.ru>	<CAL_0O18KqdYTDhj5xx2FPmNfiFGj7Esie-BVEa_exWKejg7F8Q@mail.gmail.com>	<20120405133411.GC17105@iskra.aviel.ru>	<CAL_0O19C6iDmQE2utpf_Zby9JudjB7uf5iWz8ga0gdENtLb-rA@mail.gmail.com>	<20120405152217.GA22311@iskra.aviel.ru>	<CAL_0O18j+nRWQ5FrBkVeMYCLFZo50MtuaBAib5fki6BtPLvn5Q@mail.gmail.com>
	<20120406134003.GA25372@iskra.aviel.ru>
Message-ID: <4F7F0046.7030205@stoneleaf.us>

Oleg Broytman wrote:
> On Fri, Apr 06, 2012 at 11:57:20AM +0900, "Stephen J. Turnbull" <stephen at xemacs.org> wrote:
>> What I want to know is why you're willing to assert that absence of a
>> clock of a particular configuration is an Exception, when that absence
>> is clearly documented to be a common case?
> 
>    An error or not an error depends on how people will use the API. I
> usually don't like error codes -- people tend to ignore them or check
> lazily. If some library would do
> 
>     (get_clock(THIS) or get_clock(THAT)).clock()
> 
> I want to get a clearly defined and documented clock-related error, not
> some vague "AttributeError: 'NoneType' object has no attribute 'clock'".

The error won't be that vague -- it will include the offending line, 
making the problem easy to track down.

~Ethan~

From guido at python.org  Fri Apr  6 17:42:57 2012
From: guido at python.org (Guido van Rossum)
Date: Fri, 6 Apr 2012 08:42:57 -0700
Subject: [Python-Dev] this is why we shouldn't call it a "monotonic
 clock" (was: PEP 418 is too divisive and confusing and should be postponed)
In-Reply-To: <CACac1F8cbYx6jviFJLkSPFFJv3k4quPheqPR0JcR2_jQ==cVGg@mail.gmail.com>
References: <CANdZDc527cvHGbQGnNgPBHu673sDi1YOBS3wkUbOejwYQ8BBow@mail.gmail.com>
	<5741767C-F616-490F-917E-0801DA64BE47@twistedmatrix.com>
	<4F7EC1A2.4050501@pearwood.info>
	<CACac1F8cbYx6jviFJLkSPFFJv3k4quPheqPR0JcR2_jQ==cVGg@mail.gmail.com>
Message-ID: <CAP7+vJLGdjL+4Zv3+FEuQc6GnDm+kqYe=7_Vx4ni_CwBeYij-w@mail.gmail.com>

I'd like to veto wall clock because to me that's the clock on my wall, i.e.
local time. Otherwise I like the way this thread is going.

--Guido van Rossum (sent from Android phone)
On Apr 6, 2012 4:57 AM, "Paul Moore" <p.f.moore at gmail.com> wrote:

> On 6 April 2012 11:12, Steven D'Aprano <steve at pearwood.info> wrote:
>
>> Glyph Lefkowitz wrote:
>>
>>> On Apr 5, 2012, at 8:07 PM, Zooko Wilcox-O'Hearn wrote:
>>>
>>
>>  2. Those who think that "monotonic clock" means a clock that never jumps,
>>>> and that runs at a rate approximating the rate of real time. This is a
>>>> very useful kind of clock to have! It is what C++ now calls a "steady
>>>> clock". It is what all the major operating systems provide.
>>>>
>>>
>>> All clocks run at a rate approximating the rate of real time.  That is
>>> very
>>> close to the definition of the word "clock" in this context.  All clocks
>>> have flaws in that approximation, and really those flaws are the whole
>>> point of access to distinct clock APIs.  Different applications can cope
>>> with different flaws.
>>>
>>
>> I think that this is incorrect.
>>
>> py> time.clock(); time.sleep(10); time.clock()
>> 0.41
>> 0.41
>>
>
> Blame Python's use of CPU time in clock() on Unix for that. On Windows:
>
> >>> time.clock(); time.sleep(10); time.clock()
> 14.879754156329385
> 24.879591008462793
>
> That's a backward compatibility issue, though - I'd be arguing that
> time.clock() is the best name for "normally the right clock for interval,
> benchmark or timeout uses as long as you don't care about oddities like
> suspend" otherwise. Given that this name is taken, I'd argue for
> time.wallclock. I'm not familiar enough with the terminology to know what
> to expect from terms like monotonic, steady, raw and the like.
>
> Paul.

From status at bugs.python.org  Fri Apr  6 18:07:16 2012
From: status at bugs.python.org (Python tracker)
Date: Fri,  6 Apr 2012 18:07:16 +0200 (CEST)
Subject: [Python-Dev] Summary of Python tracker Issues
Message-ID: <20120406160716.4205B1CB51@psf.upfronthosting.co.za>


ACTIVITY SUMMARY (2012-03-30 - 2012-04-06)
Python tracker at http://bugs.python.org/

To view or respond to any of the issues listed below, click on the issue.
Do NOT respond to this message.

Issues counts and deltas:
  open    3360 ( +1)
  closed 22935 (+64)
  total  26295 (+65)

Open issues with patches: 1430 


Issues opened (38)
==================

#9634: Add timeout parameter to Queue.join()
http://bugs.python.org/issue9634  reopened by ncoghlan

#14258: Better explain re.LOCALE and re.UNICODE for \S and \W
http://bugs.python.org/issue14258  reopened by orsenthil

#14453: profile.Profile.calibrate can produce incorrect numbers in som
http://bugs.python.org/issue14453  opened by adamtj

#14455: plistlib unable to read json and binary plist files
http://bugs.python.org/issue14455  opened by d9pouces

#14457: Unattended Install doesn't populate registry
http://bugs.python.org/issue14457  opened by Paul.Klapperich

#14458: Non-admin installation fails
http://bugs.python.org/issue14458  opened by toughy

#14460: In re's positive lookbehind assertion repetition works
http://bugs.python.org/issue14460  opened by py.user

#14461: In re's positive lookbehind assertion documentation match() ca
http://bugs.python.org/issue14461  opened by py.user

#14462: In re's named group the name cannot contain unicode characters
http://bugs.python.org/issue14462  opened by py.user

#14465: xml.etree.ElementTree: add feature to prettify XML output
http://bugs.python.org/issue14465  opened by tshepang

#14468: Update cloning guidelines in devguide
http://bugs.python.org/issue14468  opened by eric.araujo

#14469: Python 3 documentation links
http://bugs.python.org/issue14469  opened by storchaka

#14470: Remove using of w9xopen in subprocess module
http://bugs.python.org/issue14470  opened by asvetlov

#14472: .gitignore is outdated
http://bugs.python.org/issue14472  opened by mcepl

#14475: codecs.StreamReader.read behaves differently from regular file
http://bugs.python.org/issue14475  opened by tdb

#14477: Rietveld test issue
http://bugs.python.org/issue14477  opened by loewis

#14478: Decimal hashing very slow, could be cached
http://bugs.python.org/issue14478  opened by Jimbofbx

#14480: os.kill on Windows should accept zero as signal
http://bugs.python.org/issue14480  opened by asvetlov

#14483: inspect.getsource fails to read a file of only comments
http://bugs.python.org/issue14483  opened by Sean.Grider

#14484: missing return in win32_kill?
http://bugs.python.org/issue14484  opened by pitrou

#14486: Add some versionchanged notes in threading docs
http://bugs.python.org/issue14486  opened by ncoghlan

#14488: Can't install Python2.7.2
http://bugs.python.org/issue14488  opened by kiwii128

#14494: __future__.py and its documentation claim absolute imports bec
http://bugs.python.org/issue14494  opened by smarnach

#14499: Extension module builds fail with Xcode 4.3 on OS X 10.7 due t
http://bugs.python.org/issue14499  opened by ned.deily

#14500: test_importlib fails in refleak mode
http://bugs.python.org/issue14500  opened by pitrou

#14501: Error initialising BaseManager class with 'authkey' argument o
http://bugs.python.org/issue14501  opened by Drauger

#14503: docs:Code not highlighted
http://bugs.python.org/issue14503  opened by ramchandra.apte

#14504: Suggestion to improve argparse's help messages for "store_cons
http://bugs.python.org/issue14504  opened by Amnon.Harel

#14507: Segfault with starmap and izip combo
http://bugs.python.org/issue14507  opened by progrper

#14508: gprof2html is broken
http://bugs.python.org/issue14508  opened by Popa.Claudiu

#14509: Build failures in non-pydebug builds without NDEBUG.
http://bugs.python.org/issue14509  opened by twouters

#14511: _static/opensearch.xml for Python 3.2 docs directs searches to
http://bugs.python.org/issue14511  opened by zach.ware

#14512: Pydocs module docs server not working on Windows.
http://bugs.python.org/issue14512  opened by terry.reedy

#14513: IDLE icon switched and switches on Windows taskbar
http://bugs.python.org/issue14513  opened by terry.reedy

#14514: Equivalent to tempfile.NamedTemporaryFile that deletes file at
http://bugs.python.org/issue14514  opened by r.david.murray

#14515: tempfile.TemporaryDirectory documented as returning object but
http://bugs.python.org/issue14515  opened by r.david.murray

#14516: test_tools assumes BUILDDIR=SRCDIR
http://bugs.python.org/issue14516  opened by ronaldoussoren

#14517: Recompilation of sources with Distutils
http://bugs.python.org/issue14517  opened by cbenoit



Most recent 15 issues with no replies (15)
==========================================

#14517: Recompilation of sources with Distutils
http://bugs.python.org/issue14517

#14515: tempfile.TemporaryDirectory documented as returning object but
http://bugs.python.org/issue14515

#14512: Pydocs module docs server not working on Windows.
http://bugs.python.org/issue14512

#14511: _static/opensearch.xml for Python 3.2 docs directs searches to
http://bugs.python.org/issue14511

#14509: Build failures in non-pydebug builds without NDEBUG.
http://bugs.python.org/issue14509

#14504: Suggestion to improve argparse's help messages for "store_cons
http://bugs.python.org/issue14504

#14500: test_importlib fails in refleak mode
http://bugs.python.org/issue14500

#14499: Extension module builds fail with Xcode 4.3 on OS X 10.7 due t
http://bugs.python.org/issue14499

#14494: __future__.py and its documentation claim absolute imports bec
http://bugs.python.org/issue14494

#14483: inspect.getsource fails to read a file of only comments
http://bugs.python.org/issue14483

#14477: Rietveld test issue
http://bugs.python.org/issue14477

#14462: In re's named group the name cannot contain unicode characters
http://bugs.python.org/issue14462

#14461: In re's positive lookbehind assertion documentation match() ca
http://bugs.python.org/issue14461

#14460: In re's positive lookbehind assertion repetition works
http://bugs.python.org/issue14460

#14457: Unattended Install doesn't populate registry
http://bugs.python.org/issue14457



Most recent 15 issues waiting for review (15)
=============================================

#14516: test_tools assumes BUILDDIR=SRCDIR
http://bugs.python.org/issue14516

#14511: _static/opensearch.xml for Python 3.2 docs directs searches to
http://bugs.python.org/issue14511

#14508: gprof2html is broken
http://bugs.python.org/issue14508

#14494: __future__.py and its documentation claim absolute imports bec
http://bugs.python.org/issue14494

#14477: Rietveld test issue
http://bugs.python.org/issue14477

#14472: .gitignore is outdated
http://bugs.python.org/issue14472

#14455: plistlib unable to read json and binary plist files
http://bugs.python.org/issue14455

#14453: profile.Profile.calibrate can produce incorrect numbers in som
http://bugs.python.org/issue14453

#14448: Mention pytz in datetime's docs
http://bugs.python.org/issue14448

#14440: Close background process if IDLE closes abnormally.
http://bugs.python.org/issue14440

#14439: Easier error diagnosis when bootstrapping the runpy module in 
http://bugs.python.org/issue14439

#14433: Python 3 interpreter crash on windows when stdin closed in Pyt
http://bugs.python.org/issue14433

#14432: Bug in generator if the generator in created in a C	thread
http://bugs.python.org/issue14432

#14428: Implementation of the PEP 418
http://bugs.python.org/issue14428

#14423: Getting the starting date of iso week from a week number and a
http://bugs.python.org/issue14423



Top 10 most discussed issues (10)
=================================

#14417: dict RuntimeError workaround
http://bugs.python.org/issue14417  20 msgs

#7839: Popen should raise ValueError if pass a string when shell=Fals
http://bugs.python.org/issue7839  13 msgs

#14428: Implementation of the PEP 418
http://bugs.python.org/issue14428  12 msgs

#14440: Close background process if IDLE closes abnormally.
http://bugs.python.org/issue14440  10 msgs

#14503: docs:Code not highlighted
http://bugs.python.org/issue14503   8 msgs

#14387: Include\accu.h incompatible with Windows.h
http://bugs.python.org/issue14387   7 msgs

#14478: Decimal hashing very slow, could be cached
http://bugs.python.org/issue14478   7 msgs

#9141: Allow objects to decide if they can be collected by GC
http://bugs.python.org/issue9141   6 msgs

#9634: Add timeout parameter to Queue.join()
http://bugs.python.org/issue9634   6 msgs

#13903: New shared-keys dictionary implementation
http://bugs.python.org/issue13903   6 msgs



Issues closed (65)
==================

#3033: tkFont added displayof where necessary
http://bugs.python.org/issue3033  closed by asvetlov

#3035: Removing apparently unwanted functions from Tkinter
http://bugs.python.org/issue3035  closed by asvetlov

#5136: Deprecating (and removing) "globalcall", "merge" and "globalev
http://bugs.python.org/issue5136  closed by asvetlov

#6015: Tkinter Scrollbar in OS X 10.5
http://bugs.python.org/issue6015  closed by ned.deily

#6124: Tkinter should support the OS X zoom button
http://bugs.python.org/issue6124  closed by asvetlov

#8515: idle "Run Module" (F5) does not set __file__ variable
http://bugs.python.org/issue8515  closed by asvetlov

#9016: IDLE won't launch (Win XP)
http://bugs.python.org/issue9016  closed by asvetlov

#9787: Release the TLS lock during allocations
http://bugs.python.org/issue9787  closed by krisvale

#10423: s/args/options in arpgarse "Upgrading optparse code"
http://bugs.python.org/issue10423  closed by r.david.murray

#11310: Document byte[s|array]() and byte[s|array](count) in docstring
http://bugs.python.org/issue11310  closed by r.david.murray

#11668: _multiprocessing.Connection.poll with timeout uses polling und
http://bugs.python.org/issue11668  closed by pitrou

#12979: tkinter.font.Font object not usable as font option
http://bugs.python.org/issue12979  closed by asvetlov

#13019: bytearrayobject.c: refleak
http://bugs.python.org/issue13019  closed by pitrou

#13507: Modify OS X installer builds to package liblzma for the new lz
http://bugs.python.org/issue13507  closed by ned.deily

#13872: socket.detach doesn't mark socket._closed
http://bugs.python.org/issue13872  closed by pitrou

#14116: threading classes' __enter__ should return self
http://bugs.python.org/issue14116  closed by pitrou

#14151: multiprocessing.connection.Listener fails with invalid address
http://bugs.python.org/issue14151  closed by pitrou

#14227: console w/ cp65001 displays extra characters for non-ascii str
http://bugs.python.org/issue14227  closed by haypo

#14249: unicodeobject.c: aliasing warnings
http://bugs.python.org/issue14249  closed by python-dev

#14300: dup_socket() on Windows should use WSA_FLAG_OVERLAPPED
http://bugs.python.org/issue14300  closed by pitrou

#14316: Broken link in grammar.rst
http://bugs.python.org/issue14316  closed by sandro.tosi

#14318: clarify "may not" in time.steady docs
http://bugs.python.org/issue14318  closed by haypo

#14362: No mention of collections.ChainMap in What's New for 3.3
http://bugs.python.org/issue14362  closed by eric.araujo

#14397: Use GetTickCount/GetTickCount64 instead of QueryPerformanceCou
http://bugs.python.org/issue14397  closed by haypo

#14406: Race condition in concurrent.futures
http://bugs.python.org/issue14406  closed by pitrou

#14422: Pack PyASCIIObject fields to reduce memory consumption of pure
http://bugs.python.org/issue14422  closed by haypo

#14425: Improve handling of 'timeout' parameter default in urllib.urlo
http://bugs.python.org/issue14425  closed by r.david.murray

#14434: Tutorial link in "help()" in Python3 points to Python2 tutoria
http://bugs.python.org/issue14434  closed by r.david.murray

#14437: _io build fails on cygwin
http://bugs.python.org/issue14437  closed by pitrou

#14450: Log rotate cant execute in Windows. (logging module)
http://bugs.python.org/issue14450  closed by vinay.sajip

#14454: argparse metavar list parameter with nargs=k
http://bugs.python.org/issue14454  closed by andyharrington

#14456: Relation between threads and signals unclear
http://bugs.python.org/issue14456  closed by pitrou

#14459: type([].append([]))
http://bugs.python.org/issue14459  closed by r.david.murray

#14463: _decimal.so compile fails in OS X installer builds
http://bugs.python.org/issue14463  closed by ned.deily

#14464: reference loss in test_xml_etree_c
http://bugs.python.org/issue14464  closed by eli.bendersky

#14466: Rip out mq instructions
http://bugs.python.org/issue14466  closed by pitrou

#14471: Buffer overrun in winreg.c
http://bugs.python.org/issue14471  closed by krisvale

#14473: Regex Howto error
http://bugs.python.org/issue14473  closed by orsenthil

#14474: mishandling of AttributeError in threads
http://bugs.python.org/issue14474  closed by python-dev

#14476: sudo breaks python
http://bugs.python.org/issue14476  closed by r.david.murray

#14479: Replace transplant with graft in devguide
http://bugs.python.org/issue14479  closed by python-dev

#14481: trivial formatting error in subprocess docs
http://bugs.python.org/issue14481  closed by r.david.murray

#14482: multiprocessing.connection.Listener fails with invalid address
http://bugs.python.org/issue14482  closed by pitrou

#14485: hi, thanks,  nice to learn from you
http://bugs.python.org/issue14485  closed by loewis

#14487: Add pending() query method to Queue.Queue
http://bugs.python.org/issue14487  closed by ncoghlan

#14489: repr() function link on the built-in function documentation is
http://bugs.python.org/issue14489  closed by python-dev

#14490: abitype.py wrong raise format
http://bugs.python.org/issue14490  closed by r.david.murray

#14491: fixcid.py is using <> instead of !=
http://bugs.python.org/issue14491  closed by r.david.murray

#14492: pdeps.py has_key
http://bugs.python.org/issue14492  closed by r.david.murray

#14493: use gvfs-open/xdg-open in Lib/webbrowser.py
http://bugs.python.org/issue14493  closed by r.david.murray

#14495: Minor typo in tkinter.ttk.Treeview.exists docstring
http://bugs.python.org/issue14495  closed by python-dev

#14496: Wrong name in idlelib/tabbedpages.py
http://bugs.python.org/issue14496  closed by asvetlov

#14497: Invalid syntax Python files in Python sources tree
http://bugs.python.org/issue14497  closed by r.david.murray

#14498: Python 3.x distutils.util.get_platform returns incorrect value
http://bugs.python.org/issue14498  closed by ned.deily

#14502: Document better what happens on releasing an unacquired lock
http://bugs.python.org/issue14502  closed by sandro.tosi

#14505: PyFile_FromString leaks file descriptors in python 2.7
http://bugs.python.org/issue14505  closed by pitrou

#14506: HTMLParser can't handle erronous end tags with additional info
http://bugs.python.org/issue14506  closed by ezio.melotti

#14510: Regular Expression "+" perform wrong repeat
http://bugs.python.org/issue14510  closed by ezio.melotti

#802310: tkFont may reuse font names
http://bugs.python.org/issue802310  closed by asvetlov

#1053687: PyOS_InputHook not called in IDLE subprocess
http://bugs.python.org/issue1053687  closed by asvetlov

#1641544: rlcompleter tab completion in pdb
http://bugs.python.org/issue1641544  closed by georg.brandl

#1396946: %ehrntDRT support for time.strptime
http://bugs.python.org/issue1396946  closed by georg.brandl

#14467: Avoid exotic documentation in the devguide
http://bugs.python.org/issue14467  closed by pitrou

#1542677: IDLE shell gives different len() of unicode strings compared t
http://bugs.python.org/issue1542677  closed by asvetlov

#1047540: Turtle.py hangs Idle
http://bugs.python.org/issue1047540  closed by asvetlov

From kristjan at ccpgames.com  Fri Apr  6 19:04:44 2012
From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=)
Date: Fri, 6 Apr 2012 17:04:44 +0000
Subject: [Python-Dev] this is why we shouldn't call it a "monotonic
 clock" (was: PEP 418 is too divisive and confusing and should be postponed)
In-Reply-To: <CAMpsgwZ1SiTcE_HHVnEweXPE2bqnowac+nK5DTJnR3jwkHw9MQ@mail.gmail.com>
References: <CANdZDc527cvHGbQGnNgPBHu673sDi1YOBS3wkUbOejwYQ8BBow@mail.gmail.com>,
	<CAMpsgwZ1SiTcE_HHVnEweXPE2bqnowac+nK5DTJnR3jwkHw9MQ@mail.gmail.com>
Message-ID: <EFE3877620384242A686D52278B7CCD3387EAF@RKV-IT-EXCH104.ccp.ad.local>

This is the most amusing of discussions.
The key sentence here is "the clock may not be adjusted".  Slewing or accelerating a clock merely adds to the error already present in the pace of the clock.
Sometimes a clock runs fast, sometimes it runs slow.  This happens without any purposeful slewing or accelerating by the OS.  Notice how the C++ standard specifies nothing about the error of this steady rate, which is nevertheless always nonzero.  This implies that the error in the time, or the variations in its rate, are not important to the meaning of the standard.

The thing which matters here is that the clock progresses forwards, matching the progress of real time to some (unspecified) precision.  It must not suddenly jump backwards or forwards because someone changed the timezone.  That is all that is implied.

Since the error of the clock (that is, the error in its rate of progress) is unspecified, it cannot matter if this rate is temporarily adjusted by hand.



K


________________________________________
Frá: python-dev-bounces+kristjan=ccpgames.com at python.org [python-dev-bounces+kristjan=ccpgames.com at python.org] fyrir hönd Victor Stinner [victor.stinner at gmail.com]
Sent: 6. apríl 2012 14:32
To: Zooko Wilcox-O'Hearn
Cc: Python-Dev
Efni: Re: [Python-Dev] this is why we shouldn't call it a "monotonic clock" (was: PEP 418 is too divisive and confusing and should be postponed)


The C++ Timeout Specification uses the following definition: "Objects
of class steady_clock represent clocks for which values of time_point
advance at a steady rate relative to real time. That is, the clock may
not be adjusted."

Proposed time.monotonic() doesn't respect this definition because
CLOCK_MONOTONIC *is* adjusted (slewed) on Linux.
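
(For anyone who wants to compare the two on their own machine, here is a
rough sketch; it assumes the time.clock_gettime() function and the
CLOCK_MONOTONIC* constants proposed for Python 3.3 are available:)

import time

# Rough sketch: read both monotonic clocks where the platform exposes them.
def read_monotonic_clocks():
    readings = {}
    if hasattr(time, "clock_gettime") and hasattr(time, "CLOCK_MONOTONIC"):
        readings["CLOCK_MONOTONIC"] = time.clock_gettime(time.CLOCK_MONOTONIC)
        if hasattr(time, "CLOCK_MONOTONIC_RAW"):   # Linux >= 2.6.28 only
            readings["CLOCK_MONOTONIC_RAW"] = time.clock_gettime(
                time.CLOCK_MONOTONIC_RAW)
    return readings

print(read_monotonic_clocks())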

From kristjan at ccpgames.com  Fri Apr  6 19:07:48 2012
From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=)
Date: Fri, 6 Apr 2012 17:07:48 +0000
Subject: [Python-Dev] this is why we shouldn't call it a
	"monotonic	clock" (was: PEP 418 is too divisive and confusing
	and should	be postponed)
In-Reply-To: <4F7EC1A2.4050501@pearwood.info>
References: <CANdZDc527cvHGbQGnNgPBHu673sDi1YOBS3wkUbOejwYQ8BBow@mail.gmail.com>
	<5741767C-F616-490F-917E-0801DA64BE47@twistedmatrix.com>,
	<4F7EC1A2.4050501@pearwood.info>
Message-ID: <EFE3877620384242A686D52278B7CCD3387EF6@RKV-IT-EXCH104.ccp.ad.local>

This is the original reason for the original defect (issue 10278):
Unix's clock() doesn't actually provide a clock in this sense; it provides a resource-usage metric.
K

________________________________________
Frá: python-dev-bounces+kristjan=ccpgames.com at python.org [python-dev-bounces+kristjan=ccpgames.com at python.org] fyrir hönd Steven D'Aprano [steve at pearwood.info]
Sent: 6. apríl 2012 10:12
To: Python-Dev
Efni: Re: [Python-Dev] this is why we shouldn't call it a "monotonic clock" (was: PEP 418 is too divisive and confusing and should be postponed)

I think that this is incorrect.

py> time.clock(); time.sleep(10); time.clock()
0.41
0.41




--
Steven


From vinay_sajip at yahoo.co.uk  Fri Apr  6 22:06:22 2012
From: vinay_sajip at yahoo.co.uk (Vinay Sajip)
Date: Fri, 6 Apr 2012 13:06:22 -0700 (PDT)
Subject: [Python-Dev] Possible change to logging.handlers.SysLogHandler
Message-ID: <f755d5d3-b2f4-4d3a-8ad7-9b1e0d950b99@i18g2000vbx.googlegroups.com>

There is a problem with the way logging.handlers.SysLogHandler works
when presented with Unicode messages. According to RFC 5424, Unicode
is supposed to be sent encoded as UTF-8 and preceded by a BOM.
However, the current handler implementation puts the BOM at the start
of the formatted message, and this is wrong in scenarios where you
want to put some additional structured data in front of the
unstructured message part; the BOM is supposed to go after the
structured part (which, therefore, has to be ASCII) and before the
unstructured part. In that scenario, the handler's current behaviour
does not strictly conform to RFC 5424.

The issue is described in [1]. The BOM was originally added, and its
position changed, in response to [2] and [3].

It is not possible to achieve conformance with the current
implementation of the handler, unless you subclass the handler and
override the whole emit() method. This is not ideal. For 3.3, I will
refactor the implementation to expose a method which creates the byte
string which is sent over the wire to the syslog daemon. This method
can then be overridden for specific use cases where needed.
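
As a rough illustration only (this is not the actual planned refactoring,
and the structured_data attribute below is an invented placeholder), the
overridable method could look something like this:

import logging.handlers

class StructuredSysLogHandler(logging.handlers.SysLogHandler):
    # Sketch: build the on-the-wire bytes in one overridable method so a
    # subclass controls where the BOM goes.  structured_data is hypothetical;
    # "-" is RFC 5424's "nil" value, and it must remain ASCII.
    structured_data = '-'

    def build_wire_message(self, record):
        prio = '<%d>' % self.encodePriority(
            self.facility, self.mapPriority(record.levelname))
        msg = self.format(record)
        # ASCII header and structured data first, then the BOM, then the
        # UTF-8 encoded free-form message, per RFC 5424.
        return ((prio + self.structured_data + ' ').encode('ascii')
                + b'\xef\xbb\xbf' + msg.encode('utf-8'))

emit() would then send the result of build_wire_message() instead of
assembling the bytes itself.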

However, for 2.7 and 3.2, removing the BOM insertion would bring the
implementation into conformance with the RFC, though the entire message
would have to be regarded as just a set of octets. A Unicode message
would still be encoded using UTF-8, but the BOM would be left out.

I am thinking of removing the BOM insertion in 2.7 and 3.2 - although
it is a change in behaviour, the current behaviour does seem broken
with regard to RFC 5424 conformance. However, as some might disagree
with that assessment and view it as a backwards-incompatible behaviour
change, I thought I should post this to get some opinions about
whether this change is viewed as objectionable.

Regards,

Vinay Sajip

[1] http://bugs.python.org/issue14452
[2] http://bugs.python.org/issue7077
[3] http://bugs.python.org/issue8795

From regebro at gmail.com  Fri Apr  6 22:36:28 2012
From: regebro at gmail.com (Lennart Regebro)
Date: Fri, 6 Apr 2012 22:36:28 +0200
Subject: [Python-Dev] PEP 418 is too divisive and confusing and should
	be postponed
In-Reply-To: <CAMpsgwaPtm_M3wVmpqGUwmBEmbT8V=qFs1GU0-Fak=+Ws6JjYQ@mail.gmail.com>
References: <4F7B96F1.6020906@pearwood.info> <4F7BA3C2.4050705@gmail.com>
	<CAMpsgwbZKgEzbZmsVBeM1vNfjRUxEdGH=FAybvf3HgNOsRsnRA@mail.gmail.com>
	<CAL0kPAVuPoYEo=hqTaryi1YNqhpF-8ngJAGrBwaQ4N3Y-M34dA@mail.gmail.com>
	<4F7CCF1D.2010600@canterbury.ac.nz>
	<CAL0kPAU0Zp5YwH3J+9KKqQ2r7QZo15o=VrqvViaVsNx7j4kQDw@mail.gmail.com>
	<CAMpsgwaPtm_M3wVmpqGUwmBEmbT8V=qFs1GU0-Fak=+Ws6JjYQ@mail.gmail.com>
Message-ID: <CAL0kPAXYupY-Sz6u6gmhJ1H65cqJuQPy1B=U3Wmat85Pk1QN9A@mail.gmail.com>

On Thu, Apr 5, 2012 at 12:32, Victor Stinner <victor.stinner at gmail.com> wrote:
> I prefer to use CLOCK_MONOTONIC, not because it is also available for
> older Linux kernels, but because it is more reliable. Even if the
> underlying clock source is unstable (unstable frequency), a delta of
> two reads of the CLOCK_MONOTONIC clock is a result in *seconds*,
> whereas CLOCK_MONOTONIC_RAW may use an unit a little bit bigger or
> smaller than a second.

Aha. OK, CLOCK_MONOTONIC it is then.

//Lennart

From regebro at gmail.com  Fri Apr  6 23:05:39 2012
From: regebro at gmail.com (Lennart Regebro)
Date: Fri, 6 Apr 2012 23:05:39 +0200
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
In-Reply-To: <20120405221758.GA12229@cskk.homeip.net>
References: <CAL0kPAUCaAYa-RsaN5Q2H_j+NT+9q4fFwDXLimg6wxuapYpnSg@mail.gmail.com>
	<20120405221758.GA12229@cskk.homeip.net>
Message-ID: <CAL0kPAWJaWLiRwwzaPYDq7MO2Yj9vBObYt2GHQAUcX3+fqQAyA@mail.gmail.com>

On Fri, Apr 6, 2012 at 00:17, Cameron Simpson <cs at zip.com.au> wrote:
> Gah! ALL functions are like that! How often do we see questions about
> max() or split() etc that a close reading of the docs obviate?

My point exactly.

//Lennart

From ethan at stoneleaf.us  Fri Apr  6 23:26:05 2012
From: ethan at stoneleaf.us (Ethan Furman)
Date: Fri, 06 Apr 2012 14:26:05 -0700
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
In-Reply-To: <CAL0kPAWJaWLiRwwzaPYDq7MO2Yj9vBObYt2GHQAUcX3+fqQAyA@mail.gmail.com>
References: <CAL0kPAUCaAYa-RsaN5Q2H_j+NT+9q4fFwDXLimg6wxuapYpnSg@mail.gmail.com>	<20120405221758.GA12229@cskk.homeip.net>
	<CAL0kPAWJaWLiRwwzaPYDq7MO2Yj9vBObYt2GHQAUcX3+fqQAyA@mail.gmail.com>
Message-ID: <4F7F5F6D.7020003@stoneleaf.us>

Lennart Regebro wrote:
> On Fri, Apr 6, 2012 at 00:17, Cameron Simpson <cs at zip.com.au> wrote:
>> 
> Good point, but the same does for using flags. If you don't pass in
> the MONOTONIC flag, what happens? Only reading the documentation will
> tell you. As such this, if anything, is an indication that the
> get_clock() API isn't ideal in any incarnation.
>> 
>> Gah! ALL functions are like that! How often do we see questions about
>> max() or split() etc that a close reading of the docs obviate?
> 
> My point exactly.

Huh?  Your point is that all APIs are less than ideal because you have 
to read the docs to know for certain how they work?

~Ethan~

From guido at python.org  Fri Apr  6 23:54:39 2012
From: guido at python.org (Guido van Rossum)
Date: Fri, 6 Apr 2012 14:54:39 -0700
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
In-Reply-To: <4F7F5F6D.7020003@stoneleaf.us>
References: <CAL0kPAUCaAYa-RsaN5Q2H_j+NT+9q4fFwDXLimg6wxuapYpnSg@mail.gmail.com>
	<20120405221758.GA12229@cskk.homeip.net>
	<CAL0kPAWJaWLiRwwzaPYDq7MO2Yj9vBObYt2GHQAUcX3+fqQAyA@mail.gmail.com>
	<4F7F5F6D.7020003@stoneleaf.us>
Message-ID: <CAP7+vJKYT5K4uN9UCNA=KnGidOJVXx_MOpSpBjqw4xAeaihusQ@mail.gmail.com>

I don't know who started this, but the PEP 418 threads have altogether
too much snarkiness and not enough content. It's bad enough that we're
bikeshedding so intensely; we don't need clever comebacks in
triplicate to every out-of-context argument.

--Guido

On Fri, Apr 6, 2012 at 2:26 PM, Ethan Furman <ethan at stoneleaf.us> wrote:
> Lennart Regebro wrote:
>>
>> On Fri, Apr 6, 2012 at 00:17, Cameron Simpson <cs at zip.com.au> wrote:
>>>
>>>
>> Good point, but the same does for using flags. If you don't pass in
>>
>> the MONOTONIC flag, what happens? Only reading the documentation will
>> tell you. As such this, if anything, is an indication that the
>>
>> get_clock() API isn't ideal in any incarnation.
>>>
>>>
>>> Gah! ALL functions are like that! How often do we see questions about
>>> max() or split() etc that a close reading of the docs obviate?
>>
>>
>> My point exactly.
>
>
> Huh? ?Your point is that all APIs are less than ideal because you have to
> read the docs to know for certain how they work?
>
> ~Ethan~
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/guido%40python.org



-- 
--Guido van Rossum (python.org/~guido)

From cs at zip.com.au  Sat Apr  7 00:11:20 2012
From: cs at zip.com.au (Cameron Simpson)
Date: Sat, 7 Apr 2012 08:11:20 +1000
Subject: [Python-Dev] this is why we shouldn't call it a "monotonic
 clock" (was: PEP 418 is too divisive and confusing and should be postponed)
In-Reply-To: <20120406051945.GA20040@cskk.homeip.net>
References: <20120406051945.GA20040@cskk.homeip.net>
Message-ID: <20120406221120.GA12534@cskk.homeip.net>

On 06Apr2012 15:19, I wrote:
| On 06Apr2012 14:31, Steven D'Aprano <steve at pearwood.info> wrote:
| | Here is a non-monotonic sequence:
| | 
| | 1, 2, 3, 4, 5, 6, 7, 2, 3, 4, 5, 6, 7, 8
| | 
| | This isn't steady either, because it jumps backwards.
| | 
| | To be steady, it MUST also be monotonic. If you think that it is appropriate 
| | to call a non-monotonic clock "steady", then I think you should tell us what 
| | you mean by "steady but not monotonic".
| 
| I took steady to mean "never jumps more than x", for "x" being "small",
| and allowing small negatives. If steady implies monotonic and people
| agree that that is so, I'm happy too, and happy that steady is a better
| aspiration than merely monotonic.

I've had some sleep. _Of course_ steady implies monotonic, or it
wouldn't steadily move forwards.
-- 
Cameron Simpson <cs at zip.com.au> DoD#743
http://www.cskk.ezoshosting.com/cs/

I went to see a psychiatrist.  He told me I was crazy.  I told him
I wanted a second opinion, so he said, "Ok, you're ugly, too."
        - Rodney Dangerfield

From timothy.c.delaney at gmail.com  Sat Apr  7 00:17:15 2012
From: timothy.c.delaney at gmail.com (Tim Delaney)
Date: Sat, 7 Apr 2012 08:17:15 +1000
Subject: [Python-Dev] PEP 418 is too divisive and confusing and should
	be postponed
In-Reply-To: <4F7B96F1.6020906@pearwood.info>
References: <4F7B96F1.6020906@pearwood.info>
Message-ID: <CAN8CLg=dg9ANJ24Lb_NdYowkTS2YhQss6ggJ2AybGnGMTnjdCQ@mail.gmail.com>

On 4 April 2012 10:33, Steven D'Aprano <steve at pearwood.info> wrote:

> try:
>    from os import bestclock as _bestclock
> except ImportError:
>    _bestclock = time
>

My problem here is that "best clock" means different things to different
people (as the number of emails shows).

I think exposing specific clocks is also useful (sometimes people may need
a steady clock, and early failure is better than clock skew). However, I
propose a loosely classified set of clocks built on top of the specific
clocks, all of which can fall back to the lowest precision/non-monotonic
clock if needed.

1. The "steadiest" clock on the system. Ideally this would be a steady
clock, but may not be.

2. The "most precise" clock on the system. This would have the finest-grain
tick available on the system.

3. The "highest performance" (or maybe "lowest latency") clock. This would
be whichever clock on the system returned its results fastest.

I'm not sure if there are more that would be needed ("most accurate" comes
to mind, but feels like it's arbitrarily choosing between steadiest and
most precise, so I don't think it's valid).

By choosing relative terms, it caters to people's desire to have the "best"
clock, but doesn't set the expectation that the behaviour is guaranteed.
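
A minimal sketch of the fallback idea (the category name is illustrative
only, and it assumes nothing beyond the fact that time.monotonic() may be
missing on some builds):

import time

def steadiest():
    # Best-effort "steadiest" clock: use time.monotonic() when it exists,
    # otherwise fall back to the wall clock.
    return getattr(time, "monotonic", time.time)()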

Tim Delaney

From cs at zip.com.au  Sat Apr  7 00:28:27 2012
From: cs at zip.com.au (Cameron Simpson)
Date: Sat, 7 Apr 2012 08:28:27 +1000
Subject: [Python-Dev] this is why we shouldn't call it a
 "monotonic	clock" (was: PEP 418 is too divisive and confusing and should	be
 postponed)
In-Reply-To: <EFE3877620384242A686D52278B7CCD3387EF6@RKV-IT-EXCH104.ccp.ad.local>
References: <EFE3877620384242A686D52278B7CCD3387EF6@RKV-IT-EXCH104.ccp.ad.local>
Message-ID: <20120406222827.GA14768@cskk.homeip.net>

On 06Apr2012 17:07, Kristján Valur Jónsson <kristjan at ccpgames.com> wrote:
| Steven D'Aprano:
| > I think that this is incorrect.
| > py> time.clock(); time.sleep(10); time.clock()
| > 0.41
| > 0.41
|
| This is the original reason for the original defect (issue 10278)
| unix' clock() doesn't actually provide a clock in this sense, it provides a resource usage metric.

Yeah:-( Its help says "Return the CPU time or real time since [...]".
Two very different things, as demonstrated. I suppose neither goes
backwards, but this seems like a classic example of the "useless
monotonic clock" against which Greg Ewing railed.

And why? For one thing, because one can't inspect its metadata to find
out what it does.
-- 
Cameron Simpson <cs at zip.com.au> DoD#743
http://www.cskk.ezoshosting.com/cs/

Tens of thousands of messages, hundreds of points of view.  It was not called
the Net of a Million Lies for nothing.  - Vernor Vinge, _A Fire Upon The Deep_

From victor.stinner at gmail.com  Sat Apr  7 01:01:45 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Sat, 7 Apr 2012 01:01:45 +0200
Subject: [Python-Dev] PEP 418 is too divisive and confusing and should
	be postponed
In-Reply-To: <CAN8CLg=dg9ANJ24Lb_NdYowkTS2YhQss6ggJ2AybGnGMTnjdCQ@mail.gmail.com>
References: <4F7B96F1.6020906@pearwood.info>
	<CAN8CLg=dg9ANJ24Lb_NdYowkTS2YhQss6ggJ2AybGnGMTnjdCQ@mail.gmail.com>
Message-ID: <CAMpsgwYk3caCqerp6YZeTsvzfwTAHqi690e47N0NbYrUcyZV2A@mail.gmail.com>

> 1. The "steadiest" clock on the system. Ideally this would be a steady
> clock, but may not be.

time.monotonic() as proposed in PEP 418 *is* the steadiest
available clock, but we cannot say that it is steady :-)

> 2. The "most precise" clock on the system. This would have the finest-grain
> tick available on the system.

It's discussed in the "Deferred API: time.perf_counter()" section. It
would be nice to provide such a clock, but I don't feel able right now
to propose an API for such a requirement. It's unclear to me whether it must
be monotonic, steady, count elapsed time during a sleep or not, etc.

It is already very hard to propose one single time function
(time.monotonic), so I chose to simplify the PEP and not propose two
functions but only one :-)

> 3. The "highest performance" (or maybe "lowest latency") clock. This would
> be whichever clock on the system returned its results fastest.

Linux provides CLOCK_REALTIME_COARSE and CLOCK_MONOTONIC_COARSE
clocks, and reading the ACPI Power Management clock is known to be slow.
But should the clock be monotonic or not? Return seconds or CPU ticks?
If the clock is not well defined, it's useless or at least not
portable. Exposing the CLOCK_REALTIME_COARSE and CLOCK_MONOTONIC_COARSE
constants should be enough.
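
(Purely as an illustration, assuming such constants were exposed next to
time.clock_gettime(); the hasattr() guards are there because neither is
guaranteed to exist:)

import time

# Hypothetical usage if CLOCK_MONOTONIC_COARSE were exposed as a constant;
# clock_gettime() itself is the POSIX call the PEP builds on.
if hasattr(time, "clock_gettime") and hasattr(time, "CLOCK_MONOTONIC_COARSE"):
    print(time.clock_gettime(time.CLOCK_MONOTONIC_COARSE))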

Victor

From cs at zip.com.au  Sat Apr  7 01:11:44 2012
From: cs at zip.com.au (Cameron Simpson)
Date: Sat, 7 Apr 2012 09:11:44 +1000
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
In-Reply-To: <4F7EC490.4030000@pearwood.info>
References: <4F7EC490.4030000@pearwood.info>
Message-ID: <20120406231144.GA16381@cskk.homeip.net>

On 06Apr2012 20:25, Steven D'Aprano <steve at pearwood.info> wrote:
| Cameron Simpson wrote:
| > My core objective was to allow users to query for clocks, and ideally
| > enumerate and inspect all clocks. Without the caller having platform
| > specific knowledge.
| 
| Clocks *are* platform specific -- not just in their availability, but also in 
| the fine details of their semantics and behaviour. I don't think we can or 
| should try to gloss over this.

This is why get_clock() returns a clock object, which can have metadata
exposing such details. Up to and including the name of the platform specific
library/system-call at its core.

The issue with monotonic() on its own is that the guarantees in the doco
will have to be fairly loose. That prevents the user from learning about
"fine details of their semantics and behaviour". Glossing over this
stuff is exactly what offering _only_ a few generically characterised
clock names (monotonic() et al) does.

| If people are making decisions about timers 
| without knowledge of what their platform supports, they're probably making 
| poor decisions. Even the venerable time.time() and time.clock() differ between 
| Linux and Windows.

time.clock() does, as (you?) clearly demonstrated elsewhere.

time.time()? (Aside from precision?)

| > Allowing for the sake of discussion that this is desirable, what would
| > you propose as an API instead of get_clock() (and its friend, get_clocks()
| > for enumeration, that I should stuff into the code).
| 
| The old ways are the best. We don't have math.get_trig() and math.get_trigs() 
| functions for querying trigonometric functions, we just expose the functions 
| directly.
| 
| I think the way to enumerate and inspect all clocks is with the tried and true 
| Python introspection tools that people use on all other functions:
| 
| * use dir(time) to see a list of names available in the module

So, they see "monotonic". Does that tell them much about fine details?

| * use help(time) to read their help

Useful only to humans, not programs.

| * read the Fine Manual to find out more

Useful only to humans, not programs.

| * use try... except... to detect the existence of a clock

Useful only for a fixed list of defined names. Works fine for monotonic,
highres, steady or whatever. And I would be ok with the module
presenting these only where available and concealing them otherwise,
thus raising AttributeError. Or ImportError ("from time import
monotonic").

| There's nothing special about clocks that needs anything more than this.

This I think is false. In fact, I think your own statement at the start
about glossing over fine details goes against this.

If I ask for a highres clock, I might well care _how_ precise it was.

If I ask for a steady clock, I might well care how large its slews were.

If I ask for a monotonic clock, I might well want to know if it tracks
wall clock time (even if by magic) or elapsed system run time (eg time
that stops increasing if the system is suspended, whereas wallclocks do
not). Example: a wallclock is nice for log timestamps. A system run time
clock is nice for profiling. They're both monotonic in some domain.

| get_clock() looks like a factory function, but it actually isn't. It just 
| selects from a small number of pre-existing clocks.

That number may still be a few. Victor's made it clear that Windows
has a choice of possible highres clocks, UNIX clock_getres() offers
several possible clock behaviours and an indication that a platform may
have several clocks embodying a subset of these, and may indeed offer
more clocks.

| We should just expose 
| those pre-existing clocks directly.

But exposing them _purely_ _by_ _name_ means inventing names for every single
platform clock, and knowing those names per platform. time.clock() is a
fine example where the name tells you nearly nothing about the clock
behaviour. If the user cares about fine detail as you suggest they need
to know their platform and have _external_ knowledge of the platform
specifics; they can't inspect from inside the program.

| I don't see any advantage in adding that 
| extra level of indirection or the addition of all this complexity:
| * a function get_clock() to select a clock
| * a function get_clocks() to enumerate all the clocks

These are only two functions because the next alternative seemed to be an
all_clocks= mode parameter, which would change the return signature of the
function.

Another alternative is the public lists-of-clocks.

The point is to be able to enumerate all available clocks for
consideration of their properties; get_clock() provides a simple way
to coarsely say "a clock like _this_" for the common instances of
"this".

| * another function for querying the properties of a clock

No, that's why you get a clock object back. You can examine it directly
for defined metadata names (epoch, precision, underlying-os-clock-name,
etc). In exactly the fashion you appear to want for the top level
offerings: by knowing the metadata property names.

| All those functions accomplish is to increase the complexity of the API, the 
| documentation and the implementation. It's one more special case for the user 
| to learn:
| 
| "To find out what functions are available, use dir(module), except for clocks, 
| where you have to use time.get_clocks()."

But dir(module) _will_ list monotonic et al anyway, and possibly matching
public clock list names. get_clock() is only for when you want to dig
around more flexibly.

| Another problem with get_clock() -- it will be an attractive nuisance for the 
| sort of person who cares about symmetry and completeness. You will have a 
| steady trickle of "feature requests" from users who are surprised that not 
| every combination of features is supported. Out of the eight or sixteen or 
| thirty-two potential clocks that get_clock() tempts the user with, only three 
| or five will actually exist.

And the optional "clocklist" parameter addresses such feaping creaturism
by providing a hook for _other_ modules to offer a clock list. Such as a
list of synthetic clocks with cool (or insane:-) properties. Without
burdening the time module.

| The only advantage of get_clock is that you don't need to know the *name* of a 
| platform clock in order to use it, you can describe it with a series of flags 
| or enums. But in practice, that's not an advantage, that's actually a 
| disadvantage. Consider:
| 
| "Which clock should I use for such-and-such a task, foo or bar?"

What's your list of foo, bar? Again, I'm not talking about removing
monotonic et al. I'm talking about exposing the alternatives for when
the chosen-by-the-module monotonic doesn't fit.

| versus
| "Which clock should I use for such-and-such a task, get_clock(spam, eggs, 
| cheese) or get_clock(ham, eggs, truffles)?"

One hopes the user knows the task. Then they can specify cheese or
truffles. Again, only if they feel they need to because the bare
monotonic et al don't fit, or are too vague.

| The mere mechanics of talking about these clocks will suffer because they 
| aren't named.

But they _can_ be named! get_clock() is for when you don't know or care
about their names, only their behaviours! And also for when an available clock
_wasn't_ one returned by the monotonic et al names.
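
For concreteness, a purely hypothetical sketch of the shape I have in mind
(none of these names exist in the time module; they only illustrate the idea
of clock objects carrying inspectable metadata):

import time
from collections import namedtuple

# Hypothetical clock objects; every name here is invented for illustration.
Clock = namedtuple("Clock", "name func monotonic steady resolution")

ALL_CLOCKS = [Clock("time", time.time, False, False, None)]
if hasattr(time, "monotonic"):
    ALL_CLOCKS.append(Clock("monotonic", time.monotonic, True, False, None))

def get_clock(monotonic=False, steady=False, clocklist=None):
    "Return the first clock matching the requested behaviours, or None."
    for clock in (clocklist if clocklist is not None else ALL_CLOCKS):
        if monotonic and not clock.monotonic:
            continue
        if steady and not clock.steady:
            continue
        return clock
    return None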

Cheers,
-- 
Cameron Simpson <cs at zip.com.au> DoD#743
http://www.cskk.ezoshosting.com/cs/

I do not trust thee, Cage from Hell, / The reason why I cannot tell, /
But this I know, and know full well: / I do not trust thee, Cage from Hell.
        - Leigh Ann Hussey, leighann at sybase.com, DoD#5913

From victor.stinner at gmail.com  Sat Apr  7 01:16:33 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Sat, 7 Apr 2012 01:16:33 +0200
Subject: [Python-Dev] this is why we shouldn't call it a "monotonic
 clock" (was: PEP 418 is too divisive and confusing and should be postponed)
In-Reply-To: <20120406222827.GA14768@cskk.homeip.net>
References: <EFE3877620384242A686D52278B7CCD3387EF6@RKV-IT-EXCH104.ccp.ad.local>
	<20120406222827.GA14768@cskk.homeip.net>
Message-ID: <CAMpsgwbDB4kkr2G=Si1+qKCEzFBFgpCo0LuEHLR7og8H14T1UA@mail.gmail.com>

> | This is the original reason for the original defect (issue 10278)
> | unix' clock() doesn't actually provide a clock in this sense, it provides a resource usage metric.
>
> Yeah:-( Its help says "Return the CPU time or real time since [...]".
> Two very different things, as demonstrated. I suppose neither goes
> backwards, but this seems like a classic example of the "useless
> monotonic clock" against which Greg Ewing railed.
>
> And why? For one thing, because one can't inspect its metadata to find
> out what it does.

Should I add another key to the result of
time.get_clock_info('clock')? How can we define "clock on Windows"
(almost monotonic and steady clock) vs "clock on UNIX" (CPU time) with
a flag or a value?
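
(For context, a small sketch of how a program might consume that metadata,
assuming the PEP's time.get_clock_info() lands roughly as drafted; the exact
field names are still an open question, which is the point of asking:)

import time

# Sketch only: the available clock names and the shape of the returned
# information are assumptions based on the PEP 418 draft.
for name in ("clock", "monotonic", "time"):
    try:
        info = time.get_clock_info(name)
    except (AttributeError, ValueError):
        continue   # function or clock not available in this Python build
    print(name, info)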

Victor

From victor.stinner at gmail.com  Sat Apr  7 01:47:16 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Sat, 7 Apr 2012 01:47:16 +0200
Subject: [Python-Dev] this is why we shouldn't call it a "monotonic
 clock" (was: PEP 418 is too divisive and confusing and should be postponed)
In-Reply-To: <CANdZDc527cvHGbQGnNgPBHu673sDi1YOBS3wkUbOejwYQ8BBow@mail.gmail.com>
References: <CANdZDc527cvHGbQGnNgPBHu673sDi1YOBS3wkUbOejwYQ8BBow@mail.gmail.com>
Message-ID: <CAMpsgwZ-j41v62s+VQWAJ-Y4AUFRwneVkZJtq13kr4oQb2S37Q@mail.gmail.com>

> 2. Those who think that "monotonic clock" means a clock that never
> jumps, and that runs at a rate approximating the rate of real time.
> This is a very useful kind of clock to have! It is what C++ now calls
> a "steady clock". It is what all the major operating systems provide.

For the "C++" part, I suppose that you are thinking to:
"Objects of class steady_clock represent clocks for which values of
time_point advance at a steady rate relative to real time. That is,
the clock may not be adjusted."
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2010/n3128.html#time.clock.steady

I don't understand this definition. All clocks have a clock drift.
There is just one exception: atomic clocks, but such clocks are rare
and very expensive.

http://www.clocktypes.com/buy_atomic_clocks.html
"Atomic clocks can have a high price, but if you really want to buy
one there is at least one place you can purchase an atomic clock.

    Agilent Technologies (www.agilient.com)
    Model number 5071A atomic clock with a long-term stability better
than 1 x 10^-14, price - $50,390"

There is a simple "trick" to get a very cheap steady clock: adjust the
clock manually. Extract of a Wikipedia article:
"More advanced clocks and old mechanical clocks often have some kind
of speed trimmer where one can adjust the speed of the clock and thus
reduce the clock drift. For instance, in pendulum clocks the clock
drift can be manipulated by slightly changing the length of the
pendulum."
http://en.wikipedia.org/wiki/Clock_drift

Or you can use an NTP daemon to adjust it automatically using a free farm
of atomic clocks distributed around the world.

So you can get a cheap steady clock if you accept that (OMG!) it can
be adjusted.

Or did I misunderstand "the clock may not be adjusted"?

Victor

From janzert at janzert.com  Sat Apr  7 02:17:52 2012
From: janzert at janzert.com (Janzert)
Date: Fri, 06 Apr 2012 20:17:52 -0400
Subject: [Python-Dev] PEP 418 is too divisive and confusing and should
	be postponed
In-Reply-To: <CAMpsgwaPtm_M3wVmpqGUwmBEmbT8V=qFs1GU0-Fak=+Ws6JjYQ@mail.gmail.com>
References: <4F7B96F1.6020906@pearwood.info> <4F7BA3C2.4050705@gmail.com>
	<CAMpsgwbZKgEzbZmsVBeM1vNfjRUxEdGH=FAybvf3HgNOsRsnRA@mail.gmail.com>
	<CAL0kPAVuPoYEo=hqTaryi1YNqhpF-8ngJAGrBwaQ4N3Y-M34dA@mail.gmail.com>
	<4F7CCF1D.2010600@canterbury.ac.nz>
	<CAL0kPAU0Zp5YwH3J+9KKqQ2r7QZo15o=VrqvViaVsNx7j4kQDw@mail.gmail.com>
	<CAMpsgwaPtm_M3wVmpqGUwmBEmbT8V=qFs1GU0-Fak=+Ws6JjYQ@mail.gmail.com>
Message-ID: <jlo136$soq$1@dough.gmane.org>

On 4/5/2012 6:32 AM, Victor Stinner wrote:
>>>> Since the only monotonic clock that can be adjusted by NTP is Linux'
>>>> CLOCK_MONOTONIC, if we avoid it, then time.monotonic() would always
>>>> give a clock that isn't adjusted by NTP.
>>>
>>> I thought we decided that NTP adjustment isn't an issue, because
>>> it's always gradual.
>>
>> Well, in timings it is an issue, but perhaps not a big one. :-)
>> In any case, which one we use will not change the API, so if it is
>> decided it is an issue, we can always more to CLOCK_MONOTONIC_RAW in
>> the future, once Linux<  2.6.26 (or whatever it was) is deemed
>> unsupported.
>
> I prefer to use CLOCK_MONOTONIC, not because it is also available for
> older Linux kernels, but because it is more reliable. Even if the
> underlying clock source is unstable (unstable frequency), a delta of
> two reads of the CLOCK_MONOTONIC clock is a result in *seconds*,
> whereas CLOCK_MONOTONIC_RAW may use an unit a little bit bigger or
> smaller than a second. time.monotonic() unit is the second, as written
> in its documentation.
>

I believe the above is only true for sufficiently large time deltas. One 
of the major purposes of NTP slewing is to give up some short term 
accuracy in order to achieve long term accuracy (e.g. whenever the clock 
is found to be ahead of real time it is purposefully ticked slower than 
real time).

So for benchmarking it would not be surprising to be better off with the 
non-adjusted clock. Ideally there would be a clock that was slewed "just 
enough" to try and achieve short term accuracy, but I don't know of 
anything providing that.

Janzert


From v+python at g.nevcal.com  Sat Apr  7 02:30:59 2012
From: v+python at g.nevcal.com (Glenn Linderman)
Date: Fri, 06 Apr 2012 17:30:59 -0700
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
In-Reply-To: <20120406231144.GA16381@cskk.homeip.net>
References: <4F7EC490.4030000@pearwood.info>
	<20120406231144.GA16381@cskk.homeip.net>
Message-ID: <4F7F8AC3.7010702@g.nevcal.com>

On 4/6/2012 4:11 PM, Cameron Simpson wrote:
> Another alternative is the public lists-of-clocks.

After watching this thread with amusement and frustration (amusement 
because it is so big and so many people have so many different 
opinions; frustration because it seems that few of the clocks that are 
available are anywhere near ideal for any particular stated 
characteristic, and because none of the APIs presented provide a way for 
the user to specify the details of the characteristics of the desired 
clock), I think this idea of a list-of-clocks sounds better and better.

Hopefully, for each  system, the characteristics of each clock can be 
discovered, and fully characterized in available metadata for the clock...

tick rate, or list of tick rates
maximum variation of tick rate
precision
maximum "helicopter drop" jump delta
monotonicity
frequency of rollover or None
base epoch value or None
behavior during system sleep, hibernate, suspend, shutdown, battery 
failure, flood, wartime events, and acts of God. These last two may have 
values that are long prose texts full of political or religious 
rhetoric, such as the content of this thread :)
any other characteristics I forgot to mention

Of course, it is not clear that all of these characteristics can be 
determined based on OS/Version; hardware vendors may have different 
implementations.

There should be a way to add new clock objects to the list, given a set 
of characteristics, and an API to retrieve them, at least by installing 
a submodule that provides access to an additional clock.
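
A hypothetical sketch of what such per-clock metadata could look like as a
Python object (every field name below is invented for illustration, and the
values are placeholders rather than measurements):

from collections import namedtuple

ClockInfo = namedtuple(
    "ClockInfo",
    "name tick_rate max_drift precision max_jump monotonic rollover epoch "
    "suspend_behaviour")

example = ClockInfo(
    name="CLOCK_MONOTONIC",            # illustrative
    tick_rate=None,                    # unknown unless the OS reports it
    max_drift=None,
    precision=None,
    max_jump=None,
    monotonic=True,
    rollover=None,                     # None meaning "does not roll over"
    epoch=None,                        # undefined start point
    suspend_behaviour="implementation defined",
)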

From cs at zip.com.au  Sat Apr  7 06:22:56 2012
From: cs at zip.com.au (Cameron Simpson)
Date: Sat, 7 Apr 2012 14:22:56 +1000
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
In-Reply-To: <4F7F8AC3.7010702@g.nevcal.com>
References: <4F7F8AC3.7010702@g.nevcal.com>
Message-ID: <20120407042255.GA12672@cskk.homeip.net>

On 06Apr2012 17:30, Glenn Linderman <v+python at g.nevcal.com> wrote:
| On 4/6/2012 4:11 PM, Cameron Simpson wrote:
| > Another alternative is the public lists-of-clocks.
| 
| After watching this thread with amusement and frustration, amusement 
| because it is so big, and so many people have so many different 
| opinions, frustration, because it seems that few of the clocks that are 
| available are anywhere near ideal for any particular stated 
| characteristic,

My partner has occasionally opined that most Prolog programs simply
result in "*** NO ***". We could optimise for that and simplify the
implementation enormously. It would also let us provide very strong
guarantees about the offered clocks, on the basis that no suitable clock
would ever be provided :-)

| and because none of the APIs presented provide a way for 
| the user to specify the details of the characteristics of the desired 
| clock, I think this idea of a list-of-clocks sounds better and better.
| 
| Hopefully, for each  system, the characteristics of each clock can be 
| discovered, and fully characterized in available metadata for the clock...

Victor has asked me to do that for my skeleton, based on the tables he
has assembled. I'll see what I can do there...

| Of course, it is not clear that all of these characteristics can be 
| determined based on OS/Version; hardware vendors may have different 
| implementations.

If you can look up the kernel revision you can do fairly well. In
principle.

| There should be a way to add new clock objects to the list, given a set 
| of characteristics, and an API to retrieve them, at least by installing 
| a submodule that provides access to an additional clock.

Returning to seriousness, the get_clock() call admits a clocklist.
-- 
Cameron Simpson <cs at zip.com.au> DoD#743
http://www.cskk.ezoshosting.com/cs/

Principles have no real force except when one is well fed.      - Mark Twain

From stephen at xemacs.org  Sat Apr  7 10:12:30 2012
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Sat, 7 Apr 2012 17:12:30 +0900
Subject: [Python-Dev] this is why we shouldn't call it a "monotonic
 clock" (was: PEP 418 is too divisive and confusing and should be postponed)
In-Reply-To: <CAMpsgwZ1SiTcE_HHVnEweXPE2bqnowac+nK5DTJnR3jwkHw9MQ@mail.gmail.com>
References: <CANdZDc527cvHGbQGnNgPBHu673sDi1YOBS3wkUbOejwYQ8BBow@mail.gmail.com>
	<CAMpsgwZ1SiTcE_HHVnEweXPE2bqnowac+nK5DTJnR3jwkHw9MQ@mail.gmail.com>
Message-ID: <CAL_0O1_o=x_=b3J-zxkHq967ZsJifp5__w-O=D=FTDE6b-h1dA@mail.gmail.com>

On Fri, Apr 6, 2012 at 11:32 PM, Victor Stinner
<victor.stinner at gmail.com> wrote:

> On Linux, I now prefer
> to use CLOCK_MONOTONIC (monotonic) than CLOCK_MONOTONIC_RAW
> (monotonic and steady as defined by C++) *because* its frequency is
> adjusted.

I don't think that's a reason that should be considered.  There just
doesn't seem to be a single best clock, nor do clocks of similar
character seem to be easy to find across platforms.  So the reasons
I'd like to see are of the form "we should provide CLOCK_MONOTONIC on
Linux as one of our small selection of recommended clocks *because*
the frequency adjustment makes it *most* useful in use-cases A and B,
and it's a *reasonable* choice in use-case C *but* we need to document
that it's a terrible choice for use-case D."

Why do I ask for this kind of argument?  Because there are only a few
people (you, Glyph, Zooko) who seem to have studied clocks closely
enough to be able to evaluate the technical issues involved in "*this*
clock is good/mediocre/unusable in *that* use case."  I'm happy to
leave such judgments up to you guys.  What the rest of us can
contribute is (a) use cases to consider and (b) our opinions on the
relative importance of various use cases in whether we should
recommend a particular clock (ie, provide a named API in the stdlib
for it).

From senthil at uthcode.com  Sat Apr  7 10:20:44 2012
From: senthil at uthcode.com (Senthil Kumaran)
Date: Sat, 7 Apr 2012 16:20:44 +0800
Subject: [Python-Dev] cpython: Issue #3033: Add displayof parameter to
 tkinter font.
In-Reply-To: <CAL3CFcV3iOSHW2SnhRHkRBOr8GfFJYDat4JhR803xQ+A8S8HWg@mail.gmail.com>
References: <E1SFjCO-0002h7-St@dinsdale.python.org>
	<20120405122102.7dd6ef8f@pitrou.net>
	<CAL3CFcVYeiSbWFV4EUm3s6MDqZ-A-Vm4E38ZN2rVupRVJ926-g@mail.gmail.com>
	<20120405140649.C2CA4250603@webabinitio.net>
	<jlkk8g$pj6$1@dough.gmane.org>
	<CAL3CFcV3iOSHW2SnhRHkRBOr8GfFJYDat4JhR803xQ+A8S8HWg@mail.gmail.com>
Message-ID: <20120407082044.GA6020@mathmagic>

Hi Andrew,

On Thu, Apr 05, 2012 at 11:16:54PM +0300, Andrew Svetlov wrote:
> I tried to:
> andrew at tiktaalik2 ~/projects> hg clone ssh://hg at hg.python.org/cpython
> ssh://hg at hg.python.org/sandbox/tkdocs
> repo created, public URL is http://hg.python.org/sandbox/tkdocs
> abort: clone from remote to remote not supported

You could do the server side clone using the web form here -
http://hg.python.org/cpython/

Then you could use that repo to work on your stuff.

Thanks,
Senthil


From martin at v.loewis.de  Sat Apr  7 11:08:31 2012
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Sat, 07 Apr 2012 11:08:31 +0200
Subject: [Python-Dev] Pep 393 and debugging
In-Reply-To: <EFE3877620384242A686D52278B7CCD3387CBB@RKV-IT-EXCH104.ccp.ad.local>
References: <EFE3877620384242A686D52278B7CCD3387CBB@RKV-IT-EXCH104.ccp.ad.local>
Message-ID: <4F80040F.5070208@v.loewis.de>

> I wonder if there is a way to make this situation easier?  Perhaps for
> "debug" builds, we can store some debug information in the frame object,
> e.g. utf8 encoding of the filename and function?

I'd like to stress Benjamin's recommendation. Dave Malcolm's gdb
extensions (requires gdb with Python support) are really powerful; they
will automatically render PyObject* by displaying the actual logical
value (and not just for strings).

Failing that, I use _PyObject_Dump to display strings; this requires a
debugger that can call functions in the debuggee (like gdb).

Regards,
Martin

From stephen at xemacs.org  Sat Apr  7 11:42:29 2012
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Sat, 7 Apr 2012 18:42:29 +0900
Subject: [Python-Dev] this is why we shouldn't call it a "monotonic
 clock" (was: PEP 418 is too divisive and confusing and should be postponed)
In-Reply-To: <CAMpsgwZ-j41v62s+VQWAJ-Y4AUFRwneVkZJtq13kr4oQb2S37Q@mail.gmail.com>
References: <CANdZDc527cvHGbQGnNgPBHu673sDi1YOBS3wkUbOejwYQ8BBow@mail.gmail.com>
	<CAMpsgwZ-j41v62s+VQWAJ-Y4AUFRwneVkZJtq13kr4oQb2S37Q@mail.gmail.com>
Message-ID: <CAL_0O19My7EVSG_-SNuYYysAEdSDsJHViosCFxA-Ef0wqpC3vA@mail.gmail.com>

On Sat, Apr 7, 2012 at 8:47 AM, Victor Stinner <victor.stinner at gmail.com> wrote:

> "Objects of class steady_clock represent clocks for which values of
> time_point advance at a steady rate relative to real time. That is,
> the clock may not be adjusted."
> http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2010/n3128.html#time.clock.steady
>
> I don't understand this definition. All clocks have a clock drift. [[...]]
> you can use a NTP daemon to adjust automatically using a free farm
> of atomic clocks distributed around the world.

That's inside the black box; C++ doesn't care about *how* the clock is
made to be steady by the system.  The system could incorporate an
atomic clock, or it could use NTP to keep the clock closely
corresponding to physical time.  The C++ program doesn't ask, and the
system shouldn't tell.

> So you can get a cheap steady clock if you accept that (OMG!) it can
> be adjusted.
>
> Or did I misunderstand "the clock may not be adjusted"?

I think you are not keeping your context consistent with the viewpoint
of the C++ committee.  To the C++ committee, a steady clock may be
expected to "just keep ticking" as far as the C++ program is
concerned.  What this means is that the clock value is incremented in
sequence: it never goes backward, and it never "jumps over" a possible
time value.  How closely that "ticking" approximates physical time is
Somebody Else's Problem; C++ simply assumes that it does.

In other words, a clock adjustment in the C++ standard means that the
clock's reported time values occur out of sequence.  However, if the
intervals between clock ticks are adjusted by NTP (or by a little old
Swiss watchmaker moving the pendulum bob) in order to improve its
steadiness (ie, accurate correspondence to physical time), C++ doesn't
know about that, and doesn't care.

Amusingly enough, various people's statements about common usage of
"monotonic" notwithstanding, C++'s definition of "monotonic" was
"mathematically monotonic."  Cf. N3092, 20.10.1.
(http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2010/n3092.pdf)

Regards,

From victor.stinner at gmail.com  Sat Apr  7 11:49:20 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Sat, 7 Apr 2012 11:49:20 +0200
Subject: [Python-Dev] PEP 418 is too divisive and confusing and should
	be postponed
In-Reply-To: <jlo136$soq$1@dough.gmane.org>
References: <4F7B96F1.6020906@pearwood.info> <4F7BA3C2.4050705@gmail.com>
	<CAMpsgwbZKgEzbZmsVBeM1vNfjRUxEdGH=FAybvf3HgNOsRsnRA@mail.gmail.com>
	<CAL0kPAVuPoYEo=hqTaryi1YNqhpF-8ngJAGrBwaQ4N3Y-M34dA@mail.gmail.com>
	<4F7CCF1D.2010600@canterbury.ac.nz>
	<CAL0kPAU0Zp5YwH3J+9KKqQ2r7QZo15o=VrqvViaVsNx7j4kQDw@mail.gmail.com>
	<CAMpsgwaPtm_M3wVmpqGUwmBEmbT8V=qFs1GU0-Fak=+Ws6JjYQ@mail.gmail.com>
	<jlo136$soq$1@dough.gmane.org>
Message-ID: <CAMpsgwYLMMpOkXyKmVQsbbxRX5ZGHHWKe6v0trq-SR2AS4Oqdw@mail.gmail.com>

2012/4/7 Janzert <janzert at janzert.com>:
> On 4/5/2012 6:32 AM, Victor Stinner wrote:
>> I prefer to use CLOCK_MONOTONIC, not because it is also available for
>> older Linux kernels, but because it is more reliable. Even if the
>> underlying clock source is unstable (unstable frequency), a delta of
>> two reads of the CLOCK_MONOTONIC clock is a result in *seconds*,
>> whereas CLOCK_MONOTONIC_RAW may use an unit a little bit bigger or
>> smaller than a second. time.monotonic() unit is the second, as written
>> in its documentation.
>
> I believe the above is only true for sufficiently large time deltas. One of
> the major purposes of NTP slewing is to give up some short term accuracy in
> order to achieve long term accuracy (e.g. whenever the clock is found to be
> ahead of real time it is purposefully ticked slower than real time).

I don't think that NTP works like that. NTP only uses very smooth adjustments:

""slewing": change the clock frequency to be slightly faster or slower
(which is done with adjtime()). Since the slew rate is limited to 0.5
ms/s, each second of adjustment requires an amortization interval of
2000 s. Thus, an adjustment of many seconds can take hours or days to
amortize."
http://www.python.org/dev/peps/pep-0418/#ntp-adjustment

> So for benchmarking it would not be surprising to be better off with the
> non-adjusted clock. Ideally there would be a clock that was slewed "just
> enough" to try and achieve short term accuracy, but I don't know of anything
> providing that.

time.monotonic() is not written for benchmarks. It does not have the
highest frequency; its primary property is that it is monotonic. A
side effect is that it is usually the steadiest clock.

For example, on Windows time.monotonic() has an accuracy of only 15 ms
(15 milliseconds, not 15 microseconds).

If you consider that the PEP should also solve the issue of
benchmarking clock, we should continue the work on this section:
http://www.python.org/dev/peps/pep-0418/#deferred-api-time-perf-counter

Victor

From p.f.moore at gmail.com  Sat Apr  7 12:08:39 2012
From: p.f.moore at gmail.com (Paul Moore)
Date: Sat, 7 Apr 2012 11:08:39 +0100
Subject: [Python-Dev] this is why we shouldn't call it a "monotonic
 clock" (was: PEP 418 is too divisive and confusing and should be postponed)
In-Reply-To: <CAL_0O1_o=x_=b3J-zxkHq967ZsJifp5__w-O=D=FTDE6b-h1dA@mail.gmail.com>
References: <CANdZDc527cvHGbQGnNgPBHu673sDi1YOBS3wkUbOejwYQ8BBow@mail.gmail.com>
	<CAMpsgwZ1SiTcE_HHVnEweXPE2bqnowac+nK5DTJnR3jwkHw9MQ@mail.gmail.com>
	<CAL_0O1_o=x_=b3J-zxkHq967ZsJifp5__w-O=D=FTDE6b-h1dA@mail.gmail.com>
Message-ID: <CACac1F-TFRi8DMg1dRT4gVrYyoGy8qMfAp27Q7Xcxvy98_5Yzg@mail.gmail.com>

On 7 April 2012 09:12, Stephen J. Turnbull <stephen at xemacs.org> wrote:
>
> I don't think that's a reason that should be considered. ?There just
> doesn't seem to be a single best clock, nor do clocks of similar
> character seem to be easy to find across platforms. ?So the reasons
> I'd like to see are of the form "we should provide CLOCK_MONOTONIC on
> Linux as one of our small selection of recommended clocks *because*
> the frequency adjustment makes it *most* useful in use-cases A and B,
> and it's a *reasonable* choice in use-case C *but* we need to document
> that it's a terrible choice for use-case D."


From the PEP:

"""
Use cases:

Display the current time to a human (e.g. display a calendar or draw a
wall clock): use system clock, i.e. time.time() or
datetime.datetime.now().
Event scheduler, timeout: time.monotonic().
Benchmark, profiling: time.clock() on Windows, time.monotonic(), or
fallback to time.time()

Functions

To fulfill the use cases, the functions' properties are:

time.time(): system clock, "wall clock".
time.monotonic(): monotonic clock
time.get_clock_info(name): get information on the specified time function
"""

That broadly covers it, I'd say. There are 2 main exceptions I see:
(1) your suggestion of "explain why clock X is a terrible choice for
use case Y" isn't there, although I'm not sure how important that is,
and (2) there's no really good cross-platform option given for
benchmarking/profiling (time.clock() is fine on Windows, but it gives
CPU time on Unix - is that acceptable?)
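
(For comparison, the conventional cross-platform fallback, roughly what
timeit's default_timer has done so far; whether that is still good enough is
exactly the question:)

import sys
import time

# Pick the traditional "best" benchmarking timer per platform: on Windows,
# time.clock() is a high-resolution wall clock, while on Unix it reports CPU
# time, so fall back to time.time() there.
if sys.platform == "win32":
    default_timer = time.clock
else:
    default_timer = time.time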

Also, Victor - you missed time.clock() from "Functions". Was that
deliberate because it's sometimes CPU time? Maybe it should be added
for clarity?

Paul.

From steve at pearwood.info  Sat Apr  7 12:40:46 2012
From: steve at pearwood.info (Steven D'Aprano)
Date: Sat, 07 Apr 2012 20:40:46 +1000
Subject: [Python-Dev] PEP 418 is too divisive and confusing and should
 be postponed
In-Reply-To: <CAMpsgwYLMMpOkXyKmVQsbbxRX5ZGHHWKe6v0trq-SR2AS4Oqdw@mail.gmail.com>
References: <4F7B96F1.6020906@pearwood.info>
	<4F7BA3C2.4050705@gmail.com>	<CAMpsgwbZKgEzbZmsVBeM1vNfjRUxEdGH=FAybvf3HgNOsRsnRA@mail.gmail.com>	<CAL0kPAVuPoYEo=hqTaryi1YNqhpF-8ngJAGrBwaQ4N3Y-M34dA@mail.gmail.com>	<4F7CCF1D.2010600@canterbury.ac.nz>	<CAL0kPAU0Zp5YwH3J+9KKqQ2r7QZo15o=VrqvViaVsNx7j4kQDw@mail.gmail.com>	<CAMpsgwaPtm_M3wVmpqGUwmBEmbT8V=qFs1GU0-Fak=+Ws6JjYQ@mail.gmail.com>	<jlo136$soq$1@dough.gmane.org>
	<CAMpsgwYLMMpOkXyKmVQsbbxRX5ZGHHWKe6v0trq-SR2AS4Oqdw@mail.gmail.com>
Message-ID: <4F8019AE.1050305@pearwood.info>

Victor Stinner wrote:
> 2012/4/7 Janzert <janzert at janzert.com>:
>> On 4/5/2012 6:32 AM, Victor Stinner wrote:
>>> I prefer to use CLOCK_MONOTONIC, not because it is also available for
>>> older Linux kernels, but because it is more reliable. Even if the
>>> underlying clock source is unstable (unstable frequency), a delta of
>>> two reads of the CLOCK_MONOTONIC clock is a result in *seconds*,
>>> whereas CLOCK_MONOTONIC_RAW may use an unit a little bit bigger or
>>> smaller than a second. time.monotonic() unit is the second, as written
>>> in its documentation.
>> I believe the above is only true for sufficiently large time deltas. One of
>> the major purposes of NTP slewing is to give up some short term accuracy in
>> order to achieve long term accuracy (e.g. whenever the clock is found to be
>> ahead of real time it is purposefully ticked slower than real time).
> 
> I don't think that NTP works like that. NTP only uses very smooth adjustements:
> 
> ""slewing": change the clock frequency to be slightly faster or slower
> (which is done with adjtime()). Since the slew rate is limited to 0.5
> ms/s, each second of adjustment requires an amortization interval of
> 2000 s. Thus, an adjustment of many seconds can take hours or days to
> amortize."
> http://www.python.org/dev/peps/pep-0418/#ntp-adjustment


That is incorrect. NTP by default will only slew the clock for small 
discrepancies. For large discrepancies, it will step the clock, causing the 
time to jump. By default, "large" here means more than 128 milliseconds.

Yes, milliseconds.

http://www.ntp.org/ntpfaq/NTP-s-config-tricks.htm#AEN4249


In any case, NTP is not the only thing that adjusts the clock, e.g. the 
operating system will adjust the time for daylight savings.


-- 
Steven


From andrew.svetlov at gmail.com  Sat Apr  7 14:15:00 2012
From: andrew.svetlov at gmail.com (Andrew Svetlov)
Date: Sat, 7 Apr 2012 15:15:00 +0300
Subject: [Python-Dev] cpython: Issue #3033: Add displayof parameter to
 tkinter font.
In-Reply-To: <20120407082044.GA6020@mathmagic>
References: <E1SFjCO-0002h7-St@dinsdale.python.org>
	<20120405122102.7dd6ef8f@pitrou.net>
	<CAL3CFcVYeiSbWFV4EUm3s6MDqZ-A-Vm4E38ZN2rVupRVJ926-g@mail.gmail.com>
	<20120405140649.C2CA4250603@webabinitio.net>
	<jlkk8g$pj6$1@dough.gmane.org>
	<CAL3CFcV3iOSHW2SnhRHkRBOr8GfFJYDat4JhR803xQ+A8S8HWg@mail.gmail.com>
	<20120407082044.GA6020@mathmagic>
Message-ID: <CAL3CFcVujayvSYZ4diCr9b1J0Bj+WHZQwNPUB9k01KWo8mhWGA@mail.gmail.com>

Thank you. That works. Is there a way to delete an unused repo?

On Sat, Apr 7, 2012 at 11:20 AM, Senthil Kumaran <senthil at uthcode.com> wrote:
> Hi Andrew,
>
> On Thu, Apr 05, 2012 at 11:16:54PM +0300, Andrew Svetlov wrote:
>> I tried to:
>> andrew at tiktaalik2 ~/projects> hg clone ssh://hg at hg.python.org/cpython
>> ssh://hg at hg.python.org/sandbox/tkdocs
>> repo created, public URL is http://hg.python.org/sandbox/tkdocs
>> abort: clone from remote to remote not supported
>
> You could do the server side clone using the web form here -
> http://hg.python.org/cpython/
>
> Then you could use that repo to work on your stuff.
>
> Thanks,
> Senthil
>



-- 
Thanks,
Andrew Svetlov

From rdmurray at bitdance.com  Sat Apr  7 14:45:02 2012
From: rdmurray at bitdance.com (R. David Murray)
Date: Sat, 07 Apr 2012 08:45:02 -0400
Subject: [Python-Dev] cpython: Issue #3033: Add displayof parameter to
	tkinter font.
In-Reply-To: <CAL3CFcVujayvSYZ4diCr9b1J0Bj+WHZQwNPUB9k01KWo8mhWGA@mail.gmail.com>
References: <E1SFjCO-0002h7-St@dinsdale.python.org>
	<20120405122102.7dd6ef8f@pitrou.net>
	<CAL3CFcVYeiSbWFV4EUm3s6MDqZ-A-Vm4E38ZN2rVupRVJ926-g@mail.gmail.com>
	<20120405140649.C2CA4250603@webabinitio.net>
	<jlkk8g$pj6$1@dough.gmane.org>
	<CAL3CFcV3iOSHW2SnhRHkRBOr8GfFJYDat4JhR803xQ+A8S8HWg@mail.gmail.com>
	<20120407082044.GA6020@mathmagic>
	<CAL3CFcVujayvSYZ4diCr9b1J0Bj+WHZQwNPUB9k01KWo8mhWGA@mail.gmail.com>
Message-ID: <20120407124437.5866D250603@webabinitio.net>

On Sat, 07 Apr 2012 15:15:00 +0300, Andrew Svetlov <andrew.svetlov at gmail.com> wrote:
> Thank you. That works. Is there way to delete unused repo?

This is what I've heard:

If a repo isn't used (at all) it eventually gets deleted automatically.
Otherwise, you have to ask.  Probably python-committers is the best
place for a delete request.  If this becomes a burden at some point,
someone will figure out a secure way to automate it...security is the
reason it isn't automated now.

--David

From storchaka at gmail.com  Sat Apr  7 16:06:48 2012
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Sat, 07 Apr 2012 17:06:48 +0300
Subject: [Python-Dev] cpython: Issue #3033: Add displayof parameter to
	tkinter font.
In-Reply-To: <CAL3CFcV3iOSHW2SnhRHkRBOr8GfFJYDat4JhR803xQ+A8S8HWg@mail.gmail.com>
References: <E1SFjCO-0002h7-St@dinsdale.python.org>
	<20120405122102.7dd6ef8f@pitrou.net>
	<CAL3CFcVYeiSbWFV4EUm3s6MDqZ-A-Vm4E38ZN2rVupRVJ926-g@mail.gmail.com>
	<20120405140649.C2CA4250603@webabinitio.net>
	<jlkk8g$pj6$1@dough.gmane.org>
	<CAL3CFcV3iOSHW2SnhRHkRBOr8GfFJYDat4JhR803xQ+A8S8HWg@mail.gmail.com>
Message-ID: <jlphgu$jt0$1@dough.gmane.org>

Andrew, when you prepare the tkinter documentation, I advise you to 
include a link to www.tkdocs.com -- probably the best resource of its 
kind (at least it was very useful for me).

Maybe we should even offer these guys the chance to do the official 
documentation, if they agree and if there would be no conflict of 
interest (they offer a commercial e-book).


From andrew.svetlov at gmail.com  Sat Apr  7 19:19:52 2012
From: andrew.svetlov at gmail.com (Andrew Svetlov)
Date: Sat, 7 Apr 2012 20:19:52 +0300
Subject: [Python-Dev] cpython: Issue #3033: Add displayof parameter to
 tkinter font.
In-Reply-To: <jlphgu$jt0$1@dough.gmane.org>
References: <E1SFjCO-0002h7-St@dinsdale.python.org>
	<20120405122102.7dd6ef8f@pitrou.net>
	<CAL3CFcVYeiSbWFV4EUm3s6MDqZ-A-Vm4E38ZN2rVupRVJ926-g@mail.gmail.com>
	<20120405140649.C2CA4250603@webabinitio.net>
	<jlkk8g$pj6$1@dough.gmane.org>
	<CAL3CFcV3iOSHW2SnhRHkRBOr8GfFJYDat4JhR803xQ+A8S8HWg@mail.gmail.com>
	<jlphgu$jt0$1@dough.gmane.org>
Message-ID: <CAL3CFcW8bDRXGda_dWD1fHNmgxxQcVhjxGBqNBF3rGBtXN01Xg@mail.gmail.com>

On Sat, Apr 7, 2012 at 5:06 PM, Serhiy Storchaka <storchaka at gmail.com> wrote:
> Andrew, when you prepare the tkinter documentation, I advise you to include
> a link to www.tkdocs.com -- probably the best resource in this way (at least
> it was very useful for me).
>
Done in the sandbox/tkdocs repo.

-- 
Thanks,
Andrew Svetlov

From victor.stinner at gmail.com  Sun Apr  8 00:24:16 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Sun, 8 Apr 2012 00:24:16 +0200
Subject: [Python-Dev] PEP 418 is too divisive and confusing and should
	be postponed
In-Reply-To: <4F8019AE.1050305@pearwood.info>
References: <4F7B96F1.6020906@pearwood.info> <4F7BA3C2.4050705@gmail.com>
	<CAMpsgwbZKgEzbZmsVBeM1vNfjRUxEdGH=FAybvf3HgNOsRsnRA@mail.gmail.com>
	<CAL0kPAVuPoYEo=hqTaryi1YNqhpF-8ngJAGrBwaQ4N3Y-M34dA@mail.gmail.com>
	<4F7CCF1D.2010600@canterbury.ac.nz>
	<CAL0kPAU0Zp5YwH3J+9KKqQ2r7QZo15o=VrqvViaVsNx7j4kQDw@mail.gmail.com>
	<CAMpsgwaPtm_M3wVmpqGUwmBEmbT8V=qFs1GU0-Fak=+Ws6JjYQ@mail.gmail.com>
	<jlo136$soq$1@dough.gmane.org>
	<CAMpsgwYLMMpOkXyKmVQsbbxRX5ZGHHWKe6v0trq-SR2AS4Oqdw@mail.gmail.com>
	<4F8019AE.1050305@pearwood.info>
Message-ID: <CAMpsgwZj9aXV2wL+azM-xSvHKcawxAT+tRxNHPiLPdTftaeHzA@mail.gmail.com>

2012/4/7 Steven D'Aprano <steve at pearwood.info>:
> Victor Stinner wrote:
>>
>> 2012/4/7 Janzert <janzert at janzert.com>:
>>>
>>> On 4/5/2012 6:32 AM, Victor Stinner wrote:
>>>>
>>>> I prefer to use CLOCK_MONOTONIC, not because it is also available for
>>>> older Linux kernels, but because it is more reliable. Even if the
>>>> underlying clock source is unstable (unstable frequency), a delta of
>>>> two reads of the CLOCK_MONOTONIC clock is a result in *seconds*,
>>>> whereas CLOCK_MONOTONIC_RAW may use an unit a little bit bigger or
>>>> smaller than a second. time.monotonic() unit is the second, as written
>>>> in its documentation.
>>>
>>> I believe the above is only true for sufficiently large time deltas. One
>>> of
>>> the major purposes of NTP slewing is to give up some short term accuracy
>>> in
>>> order to achieve long term accuracy (e.g. whenever the clock is found to
>>> be
>>> ahead of real time it is purposefully ticked slower than real time).
>>
>> I don't think that NTP works like that. NTP only uses very smooth
>> adjustements:
>>
>> ""slewing": change the clock frequency to be slightly faster or slower
>> (which is done with adjtime()). Since the slew rate is limited to 0.5
>> ms/s, each second of adjustment requires an amortization interval of
>> 2000 s. Thus, an adjustment of many seconds can take hours or days to
>> amortize."
>> http://www.python.org/dev/peps/pep-0418/#ntp-adjustment
>

> That is incorrect. NTP by default will only slew the clock for small
> discrepancies. For large discrepancies, it will step the clock, causing the
> time to jump. By default, "large" here means more than 128 milliseconds.
>
> Yes, milliseconds.
>
> http://www.ntp.org/ntpfaq/NTP-s-config-tricks.htm#AEN4249

We are talking about CLOCK_MONOTONIC. Stepping is disabled for this clock.

Victor

From cs at zip.com.au  Sun Apr  8 00:39:25 2012
From: cs at zip.com.au (Cameron Simpson)
Date: Sun, 8 Apr 2012 08:39:25 +1000
Subject: [Python-Dev] PEP 418 is too divisive and confusing and should
	be postponed
In-Reply-To: <4F8019AE.1050305@pearwood.info>
References: <4F8019AE.1050305@pearwood.info>
Message-ID: <20120407223925.GA19834@cskk.homeip.net>

On 07Apr2012 20:40, Steven D'Aprano <steve at pearwood.info> wrote:
| Victor Stinner wrote:
| > I don't think that NTP works like that. NTP only uses very smooth adjustements:
[...]
| > http://www.python.org/dev/peps/pep-0418/#ntp-adjustment
| 
| That is incorrect. NTP by default will only slew the clock for small 
| discrepancies. For large discrepancies, it will step the clock, causing the 
| time to jump. By default, "large" here means more than 128 milliseconds.
| Yes, milliseconds.
| http://www.ntp.org/ntpfaq/NTP-s-config-tricks.htm#AEN4249
| 
| In any case, NTP is not the only thing that adjusts the clock, e.g. the 
| operating system will adjust the time for daylight savings.

Ignoring the discussion of NTP, daylight saving is a _presentation_
issue. It is _display_. The OS clock does not change for daylight
saving! Think: "seconds since the epoch". This is a continuous function.
Daylight saving presentation occurs when turning a seconds-since-epoch
value into a human-decomposed time (hours, etc).

Now, AFAIR, Windows used to run its system clock in "local time"
i.e. it did have to jump its clock for daylight saving. Hopefully that
is long gone.

UNIX never did this. It ran in seconds since the epoch (in its case, start of
01jan1970 GMT). Printing dates and timestamps to humans needed daylight
saving knowledge etc.

Cheers,
-- 
Cameron Simpson <cs at zip.com.au> DoD#743
http://www.cskk.ezoshosting.com/cs/

Stan Malyshev <stas at netcom.com> wrote:
| You're dragging a peg in a blind hairpin when you see a patch of coolant
| up ahead, and you hear the airbrakes of an oncoming 18-wheeler.
| What do you do?  WHAT DO YOU DO?  ("Speed", anyone?)
Shoot my pillion?       - Vociferous Mole <stevegr at Starbase.NeoSoft.COM>

From cs at zip.com.au  Sun Apr  8 00:43:35 2012
From: cs at zip.com.au (Cameron Simpson)
Date: Sun, 8 Apr 2012 08:43:35 +1000
Subject: [Python-Dev] this is why we shouldn't call it a "monotonic
 clock" (was: PEP 418 is too divisive and confusing and should be postponed)
In-Reply-To: <CAMpsgwZ-j41v62s+VQWAJ-Y4AUFRwneVkZJtq13kr4oQb2S37Q@mail.gmail.com>
References: <CAMpsgwZ-j41v62s+VQWAJ-Y4AUFRwneVkZJtq13kr4oQb2S37Q@mail.gmail.com>
Message-ID: <20120407224335.GA21094@cskk.homeip.net>

On 07Apr2012 01:47, Victor Stinner <victor.stinner at gmail.com> wrote:
| I don't understand this definition. All clocks have a clock drift.
| This is just one exception: atomic clocks, but such clocks are rare
| and very expensive.

They've got drift too. It is generally very small.

Anecdote: I used to keep my wristwatch (oh, the days of wrist
watches:-) synched to an atomic clock. By walking 4 metres down the hall
from my office to peer into the window of the room with the atomic
clock:-)

Cheers,
-- 
Cameron Simpson <cs at zip.com.au> DoD#743
http://www.cskk.ezoshosting.com/cs/

Knox's 386 is slick.            Fox in Sox, on Knox's Box
Knox's box is very quick.       Plays lots of LSL. He's sick!
        - Gregory Bond <gnb at bby.com.au>,
          (Apologies to John "Iron Bar" Mackin.)

From raymond.hettinger at gmail.com  Sun Apr  8 01:36:11 2012
From: raymond.hettinger at gmail.com (Raymond Hettinger)
Date: Sat, 7 Apr 2012 16:36:11 -0700
Subject: [Python-Dev] this is why we shouldn't call it a "monotonic
	clock" (was: PEP 418 is too divisive and confusing and should
	be postponed)
In-Reply-To: <CACac1F-TFRi8DMg1dRT4gVrYyoGy8qMfAp27Q7Xcxvy98_5Yzg@mail.gmail.com>
References: <CANdZDc527cvHGbQGnNgPBHu673sDi1YOBS3wkUbOejwYQ8BBow@mail.gmail.com>
	<CAMpsgwZ1SiTcE_HHVnEweXPE2bqnowac+nK5DTJnR3jwkHw9MQ@mail.gmail.com>
	<CAL_0O1_o=x_=b3J-zxkHq967ZsJifp5__w-O=D=FTDE6b-h1dA@mail.gmail.com>
	<CACac1F-TFRi8DMg1dRT4gVrYyoGy8qMfAp27Q7Xcxvy98_5Yzg@mail.gmail.com>
Message-ID: <659E02C5-6C32-4DE3-AE68-B546BE236908@gmail.com>


On Apr 7, 2012, at 3:08 AM, Paul Moore wrote:

> Use cases:
> 
> Display the current time to a human (e.g. display a calendar or draw a
> wall clock): use system clock, i.e. time.time() or
> datetime.datetime.now().
> Event scheduler, timeout: time.monotonic().
> Benchmark, profiling: time.clock() on Windows, time.monotonic(), or
> fallback to time.time()


ISTM, an event scheduler should use time.time().
If I schedule an event at 9:30am exactly, I really
want my system time to be the one that is used.
Stock traders, for example, need to initiate events based
on scheduled times (i.e. the market opening and closing times).

With respect to benchmarking, the important attribute of time keeping
is that the start and end times be computed from the same offset.
For that purpose, I would want a clock that wasn't subject to adjustment at all,
or, if it did adjust, did so in a way that doesn't hide the fact (i.e. spreading
out the adjustment over a number of ticks or freezing time until a negative
adjustment has caught up).

Then, there are timeouts, a common use-case where I'm not clear on the 
relative merits of the different clocks.   Which is the recommended clock
for code like this?

    start = time.time()
    while event_not_occurred():
        if time.time() - start >= timeout:
            raise TimeOutException

Ideally, the clock used here shouldn't get adjusted during the timing.
Failing that, the system time (plain old time.time()) seems like a reasonable
choice (I'm not used to seeing the system time jump around at an irregular pace).

If time gets set backwards, it is possible to add code to defend against that as long as
the clock doesn't try to hide that time went backwards:

    start = time.time()
    while event_not_occurred():
        now = time.time()
        if now < start:
             # time went backwards, so restart the countdown
             start = now
        if now - start >= timeout:
            raise TimeOutException
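
For comparison, here's the same loop sketched against the proposed
time.monotonic() (assuming it lands as specified).  Since that clock
cannot be set backwards, the restart-the-countdown defense above
shouldn't be needed:

    start = time.monotonic()
    while event_not_occurred():
        if time.monotonic() - start >= timeout:
            raise TimeOutException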

If the new clocks go in (or rather stay in), there should be some clear recommendations
about which ones to use for various use cases.  Those recommendations need to be
explained (i.e. under what circumstances would timeout code be better if one abandoned
time.time() in favor of one of the new clocks).

I ask this because adding multiple clocks WILL cause some confusion.
There will be cases where different tools use different clocks, resulting in unexpected interactions.
Victor is proposing that almost every use of time.time() in the standard library be replaced by
time.monotonic(), but I'm at a loss to explain whether the code would actually be better off
or to explain in what circumstances the new code would behave differently in any observable way.

AFAICT, there has never been a reported bug in sched or Queue because they used time.time(),
so I'm reluctant to have those changed without a very clear understanding of how the code would
be better (i.e. what negative outcome would be avoided with time.monotonic or somesuch).


Raymond



From glyph at twistedmatrix.com  Sun Apr  8 01:56:24 2012
From: glyph at twistedmatrix.com (Glyph Lefkowitz)
Date: Sat, 7 Apr 2012 16:56:24 -0700
Subject: [Python-Dev] PEP 418 is too divisive and confusing and should
	be postponed
In-Reply-To: <4F8019AE.1050305@pearwood.info>
References: <4F7B96F1.6020906@pearwood.info>
	<4F7BA3C2.4050705@gmail.com>	<CAMpsgwbZKgEzbZmsVBeM1vNfjRUxEdGH=FAybvf3HgNOsRsnRA@mail.gmail.com>	<CAL0kPAVuPoYEo=hqTaryi1YNqhpF-8ngJAGrBwaQ4N3Y-M34dA@mail.gmail.com>	<4F7CCF1D.2010600@canterbury.ac.nz>	<CAL0kPAU0Zp5YwH3J+9KKqQ2r7QZo15o=VrqvViaVsNx7j4kQDw@mail.gmail.com>	<CAMpsgwaPtm_M3wVmpqGUwmBEmbT8V=qFs1GU0-Fak=+Ws6JjYQ@mail.gmail.com>	<jlo136$soq$1@dough.gmane.org>
	<CAMpsgwYLMMpOkXyKmVQsbbxRX5ZGHHWKe6v0trq-SR2AS4Oqdw@mail.gmail.com>
	<4F8019AE.1050305@pearwood.info>
Message-ID: <87201503-000C-4258-A040-D9223EDE8188@twistedmatrix.com>

On Apr 7, 2012, at 3:40 AM, Steven D'Aprano wrote:

> In any case, NTP is not the only thing that adjusts the clock, e.g. the operating system will adjust the time for daylight savings.

Daylight savings time is not a clock adjustment, at least not in the sense this thread has mostly been talking about the word "clock".  It doesn't affect the "seconds from epoch" measurement, it affects the way in which the clock is formatted to the user.

-glyph

From kristjan at ccpgames.com  Sun Apr  8 02:49:09 2012
From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=)
Date: Sun, 8 Apr 2012 00:49:09 +0000
Subject: [Python-Dev] Pep 393 and debugging
In-Reply-To: <4F80040F.5070208@v.loewis.de>
References: <EFE3877620384242A686D52278B7CCD3387CBB@RKV-IT-EXCH104.ccp.ad.local>,
	<4F80040F.5070208@v.loewis.de>
Message-ID: <EFE3877620384242A686D52278B7CCD3388FA8@RKV-IT-EXCH104.ccp.ad.local>

Thanks, _PyObject_Dump sounds like just the ticket.  Most of the time, the VS2010 debugger can just run functions willy-nilly and things should simply work.

________________________________________
Fr?: "Martin v. L?wis" [martin at v.loewis.de]
Sent: 7. apr?l 2012 09:08
To: Kristj?n Valur J?nsson
Cc: python-dev at python.org
Efni: Re: [Python-Dev] Pep 393 and debugging

> I wonder if there is a way to make this situation easier?  Perhaps for
> "debug" builds, we can store some debug information in the frame object,
> e.g. utf8 encoding of the filename and function?

I'd like to stress Benjamin's recommendation. Dave Malcolm's gdb
extensions (requires gdb with Python support) are really powerful; they
will automatically render PyObject* by displaying the actual logical
value (and not just for strings).

Failing that, I use _PyObject_Dump to display strings; this requires a
debugger that can call functions in the debuggee (like gdb).

Regards,
Martin

From kristjan at ccpgames.com  Sun Apr  8 02:55:40 2012
From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=)
Date: Sun, 8 Apr 2012 00:55:40 +0000
Subject: [Python-Dev] this is why we shouldn't call it a "monotonic
 clock" (was: PEP 418 is too divisive and confusing and should be postponed)
In-Reply-To: <CAP7+vJLGdjL+4Zv3+FEuQc6GnDm+kqYe=7_Vx4ni_CwBeYij-w@mail.gmail.com>
References: <CANdZDc527cvHGbQGnNgPBHu673sDi1YOBS3wkUbOejwYQ8BBow@mail.gmail.com>
	<5741767C-F616-490F-917E-0801DA64BE47@twistedmatrix.com>
	<4F7EC1A2.4050501@pearwood.info>
	<CACac1F8cbYx6jviFJLkSPFFJv3k4quPheqPR0JcR2_jQ==cVGg@mail.gmail.com>,
	<CAP7+vJLGdjL+4Zv3+FEuQc6GnDm+kqYe=7_Vx4ni_CwBeYij-w@mail.gmail.com>
Message-ID: <EFE3877620384242A686D52278B7CCD3388FDB@RKV-IT-EXCH104.ccp.ad.local>

Thank you for your veto.  Still, again for the sake of keeping track of things and such, there is this: http://en.wikipedia.org/wiki/Wall_clock_time and also my original suggestion: http://bugs.python.org/issue10278



In the end, the world shall be ruled by the nomenclaturists.



K

________________________________
Frá: python-dev-bounces+kristjan=ccpgames.com at python.org [python-dev-bounces+kristjan=ccpgames.com at python.org] fyrir hönd Guido van Rossum [guido at python.org]
Sent: 6. apríl 2012 15:42
To: Paul Moore
Cc: Python-Dev
Efni: Re: [Python-Dev] this is why we shouldn't call it a "monotonic clock" (was: PEP 418 is too divisive and confusing and should be postponed)


I'd like to veto wall clock because to me that's the clock on my wall, i.e. local time. Otherwise I like the way this thread is going.

--Guido van Rossum (sent from Android phone)

On Apr 6, 2012 4:57 AM, "Paul Moore" <p.f.moore at gmail.com<mailto:p.f.moore at gmail.com>> wrote:
On 6 April 2012 11:12, Steven D'Aprano <steve at pearwood.info<mailto:steve at pearwood.info>> wrote:
Glyph Lefkowitz wrote:
On Apr 5, 2012, at 8:07 PM, Zooko Wilcox-O'Hearn wrote:

2. Those who think that "monotonic clock" means a clock that never jumps,
and that runs at a rate approximating the rate of real time. This is a
very useful kind of clock to have! It is what C++ now calls a "steady
clock". It is what all the major operating systems provide.

All clocks run at a rate approximating the rate of real time.  That is very
close to the definition of the word "clock" in this context.  All clocks
have flaws in that approximation, and really those flaws are the whole
point of access to distinct clock APIs.  Different applications can cope
with different flaws.

I think that this is incorrect.

py> time.clock(); time.sleep(10); time.clock()
0.41
0.41

Blame Python's use of CPU time in clock() on Unix for that. On Windows:

>>> time.clock(); time.sleep(10); time.clock()
14.879754156329385
24.879591008462793

That's a backward compatibility issue, though - I'd be arguing that time.clock() is the best name for "normally the right clock for interval, benchmark or timeout uses as long as you don't care about oddities like suspend" otherwise. Given that this name is taken, I'd argue for time.wallclock. I'm not familiar enough with the terminology to know what to expect from terms like monotonic, steady, raw and the like.

Paul.


_______________________________________________
Python-Dev mailing list
Python-Dev at python.org<mailto:Python-Dev at python.org>
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org


From raymond.hettinger at gmail.com  Sun Apr  8 03:26:31 2012
From: raymond.hettinger at gmail.com (Raymond Hettinger)
Date: Sat, 7 Apr 2012 18:26:31 -0700
Subject: [Python-Dev] this is why we shouldn't call it a "monotonic
	clock" (was: PEP 418 is too divisive and confusing and should
	be postponed)
In-Reply-To: <20120407224335.GA21094@cskk.homeip.net>
References: <CAMpsgwZ-j41v62s+VQWAJ-Y4AUFRwneVkZJtq13kr4oQb2S37Q@mail.gmail.com>
	<20120407224335.GA21094@cskk.homeip.net>
Message-ID: <909E9A13-A0E6-4015-8D83-25F5723A920B@gmail.com>

Just to clarify my previous post.

It seems clear that benchmarking and timeout logic would benefit from a clock that cannot be adjusted by NTP.

I'm unclear on whether time.sleep() will be based on the same clock so that timeouts and sleeps are on the same basis.

For scheduling logic (such as the sched module), I would think that NTP adjusted time would be what you want.

I'm also unclear on the interactions between components implemented with different clocks
(for example, if my logs show three seconds between events and a 10-second time-out exception occurs, is that confusing)?


Raymond




From cs at zip.com.au  Sun Apr  8 03:38:30 2012
From: cs at zip.com.au (Cameron Simpson)
Date: Sun, 8 Apr 2012 11:38:30 +1000
Subject: [Python-Dev] this is why we shouldn't call it a "monotonic
 clock" (was: PEP 418 is too divisive and confusing and should be postponed)
In-Reply-To: <20120404024656.GA30247@cskk.homeip.net>
	<20120407042255.GA12672@cskk.homeip.net>
	<CAMpsgwbDB4kkr2G=Si1+qKCEzFBFgpCo0LuEHLR7og8H14T1UA@mail.gmail.com>
References: <20120404024656.GA30247@cskk.homeip.net>
	<20120407042255.GA12672@cskk.homeip.net>
	<CAMpsgwbDB4kkr2G=Si1+qKCEzFBFgpCo0LuEHLR7og8H14T1UA@mail.gmail.com>
Message-ID: <20120408013829.GA6563@cskk.homeip.net>

Victor et al,

Just an update note:

I've started marking up clocks with attributes; not yet complete and I
still need to make a small C extension to present the system clocks to
Python space (which means learning to do that, too).

But you can glance over the start on it here:

  https://bitbucket.org/cameron_simpson/css/src/tip/lib/python/cs/clockutils.py

Several feature flags and some properties for qualifying clocks.

Still needed stuff includes: C access to clocks, .accuracy being actual
clock precision versus the resolution of the units in the underlying OS
call, a __repr__ and/or __str__ to decode feature bitmaps into useful
strings, .is_*() __getattr__ method to resolve against flags by name
or maybe has_flag(str), etc.

On 07Apr2012 01:16, Victor Stinner <victor.stinner at gmail.com> wrote:
| > | This is the original reason for the original defect (issue 10278)
| > | unix' clock() doesn't actually provide a clock in this sense,
| > | it provides a resource usage metric.
| >
| > Yeah:-( Its help says "Return the CPU time or real time since [...]".
| > Two very different things, as demonstrated. I suppose neither goes
| > backwards, but this seems like a classic example of the "useless
| > monotonic clock" against which Greg Ewing railed.
| >
| > And why? For one thing, because one can't inspect its metadata to find
| > out what it does.
| 
| Should I add another key to the result of
| time.get_clock_info('clock')? How can we define "clock on Windows"
| (almost monotonic and steady clock) vs "clock on UNIX" (CPU time) with
| a flag or a value?

For clocks I'm going with two feature flags: WALLCLOCK and RUNTIME. The
former indicates a clock that tries to stay in synch with real world time,
and would still advance when the system is suspended or idle; it would
almost certainly need to "step" over a suspend. The latter means system
run time; it accrues only while the system is up. Neither is CPU usage
(so a time.sleep(10) would add 10 seconds to both).
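
To make that concrete, here's a rough sketch of the flavour of thing I
mean (the bit values and helper are invented, not the actual clockutils
code):

  # Hypothetical feature bits mirroring the flags described above.
  WALLCLOCK = 1 << 0   # tracks real-world time; advances across suspend
  RUNTIME   = 1 << 1   # accrues only while the system is up

  def has_flags(clock_flags, wanted):
      # True if the clock advertises every requested feature bit.
      return clock_flags & wanted == wanted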

I think resource usage is not a "clock". We could characterise such
timers and counters with a lot of the same metrics we like to use with
clocks, but I do not think they should be returned by a system
purporting to return clocks or clock values.

On 07Apr2012 14:22, I wrote:
| On 06Apr2012 17:30, Glenn Linderman <v+python at g.nevcal.com> wrote:
| | Hopefully, for each  system, the characteristics of each clock can be 
| | discovered, and fully characterized in available metadata for the clock...
| 
| Victor has asked me to do that for my skeleton, based on the tables he
| has assembled. I'll see what I can do there...

I've started on this, see above.

Victor Stinner <victor.stinner at gmail.com> wrote:
| |  - define flags of all clocks listed in the PEP 418: clocks used in
| | the pseudo-code of time.steady and time.perf_counter, and maybe also
| | time.time
| 
| I'll make one. It will take a little while. Will post again when ready.

So, new code to glance over as evidence of good faith, if not speed:-(

Cheers,
-- 
Cameron Simpson <cs at zip.com.au> DoD#743
http://www.cskk.ezoshosting.com/cs/

Life is uncertain.  Eat dessert first.  - Jim Blandy

From guido at python.org  Sun Apr  8 03:49:53 2012
From: guido at python.org (Guido van Rossum)
Date: Sat, 7 Apr 2012 18:49:53 -0700
Subject: [Python-Dev] this is why we shouldn't call it a "monotonic
 clock" (was: PEP 418 is too divisive and confusing and should be postponed)
In-Reply-To: <909E9A13-A0E6-4015-8D83-25F5723A920B@gmail.com>
References: <CAMpsgwZ-j41v62s+VQWAJ-Y4AUFRwneVkZJtq13kr4oQb2S37Q@mail.gmail.com>
	<20120407224335.GA21094@cskk.homeip.net>
	<909E9A13-A0E6-4015-8D83-25F5723A920B@gmail.com>
Message-ID: <CAP7+vJKRPn3a4xmRxBmV7G9gBPQJOx-8Me994LXeXTBhRwPfFw@mail.gmail.com>

On Sat, Apr 7, 2012 at 6:26 PM, Raymond Hettinger
<raymond.hettinger at gmail.com> wrote:
> Just to clarify my previous post.
>
> It seems clear that benchmarking and timeout logic would benefit from a clock that cannot be adjusted by NTP.
>
> I'm unclear on whether time.sleep() will be based on the same clock so that timeouts and sleeps are on the same basis.

I made the same suggestion earlier but I don't know that anyone did
anything with it. :-( It would be nice to know what clock sleep() uses
on each of the major platforms.

> For scheduling logic (such as the sched module), I would think that NTP adjusted time would be what you want.

In my view, it depends on whether you are scheduling far in the future
(presumably guided by a calendar) or a short time ahead (milliseconds
to hours).

> I'm also unclear on the interactions between components implemented with different clocks
> (for example, if my logs show three seconds between events and a 10-second time-out exception occurs, is that confusing)?

I don't think this is avoidable. The logs will always use time.time()
or a local time derived from it; but we've accepted that for
benchmarking, timeouts and short-interval scheduling, that's not a
good clock to use.

-- 
--Guido van Rossum (python.org/~guido)

From cs at zip.com.au  Sun Apr  8 06:00:14 2012
From: cs at zip.com.au (Cameron Simpson)
Date: Sun, 8 Apr 2012 14:00:14 +1000
Subject: [Python-Dev] this is why we shouldn't call it a "monotonic
 clock" (was: PEP 418 is too divisive and confusing and should be postponed)
In-Reply-To: <CAP7+vJKRPn3a4xmRxBmV7G9gBPQJOx-8Me994LXeXTBhRwPfFw@mail.gmail.com>
References: <CAP7+vJKRPn3a4xmRxBmV7G9gBPQJOx-8Me994LXeXTBhRwPfFw@mail.gmail.com>
Message-ID: <20120408040013.GA16581@cskk.homeip.net>

On 07Apr2012 18:49, Guido van Rossum <guido at python.org> wrote:
| On Sat, Apr 7, 2012 at 6:26 PM, Raymond Hettinger
| <raymond.hettinger at gmail.com> wrote:
| > Just to clarify my previous post.
| > It seems clear that benchmarking and timeout logic would benefit
| > from a clock that cannot be adjusted by NTP.

Indeed.
Except for calendar programs setting alarms:-) I suppose they wake up
regularly and consult local time anyway.

| > I'm unclear on whether time.sleep() will be based on the same clock
| > so that timeouts and sleeps are on the same basis.
| 
| I made the same suggestion earlier but I don't know that anyone did
| anything with it. :-( It would be nice to know what clock sleep() uses
| on each of the major platforms.

I saw it but didn't know what I could do with it, or even if it can be
found out in any very general sense.

Looking at nanosleep(2) on a recent Linux system says:

  POSIX.1  specifies  that  nanosleep()  should  measure time against the
  CLOCK_REALTIME clock.  However,  Linux  measures  the  time using  the
  CLOCK_MONOTONIC  clock.   This  probably  does  not  matter, since the
  POSIX.1 specification  for  clock_settime(2)  says  that discontinuous
  changes in CLOCK_REALTIME should not affect nanosleep(): 

    Setting  the  value  of  the CLOCK_REALTIME clock via clock_settime(2)
    shall have no effect on threads that are blocked waiting for a relative
    time service based upon this clock, including the nanosleep() function;
    ...   Consequently,  these  time services shall expire when the requested
    relative interval elapses, independently of the new or old value
    of the clock.

and POSIX's nanosleep(3p) says:

  ... except  for  the case of being interrupted by a signal, the suspension
  time shall not be less than the time specified by rqtp,  as measured by the
  system clock CLOCK_REALTIME.

| > For scheduling logic (such as the sched module), I would think that
| > NTP adjusted time would be what you want.
| 
| In my view, it depends on whether you are scheduling far in the future
| (presumably guided by a calendar) or a short time ahead (milliseconds
| to hours).

In my view it depends on whether you're working in a calendar or in
elapsed time. The scheduling range ("far in the future" for example)
shouldn't be relevant, for all that "far in the future" is usually done
with a calendar instead of relative timespans in flat seconds.

Raymond:
| > I'm also unclear on the interactions between components implemented with
| > different clocks (for example, if my logs show three seconds between
| > events and a 10-second time-out exception occurs, is that confusing)?

I don't think it is confusing given some more context; to me it would
usually be a Big Clue that the internal supposedly-wallclock got a big
adjustment between log timestamps. If that shouldn't happen it may be
confusing or surprising...

Cheers,
-- 
Cameron Simpson <cs at zip.com.au> DoD#743
http://www.cskk.ezoshosting.com/cs/

The street finds its own uses for things.       - William Gibson

From solipsis at pitrou.net  Sun Apr  8 12:42:27 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sun, 8 Apr 2012 12:42:27 +0200
Subject: [Python-Dev] this is why we shouldn't call it a "monotonic
 clock" (was: PEP 418 is too divisive and confusing and should be postponed)
References: <CAP7+vJKRPn3a4xmRxBmV7G9gBPQJOx-8Me994LXeXTBhRwPfFw@mail.gmail.com>
	<20120408040013.GA16581@cskk.homeip.net>
Message-ID: <20120408124227.78ccab01@pitrou.net>


> | I made the same suggestion earlier but I don't know that anyone did
> | anything with it. :-( It would be nice to know what clock sleep() uses
> | on each of the major platforms.
> 
> I saw it but didn't know what I could do with it, or even if it can be
> found out in any very general sense.
> 
> Looking at nanosleep(2) on a recent Linux system says:

time.sleep() uses select(), not nanosleep().
select() is not specified to use a particular clock. However, since it
takes a timeout rather than a deadline, it would be reasonable for it
to use a non-adjustable clock :-)
http://pubs.opengroup.org/onlinepubs/9699919799/functions/select.html

Regards

Antoine.



From guido at python.org  Sun Apr  8 16:29:30 2012
From: guido at python.org (Guido van Rossum)
Date: Sun, 8 Apr 2012 07:29:30 -0700
Subject: [Python-Dev] this is why we shouldn't call it a "monotonic
 clock" (was: PEP 418 is too divisive and confusing and should be postponed)
In-Reply-To: <20120408124227.78ccab01@pitrou.net>
References: <CAP7+vJKRPn3a4xmRxBmV7G9gBPQJOx-8Me994LXeXTBhRwPfFw@mail.gmail.com>
	<20120408040013.GA16581@cskk.homeip.net>
	<20120408124227.78ccab01@pitrou.net>
Message-ID: <CAP7+vJ+VB_rNmWdNA2RDsz0KGrvGMO-N8fgfNGdYesFd3x6xPw@mail.gmail.com>

On Sun, Apr 8, 2012 at 3:42 AM, Antoine Pitrou <solipsis at pitrou.net> wrote:
>
>> | I made the same suggestion earlier but I don't know that anyone did
>> | anything with it. :-( It would be nice to know what clock sleep() uses
>> | on each of the major platforms.
>>
>> I saw it but didn't know what I could do with it, or even if it can be
>> found out in any very general sense.
>>
>> Looking at nanosleep(2) on a recent Linux system says:
>
> time.sleep() uses select(), not nanosleep().
> select() is not specified to use a particular clock. However, since it
> takes a timeout rather than a deadline, it would be reasonable for it
> to use a non-adjustable clock :-)
> http://pubs.opengroup.org/onlinepubs/9699919799/functions/select.html

Still, my hope was to cut short a bunch of discussion by declaring
that on every platform, one of the timers available should match the
one used by sleep(), select() and the like -- assuming they all use
the same timer underneath in a typical OS, even though (due to
standardization at different times by different standards bodies) they
aren't all specified the same.

IOW "What's good enough for sleep() is good enough for
user-implemented timeouts and scheduling." as a way to reach at least
one decision for a platform with agreed-upon cross-platform
characteristics that are useful.

What to name it can't be decided this way, although I might put
forward time.sleeptimer().

I personally have a need for one potentially different clock -- to
measure short intervals for benchmarks and profiling. This might be
called time.performancetimer()?

-- 
--Guido van Rossum (python.org/~guido)

From solipsis at pitrou.net  Sun Apr  8 17:35:32 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sun, 8 Apr 2012 17:35:32 +0200
Subject: [Python-Dev] this is why we shouldn't call it a "monotonic
 clock" (was: PEP 418 is too divisive and confusing and should be postponed)
In-Reply-To: <CAP7+vJ+VB_rNmWdNA2RDsz0KGrvGMO-N8fgfNGdYesFd3x6xPw@mail.gmail.com>
References: <CAP7+vJKRPn3a4xmRxBmV7G9gBPQJOx-8Me994LXeXTBhRwPfFw@mail.gmail.com>
	<20120408040013.GA16581@cskk.homeip.net>
	<20120408124227.78ccab01@pitrou.net>
	<CAP7+vJ+VB_rNmWdNA2RDsz0KGrvGMO-N8fgfNGdYesFd3x6xPw@mail.gmail.com>
Message-ID: <20120408173532.364c5467@pitrou.net>

On Sun, 8 Apr 2012 07:29:30 -0700
Guido van Rossum <guido at python.org> wrote:
> 
> What to name it can't be decided this way, although I might put
> forward time.sleeptimer().

interval_timer() ?
I would suggest timer() simply, but it's too close to time().

> I personally have a need for one potentially different clock -- to
> measure short intervals for benchmarks and profiling. This might be
> called time.performancetimer()?

It's called perf_counter() in the PEP:
http://www.python.org/dev/peps/pep-0418/#deferred-api-time-perf-counter

Regards

Antoine.

From paul at colomiets.name  Sun Apr  8 22:46:52 2012
From: paul at colomiets.name (Paul Colomiets)
Date: Sun, 8 Apr 2012 23:46:52 +0300
Subject: [Python-Dev] PEP-419: Protecting cleanup statements from
	interruptions
Message-ID: <CAA0gF6qYNRx7BGwo7LjRA8xBWRj0N==hNtSVThF210VaLPRhjQ@mail.gmail.com>

Hi,

I present my first PEP.

http://www.python.org/dev/peps/pep-0419/

I've added the text at the end of this email for easier reference. Comments are welcome.

-- 
Paul



PEP: 419
Title: Protecting cleanup statements from interruptions
Version: $Revision$
Last-Modified: $Date$
Author: Paul Colomiets <paul at colomiets.name>
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 06-Apr-2012
Python-Version: 3.3


Abstract
========

This PEP proposes a way to protect Python code from being interrupted
inside a finally clause or during context manager cleanup.


Rationale
=========

Python has two nice ways to do cleanup.  One is a ``finally``
statement and the other is a context manager (usually called using a
``with`` statement).  However, neither is protected from interruption
by ``KeyboardInterrupt`` or ``GeneratorExit`` caused by
``generator.throw()``.  For example::

    lock.acquire()
    try:
        print('starting')
        do_something()
    finally:
        print('finished')
        lock.release()

If ``KeyboardInterrupt`` occurs just after the second ``print()``
call, the lock will not be released.  Similarly, the following code
using the ``with`` statement is affected::

    from threading import Lock

    class MyLock:

        def __init__(self):
            self._lock_impl = Lock()

        def __enter__(self):
            self._lock_impl.acquire()
            print("LOCKED")

        def __exit__(self, exc_type, exc_value, traceback):
            print("UNLOCKING")
            self._lock_impl.release()

    lock = MyLock()
    with lock:
        do_something()

If ``KeyboardInterrupt`` occurs near any of the ``print()`` calls, the
lock will never be released.


Coroutine Use Case
------------------

A similar case occurs with coroutines.  Usually coroutine libraries
want to interrupt the coroutine with a timeout.  The
``generator.throw()`` method works for this use case, but there is no
way of knowing if the coroutine is currently suspended from inside a
``finally`` clause.

An example that uses yield-based coroutines follows.  The code looks
similar using any of the popular coroutine libraries Monocle [1]_,
Bluelet [2]_, or Twisted [3]_. ::

    def run_locked():
        yield connection.sendall('LOCK')
        try:
            yield do_something()
            yield do_something_else()
        finally:
            yield connection.sendall('UNLOCK')

    with timeout(5):
        yield run_locked()

In the example above, ``yield something`` means to pause executing the
current coroutine and to execute coroutine ``something`` until it
finishes execution.  Therefore the coroutine library itself needs to
maintain a stack of generators.  The ``connection.sendall()`` call waits
until the socket is writable and does a similar thing to what
``socket.sendall()`` does.

The ``with`` statement ensures that all code is executed within a
5-second timeout.  It does so by registering a callback in the main
loop, which calls ``generator.throw()`` on the top-most frame in the
coroutine stack when a timeout happens.

The ``greenlets`` extension works in a similar way, except that it
doesn't need ``yield`` to enter a new stack frame.  Otherwise
considerations are similar.


Specification
=============

Frame Flag 'f_in_cleanup'
-------------------------

A new flag on the frame object is proposed.  It is set to ``True`` if
this frame is currently executing a ``finally`` clause.  Internally,
the flag must be implemented as a counter of nested finally statements
currently being executed.

The internal counter also needs to be incremented during execution of
the ``SETUP_WITH`` and ``WITH_CLEANUP`` bytecodes, and decremented
when execution for these bytecodes is finished.  This also allows
protecting the ``__enter__()`` and ``__exit__()`` methods.
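
A minimal sketch of how a signal handler might consult the proposed
attribute (the attribute does not exist yet; ``sys._getframe()`` is
used only for illustration)::

    import sys

    def interrupt_is_safe():
        # Interruption is unsafe if any frame on the stack is
        # currently running cleanup code.
        frame = sys._getframe()
        while frame is not None:
            if frame.f_in_cleanup:      # proposed attribute
                return False
            frame = frame.f_back
        return True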


Function 'sys.setcleanuphook'
-----------------------------

A new function for the ``sys`` module is proposed.  This function sets
a callback which is executed every time ``f_in_cleanup`` becomes
false.  Callbacks get a frame object as their sole argument, so that
they can figure out where they are called from.

The setting is thread local and must be stored in the
``PyThreadState`` structure.


Inspect Module Enhancements
---------------------------

Two new functions are proposed for the ``inspect`` module:
``isframeincleanup()`` and ``getcleanupframe()``.

``isframeincleanup()``, given a frame or generator object as its sole
argument, returns the value of the ``f_in_cleanup`` attribute of the
frame itself, or of the frame referenced by the ``gi_frame`` attribute
of a generator.

``getcleanupframe()``, given a frame object as its sole argument,
returns the innermost frame which has a true value of
``f_in_cleanup``, or ``None`` if no frames in the stack have a nonzero
value for that attribute.  It starts to inspect from the specified
frame and walks to outer frames using ``f_back`` pointers, just like
``getouterframes()`` does.


Example
=======

An example implementation of a SIGINT handler that interrupts safely
might look like::

    import inspect, sys, functools

    def sigint_handler(sig, frame):
        if inspect.getcleanupframe(frame) is None:
            raise KeyboardInterrupt()
        sys.setcleanuphook(functools.partial(sigint_handler, 0))

A coroutine example is out of the scope of this document, because its
implementation depends very much on the trampoline (or main loop) used
by the coroutine library.


Unresolved Issues
=================

Interruption Inside With Statement Expression
---------------------------------------------

Given the statement ::

    with open(filename):
        do_something()

Python can be interrupted after ``open()`` is called, but before the
``SETUP_WITH`` bytecode is executed.  There are two possible
decisions:

* Protect ``with`` expressions.  This would require another bytecode,
  since currently there is no way of recognizing the start of the
  ``with`` expression.

* Let the user write a wrapper if he considers it important for the
  use-case.  A safe wrapper might look like this::

      class FileWrapper(object):

          def __init__(self, filename, mode):
              self.filename = filename
              self.mode = mode

          def __enter__(self):
              self.file = open(self.filename, self.mode)
              return self.file

          def __exit__(self, exc_type, exc_value, traceback):
              self.file.close()

  Alternatively it can be written using the ``contextmanager()``
  decorator::

      @contextmanager
      def open_wrapper(filename, mode):
          file = open(filename, mode)
          try:
              yield file
          finally:
              file.close()

  This code is safe, as the first part of the generator (before yield)
  is executed inside the ``SETUP_WITH`` bytecode of the caller.


Exception Propagation
---------------------

Sometimes a ``finally`` clause or an ``__enter__()``/``__exit__()``
method can raise an exception.  Usually this is not a problem, since
more important exceptions like ``KeyboardInterrupt`` or ``SystemExit``
should be raised instead.  But it may be nice to be able to keep the
original exception inside a ``__context__`` attribute.  So the cleanup
hook signature may grow an exception argument::

    def sigint_handler(sig, frame):
        if inspect.getcleanupframe(frame) is None:
            raise KeyboardInterrupt()
        sys.setcleanuphook(retry_sigint)

    def retry_sigint(frame, exception=None):
        if inspect.getcleanupframe(frame) is None:
            raise KeyboardInterrupt() from exception

.. note::

   There is no need to have three arguments like in the ``__exit__``
   method, since exceptions have a ``__traceback__`` attribute in
   Python 3.

However, this will set the ``__cause__`` for the exception, which is
not exactly what's intended.  So some hidden interpreter logic may be
used to put a ``__context__`` attribute on every exception raised in a
cleanup hook.


Interruption Between Acquiring Resource and Try Block
-----------------------------------------------------

The example from the first section is not totally safe.  Let's take a
closer look::

    lock.acquire()
    try:
        do_something()
    finally:
        lock.release()

The problem might occur if the code is interrupted just after
``lock.acquire()`` is executed but before the ``try`` block is
entered.

There is no way to make this code safe without modifying it.  The actual fix
depends very much on the use case.  Usually code can be fixed using a
``with`` statement::

    with lock:
        do_something()

However, for coroutines one usually can't use the ``with`` statement
because you need to ``yield`` for both the acquire and release
operations.  So the code might be rewritten like this::

    try:
        yield lock.acquire()
        do_something()
    finally:
        yield lock.release()

The actual locking code might need a little extra support for this use
case, but the change is usually trivial: check whether the lock has
been acquired and release it only if it has.
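
For illustration only, such a tolerant lock might be sketched as
follows (invented code, not part of this proposal)::

    class TolerantLock:

        def __init__(self):
            self._held = False

        def acquire(self):
            # ... actual acquisition elided ...
            self._held = True

        def release(self):
            # Releasing an unheld lock is a no-op, so the finally
            # clause stays safe even if acquire() never completed.
            if self._held:
                self._held = False
                # ... actual release elided ...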


Handling EINTR Inside a Finally
-------------------------------

Even if a signal handler is prepared to check the ``f_in_cleanup``
flag, ``InterruptedError`` might be raised in the cleanup handler,
because the respective system call returned an ``EINTR`` error.  The
primary use cases are prepared to handle this:

* Posix mutexes never return ``EINTR``

* Networking libraries are always prepared to handle ``EINTR``

* Coroutine libraries are usually interrupted with the ``throw()``
  method, not with a signal

The platform-specific function ``siginterrupt()`` might be used to
remove the need to handle ``EINTR``.  However, it may have hardly
predictable consequences; for example, a ``SIGINT`` handler might never
be called if the main thread is stuck inside an IO routine.

A better approach would be for the code that is typically used in
cleanup handlers to be prepared to handle ``InterruptedError``
explicitly.  An example of such code might be a file-based lock
implementation.
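
For illustration, cleanup code might wrap interruptible system calls in
a small retry loop (a sketch, not part of this proposal)::

    import os

    def retry_on_eintr(call, *args):
        # Repeat the call until it finishes without being interrupted
        # by a signal (EINTR).
        while True:
            try:
                return call(*args)
            except InterruptedError:
                continue

    # e.g. inside a finally clause:
    #     retry_on_eintr(os.close, fd)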


Setting Interruption Context Inside Finally Itself
--------------------------------------------------

Some coroutine libraries may need to set a timeout for the finally
clause itself.  For example::

    try:
        do_something()
    finally:
        with timeout(0.5):
            try:
                yield do_slow_cleanup()
            finally:
                yield do_fast_cleanup()

With the current semantics, the timeout will either protect the whole ``with``
block or nothing at all, depending on the implementation of each
library.  What the author intended is to treat ``do_slow_cleanup`` as
ordinary code, and ``do_fast_cleanup`` as a cleanup (a
non-interruptible one).

A similar case might occur when using greenlets or tasklets.

This case can be fixed by exposing ``f_in_cleanup`` as a counter, and
by calling a cleanup hook on each decrement.  A coroutine library may
then remember the value at timeout start, and compare it on each hook
execution.

But in practice, the example is considered to be too obscure to take
into account.


Modifying KeyboardInterrupt
---------------------------

It should be decided whether the default ``SIGINT`` handler should be
modified to use the described mechanism.  The initial proposition is
to keep the old behavior, for two reasons:

* Most applications do not care about cleanup on exit (either they do
  not have external state, or they modify it in a crash-safe way).

* Cleanup may take too much time, not giving the user a chance to
  interrupt an application.

The latter case can be fixed by allowing an unsafe break if a
``SIGINT`` handler is called twice, but it seems not worth the
complexity.


Alternative Python Implementations Support
==========================================

We consider ``f_in_cleanup`` an implementation detail.  The actual
implementation may have some fake frame-like object passed to the
signal handler and the cleanup hook, and returned from
``getcleanupframe()``.  The only requirement is that the ``inspect``
module functions work as expected on these objects.  For this reason,
we also allow passing a generator object to the ``isframeincleanup()``
function, which removes the need to use the ``gi_frame`` attribute.

It might be necessary to specify that ``getcleanupframe()`` must
return the same object that will be passed to the cleanup hook at the
next invocation.


Alternative Names
=================

The original proposal had an ``f_in_finally`` frame attribute, as the
original intention was to protect ``finally`` clauses.  But as it grew
to also protect the ``__enter__`` and ``__exit__`` methods, the
``f_in_cleanup`` name seems better.  Although the ``__enter__`` method
is not a cleanup routine, it at least relates to cleanup done by
context managers.

``setcleanuphook``, ``isframeincleanup`` and ``getcleanupframe`` could
be spelled more readably as ``set_cleanup_hook``, ``is_frame_in_cleanup``
and ``get_cleanup_frame``, although they follow the naming convention
of their respective modules.


Alternative Proposals
=====================

Propagating 'f_in_cleanup' Flag Automatically
---------------------------------------------

This can make ``getcleanupframe()`` unnecessary.  But for yield-based
coroutines you need to propagate it yourself.  Making it writable
leads to somewhat unpredictable behavior of ``setcleanuphook()``.


Add Bytecodes 'INCR_CLEANUP', 'DECR_CLEANUP'
--------------------------------------------

These bytecodes can be used to protect the expression inside the
``with`` statement, as well as making counter increments more explicit
and easy to debug (visible inside a disassembly).  Some middle ground
might be chosen, like ``END_FINALLY`` and ``SETUP_WITH`` implicitly
decrementing the counter (``END_FINALLY`` is present at the end of
every ``with`` suite).

However, adding new bytecodes must be considered very carefully.


Expose 'f_in_cleanup' as a Counter
----------------------------------

The original intention was to expose a minimum of needed
functionality.  However, as we consider the frame flag
``f_in_cleanup`` an implementation detail, we may expose it as a
counter.

Similarly, if we have a counter we may need to have the cleanup hook
called on every counter decrement.  It's unlikely to have much
performance impact as nested finally clauses are an uncommon case.


Add code object flag 'CO_CLEANUP'
---------------------------------

As an alternative to setting the flag inside the ``SETUP_WITH`` and
``WITH_CLEANUP`` bytecodes, we can introduce a flag ``CO_CLEANUP``.
When the interpreter starts to execute code with ``CO_CLEANUP`` set,
it sets ``f_in_cleanup`` for the whole function body.  This flag is
set for code objects of ``__enter__`` and ``__exit__`` special
methods.  Technically it might be set on functions called
``__enter__`` and ``__exit__``.

This seems to be a less clear solution.  It also covers the case where
``__enter__`` and ``__exit__`` are called manually.  This may be
accepted either as a feature or as an unnecessary side-effect (or,
though unlikely, as a bug).

It may also pose a problem when ``__enter__`` or ``__exit__``
functions are implemented in C, as there is no code object to check
for the ``f_in_cleanup`` flag.


Have Cleanup Callback on Frame Object Itself
--------------------------------------------

The frame object may be extended to have an ``f_cleanup_callback``
member which is called when ``f_in_cleanup`` is reset to 0.  This
would make it possible to register different callbacks for different
coroutines.

Despite its apparent beauty, this solution doesn't add anything, as
the two primary use cases are:

* Setting the callback in a signal handler.  The callback is
  inherently a single one for this case.

* Use a single callback per loop for the coroutine use case.  Here, in
  almost all cases, there is only one loop per thread.


No Cleanup Hook
---------------

The original proposal included no cleanup hook specification, as there
are a few ways to achieve the same effect using current tools:

* Using ``sys.settrace()`` and the ``f_trace`` callback.  This may
  cause some problems for debugging, and has a big performance impact
  (although interruptions don't happen very often).

* Sleeping a bit longer and trying again.  For a coroutine library this
  is easy.  For signals it may be achieved using ``signal.alarm()``.

Both methods are considered too impractical, so a way to catch the exit
from ``finally`` clauses is proposed.


References
==========

.. [1] Monocle
   https://github.com/saucelabs/monocle

.. [2] Bluelet
   https://github.com/sampsyo/bluelet

.. [3] Twisted: inlineCallbacks
   http://twistedmatrix.com/documents/8.1.0/api/twisted.internet.defer.html

.. [4] Original discussion
   http://mail.python.org/pipermail/python-ideas/2012-April/014705.html


Copyright
=========

This document has been placed in the public domain.



..
  Local Variables:
  mode: indented-text
  indent-tabs-mode: nil
  sentence-end-double-space: t
  fill-column: 70
  coding: utf-8
  End:

From solipsis at pitrou.net  Sun Apr  8 23:06:14 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sun, 8 Apr 2012 23:06:14 +0200
Subject: [Python-Dev] PEP-419: Protecting cleanup statements from
	interruptions
References: <CAA0gF6qYNRx7BGwo7LjRA8xBWRj0N==hNtSVThF210VaLPRhjQ@mail.gmail.com>
Message-ID: <20120408230614.574654fe@pitrou.net>


Hello Paul,

Thanks for the PEP and the description of the various issues.

> An example implementation of a SIGINT handler that interrupts safely
> might look like::
> 
>     import inspect, sys, functools
> 
>     def sigint_handler(sig, frame):
>         if inspect.getcleanupframe(frame) is None:
>             raise KeyboardInterrupt()
>         sys.setcleanuphook(functools.partial(sigint_handler, 0))

It is not clear whether you are proposing this for the default signal
handler, or only as an example that third-party libraries or frameworks
could implement.

Regards

Antoine.



From benjamin at python.org  Sun Apr  8 23:42:50 2012
From: benjamin at python.org (Benjamin Peterson)
Date: Sun, 8 Apr 2012 17:42:50 -0400
Subject: [Python-Dev] PEP-419: Protecting cleanup statements from
	interruptions
In-Reply-To: <CAA0gF6qYNRx7BGwo7LjRA8xBWRj0N==hNtSVThF210VaLPRhjQ@mail.gmail.com>
References: <CAA0gF6qYNRx7BGwo7LjRA8xBWRj0N==hNtSVThF210VaLPRhjQ@mail.gmail.com>
Message-ID: <CAPZV6o8o2GfS3xR+=7rxokd_Rnd6aOzyjJmi-qxvn6sAUm5hvg@mail.gmail.com>

2012/4/8 Paul Colomiets <paul at colomiets.name>:
> Function 'sys.setcleanuphook'
> -----------------------------
>
> A new function for the ``sys`` module is proposed.  This function sets
> a callback which is executed every time ``f_in_cleanup`` becomes
> false.  Callbacks get a frame object as their sole argument, so that
> they can figure out where they are called from.

Calling a function every time you leave a finally block? Isn't that a
bit expensive?


-- 
Regards,
Benjamin

From victor.stinner at gmail.com  Mon Apr  9 02:00:32 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Mon, 9 Apr 2012 02:00:32 +0200
Subject: [Python-Dev] this is why we shouldn't call it a "monotonic
 clock" (was: PEP 418 is too divisive and confusing and should be postponed)
In-Reply-To: <CAP7+vJ+VB_rNmWdNA2RDsz0KGrvGMO-N8fgfNGdYesFd3x6xPw@mail.gmail.com>
References: <CAP7+vJKRPn3a4xmRxBmV7G9gBPQJOx-8Me994LXeXTBhRwPfFw@mail.gmail.com>
	<20120408040013.GA16581@cskk.homeip.net>
	<20120408124227.78ccab01@pitrou.net>
	<CAP7+vJ+VB_rNmWdNA2RDsz0KGrvGMO-N8fgfNGdYesFd3x6xPw@mail.gmail.com>
Message-ID: <CAMpsgwb_M06tbeteMygemvG2Bk3R0TYt-8zeqLWvyJ0m2S0_OQ@mail.gmail.com>

> IOW "What's good enough for sleep() is good enough for
> user-implemented timeouts and scheduling." as a way to reach at least
> one decision for a platform with agreed-upon cross-platform
> characteristics that are useful.

sleep() is implemented in the kernel. The kernel is notified when a
clock is set, and so can choose how to handle the time adjustment. Most
"sleeping" functions use the system clock but don't care about clock
adjustments.

> I personally have a need for one potentially different clock -- to
> measure short intervals for benchmarks and profiling. This might be
> called time.performancetimer()?

I deferred this topic because it is unclear to me whether such a timer
has to count time elapsed during a sleep or not. For example, time.clock()
does on UNIX, whereas it doesn't on Windows. You may need two clocks
for this:
 * time.perf_counter(): high-resolution timer for benchmarking; counts
time elapsed during a sleep
 * time.process_time(): high-resolution (?) per-process CPU timer
(other possible names: time.process_cpu_time() or time.cpu_time())

On Windows, GetProcessTimes() is not "high-resolution": it has an
accuracy of 1 ms in the best case. QueryPerformanceCounter() counts time
elapsed during a sleep; I don't know whether GetProcessTimes() does.
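
To make the difference concrete, here is a small sketch assuming the two
proposed names exist with the semantics described above (they are only
proposals at this point):

    import time

    def busy_wait(seconds):
        # Burn CPU for roughly `seconds` of wall-clock time.
        end = time.perf_counter() + seconds
        while time.perf_counter() < end:
            pass

    wall0, cpu0 = time.perf_counter(), time.process_time()
    time.sleep(1.0)     # no CPU used: only perf_counter() should advance
    busy_wait(0.5)      # both clocks should advance
    print("wall: %.2f s" % (time.perf_counter() - wall0))   # about 1.5
    print("cpu:  %.2f s" % (time.process_time() - cpu0))    # about 0.5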

Victor

From regebro at gmail.com  Mon Apr  9 03:29:02 2012
From: regebro at gmail.com (Lennart Regebro)
Date: Mon, 9 Apr 2012 03:29:02 +0200
Subject: [Python-Dev] an alternative to embedding policy in PEP 418
In-Reply-To: <4F7F5F6D.7020003@stoneleaf.us>
References: <CAL0kPAUCaAYa-RsaN5Q2H_j+NT+9q4fFwDXLimg6wxuapYpnSg@mail.gmail.com>
	<20120405221758.GA12229@cskk.homeip.net>
	<CAL0kPAWJaWLiRwwzaPYDq7MO2Yj9vBObYt2GHQAUcX3+fqQAyA@mail.gmail.com>
	<4F7F5F6D.7020003@stoneleaf.us>
Message-ID: <CAL0kPAUjR_fHpc4jVnRJs0S5pDT-a2jznqNVngyeWyh2oK8n7Q@mail.gmail.com>

On Fri, Apr 6, 2012 at 23:26, Ethan Furman <ethan at stoneleaf.us> wrote:
> Huh?  Your point is that all APIs are less than ideal because you have to
> read the docs to know for certain how they work?

No.

//Lennart

From cs at zip.com.au  Mon Apr  9 04:54:42 2012
From: cs at zip.com.au (Cameron Simpson)
Date: Mon, 9 Apr 2012 12:54:42 +1000
Subject: [Python-Dev] this is why we shouldn't call it a "monotonic
 clock" (was: PEP 418 is too divisive and confusing and should be postponed)
In-Reply-To: <CAMpsgwb_M06tbeteMygemvG2Bk3R0TYt-8zeqLWvyJ0m2S0_OQ@mail.gmail.com>
References: <CAMpsgwb_M06tbeteMygemvG2Bk3R0TYt-8zeqLWvyJ0m2S0_OQ@mail.gmail.com>
Message-ID: <20120409025442.GA22023@cskk.homeip.net>

On 09Apr2012 02:00, Victor Stinner <victor.stinner at gmail.com> wrote:
| > I personally have a need for one potentially different clock -- to
| > measure short intervals for benchmarks and profiling. This might be
| > called time.performancetimer()?
| 
| I deferred this topic because it is unclear to me if such timer has to
| count elapsed time during a sleep or not. For example, time.clock()
| does on UNIX, whereas it doesn't on Windows. You may need two clocks
| for this:
|  * time.perf_counter(): high-resolution timer for benchmarking, count
| time elasped during a sleep

For POSIX, sounds like CLOCK_MONOTONIC_RAW to me.

|  * time.process_time(): High-resolution (?) per-process timer from the
| CPU. (other possible names: time.process_cpu_time() or
| time.cpu_time())

POSIX offers CLOCK_PROCESS_CPUTIME_ID and CLOCK_THREAD_CPUTIME_ID that
seem to suit this need, depending on your threading situation (and what
you're measuring).
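
For example, with the clock_gettime() wrapper that 3.3 is growing (the
clock IDs are only exposed where the platform provides them; this is a
sketch, not a recommendation of a particular ID):

    import time   # requires Python 3.3+ on a POSIX system

    # Per-process CPU time: it should not advance while the process sleeps.
    cpu0 = time.clock_gettime(time.CLOCK_PROCESS_CPUTIME_ID)
    time.sleep(1.0)
    cpu1 = time.clock_gettime(time.CLOCK_PROCESS_CPUTIME_ID)
    print(cpu1 - cpu0)   # close to 0 despite the one-second sleep

    # Per-thread CPU time, if only the current thread matters.
    print(time.clock_gettime(time.CLOCK_THREAD_CPUTIME_ID))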

| On Windows, GetProcessTimes() has not a "high-resolution": it has a
| accuracy of 1 ms in the best case.

This page:
  http://msdn.microsoft.com/en-us/library/windows/desktop/ms683223%28v=vs.85%29.aspx
says "100-nanosecond time units".

Am I going to the wrong place to learn about these functions?
-- 
Cameron Simpson <cs at zip.com.au> DoD#743
http://www.cskk.ezoshosting.com/cs/

I distrust a research person who is always obviously busy on a task.
- Robert Frosch, VP, GM Research

From guido at python.org  Mon Apr  9 06:14:06 2012
From: guido at python.org (Guido van Rossum)
Date: Sun, 8 Apr 2012 21:14:06 -0700
Subject: [Python-Dev] this is why we shouldn't call it a "monotonic
 clock" (was: PEP 418 is too divisive and confusing and should be postponed)
In-Reply-To: <CAMpsgwb_M06tbeteMygemvG2Bk3R0TYt-8zeqLWvyJ0m2S0_OQ@mail.gmail.com>
References: <CAP7+vJKRPn3a4xmRxBmV7G9gBPQJOx-8Me994LXeXTBhRwPfFw@mail.gmail.com>
	<20120408040013.GA16581@cskk.homeip.net>
	<20120408124227.78ccab01@pitrou.net>
	<CAP7+vJ+VB_rNmWdNA2RDsz0KGrvGMO-N8fgfNGdYesFd3x6xPw@mail.gmail.com>
	<CAMpsgwb_M06tbeteMygemvG2Bk3R0TYt-8zeqLWvyJ0m2S0_OQ@mail.gmail.com>
Message-ID: <CAP7+vJ+Ta8fZS6V5t-+3L_j0Zjaq0p92Jwm3RR5CbTV0PN+DXw@mail.gmail.com>

On Sun, Apr 8, 2012 at 5:00 PM, Victor Stinner <victor.stinner at gmail.com> wrote:
>> IOW "What's good enough for sleep() is good enough for
>> user-implemented timeouts and scheduling." as a way to reach at least
>> one decision for a platform with agreed-upon cross-platform
>> characteristics that are useful.
>
> sleep() is implemented in the kernel. The kernel is notified when a
> clock is set, and so can choose how to handle time adjustement. Most
> "sleeping" functions use the system clock but don't care of clock
> adjustement.

We're going around in circles. I'm not asking what sleep does, I want
on principle a timer that does the same thing as sleep(), regardless
of how sleep() works. So if on some OS sleep() uses the same algorithm
as CLOCK_MONOTONIC_RAW, I want my timer to use that too. But if on
some other OS sleep() uses CLOCK_MONOTONIC, I want my timer there to
use that. And if on some OS sleep() is buggy and uses the time-of-day
clock, well, I wouldn't mind if my timer used the same thing.

>> I personally have a need for one potentially different clock -- to
>> measure short intervals for benchmarks and profiling. This might be
>> called time.performancetimer()?
>
> I deferred this topic because it is unclear to me if such timer has to
> count elapsed time during a sleep or not. For example, time.clock()
> does on UNIX, whereas it doesn't on Windows.

I will declare that that was a mistake in clock(), but one that's too
late to fix, because fixing it would break too many programs (those on
*nix that use it to measure CPU time, and those on Windows that use it
to measure real time).

>You may need two clocks
> for this:
>  * time.perf_counter(): high-resolution timer for benchmarking, count
> time elapsed during a sleep
>  * time.process_time(): High-resolution (?) per-process timer from the
> CPU. (other possible names: time.process_cpu_time() or
> time.cpu_time())

TBH I don't need another timer that measures CPU time (not even on
Windows). In a sense, measuring CPU time is a relic from the age of
mainframes and timesharing, where CPU time was the most precious
resource (and in some cases the unit in which other resources were
expressed for accounting purposes). In modern days, it's much more
likely that the time you're measuring is somehow related to how long a
user has to wait for some result (e.g. web response times), and here
"wait time" is just as real as CPU time.

> On Windows, GetProcessTimes() has not a "high-resolution": it has a
> accuracy of 1 ms in the best case. QueryPerformanceCounter() counts
> time elapsed during a sleep, I don't know for GetProcessTimes.

-- 
--Guido van Rossum (python.org/~guido)

From paul at colomiets.name  Mon Apr  9 08:54:20 2012
From: paul at colomiets.name (Paul Colomiets)
Date: Mon, 9 Apr 2012 09:54:20 +0300
Subject: [Python-Dev] PEP-419: Protecting cleanup statements from
	interruptions
In-Reply-To: <20120408230614.574654fe@pitrou.net>
References: <CAA0gF6qYNRx7BGwo7LjRA8xBWRj0N==hNtSVThF210VaLPRhjQ@mail.gmail.com>
	<20120408230614.574654fe@pitrou.net>
Message-ID: <CAA0gF6r15WrrLTmFK--W=A1SG0pCgWmD0nQ57LX=u3KqZsNddQ@mail.gmail.com>

Hi Antoine,

On Mon, Apr 9, 2012 at 12:06 AM, Antoine Pitrou <solipsis at pitrou.net> wrote:
>
> Hello Paul,
>
> Thanks for the PEP and the description of the various issues.
>
>> An example implementation of a SIGINT handler that interrupts safely
>> might look like::
>>
>>     import inspect, sys, functools
>>
>>     def sigint_handler(sig, frame):
>>         if inspect.getcleanupframe(frame) is None:
>>             raise KeyboardInterrupt()
>>         sys.setcleanuphook(functools.partial(sigint_handler, 0))
>
> It is not clear whether you are proposing this for the default signal
> handler, or only as an example that third-party libraries or frameworks
> could implement.
>

Only as an example. The reason is in "Modifying KeyboardInterrupt"
section under "Unresolved Issues". So it might be changed if there
is demand.

-- 
Paul

From paul at colomiets.name  Mon Apr  9 09:05:33 2012
From: paul at colomiets.name (Paul Colomiets)
Date: Mon, 9 Apr 2012 10:05:33 +0300
Subject: [Python-Dev] PEP-419: Protecting cleanup statements from
	interruptions
In-Reply-To: <CAPZV6o8o2GfS3xR+=7rxokd_Rnd6aOzyjJmi-qxvn6sAUm5hvg@mail.gmail.com>
References: <CAA0gF6qYNRx7BGwo7LjRA8xBWRj0N==hNtSVThF210VaLPRhjQ@mail.gmail.com>
	<CAPZV6o8o2GfS3xR+=7rxokd_Rnd6aOzyjJmi-qxvn6sAUm5hvg@mail.gmail.com>
Message-ID: <CAA0gF6qTp1B9uKYsh3pKR7n2a2S0ocSYYPqKkcdH_KUdM43G6A@mail.gmail.com>

Hi Benjamin,

On Mon, Apr 9, 2012 at 12:42 AM, Benjamin Peterson <benjamin at python.org> wrote:
> 2012/4/8 Paul Colomiets <paul at colomiets.name>:
>> Function 'sys.setcleanuphook'
>> -----------------------------
>>
>> A new function for the ``sys`` module is proposed.  This function sets
>> a callback which is executed every time ``f_in_cleanup`` becomes
>> false.  Callbacks get a frame object as their sole argument, so that
>> they can figure out where they are called from.
>
> Calling a function every time you leave a finally block? Isn't that a
> bit expensive?
>

For a signal handler it isn't, because you set the hook only when a
signal happens, and remove it the first time it fires (in the common case).

For yield-based coroutines, there is a similar overhead from the
trampoline at each yield and each return, and exiting a finally block
doesn't happen more often than returning.

For both greenlets and yield-based coroutines it is intended to be used
for exceptional situations (when a timeout happens *and* the coroutine is
currently in a finally block), so it can be turned off when unneeded
(and even turned on only for the specific coroutine in question).

When the hook is not set, the only cost is a NULL check on a single
pointer at each exit from a finally block. That overhead should be
negligible.
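
As a sketch of the intended calling pattern only -- sys.setcleanuphook()
and inspect.getcleanupframe() are the PEP's proposed APIs and do not
exist in any released Python, and the trampoline details here are made
up for illustration:

    import sys, inspect

    class Timeout(Exception):
        pass

    def timeout_coroutine(coro):
        frame = coro.gi_frame
        if inspect.getcleanupframe(frame) is None:
            # Not inside a finally block: interrupt immediately.
            coro.throw(Timeout())
            return

        def on_cleanup_exit(frame):
            sys.setcleanuphook(None)    # one-shot: uninstall the hook
            coro.throw(Timeout())

        # Install the hook only for the duration of the cleanup code.
        sys.setcleanuphook(on_cleanup_exit)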

-- 
Paul

From mark at hotpy.org  Mon Apr  9 10:56:34 2012
From: mark at hotpy.org (Mark Shannon)
Date: Mon, 09 Apr 2012 09:56:34 +0100
Subject: [Python-Dev] Removing surplus fields from the frame object and not
 adding any new ones.
Message-ID: <4F82A442.8020508@hotpy.org>

The frame object is a key object in CPython. It holds the state
of a function invocation. Frame objects are allocated, initialised
and deallocated at a rapid rate.
Each extra field in the frame object requires extra work for each
and every function invocation. Fewer fields in the frame object
means less overhead for function calls, and cleaner simpler code.

We have recently removed the f_yieldfrom field from the frame object.
(http://bugs.python.org/issue14230)

The f_exc_type, f_exc_value and f_exc_traceback fields, which handle
sys.exc_info() in generators, could be moved to the generator object.
(http://bugs.python.org/issue13897)

The f_tstate field is redundant and, it would seem, dangerous
(http://bugs.python.org/issue14432)

The f_builtins, f_globals, f_locals fields could be combined into a
single f_namespaces struct.
(http://code.activestate.com/lists/python-dev/113381/)

Now PEP 419 proposes adding (yet) another field to the frame object.
Please don't.

Clean, concise data structures lead to clean, concise code,
which we all know is a "good thing" :)

Cheers,
Mark.


From andrew.svetlov at gmail.com  Mon Apr  9 12:40:51 2012
From: andrew.svetlov at gmail.com (Andrew Svetlov)
Date: Mon, 9 Apr 2012 13:40:51 +0300
Subject: [Python-Dev] Removing surplus fields from the frame object and
 not adding any new ones.
In-Reply-To: <4F82A442.8020508@hotpy.org>
References: <4F82A442.8020508@hotpy.org>
Message-ID: <CAL3CFcWygySwmfMteQZ=j1PydWe1YYnKXBjpVLThWNV6gSrusg@mail.gmail.com>

Do you want to create both a `frame` and an `f_namespaces` object on
every function call instead of a single `frame` creation?

On Mon, Apr 9, 2012 at 11:56 AM, Mark Shannon <mark at hotpy.org> wrote:
> The frame object is a key object in CPython. It holds the state
> of a function invocation. Frame objects are allocated, initialised
> and deallocated at a rapid rate.
> Each extra field in the frame object requires extra work for each
> and every function invocation. Fewer fields in the frame object
> means less overhead for function calls, and cleaner simpler code.
>
> We have recently removed the f_yieldfrom field from the frame object.
> (http://bugs.python.org/issue14230)
>
> The f_exc_type, f->f_exc_value, f->f_exc_traceback fields which handle
> sys.exc_info() in generators could be moved to the generator object.
> (http://bugs.python.org/issue13897)
>
> The f_tstate field is redundant and, it would seem, dangerous
> (http://bugs.python.org/issue14432)
>
> The f_builtins, f_globals, f_locals fields could be combined into a
> single f_namespaces struct.
> (http://code.activestate.com/lists/python-dev/113381/)
>
> Now PEP 419 proposes adding (yet) another field to the frame object.
> Please don't.
>
> Clean, concise data structures lead to clean, concise code.
> which we all know is a "good thing" :)
>
> Cheers,
> Mark.
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/andrew.svetlov%40gmail.com



-- 
Thanks,
Andrew Svetlov

From mark at hotpy.org  Mon Apr  9 12:51:54 2012
From: mark at hotpy.org (Mark Shannon)
Date: Mon, 09 Apr 2012 11:51:54 +0100
Subject: [Python-Dev] Removing surplus fields from the frame object and
 not adding any new ones.
In-Reply-To: <CAL3CFcWygySwmfMteQZ=j1PydWe1YYnKXBjpVLThWNV6gSrusg@mail.gmail.com>
References: <4F82A442.8020508@hotpy.org>
	<CAL3CFcWygySwmfMteQZ=j1PydWe1YYnKXBjpVLThWNV6gSrusg@mail.gmail.com>
Message-ID: <4F82BF4A.407@hotpy.org>

Andrew Svetlov wrote:
> Do you want to create `frame` and `f_namespaces` every function call
> instead of single `frame` creation?

f_namespaces would be part of the frame, replacing f_builtins, f_globals
and f_locals. The indirection of an external object hurts performance,
so it would have to be a struct within the frame. The aim is clarity;
locals, globals and builtins form a trio, so should be implemented as such.
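
The trio is already visible from Python code on every frame object; a
tiny sketch, just to show which three fields are under discussion (the
question here is only how the corresponding pointers are laid out in the
C-level struct):

    import sys

    def show_namespaces():
        frame = sys._getframe()
        print(frame.f_globals is globals())   # True
        print('len' in frame.f_builtins)      # True
        print(sorted(frame.f_locals))         # ['frame']

    show_namespaces()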

> On Mon, Apr 9, 2012 at 11:56 AM, Mark Shannon <mark at hotpy.org> wrote:
>> The frame object is a key object in CPython. It holds the state
>> of a function invocation. Frame objects are allocated, initialised
>> and deallocated at a rapid rate.
>> Each extra field in the frame object requires extra work for each
>> and every function invocation. Fewer fields in the frame object
>> means less overhead for function calls, and cleaner simpler code.
>>
>> We have recently removed the f_yieldfrom field from the frame object.
>> (http://bugs.python.org/issue14230)
>>
>> The f_exc_type, f->f_exc_value, f->f_exc_traceback fields which handle
>> sys.exc_info() in generators could be moved to the generator object.
>> (http://bugs.python.org/issue13897)
>>
>> The f_tstate field is redundant and, it would seem, dangerous
>> (http://bugs.python.org/issue14432)
>>
>> The f_builtins, f_globals, f_locals fields could be combined into a
>> single f_namespaces struct.
>> (http://code.activestate.com/lists/python-dev/113381/)
>>
>> Now PEP 419 proposes adding (yet) another field to the frame object.
>> Please don't.
>>
>> Clean, concise data structures lead to clean, concise code.
>> which we all know is a "good thing" :)
>>
>> Cheers,
>> Mark.
>>
>> _______________________________________________
>> Python-Dev mailing list
>> Python-Dev at python.org
>> http://mail.python.org/mailman/listinfo/python-dev
>> Unsubscribe:
>> http://mail.python.org/mailman/options/python-dev/andrew.svetlov%40gmail.com
> 
> 
> 


From victor.stinner at gmail.com  Mon Apr  9 13:24:38 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Mon, 9 Apr 2012 13:24:38 +0200
Subject: [Python-Dev] this is why we shouldn't call it a "monotonic
 clock" (was: PEP 418 is too divisive and confusing and should be postponed)
In-Reply-To: <CAP7+vJ+Ta8fZS6V5t-+3L_j0Zjaq0p92Jwm3RR5CbTV0PN+DXw@mail.gmail.com>
References: <CAP7+vJKRPn3a4xmRxBmV7G9gBPQJOx-8Me994LXeXTBhRwPfFw@mail.gmail.com>
	<20120408040013.GA16581@cskk.homeip.net>
	<20120408124227.78ccab01@pitrou.net>
	<CAP7+vJ+VB_rNmWdNA2RDsz0KGrvGMO-N8fgfNGdYesFd3x6xPw@mail.gmail.com>
	<CAMpsgwb_M06tbeteMygemvG2Bk3R0TYt-8zeqLWvyJ0m2S0_OQ@mail.gmail.com>
	<CAP7+vJ+Ta8fZS6V5t-+3L_j0Zjaq0p92Jwm3RR5CbTV0PN+DXw@mail.gmail.com>
Message-ID: <CAMpsgwa4hvSM9wx+itvyEB9r7petnju+vWQ5O6oXDMr--Rgsmw@mail.gmail.com>

2012/4/9 Guido van Rossum <guido at python.org>:
>>You may need two clocks
>> for this:
>>  * time.perf_counter(): high-resolution timer for benchmarking, count
>> time elapsed during a sleep
>>  * time.process_time(): High-resolution (?) per-process timer from the
>> CPU. (other possible names: time.process_cpu_time() or
>> time.cpu_time())
>
> TBH I don't need another timer that measures CPU time (not even on
> Windows). In a sense, measuring CPU time is a relic from the age of
> mainframes and timesharing, where CPU time was the most precious
> resource (and in some cases the unit in which other resources were
> expressed for accounting purposes). In modern days, it's much more
> likely that the time you're measuring is somehow related to how long a
> use has to wait for some result (e.g. web response times) and here
> "wait time" is just as real as CPU time.

Ah. In this case, my initial proposition is correct. I re-added the pseudo-code:
http://www.python.org/dev/peps/pep-0418/#deferred-api-time-perf-counter

Use QueryPerformanceCounter(), or time.monotonic() or time.time().

Victor

From victor.stinner at gmail.com  Mon Apr  9 13:26:30 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Mon, 9 Apr 2012 13:26:30 +0200
Subject: [Python-Dev] this is why we shouldn't call it a "monotonic
 clock" (was: PEP 418 is too divisive and confusing and should be postponed)
In-Reply-To: <20120409025442.GA22023@cskk.homeip.net>
References: <CAMpsgwb_M06tbeteMygemvG2Bk3R0TYt-8zeqLWvyJ0m2S0_OQ@mail.gmail.com>
	<20120409025442.GA22023@cskk.homeip.net>
Message-ID: <CAMpsgwZj-AgZu95q4aJRbfS5Rjbyo1E3NqRjDpVV3RRCnzLTCQ@mail.gmail.com>

> |  * time.process_time(): High-resolution (?) per-process timer from the
> | CPU. (other possible names: time.process_cpu_time() or
> | time.cpu_time())
>
> POSIX offers CLOCK_PROCESS_CPUTIME_ID and CLOCK_THREAD_CPUTIME_ID that
> seem to suit this need, depending on your threading situation (and what
> you're measuring).

Yep.

> | On Windows, GetProcessTimes() has not a "high-resolution": it has a
> | accuracy of 1 ms in the best case.
>
> This page:
>  http://msdn.microsoft.com/en-us/library/windows/desktop/ms683223%28v=vs.85%29.aspx
> says "100-nanosecond time units".
>
> Am I going to the wrong place to learn about these functions?

Yes, the resolution is 100 ns, but the accuracy is only 1 ms in the
best case (and in practice it is usually 15 ms or 10 ms).

Resolution != accuracy, and only accuracy matters :-)
http://www.python.org/dev/peps/pep-0418/#resolution
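
One way to see what a clock actually delivers is to measure the smallest
increment it reports, in the spirit of the measurements in the PEP (this
only observes the clock's granularity in practice; it says nothing about
how close it is to true time):

    import time

    def smallest_step(clock, samples=100):
        """Smallest non-zero increment observed from `clock`, in seconds."""
        best = float('inf')
        for _ in range(samples):
            t1 = clock()
            t2 = clock()
            while t2 == t1:        # spin until the clock ticks
                t2 = clock()
            best = min(best, t2 - t1)
        return best

    print(smallest_step(time.time))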

Victor

From greg.ewing at canterbury.ac.nz  Mon Apr  9 14:24:07 2012
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Tue, 10 Apr 2012 00:24:07 +1200
Subject: [Python-Dev] Change to yield-from implementation
In-Reply-To: <4F82A442.8020508@hotpy.org>
References: <4F82A442.8020508@hotpy.org>
Message-ID: <4F82D4E7.3000803@canterbury.ac.nz>

Mark Shannon wrote:

> We have recently removed the f_yieldfrom field from the frame object.
> (http://bugs.python.org/issue14230)

Hey, wait a minute. Did anyone consider the performance effect
of that change on deeply nested yield-froms?

The way it was, a yield-from chain was traversed by a very
tight C loop that found the end frame and resumed it directly.
If I understand what you've done correctly, now it has to
enter and execute a bytecode in every frame along the way.
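
A minimal sketch of the pattern in question (Python 3.3 syntax; the
absolute numbers don't matter, the point is only that every value now
travels through `depth` delegating frames):

    import timeit

    def leaf(n):
        for i in range(n):
            yield i

    def nested(depth, n):
        if depth == 0:
            yield from leaf(n)
        else:
            yield from nested(depth - 1, n)

    # 100 levels of delegation around a 1000-item inner loop.
    print(timeit.timeit(lambda: sum(nested(100, 1000)), number=10))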

-- 
Greg

From benjamin at python.org  Mon Apr  9 14:46:06 2012
From: benjamin at python.org (Benjamin Peterson)
Date: Mon, 9 Apr 2012 08:46:06 -0400
Subject: [Python-Dev] Change to yield-from implementation
In-Reply-To: <4F82D4E7.3000803@canterbury.ac.nz>
References: <4F82A442.8020508@hotpy.org>
	<4F82D4E7.3000803@canterbury.ac.nz>
Message-ID: <CAPZV6o9UMDOdY_+JyhCXnSvnKz20Pn+0j1z2JBd91Xrv2JFs1w@mail.gmail.com>

2012/4/9 Greg Ewing <greg.ewing at canterbury.ac.nz>:
> Mark Shannon wrote:
>
>> We have recently removed the f_yieldfrom field from the frame object.
>> (http://bugs.python.org/issue14230)
>
>
> Hey, wait a minute. Did anyone consider the performance effect
> of that change on deeply nested yield-froms?
>
> The way it was, a yield-from chain was traversed by a very
> tight C loop that found the end frame and resumed it directly.
> If I understand what you've done correctly, now it has to
> enter and execute a bytecode in every frame along the way.

I think correctness is more important than performance, though.



-- 
Regards,
Benjamin

From andrew.svetlov at gmail.com  Mon Apr  9 14:46:11 2012
From: andrew.svetlov at gmail.com (Andrew Svetlov)
Date: Mon, 9 Apr 2012 15:46:11 +0300
Subject: [Python-Dev] Removing surplus fields from the frame object and
 not adding any new ones.
In-Reply-To: <4F82BF4A.407@hotpy.org>
References: <4F82A442.8020508@hotpy.org>
	<CAL3CFcWygySwmfMteQZ=j1PydWe1YYnKXBjpVLThWNV6gSrusg@mail.gmail.com>
	<4F82BF4A.407@hotpy.org>
Message-ID: <CAL3CFcU5F9QRamu1MT7CJtPryFBvKz_7mQVVAghmTY_Z5NcknQ@mail.gmail.com>

So there's really no difference between three separate fields in the
frame and an embedded struct with those fields.

On Mon, Apr 9, 2012 at 1:51 PM, Mark Shannon <mark at hotpy.org> wrote:
> Andrew Svetlov wrote:
>>
>> Do you want to create `frame` and `f_namespaces` every function call
>> instead of single `frame` creation?
>
>
> f_namespaces would be part of the frame, replacing f_builtins, f_globals
> and f_locals. The indirection of an external object hurts performance,
> so it would have to be a struct within the frame. The aim is clarity;
> locals, globals and builtins form a trio, so should be implemented as such.
>
>
>> On Mon, Apr 9, 2012 at 11:56 AM, Mark Shannon <mark at hotpy.org> wrote:
>>>
>>> The frame object is a key object in CPython. It holds the state
>>> of a function invocation. Frame objects are allocated, initialised
>>> and deallocated at a rapid rate.
>>> Each extra field in the frame object requires extra work for each
>>> and every function invocation. Fewer fields in the frame object
>>> means less overhead for function calls, and cleaner simpler code.
>>>
>>> We have recently removed the f_yieldfrom field from the frame object.
>>> (http://bugs.python.org/issue14230)
>>>
>>> The f_exc_type, f->f_exc_value, f->f_exc_traceback fields which handle
>>> sys.exc_info() in generators could be moved to the generator object.
>>> (http://bugs.python.org/issue13897)
>>>
>>> The f_tstate field is redundant and, it would seem, dangerous
>>> (http://bugs.python.org/issue14432)
>>>
>>> The f_builtins, f_globals, f_locals fields could be combined into a
>>> single f_namespaces struct.
>>> (http://code.activestate.com/lists/python-dev/113381/)
>>>
>>> Now PEP 419 proposes adding (yet) another field to the frame object.
>>> Please don't.
>>>
>>> Clean, concise data structures lead to clean, concise code.
>>> which we all know is a "good thing" :)
>>>
>>> Cheers,
>>> Mark.
>>>
>>> _______________________________________________
>>> Python-Dev mailing list
>>> Python-Dev at python.org
>>> http://mail.python.org/mailman/listinfo/python-dev
>>> Unsubscribe:
>>>
>>> http://mail.python.org/mailman/options/python-dev/andrew.svetlov%40gmail.com
>>
>>
>>
>>
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/andrew.svetlov%40gmail.com



-- 
Thanks,
Andrew Svetlov

From solipsis at pitrou.net  Mon Apr  9 14:46:05 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Mon, 9 Apr 2012 14:46:05 +0200
Subject: [Python-Dev] Change to yield-from implementation
References: <4F82A442.8020508@hotpy.org>
	<4F82D4E7.3000803@canterbury.ac.nz>
Message-ID: <20120409144605.76a6fb04@pitrou.net>

On Tue, 10 Apr 2012 00:24:07 +1200
Greg Ewing <greg.ewing at canterbury.ac.nz> wrote:
> Mark Shannon wrote:
> 
> > We have recently removed the f_yieldfrom field from the frame object.
> > (http://bugs.python.org/issue14230)
> 
> Hey, wait a minute. Did anyone consider the performance effect
> of that change on deeply nested yield-froms?

What's the point? Apart from naïve toy examples of traversing trees, I
don't think "deeply nested yield-froms" are likely to be
performance-critical.

Regards

Antoine.



From andrew.svetlov at gmail.com  Mon Apr  9 15:02:50 2012
From: andrew.svetlov at gmail.com (Andrew Svetlov)
Date: Mon, 9 Apr 2012 16:02:50 +0300
Subject: [Python-Dev] Removing surplus fields from the frame object and
 not adding any new ones.
In-Reply-To: <CAL3CFcU5F9QRamu1MT7CJtPryFBvKz_7mQVVAghmTY_Z5NcknQ@mail.gmail.com>
References: <4F82A442.8020508@hotpy.org>
	<CAL3CFcWygySwmfMteQZ=j1PydWe1YYnKXBjpVLThWNV6gSrusg@mail.gmail.com>
	<4F82BF4A.407@hotpy.org>
	<CAL3CFcU5F9QRamu1MT7CJtPryFBvKz_7mQVVAghmTY_Z5NcknQ@mail.gmail.com>
Message-ID: <CAL3CFcVSPqUoEJnE-Ommt8REpC5quw32U+rmi1+1hJi5zhk0aQ@mail.gmail.com>

While I agree with keeping data structures simple and clean, I think
preserving them unchanged forever is a bad idea in general.
Let's look at each particular case before making a decision.

On Mon, Apr 9, 2012 at 3:46 PM, Andrew Svetlov <andrew.svetlov at gmail.com> wrote:
> So it's really no difference between three separate fields in frame
> and embedded struct with those fields.
>
> On Mon, Apr 9, 2012 at 1:51 PM, Mark Shannon <mark at hotpy.org> wrote:
>> Andrew Svetlov wrote:
>>>
>>> Do you want to create `frame` and `f_namespaces` every function call
>>> instead of single `frame` creation?
>>
>>
>> f_namespaces would be part of the frame, replacing f_builtins, f_globals
>> and f_locals. The indirection of an external object hurts performance,
>> so it would have to be a struct within the frame. The aim is clarity;
>> locals, globals and builtins form a trio, so should be implemented as such.
>>
>>
>>> On Mon, Apr 9, 2012 at 11:56 AM, Mark Shannon <mark at hotpy.org> wrote:
>>>>
>>>> The frame object is a key object in CPython. It holds the state
>>>> of a function invocation. Frame objects are allocated, initialised
>>>> and deallocated at a rapid rate.
>>>> Each extra field in the frame object requires extra work for each
>>>> and every function invocation. Fewer fields in the frame object
>>>> means less overhead for function calls, and cleaner simpler code.
>>>>
>>>> We have recently removed the f_yieldfrom field from the frame object.
>>>> (http://bugs.python.org/issue14230)
>>>>
>>>> The f_exc_type, f->f_exc_value, f->f_exc_traceback fields which handle
>>>> sys.exc_info() in generators could be moved to the generator object.
>>>> (http://bugs.python.org/issue13897)
>>>>
>>>> The f_tstate field is redundant and, it would seem, dangerous
>>>> (http://bugs.python.org/issue14432)
>>>>
>>>> The f_builtins, f_globals, f_locals fields could be combined into a
>>>> single f_namespaces struct.
>>>> (http://code.activestate.com/lists/python-dev/113381/)
>>>>
>>>> Now PEP 419 proposes adding (yet) another field to the frame object.
>>>> Please don't.
>>>>
>>>> Clean, concise data structures lead to clean, concise code.
>>>> which we all know is a "good thing" :)
>>>>
>>>> Cheers,
>>>> Mark.
>>>>
>>>> _______________________________________________
>>>> Python-Dev mailing list
>>>> Python-Dev at python.org
>>>> http://mail.python.org/mailman/listinfo/python-dev
>>>> Unsubscribe:
>>>>
>>>> http://mail.python.org/mailman/options/python-dev/andrew.svetlov%40gmail.com
>>>
>>>
>>>
>>>
>>
>> _______________________________________________
>> Python-Dev mailing list
>> Python-Dev at python.org
>> http://mail.python.org/mailman/listinfo/python-dev
>> Unsubscribe:
>> http://mail.python.org/mailman/options/python-dev/andrew.svetlov%40gmail.com
>
>
>
> --
> Thanks,
> Andrew Svetlov



-- 
Thanks,
Andrew Svetlov

From guido at python.org  Mon Apr  9 16:57:05 2012
From: guido at python.org (Guido van Rossum)
Date: Mon, 9 Apr 2012 07:57:05 -0700
Subject: [Python-Dev] Change to yield-from implementation
In-Reply-To: <20120409144605.76a6fb04@pitrou.net>
References: <4F82A442.8020508@hotpy.org> <4F82D4E7.3000803@canterbury.ac.nz>
	<20120409144605.76a6fb04@pitrou.net>
Message-ID: <CAP7+vJLD2MZuRw-f1_eCaKJ6V5TZfy8YsVw6CzdJjc4bt0A34g@mail.gmail.com>

On Mon, Apr 9, 2012 at 5:46 AM, Antoine Pitrou <solipsis at pitrou.net> wrote:
> On Tue, 10 Apr 2012 00:24:07 +1200
> Greg Ewing <greg.ewing at canterbury.ac.nz> wrote:
>> Mark Shannon wrote:
>>
>> > We have recently removed the f_yieldfrom field from the frame object.
>> > (http://bugs.python.org/issue14230)
>>
>> Hey, wait a minute. Did anyone consider the performance effect
>> of that change on deeply nested yield-froms?
>
> What's the point? Apart from naïve toy examples of traversing trees, I
> don't think "deeply nested yield-froms" are likely to be
> performance-critical.

I agree with Benjamin that correctness trumps performance, but I'd
also like to point out that there are other use cases besides nested
iterators. If this gets used for coroutines it may not be so unusual
to have a stack of nested things with one on top that loops a lot --
if each iteration incurs a cost proportional to how it got there, this
may be a problem. But, correctness first.

-- 
--Guido van Rossum (python.org/~guido)

From guido at python.org  Mon Apr  9 16:59:41 2012
From: guido at python.org (Guido van Rossum)
Date: Mon, 9 Apr 2012 07:59:41 -0700
Subject: [Python-Dev] Removing surplus fields from the frame object and
 not adding any new ones.
In-Reply-To: <4F82BF4A.407@hotpy.org>
References: <4F82A442.8020508@hotpy.org>
	<CAL3CFcWygySwmfMteQZ=j1PydWe1YYnKXBjpVLThWNV6gSrusg@mail.gmail.com>
	<4F82BF4A.407@hotpy.org>
Message-ID: <CAP7+vJ+SUjqcfGgPFHnMwSAaDWzU2DB60dh9yt6oxPqejkVTUw@mail.gmail.com>

On Mon, Apr 9, 2012 at 3:51 AM, Mark Shannon <mark at hotpy.org> wrote:
> f_namespaces would be part of the frame, replacing f_builtins, f_globals
> and f_locals. The indirection of an external object hurts performance,
> so it would have to be a struct within the frame. The aim is clarity;
> locals, globals and builtins form a trio, so should be implemented as such.

How does replacing three fields with a struct containing three fields
reduce the size of the frame or the overhead in creating it?

-- 
--Guido van Rossum (python.org/~guido)

From mark at hotpy.org  Mon Apr  9 17:17:32 2012
From: mark at hotpy.org (Mark Shannon)
Date: Mon, 09 Apr 2012 16:17:32 +0100
Subject: [Python-Dev] Removing surplus fields from the frame object and
 not adding any new ones.
In-Reply-To: <CAP7+vJ+SUjqcfGgPFHnMwSAaDWzU2DB60dh9yt6oxPqejkVTUw@mail.gmail.com>
References: <4F82A442.8020508@hotpy.org>
	<CAL3CFcWygySwmfMteQZ=j1PydWe1YYnKXBjpVLThWNV6gSrusg@mail.gmail.com>
	<4F82BF4A.407@hotpy.org>
	<CAP7+vJ+SUjqcfGgPFHnMwSAaDWzU2DB60dh9yt6oxPqejkVTUw@mail.gmail.com>
Message-ID: <4F82FD8C.3080103@hotpy.org>

Guido van Rossum wrote:
> On Mon, Apr 9, 2012 at 3:51 AM, Mark Shannon <mark at hotpy.org> wrote:
>> f_namespaces would be part of the frame, replacing f_builtins, f_globals
>> and f_locals. The indirection of an external object hurts performance,
>> so it would have to be a struct within the frame. The aim is clarity;
>> locals, globals and builtins form a trio, so should be implemented as such.
> 
> How does replacing three fields with a struct containing three fields
> reduce the size of the frame or the overhead in creating it?
> 

It doesn't.
I think it would improve clarity, but I doubt it is worth the effort.

The point I really wanted to make is that many of the fields in the
frame object belong elsewhere and adding new fields to the frame object
is generally a bad idea.

Cheers,
Mark.


From martin at v.loewis.de  Mon Apr  9 18:17:09 2012
From: martin at v.loewis.de (martin at v.loewis.de)
Date: Mon, 09 Apr 2012 18:17:09 +0200
Subject: [Python-Dev] Removing surplus fields from the frame object and
 not adding any new ones.
In-Reply-To: <4F82FD8C.3080103@hotpy.org>
References: <4F82A442.8020508@hotpy.org>
	<CAL3CFcWygySwmfMteQZ=j1PydWe1YYnKXBjpVLThWNV6gSrusg@mail.gmail.com>
	<4F82BF4A.407@hotpy.org>
	<CAP7+vJ+SUjqcfGgPFHnMwSAaDWzU2DB60dh9yt6oxPqejkVTUw@mail.gmail.com>
	<4F82FD8C.3080103@hotpy.org>
Message-ID: <20120409181709.Horde.terzPNjz9kRPgwuFnk0SOkA@webmail.df.eu>

> The point I really wanted to make is that many of the fields in the
> frame object belong elsewhere and adding new fields to the frame object
> is generally a bad idea.

I disagree with that statement, and don't think you have offered sufficient
proof of it. The structure may look irregular to you, but maybe you just need
to get used to it. Factually, I don't think that *many* of the fields belong
elsewhere. The majority of the fields clearly belongs where it is, and there
is nothing wrong with adding new fields if there is a need for it.

Regards,
Martin



From guido at python.org  Mon Apr  9 18:20:02 2012
From: guido at python.org (Guido van Rossum)
Date: Mon, 9 Apr 2012 09:20:02 -0700
Subject: [Python-Dev] Removing surplus fields from the frame object and
 not adding any new ones.
In-Reply-To: <4F82FD8C.3080103@hotpy.org>
References: <4F82A442.8020508@hotpy.org>
	<CAL3CFcWygySwmfMteQZ=j1PydWe1YYnKXBjpVLThWNV6gSrusg@mail.gmail.com>
	<4F82BF4A.407@hotpy.org>
	<CAP7+vJ+SUjqcfGgPFHnMwSAaDWzU2DB60dh9yt6oxPqejkVTUw@mail.gmail.com>
	<4F82FD8C.3080103@hotpy.org>
Message-ID: <CAP7+vJ+eP7pNV1LORjAjOMhLHDAFHCxU5DyC6aTjtD+9PpY+3Q@mail.gmail.com>

On Mon, Apr 9, 2012 at 8:17 AM, Mark Shannon <mark at hotpy.org> wrote:
> Guido van Rossum wrote:
>>
>> On Mon, Apr 9, 2012 at 3:51 AM, Mark Shannon <mark at hotpy.org> wrote:
>>>
>>> f_namespaces would be part of the frame, replacing f_builtins, f_globals
>>> and f_locals. The indirection of an external object hurts performance,
>>> so it would have to be a struct within the frame. The aim is clarity;
>>> locals, globals and builtins form a trio, so should be implemented as
>>> such.
>>
>>
>> How does replacing three fields with a struct containing three fields
>> reduce the size of the frame or the overhead in creating it?
>>
>
> It doesn't.
> I think it would improve clarity, but I doubt it is worth the effort.
>
> The point I really wanted to make is that many of the fields in the
> frame object belong elsewhere and adding new fields to the frame object
> is generally a bad idea.

But is it? Consider the 'finally' proposal (not that I endorse it!) --
where would they put this info?

And what is the cost really? Have you measured it? Or are you just
optimizing prematurely?

-- 
--Guido van Rossum (python.org/~guido)

From tjreedy at udel.edu  Mon Apr  9 19:34:25 2012
From: tjreedy at udel.edu (Terry Reedy)
Date: Mon, 09 Apr 2012 13:34:25 -0400
Subject: [Python-Dev] [Python-checkins] cpython: #14533: if a test has
 no test_main, use loadTestsFromModule.
In-Reply-To: <E1SHEPA-0001ff-Ow@dinsdale.python.org>
References: <E1SHEPA-0001ff-Ow@dinsdale.python.org>
Message-ID: <4F831DA1.5070100@udel.edu>


On 4/9/2012 9:13 AM, r.david.murray wrote:
> http://hg.python.org/cpython/rev/eff551437abd
> changeset:   76176:eff551437abd
> user:        R David Murray<rdmurray at bitdance.com>
> date:        Mon Apr 09 08:55:42 2012 -0400
> summary:
>    #14533: if a test has no test_main, use loadTestsFromModule.
>
> This moves us further in the direction of using normal unittest facilities
> instead of specialized regrtest ones.  Any test module that can be correctly
> run currently using 'python unittest -m test.test_xxx' can now be converted to
> use normal unittest test loading by simply deleting its test_main, thus no
> longer requiring manual maintenance of the list of tests to run.
...
> +   if __name__ == '__main__':
> +       unittest.main()
>
> -   if __name__ == '__main__':
> -       test_main()

Being on Windows, I sometimes run single tests interactively with

from test import test_xxx as t; t.test_main()

Should t.unittest.main(t.__name__) work as well?
Should this always work even if there is still a test_main?

tjr

From roundup-admin at psf.upfronthosting.co.za  Mon Apr  9 21:06:00 2012
From: roundup-admin at psf.upfronthosting.co.za (Python tracker)
Date: Mon, 09 Apr 2012 19:06:00 +0000
Subject: [Python-Dev] Failed issue tracker submission
Message-ID: <20120409190600.911B31CBBE@psf.upfronthosting.co.za>


An unexpected error occurred during the processing
of your message. The tracker administrator is being
notified.
-------------- next part --------------
Return-Path: <python-dev at python.org>
X-Original-To: report at bugs.python.org
Delivered-To: roundup+tracker at psf.upfronthosting.co.za
Received: from mail.python.org (mail.python.org [82.94.164.166])
	by psf.upfronthosting.co.za (Postfix) with ESMTPS id 304CC1CB5A
	for <report at bugs.python.org>; Mon,  9 Apr 2012 21:06:00 +0200 (CEST)
Received: from albatross.python.org (localhost [127.0.0.1])
	by mail.python.org (Postfix) with ESMTP id 3VRLZb6y36zMWS
	for <report at bugs.python.org>; Mon,  9 Apr 2012 21:05:59 +0200 (CEST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=python.org; s=200901;
	t=1333998360; bh=ZhGI0T6Kn0Y8JqnevpbwDXDB5j9UXzZv8e2a2phXX7Q=;
	h=Date:Message-Id:Content-Type:MIME-Version:
	 Content-Transfer-Encoding:From:To:Subject;
	b=v5Unj779GoVXtqcGwg7RMYf7Q4+RyrlY4L7j0WoAqz3nivlgYdXUJwvUrXpyZX3oR
	 D1gmggDFVyrKZRcueBy3gpNYgoNWzBE2BVFDW36BugqNNBINX8fjrwkvDhfyG0V/oy
	 h/8h7FR2fNbyaJyViHuUjGTsVyM9YkTksivNw4qc=
Received: from localhost (HELO mail.python.org) (127.0.0.1)
  by albatross.python.org with SMTP; 09 Apr 2012 21:05:59 +0200
Received: from dinsdale.python.org (svn.python.org [IPv6:2001:888:2000:d::a4])
	(using TLSv1 with cipher AES256-SHA (256/256 bits))
	(No client certificate requested)
	by mail.python.org (Postfix) with ESMTPS
	for <report at bugs.python.org>; Mon,  9 Apr 2012 21:05:59 +0200 (CEST)
Received: from localhost
	([127.0.0.1] helo=dinsdale.python.org ident=hg)
	by dinsdale.python.org with esmtp (Exim 4.72)
	(envelope-from <python-dev at python.org>)
	id 1SHJuZ-00030q-Pl
	for report at bugs.python.org; Mon, 09 Apr 2012 21:05:59 +0200
Date: Mon, 09 Apr 2012 21:05:59 +0200
Message-Id: <E1SHJuZ-00030q-Pl at dinsdale.python.org>
Content-Type: text/plain; charset="utf8"
MIME-Version: 1.0
Content-Transfer-Encoding: base64
From: python-dev at python.org
To: report at bugs.python.org
Subject: [issue14004]

TmV3IGNoYW5nZXNldCBiNWYwY2U0ZGRmMGMgYnkgw4lyaWMgQXJhdWpvIGluIGJyYW5jaCAnMi43
JzoKRml4IGxvbmctc3RhbmRpbmcgYnVncyB3aXRoIE1BTklGRVNULmluIHBhcnNpbmcgb24gV2lu
ZG93cyAoIzY4ODQpLgpodHRwOi8vaGcucHl0aG9uLm9yZy9jcHl0aG9uL3Jldi9iNWYwY2U0ZGRm
MGMK

From roundup-admin at psf.upfronthosting.co.za  Mon Apr  9 21:06:02 2012
From: roundup-admin at psf.upfronthosting.co.za (Python tracker)
Date: Mon, 09 Apr 2012 19:06:02 +0000
Subject: [Python-Dev] Failed issue tracker submission
Message-ID: <20120409190602.917E81C98E@psf.upfronthosting.co.za>


An unexpected error occurred during the processing
of your message. The tracker administrator is being
notified.
-------------- next part --------------
Return-Path: <python-dev at python.org>
X-Original-To: report at bugs.python.org
Delivered-To: roundup+tracker at psf.upfronthosting.co.za
Received: from mail.python.org (mail.python.org [82.94.164.166])
	by psf.upfronthosting.co.za (Postfix) with ESMTPS id 363A41CBBD
	for <report at bugs.python.org>; Mon,  9 Apr 2012 21:06:00 +0200 (CEST)
Received: from albatross.python.org (localhost [127.0.0.1])
	by mail.python.org (Postfix) with ESMTP id 3VRLZb7449zMXv
	for <report at bugs.python.org>; Mon,  9 Apr 2012 21:05:59 +0200 (CEST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=python.org; s=200901;
	t=1333998360; bh=ZhGI0T6Kn0Y8JqnevpbwDXDB5j9UXzZv8e2a2phXX7Q=;
	h=Date:Message-Id:Content-Type:MIME-Version:
	 Content-Transfer-Encoding:From:To:Subject;
	b=s6EE04stXBIIATauhoZuwRzAyMxzVK0IdvCVAtdt1TX7KywoGntHN2Y+9UftEwHHv
	 i3acVM7VBTfqadTFcz16cb5NcrO9C9tgAB9tnoYJxNn1beSYEILe38nXwzVfXu9Tw0
	 8HYgzMq2cbHIKrvRriONhB4BKHjReqx24/bvwCuk=
Received: from localhost (HELO mail.python.org) (127.0.0.1)
  by albatross.python.org with SMTP; 09 Apr 2012 21:05:59 +0200
Received: from dinsdale.python.org (svn.python.org [IPv6:2001:888:2000:d::a4])
	(using TLSv1 with cipher AES256-SHA (256/256 bits))
	(No client certificate requested)
	by mail.python.org (Postfix) with ESMTPS
	for <report at bugs.python.org>; Mon,  9 Apr 2012 21:05:59 +0200 (CEST)
Received: from localhost
	([127.0.0.1] helo=dinsdale.python.org ident=hg)
	by dinsdale.python.org with esmtp (Exim 4.72)
	(envelope-from <python-dev at python.org>)
	id 1SHJuZ-00030q-P4
	for report at bugs.python.org; Mon, 09 Apr 2012 21:05:59 +0200
Date: Mon, 09 Apr 2012 21:05:59 +0200
Message-Id: <E1SHJuZ-00030q-P4 at dinsdale.python.org>
Content-Type: text/plain; charset="utf8"
MIME-Version: 1.0
Content-Transfer-Encoding: base64
From: python-dev at python.org
To: report at bugs.python.org
Subject: [issue13193]

TmV3IGNoYW5nZXNldCBiNWYwY2U0ZGRmMGMgYnkgw4lyaWMgQXJhdWpvIGluIGJyYW5jaCAnMi43
JzoKRml4IGxvbmctc3RhbmRpbmcgYnVncyB3aXRoIE1BTklGRVNULmluIHBhcnNpbmcgb24gV2lu
ZG93cyAoIzY4ODQpLgpodHRwOi8vaGcucHl0aG9uLm9yZy9jcHl0aG9uL3Jldi9iNWYwY2U0ZGRm
MGMK

From anacrolix at gmail.com  Mon Apr  9 21:11:23 2012
From: anacrolix at gmail.com (Matt Joiner)
Date: Tue, 10 Apr 2012 03:11:23 +0800
Subject: [Python-Dev] [Python-checkins] cpython: #14533: if a test has
 no test_main, use loadTestsFromModule.
In-Reply-To: <4F831DA1.5070100@udel.edu>
References: <E1SHEPA-0001ff-Ow@dinsdale.python.org> <4F831DA1.5070100@udel.edu>
Message-ID: <CAB4yi1NB09WrfMhqJd6wkpeziE9fWnuUoyucmo0Mfy7u=Zr66A@mail.gmail.com>

On Apr 10, 2012 2:36 AM, "Terry Reedy" <tjreedy at udel.edu> wrote:
>
>
> On 4/9/2012 9:13 AM, r.david.murray wrote:
>>
>> http://hg.python.org/cpython/rev/eff551437abd
>> changeset:   76176:eff551437abd
>> user:        R David Murray<rdmurray at bitdance.com>
>> date:        Mon Apr 09 08:55:42 2012 -0400
>> summary:
>>   #14533: if a test has no test_main, use loadTestsFromModule.
>>
>> This moves us further in the direction of using normal unittest
>> facilities instead of specialized regrtest ones.  Any test module that
>> can be correctly run currently using 'python unittest -m test.test_xxx'
>> can now be converted to use normal unittest test loading by simply
>> deleting its test_main, thus no longer requiring manual maintenance of
>> the list of tests to run.
>
> ...
>>
>> +   if __name__ == '__main__':
>> +       unittest.main()
>>
>> -   if __name__ == '__main__':
>> -       test_main()
>
>
> Being on Windows, I sometimes run single tests interactively with
>
> from test import test_xxx as t; t.test_main()
>
> Should t.unittest.main(t.__name__) work as well?
> Should this always work even if there is still a test_main?
Both questions have the same answer. Yes, because this is how discovery
works.
>
> tjr
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20120410/94216a78/attachment.html>

From tjreedy at udel.edu  Mon Apr  9 20:54:03 2012
From: tjreedy at udel.edu (Terry Reedy)
Date: Mon, 09 Apr 2012 14:54:03 -0400
Subject: [Python-Dev] [Python-checkins] cpython: Issue #13165:
 stringbench is now available in the Tools/stringbench folder.
In-Reply-To: <E1SHGDy-0005JW-U5@dinsdale.python.org>
References: <E1SHGDy-0005JW-U5@dinsdale.python.org>
Message-ID: <4F83304B.90709@udel.edu>

Some comments...

On 4/9/2012 11:09 AM, antoine.pitrou wrote:
> http://hg.python.org/cpython/rev/704630a9c5d5
> changeset:   76179:704630a9c5d5
> user:        Antoine Pitrou<solipsis at pitrou.net>
> date:        Mon Apr 09 17:03:32 2012 +0200
> summary:
>    Issue #13165: stringbench is now available in the Tools/stringbench folder.
...

> diff --git a/Tools/stringbench/stringbench.py b/Tools/stringbench/stringbench.py
> new file mode 100755
> --- /dev/null
> +++ b/Tools/stringbench/stringbench.py
> @@ -0,0 +1,1483 @@
> +

Did you mean to start with a blank line?

> +# Various microbenchmarks comparing unicode and byte string performance
> +# Please keep this file both 2.x and 3.x compatible!

Which versions of 2.x? In particular

> +dups = {}

> +        dups[f.__name__] = 1

Is the use of a dict for a set a holdover that could be updated, or 
intentional for back compatibility with 2.whatever and before?

> +# Try with regex
> + at uses_re
> + at bench('s="ABC"*33; re.compile(s+"D").search((s+"D")*300+s+"E")',
> +       "late match, 100 characters", 100)
> +def re_test_slow_match_100_characters(STR):
> +    m = STR("ABC"*33)
> +    d = STR("D")
> +    e = STR("E")
> +    s1 = (m+d)*300 + m+e
> +    s2 = m+e
> +    pat = re.compile(s2)
> +    search = pat.search
> +    for x in _RANGE_100:
> +        search(s1)

If the regex module is added to the stdlib as something other than a
drop-in re replacement, we might want an option to use it instead of,
or in addition to, the current re.

> +#### Benchmark join
> +
> +def get_bytes_yielding_seq(STR, arg):
> +    if STR is BYTES and sys.version_info>= (3,):
> +        raise UnsupportedType
> +    return STR(arg)

> + at bench('"A".join("")',
> +       "join empty string, with 1 character sep", 100)

I am puzzled by this. Does str.join(iterable) internally branch on
whether the iterable is a str or not, so that these timings might be
different from equivalent timings with a list of strings?

What might be interesting, especially for 3.3, is timing with non-ASCII
BMP and non-BMP chars, both as the joiner and as the joined strings.
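
A quick, unscientific way to check whether joining a str and joining an
equivalent list of strings actually time differently (the exact numbers
don't matter, only whether the two cases diverge):

    import timeit

    setup = 's = "A" * 100; seq = list(s)'
    # Joining the characters of a str directly vs. joining an equivalent
    # list of one-character strings.
    print(timeit.timeit('"-".join(s)', setup=setup))
    print(timeit.timeit('"-".join(seq)', setup=setup))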


tjr

From rdmurray at bitdance.com  Mon Apr  9 21:57:37 2012
From: rdmurray at bitdance.com (R. David Murray)
Date: Mon, 09 Apr 2012 15:57:37 -0400
Subject: [Python-Dev] [Python-checkins] cpython: #14533: if a test has
	no test_main, use loadTestsFromModule.
In-Reply-To: <4F831DA1.5070100@udel.edu>
References: <E1SHEPA-0001ff-Ow@dinsdale.python.org> <4F831DA1.5070100@udel.edu>
Message-ID: <20120409195652.E54A42500E9@webabinitio.net>

On Mon, 09 Apr 2012 13:34:25 -0400, Terry Reedy <tjreedy at udel.edu> wrote:
> 
> On 4/9/2012 9:13 AM, r.david.murray wrote:
> > http://hg.python.org/cpython/rev/eff551437abd
> > changeset:   76176:eff551437abd
> > user:        R David Murray<rdmurray at bitdance.com>
> > date:        Mon Apr 09 08:55:42 2012 -0400
> > summary:
> >    #14533: if a test has no test_main, use loadTestsFromModule.
> >
> > This moves us further in the direction of using normal unittest facilities
> > instead of specialized regrtest ones.  Any test module that can be correctly
> > run currently using 'python unittest -m test.test_xxx' can now be converted to
> > use normal unittest test loading by simply deleting its test_main, thus no
> > longer requiring manual maintenance of the list of tests to run.
> ...
> > +   if __name__ == '__main__':
> > +       unittest.main()
> >
> > -   if __name__ == '__main__':
> > -       test_main()
> 
> Being on Windows, I sometimes run single tests interactively with
> 
> from test import test_xxx as t; t.test_main()
> 
> Should t.unittest.main(t.__name__) work as well?

That will work.

t.unittest.main(t) will also work and is less typing.

> Should this always work even if there is still a test_main?

It will work if and only if the test can be run correctly via './python
-m unittest test.test_xxx'.  Not all test files in Lib/test can be run that
way (though I at least am open to fixing ones that don't work).
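
For the interactive case, a sketch of the plain-unittest equivalent
(test_xxx stands in for a real test module name):

    import unittest
    from test import test_xxx as t

    # Same tests as './python -m unittest test.test_xxx', run in-process:
    unittest.main(module=t, exit=False, verbosity=2)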

--David

From brian at python.org  Mon Apr  9 22:05:58 2012
From: brian at python.org (Brian Curtin)
Date: Mon, 9 Apr 2012 15:05:58 -0500
Subject: [Python-Dev] Upgrading tcl/tk deps
Message-ID: <CAD+XWwrYfSNiwgHYPDATXf84js4+3C2a7eMARWqzYD893gkTeQ@mail.gmail.com>

Can someone let me in on the process to upgrade tcl and tk on
svn.python.org? For the VS2010 port it looks like I need to upgrade
since the 8.5.9 versions do not work. They use link options that choke
on 2010. Taking 8.5.11, which is the current release, seems to work
out alright so far.

It seems as easy as downloading the tarball and checking that in. Am I
missing any official process here?

From solipsis at pitrou.net  Mon Apr  9 22:03:01 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Mon, 9 Apr 2012 22:03:01 +0200
Subject: [Python-Dev] [Python-checkins] cpython: Issue #13165:
 stringbench is now available in the Tools/stringbench folder.
References: <E1SHGDy-0005JW-U5@dinsdale.python.org> <4F83304B.90709@udel.edu>
Message-ID: <20120409220301.617be908@pitrou.net>

On Mon, 09 Apr 2012 14:54:03 -0400
Terry Reedy <tjreedy at udel.edu> wrote:
> 
> > diff --git a/Tools/stringbench/stringbench.py b/Tools/stringbench/stringbench.py
> > new file mode 100755
> > --- /dev/null
> > +++ b/Tools/stringbench/stringbench.py
> > @@ -0,0 +1,1483 @@
> > +
> 
> Did you mean to start with a blank line?

This is just a copy of the original file. I did not make any
modifications to it.

Regards

Antoine.





From jimjjewett at gmail.com  Mon Apr  9 22:44:58 2012
From: jimjjewett at gmail.com (Jim Jewett)
Date: Mon, 9 Apr 2012 16:44:58 -0400
Subject: [Python-Dev] Who are the decimal volunteers? Re: [Python-checkins]
 cpython: Resize the coefficient to MPD_MINALLOC also if the requested size
 is below
Message-ID: <CA+OGgf4FCZ57H_jxhTNfmK2SZ_GJgNxEsBwxOJXAjK1RAohFzQ@mail.gmail.com>

I remember that one of the concerns with cdecimal was whether it could
be maintained by anyone except Stefan (and a few people who were
already overcommitted).

If anyone (including absolute newbies) wants to step up, now would be
a good time to get involved.

A few starter questions, whose answer it would be good to document:

Why is there any need for MPD_MINALLOC at all for (immutable) numbers?

I suspect that will involve fleshing out some of the memory management
issues around dynamic decimals, as touched on here:
http://www.bytereef.org/mpdecimal/doc/libmpdec/memory.html#static-and-dynamic-decimals

On Mon, Apr 9, 2012 at 3:33 PM, stefan.krah <python-checkins at python.org> wrote:
> http://hg.python.org/cpython/rev/170bdc5c798b
> changeset:   76197:170bdc5c798b
> parent:      76184:02ecb8261cd8
> user:        Stefan Krah <skrah at bytereef.org>
> date:        Mon Apr 09 20:47:57 2012 +0200
> summary:
>   Resize the coefficient to MPD_MINALLOC also if the requested size is below
> MPD_MINALLOC. Previously the resize was skipped as a micro optimization.
>
> files:
>   Modules/_decimal/libmpdec/mpdecimal.c |  36 ++++++++------
>   1 files changed, 20 insertions(+), 16 deletions(-)
>
>
> diff --git a/Modules/_decimal/libmpdec/mpdecimal.c b/Modules/_decimal/libmpdec/mpdecimal.c
> --- a/Modules/_decimal/libmpdec/mpdecimal.c
> +++ b/Modules/_decimal/libmpdec/mpdecimal.c
> @@ -480,17 +480,20 @@
>  {
>      assert(!mpd_isconst_data(result)); /* illegal operation for a const */
>      assert(!mpd_isshared_data(result)); /* illegal operation for a shared */
> -
> +    assert(MPD_MINALLOC <= result->alloc);
> +
> +    nwords = (nwords <= MPD_MINALLOC) ? MPD_MINALLOC : nwords;
> +    if (nwords == result->alloc) {
> +        return 1;
> +    }
>      if (mpd_isstatic_data(result)) {
>          if (nwords > result->alloc) {
>              return mpd_switch_to_dyn(result, nwords, status);
>          }
> -    }
> -    else if (nwords != result->alloc && nwords >= MPD_MINALLOC) {
> -        return mpd_realloc_dyn(result, nwords, status);
> -    }
> -
> -    return 1;
> +        return 1;
> +    }
> +
> +    return mpd_realloc_dyn(result, nwords, status);
>  }
>
>  /* Same as mpd_qresize, but the complete coefficient (including the old
> @@ -500,20 +503,21 @@
>  {
>      assert(!mpd_isconst_data(result)); /* illegal operation for a const */
>      assert(!mpd_isshared_data(result)); /* illegal operation for a shared */
> -
> -    if (mpd_isstatic_data(result)) {
> -        if (nwords > result->alloc) {
> -            return mpd_switch_to_dyn_zero(result, nwords, status);
> -        }
> -    }
> -    else if (nwords != result->alloc && nwords >= MPD_MINALLOC) {
> -        if (!mpd_realloc_dyn(result, nwords, status)) {
> +    assert(MPD_MINALLOC <= result->alloc);
> +
> +    nwords = (nwords <= MPD_MINALLOC) ? MPD_MINALLOC : nwords;
> +    if (nwords != result->alloc) {
> +        if (mpd_isstatic_data(result)) {
> +            if (nwords > result->alloc) {
> +                return mpd_switch_to_dyn_zero(result, nwords, status);
> +            }
> +        }
> +        else if (!mpd_realloc_dyn(result, nwords, status)) {
>              return 0;
>          }
>      }
>
>      mpd_uint_zero(result->data, nwords);
> -
>      return 1;
>  }
>
>
> --
> Repository URL: http://hg.python.org/cpython
>
> _______________________________________________
> Python-checkins mailing list
> Python-checkins at python.org
> http://mail.python.org/mailman/listinfo/python-checkins
>

From martin at v.loewis.de  Mon Apr  9 23:49:10 2012
From: martin at v.loewis.de (martin at v.loewis.de)
Date: Mon, 09 Apr 2012 23:49:10 +0200
Subject: [Python-Dev] Upgrading tcl/tk deps
In-Reply-To: <CAD+XWwrYfSNiwgHYPDATXf84js4+3C2a7eMARWqzYD893gkTeQ@mail.gmail.com>
References: <CAD+XWwrYfSNiwgHYPDATXf84js4+3C2a7eMARWqzYD893gkTeQ@mail.gmail.com>
Message-ID: <20120409234910.Horde.HOGZRNjz9kRPg1lWosc0ykA@webmail.df.eu>


Zitat von Brian Curtin <brian at python.org>:

> Can someone let me in on the process to upgrade tcl and tk on
> svn.python.org? For the VS2010 port it looks like I need to upgrade
> since the 8.5.9 versions do not work. They use link options that choke
> on 2010. Taking 8.5.11, which is the current release, seems to work
> out alright so far.
>
> It seems as easy as downloading the tarball and checking that in. Am I
> missing any official process here?

Yes. There is a set of changes that you need to preserve. Tk *never*
works with any recent VC compilers, so even if you use a new version, you
still likely have to adjust the sources and the build process. Also, make
sure Tix works.

So there are two options:
a) adjust the existing sources to work with the new compiler. To do so,
    modify tk-8.5.9.x (or whatever we currently use), then tag your
    modifications as tk-8.5.9.<next> (would be .1 AFAICT), then update
    Tools/buildbot and PCbuild/readme.txt to refer to these.
b) import new sources into tk-8.X.Y.x, then go through the changes in
    tk-8.5.9.x, and port over what is still needed. Again, tag your
    imported tree so that the Python tree refers to the tag, allowing
    for modifications to Tk should they be necessary.

Switching to the most recent Tk release is a good idea, anyway, so b) is
preferred.

Regards,
Martin


From stefan at bytereef.org  Tue Apr 10 00:06:23 2012
From: stefan at bytereef.org (Stefan Krah)
Date: Tue, 10 Apr 2012 00:06:23 +0200
Subject: [Python-Dev] [Python-checkins] Who are the decimal volunteers?
	Re: cpython:	Resize the coefficient to MPD_MINALLOC also if
	the requested size	is below
In-Reply-To: <CA+OGgf4FCZ57H_jxhTNfmK2SZ_GJgNxEsBwxOJXAjK1RAohFzQ@mail.gmail.com>
References: <CA+OGgf4FCZ57H_jxhTNfmK2SZ_GJgNxEsBwxOJXAjK1RAohFzQ@mail.gmail.com>
Message-ID: <20120409220623.GA13148@sleipnir.bytereef.org>

Jim Jewett <jimjjewett at gmail.com> wrote:
> Why is there any need for MPD_MINALLOC at all for (immutable) numbers?
> 
> I suspect that will involve fleshing out some of the memory management
> issues around dynamic decimals, as touched on here:
> http://www.bytereef.org/mpdecimal/doc/libmpdec/memory.html#static-and-dynamic-decimals

MPD_MINALLOC
------------

"In order to avoid frequent resizing operations, the global variable
 MPD_MINALLOC guarantees a minimum amount of allocated words for the
 coefficient of each mpd_t. [...]" 


So the rationale is to avoid resizing operations. The mpd_t data type
is not immutable -- I suspect no high speed library for arbitrary
precision arithmetic has an immutable data type.

PyDecObjects are immutable, but they have to be initialized at
some point. The mpd_t struct is part of a PyDecObject and is
in the position of the result operand during initialization.


All operations in _decimal.c follow the same scheme:

  /* dec contains an mpd_t with MPD_MINALLOC words. */
  dec = dec_alloc();

  /* Initialization by a libmpdec function. MPD() is the
     accessor macro for the mpd_t. */
  mpd_func(MPD(dec), x, y, ...);

  /* From here on dec is immutable */



Stefan Krah



From cs at zip.com.au  Tue Apr 10 00:26:50 2012
From: cs at zip.com.au (Cameron Simpson)
Date: Tue, 10 Apr 2012 08:26:50 +1000
Subject: [Python-Dev] this is why we shouldn't call it a "monotonic
 clock" (was: PEP 418 is too divisive and confusing and should be postponed)
In-Reply-To: <CAMpsgwZj-AgZu95q4aJRbfS5Rjbyo1E3NqRjDpVV3RRCnzLTCQ@mail.gmail.com>
References: <CAMpsgwZj-AgZu95q4aJRbfS5Rjbyo1E3NqRjDpVV3RRCnzLTCQ@mail.gmail.com>
Message-ID: <20120409222650.GA14651@cskk.homeip.net>

On 09Apr2012 13:26, Victor Stinner <victor.stinner at gmail.com> wrote:
| > | On Windows, GetProcessTimes() does not have a "high resolution": it has an
| > | accuracy of 1 ms in the best case.
| >
| > This page:
| >   http://msdn.microsoft.com/en-us/library/windows/desktop/ms683223%28v=vs.85%29.aspx
| > says "100-nanosecond time units".
| >
| > Am I going to the wrong place to learn about these functions?
| 
| Yes, the resolution is 100 ns, but the accuracy is only 1 ms in the
| best case (but it is usually 15 ms or 10 ms).

I understand the difference, but I can't see mention of the accuracy on
the cited page, hence my question as to whether I'm looking in the right
place. I need to mark up clocks with their accuracy (I've got their
resolution:-)

| Resolution != accuracy, and only accuracy matters :-)
| http://www.python.org/dev/peps/pep-0418/#resolution

I agree. But finding the accuracy seems harder than one would like.
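
Accuracy really does need an external reference to measure against, but the
effective precision (granularity) of a clock can at least be estimated
empirically. A rough sketch, just repeatedly sampling a clock and recording
the smallest nonzero step it reports:

    import time

    def estimate_precision(clock, samples=100000):
        """Smallest nonzero difference seen between successive reads."""
        smallest = None
        previous = clock()
        for _ in range(samples):
            current = clock()
            delta = current - previous
            if delta > 0 and (smallest is None or delta < smallest):
                smallest = delta
            previous = current
        return smallest

    print(estimate_precision(time.time))
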
-- 
Cameron Simpson <cs at zip.com.au> DoD#743
http://www.cskk.ezoshosting.com/cs/

Thomas R. Collins<brimisha at ix.netcom.com> wrote
> This is NOT alt.peeves, as I previously suspected, but
>alt.talk-about-what-you-want-but-sooner-or-later-you'll-get-flamed.

alt.peeves "as you suspected" doesn't exist and never has. The _real_
alt.peeves is, and for at least the past six years has been, the
literate and flamminiferous counterpart of alt.flame and the refined
and brutal alternative to alt.tasteless.
        - Charlie Stross <charlie at antipope.org>, educating a newbie

From victor.stinner at gmail.com  Tue Apr 10 01:33:49 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Tue, 10 Apr 2012 01:33:49 +0200
Subject: [Python-Dev] this is why we shouldn't call it a "monotonic
 clock" (was: PEP 418 is too divisive and confusing and should be postponed)
In-Reply-To: <CAP7+vJ+Ta8fZS6V5t-+3L_j0Zjaq0p92Jwm3RR5CbTV0PN+DXw@mail.gmail.com>
References: <CAP7+vJKRPn3a4xmRxBmV7G9gBPQJOx-8Me994LXeXTBhRwPfFw@mail.gmail.com>
	<20120408040013.GA16581@cskk.homeip.net>
	<20120408124227.78ccab01@pitrou.net>
	<CAP7+vJ+VB_rNmWdNA2RDsz0KGrvGMO-N8fgfNGdYesFd3x6xPw@mail.gmail.com>
	<CAMpsgwb_M06tbeteMygemvG2Bk3R0TYt-8zeqLWvyJ0m2S0_OQ@mail.gmail.com>
	<CAP7+vJ+Ta8fZS6V5t-+3L_j0Zjaq0p92Jwm3RR5CbTV0PN+DXw@mail.gmail.com>
Message-ID: <CAMpsgwageVXVyCpqczdwTERtLhUhitRhWsTv41ZEx4HpwuO7uw@mail.gmail.com>

>> sleep() is implemented in the kernel. The kernel is notified when a
>> clock is set, and so can choose how to handle the time adjustment. Most
>> "sleeping" functions use the system clock but are not affected by clock
>> adjustments.
>
> We're going around in circles. I'm not asking what sleep does, I want
> on principle a timer that does the same thing as sleep(), regardless
> of how sleep() works. So if on some OS sleep() uses the same algorithm
> as CLOCK_MONOTONIC_RAW, I want my timer to use that too. But if on
> some other OS sleep() uses CLOCK_MONOTONIC, I want my timer there to
> use that. And if on some OS sleep() is buggy and uses the time-of-day
> clock, well, I wouldn't mind if my timer used the same thing.

sleep() takes a number of seconds as its argument, so CLOCK_MONOTONIC
should be used, not CLOCK_MONOTONIC_RAW. If I understood correctly,
CLOCK_MONOTONIC ticks in (NTP-corrected) seconds, whereas a
CLOCK_MONOTONIC_RAW "second" may run slightly faster or slower than a
real second.
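
A rough illustration of the difference, assuming a Linux system where the
time.clock_gettime() interface and the CLOCK_MONOTONIC* constants proposed
for 3.3 are available:

    import time

    t0 = time.clock_gettime(time.CLOCK_MONOTONIC)
    r0 = time.clock_gettime(time.CLOCK_MONOTONIC_RAW)
    time.sleep(5.0)
    t1 = time.clock_gettime(time.CLOCK_MONOTONIC)
    r1 = time.clock_gettime(time.CLOCK_MONOTONIC_RAW)

    # CLOCK_MONOTONIC is slewed by NTP toward true seconds, while
    # CLOCK_MONOTONIC_RAW ticks at the raw hardware rate, so the two
    # deltas can differ slightly.
    print("CLOCK_MONOTONIC delta:    ", t1 - t0)
    print("CLOCK_MONOTONIC_RAW delta:", r1 - r0)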

It looks like CLOCK_MONOTONIC_RAW was added to support writing an NTP
server in user space. Here is an extract from the mail whose patch added
CLOCK_MONOTONIC_RAW to the Linux kernel:
"In talking with Josip Loncaric, and his work on clock synchronization
(see btime.sf.net), he mentioned that for really close synchronization,
it is useful to have access to "hardware time", that is a notion of time
that is not in any way adjusted by the clock slewing done to keep close
time sync.

Part of the issue is if we are using the kernel's ntp adjusted
representation of time in order to measure how we should correct time,
we can run into what Paul McKenney aptly described as "Painting a road
using the lines we're painting as the guide".

I had been thinking of a similar problem, and was trying to come up with
a way to give users access to a purely hardware based time
representation that avoided users having to know the underlying
frequency and mask values needed to deal with the wide variety of
possible underlying hardware counters."
https://lkml.org/lkml/2008/3/19/260

Victor

From tjreedy at udel.edu  Tue Apr 10 01:38:36 2012
From: tjreedy at udel.edu (Terry Reedy)
Date: Mon, 09 Apr 2012 19:38:36 -0400
Subject: [Python-Dev] [Python-checkins] cpython: #14533: if a test has
 no test_main, use loadTestsFromModule.
In-Reply-To: <20120409195652.E54A42500E9@webabinitio.net>
References: <E1SHEPA-0001ff-Ow@dinsdale.python.org> <4F831DA1.5070100@udel.edu>
	<20120409195652.E54A42500E9@webabinitio.net>
Message-ID: <jlvrtu$rjl$1@dough.gmane.org>

On 4/9/2012 3:57 PM, R. David Murray wrote:
> On Mon, 09 Apr 2012 13:34:25 -0400, Terry Reedy<tjreedy at udel.edu>  wrote:

>> Should t.unittest.main(t.__name__) work as well?
>
> That will work.
>
> t.unittest.main(t) will also work and is less typing.

Good. The only doc for the parameter is "unittest.main(module='__main__',"
with no indication other than the name 'module' that either a module
object or a name is accepted (as with interfaces that take a file object
or a file name).
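
A minimal sketch of the two call forms, using test.test_os purely as an
arbitrary example module:

    import unittest
    import test.test_os as t

    # unittest.main() accepts either the module object or its dotted name:
    unittest.main(t)              # module object, less typing
    # unittest.main(t.__name__)   # equivalent, passing the name as a string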

>> Should this always work even if there is still a test_main?
>
> It will work if and only if the test can be run correctly via './python
> -m unittest test.test_xxx'.  Not all test files in Lib/test can be run that
> way (though I at least am open to fixing ones that don't work).

Having one way to run each of them again would be nice. I will open an
issue if I find any laggards.

-- 
Terry Jan Reedy


From greg at krypto.org  Tue Apr 10 01:42:35 2012
From: greg at krypto.org (Gregory P. Smith)
Date: Mon, 9 Apr 2012 16:42:35 -0700
Subject: [Python-Dev] PEP 418 is too divisive and confusing and should
	be postponed
In-Reply-To: <87201503-000C-4258-A040-D9223EDE8188@twistedmatrix.com>
References: <4F7B96F1.6020906@pearwood.info> <4F7BA3C2.4050705@gmail.com>
	<CAMpsgwbZKgEzbZmsVBeM1vNfjRUxEdGH=FAybvf3HgNOsRsnRA@mail.gmail.com>
	<CAL0kPAVuPoYEo=hqTaryi1YNqhpF-8ngJAGrBwaQ4N3Y-M34dA@mail.gmail.com>
	<4F7CCF1D.2010600@canterbury.ac.nz>
	<CAL0kPAU0Zp5YwH3J+9KKqQ2r7QZo15o=VrqvViaVsNx7j4kQDw@mail.gmail.com>
	<CAMpsgwaPtm_M3wVmpqGUwmBEmbT8V=qFs1GU0-Fak=+Ws6JjYQ@mail.gmail.com>
	<jlo136$soq$1@dough.gmane.org>
	<CAMpsgwYLMMpOkXyKmVQsbbxRX5ZGHHWKe6v0trq-SR2AS4Oqdw@mail.gmail.com>
	<4F8019AE.1050305@pearwood.info>
	<87201503-000C-4258-A040-D9223EDE8188@twistedmatrix.com>
Message-ID: <CAGE7PNKy=n61_r1PVVixiQ2-uDMtk1=h2kOf7BBKRWEfhnzB9Q@mail.gmail.com>

On Sat, Apr 7, 2012 at 4:56 PM, Glyph Lefkowitz <glyph at twistedmatrix.com> wrote:

> On Apr 7, 2012, at 3:40 AM, Steven D'Aprano wrote:
>
> In any case, NTP is not the only thing that adjusts the clock, e.g. the
> operating system will adjust the time for daylight savings.
>
>
> Daylight savings time is not a clock adjustment, at least not in the sense
> this thread has mostly been talking about the word "clock".  It doesn't
> affect the "seconds from epoch" measurement, it affects the way in which
> the clock is formatted to the user.
>
> -glyph
>

even on windows where the system hardware clock is maintained in local time?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20120409/0fc9e8ad/attachment.html>

From tjreedy at udel.edu  Tue Apr 10 01:41:58 2012
From: tjreedy at udel.edu (Terry Reedy)
Date: Mon, 09 Apr 2012 19:41:58 -0400
Subject: [Python-Dev] Upgrading tcl/tk deps
In-Reply-To: <20120409234910.Horde.HOGZRNjz9kRPg1lWosc0ykA@webmail.df.eu>
References: <CAD+XWwrYfSNiwgHYPDATXf84js4+3C2a7eMARWqzYD893gkTeQ@mail.gmail.com>
	<20120409234910.Horde.HOGZRNjz9kRPg1lWosc0ykA@webmail.df.eu>
Message-ID: <jlvs48$rjl$2@dough.gmane.org>

On 4/9/2012 5:49 PM, martin at v.loewis.de wrote:
>
> Zitat von Brian Curtin <brian at python.org>:
>
>> Can someone let me in on the process to upgrade tcl and tk on
>> svn.python.org? For the VS2010 port it looks like I need to upgrade
>> since the 8.5.9 versions do not work. They use link options that choke
>> on 2010. Taking 8.5.11, which is the current release, seems to work
>> out alright so far.
>>
>> It seems as easy as downloading the tarball and checking that in. Am I
>> missing any official process here?
>
> Yes. There is a set of changes that you need to preserve. Tk *never*
> works with any recent VC compilers, so even if you use a new version, you
> still likely have to adjust the sources and the build process. Also, make
> sure Tix works.
>
> So there are two options:
> a) adjust the existing sources to work with the new compiler. To do so,
> modify tk-8.5.9.x (or whatever we currently use), then tag your
> modifications
> as tk-8.5.9.<next> (would be .1 AFAICT), then update Tools/buildbot and
> PCbuild/readme.txt to refer to these.
> b) import new sources into tk-8.X.Y.x, then go through the changes in
> tk-8.5.9.x,
> and port over what is still needed. Again, tag your imported tree so that
> the Python tree refers to the tag, allowing for modifications to Tk
> should they be necessary.
>
> Switching to the most recent Tk release is a good idea, anyway, so b) is
> preferred.

In particular, it should include a recent fix so that French keyboards 
work with tk/tkinter and hence Idle better than now. There has been more 
than one complaint about this.

-- 
Terry Jan Reedy


From guido at python.org  Tue Apr 10 01:46:12 2012
From: guido at python.org (Guido van Rossum)
Date: Mon, 9 Apr 2012 16:46:12 -0700
Subject: [Python-Dev] PEP 418 is too divisive and confusing and should
	be postponed
In-Reply-To: <CAGE7PNKy=n61_r1PVVixiQ2-uDMtk1=h2kOf7BBKRWEfhnzB9Q@mail.gmail.com>
References: <4F7B96F1.6020906@pearwood.info> <4F7BA3C2.4050705@gmail.com>
	<CAMpsgwbZKgEzbZmsVBeM1vNfjRUxEdGH=FAybvf3HgNOsRsnRA@mail.gmail.com>
	<CAL0kPAVuPoYEo=hqTaryi1YNqhpF-8ngJAGrBwaQ4N3Y-M34dA@mail.gmail.com>
	<4F7CCF1D.2010600@canterbury.ac.nz>
	<CAL0kPAU0Zp5YwH3J+9KKqQ2r7QZo15o=VrqvViaVsNx7j4kQDw@mail.gmail.com>
	<CAMpsgwaPtm_M3wVmpqGUwmBEmbT8V=qFs1GU0-Fak=+Ws6JjYQ@mail.gmail.com>
	<jlo136$soq$1@dough.gmane.org>
	<CAMpsgwYLMMpOkXyKmVQsbbxRX5ZGHHWKe6v0trq-SR2AS4Oqdw@mail.gmail.com>
	<4F8019AE.1050305@pearwood.info>
	<87201503-000C-4258-A040-D9223EDE8188@twistedmatrix.com>
	<CAGE7PNKy=n61_r1PVVixiQ2-uDMtk1=h2kOf7BBKRWEfhnzB9Q@mail.gmail.com>
Message-ID: <CAP7+vJLAtq=OiCudpZh2MoxYi_NmBipRgSA4NE99BNeYV-2EnA@mail.gmail.com>

Is it still? I thought they fixed that ages ago?

On Mon, Apr 9, 2012 at 4:42 PM, Gregory P. Smith <greg at krypto.org> wrote:
>
> On Sat, Apr 7, 2012 at 4:56 PM, Glyph Lefkowitz <glyph at twistedmatrix.com>
> wrote:
>>
>> On Apr 7, 2012, at 3:40 AM, Steven D'Aprano wrote:
>>
>> In any case, NTP is not the only thing that adjusts the clock, e.g. the
>> operating system will adjust the time for daylight savings.
>>
>>
>> Daylight savings time is not a clock adjustment, at least not in the sense
>> this thread has mostly been talking about the word "clock".  It doesn't
>> affect the "seconds from epoch" measurement, it affects the way in which the
>> clock is formatted to the user.
>>
>> -glyph
>
>
> even on windows where the system hardware clock is maintained in local time?
>
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/guido%40python.org
>



-- 
--Guido van Rossum (python.org/~guido)

From brian at python.org  Tue Apr 10 01:53:02 2012
From: brian at python.org (Brian Curtin)
Date: Mon, 9 Apr 2012 18:53:02 -0500
Subject: [Python-Dev] Upgrading tcl/tk deps
In-Reply-To: <jlvs48$rjl$2@dough.gmane.org>
References: <CAD+XWwrYfSNiwgHYPDATXf84js4+3C2a7eMARWqzYD893gkTeQ@mail.gmail.com>
	<20120409234910.Horde.HOGZRNjz9kRPg1lWosc0ykA@webmail.df.eu>
	<jlvs48$rjl$2@dough.gmane.org>
Message-ID: <CAD+XWwoin=MMaMB4SRnRd1X_JTrcdN6h9UUOh5Dr3jvD02x7DQ@mail.gmail.com>

On Mon, Apr 9, 2012 at 18:41, Terry Reedy <tjreedy at udel.edu> wrote:
> In particular, it should include a recent fix so that French keyboards work
> with tk/tkinter and hence Idle better than now. There has been more than one
> complaint about this.

Do you know when this was fixed or have any information about it? Tcl
and Tk 8.5.11 were released Nov 4, 2011. If it was fixed after that I
can look into patching our copy of whatever projects are affected.

From greg at krypto.org  Tue Apr 10 02:45:10 2012
From: greg at krypto.org (Gregory P. Smith)
Date: Mon, 9 Apr 2012 17:45:10 -0700
Subject: [Python-Dev] PEP 418 is too divisive and confusing and should
	be postponed
In-Reply-To: <CAP7+vJLAtq=OiCudpZh2MoxYi_NmBipRgSA4NE99BNeYV-2EnA@mail.gmail.com>
References: <4F7B96F1.6020906@pearwood.info> <4F7BA3C2.4050705@gmail.com>
	<CAMpsgwbZKgEzbZmsVBeM1vNfjRUxEdGH=FAybvf3HgNOsRsnRA@mail.gmail.com>
	<CAL0kPAVuPoYEo=hqTaryi1YNqhpF-8ngJAGrBwaQ4N3Y-M34dA@mail.gmail.com>
	<4F7CCF1D.2010600@canterbury.ac.nz>
	<CAL0kPAU0Zp5YwH3J+9KKqQ2r7QZo15o=VrqvViaVsNx7j4kQDw@mail.gmail.com>
	<CAMpsgwaPtm_M3wVmpqGUwmBEmbT8V=qFs1GU0-Fak=+Ws6JjYQ@mail.gmail.com>
	<jlo136$soq$1@dough.gmane.org>
	<CAMpsgwYLMMpOkXyKmVQsbbxRX5ZGHHWKe6v0trq-SR2AS4Oqdw@mail.gmail.com>
	<4F8019AE.1050305@pearwood.info>
	<87201503-000C-4258-A040-D9223EDE8188@twistedmatrix.com>
	<CAGE7PNKy=n61_r1PVVixiQ2-uDMtk1=h2kOf7BBKRWEfhnzB9Q@mail.gmail.com>
	<CAP7+vJLAtq=OiCudpZh2MoxYi_NmBipRgSA4NE99BNeYV-2EnA@mail.gmail.com>
Message-ID: <CAGE7PNJCuCJa9ZsDJMPMQkR4emdErHGPA5Zw+qGwqv7G-Gk-4w@mail.gmail.com>

On Mon, Apr 9, 2012 at 4:46 PM, Guido van Rossum <guido at python.org> wrote:

> Is it still? I thought they fixed that ages ago?
>

sadly, no.  http://www.cl.cam.ac.uk/~mgk25/mswish/ut-rtc.html

On Mon, Apr 9, 2012 at 4:42 PM, Gregory P. Smith <greg at krypto.org> wrote:
> >
> > On Sat, Apr 7, 2012 at 4:56 PM, Glyph Lefkowitz <glyph at twistedmatrix.com
> >
> > wrote:
> >>
> >> On Apr 7, 2012, at 3:40 AM, Steven D'Aprano wrote:
> >>
> >> In any case, NTP is not the only thing that adjusts the clock, e.g. the
> >> operating system will adjust the time for daylight savings.
> >>
> >>
> >> Daylight savings time is not a clock adjustment, at least not in the
> sense
> >> this thread has mostly been talking about the word "clock".  It doesn't
> >> affect the "seconds from epoch" measurement, it affects the way in
> which the
> >> clock is formatted to the user.
> >>
> >> -glyph
> >
> >
> > even on windows where the system hardware clock is maintained in local
> time?
> >
> >
> > _______________________________________________
> > Python-Dev mailing list
> > Python-Dev at python.org
> > http://mail.python.org/mailman/listinfo/python-dev
> > Unsubscribe:
> > http://mail.python.org/mailman/options/python-dev/guido%40python.org
> >
>
>
>
> --
> --Guido van Rossum (python.org/~guido)
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20120409/5fba1063/attachment.html>

From tjreedy at udel.edu  Tue Apr 10 03:53:26 2012
From: tjreedy at udel.edu (Terry Reedy)
Date: Mon, 09 Apr 2012 21:53:26 -0400
Subject: [Python-Dev] Upgrading tcl/tk deps
In-Reply-To: <CAD+XWwoin=MMaMB4SRnRd1X_JTrcdN6h9UUOh5Dr3jvD02x7DQ@mail.gmail.com>
References: <CAD+XWwrYfSNiwgHYPDATXf84js4+3C2a7eMARWqzYD893gkTeQ@mail.gmail.com>
	<20120409234910.Horde.HOGZRNjz9kRPg1lWosc0ykA@webmail.df.eu>
	<jlvs48$rjl$2@dough.gmane.org>
	<CAD+XWwoin=MMaMB4SRnRd1X_JTrcdN6h9UUOh5Dr3jvD02x7DQ@mail.gmail.com>
Message-ID: <4F839296.6060601@udel.edu>

On 4/9/2012 7:53 PM, Brian Curtin wrote:
> On Mon, Apr 9, 2012 at 18:41, Terry Reedy<tjreedy at udel.edu>  wrote:
>> In particular, it should include a recent fix so that French keyboards work
>> with tk/tkinter and hence Idle better than now. There has been more than one
>> complaint about this.
>
> Do you know when this was fixed or have any information about it? Tcl
> and Tk 8.5.11 were released Nov 4, 2011. If it was fixed after that I
> can look into patching our copy of whatever projects are affected.

The patch is specifically for tkMacOS, 29/1/12
http://core.tcl.tk/tk/info/9844fe10b9

so it apparently does not affect Windows or what we include with Windows 
build. But it was a show stopper for some French Mac users, including 
one professor who wanted to use Python for an undergraduate course.

On Mar 4, Ned Deily wrote on the idle-sig list:

Update: The fix has now been released in the latest ActiveState Tcl 8.5
for Mac OS X release (8.5.11.1) available here:

     http://www.activestate.com/activetcl/downloads

It appears to fix the French keyboard tilde problem and other similar
problems with composite characters, like Option-U + vowel to form
"umlauted" vowels in the U.S. input method.  Many thanks to Adrian
Robert, Kevin Walzer, and the ActiveState team for addressing this nasty 
problem.

If you install ActiveState Tcl 8.5.x, it will automatically be used by
the python.org 2.7.x, 3.2.x, and 3.3.x 64-bit/32-bit Pythons for OS X
10.6 and 10.7.  It will *not* be used by the Apple-supplied system
Pythons or by 32-bit-only python.org Pythons.   More details here:

     http://www.python.org/download/mac/tcltk/
===

So the latest A.S. Windows release should be fine as the base for our 
Windows release.

Terry

From brian at python.org  Tue Apr 10 04:13:50 2012
From: brian at python.org (Brian Curtin)
Date: Mon, 9 Apr 2012 21:13:50 -0500
Subject: [Python-Dev] Upgrading tcl/tk deps
In-Reply-To: <4F839296.6060601@udel.edu>
References: <CAD+XWwrYfSNiwgHYPDATXf84js4+3C2a7eMARWqzYD893gkTeQ@mail.gmail.com>
	<20120409234910.Horde.HOGZRNjz9kRPg1lWosc0ykA@webmail.df.eu>
	<jlvs48$rjl$2@dough.gmane.org>
	<CAD+XWwoin=MMaMB4SRnRd1X_JTrcdN6h9UUOh5Dr3jvD02x7DQ@mail.gmail.com>
	<4F839296.6060601@udel.edu>
Message-ID: <CAD+XWwqqEBh9abEggEvJUboaSpBV++yq3DTmFuaDrE6A3-5gfw@mail.gmail.com>

On Mon, Apr 9, 2012 at 20:53, Terry Reedy <tjreedy at udel.edu> wrote:
> On 4/9/2012 7:53 PM, Brian Curtin wrote:
>>
>> On Mon, Apr 9, 2012 at 18:41, Terry Reedy<tjreedy at udel.edu>  wrote:
>>>
>>> In particular, it should include a recent fix so that French keyboards
>>> work
>>> with tk/tkinter and hence Idle better than now. There has been more than
>>> one
>>> complaint about this.
>>
>>
>> Do you know when this was fixed or have any information about it? Tcl
>> and Tk 8.5.11 were released Nov 4, 2011. If it was fixed after that I
>> can look into patching our copy of whatever projects are affected.
>
>
> The patch is specifically for tkMacOS, 29/1/12
> http://core.tcl.tk/tk/info/9844fe10b9
>
> so it apparently does not affect Windows or what we include with Windows
> build. But it was a show stopper for some French Mac users, including one
> professor who wanted to use Python for an undergraduate course.
>
> On Mar 4, Ned Deily wrote on the idle-sig list:
>
> Update: The fix has now been released in the latest ActiveState Tcl 8.5
> for Mac OS X release (8.5.11.1) available here:
>
>     http://www.activestate.com/activetcl/downloads
>
> It appears to fix the French keyboard tilde problem and other similar
> problems with composite characters, like Option-U + vowel to form
> "umlauted" vowels in the U.S. input method. ?Many thanks to Adrian
> Robert, Kevin Walzer, and the ActiveState team for addressing this nasty
> problem.
>
> If you install ActiveState Tcl 8.5.x, it will automatically be used by
> the python.org 2.7.x, 3.2.x, and 3.3.x 64-bit/32-bit Pythons for OS X
> 10.6 and 10.7.  It will *not* be used by the Apple-supplied system
> Pythons or by 32-bit-only python.org Pythons.   More details here:
>
>     http://www.python.org/download/mac/tcltk/
> ===
>
> So the latest A.S. Windows release should be fine as the base for our
> Windows release.
>
> Terry

The Windows build works with 8.5.11 so I imagine we would just use
that. If anyone wants to pull it all out and make it use some
third-party installer, that's up to them.

I can try applying the relevant patches to the 8.5.11 we have, but I
don't really have the time or knowledge to test them. I don't know
anything about tcl/tk and don't know a whole lot about Macs.

From nad at acm.org  Tue Apr 10 05:15:53 2012
From: nad at acm.org (Ned Deily)
Date: Mon, 09 Apr 2012 20:15:53 -0700
Subject: [Python-Dev] Upgrading tcl/tk deps
References: <CAD+XWwrYfSNiwgHYPDATXf84js4+3C2a7eMARWqzYD893gkTeQ@mail.gmail.com>
	<20120409234910.Horde.HOGZRNjz9kRPg1lWosc0ykA@webmail.df.eu>
	<jlvs48$rjl$2@dough.gmane.org>
	<CAD+XWwoin=MMaMB4SRnRd1X_JTrcdN6h9UUOh5Dr3jvD02x7DQ@mail.gmail.com>
	<4F839296.6060601@udel.edu>
	<CAD+XWwqqEBh9abEggEvJUboaSpBV++yq3DTmFuaDrE6A3-5gfw@mail.gmail.com>
Message-ID: <nad-F8F7DB.20155309042012@news.gmane.org>

In article 
<CAD+XWwqqEBh9abEggEvJUboaSpBV++yq3DTmFuaDrE6A3-5gfw at mail.gmail.com>,
 Brian Curtin <brian at python.org> wrote:
> On Mon, Apr 9, 2012 at 20:53, Terry Reedy <tjreedy at udel.edu> wrote:
> > On 4/9/2012 7:53 PM, Brian Curtin wrote:
> >>
> >> On Mon, Apr 9, 2012 at 18:41, Terry Reedy<tjreedy at udel.edu>  wrote:
> >>>
> >>> In particular, it should include a recent fix so that French keyboards
> >>> work
> >>> with tk/tkinter and hence Idle better than now. There has been more than
> >>> one
> >>> complaint about this.
[...]
> The Windows build works with 8.5.11 so I imagine we would just use
> that. If anyone wants to pull it all out and make it use some
> third-party installer that's up to them.
> 
> I can try applying the relevant patches to the 8.5.11 we have, but I
> don't really have the time or knowledge to test them. I don't know
> anything about tcl/tk and don't know a whole lot about Macs.

The Tk fix Terry refers to is applicable only to the OS X Aqua Cocoa Tcl/Tk 
8.5 port.  It has nothing to do with Windows, any other OS X Tcl/Tk, or 
any other platform.  Further, the Tcl/Tk source Martin is talking about 
is used only by the Windows installer builds.  The python.org OS X 
installers do not build or supply Tcl/Tk; they link with the 
Apple-supplied Tcl/Tks and compatible distributions, like the 
ActiveState ones.   So this is all a non-issue.

-- 
 Ned Deily,
 nad at acm.org


From martin at v.loewis.de  Wed Apr 11 00:44:16 2012
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Wed, 11 Apr 2012 00:44:16 +0200
Subject: [Python-Dev] Upgrading tcl/tk deps
In-Reply-To: <nad-F8F7DB.20155309042012@news.gmane.org>
References: <CAD+XWwrYfSNiwgHYPDATXf84js4+3C2a7eMARWqzYD893gkTeQ@mail.gmail.com>	<20120409234910.Horde.HOGZRNjz9kRPg1lWosc0ykA@webmail.df.eu>	<jlvs48$rjl$2@dough.gmane.org>	<CAD+XWwoin=MMaMB4SRnRd1X_JTrcdN6h9UUOh5Dr3jvD02x7DQ@mail.gmail.com>	<4F839296.6060601@udel.edu>	<CAD+XWwqqEBh9abEggEvJUboaSpBV++yq3DTmFuaDrE6A3-5gfw@mail.gmail.com>
	<nad-F8F7DB.20155309042012@news.gmane.org>
Message-ID: <4F84B7C0.5040701@v.loewis.de>

> The Tk fix Terry refers to is applicable only to the OS X Aqua Cocoa Tcl/Tk 
> 8.5 port.  It has nothing to do with Windows, any other OS X Tcl/Tk, or 
> any other platform.  Further, the Tcl/Tk source Martin is talking about 
> is used only by the Windows installer builds.  The python.org OS X 
> installers do not build or supply Tcl/Tk; they link with the 
> Apple-supplied Tcl/Tks and compatible distributions, like the 
> ActiveState ones.   So this is all a non-issue.

Thanks for the clarification. I was about to write something less polite.

Regards,
Martin

From victor.stinner at gmail.com  Wed Apr 11 01:06:27 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Wed, 11 Apr 2012 01:06:27 +0200
Subject: [Python-Dev] this is why we shouldn't call it a "monotonic
 clock" (was: PEP 418 is too divisive and confusing and should be postponed)
In-Reply-To: <CAMpsgwageVXVyCpqczdwTERtLhUhitRhWsTv41ZEx4HpwuO7uw@mail.gmail.com>
References: <CAP7+vJKRPn3a4xmRxBmV7G9gBPQJOx-8Me994LXeXTBhRwPfFw@mail.gmail.com>
	<20120408040013.GA16581@cskk.homeip.net>
	<20120408124227.78ccab01@pitrou.net>
	<CAP7+vJ+VB_rNmWdNA2RDsz0KGrvGMO-N8fgfNGdYesFd3x6xPw@mail.gmail.com>
	<CAMpsgwb_M06tbeteMygemvG2Bk3R0TYt-8zeqLWvyJ0m2S0_OQ@mail.gmail.com>
	<CAP7+vJ+Ta8fZS6V5t-+3L_j0Zjaq0p92Jwm3RR5CbTV0PN+DXw@mail.gmail.com>
	<CAMpsgwageVXVyCpqczdwTERtLhUhitRhWsTv41ZEx4HpwuO7uw@mail.gmail.com>
Message-ID: <CAMpsgwZR9014GOT3dj4g_B4P0r1YQ3HuoGs8vABvGyQqQ3oQ1Q@mail.gmail.com>

>> We're going around in circles. I'm not asking what sleep does, I want
>> on principle a timer that does the same thing as sleep(), regardless
>> of how sleep() works. So if on some OS sleep() uses the same algorithm
>> as CLOCK_MONOTONIC_RAW, I want my timer to use that too. But if on
>> some other OS sleep() uses CLOCK_MONOTONIC, I want my timer there to
>> use that. And if on some OS sleep() is buggy and uses the time-of-day
>> clock, well, I wouldn't mind if my timer used the same thing.
>
> sleep() takes a number of seconds as its argument, so CLOCK_MONOTONIC
> should be used, not CLOCK_MONOTONIC_RAW. If I understood correctly,
> CLOCK_MONOTONIC ticks in (NTP-corrected) seconds, whereas a
> CLOCK_MONOTONIC_RAW "second" may run slightly faster or slower than a
> real second.

sleep() is not affected by system clock updates on any OS I tested:
Linux, FreeBSD, Mac OS X and OpenIndiana.
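
A rough way to see this from Python, assuming the time.monotonic() function
proposed by the PEP is available; set the system clock backwards during the
sleep and only the wall-clock delta jumps:

    import time

    t0_wall = time.time()        # wall clock: jumps if the clock is set
    t0_mono = time.monotonic()   # monotonic clock: unaffected by clock setting
    time.sleep(60)
    print("wall-clock delta:", time.time() - t0_wall)
    print("monotonic delta: ", time.monotonic() - t0_mono)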

By the way, CLOCK_BOOTTIME was added in Linux 2.6.39: it includes time
elapsed during system suspend, whereas CLOCK_MONOTONIC does not. I
updated the "Monotonic clocks" table to indicate whether or not each
clock includes time elapsed during suspend.
http://www.python.org/dev/peps/pep-0418/#monotonic-clocks

Victor

From victor.stinner at gmail.com  Wed Apr 11 01:25:04 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Wed, 11 Apr 2012 01:25:04 +0200
Subject: [Python-Dev] PEP 418 is too divisive and confusing and should
	be postponed
In-Reply-To: <87201503-000C-4258-A040-D9223EDE8188@twistedmatrix.com>
References: <4F7B96F1.6020906@pearwood.info> <4F7BA3C2.4050705@gmail.com>
	<CAMpsgwbZKgEzbZmsVBeM1vNfjRUxEdGH=FAybvf3HgNOsRsnRA@mail.gmail.com>
	<CAL0kPAVuPoYEo=hqTaryi1YNqhpF-8ngJAGrBwaQ4N3Y-M34dA@mail.gmail.com>
	<4F7CCF1D.2010600@canterbury.ac.nz>
	<CAL0kPAU0Zp5YwH3J+9KKqQ2r7QZo15o=VrqvViaVsNx7j4kQDw@mail.gmail.com>
	<CAMpsgwaPtm_M3wVmpqGUwmBEmbT8V=qFs1GU0-Fak=+Ws6JjYQ@mail.gmail.com>
	<jlo136$soq$1@dough.gmane.org>
	<CAMpsgwYLMMpOkXyKmVQsbbxRX5ZGHHWKe6v0trq-SR2AS4Oqdw@mail.gmail.com>
	<4F8019AE.1050305@pearwood.info>
	<87201503-000C-4258-A040-D9223EDE8188@twistedmatrix.com>
Message-ID: <CAMpsgwYYQGopc9G0hzYj-oix_MavrEjRS9xzAipivB2BX3EVyA@mail.gmail.com>

>> In any case, NTP is not the only thing that adjusts the clock, e.g. the
>> operating system will adjust the time for daylight savings.
>
> Daylight savings time is not a clock adjustment, at least not in the sense
> this thread has mostly been talking about the word "clock".  It doesn't
> affect the "seconds from epoch" measurement, it affects the way in which the
> clock is formatted to the user.

Ah yes, you're right. The system clock uses the UTC time zone on Linux
and Windows, and it is not affected by DST.

Victor

From greg at krypto.org  Wed Apr 11 02:03:42 2012
From: greg at krypto.org (Gregory P. Smith)
Date: Tue, 10 Apr 2012 17:03:42 -0700
Subject: [Python-Dev] Possible change to logging.handlers.SysLogHandler
In-Reply-To: <f755d5d3-b2f4-4d3a-8ad7-9b1e0d950b99@i18g2000vbx.googlegroups.com>
References: <f755d5d3-b2f4-4d3a-8ad7-9b1e0d950b99@i18g2000vbx.googlegroups.com>
Message-ID: <CAGE7PNKEeoec5TNvsAP21QX6rshUMrpqa9E9Km6AxSATWiN0-g@mail.gmail.com>

On Fri, Apr 6, 2012 at 1:06 PM, Vinay Sajip <vinay_sajip at yahoo.co.uk> wrote:

> There is a problem with the way logging.handlers.SysLogHandler works
> when presented with Unicode messages. According to RFC 5424, Unicode
> is supposed to be sent encoded as UTF-8 and preceded by a BOM.
> However, the current handler implementation puts the BOM at the start
> of the formatted message, and this is wrong in scenarios where you
> want to put some additional structured data in front of the
> unstructured message part; the BOM is supposed to go after the
> structured part (which, therefore, has to be ASCII) and before the
> unstructured part. In that scenario, the handler's current behaviour
> does not strictly conform to RFC 5424.
>
> The issue is described in [1]. The BOM was originally added / position
> changed in response to [2] and [3].
>
> It is not possible to achieve conformance with the current
> implementation of the handler, unless you subclass the handler and
> override the whole emit() method. This is not ideal. For 3.3, I will
> refactor the implementation to expose a method which creates the byte
> string which is sent over the wire to the syslog daemon. This method
> can then be overridden for specific use cases where needed.
>
> However, for 2.7 and 3.2, removing the BOM insertion would bring the
> implementation into conformance to the RFC, though the entire message
> would have to be regarded as just a set of octets. A Unicode message
> would still be encoded using UTF-8, but the BOM would be left out.
>
> I am thinking of removing the BOM insertion in 2.7 and 3.2 - although
> it is a change in behaviour, the current behaviour does seem broken
> with regard to RFC 5424 conformance. However, as some might disagree
> with that assessment and view it as a backwards-incompatible behaviour
> change, I thought I should post this to get some opinions about
> whether this change is viewed as objectionable.
>

Given the existing brokenness I personally think that removing the BOM
insertion (because it is incorrect) in 2.7 and 3.2 is fine if you cannot
find a way to make it correct in 2.7 and 3.2 without breaking existing APIs.

Could a private method to create the byte string, one that correctly adds
the BOM, not be added and used in 2.7 and 3.2?


> Regards,
>
> Vinay Sajip
>
> [1] http://bugs.python.org/issue14452
> [2] http://bugs.python.org/issue7077
> [3] http://bugs.python.org/issue8795
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/greg%40krypto.org
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20120410/57753188/attachment.html>

From vinay_sajip at yahoo.co.uk  Wed Apr 11 05:05:48 2012
From: vinay_sajip at yahoo.co.uk (Vinay Sajip)
Date: Wed, 11 Apr 2012 03:05:48 +0000 (UTC)
Subject: [Python-Dev] Possible change to logging.handlers.SysLogHandler
References: <f755d5d3-b2f4-4d3a-8ad7-9b1e0d950b99@i18g2000vbx.googlegroups.com>
	<CAGE7PNKEeoec5TNvsAP21QX6rshUMrpqa9E9Km6AxSATWiN0-g@mail.gmail.com>
Message-ID: <loom.20120411T045942-329@post.gmane.org>

Gregory P. Smith <greg <at> krypto.org> writes:

> Given the existing brokenness I personally think that removing the BOM
> insertion (because it is incorrect) in 2.7 and 3.2 is fine if you cannot find
> a way to make it correct in 2.7 and 3.2 without breaking existing APIs.

Thanks for the feedback.
 
> Could a private method to create the byte string, one that correctly adds
> the BOM, not be added and used in 2.7 and 3.2?

The problem is that given a format string, the code would not know where to
insert the BOM. According to the RFC, it's supposed to go just before the
unstructured message part, but that's format-string and hence
application-dependent. So some new API will need to be exposed, though I haven't
thought through exactly what that will be (for example, it could be a new
place-holder for the BOM in the format-string, or some new public methods which
are meant to be overridden and so not private).
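
For reference, a hand-rolled sketch of the byte layout RFC 5424 expects:
header and structured data in ASCII, then the UTF-8 BOM, then the free-form
message. The header and SD-ID values below are made-up examples, not
anything SysLogHandler currently produces:

    header = "<165>1 2012-04-11T03:05:48Z myhost myapp - - "      # ASCII
    structured_data = '[exampleSDID@32473 eventSource="app"] '    # ASCII
    message = "free-form text, possibly non-ASCII: caf\u00e9"
    payload = (header.encode("ascii")
               + structured_data.encode("ascii")
               + b"\xef\xbb\xbf"              # BOM goes here, per the RFC
               + message.encode("utf-8"))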

Regards,

Vinay Sajip


From janzert at janzert.com  Wed Apr 11 05:31:30 2012
From: janzert at janzert.com (Janzert)
Date: Tue, 10 Apr 2012 23:31:30 -0400
Subject: [Python-Dev] PEP 418 is too divisive and confusing and should
	be postponed
In-Reply-To: <CAMpsgwYLMMpOkXyKmVQsbbxRX5ZGHHWKe6v0trq-SR2AS4Oqdw@mail.gmail.com>
References: <4F7B96F1.6020906@pearwood.info> <4F7BA3C2.4050705@gmail.com>
	<CAMpsgwbZKgEzbZmsVBeM1vNfjRUxEdGH=FAybvf3HgNOsRsnRA@mail.gmail.com>
	<CAL0kPAVuPoYEo=hqTaryi1YNqhpF-8ngJAGrBwaQ4N3Y-M34dA@mail.gmail.com>
	<4F7CCF1D.2010600@canterbury.ac.nz>
	<CAL0kPAU0Zp5YwH3J+9KKqQ2r7QZo15o=VrqvViaVsNx7j4kQDw@mail.gmail.com>
	<CAMpsgwaPtm_M3wVmpqGUwmBEmbT8V=qFs1GU0-Fak=+Ws6JjYQ@mail.gmail.com>
	<jlo136$soq$1@dough.gmane.org>
	<CAMpsgwYLMMpOkXyKmVQsbbxRX5ZGHHWKe6v0trq-SR2AS4Oqdw@mail.gmail.com>
Message-ID: <jm2tuq$pp4$1@dough.gmane.org>

On 4/7/2012 5:49 AM, Victor Stinner wrote:
> 2012/4/7 Janzert<janzert at janzert.com>:
>> On 4/5/2012 6:32 AM, Victor Stinner wrote:
>>> I prefer to use CLOCK_MONOTONIC, not because it is also available for
>>> older Linux kernels, but because it is more reliable. Even if the
>>> underlying clock source is unstable (unstable frequency), a delta of
>>> two reads of the CLOCK_MONOTONIC clock is a result in *seconds*,
>>> whereas CLOCK_MONOTONIC_RAW may use an unit a little bit bigger or
>>> smaller than a second. time.monotonic() unit is the second, as written
>>> in its documentation.
>>
>> I believe the above is only true for sufficiently large time deltas. One of
>> the major purposes of NTP slewing is to give up some short term accuracy in
>> order to achieve long term accuracy (e.g. whenever the clock is found to be
>> ahead of real time it is purposefully ticked slower than real time).
>
> I don't think that NTP works like that. NTP only uses very smooth adjustments:
>
> ""slewing": change the clock frequency to be slightly faster or slower
> (which is done with adjtime()). Since the slew rate is limited to 0.5
> ms/s, each second of adjustment requires an amortization interval of
> 2000 s. Thus, an adjustment of many seconds can take hours or days to
> amortize."
> http://www.python.org/dev/peps/pep-0418/#ntp-adjustment
>

Right, the description in that paragraph is exactly what I was referring 
to above. :) It is unfortunate that a clock with a resolution of 1 ns may 
be purposefully thrown off by 500,000 ns per second in the short term.

In practice you are probably correct that it is better to take the 
slewed clock, even though it may have purposeful short-term inaccuracy 
thrown in, than to use the completely unadjusted one.

>> So for benchmarking it would not be surprising to be better off with the
>> non-adjusted clock. Ideally there would be a clock that was slewed "just
>> enough" to try and achieve short term accuracy, but I don't know of anything
>> providing that.
>
> time.monotonic() is not written for benchmarks. It does not have the
> highest frequency; its primary property is that it is monotonic. A
> side effect is that it is usually the steadiest clock.
>
> For example, on Windows time.monotonic() has only an accuracy of 15 ms
> (15 milliseconds not 15 microseconds).
>

Hmm, I just realized an unfortunate result of that: it means 
time.monotonic() will be too coarse to be useful for frame-rate-level 
timing on Windows (at 60 fps a frame lasts about 16.7 ms, barely more 
than the clock's 15 ms granularity).

Janzert


From jimjjewett at gmail.com  Wed Apr 11 08:49:41 2012
From: jimjjewett at gmail.com (Jim Jewett)
Date: Wed, 11 Apr 2012 02:49:41 -0400
Subject: [Python-Dev] PEP 418 glossary
Message-ID: <CA+OGgf7icJZGLityn3d-ucQrM1xZJfSqxvnb98ODAwdO-w1wyw@mail.gmail.com>

I believe PEP 418 (or at least the discussion) would benefit greatly
from a glossary to encourage people to use the same definitions.  This
is arguably the Definitions section, but it should move either near
the end or (preferably) ahead of the Functions.  It also needs to be
greatly expanded.

Here is my strawman proposal, which does use slightly different
definitions than the current PEP even for some terms that the PEP does
define:

Accuracy:
    Is the answer correct?  Any clock will eventually <drift>; if a
clock is intended to match <Civil Time>, it will need to be <adjusted>
back to the "true" time.

Adjusted:
    Resetting a clock to the correct time.  This may be done either
with a <Step> or by <Slewing>.

Civil Time:
    Time of day; external to the system.  10:45:13am is a Civil time;
45 seconds is not.  Provided by existing function time.localtime() and
time.gmtime().  Not changed by this PEP.

Clock:
    An instrument for measuring time.  Different clocks have different
characteristics; for example, a clock with <nanosecond> <precision>
may start to <drift> after a few minutes, while a less precise clock
remains accurate for days.  This PEP is primarily concerned with
clocks which use a <unit> of seconds.

Clock_Monotonic:
    The characteristics expected of a monotonic clock in practice.  In
addition to being <monotonic>, the <clock> should also be <steady> and
have relatively high <precision>, and should be convertible to a
<unit> of seconds.  The tradeoffs often include lack of a defined
<epoch> or mapping to <Civil Time>, and being more expensive (in
<latency>, power usage, or <duration> spent within calls to the clock
itself) to use.  For example, the clock may represent (a constant
multiplied by) ticks of a specific quartz timer on a specific CPU
core, and calls would therefore require synchronization between cores.
 The original motivation for this PEP was to provide a cross-platform
name for requesting a clock_monotonic clock.

Counter:
    A clock which increments each time a certain event occurs.  A
counter is <strictly monotonic>, but not <clock_monotonic>.  It can be
used to generate a unique (and ordered) timestamp, but these
timestamps cannot be mapped to <civil time>; tick creation may well be
bursty, with several advances in the same millisecond followed by
several days without any advance.

CPU Time:
    A measure of how much CPU effort has been spent on a certain task.
 CPU seconds are often normalized (so that a variable number can occur
in the same actual second).  CPU seconds can be important when
profiling, but they do not map directly to user response time, nor are
they directly comparable to (real time) seconds.  time.clock() is
deprecated because it returns <real time> seconds on Windows, but CPU
seconds on unix, which prevents a consistent cross-platform
interpretation.

Duration:
    Elapsed time.  The difference between the starting and ending
times.  A defined <epoch> creates an implicit (and usually large)
duration.  More precision can generally be provided for a relatively
small <duration>.

Drift:
    The accumulated error against "true" time, as defined externally
to the system.

Epoch:
    The reference point of a clock.  For clocks providing <civil
time>, this is often midnight as the day (and year) rolled over to
January 1, 1970.  For a <clock_monotonic> clock, the epoch may be
undefined (represented as None).

Latency:
    Delay.  By the time a clock call returns, the <real time> has
advanced, possibly by more than the precision of the clock.

Microsecond:
    1/1,000,000 of a second.  Fast enough for most -- but not all --
profiling uses.

Millisecond:
    1/1,000 of a second.  More than adequate for most end-to-end UI
measurements, but often too coarse for profiling individual functions.

Monotonic:
    Moving in at most one direction; for clocks, that direction is
forward.  A (nearly useless) clock that always returns exactly the
same time is technically monotonic.  In practice, most uses of
"monotonic" with respect to clocks actually refer to a stronger set of
guarantees, as described under <clock_monotonic>

Nanosecond:
    1/1,000,000,000 of a second.  The smallest unit of resolution --
and smaller than the actual precision -- available in current
mainstream operating systems.

Precision:
    Significant Digits.  What is the smallest duration that the clock
can distinguish?  This differs from <resolution> in that a difference
greater than the minimum precision is actually meaningful.

Process Time:
    Time elapsed since the process began.  It is typically measured in
<CPU time> rather than <real time>, and typically does not advance
while the process is suspended.

Real Time:
    Time in the real world.  This differs from <Civil time> in that it
is not <adjusted>, but they should otherwise advance in lockstep.  It
is not related to the "real time" of "Real Time [Operating] Systems".
It is sometimes called "wall clock time" to avoid that ambiguity;
unfortunately, that introduces different ambiguities.

Resolution:
    Represented Digits.  Note that many clocks will have a resolution
greater than their actual <precision>.

Slew:
    A temporary slight change to a clock's speed, usually intended to
correct <drift> with respect to an external authority.

Stability:
    Persistence of accuracy.  A measure of expected <drift>.

Steady:
    A clock with high <stability> and relatively high <accuracy> and
<precision>.  In practice, it is often used to indicate a
<clock_monotonic> clock, but places greater emphasis on the
consistency of the duration between subsequent ticks.

Step:
    An instantaneous change in the represented time.  Instead of
speeding or slowing the clock (<slew>), a single offset is permanently
added.

Strictly Monotonic:
    Monotonic, and not repeating any values.  A strictly monotonic
clock is useful as a counter.  Very few clocks promise this
explicitly, but <clock_monotonic> clocks typically have a precision
high enough (and are expensive enough to call) that the same value
will not be returned twice in practice.

System Time:
    Time as represented by the Operating System.

Thread Time:
    Time elapsed since the thread began.  It is typically measured in
<CPU time> rather than <real time>, and typically does not advance
while the thread is idle.

Tic, Tick:
    The smallest increment of a clock.  There may or may not be a
constant k such that k*ticks == 1 second.

time.clock():
    Existing function; deprecated because of platform inconsistencies.
 On Windows, it measures <real time>, and on unix it measures <CPU
time>.

time.monotonic_clock():
    Proposed addition to the time module, providing a <steady> or
<clock_monotonic> clock which measures in <real time> seconds with
high precision and stability.

time.time():
    Existing function to provide <civil time>.  Users should be
prepared for arbitrarily large steps or slew in either direction.  Not
affected by this PEP.

Unit:
    What a clock measures in.  Other than counters, most clocks are
normalized to either <real time> seconds or <CPU time> seconds.

Wall Clock Time, Wallclock, Walltime:
    What the clock on the wall says.  This is typically used as a
synonym for <real time>; unfortunately, wall time is itself ambiguous.
 (Does it mean the physical wall, external to the system?  Does it
mean <civil time> and imply jumps for daylight savings time?)

-jJ

From rosuav at gmail.com  Wed Apr 11 10:01:47 2012
From: rosuav at gmail.com (Chris Angelico)
Date: Wed, 11 Apr 2012 18:01:47 +1000
Subject: [Python-Dev] PEP 418 glossary
In-Reply-To: <CA+OGgf7icJZGLityn3d-ucQrM1xZJfSqxvnb98ODAwdO-w1wyw@mail.gmail.com>
References: <CA+OGgf7icJZGLityn3d-ucQrM1xZJfSqxvnb98ODAwdO-w1wyw@mail.gmail.com>
Message-ID: <CAPTjJmoohz=_3gQOYcZvqqssH1HpB1=5c-P1aPg0O+jjiFQmMg@mail.gmail.com>

On Wed, Apr 11, 2012 at 4:49 PM, Jim Jewett <jimjjewett at gmail.com> wrote:
> Clock:
>     An instrument for measuring time.  Different clocks have different
> characteristics; for example, a clock with <nanonsecond> <precision>

Small typo. Otherwise, excellent reference document - thank you! Well
worth gathering all those terms.

ChrisA
"There's a glory for you!"

From stephen at xemacs.org  Wed Apr 11 11:30:47 2012
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Wed, 11 Apr 2012 18:30:47 +0900
Subject: [Python-Dev] PEP 418 glossary
In-Reply-To: <CA+OGgf7icJZGLityn3d-ucQrM1xZJfSqxvnb98ODAwdO-w1wyw@mail.gmail.com>
References: <CA+OGgf7icJZGLityn3d-ucQrM1xZJfSqxvnb98ODAwdO-w1wyw@mail.gmail.com>
Message-ID: <CAL_0O1_Nqy5eOmQX5a1rKJ5Rx7+_ojwJi3kXagkRq9M069_g2w@mail.gmail.com>

A few comments, YMMV.

On Wed, Apr 11, 2012 at 3:49 PM, Jim Jewett <jimjjewett at gmail.com> wrote:

> Here is my strawman proposal, which does use slightly different
> definitions than the current PEP even for some terms that the PEP does
> define:
>
> Accuracy:
>     Is the answer correct?  Any clock will eventually <drift>; if a
> clock is intended to match <Civil Time>, it will need to be <adjusted>
> back to the "true" time.

Accuracy is not a Boolean.  Accuracy is the lack of difference from
some standard.

> [...]
> Clock_Monotonic:
>     The characteristics expected of a monotonic clock in practice.

Whose practice?  In C++, "monotonic" was defined as "mathematically
monotonic", and rather than talk about "what's expected of a monotonic
clock in practice," they chose to use a different term ("steady") for
the clocks that (come closer to) DTRT.

I think it would be best to use a different name.

> [...]
> Duration:
>     Elapsed time.  The difference between the starting and ending
> times.  A defined <epoch> creates an implicit (and usually large)
> duration.  More precision can generally be provided for a relatively
> small <duration>.

Epoch is independent of duration.  Rather, epoch can be combined with
duration to provide a clock that measures <civil time>.

> [...]
> Monotonic:
>     Moving in at most one direction; for clocks, that direction is
> forward.  A (nearly useless) clock that always returns exactly the
> same time is technically monotonic.  In practice, most uses of
> "monotonic" with respect to clocks actually refer to a stronger set of
> guarantees, as described under <clock_monotonic>

Again, even in a glossary you need to be vague about monotonic.

> [...]
> Precision:
>     Significant Digits.  What is the smallest duration that the clock
> can distinguish?  This differs from <resolution> in that a difference
> greater than the minimum precision is actually meaningful.

I think you have this backwards.  Precision is the number of
significant digits reported.  Resolution is the smallest duration that
is meaningful.

> [...]
> Slew:
>     A temporary slight change to a clock's speed, usually intended to
> correct <drift> with respect to an external authority.

I don't see that anything needs to be temporary about it.  Also, the
gloss should say something about making the correction smoothly, and
refer to "<Step>".

Something like: A slight change to a clock's speed to smoothly correct
drift.  Contrast with <Step>.

> [...]
> Tic, Tick:
>     The smallest increment of a clock.  There may or may not be a
> constant k such that k*ticks == 1 second.

Does anybody who matters actually spell this "tic"?  "Tic" is a
perfectly good English word that means a twitch or other unconscious,
instantaneous behavior.  I think this is sufficiently ambiguous (a
clock with a tic presumably is one that is unreliable!) that the
spelling should be deprecated in documentation (people can spell
program identifiers however they like, of course).

> [...]

From mark at hotpy.org  Wed Apr 11 11:56:04 2012
From: mark at hotpy.org (Mark Shannon)
Date: Wed, 11 Apr 2012 10:56:04 +0100
Subject: [Python-Dev] Meaning of the f_tstate field in the frame object
Message-ID: <4F855534.1020406@hotpy.org>

What is the purpose of the f_tstate field in the frame object?
It holds a borrowed reference to the threadstate in which the frame
was created.

If PyThreadState_GET()->frame->f_tstate == PyThreadState_GET()
then it is redundant.

But what if PyThreadState_GET()->frame->f_tstate != PyThreadState_GET(),
which can happen when a generator is created in one thread and called in
another?

Removing the f_tstate field provides a clean fix to 
http://bugs.python.org/issue14432, but is it safe to do so?
I think it is safe, but does anyone think otherwise?

(Removing it requires the replacement of frame->f_tstate
with PyThreadState_GET() in one place in _PyEval_CallTracing.)
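
A minimal sketch of the cross-thread generator case mentioned above; nothing
special happens, it is simply the situation in which the thread state
recorded in the frame is no longer the thread actually running it:

    import threading

    def gen():
        yield 1
        yield 2

    g = gen()   # the generator's frame (and its f_tstate) is created here,
                # in the main thread

    def consume():
        # The frame is resumed in this thread, so the recorded thread state
        # does not match the current one.
        print(list(g))

    worker = threading.Thread(target=consume)
    worker.start()
    worker.join()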

Cheers,
Mark.

From arigo at tunes.org  Wed Apr 11 13:47:42 2012
From: arigo at tunes.org (Armin Rigo)
Date: Wed, 11 Apr 2012 13:47:42 +0200
Subject: [Python-Dev] Experimenting with STM on CPython
Message-ID: <CAMSv6X0HoNr6tVdXTxMPyzr9F6VSc40kZSPr3Fe-Z9QDNduwQQ@mail.gmail.com>

Hi all,

This is an update on the (so far PyPy-only) project of adding "Automatic
Mutual Exclusion" to Python, via STM (Software Transactional Memory).
For the motivation, see here:

http://morepypy.blogspot.com/2012/03/call-for-donations-for-software.html
"""The point is that [with STM/AME] your program is always correct,
and can be tweaked to improve performance. This is the opposite from
what explicit threads and locks give you, which is a performant
program which you need to tweak to remove bugs. Arguably, this
approach is the reason for why you use Python in the first place
:-)"""

The update is: I now believe that it might be (reasonably) possible to
apply the same techniques to CPython, and not only to PyPy.  For now I
am experimenting with applying them in a simple CPython-like
interpreter.  If it works, it might end up as a patch to the core parts
of CPython.  The interesting property is that it would still be able to
run unmodified C extension modules --- the Python code gets the benefits
of multi-core STM/AME only if it involves only the patched parts of the
C code, but in all cases it still works correctly, falling back to
single-core usage.

I did not try to hack CPython so far, but only this custom interpreter
for a Lisp language, whose implementation should be immediately familiar
to anyone who knows CPython C code: https://bitbucket.org/arigo/duhton .
The non-standard built-in function is "transaction", which schedules a
transaction to run later (see test/test_transaction.py).

The code contains the necessary tweaks to reference counting, and seems
to work on all examples, but leaks some of the objects so far.  Fixing
this directly might be possible, but I'm not sure yet (it might require
interaction with the cycle-detecting GC of CPython).  Moreover the
performance hit is well below 2x, more like 20%.

If anyone is interested, I could create a dedicated mailing list in
order to discuss this in more details.  From experience I would think
that this has the potential to become a Psyco-like experiment, but
unlike 10 years ago, today I'm not ready any more to dive completely
alone into a project of that scale :-)


A bientôt,

Armin.

From stefan_ml at behnel.de  Wed Apr 11 14:29:29 2012
From: stefan_ml at behnel.de (Stefan Behnel)
Date: Wed, 11 Apr 2012 14:29:29 +0200
Subject: [Python-Dev] Experimenting with STM on CPython
In-Reply-To: <CAMSv6X0HoNr6tVdXTxMPyzr9F6VSc40kZSPr3Fe-Z9QDNduwQQ@mail.gmail.com>
References: <CAMSv6X0HoNr6tVdXTxMPyzr9F6VSc40kZSPr3Fe-Z9QDNduwQQ@mail.gmail.com>
Message-ID: <jm3tf9$nme$1@dough.gmane.org>

Armin Rigo, 11.04.2012 13:47:
> This is an update on the (so far PyPy-only) project of adding "Automatic
> Mutual Exclusion" to Python, via STM (Software Transactional Memory).
> [...]
> Moreover the performance hit is well below 2x, more like 20%.

Hmm, those 20% refer to STM, right? Without hardware support? Then hardware
support could be expected to drop that even further?

Did you do any experiments with running parallel code so far, to see if
that scales as expected?

Stefan


From ncoghlan at gmail.com  Wed Apr 11 14:40:14 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 11 Apr 2012 22:40:14 +1000
Subject: [Python-Dev] PEP 418 glossary
In-Reply-To: <CAL_0O1_Nqy5eOmQX5a1rKJ5Rx7+_ojwJi3kXagkRq9M069_g2w@mail.gmail.com>
References: <CA+OGgf7icJZGLityn3d-ucQrM1xZJfSqxvnb98ODAwdO-w1wyw@mail.gmail.com>
	<CAL_0O1_Nqy5eOmQX5a1rKJ5Rx7+_ojwJi3kXagkRq9M069_g2w@mail.gmail.com>
Message-ID: <CADiSq7cN9LVap289MT1cQMWPzRUpEJfOUAO_mDU-x+DjJrRkRg@mail.gmail.com>

On Wed, Apr 11, 2012 at 7:30 PM, Stephen J. Turnbull <stephen at xemacs.org> wrote:
>> Clock_Monotonic:
>> ? ?The characteristics expected of a monotonic clock in practice.
>
> Whose practice?  In C++, "monotonic" was defined as "mathematically
> monotonic", and rather than talk about "what's expected of a monotonic
> clock in practice," they chose to use a different term ("steady") for
> the clocks that (come closer to) DTRT.
>
> I think it would be best to use a different name.

We may as well stick with the POSIX terminology (noting cases where
there are discrepancies with other uses of terms). For better or for
worse, Python is a POSIX based language.

Regards,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From arigo at tunes.org  Wed Apr 11 14:51:34 2012
From: arigo at tunes.org (Armin Rigo)
Date: Wed, 11 Apr 2012 14:51:34 +0200
Subject: [Python-Dev] Experimenting with STM on CPython
In-Reply-To: <jm3tf9$nme$1@dough.gmane.org>
References: <CAMSv6X0HoNr6tVdXTxMPyzr9F6VSc40kZSPr3Fe-Z9QDNduwQQ@mail.gmail.com>
	<jm3tf9$nme$1@dough.gmane.org>
Message-ID: <CAMSv6X0ieCRFtOaPF8UcTAv7QRjY-_-2i=pX1kitFzG8xLjt8g@mail.gmail.com>

Hi Stefan,

On Wed, Apr 11, 2012 at 14:29, Stefan Behnel <stefan_ml at behnel.de> wrote:
>> Moreover the performance hit is well below 2x, more like 20%.
>
> Hmm, those 20% refer to STM, right? Without hardware support? Then hardware
> support could be expected to drop that even further?

Yes, that's using STM on my regular laptop.  How HTM would help
remains unclear at this point, because in this approach transactions
are typically rather large --- likely much larger than what the
first-generation HTM-capable processors will support next year.  But
20% looks good anyway :-)

> Did you do any experiments with running parallel code so far, to see if
> that scales as expected?

Yes, it scales very nicely on small non-conflicting examples.  I
believe that it scales just as nicely on large examples on CPython
too, based on the approach --- as long as we, as CPython developers,
make enough efforts to adapt a sufficiently large portion of the
CPython C code base (which would mean: most mutable built-in objects'
implementation).


A bientôt,

Armin.

From vinay_sajip at yahoo.co.uk  Wed Apr 11 15:16:09 2012
From: vinay_sajip at yahoo.co.uk (Vinay Sajip)
Date: Wed, 11 Apr 2012 13:16:09 +0000 (UTC)
Subject: [Python-Dev] Possible change to logging.handlers.SysLogHandler
References: <f755d5d3-b2f4-4d3a-8ad7-9b1e0d950b99@i18g2000vbx.googlegroups.com>
	<CAGE7PNKEeoec5TNvsAP21QX6rshUMrpqa9E9Km6AxSATWiN0-g@mail.gmail.com>
Message-ID: <loom.20120411T151405-269@post.gmane.org>

Gregory P. Smith <greg <at> krypto.org> writes:

> Given the existing brokenness I personally think that removing the BOM
insertion (because it is incorrect) in 2.7 and 3.2 is fine if you cannot find a
way to make it correct in 2.7 and 3.2 without breaking existing APIs.

I have an idea for a change which won't require changing any public APIs. It
does change the behaviour so that BOM insertion no longer happens, but anyone
who needs a BOM can have it with a simple update to their format string.
The idea is outlined here:

http://bugs.python.org/issue14452#msg158030
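
Roughly, the idea is that the handler stops inserting the BOM itself; anyone
who wants RFC 5424-style output just places it in the format string, along
these lines (a hypothetical sketch of the approach, not the exact code from
the issue):

    import logging
    import logging.handlers

    handler = logging.handlers.SysLogHandler(address=('localhost', 514))
    # The BOM becomes the user's responsibility: put '\ufeff' in the format
    # string exactly where the UTF-8 part of the message should begin.
    # (Under Python 2 the format string needs a u'' prefix.)
    handler.setFormatter(
        logging.Formatter('ASCII section\ufeffUnicode section: %(message)s'))

    logger = logging.getLogger('example')
    logger.addHandler(handler)
    logger.error('hello')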

Comments would be appreciated.

Regards,

Vinay Sajip


From stefan_ml at behnel.de  Wed Apr 11 15:31:09 2012
From: stefan_ml at behnel.de (Stefan Behnel)
Date: Wed, 11 Apr 2012 15:31:09 +0200
Subject: [Python-Dev] Experimenting with STM on CPython
In-Reply-To: <CAMSv6X0ieCRFtOaPF8UcTAv7QRjY-_-2i=pX1kitFzG8xLjt8g@mail.gmail.com>
References: <CAMSv6X0HoNr6tVdXTxMPyzr9F6VSc40kZSPr3Fe-Z9QDNduwQQ@mail.gmail.com>
	<jm3tf9$nme$1@dough.gmane.org>
	<CAMSv6X0ieCRFtOaPF8UcTAv7QRjY-_-2i=pX1kitFzG8xLjt8g@mail.gmail.com>
Message-ID: <jm412u$mcd$1@dough.gmane.org>

Armin Rigo, 11.04.2012 14:51:
> On Wed, Apr 11, 2012 at 14:29, Stefan Behnel wrote:
>>> Moreover the performance hit is well below 2x, more like 20%.
>>
>> Hmm, those 20% refer to STM, right? Without hardware support? Then hardware
>> support could be expected to drop that even further?
> 
> Yes, that's using STM on my regular laptop.  How HTM would help
> remains unclear at this point, because in this approach transactions
> are typically rather large --- likely much larger than what the
> first-generation HTM-capable processors will support next year.

Ok. I guess once the code is there, the hardware will eventually catch up.

However, I'm not sure what you consider "large". A lot of manipulation
operations for the builtin types are not all that involved, at least in the
"normal" cases (read: fast paths) that involve no memory reallocation etc.,
and anything that can be called by and doesn't call into the interpreter
would be a complete and independent transaction all by itself, as the GIL
is allowed to be released between any two ticks.

Do you know if hybrid TM is possible at this level? I.e. short transactions
run in hardware, long ones in software? (Assuming we know what's "long" and
"short", I guess...)


> But 20% looks good anyway :-)

Oh, definitely.


>> Did you do any experiments with running parallel code so far, to see if
>> that scales as expected?
> 
> Yes, it scales very nicely on small non-conflicting examples.  I
> believe that it scales just as nicely on large examples on CPython
> too, based on the approach --- as long as we, as CPython developers,
> make enough efforts to adapt a sufficiently large portion of the
> CPython C code base (which would mean: most mutable built-in objects'
> implementation).

Right, that would involve some work. But the advantage, as I understand it,
is that this can be done incrementally. I.e. make it work, then make it
fast and make it scale.

Stefan


From stefan_ml at behnel.de  Wed Apr 11 15:57:01 2012
From: stefan_ml at behnel.de (Stefan Behnel)
Date: Wed, 11 Apr 2012 15:57:01 +0200
Subject: [Python-Dev] Experimenting with STM on CPython
In-Reply-To: <jm412u$mcd$1@dough.gmane.org>
References: <CAMSv6X0HoNr6tVdXTxMPyzr9F6VSc40kZSPr3Fe-Z9QDNduwQQ@mail.gmail.com>
	<jm3tf9$nme$1@dough.gmane.org>
	<CAMSv6X0ieCRFtOaPF8UcTAv7QRjY-_-2i=pX1kitFzG8xLjt8g@mail.gmail.com>
	<jm412u$mcd$1@dough.gmane.org>
Message-ID: <jm42je$3pf$1@dough.gmane.org>

Stefan Behnel, 11.04.2012 15:31:
> Armin Rigo, 11.04.2012 14:51:
>> On Wed, Apr 11, 2012 at 14:29, Stefan Behnel wrote:
>>> Did you do any experiments with running parallel code so far, to see if
>>> that scales as expected?
>>
>> Yes, it scales very nicely on small non-conflicting examples.  I
>> believe that it scales just as nicely on large examples on CPython
>> too, based on the approach --- as long as we, as CPython developers,
>> make enough efforts to adapt a sufficiently large portion of the
>> CPython C code base (which would mean: most mutable built-in objects'
>> implementation).
> 
> Right, that would involve some work. But the advantage, as I understand it,
> is that this can be done incrementally.

Hmm, and according to the papers that are referenced on the PyPy proposal
page, at least some of this work has already been done, it seems.

http://pypy.org/tmdonate.html#why-hasn-t-the-idea-been-implemented-for-cpython-already

Stefan


From neologix at free.fr  Wed Apr 11 16:27:55 2012
From: neologix at free.fr (Charles-François Natali)
Date: Wed, 11 Apr 2012 16:27:55 +0200
Subject: [Python-Dev] Experimenting with STM on CPython
In-Reply-To: <jm412u$mcd$1@dough.gmane.org>
References: <CAMSv6X0HoNr6tVdXTxMPyzr9F6VSc40kZSPr3Fe-Z9QDNduwQQ@mail.gmail.com>
	<jm3tf9$nme$1@dough.gmane.org>
	<CAMSv6X0ieCRFtOaPF8UcTAv7QRjY-_-2i=pX1kitFzG8xLjt8g@mail.gmail.com>
	<jm412u$mcd$1@dough.gmane.org>
Message-ID: <CAH_1eM0FySY_KjmnJEECGQRbdh5Uq1MRSEGankavWO-hiQbd4A@mail.gmail.com>

>> Yes, that's using STM on my regular laptop.  How HTM would help
>> remains unclear at this point, because in this approach transactions
>> are typically rather large --- likely much larger than what the
>> first-generation HTM-capable processors will support next year.
>
> Ok. I guess once the code is there, the hardware will eventually catch up.
>
> However, I'm not sure what you consider "large". A lot of manipulation
> operations for the builtin types are not all that involved, at least in the
> "normal" cases (read: fast paths) that involve no memory reallocation etc.,
> and anything that can be called by and doesn't call into the interpreter
> would be a complete and independent transaction all by itself, as the GIL
> is allowed to be released between any two ticks.

Large as in L2-cache large, and as in "you won't get a page fault or
an interrupt, you won't make any syscall, any I/O..." ;-)

From solipsis at pitrou.net  Wed Apr 11 16:33:58 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Wed, 11 Apr 2012 16:33:58 +0200
Subject: [Python-Dev] Experimenting with STM on CPython
References: <CAMSv6X0HoNr6tVdXTxMPyzr9F6VSc40kZSPr3Fe-Z9QDNduwQQ@mail.gmail.com>
	<jm3tf9$nme$1@dough.gmane.org>
	<CAMSv6X0ieCRFtOaPF8UcTAv7QRjY-_-2i=pX1kitFzG8xLjt8g@mail.gmail.com>
	<jm412u$mcd$1@dough.gmane.org>
Message-ID: <20120411163358.5f59eb6a@pitrou.net>

On Wed, 11 Apr 2012 15:31:09 +0200
Stefan Behnel <stefan_ml at behnel.de> wrote:
> 
> Ok. I guess once the code is there, the hardware will eventually catch up.
> 
> However, I'm not sure what you consider "large". A lot of manipulation
> operations for the builtin types are not all that involved, at least in the
> "normal" cases (read: fast paths) that involve no memory reallocation etc.,
> and anything that can be called by and doesn't call into the interpreter
> would be a complete and independent transaction all by itself, as the GIL
> is allowed to be released between any two ticks.

I think Armin's plan is not to work at the bytecode level, but make
transactions explicit (at least in framework code - e.g. Twisted or
Stackless -, perhaps not in user code). Perhaps he can elaborate on
that.

> Do you know if hybrid TM is possible at this level? I.e. short transactions
> run in hardware, long ones in software? (Assuming we know what's "long" and
> "short", I guess...)

There are other issues than the size of transactions. For example, one
issue is that not all operations may be allowed in a transaction:

"In addition, there are a number of instructions that may cause an
abort on specific implementations. These instructions include x87 and
MMX, mixed access to XMM and YMM registers, updates to non-status parts
of EFLAGs, updating segment, debug or control registers, ring
transitions, cache and TLB control instructions, any non-writeback
memory type accesses, processor state save, interrupts, I/O,
virtualization (VMX), trusted execution (SMX) and several miscellaneous
types."

http://realworldtech.com/page.cfm?ArticleID=RWT021512050738

So, realistically, a (S)TM implementation in CPython (and probably also
in PyPy) would have to stand on its own merits, rather than betting on
pie-in-the-sky improvements of HTM implementations.

Regards

Antoine.



From arigo at tunes.org  Wed Apr 11 17:08:09 2012
From: arigo at tunes.org (Armin Rigo)
Date: Wed, 11 Apr 2012 17:08:09 +0200
Subject: [Python-Dev] Experimenting with STM on CPython
In-Reply-To: <20120411163358.5f59eb6a@pitrou.net>
References: <CAMSv6X0HoNr6tVdXTxMPyzr9F6VSc40kZSPr3Fe-Z9QDNduwQQ@mail.gmail.com>
	<jm3tf9$nme$1@dough.gmane.org>
	<CAMSv6X0ieCRFtOaPF8UcTAv7QRjY-_-2i=pX1kitFzG8xLjt8g@mail.gmail.com>
	<jm412u$mcd$1@dough.gmane.org> <20120411163358.5f59eb6a@pitrou.net>
Message-ID: <CAMSv6X3Rjf4AUggQe63VJy-bpzu_VmAH6+ch=iGjYzEhCZz7yw@mail.gmail.com>

Hi Antoine, hi Stefan,

On Wed, Apr 11, 2012 at 16:33, Antoine Pitrou <solipsis at pitrou.net> wrote:
> I think Armin's plan is not to work at the bytecode level, but make
> transactions explicit (at least in framework code - e.g. Twisted or
> Stackless -, perhaps not in user code). Perhaps he can elaborate on
> that.

Yes, precisely.  It should be explained in the proposal.  The
references in "http://pypy.org/tmdonate.html#why-hasn-t-the-idea-been-implemented-for-cpython-already"
don't change CPython (or only minimally).  They use Hardware TM, but
(the most important thing imho) they target bytecode-level
transactions --- i.e. the programmer is still stuck with the
"threading" module.

About using it explicitly in user code: I found out that there are use
cases to do so directly.  If you have a CPU-intensive program that
does:

    for x in some_random_order_iterator:
        do_stuff_for(x)

Then if the things you do are "not too dependent on each other", you
can win by replacing it with:

    for x in some_random_order_iterator:
        transaction.add(do_stuff_for, x)
    transaction.run()

and no other change.  It has exactly the same semantics, and in this
case you don't really need a framework in which to hide the
transaction.add().  Compare it with the situation of spawning threads:
you need to carefully add locks *everywhere* or your program is buggy
--- both in today's CPython or in a GIL-less,
bytecode-level-transaction CPython.

By the way, that's why I said that transactions are arbitrarily long:
one transaction will be, in this case, everything that do_stuff_for(x)
does.


A bientôt,

Armin.

From g.brandl at gmx.net  Wed Apr 11 21:33:30 2012
From: g.brandl at gmx.net (Georg Brandl)
Date: Wed, 11 Apr 2012 21:33:30 +0200
Subject: [Python-Dev] cpython: use assertWarns instead of check_warnings
	- Issue14341
In-Reply-To: <E1SHz7Y-0004v5-KQ@dinsdale.python.org>
References: <E1SHz7Y-0004v5-KQ@dinsdale.python.org>
Message-ID: <jm4m9l$cc5$1@dough.gmane.org>

On 11.04.2012 17:06, senthil.kumaran wrote:
> http://hg.python.org/cpython/rev/751c7b81f6ee
> changeset:   76241:751c7b81f6ee
> parent:      76232:8a47d2322df0
> user:        Senthil Kumaran<senthil at uthcode.com>
> date:        Wed Apr 11 23:05:49 2012 +0800
> summary:
>    use assertWarns instead of check_warnings - Issue14341
>
> files:
>    Lib/test/test_urllib2.py |  16 +++++++++-------
>    1 files changed, 9 insertions(+), 7 deletions(-)
>
>
> diff --git a/Lib/test/test_urllib2.py b/Lib/test/test_urllib2.py
> --- a/Lib/test/test_urllib2.py
> +++ b/Lib/test/test_urllib2.py
> @@ -618,21 +618,23 @@
>
>       def test_method_deprecations(self):
>           req = Request("http://www.example.com")
> -        with support.check_warnings(('', DeprecationWarning)):
> +
> +        with self.assertWarns(DeprecationWarning) as cm:
>               req.add_data("data")

There's no need for adding the "as cm" if you don't need the cm object.
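
That is, the block can simply read (an untested sketch; it assumes a branch
where Request.add_data() actually raises DeprecationWarning, as in the
changeset above):

    import unittest
    from urllib.request import Request

    class MethodDeprecationTest(unittest.TestCase):
        def test_add_data_warns(self):
            req = Request("http://www.example.com")
            # No "as cm" needed when the caught warning object is not used.
            with self.assertWarns(DeprecationWarning):
                req.add_data("data")

    if __name__ == "__main__":
        unittest.main()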

Georg


From benjamin at python.org  Wed Apr 11 21:37:49 2012
From: benjamin at python.org (Benjamin Peterson)
Date: Wed, 11 Apr 2012 15:37:49 -0400
Subject: [Python-Dev] [RELEASED] Python 2.6.8, 2.7.3, 3.1.5, and 3.2.3
Message-ID: <CAPZV6o9mY=_UybV_Rjj9xAYF1pRF_vNk9Uw_LE9JLWyQ=n512w@mail.gmail.com>

We're bursting with enthusiasm to announce the immediate availability of Python
2.6.8, 2.7.3, 3.1.5, and 3.2.3. These releases include several security fixes.

Note: Virtualenvs created with older releases in the 2.6, 2.7, 3.1, or 3.2
series may not work with these bugfix releases. Specifically, the os module may
not appear to have a urandom function. This is a virtualenv bug, which can be
solved by recreating the broken virtualenvs with the newer Python versions.

The main impetus for these releases is fixing a security issue in Python's hash
based types, dict and set, as described below. Python 2.7.3 and 3.2.3 include
the security patch and the normal set of bug fixes. Since Python 2.6 and 3.1 are
maintained only for security issues, 2.6.8 and 3.1.5 contain only various
security patches.

The security issue exploits Python's dict and set implementations. Carefully
crafted input can lead to extremely long computation times and denials of
service. [1] Python dict and set types use hash tables to provide amortized
constant time operations. Hash tables require a well-distributed hash function
to spread data evenly across the hash table. The security issue is that an
attacker could compute thousands of keys with colliding hashes; this causes
quadratic algorithmic complexity when the hash table is constructed. To
alleviate the problem, the new releases add randomization to the hashing of
Python's string types (bytes/str in Python 3 and str/unicode in Python 2),
datetime.date, and datetime.datetime. This prevents an attacker from computing
colliding keys of these types without access to the Python process.

Hash randomization causes the iteration order of dicts and sets to be
unpredictable and differ across Python runs. Python has never guaranteed
iteration order of keys in a dict or set, and applications are advised to never
rely on it. Historically, dict iteration order has not changed very often across
releases and has always remained consistent between successive executions of
Python. Thus, some existing applications may be relying on dict or set ordering.
Because of this and the fact that many Python applications which don't accept
untrusted input are not vulnerable to this attack, in all stable Python releases
mentioned here, HASH RANDOMIZATION IS DISABLED BY DEFAULT. There are two ways to
enable it. The -R command-line option can be passed to the python executable. It
can also be enabled by setting the environment variable PYTHONHASHSEED to
"random". (Other values are accepted, too; pass -h to python for complete
description.)
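
For example, a program can check whether randomization is in effect via the
sys.flags.hash_randomization flag that these releases add alongside the -R
option (a small illustrative snippet only):

    import sys

    # sys.flags.hash_randomization is 1 when the interpreter was started with
    # -R (or with PYTHONHASHSEED=random in the environment), and 0 otherwise,
    # which is the default in these stable releases.
    enabled = bool(sys.flags.hash_randomization)
    print("hash randomization enabled: %s" % enabled)
    # With randomization on, this value changes from one interpreter run to
    # the next; with it off, it stays stable across runs.
    print("hash('python') == %d" % hash("python"))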

More details about the issue and the patch can be found in the oCERT advisory
[1] and the Python bug tracker [2].

Another related security issue fixed in these releases is in the expat XML
parsing library. expat had the same hash security issue detailed above as
Python's core types. The hashing algorithm used in the expat library is now
randomized.

A few other security issues were fixed. They are described on the release pages
below.

These releases are production releases.

Downloads are at

    http://python.org/download/releases/2.6.8/
    http://python.org/download/releases/2.7.3/
    http://python.org/download/releases/3.1.5/
    http://python.org/download/releases/3.2.3/

As always, please report bugs to

    http://bugs.python.org/

Happy-to-put-hash-attack-issues-behind-them-ly yours,
The Python release team
Barry Warsaw (2.6), Georg Brandl (3.2), and Benjamin Peterson (2.7 and 3.1)

[1] http://www.ocert.org/advisories/ocert-2011-003.html
[2] http://bugs.python.org/issue13703

From raymond.hettinger at gmail.com  Wed Apr 11 22:27:41 2012
From: raymond.hettinger at gmail.com (Raymond Hettinger)
Date: Wed, 11 Apr 2012 16:27:41 -0400
Subject: [Python-Dev] PEP 418 glossary
In-Reply-To: <CA+OGgf7icJZGLityn3d-ucQrM1xZJfSqxvnb98ODAwdO-w1wyw@mail.gmail.com>
References: <CA+OGgf7icJZGLityn3d-ucQrM1xZJfSqxvnb98ODAwdO-w1wyw@mail.gmail.com>
Message-ID: <308C4AFC-029A-4241-8EFB-7DCAB3233A20@gmail.com>


On Apr 11, 2012, at 2:49 AM, Jim Jewett wrote:

> I believe PEP 418 (or at least the discussion) would benefit greatly
> from a glossary to encourage people to use the same definitions. 

This sort of information is a good candidate for the HOW-TO section
of the docs.


Raymond

From victor.stinner at gmail.com  Thu Apr 12 01:09:58 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Thu, 12 Apr 2012 01:09:58 +0200
Subject: [Python-Dev] PEP 418 glossary
In-Reply-To: <CA+OGgf7icJZGLityn3d-ucQrM1xZJfSqxvnb98ODAwdO-w1wyw@mail.gmail.com>
References: <CA+OGgf7icJZGLityn3d-ucQrM1xZJfSqxvnb98ODAwdO-w1wyw@mail.gmail.com>
Message-ID: <CAMpsgwY2zBi2z2j+UAoqmcXK-vuPbq3nkpgMHewg=3w7DtW27g@mail.gmail.com>

2012/4/11 Jim Jewett <jimjjewett at gmail.com>:
> I believe PEP 418 (or at least the discussion) would benefit greatly
> from a glossary to encourage people to use the same definitions.  This
> is arguably the Definitions section, but it should move either near
> the end or (preferably) ahead of the Functions.  It also needs to be
> greatly expanded.

I integrated a simplified version of your Glossary into the PEP. Some changes:
 * a monotonic clock does not necessarily have high precision; on Windows,
the two properties are exclusive
 * I replaced "Clock Monotonic" with "Monotonic" and removed the
"Monotonic" term
 * I removed some questions

Victor

From roundup-admin at psf.upfronthosting.co.za  Thu Apr 12 02:41:06 2012
From: roundup-admin at psf.upfronthosting.co.za (Python tracker)
Date: Thu, 12 Apr 2012 00:41:06 +0000
Subject: [Python-Dev] Failed issue tracker submission
Message-ID: <20120412004106.66F9F1CBB2@psf.upfronthosting.co.za>


An unexpected error occurred during the processing
of your message. The tracker administrator is being
notified.

From tjreedy at udel.edu  Thu Apr 12 06:47:15 2012
From: tjreedy at udel.edu (Terry Reedy)
Date: Thu, 12 Apr 2012 00:47:15 -0400
Subject: [Python-Dev] [RELEASED] Python 2.6.8, 2.7.3, 3.1.5, and 3.2.3
In-Reply-To: <CAPZV6o9mY=_UybV_Rjj9xAYF1pRF_vNk9Uw_LE9JLWyQ=n512w@mail.gmail.com>
References: <CAPZV6o9mY=_UybV_Rjj9xAYF1pRF_vNk9Uw_LE9JLWyQ=n512w@mail.gmail.com>
Message-ID: <jm5mop$1cl$1@dough.gmane.org>

On 4/11/2012 3:37 PM, Benjamin Peterson wrote:

> Downloads are at
>
>      http://python.org/download/releases/2.6.8/
>      http://python.org/download/releases/2.7.3/

This page lists 'program databases' after the normal msi installers for 
Windows. I am puzzled and curious as to what those are, and I suspect 
other naive users will be too.

>      http://python.org/download/releases/3.1.5/
>      http://python.org/download/releases/3.2.3/

No such thing here.


-- 
Terry Jan Reedy


From g.brandl at gmx.net  Thu Apr 12 08:45:50 2012
From: g.brandl at gmx.net (Georg Brandl)
Date: Thu, 12 Apr 2012 08:45:50 +0200
Subject: [Python-Dev] [RELEASED] Python 2.6.8, 2.7.3, 3.1.5, and 3.2.3
In-Reply-To: <jm5mop$1cl$1@dough.gmane.org>
References: <CAPZV6o9mY=_UybV_Rjj9xAYF1pRF_vNk9Uw_LE9JLWyQ=n512w@mail.gmail.com>
	<jm5mop$1cl$1@dough.gmane.org>
Message-ID: <jm5tm9$b4a$1@dough.gmane.org>

On 12.04.2012 06:47, Terry Reedy wrote:
> On 4/11/2012 3:37 PM, Benjamin Peterson wrote:
>
>>  Downloads are at
>>
>>       http://python.org/download/releases/2.6.8/
>>       http://python.org/download/releases/2.7.3/
>
> This page lists 'program databases' after the normal msi installers for
> Windows. I am puzzled and curious as to what those are, and I suspect
> other naive users will be too.
>
>>       http://python.org/download/releases/3.1.5/
>>       http://python.org/download/releases/3.2.3/
>
> No such thing here.

Here they are called "Visual Studio debug information files".  I agree
that "program database", while it is the official name of the file and
the literal meaning of the file extension ".pdb", is not a very good
description.

Georg


From senthil at uthcode.com  Thu Apr 12 13:29:26 2012
From: senthil at uthcode.com (Senthil Kumaran)
Date: Thu, 12 Apr 2012 19:29:26 +0800
Subject: [Python-Dev] cpython: use assertWarns instead of check_warnings
 - Issue14341
In-Reply-To: <jm4m9l$cc5$1@dough.gmane.org>
References: <E1SHz7Y-0004v5-KQ@dinsdale.python.org>
	<jm4m9l$cc5$1@dough.gmane.org>
Message-ID: <20120412112926.GD2300@mathmagic>

On Wed, Apr 11, 2012 at 09:33:30PM +0200, Georg Brandl wrote:
> >+
> >+        with self.assertWarns(DeprecationWarning) as cm:
> >              req.add_data("data")
> 
> There's no need for adding the "as cm" if you don't need the cm object.

I overlooked that. Thanks for spotting it. I have corrected it.

-- 
Senthil

From kristjan at ccpgames.com  Thu Apr 12 15:49:43 2012
From: kristjan at ccpgames.com (Kristján Valur Jónsson)
Date: Thu, 12 Apr 2012 13:49:43 +0000
Subject: [Python-Dev] PEP 418 glossary
In-Reply-To: <CAMpsgwY2zBi2z2j+UAoqmcXK-vuPbq3nkpgMHewg=3w7DtW27g@mail.gmail.com>
References: <CA+OGgf7icJZGLityn3d-ucQrM1xZJfSqxvnb98ODAwdO-w1wyw@mail.gmail.com>
	<CAMpsgwY2zBi2z2j+UAoqmcXK-vuPbq3nkpgMHewg=3w7DtW27g@mail.gmail.com>
Message-ID: <EFE3877620384242A686D52278B7CCD338D843@RKV-IT-EXCH104.ccp.ad.local>



> -----Original Message-----
> From: python-dev-bounces+kristjan=ccpgames.com at python.org
> [mailto:python-dev-bounces+kristjan=ccpgames.com at python.org] On
> Behalf Of Victor Stinner
> Sent: 11. apríl 2012 23:10
> I integrated a simplified version of your Glossary into the PEP. Some changes:
>  * a monotonic clock does not necessarily have high precision; on Windows, the
> two properties are exclusive

By this I assume you mean that QueryPerformanceCounter is not monotonic, because it certainly is high precision.
I don't understand why you have come to this conclusion.  The fact that some older BIOSes or drivers may be buggy does not suffice to condemn the whole of Windows.

Wallclock:  This definition is wrong no matter how the BDFL feels about the word.  Please see http://en.wikipedia.org/wiki/Wall_clock_time.

K

From mark at hotpy.org  Thu Apr 12 16:25:33 2012
From: mark at hotpy.org (Mark Shannon)
Date: Thu, 12 Apr 2012 15:25:33 +0100
Subject: [Python-Dev] PEP 412 Key-Sharing Dictionary
Message-ID: <4F86E5DD.9000906@hotpy.org>

I would like to get the new shared-keys dictionary implementation
committed, or rejected or further reviewed, if necessary.
It seems to have got a bit stuck at the moment.

As far as I am concerned it is ready to go in.
Memory usage is reduced, speed is roughly unchanged, and it passes all
the tests (except for 1 test in test_pprint which relies on dict/set
ordering, see http://bugs.python.org/issue13907)

Cheers,
Mark.

From guido at python.org  Thu Apr 12 16:30:07 2012
From: guido at python.org (Guido van Rossum)
Date: Thu, 12 Apr 2012 07:30:07 -0700
Subject: [Python-Dev] PEP 412 Key-Sharing Dictionary
In-Reply-To: <4F86E5DD.9000906@hotpy.org>
References: <4F86E5DD.9000906@hotpy.org>
Message-ID: <CAP7+vJJjY5YUatWGatwT+ZJxj-nrLjajgnw=10bpYVdrqVhQ0A@mail.gmail.com>

Wow, I thought it was accepted already! I don't see the hangup.

On Thu, Apr 12, 2012 at 7:25 AM, Mark Shannon <mark at hotpy.org> wrote:
> I would like to get the new shared-keys dictionary implementation
> committed, or rejected or further reviewed, if necessary.
> It seems to have got a bit stuck at the moment.
>
> As far as I am concerned it is ready to go in.
> Memory usage is reduced, speed is roughly unchanged, and it passes all
> the tests (except for 1 test in test_pprint which relies on dict/set
> ordering, see http://bugs.python.org/issue13907)

-- 
--Guido van Rossum (python.org/~guido)

From rdmurray at bitdance.com  Thu Apr 12 16:54:19 2012
From: rdmurray at bitdance.com (R. David Murray)
Date: Thu, 12 Apr 2012 10:54:19 -0400
Subject: [Python-Dev] PEP 418 glossary
In-Reply-To: <EFE3877620384242A686D52278B7CCD338D843@RKV-IT-EXCH104.ccp.ad.local>
References: <CA+OGgf7icJZGLityn3d-ucQrM1xZJfSqxvnb98ODAwdO-w1wyw@mail.gmail.com>
	<CAMpsgwY2zBi2z2j+UAoqmcXK-vuPbq3nkpgMHewg=3w7DtW27g@mail.gmail.com>
	<EFE3877620384242A686D52278B7CCD338D843@RKV-IT-EXCH104.ccp.ad.local>
Message-ID: <20120412145419.D15F7250147@webabinitio.net>

On Thu, 12 Apr 2012 13:49:43 -0000, Kristján Valur Jónsson <kristjan at ccpgames.com> wrote:
> Wallclock:  This definition is wrong no matter how the BDFL feels about the word.  Please see http://en.wikipedia.org/wiki/Wall_clock_time.

I agree with the BDFL.  I have always heard "wallclock" as referring to
the clock on the wall (that's what the words mean, after all).

When this term became current that meant an *analog* clock that did not
automatically update for daylight savings time, so naturally if you
measure an interval using it, it is equivalent to "real time".

However, to my mind the implication of the term has always been that
the actual time value returned by a 'wallclock' function can be directly
mapped to the time shown on the clock on the wall (assuming the computer's
clock and the clock on the wall are synchronized, of course).

Heh.  Come to think of it, when I first encountered the term it was in
the context of one of the early IBM PCs running DOS, which means that
the computer clock *was* set to the same time as the wall clock.

Thus regardless of what Wikipedia thinks, I think in many people's
minds there is an inherent ambiguity in what the term means.   If you
use it to measure an interval, then I think most people would agree
automatically that it is equivalent to "real time".  But outside of
interval measurement, there is ambiguity.

So I think the definition in the PEP is correct.

--David

From solipsis at pitrou.net  Thu Apr 12 17:25:32 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Thu, 12 Apr 2012 17:25:32 +0200
Subject: [Python-Dev] PEP 412 Key-Sharing Dictionary
References: <4F86E5DD.9000906@hotpy.org>
	<CAP7+vJJjY5YUatWGatwT+ZJxj-nrLjajgnw=10bpYVdrqVhQ0A@mail.gmail.com>
Message-ID: <20120412172532.561e6599@pitrou.net>

On Thu, 12 Apr 2012 07:30:07 -0700
Guido van Rossum <guido at python.org> wrote:
> Wow, I thought it was accepted already! I don't see the hangup.

It's under review.
http://bugs.python.org/issue13903

Regards

Antoine.



From brett at python.org  Thu Apr 12 19:23:17 2012
From: brett at python.org (Brett Cannon)
Date: Thu, 12 Apr 2012 13:23:17 -0400
Subject: [Python-Dev] PEP 412 Key-Sharing Dictionary
In-Reply-To: <4F86E5DD.9000906@hotpy.org>
References: <4F86E5DD.9000906@hotpy.org>
Message-ID: <CAP1=2W7k1MoF7OgxdrYHiV6Fsum9stMj6jMqa2P9DASHr3ngiA@mail.gmail.com>

On Thu, Apr 12, 2012 at 10:25, Mark Shannon <mark at hotpy.org> wrote:

> I would like to get the new shared-keys dictionary implementation
> committed, or rejected or further reviewed, if necessary.
> It seems to have got a bit stuck at the moment.
>
> As far as I am concerned it is ready to go in.
> Memory usage is reduced, speed is roughly unchanged, and it passes all
> the tests (except for 1 test in test_pprint which relies on dict/set
> ordering, see http://bugs.python.org/issue13907)
>

The language summit result was that this should go in once the review is over
(as Antoine pointed out) and committed by someone other than Mark since it
is a complicated enough thing to require a full review.

And did you ever follow through on getting your commit privileges, Mark?
While you can't commit this patch you can definitely help with maintaining
it (and other things =).

From tjreedy at udel.edu  Thu Apr 12 20:45:59 2012
From: tjreedy at udel.edu (Terry Reedy)
Date: Thu, 12 Apr 2012 14:45:59 -0400
Subject: [Python-Dev] Fwd: Error in MD5 checksums of the 2.7.3 release page.
In-Reply-To: <CAC88Dfu2G1fBzihbNcvDtDLrALYS+Ek0M8b3D-KtU4T8h2pMZA@mail.gmail.com>
References: <CAC88Dfu2G1fBzihbNcvDtDLrALYS+Ek0M8b3D-KtU4T8h2pMZA@mail.gmail.com>
Message-ID: <jm77td$cq8$1@dough.gmane.org>

From: Jérémy Bethmont <jeremy.bethmont at gmail.com>
To: python-list at python.org
Newsgroups: gmane.comp.python.general

There is an error in the MD5 checksums section of the following page:
     http://python.org/download/releases/2.7.3/

Python-3.1.5.tgz, Python-3.1.5.tar.bz2 and Python-3.1.5.tar.xz
are listed instead of:
Python-2.7.3.tgz, Python-2.7.3.tar.bz2 and Python-2.7.3.tar.xz
---
Error verified

Terry Jan Reedy



From solipsis at pitrou.net  Fri Apr 13 15:27:24 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Fri, 13 Apr 2012 15:27:24 +0200
Subject: [Python-Dev] PEP 418 glossary
References: <CA+OGgf7icJZGLityn3d-ucQrM1xZJfSqxvnb98ODAwdO-w1wyw@mail.gmail.com>
Message-ID: <20120413152724.0b7b68ed@pitrou.net>

On Wed, 11 Apr 2012 02:49:41 -0400
Jim Jewett <jimjjewett at gmail.com> wrote:
> 
> Accuracy:
>     Is the answer correct?  Any clock will eventually <drift>; if a
> clock is intended to match <Civil Time>, it will need to be <adjusted>
> back to the "true" time.

You may also point to
http://en.wikipedia.org/wiki/Accuracy_and_precision

We are probably interested in precision more than in accuracy, as far
as this PEP is concerned.

Regards

Antoine.



From solipsis at pitrou.net  Fri Apr 13 15:38:39 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Fri, 13 Apr 2012 15:38:39 +0200
Subject: [Python-Dev] A couple of PEP 418 comments
Message-ID: <20120413153839.32d3036f@pitrou.net>


Hello,

I'm just starting a new thread since the old ones are so crowded.
First, overall I think the PEP is starting to look really good and
insightful! (congratulations to Victor)

I have a couple of comments, mostly small ones:

> "function" (str): name of the underlying operating system function.

I think "implementation" is a better name here (more precise, and
perhaps also more accurate :-)).

> time.monotonic()
> time.perf_counter()
> time.process_time()

The descriptions should really stress the scope of the result's
validity. My guess (or wish :-)) would be:

- time.monotonic(): system-wide results, comparable from one process to
  another
- time.perf_counter(): process-wide results, comparable from one thread
  to another (?)
- time.process_time(): process-wide, by definition

It would also be nice to know if some systems may be unable to
implement time.monotonic().

> GetTickCount() has a precision of 55 ms on Windows 9x.

Do we care? :) Precision under recent Windows variants (XP or later)
would be more useful.

Is there a designated dictator for this PEP?

Regards

Antoine.



From status at bugs.python.org  Fri Apr 13 18:07:16 2012
From: status at bugs.python.org (Python tracker)
Date: Fri, 13 Apr 2012 18:07:16 +0200 (CEST)
Subject: [Python-Dev] Summary of Python tracker Issues
Message-ID: <20120413160716.595621CBBF@psf.upfronthosting.co.za>


ACTIVITY SUMMARY (2012-04-06 - 2012-04-13)
Python tracker at http://bugs.python.org/

To view or respond to any of the issues listed below, click on the issue.
Do NOT respond to this message.

Issues counts and deltas:
  open    3377 (+17)
  closed 22971 (+36)
  total  26348 (+53)

Open issues with patches: 1436 


Issues opened (36)
==================

#11501: distutils.archive_util should handle absence of zlib module
http://bugs.python.org/issue11501  reopened by eric.araujo

#14310: Socket duplication for windows
http://bugs.python.org/issue14310  reopened by pitrou

#14518: Add bcrypt $2a$ to crypt.py
http://bugs.python.org/issue14518  opened by dholth

#14519: In re's examples the example with scanf() contains wrong analo
http://bugs.python.org/issue14519  opened by py.user

#14521: math.copysign(1., float('nan')) returns -1.
http://bugs.python.org/issue14521  opened by mattip

#14525: ia64-hp-hpux11.31 won't compile Python-2.6.8rc2 without -D_TER
http://bugs.python.org/issue14525  opened by pda

#14527: How to link with an external libffi?
http://bugs.python.org/issue14527  opened by pda

#14529: distutils's build_msi command ignores the data_files argument
http://bugs.python.org/issue14529  opened by Mario.Vilas

#14530: distutils's build_wininst command fails to correctly interpret
http://bugs.python.org/issue14530  opened by Mario.Vilas

#14531: Backtrace should not attempt to open <stdin> file
http://bugs.python.org/issue14531  opened by ezyang

#14532: multiprocessing module performs a time-dependent hmac comparis
http://bugs.python.org/issue14532  opened by Jon.Oberheide

#14534: Add method to mark unittest.TestCases as "do not run".
http://bugs.python.org/issue14534  opened by r.david.murray

#14535: three code examples in docs are not syntax highlighted
http://bugs.python.org/issue14535  opened by ramchandra.apte

#14537: "Fatal Python error: Cannot recover from stack overflow."  wit
http://bugs.python.org/issue14537  opened by Aaron.Meurer

#14538: HTMLParser: parsing error
http://bugs.python.org/issue14538  opened by Michel.Leunen

#14540: Crash in Modules/_ctypes/libffi/src/dlmalloc.c on ia64-hp-hpux
http://bugs.python.org/issue14540  opened by pda

#14543: Upgrade OpenSSL on Windows to 0.9.8u
http://bugs.python.org/issue14543  opened by dino.viehland

#14544: Limit "global" keyword name conflicts in language spec to thos
http://bugs.python.org/issue14544  opened by ncoghlan

#14546: lll.py can't handle multiple parameters correctly
http://bugs.python.org/issue14546  opened by carton

#14547: Python symlink to script behaves unexpectedly
http://bugs.python.org/issue14547  opened by j13r

#14548: garbage collection just after multiprocessing's fork causes ex
http://bugs.python.org/issue14548  opened by sbt

#14549: Recursive inclusion of packages
http://bugs.python.org/issue14549  opened by eric.araujo

#14550: os.path.abspath() should have an option to use PWD
http://bugs.python.org/issue14550  opened by csawyer-yumaed

#14554: test module: correction
http://bugs.python.org/issue14554  opened by tshepang

#14555: clock_gettime/settime/getres: Add more clock identifiers
http://bugs.python.org/issue14555  opened by haypo

#14556: telnetlib Telnet.expect fails with timeout=0
http://bugs.python.org/issue14556  opened by Joel.Lovinger

#14558: Documentation for unittest.main does not describe some keyword
http://bugs.python.org/issue14558  opened by jfinkels

#14559: (2.7.3 Regression)  PC\8.0 directory can no longer be used to 
http://bugs.python.org/issue14559  opened by mitchblank

#14561: python-2.7.2-r3 suffers test failure at test_mhlib
http://bugs.python.org/issue14561  opened by idella5

#14562: urllib2 maybe blocks too long
http://bugs.python.org/issue14562  opened by Anrs.Hu

#14563: Segmentation fault on ctypes.Structure subclass with byte stri
http://bugs.python.org/issue14563  opened by aliles

#14565: is_cgi doesn't function as documented for cgi_directories
http://bugs.python.org/issue14565  opened by v+python

#14566: run_cgi reverts to using unnormalized path
http://bugs.python.org/issue14566  opened by v+python

#14567: http/server.py query string handling incorrect, inefficient
http://bugs.python.org/issue14567  opened by v+python

#14568: HP-UX local libraries not included
http://bugs.python.org/issue14568  opened by adiroiban

#14570: Document json "sort_keys" parameter properly
http://bugs.python.org/issue14570  opened by ncoghlan



Most recent 15 issues with no replies (15)
==========================================

#14570: Document json "sort_keys" parameter properly
http://bugs.python.org/issue14570

#14566: run_cgi reverts to using unnormalized path
http://bugs.python.org/issue14566

#14561: python-2.7.2-r3 suffers test failure at test_mhlib
http://bugs.python.org/issue14561

#14558: Documentation for unittest.main does not describe some keyword
http://bugs.python.org/issue14558

#14535: three code examples in docs are not syntax highlighted
http://bugs.python.org/issue14535

#14530: distutils's build_wininst command fails to correctly interpret
http://bugs.python.org/issue14530

#14529: distutils's build_msi command ignores the data_files argument
http://bugs.python.org/issue14529

#14517: Recompilation of sources with Distutils
http://bugs.python.org/issue14517

#14504: Suggestion to improve argparse's help messages for "store_cons
http://bugs.python.org/issue14504

#14499: Extension module builds fail with Xcode 4.3 on OS X 10.7 due t
http://bugs.python.org/issue14499

#14494: __future__.py and its documentation claim absolute imports bec
http://bugs.python.org/issue14494

#14483: inspect.getsource fails to read a file of only comments
http://bugs.python.org/issue14483

#14477: Rietveld test issue
http://bugs.python.org/issue14477

#14462: In re's named group the name cannot contain unicode characters
http://bugs.python.org/issue14462

#14461: In re's positive lookbehind assertion documentation match() ca
http://bugs.python.org/issue14461



Most recent 15 issues waiting for review (15)
=============================================

#14568: HP-UX local libraries not included
http://bugs.python.org/issue14568

#14555: clock_gettime/settime/getres: Add more clock identifiers
http://bugs.python.org/issue14555

#14554: test module: correction
http://bugs.python.org/issue14554

#14548: garbage collection just after multiprocessing's fork causes ex
http://bugs.python.org/issue14548

#14546: lll.py can't handle multiple parameters correctly
http://bugs.python.org/issue14546

#14538: HTMLParser: parsing error
http://bugs.python.org/issue14538

#14537: "Fatal Python error: Cannot recover from stack overflow."  wit
http://bugs.python.org/issue14537

#14532: multiprocessing module performs a time-dependent hmac comparis
http://bugs.python.org/issue14532

#14521: math.copysign(1., float('nan')) returns -1.
http://bugs.python.org/issue14521

#14516: test_tools assumes BUILDDIR=SRCDIR
http://bugs.python.org/issue14516

#14515: tempfile.TemporaryDirectory documented as returning object but
http://bugs.python.org/issue14515

#14494: __future__.py and its documentation claim absolute imports bec
http://bugs.python.org/issue14494

#14478: Decimal hashing very slow, could be cached
http://bugs.python.org/issue14478

#14477: Rietveld test issue
http://bugs.python.org/issue14477

#14472: .gitignore is outdated
http://bugs.python.org/issue14472



Top 10 most discussed issues (10)
=================================

#14532: multiprocessing module performs a time-dependent hmac comparis
http://bugs.python.org/issue14532  21 msgs

#8799: Hang in lib/test/test_threading.py
http://bugs.python.org/issue8799  19 msgs

#14521: math.copysign(1., float('nan')) returns -1.
http://bugs.python.org/issue14521  19 msgs

#4892: Sending Connection-objects over multiprocessing connections fa
http://bugs.python.org/issue4892  17 msgs

#14478: Decimal hashing very slow, could be cached
http://bugs.python.org/issue14478  16 msgs

#14310: Socket duplication for windows
http://bugs.python.org/issue14310  13 msgs

#14423: Getting the starting date of iso week from a week number and a
http://bugs.python.org/issue14423  13 msgs

#14538: HTMLParser: parsing error
http://bugs.python.org/issue14538  12 msgs

#14548: garbage collection just after multiprocessing's fork causes ex
http://bugs.python.org/issue14548  10 msgs

#9141: Allow objects to decide if they can be collected by GC
http://bugs.python.org/issue9141   9 msgs



Issues closed (35)
==================

#7978: SocketServer doesn't handle syscall interruption
http://bugs.python.org/issue7978  closed by pitrou

#12537: mailbox's _become_message is very fragile
http://bugs.python.org/issue12537  closed by r.david.murray

#13165: Integrate stringbench in the Tools directory
http://bugs.python.org/issue13165  closed by pitrou

#13708: Document ctypes.wintypes
http://bugs.python.org/issue13708  closed by ramchandra.apte

#14222: Use time.steady() to implement timeout
http://bugs.python.org/issue14222  closed by rhettinger

#14288: Make iterators pickleable
http://bugs.python.org/issue14288  closed by kristjan.jonsson

#14399: zipfile and creat/update comment
http://bugs.python.org/issue14399  closed by r.david.murray

#14412: Sqlite Integer Fields
http://bugs.python.org/issue14412  closed by benjamin.peterson

#14444: Virtualenv not portable from Python 2.7.2 to 2.7.3 (os.urandom
http://bugs.python.org/issue14444  closed by benjamin.peterson

#14488: Can't install Python2.7.2
http://bugs.python.org/issue14488  closed by kiwii128

#14500: test_importlib fails in refleak mode
http://bugs.python.org/issue14500  closed by brett.cannon

#14508: gprof2html is broken
http://bugs.python.org/issue14508  closed by r.david.murray

#14509: Build failures in non-pydebug builds without NDEBUG.
http://bugs.python.org/issue14509  closed by python-dev

#14511: _static/opensearch.xml for Python 3.2 docs directs searches to
http://bugs.python.org/issue14511  closed by python-dev

#14514: Equivalent to tempfile.NamedTemporaryFile that deletes file at
http://bugs.python.org/issue14514  closed by ncoghlan

#14520: Buggy Decimal.__sizeof__
http://bugs.python.org/issue14520  closed by skrah

#14522: Avoid using DuplicateHandle() on sockets in multiprocessing.co
http://bugs.python.org/issue14522  closed by pitrou

#14523: IDLE's subprocess startup error
http://bugs.python.org/issue14523  closed by r.david.murray

#14524: Python-2.7.3rc2/Modules/_ctypes/libffi/src/dlmalloc.c won't co
http://bugs.python.org/issue14524  closed by pda

#14526: Python-2.6.8rc2 test never finishes ia64-hp-hpux11.31
http://bugs.python.org/issue14526  closed by pda

#14528: Document whether strings implement __iter__
http://bugs.python.org/issue14528  closed by rhettinger

#14533: Modify regrtest to make test_main optional
http://bugs.python.org/issue14533  closed by r.david.murray

#14536: Invalid links in svn.python.org
http://bugs.python.org/issue14536  closed by pitrou

#14539: logging module: logger does not print log message with logging
http://bugs.python.org/issue14539  closed by vinay.sajip

#14541: test_sndhdr fails when run from an installation
http://bugs.python.org/issue14541  closed by vinay.sajip

#14542: reverse() doesn't reverse sort correctly
http://bugs.python.org/issue14542  closed by eric.smith

#14545: html module should not be available in Python 3.1
http://bugs.python.org/issue14545  closed by python-dev

#14551: imp.load_source docs removed from python3 docs...is this corre
http://bugs.python.org/issue14551  closed by r.david.murray

#14552: test module: remove repetition
http://bugs.python.org/issue14552  closed by r.david.murray

#14553: http.server module: grammar fix
http://bugs.python.org/issue14553  closed by r.david.murray

#14557: HP-UX libraries not included
http://bugs.python.org/issue14557  closed by neologix

#14560: urllib2 cannot make POST with utf-8 content
http://bugs.python.org/issue14560  closed by ????????????.??

#14564: Error running: ( echo 'import os'; echo 'help(os)'; )| python 
http://bugs.python.org/issue14564  closed by neologix

#14569: pystate.c #ifdef ordering problem
http://bugs.python.org/issue14569  closed by python-dev

#1559549: ImportError needs attributes for module and file name
http://bugs.python.org/issue1559549  closed by brett.cannon

From victor.stinner at gmail.com  Fri Apr 13 18:29:10 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Fri, 13 Apr 2012 18:29:10 +0200
Subject: [Python-Dev] A couple of PEP 418 comments
In-Reply-To: <20120413153839.32d3036f@pitrou.net>
References: <20120413153839.32d3036f@pitrou.net>
Message-ID: <CAMpsgwaxx4gLQDN4W5yjPub-M7sbxd8J5thF37=ZvWttz37Gyg@mail.gmail.com>

> The descriptions should really stress the scope of the result's
> validity. My guess (or wish :-)) would be:
>
> - time.monotonic(): system-wide results, comparable from one process to
>   another
> - time.perf_counter(): process-wide results, comparable from one thread
>   to another (?)
> - time.process_time(): process-wide, by definition

time.monotonic() and time.perf_counter() are process-wide on Windows
older than Vista because of GetTickCount() overflow; on other OSes,
they are system-wide.

> It would also be nice to know if some systems may be unable to
> implement time.monotonic().

You can find such information in the following section:
http://www.python.org/dev/peps/pep-0418/#clock-monotonic-clock-monotonic-raw-clock-boottime

All OSes provide a monotonic clock, except GNU/Hurd. You mean that it
should be mentioned in the time.monotonic() section?

>> GetTickCount() has a precision of 55 ms on Windows 9x.
>
> Do we care? :) Precision under recent Windows variants (XP or later)
> would be more useful.

You can get the precision on Windows Seven in the following table:
http://www.python.org/dev/peps/pep-0418/#monotonic-clocks

I will move the Windows 9x monotonic clock precision info into this table.

Victor

From brian at python.org  Fri Apr 13 18:37:53 2012
From: brian at python.org (Brian Curtin)
Date: Fri, 13 Apr 2012 11:37:53 -0500
Subject: [Python-Dev] A couple of PEP 418 comments
In-Reply-To: <CAMpsgwaxx4gLQDN4W5yjPub-M7sbxd8J5thF37=ZvWttz37Gyg@mail.gmail.com>
References: <20120413153839.32d3036f@pitrou.net>
	<CAMpsgwaxx4gLQDN4W5yjPub-M7sbxd8J5thF37=ZvWttz37Gyg@mail.gmail.com>
Message-ID: <CAD+XWwrTptWY08_4z6BAdXrMPFbvukkkODU=bqkQc7k3Jy1f=Q@mail.gmail.com>

On Fri, Apr 13, 2012 at 11:29, Victor Stinner
> I will move the Windows 9x monotonic clock precision info into this table.

I would just remove it entirely. It's not relevant since it's not supported.

From techtonik at gmail.com  Fri Apr 13 20:23:12 2012
From: techtonik at gmail.com (anatoly techtonik)
Date: Fri, 13 Apr 2012 21:23:12 +0300
Subject: [Python-Dev] Security issue with the tracker
Message-ID: <CAPkN8xK36QfLpd6XN845YdPQ-aA_g-fOh+vWNW2jK-B8Lne4Bg@mail.gmail.com>

Are there any good small Python libraries for making HTML safe out there?

http://goo.gl/D6ag1

Just to make sure that devs are aware of the problem, which was
reported more than 6 months ago, so that it gains some traction and a fix
is released sooner. I am not sure what you can do with a stolen
bugs.python.org cookie, as everything seems audited, but it is a good
precedent for a grant on Roundup security research.

Have a nice weekend.
--
anatoly t.

From techtonik at gmail.com  Fri Apr 13 20:24:33 2012
From: techtonik at gmail.com (anatoly techtonik)
Date: Fri, 13 Apr 2012 21:24:33 +0300
Subject: [Python-Dev] Security issue with the tracker
In-Reply-To: <CAPkN8xK36QfLpd6XN845YdPQ-aA_g-fOh+vWNW2jK-B8Lne4Bg@mail.gmail.com>
References: <CAPkN8xK36QfLpd6XN845YdPQ-aA_g-fOh+vWNW2jK-B8Lne4Bg@mail.gmail.com>
Message-ID: <CAPkN8xKSoVLgiKZSL_9kn6uLzJ3SqjC463p6nov9ye_3dZCsbA@mail.gmail.com>

On Fri, Apr 13, 2012 at 9:23 PM, anatoly techtonik <techtonik at gmail.com> wrote:
> Are there any good small Python libraries for making HTML safe out there?
>
> http://goo.gl/D6ag1
>
> Just to make sure that devs are aware of the problem, which was
> reported more than 6 months ago, so that it gains some traction and a fix
> is released sooner. I am not sure what you can do with a stolen
> bugs.python.org cookie, as everything seems audited, but it is a good
> precedent for a grant on Roundup security research.
>
> Have a nice weekend.

Link to security report if you can help
http://issues.roundup-tracker.org/issue2550724
--
anatoly t.

From eric at netwok.org  Fri Apr 13 20:53:51 2012
From: eric at netwok.org (Éric Araujo)
Date: Fri, 13 Apr 2012 14:53:51 -0400
Subject: [Python-Dev] Security issue with the tracker
In-Reply-To: <CAPkN8xKSoVLgiKZSL_9kn6uLzJ3SqjC463p6nov9ye_3dZCsbA@mail.gmail.com>
References: <CAPkN8xK36QfLpd6XN845YdPQ-aA_g-fOh+vWNW2jK-B8Lne4Bg@mail.gmail.com>
	<CAPkN8xKSoVLgiKZSL_9kn6uLzJ3SqjC463p6nov9ye_3dZCsbA@mail.gmail.com>
Message-ID: <4F88763F.707@netwok.org>

bugs.python.org already sanitizes the ok_message and Ezio already posted 
a patch to the upstream bug tracker, so I don't see what else we could do.

Also note that the Firefox extension NoScript blocks the XSS in this case.

Regards

From solipsis at pitrou.net  Fri Apr 13 23:24:27 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Fri, 13 Apr 2012 23:24:27 +0200
Subject: [Python-Dev] A couple of PEP 418 comments
References: <20120413153839.32d3036f@pitrou.net>
	<CAMpsgwaxx4gLQDN4W5yjPub-M7sbxd8J5thF37=ZvWttz37Gyg@mail.gmail.com>
Message-ID: <20120413232427.073295e7@pitrou.net>

On Fri, 13 Apr 2012 18:29:10 +0200
Victor Stinner <victor.stinner at gmail.com> wrote:
> > The descriptions should really stress the scope of the result's
> > validity. My guess (or wish :-)) would be:
> >
> > - time.monotonic(): system-wide results, comparable from one process to
> >   another
> > - time.perf_counter(): process-wide results, comparable from one thread
> >   to another (?)
> > - time.process_time(): process-wide, by definition
> 
> time.monotonic() and time.perf_counter() are process-wide on Windows
> older than Vista because of GetTickCount() overflow; on other OSes,
> they are system-wide.

Perhaps, but you should say so in the PEP, not here ;-)
By the way, I wonder if it may be a problem if monotonic() is
process-wide under Windows.

> All OSes provide a monotonic clock, except GNU/Hurd. You mean that it
> should be mentioned in the time.monotonic() section?

Yes, that would be clearer.

Regards

Antoine.



From victor.stinner at gmail.com  Sat Apr 14 01:36:17 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Sat, 14 Apr 2012 01:36:17 +0200
Subject: [Python-Dev] PEP 418 glossary
In-Reply-To: <20120412145419.D15F7250147@webabinitio.net>
References: <CA+OGgf7icJZGLityn3d-ucQrM1xZJfSqxvnb98ODAwdO-w1wyw@mail.gmail.com>
	<CAMpsgwY2zBi2z2j+UAoqmcXK-vuPbq3nkpgMHewg=3w7DtW27g@mail.gmail.com>
	<EFE3877620384242A686D52278B7CCD338D843@RKV-IT-EXCH104.ccp.ad.local>
	<20120412145419.D15F7250147@webabinitio.net>
Message-ID: <CAMpsgwbmDJmX=kiS9W5A0yLkioSa5PxtwmLXK8Ph8PLOEGGNWA@mail.gmail.com>

By the way, I am hesitating over whether to add a new mandatory key to
time.get_clock_info() which indicates whether the clock includes time
elapsed during a sleep. Is "is_realtime" a good name for such a flag?
Examples:

time.get_clock_info('time')['is_realtime'] == True
time.get_clock_info('monotonic')['is_realtime'] == True
time.get_clock_info('process_time')['is_realtime'] == False
time.get_clock_info('clock')['is_realtime'] == True on Windows, False on Unix
time.get_clock_info('perf_counter')['is_realtime'] == True on Windows
and GNU/Hurd (which will use gettimeofday()), False on Unix. It may
vary depending on which clocks are available.

Another candidate for a new optional key is a flag indicating whether the
clock includes time elapsed during a system suspend. I don't know what to
call such a key; let's call it "include_suspend" for now. Examples:

time.get_clock_info('time')['include_suspend'] == True
time.get_clock_info('monotonic')['include_suspend'] == True on
Windows, False on Mac OS X, Linux and FreeBSD
time.get_clock_info('perf_counter')['include_suspend'] == False on Mac
OS X, Linux and FreeBSD. It is not set on Windows, until someone tells
me how QueryPerformanceCounter() behaves on suspend :-)
time.get_clock_info('process_time')['include_suspend'] == ??? (not set?)
time.get_clock_info('clock')['include_suspend'] == ??? (not set?)
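
A minimal sketch of how user code might consume these flags, assuming the
dict-based API and the proposed key names above (none of this is final):

    import time

    info = time.get_clock_info('monotonic')
    # Both keys are only proposals and may be absent, so read them defensively.
    if info.get('is_realtime'):
        print('includes time elapsed while the process sleeps')
    if info.get('include_suspend') is False:
        print('does not include time elapsed during a system suspend')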

Victor

2012/4/12 R. David Murray <rdmurray at bitdance.com>:
> On Thu, 12 Apr 2012 13:49:43 -0000, =?utf-8?B?S3Jpc3Rqw6FuIFZhbHVyIErDs25zc29u?= <kristjan at ccpgames.com> wrote:
>> Wallclock:  This definition is wrong no matter how the BDFL feels about the word.  Please see http://en.wikipedia.org/wiki/Wall_clock_time.
>
> I agree with the BDFL.  I have always heard "wallclock" as referring to
> the clock on the wall (that's what the words mean, after all).
>
> When this term became current, that meant an *analog* clock that did not
> automatically update for daylight saving time, so naturally if you
> measure an interval using it, the result is equivalent to "real time".
>
> However, to my mind the implication of the term has always been that
> the actual time value returned by a 'wallclock' function can be directly
> mapped to the time shown on the clock on the wall (assuming the computer's
> clock and the clock on the wall are synchronized, of course).
>
> Heh.  Come to think of it, when I first encountered the term it was in
> the context of one of the early IBM PCs running DOS, which means that
> the computer clock *was* set to the same time as the wall clock.
>
> Thus regardless of what Wikipedia thinks, I think in many people's
> minds there is an inherent ambiguity in what the term means.  If you
> use it to measure an interval, then I think most people would agree
> automatically that it is equivalent to "real time".  But outside of
> interval measurement, there is ambiguity.
>
> So I think the definition in the PEP is correct.
>
> --David

From rdmurray at bitdance.com  Sat Apr 14 02:14:55 2012
From: rdmurray at bitdance.com (R. David Murray)
Date: Fri, 13 Apr 2012 20:14:55 -0400
Subject: [Python-Dev] tracker searches fixed
Message-ID: <20120414001455.8F73725060A@webabinitio.net>

For those of you who had noticed that since the upgrade the tracker
search hasn't been returning a complete set of hits on typical searches,
this should now be fixed.

--David

From victor.stinner at gmail.com  Sat Apr 14 02:51:09 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Sat, 14 Apr 2012 02:51:09 +0200
Subject: [Python-Dev] Questions for the PEP 418: monotonic vs steady,
	is_adjusted
Message-ID: <CAMpsgwY=w4rpCz=Q9jfCffH1Ht3S_HC806A8exttwwSto2k8+A@mail.gmail.com>

Hi,

Before posting a first draft of the PEP 418 to python-dev, I have some
questions.

== Naming: time.monotonic() or time.steady()? ==

I like the "steady" name but different people complained that the
steady name should not be used if the function falls back to the
system clock or if the clock is adjusted.

time.monotonic() does not fall back to the system clock anymore; it is
now always monotonic.

There is only one clock used by time.monotonic() which is adjusted:
CLOCK_MONOTONIC on Linux. On Linux, CLOCK_MONOTONIC is slewed by NTP,
but not stepped. From the user's point of view, the clock *is* steady.
IMO CLOCK_MONOTONIC_RAW is less steady than CLOCK_MONOTONIC.
CLOCK_MONOTONIC_RAW does drift from the real time, whereas NTP adjusts
CLOCK_MONOTONIC to make it follow the real time more closely. (I mean
"real time" as defined in the Glossary of the PEP, not "civil time".)

I prefer "steady" over "monotonic" because the steady property is what
users really expect from a "monotonic" clock. A monotonic but not
steady clock may be useless.

All clocks used by the time.monotonic() of the PEP *are* steady.
time.monotonic() should be the most steady clock of all available
clocks. It may not have the best precision; use time.perf_counter() if
you need the highest available precision but don't care whether the
clock is steady.


== "is_adjusted" key of time.get_clock_info() ==

time.get_clock_info() returns a dict with an optional key:
"is_adjusted". This flag indicates "if the clock *can be* adjusted".

Should it be called "is_adjustable" instead? On Windows, the flag
value may change at runtime when NTP is enabled or disabled. So the
value is the current status of the clock adjustment. The description
may be changed to "if the clock *is* adjusted".

Is a single flag enough? Or would it be better to indicate whether the
clock is only slewed, slewed *and* stepped, or not adjusted (3 possible
values)? I guess that a single flag is enough; if you need more precise
information, use the "implementation" information.
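
As an illustration (dict access as described in this thread, and key names
that are only proposals), a caller might combine the single flag with the
"implementation" information like this:

    import time

    info = time.get_clock_info('monotonic')
    if info.get('is_adjusted'):
        # The single flag only says "subject to adjustment"; look at the
        # backing clock name to distinguish slewing from stepping.
        print('adjusted clock, implementation:', info.get('implementation'))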

Victor

From stephen at xemacs.org  Sat Apr 14 07:41:55 2012
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Sat, 14 Apr 2012 14:41:55 +0900
Subject: [Python-Dev] Questions for the PEP 418: monotonic vs steady,
	is_adjusted
In-Reply-To: <CAMpsgwY=w4rpCz=Q9jfCffH1Ht3S_HC806A8exttwwSto2k8+A@mail.gmail.com>
References: <CAMpsgwY=w4rpCz=Q9jfCffH1Ht3S_HC806A8exttwwSto2k8+A@mail.gmail.com>
Message-ID: <CAL_0O19nmi0+zB+tV8poZDAffNdTnohxo9y5dbw+E2q=9rX9YA@mail.gmail.com>

Executive summary:

On naming, how about "CLOCK_METRONOMIC"?  Also, "is_adjusted" is
better, until the API is expanded to provide "when and how much"
information about past adjustments.

On the glossary, (1) precision, accuracy, and resolution mean
different things for "points in time" and for "durations"; (2) the
definitions of precision and resolution in the glossary still do not
agree with Wikipedia.  ("Wikipedia is wrong" is of course an
acceptable answer, but if so the fact that authorities differ on the
definitions should be mentioned in the glossary.)

Proposed definitions:

Accuracy: A clock is accurate if it gives the same results as a
different accurate clock under the same conditions.  Accuracy is
measured by the size of the error (compared to physical time).  Since
error magnitudes will differ, it makes sense to speak of "worst-case
accuracy" and "average accuracy" (the latter will usually be computed
as root mean square error).  A clock can be accurate in measuring
duration even though it is not accurate in measuring the point in
time. [It's hard to see how the opposite could be true.]
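
As a tiny numeric illustration of "worst-case" versus "average" (RMS)
accuracy, using made-up clock errors in seconds:

    errors = [0.002, -0.001, 0.004, -0.003]   # hypothetical (measured - true)
    worst_case = max(abs(e) for e in errors)                  # 0.004
    rms = (sum(e * e for e in errors) / len(errors)) ** 0.5   # about 0.00274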

Precision: A clock is precise if it gives the same results in the same
conditions.  It's hard to imagine how a computer clock could be
imprecise in reporting points in time [perhaps across threads?], but
the same duration measured starting at different points in time could
easily be different (eg, due to clock slew), and thus imprecise.
Precision is measured by the size of the difference (in physical time)
between measurements of the same point in, or duration of, time by the
clock.

Clocks need not be accurate or precise for both points in time and
durations; they may be good for one but not the other.

Resolution: The resolution of a clock is the shortest duration in
physical time that will cause the clock to report a different value.
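
A rough empirical sketch of that definition, estimating the smallest
observable step of a clock (using time.time() only because the new clocks
discussed here do not exist yet; this merely bounds the true resolution
from above):

    import time

    def observed_resolution(clock=time.time, samples=100):
        # Smallest nonzero difference seen between consecutive readings.
        smallest = float('inf')
        for _ in range(samples):
            t1 = clock()
            t2 = clock()
            while t2 == t1:          # spin until the clock reports a new value
                t2 = clock()
            smallest = min(smallest, t2 - t1)
        return smallest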

On Sat, Apr 14, 2012 at 9:51 AM, Victor Stinner
<victor.stinner at gmail.com> wrote:

> == Naming: time.monotonic() or time.steady()? ==
>
> I like the "steady" name but different people complained that the
> steady name should not be used if the function falls back to the
> system clock or if the clock is adjusted.

Unfortunately, both names suck because they mean different things to
different people.  +1 for the PEP author (you) deciding.

FWIW, I would use CLOCK_MONOTONIC on Linux, and the name "monotonic".
It is not accurate (to physical time in seconds), but it's probably
the highest precision for *both* points in time and duration.  See below
for why not "steady".

It occurs to me that a *metronome* is an example of what we would
think of as a "steady" tick (but not a clock in the sense that the
metronome doesn't tell how many ticks).  Since "clock" already implies
the counting, how about CLOCK_METRONOMIC to indicate a clock that
ticks with a steady beat?  (Unfortunately, it's somewhat awkward to
pronounce, easily confused with "monotonic", and unfamiliar: maybe
only musicians will have intuition for it.  WDYT?)

> There is only one clock used by time.monotonic() which is adjusted:
> CLOCK_MONOTONIC on Linux. On Linux, CLOCK_MONOTONIC is slewed by NTP,
> but not stepped. From the user point of view, the clock *is* steady.

I don't think so (see below).  The question is, is it steady *enough*?
No clock is perfectly steady; we've already agreed on that.  It would be
nice if time.get_clock_info() reported "accuracy" (including any
inaccuracy due to NTP clock slewing and the like) as well as
resolution and precision.  That would be optional.

By the way, I still think the glossary has precision and resolution
defined incorrectly.  Several sources on the web define "precision" to
mean "degree of repeatability under identical physical conditions".
Resolution is defined as "the smallest change in physical conditions
that produces a change in the measured value".

Thus a clock reporting in nanoseconds such that different threads
calling clock() "simultaneously" get a difference of a multiple of
1000 nanoseconds has infinite precision (because if they're actually
simultaneous the difference will be zero) but microsecond resolution.

The fact that a clock reports values denominated in nanoseconds is
mostly irrelevant to the definitions used in measurement terminology,
that's an algorithmic consideration.  (Of course if the nanosecond
values are integral, then picosecond resolution is impossible and
picosecond precision is equivalent to infinite precision.  But if the
values are floats, picosecond precision and resolution both make sense
as fractions of a nanosecond.)

> IMO CLOCK_MONOTONIC_RAW is less steady than CLOCK_MONOTONIC.

I disagree.  If the drift is consistent (eg, exactly +1 second per
day), then the _RAW version is steadier.  The point of a steady clock
is not that its nominal second approximates a second of real time,
it's that the nominal second is always the same length of time.  The
unit of time of a clock that is being slewed differs from its unit of time
"normally", and this is not steady.

> CLOCK_MONOTONIC_RAW does drift from the real time, whereas NTP adjusts
> CLOCK_MONOTONIC to make it follow the real time more closely. (I mean
> "real time" as defined in the Glossary of the PEP, not "civil time".)
>
> I prefer "steady" over "monotonic" because the steady property is what
> users really expect from a "monotonic" clock.

Not the users who defined "monotonic" in C++ though; they decided that
what they expected from a monotonic clock was mathematical
monotonicity, and therefore changed the name.

> All clocks used by the time.monotonic() of the PEP *are* steady.

Up to the accuracy you care about, yes, but on Linux CLOCK_MONOTONIC
is presumably less steady (ie, less precise, though more accurate)
than CLOCK_MONOTONIC_RAW.

> time.monotonic() should be the most steady clock of all available
> clocks. It may not have the best precision; use time.perf_counter() if
> you need the highest available precision but don't care whether the
> clock is steady.

If the clock is not steady, it can't be precise for benchmarking, as
two time periods that are actually equal may not be measured to be
equal.  Precision has *different* meanings for *points in time*
versus *durations*.

> == "is_adjusted" key of time.get_clock_info() ==
>
> time.get_clock_info() returns a dict with an optional key:
> "is_adjusted". This flag indicates "if the clock *can be* adjusted".
>
> Should it be called "is_adjustable" instead? On Windows, the flag
> value may change at runtime when NTP is enabled or disabled. So the
> value is the current status of the clock adjustment. The description
> may be changed to "if the clock *is* adjusted".

No.  For the API as given, Python can't know whether adjustment
occurred or not, and therefore must assume (a) that it doesn't know
for sure but (b) adjustment may have occurred.  So "is_adjusted" is
better here.

IMHO, a clock with "is_adjustable" True should ideally also provide a
"was_adjusted()" method returning a list of (when, how_much) tuples.
(See Glyph's posts for more about this.)

From regebro at gmail.com  Sat Apr 14 08:26:11 2012
From: regebro at gmail.com (Lennart Regebro)
Date: Sat, 14 Apr 2012 08:26:11 +0200
Subject: [Python-Dev] Questions for the PEP 418: monotonic vs steady,
	is_adjusted
In-Reply-To: <CAMpsgwY=w4rpCz=Q9jfCffH1Ht3S_HC806A8exttwwSto2k8+A@mail.gmail.com>
References: <CAMpsgwY=w4rpCz=Q9jfCffH1Ht3S_HC806A8exttwwSto2k8+A@mail.gmail.com>
Message-ID: <CAL0kPAUpHsVKH5YS2ais=kuwBJt0JT=Gpck1h+LzQpU-7izxtg@mail.gmail.com>

On Sat, Apr 14, 2012 at 02:51, Victor Stinner <victor.stinner at gmail.com> wrote:
> Hi,
>
> Before posting a first draft of the PEP 418 to python-dev, I have some
> questions.
>
> == Naming: time.monotonic() or time.steady()? ==

The clock is monotonic by all reasonable definitions of monotonic (i.e.,
it doesn't go backwards). There are some reasonable definitions of
steady under which the clocks returned aren't steady (i.e., the rate is
not necessarily always the same), especially when it comes to system
suspends, but also with regard to slew adjustments.

Hence the function should be called monotonic().

//Lennart

From p.f.moore at gmail.com  Sat Apr 14 11:02:33 2012
From: p.f.moore at gmail.com (Paul Moore)
Date: Sat, 14 Apr 2012 10:02:33 +0100
Subject: [Python-Dev] Questions for the PEP 418: monotonic vs steady,
	is_adjusted
In-Reply-To: <CAL_0O19nmi0+zB+tV8poZDAffNdTnohxo9y5dbw+E2q=9rX9YA@mail.gmail.com>
References: <CAMpsgwY=w4rpCz=Q9jfCffH1Ht3S_HC806A8exttwwSto2k8+A@mail.gmail.com>
	<CAL_0O19nmi0+zB+tV8poZDAffNdTnohxo9y5dbw+E2q=9rX9YA@mail.gmail.com>
Message-ID: <CACac1F-S0-6sqtWFN45iUTf-FcSP9eduWQhc7V_ONs7QwVGW3Q@mail.gmail.com>

On 14 April 2012 06:41, Stephen J. Turnbull <stephen at xemacs.org> wrote:
> ?A clock can be accurate in measuring
> duration even though it is not accurate in measuring the point in
> time. [It's hard to see how the opposite could be true.]

Pedantic point: A clock that is stepped (say, by NTP) is precisely one
that is accurate in measuring the point in time (that's what stepping
is *for*) but not in measuring duration.

Paul.

From solipsis at pitrou.net  Sat Apr 14 11:52:27 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sat, 14 Apr 2012 11:52:27 +0200
Subject: [Python-Dev] Questions for the PEP 418: monotonic vs steady,
	is_adjusted
References: <CAMpsgwY=w4rpCz=Q9jfCffH1Ht3S_HC806A8exttwwSto2k8+A@mail.gmail.com>
Message-ID: <20120414115227.56feac08@pitrou.net>

On Sat, 14 Apr 2012 02:51:09 +0200
Victor Stinner <victor.stinner at gmail.com> wrote:
> 
> time.monotonic() does not fallback to the system clock anymore, it is
> now always monotonic.

Then just call it "monotonic" :-)

> I prefer "steady" over "monotonic" because the steady property is what
> users really expect from a "monotonic" clock. A monotonic but not
> steady clock may be useless.

"steady" is ambiguous IMO. It can only be "steady" in reference to
another clock - but which one? (real time presumably, but perhaps not,
e.g. if the clock gets suspended on suspend)

Regards

Antoine.



From victor.stinner at gmail.com  Sat Apr 14 13:16:00 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Sat, 14 Apr 2012 13:16:00 +0200
Subject: [Python-Dev] Questions for the PEP 418: monotonic vs steady,
	is_adjusted
In-Reply-To: <20120414115227.56feac08@pitrou.net>
References: <CAMpsgwY=w4rpCz=Q9jfCffH1Ht3S_HC806A8exttwwSto2k8+A@mail.gmail.com>
	<20120414115227.56feac08@pitrou.net>
Message-ID: <CAMpsgwYN8pyRXPmGFFDxYchPe5rSH89NR1Ob-_iut4jA8g=HgA@mail.gmail.com>

>> I prefer "steady" over "monotonic" because the steady property is what
>> users really expect from a "monotonic" clock. A monotonic but not
>> steady clock may be useless.
>
> "steady" is ambiguous IMO. It can only be "steady" in reference to
> another clock - but which one ? (real time presumably, but perhaps not,
> e.g. if the clock gets suspended on suspend)

Yes, real time is the reference when I say that CLOCK_MONOTONIC is
steadier than CLOCK_MONOTONIC_RAW.

I agree that CLOCK_MONOTONIC is not steady from the real time
reference when the system is suspended. CLOCK_BOOTTIME includes
suspend time, but it was only introduced recently in Linux.

Because the "steady" name is controversal, I agree to use the
"monotonic" name. I will complete the section explaning why
time.monotonic() is not called steady :-)

Victor

From albl500 at york.ac.uk  Sat Apr 14 17:35:07 2012
From: albl500 at york.ac.uk (Alex Leach)
Date: Sat, 14 Apr 2012 16:35:07 +0100
Subject: [Python-Dev] Compiling Python on Linux with Intel's icc
In-Reply-To: <20120302165838.GA16028@sleipnir.bytereef.org>
Message-ID: <1647597.Jq8WLN7aQo@metabuntu>

Thought I'd tie this thread up with a successful method, as I've just compiled Python-2.7.3 and have got the benchmarks to run slightly faster than the system Python :D

** First benchmark **

metabuntu:benchmarks> python perf.py -r -b apps /usr/bin/python ../Python-2.7.3/python
Running 2to3...
INFO:root:Running ../Python-2.7.3/python lib/2to3/2to3 -f all lib/2to3_data
INFO:root:Running `['../Python-2.7.3/python', 'lib/2to3/2to3', '-f', 'all', 'lib/2to3_data']` 5 times
INFO:root:Running /usr/bin/python lib/2to3/2to3 -f all lib/2to3_data
INFO:root:Running `['/usr/bin/python', 'lib/2to3/2to3', '-f', 'all', 'lib/2to3_data']` 5 times
Running html5lib...
INFO:root:Running ../Python-2.7.3/python performance/bm_html5lib.py -n 1
INFO:root:Running `['../Python-2.7.3/python', 'performance/bm_html5lib.py', '-n', '1']` 10 times
INFO:root:Running /usr/bin/python performance/bm_html5lib.py -n 1
INFO:root:Running `['/usr/bin/python', 'performance/bm_html5lib.py', '-n', '1']` 10 times
Running rietveld...
INFO:root:Running ../Python-2.7.3/python performance/bm_rietveld.py -n 100
INFO:root:Running /usr/bin/python performance/bm_rietveld.py -n 100
Running spambayes...
INFO:root:Running ../Python-2.7.3/python performance/bm_spambayes.py -n 100
INFO:root:Running /usr/bin/python performance/bm_spambayes.py -n 100

Report on Linux metabuntu 3.0.0-19-server #32-Ubuntu SMP Thu Apr 5 20:05:13 UTC 2012 x86_64 x86_64
Total CPU cores: 12

### html5lib ###
Min: 8.132508 -> 7.316457: 1.11x faster
Avg: 8.297318 -> 7.460066: 1.11x faster
Significant (t=11.15)
Stddev: 0.21605 -> 0.09843: 2.1950x smaller
Timeline: http://tinyurl.com/bqql4oa

### rietveld ###
Min: 0.297604 -> 0.276587: 1.08x faster
Avg: 0.302667 -> 0.279202: 1.08x faster
Significant (t=37.06)
Stddev: 0.00529 -> 0.00348: 1.5188x smaller
Timeline: http://tinyurl.com/brb3dk5

### spambayes ###
Min: 0.152264 -> 0.143518: 1.06x faster
Avg: 0.156512 -> 0.146559: 1.07x faster
Significant (t=6.66)
Stddev: 0.00847 -> 0.01232: 1.4547x larger
Timeline: http://tinyurl.com/d2dzz6k

The following not significant results are hidden, use -v to show them:
2to3.

(I just noticed the date's wrong in the above report... but I did run it just now, April 14th 2012, ~1300 GMT.)



** Required patch **

The only file that breaks compilation is Modules/_ctypes/libffi/src/x86/ffi64.c.
I uploaded a patch to http://bugs.python.org/issue4130 that corrects the __int128_t issue.



** Compilation method **

I used a two-step compilation process, with Profile-Guided Optimisation. Relevant environment variables are at the bottom.
In the build directory, make a separate directory for the PGO files.
 mkdir PGO
Then, configure command:-
CFLAGS="-O3 -fomit-frame-pointer -shared-intel -fpic -prof-gen -prof-dir $PWD/PGO -fp-model strict -no-prec-div -xHost -fomit-frame-pointer" \
        ./configure --with-libm="-limf" --with-libc="-lirc" --with-signal-module --with-cxx-main="icpc" --without-gcc --build=x86_64-linux-intel

Then I ran `make -j9` and `make test`. Running the tests ensures that (almost) every module is run at least once.
As the -prof-gen option was used, PGO information is written to the files in -prof-dir while the binaries are running.
To give the code even more rigorous usage, I also ran the benchmark suite, which generates even more PGO information.
(The benchmark results from this instrumented build are useless though, since the -prof-gen instrumentation slows it down.)

Then you need to do a `make clean` and reconfigure.
This time, add "-ipo" to CFLAGS (enabling inter-procedural optimisation) and change "-prof-gen" to "-prof-use":-
CFLAGS="-O3 -fomit-frame-pointer -ipo -shared-intel -fpic -prof-use -prof-dir $PWD/PGO -fp-model strict -no-prec-div -xHost -fomit-frame-pointer" \
        ./configure --with-libm="-limf" --with-libc="-lirc" --with-signal-module --with-cxx-main="icpc" --without-gcc --build=x86_64-linux-intel
Then, of course make -j9 && make test

At this point, I produced the above benchmark results.



** Failed test summary **

I'm happy with most of them, except I don't get what the test_gdb failure is on about..?
I should probably add --enable-curses to the configure command, and I wouldn't mind getting the network and audio modules to build, 
but I can't see any relevant configure options nor find any missing dependencies. Any suggestions would be appreciated.

349 tests OK.
2 tests failed:
    test_cmath test_gdb
1 test altered the execution environment:
    test_distutils
37 tests skipped:
    test_aepack test_al test_applesingle test_bsddb test_bsddb185
    test_bsddb3 test_cd test_cl test_codecmaps_cn test_codecmaps_hk
    test_codecmaps_jp test_codecmaps_kr test_codecmaps_tw test_curses
    test_dl test_gl test_imageop test_imgfile test_kqueue
    test_linuxaudiodev test_macos test_macostools test_msilib
    test_ossaudiodev test_scriptpackages test_smtpnet
    test_socketserver test_startfile test_sunaudiodev test_timeout
    test_tk test_ttk_guionly test_urllib2net test_urllibnet
    test_winreg test_winsound test_zipfile64
2 skips unexpected on linux2:
    test_bsddb test_bsddb3

test test_cmath failed -- Traceback (most recent call last):
  File "/usr/local/src/pysrc/Python-2.7.3/Lib/test/test_cmath.py", line 352, in test_specific_values
    msg=error_message)
  File "/usr/local/src/pysrc/Python-2.7.3/Lib/test/test_cmath.py", line 94, in rAssertAlmostEqual
    'got {!r}'.format(a, b))
AssertionError: acos0000: acos(complex(0.0, 0.0))
Expected: complex(1.5707963267948966, -0.0)
Received: complex(1.5707963267948966, 0.0)
Received value insufficiently close to expected value.

test test_gdb failed -- Traceback (most recent call last):
  File "/usr/local/src/pysrc/Python-2.7.3/Lib/test/test_gdb.py", line 639, in test_up_at_top
    cmds_after_breakpoint=['py-up'] * 4)
  File "/usr/local/src/pysrc/Python-2.7.3/Lib/test/test_gdb.py", line 146, in get_stack_trace
    self.assertEqual(err, '')
AssertionError: 'Traceback (most recent call last):\n  File "/usr/local/src/pysrc/Python-2.7.3/python-gdb.py", line 1367, in invoke\n    move_in_stack(move_up=True)\n  File "/usr/local/src/pysrc/Python-2.7.3/python-gdb.py", line 1347, in move_in_stack\n    iter_frame.print_summary()\n  File "/usr/local/src/pysrc/Python-2.7.3/python-gdb.py", line 1255, in print_summary\n    line = pyop.current_line()\nAttributeError: \'PyIntObjectPtr\' object has no attribute \'current_line\'\nError occurred in Python command: \'PyIntObjectPtr\' object has no attribute \'current_line\'\nTraceback (most recent call last):\n  File "/usr/local/src/pysrc/Python-2.7.3/python-gdb.py", line 1367, in invoke\n    move_in_stack(move_up=True)\n  File "/usr/local/src/pysrc/Python-2.7.3/python-gdb.py", line 1347, in move_in_stack\n    iter_frame.print_summary()\n  File "/usr/local/src/pysrc/Python-2.7.3/python-gdb.py", line 1255, in print_summary\n    line = pyop.current_line()\nAttributeError: \'PyIntObjectPtr\' object has no attribute \'current_line\'\nError occurred in Python command: \'PyIntObjectPtr\' object has no attribute \'current_line\'\nTraceback (most recent call last):\n  File "/usr/local/src/pysrc/Python-2.7.3/python-gdb.py", line 1367, in invoke\n    move_in_stack(move_up=True)\n  File "/usr/local/src/pysrc/Python-2.7.3/python-gdb.py", line 1347, in move_in_stack\n    iter_frame.print_summary()\n  File "/usr/local/src/pysrc/Python-2.7.3/python-gdb.py", line 1255, in print_summary\n    line = pyop.current_line()\nAttributeError: \'PyIntObjectPtr\' object has no attribute \'current_line\'\nError occurred in Python command: \'PyIntObjectPtr\' object has no attribute \'current_line\'\n' != ''



********

Next attempt:-
Gonna try with: --enable-curses, --enable-audio, --enable-network and --enable-ipv6. May as well do that now...
Added the above switches to the configure command.
Also switched -shared-intel for -static-intel to compare benchmark times. This seems to hardly impact performance or file size...

CFLAGS="-O3 -fomit-frame-pointer -ipo -static-intel -fpic -prof-use -prof-dir $PWD/PGO -fp-model strict -no-prec-div -xHost -fomit-frame-pointer" \
        ./configure --with-libm="-limf" --with-libc="-lirc" --with-signal-module --with-cxx-main="icpc" --without-gcc --enable-curses --enable-ipv6 --enable-network --enable-audio --enable-gui --build=x86_64-linux-intel



** Test results **

This time I ran regrtest.py manually, to enable the networking and audio tests in particular:-
metabuntu:Python-2.7.3> ./python Lib/test/regrtest.py -uall

test_linuxaudiodev just hung, even after killing processes (pulseaudio) which were using /dev/dsp, so I added 'test_linuxaudiodev' to NOTTESTS in Lib/test/regrtest.py

361 tests OK.
3 tests failed:
    test_cmath test_gdb test_ossaudiodev
1 test altered the execution environment:
    test_distutils
23 tests skipped:
    test_aepack test_al test_applesingle test_bsddb test_bsddb185
    test_bsddb3 test_cd test_cl test_dl test_gl test_imageop
    test_imgfile test_kqueue test_macos test_macostools test_msilib
    test_py3kwarn test_scriptpackages test_startfile test_sunaudiodev
    test_winreg test_winsound test_zipfile64
2 skips unexpected on linux2:
    test_bsddb test_bsddb3

I use ALSA over OSS, so I'm not really concerned about the OSS test failing.
Running test_linuxaudiodev manually hangs on test_play_sound_file. Tested with pulseaudio running and not running; the test can only be terminated with a KILL signal.
Adding some print statements to the test script, I found the test hangs on `self.dev.write(data)`. Not sure why, though.



** New benchmark **

metabuntu:benchmarks> python perf.py -r -b apps /usr/bin/python ../Python-2.7.3/python

Report on Linux metabuntu 3.0.0-19-server #32-Ubuntu SMP Thu Apr 5 20:05:13 UTC 2012 x86_64 x86_64
Total CPU cores: 12

### 2to3 ###
Min: 6.524408 -> 6.316394: 1.03x faster
Avg: 6.611613 -> 6.392400: 1.03x faster
Significant (t=5.05)
Stddev: 0.06477 -> 0.07228: 1.1159x larger
Timeline: http://tinyurl.com/bub35l9

### html5lib ###
Min: 7.916494 -> 7.212451: 1.10x faster
Avg: 8.025302 -> 7.304856: 1.10x faster
Significant (t=17.53)
Stddev: 0.07606 -> 0.10539: 1.3856x larger
Timeline: http://tinyurl.com/dy7296k

### rietveld ###
Min: 0.291469 -> 0.272601: 1.07x faster
Avg: 0.302746 -> 0.280126: 1.08x faster
Significant (t=15.86)
Stddev: 0.01126 -> 0.00874: 1.2885x smaller
Timeline: http://tinyurl.com/c5ys4bt

### spambayes ###
Min: 0.145370 -> 0.138528: 1.05x faster
Avg: 0.146689 -> 0.141168: 1.04x faster
Significant (t=11.27)
Stddev: 0.00147 -> 0.00468: 3.1885x larger
Timeline: http://tinyurl.com/d8rrp6g



** Relevant Environment Variables. (Maybe there's more, maybe less) **

N.B. I have all Intel stuff installed in /usr/intel. The default is /opt/intel though.

LIBRARY_PATH=/usr/intel/composer_xe_2011_sp1.9.293/tbb/lib/intel64//cc4.1.0_libc2.4_kernel2.6.16.21:/usr/intel/composer_xe_2011_sp1.9.293/compiler/lib/intel64:/usr/intel/composer_xe_2011_sp1.9.293/ipp/../compiler/lib/intel64:/usr/intel/composer_xe_2011_sp1.9.293/ipp/lib/intel64:/usr/intel/composer_xe_2011_sp1.9.293/compiler/lib/intel64:/usr/intel/composer_xe_2011_sp1.9.293/mkl/lib/intel64:/usr/intel/composer_xe_2011_sp1.9.293/tbb/lib/intel64//cc4.1.0_libc2.4_kernel2.6.16.21:/usr/intel/composer_xe_2011_sp1.9.293/tbb/lib/intel64//cc4.1.0_libc2.4_kernel2.6.16.21:/usr/intel/composer_xe_2011_sp1.9.293/compiler/lib/intel64:/usr/intel/composer_xe_2011_sp1.9.293/ipp/../compiler/lib/intel64:/usr/intel/composer_xe_2011_sp1.9.293/ipp/lib/intel64:/usr/intel/composer_xe_2011_sp1.9.293/compiler/lib/intel64:/usr/intel/composer_xe_2011_sp1.9.293/mkl/lib/intel64:/usr/intel/composer_xe_2011_sp1.9.293/tbb/lib/intel64//cc4.1.0_libc2.4_kernel2.6.16.21
LD_LIBRARY_PATH=/usr/intel/composer_xe_2011_sp1.9.293/tbb/lib/intel64//cc4.1.0_libc2.4_kernel2.6.16.21:/usr/intel/impi/4.0.3.008/ia32/lib:/usr/intel/impi/4.0.3.008/intel64/lib:/usr/intel/composer_xe_2011_sp1.9.293/compiler/lib/intel64:/usr/intel/composer_xe_2011_sp1.9.293/ipp/../compiler/lib/intel64:/usr/intel/composer_xe_2011_sp1.9.293/ipp/lib/intel64:/usr/intel/composer_xe_2011_sp1.9.293/compiler/lib/intel64:/usr/intel/composer_xe_2011_sp1.9.293/mkl/lib/intel64:/usr/intel/composer_xe_2011_sp1.9.293/tbb/lib/intel64//cc4.1.0_libc2.4_kernel2.6.16.21:/usr/local/lib/boost:/biol/arb/lib:/lib64:/usr/lib64:/usr/local/lib:/usr/local/cuda/lib64:/usr/local/cuda/lib:/usr/intel/composer_xe_2011_sp1.9.293/debugger/lib/intel64:/usr/intel/composer_xe_2011_sp1.9.293/mpirt/lib/intel64
CPATH=/usr/intel/composer_xe_2011_sp1.9.293/tbb/include:/usr/intel/composer_xe_2011_sp1.9.293/mkl/include:/usr/intel/composer_xe_2011_sp1.9.293/tbb/include:/usr/intel/composer_xe_2011_sp1.9.293/tbb/include:/usr/intel/composer_xe_2011_sp1.9.293/mkl/include:/usr/intel/composer_xe_2011_sp1.9.293/tbb/include
CPP=icc -E
PATH=/usr/intel/impi/4.0.3.008/ia32/bin:/usr/intel/impi/4.0.3.008/intel64/bin:/usr/intel/composer_xe_2011_sp1.9.293/bin/intel64:/usr/intel/impi/4.0.3.008/ia32/bin:/usr/intel/composer_xe_2011_sp1.9.293/bin/intel64:/home/albl500/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/intel/bin:/usr/intel/composer_xe_2011_sp1.9.293/mpirt/bin/intel64:/home/albl500/SDKs/android-sdk-linux/tools:/biol/bin:/biol/arb/bin:/usr/local/cuda/bin:/home/albl500/bin:/usr/intel/composer_xe_2011_sp1.9.293/mpirt/bin/intel64
LD=xild
CXX=icpc
CC=icc


** Summary **

All seems okay to me, although using -no-prec-div (over IEEE floating point arithmetic) is slightly concerning, as I occasionally require very accurate floating point arithmetic.
I try to use numpy as much as possible for math operations, so maybe it's not concerning at all... I think "-no-prec-div" and "-fp-model strict" are required to pass various math tests, though;
otherwise I got errors about extremely small floats (<10^-300) being unequal to 0.

I hope this proves useful for anyone else trying to compile an optimised Python for an Intel system.

Cheers,
Alex

-- 
Alex Leach BSc. MRes.
Department of Biology
University of York
York YO10 5DD
United Kingdom
www: http://bioltfws1.york.ac.uk/~albl500
EMAIL DISCLAIMER: http://www.york.ac.uk/docs/disclaimer/email.htm

From brett at python.org  Sat Apr 14 20:12:48 2012
From: brett at python.org (Brett Cannon)
Date: Sat, 14 Apr 2012 14:12:48 -0400
Subject: [Python-Dev] importlib is now bootstrapped (and what that means)
Message-ID: <CAP1=2W5A080aEOd9AYBD9gsTO74jbDh+uU8bha2ozJore1NSvA@mail.gmail.com>

My multi-year project -- started in 2006 according to my blog -- to rewrite
import in pure Python and then bootstrap it into CPython as *the*
implementation of __import__() is finally over (mostly)! Hopefully I didn't
break too much code in the process. =)

Now this is "mostly" finished because the single incompatibility that
importlib has is that it doesn't check the Windows registry for paths to
search since I lack a Windows installation to develop and test on. If
someone can tackle that issue that would be greatly appreciated (
http://bugs.python.org/issue14578).

Next up is how to maintain/develop for all of this. The Makefile will
regenerate Python/importlib.h whenever Lib/importlib/_bootstrap.py or
Python/freeze_importlib.py changes. So if you make a change to importlib,
make sure to get it working and tested before running 'make' again, or else you
will generate a bad frozen importlib (if you do mess up you can also revert
the changes to importlib.h and re-run make; a perk to having importlib.h
checked in). Otherwise keep in mind that you can't use any module that
isn't a builtin (sys.builtin_module_names) in importlib._bootstrap since
you can't import something w/o import working. =)

Where does this leave imp and Python/import.c? I want to make imp into _imp
and then implement as much as possible in pure Python (either in importlib
itself or in Lib/imp.py). Once that has happened then more C code in
import.c can be gutted (see http://bugs.python.org/issue13959 for tracking
this work which I will start piecemeal shortly).

I have some ideas on how to improve things for import, but I'm going to do
them as separate emails to have separate discussion threads on them so all
of this is easier to follow (e.g. actually following through on PEP 302 and
exposing the import machinery as importers instead of having anything be
implicit, etc.).

And the only outstanding point of contention in all of this is that some
people don't like having freeze_importlib.py in Python/ and instead want it
in Tools/. I didn't leave it in Tools/ as I have always viewed that Python
should build w/o the Tools directory, but maybe the Linux distros actually
ship with it and thus this is an unneeded worry. Plus the scripts to
generate the AST are in Parser so there is precedent for what I have done.

Anyway, I will write up the What's New entry and double-check the language
spec for updating once all of the potential changes I want to talk about in
other emails have been resolved.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20120414/4c48cd4e/attachment.html>

From g.brandl at gmx.net  Sat Apr 14 21:37:47 2012
From: g.brandl at gmx.net (Georg Brandl)
Date: Sat, 14 Apr 2012 21:37:47 +0200
Subject: [Python-Dev] importlib is now bootstrapped (and what that means)
In-Reply-To: <CAP1=2W5A080aEOd9AYBD9gsTO74jbDh+uU8bha2ozJore1NSvA@mail.gmail.com>
References: <CAP1=2W5A080aEOd9AYBD9gsTO74jbDh+uU8bha2ozJore1NSvA@mail.gmail.com>
Message-ID: <jmcjln$p41$1@dough.gmane.org>

On 14.04.2012 20:12, Brett Cannon wrote:
> My multi-year project -- started in 2006 according to my blog -- to rewrite
> import in pure Python and then bootstrap it into CPython as *the* implementation
> of __import__() is finally over (mostly)! Hopefully I didn't break too much code
> in the process. =)
>
> Now this is "mostly" finished because the single incompatibility that importlib
> has is that it doesn't check the Windows registry for paths to search since I
> lack a Windows installation to develop and test on. If someone can tackle that
> issue that would be greatly appreciated (http://bugs.python.org/issue14578).
>
> Next up is how to maintain/develop for all of this. So the Makefile will
> regenerate Python/importlib.h whenever Lib/importlib/_bootstrap.py or
> Python/freeze_importlib.py changes.  So if you make a change to importlib make
> sure to get it working and tested before running 'make' again else you will
> generate a bad frozen importlib (if you do mess up you can also revert the
> changes to importlib.h and re-run make; a perk to having importlib.h checked
> in). Otherwise keep in mind that you can't use any module that isn't a builtin
> (sys.builtin_module_names) in importlib._bootstrap since you can't import
> something w/o import working. =)

We've just now talked on IRC about this regeneration.  Since both files --
_bootstrap.py and importlib.h -- are checked in, a make run can try to re-
generate importlib.h.  This depends on the timestamps of the two files, which I 
don't think Mercurial makes any guarantees about.

We have other instances of this (e.g. the Objects/typeslots.inc file is 
generated and checked in), but in the case of importlib, we have to use the
./python binary for freezing to avoid bytecode incompatibilities, which
obviously is a problem if ./python isn't built yet.

> And the only outstanding point of contention in all of this is that some people
> don't like having freeze_importlib.py in Python/ and instead want it in Tools/.
> I didn't leave it in Tools/ as I have always viewed that Python should build w/o
> the Tools directory, but maybe the Linux distros actually ship with it and thus
> this is an unneeded worry. Plus the scripts to generate the AST are in Parser so
> there is precedent for what I have done.

I would have no objections to Python/.  There is also e.g. Objects/typeslots.py.

Georg


From brett at python.org  Sat Apr 14 22:03:01 2012
From: brett at python.org (Brett Cannon)
Date: Sat, 14 Apr 2012 16:03:01 -0400
Subject: [Python-Dev] making the import machinery explicit
Message-ID: <CAP1=2W5x0i3uzggpK5tK3Ve=97_-5n-zUJJx9VODA94wUN_DUQ@mail.gmail.com>

To start off, what I am about to propose was brought up at the PyCon
language summit and the whole room agreed with what I want to do here, so I
honestly don't expect much of an argument (famous last words).

In the "ancient" import.c days, a lot of import's stuff was hidden deep in
the C code and in no way exposed to the user. But with importlib finishing
PEP 302's phase 2 plans of getting import to be properly refactored to use
importers, path hooks, etc., this need no longer be the case.

So what I propose to do is stop having import have any kind of implicit
machinery. This means sys.meta_path gets a path finder that does the heavy
lifting for import and sys.path_hooks gets a hook which provides a default
finder. As of right now those two pieces of machinery are entirely implicit
in importlib and can't be modified, stopped, etc.

If this happens, what changes? First, more of importlib will get publicly
exposed (e.g. the meta path finder would become public instead of private
like it is along with everything else that is publicly exposed). Second,
import itself technically becomes much simpler since it really then is
about resolving module names, traversing sys.meta_path, and then handling
fromlist w/ everything else coming from how the path finder and path hook
work.

What also changes is that sys.meta_path and sys.path_hooks cannot be
blindly reset w/o blowing out import. I doubt anyone is even touching those
attributes in the common case, and the few that do can easily just stop
wiping out those two lists. If people really care we can do a warning in
3.3 if they are found to be empty and then fall back to old semantics, but
I honestly don't see this being an issue, as backwards-compatibility would
just require being more careful about what you delete (which I have been
warning people to do for years now); that is a minor code change, in line
with what goes along with any new Python version.
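
To make the "be careful what you delete" point concrete, here is a rough
sketch of the updated idiom (MyFinder is a stand-in for whatever custom
meta path finder a project installs):

    import sys

    class MyFinder:
        # Stand-in custom finder following the PEP 302 protocol.
        def find_module(self, fullname, path=None):
            return None   # decline everything in this sketch

    # Old habit that would now break import entirely:
    #     sys.meta_path = [MyFinder()]
    # Instead, add your finder without disturbing the default entries...
    sys.meta_path.insert(0, MyFinder())
    # ...and later remove only your own entries, leaving the defaults alone.
    sys.meta_path[:] = [f for f in sys.meta_path if not isinstance(f, MyFinder)]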

And lastly, sticking None in sys.path_importer_cache would no longer mean
"do the implicit thing" and instead would mean the same as NullImporter
does now (which also means import can put None into sys.path_importer_cache
instead of NullImporter): no finder is available for an entry on sys.path
when None is found. Once again, I don't see anyone explicitly sticking None
into sys.path_importer_cache, and if they are they can easily stick what
will be the newly exposed finder in there instead. The more common case
would be people wiping out all entries of NullImporter so as to have a new
sys.path_hooks entry take effect. That code would instead need to clear out
None on top of NullImporter as well (in Python 3.2 and earlier this would
just be a performance loss, not a semantic change). So this too could
change in Python 3.3 as long as people update their code like they do with
any other new version of Python.
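
A rough sketch of the cache-clearing code this affects, written against the
3.2-era name imp.NullImporter plus the new meaning of None described above:

    import imp
    import sys

    # Drop cached "no finder for this path entry" markers so that a newly
    # added sys.path_hooks entry gets a chance on the next import.
    for entry, finder in list(sys.path_importer_cache.items()):
        if finder is None or isinstance(finder, imp.NullImporter):
            del sys.path_importer_cache[entry]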

In summary, I want no more magic "behind the curtain" for Python 3.3 and
import: sys.meta_path and sys.path_hooks contain what they should and if
they are emptied then imports will fail and None in sys.path_importer_cache
means "no finder" instead of "use magical, implicit stuff".
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20120414/d7de5479/attachment.html>

From brett at python.org  Sat Apr 14 22:56:55 2012
From: brett at python.org (Brett Cannon)
Date: Sat, 14 Apr 2012 16:56:55 -0400
Subject: [Python-Dev] Require loaders set __package__ and __loader__
Message-ID: <CAP1=2W4oX6qM78cRto9bqTVfy29CqVvqnTgvvOXSjBiPteQEeA@mail.gmail.com>

An open issue in PEP 302 is whether to require __loader__ attributes on
modules. The claimed worry is memory consumption, but considering importlib
and zipimport are already doing this that seems like a red herring.
Requiring it, though, opens the door to people relying on its existence and
thus starting to do things like loading assets with
``__loader__.get_data(path_to_internal_package_file)`` which allows code to
not care how modules are stored (e.g. zip file, sqlite database, etc.).

What I would like to do is update the PEP to state that loaders are
expected to set __loader__. Now importlib will get updated to do that
implicitly so external code can expect it post-import, but requiring
loaders to set it would mean that code executed during import can rely on
it as well.

As for __package__, PEP 366 states that modules should set it but it isn't
referenced by PEP 302. What I want to do is add a reference and make it
required like __loader__. Importlib already sets it implicitly post-import,
but once again it would be nice to do this pre-import.

To help facilitate both new requirements, I would update the
importlib.util.module_for_loader decorator to set both on a module that
doesn't have them before passing the module down to the decorated method.
That way people already using the decorator don't have to worry about
anything and it is one less detail to have to worry about. I would also
update the docs on importlib.util.set_package and importlib.util.set_loader
to suggest people use importlib.util.module_for_loader and only use the
other two decorators for backwards-compatibility.
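
To make that concrete, a bare-bones loader written against the decorator
could look something like this (MyLoader and its _get_source() helper are
invented for the example; they are not part of importlib):

    import importlib.util

    class MyLoader:
        @importlib.util.module_for_loader
        def load_module(self, module):
            # With the change proposed above, module.__loader__ and
            # module.__package__ are already set by the time we get here.
            source = self._get_source(module.__name__)   # hypothetical storage lookup
            exec(compile(source, module.__name__, 'exec'), module.__dict__)
            return module
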
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20120414/df57c16c/attachment-0001.html>

From ericsnowcurrently at gmail.com  Sat Apr 14 23:12:27 2012
From: ericsnowcurrently at gmail.com (Eric Snow)
Date: Sat, 14 Apr 2012 15:12:27 -0600
Subject: [Python-Dev] making the import machinery explicit
In-Reply-To: <CAP1=2W5x0i3uzggpK5tK3Ve=97_-5n-zUJJx9VODA94wUN_DUQ@mail.gmail.com>
References: <CAP1=2W5x0i3uzggpK5tK3Ve=97_-5n-zUJJx9VODA94wUN_DUQ@mail.gmail.com>
Message-ID: <CALFfu7Aeuv2rzZpZFCmsfneF7VY79eXJ33jh6o_3sdiij=2mGw@mail.gmail.com>

On Sat, Apr 14, 2012 at 2:03 PM, Brett Cannon <brett at python.org> wrote:
> To start off, what I am about to propose was brought up at the PyCon
> language summit and the whole room agreed with what I want to do here, so I
> honestly don't expect much of an argument (famous last words).
>
> In the "ancient" import.c days, a lot of import's stuff was hidden deep in
> the C code and in no way exposed to the user. But with importlib finishing
> PEP 302's phase 2 plans of getting import to be properly refactored to use
> importers, path hooks, etc., this need no longer be the case.
>
> So what I propose to do is stop having import have any kind of implicit
> machinery. This means sys.meta_path gets a path finder that does the heavy
> lifting for import and sys.path_hooks gets a hook which provides a default
> finder. As of right now those two pieces of machinery are entirely implicit
> in importlib and can't be modified, stopped, etc.
>
> If this happens, what changes? First, more of importlib will get publicly
> exposed (e.g. the meta path finder would become public instead of private
> like it is along with everything else that is publicly exposed). Second,
> import itself technically becomes much simpler since it really then is about
> resolving module names, traversing sys.meta_path, and then handling fromlist
> w/ everything else coming from how the path finder and path hook work.
>
> What also changes is that sys.meta_path and sys.path_hooks cannot be blindly
> reset w/o blowing out import. I doubt anyone is even touching those
> attributes in the common case, and the few that do can easily just stop
> wiping out those two lists. If people really care we can do a warning in 3.3
> if they are found to be empty and then fall back to old semantics, but I
> honestly don't see this being an issue as backwards-compatibility would just
> require being more careful of what you delete (which I have been warning
> people to do for years now) which is a minor code change which falls in line
> with what goes along with any new Python version.
>
> And lastly, sticking None in sys.path_importer_cache would no longer mean
> "do the implicit thing" and instead would mean the same as NullImporter does
> now (which also means import can put None into sys.path_importer_cache
> instead of NullImporter): no finder is available for an entry on sys.path
> when None is found. Once again, I don't see anyone explicitly sticking None
> into sys.path_importer_cache, and if they are they can easily stick what
> will be the newly exposed finder in there instead. The more common case
> would be people wiping out all entries of NullImporter so as to have a new
> sys.path_hook entry take effect. That code would instead need to clear out
> None on top of NullImporter as well (in Python 3.2 and earlier this would
> just be a performance loss, not a semantic change). So this too could change
> in Python 3.3 as long as people update their code like they do with any
> other new version of Python.
>
> In summary, I want no more magic "behind the curtain" for Python 3.3 and
> import: sys.meta_path and sys.path_hooks contain what they should and if
> they are emptied then imports will fail and None in sys.path_importer_cache
> means "no finder" instead of "use magical, implicit stuff".

This is great, Brett.  About sys.meta_path and sys.path_hooks, I see
only one potential backwards-compatibility problem.

Those implicit hooks were fallbacks, effectively always at the end of
the list, no matter how you manipulated them.  Code that appended
onto those lists would now have to insert the importers/finders in the
right way.  Otherwise the default hooks would be tried first, which
has a good chance of being the wrong thing.

That concern aside, I'm a big +1 on your proposal.

-eric

From ericsnowcurrently at gmail.com  Sat Apr 14 23:15:06 2012
From: ericsnowcurrently at gmail.com (Eric Snow)
Date: Sat, 14 Apr 2012 15:15:06 -0600
Subject: [Python-Dev] Require loaders set __package__ and __loader__
In-Reply-To: <CAP1=2W4oX6qM78cRto9bqTVfy29CqVvqnTgvvOXSjBiPteQEeA@mail.gmail.com>
References: <CAP1=2W4oX6qM78cRto9bqTVfy29CqVvqnTgvvOXSjBiPteQEeA@mail.gmail.com>
Message-ID: <CALFfu7CPUmdJ4CMmYHLXCjkWD_+fxM8_Y6YuCAROPfOv8nCYwg@mail.gmail.com>

On Sat, Apr 14, 2012 at 2:56 PM, Brett Cannon <brett at python.org> wrote:
> An open issue in PEP 302 is whether to require __loader__ attributes on
> modules. The claimed worry is memory consumption, but considering importlib
> and zipimport are already doing this that seems like a red herring.
> Requiring it, though, opens the door to people relying on its existence and
> thus starting to do things like loading assets with
> ``__loader__.get_data(path_to_internal_package_file)`` which allows code to
> not care how modules are stored (e.g. zip file, sqlite database, etc.).
>
> What I would like to do is update the PEP to state that loaders are expected
> to set __loader__. Now importlib will get updated to do that implicitly so
> external code can expect it post-import, but requiring loaders to set it
> would mean that code executed during import can rely on it as well.
>
> As for __package__, PEP 366 states that modules should set it but it isn't
> referenced by PEP 302. What I want to do is add a reference and make it
> required like __loader__. Importlib already sets it implicitly post-import,
> but once again it would be nice to do this pre-import.
>
> To help facilitate both new requirements, I would update the
> importlib.util.module_for_loader decorator to set both on a module that
> doesn't have them before passing the module down to the decorated method.
> That way people already using the decorator don't have to worry about
> anything and it is one less detail to have to worry about. I would also
> update the docs on importlib.util.set_package and importlib.util.set_loader
> to suggest people use importlib.util.module_for_loader and only use the
> other two decorators for backwards-compatibility.

+1

-eric

From p.f.moore at gmail.com  Sat Apr 14 23:27:56 2012
From: p.f.moore at gmail.com (Paul Moore)
Date: Sat, 14 Apr 2012 22:27:56 +0100
Subject: [Python-Dev] making the import machinery explicit
In-Reply-To: <CAP1=2W5x0i3uzggpK5tK3Ve=97_-5n-zUJJx9VODA94wUN_DUQ@mail.gmail.com>
References: <CAP1=2W5x0i3uzggpK5tK3Ve=97_-5n-zUJJx9VODA94wUN_DUQ@mail.gmail.com>
Message-ID: <CACac1F9QgQKM6M6bLoBKmJVZNPg6PWDT7Q+Q08aq=TCMjkikZw@mail.gmail.com>

On 14 April 2012 21:03, Brett Cannon <brett at python.org> wrote:
> So what I propose to do is stop having import have any kind of implicit
> machinery. This means sys.meta_path gets a path finder that does the heavy
> lifting for import and sys.path_hooks gets a hook which provides a default
> finder.

+1 to your proposal. And thanks for all of your work on importlib - it
makes me very happy to see the ideas Just and I thrashed out in PEP
302 come together fully at last.

Paul.

From brett at python.org  Sun Apr 15 00:16:02 2012
From: brett at python.org (Brett Cannon)
Date: Sat, 14 Apr 2012 18:16:02 -0400
Subject: [Python-Dev] making the import machinery explicit
In-Reply-To: <CALFfu7Aeuv2rzZpZFCmsfneF7VY79eXJ33jh6o_3sdiij=2mGw@mail.gmail.com>
References: <CAP1=2W5x0i3uzggpK5tK3Ve=97_-5n-zUJJx9VODA94wUN_DUQ@mail.gmail.com>
	<CALFfu7Aeuv2rzZpZFCmsfneF7VY79eXJ33jh6o_3sdiij=2mGw@mail.gmail.com>
Message-ID: <CAP1=2W7QSndNbqRvw4VrzxwkeGOuQJUoB3XEcaFrvdSx1cmCcA@mail.gmail.com>

On Sat, Apr 14, 2012 at 17:12, Eric Snow <ericsnowcurrently at gmail.com>wrote:

> On Sat, Apr 14, 2012 at 2:03 PM, Brett Cannon <brett at python.org> wrote:
> > To start off, what I am about to propose was brought up at the PyCon
> > language summit and the whole room agreed with what I want to do here,
> so I
> > honestly don't expect much of an argument (famous last words).
> >
> > In the "ancient" import.c days, a lot of import's stuff was hidden deep
> in
> > the C code and in no way exposed to the user. But with importlib
> finishing
> > PEP 302's phase 2 plans of getting import to be properly refactored to
> use
> > importers, path hooks, etc., this need no longer be the case.
> >
> > So what I propose to do is stop having import have any kind of implicit
> > machinery. This means sys.meta_path gets a path finder that does the
> heavy
> > lifting for import and sys.path_hooks gets a hook which provides a
> default
> > finder. As of right now those two pieces of machinery are entirely
> implicit
> > in importlib and can't be modified, stopped, etc.
> >
> > If this happens, what changes? First, more of importlib will get publicly
> > exposed (e.g. the meta path finder would become public instead of private
> > like it is along with everything else that is publicly exposed). Second,
> > import itself technically becomes much simpler since it really then is
> about
> > resolving module names, traversing sys.meta_path, and then handling
> fromlist
> > w/ everything else coming from how the path finder and path hook work.
> >
> > What also changes is that sys.meta_path and sys.path_hooks cannot be
> blindly
> > reset w/o blowing out import. I doubt anyone is even touching those
> > attributes in the common case, and the few that do can easily just stop
> > wiping out those two lists. If people really care we can do a warning in
> 3.3
> > if they are found to be empty and then fall back to old semantics, but I
> > honestly don't see this being an issue as backwards-compatibility would
> just
> > require being more careful of what you delete (which I have been warning
> > people to do for years now) which is a minor code change which falls in
> line
> > with what goes along with any new Python version.
> >
> > And lastly, sticking None in sys.path_importer_cache would no longer mean
> > "do the implicit thing" and instead would mean the same as NullImporter
> does
> > now (which also means import can put None into sys.path_importer_cache
> > instead of NullImporter): no finder is available for an entry on sys.path
> > when None is found. Once again, I don't see anyone explicitly sticking
> None
> > into sys.path_importer_cache, and if they are they can easily stick what
> > will be the newly exposed finder in there instead. The more common case
> > would be people wiping out all entries of NullImporter so as to have a
> new
> > sys.path_hook entry take effect. That code would instead need to clear
> out
> > None on top of NullImporter as well (in Python 3.2 and earlier this would
> > just be a performance loss, not a semantic change). So this too could
> change
> > in Python 3.3 as long as people update their code like they do with any
> > other new version of Python.
> >
> > In summary, I want no more magic "behind the curtain" for Python 3.3 and
> > import: sys.meta_path and sys.path_hooks contain what they should and if
> > they are emptied then imports will fail and None in
> sys.path_importer_cache
> > means "no finder" instead of "use magical, implicit stuff".
>
> This is great, Brett.  About sys.meta_path and sys.path_hooks, I see
> only one potential backwards-compatibility problem.
>
> Those implicit hooks were fallbacks, effectively always at the end of
> the list, no matter how you manipulated them.  Code that appended
> onto those lists would now have to insert the importers/finders in the
> right way.  Otherwise the default hooks would be tried first, which
> has a good chance of being the wrong thing.
>
> That concern aside, I'm a big +1 on your proposal.


Once again, it's just code that needs updating to run on Python 3.3, so I
don't view it as a concern. Going from list.append() to list.insert() (even
if it's ``list.insert(len(list)-2, hook)``) is not exactly difficult.
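
For completeness, a sketch of the updated idiom under the proposal (my_hook
is just a placeholder path hook):

    import sys

    def my_hook(path):
        raise ImportError   # placeholder: decline every path entry

    # Previously, appending was enough because the default finder was an
    # implicit fallback tried after everything on the list.  With the
    # defaults now explicit entries at the end of sys.path_hooks, insert
    # your hook ahead of them instead of appending after them.
    sys.path_hooks.insert(0, my_hook)
    sys.path_importer_cache.clear()   # drop stale cached finders
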
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20120414/ae5e296a/attachment.html>

From guido at python.org  Sun Apr 15 00:32:02 2012
From: guido at python.org (Guido van Rossum)
Date: Sat, 14 Apr 2012 15:32:02 -0700
Subject: [Python-Dev] Require loaders set __package__ and __loader__
In-Reply-To: <CALFfu7CPUmdJ4CMmYHLXCjkWD_+fxM8_Y6YuCAROPfOv8nCYwg@mail.gmail.com>
References: <CAP1=2W4oX6qM78cRto9bqTVfy29CqVvqnTgvvOXSjBiPteQEeA@mail.gmail.com>
	<CALFfu7CPUmdJ4CMmYHLXCjkWD_+fxM8_Y6YuCAROPfOv8nCYwg@mail.gmail.com>
Message-ID: <CAP7+vJ+gd4piFLzJsJ6OTw2zKykQ7VihBJmeyMWu2-KiBy7YNw@mail.gmail.com>

On Sat, Apr 14, 2012 at 2:15 PM, Eric Snow <ericsnowcurrently at gmail.com> wrote:
> On Sat, Apr 14, 2012 at 2:56 PM, Brett Cannon <brett at python.org> wrote:
>> An open issue in PEP 302 is whether to require __loader__ attributes on
>> modules. The claimed worry is memory consumption, but considering importlib
>> and zipimport are already doing this that seems like a red herring.
>> Requiring it, though, opens the door to people relying on its existence and
>> thus starting to do things like loading assets with
>> ``__loader__.get_data(path_to_internal_package_file)`` which allows code to
>> not care how modules are stored (e.g. zip file, sqlite database, etc.).
>>
>> What I would like to do is update the PEP to state that loaders are expected
>> to set __loader__. Now importlib will get updated to do that implicitly so
>> external code can expect it post-import, but requiring loaders to set it
>> would mean that code executed during import can rely on it as well.
>>
>> As for __package__, PEP 366 states that modules should set it but it isn't
>> referenced by PEP 302. What I want to do is add a reference and make it
>> required like __loader__. Importlib already sets it implicitly post-import,
>> but once again it would be nice to do this pre-import.
>>
>> To help facilitate both new requirements, I would update the
>> importlib.util.module_for_loader decorator to set both on a module that
>> doesn't have them before passing the module down to the decorated method.
>> That way people already using the decorator don't have to worry about
>> anything and it is one less detail to have to worry about. I would also
>> update the docs on importlib.util.set_package and importlib.util.set_loader
>> to suggest people use importlib.util.module_for_loader and only use the
>> other two decorators for backwards-compatibility.
>
> +1

Funny, I was just thinking about having a simple standard API that
will let you open files (and list directories) relative to a given
module or package regardless of how the thing is loaded. If we
guarantee that there's always a __loader__ that's a first step, though
I think we may need to do a little more to get people who currently do
things like open(os.path.join(os.path.dirname(__file__),
'some_file_name')) to switch. I was thinking of having a stdlib
function that you give a module/package object, a relative filename,
and optionally a mode ('b' or 't') and that returns a stream -- and sibling
functions that return a string or bytes object (depending on what API
the user is using, either the stream or the data can be more useful).
What would we call those functions and where would they live?
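
A rough sketch of what such helpers could look like; the names and the
reliance on __loader__.get_data() are only assumptions for illustration,
not a settled API:

    import io
    import os

    def open_resource(module, name, mode='b'):
        """Return a stream for a file stored next to *module*, however the
        module itself was loaded (hypothetical helper)."""
        path = os.path.join(os.path.dirname(module.__file__), name)
        loader = getattr(module, '__loader__', None)
        if loader is not None and hasattr(loader, 'get_data'):
            data = loader.get_data(path)  # PEP 302: returns bytes
            if mode == 'b':
                return io.BytesIO(data)
            return io.StringIO(data.decode('utf-8'))
        return open(path, 'rb' if mode == 'b' else 'r')

    def read_resource(module, name, mode='b'):
        """Sibling helper that returns the bytes or str directly."""
        with open_resource(module, name, mode) as stream:
            return stream.read()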

-- 
--Guido van Rossum (python.org/~guido)

From lists at cheimes.de  Sun Apr 15 00:41:04 2012
From: lists at cheimes.de (Christian Heimes)
Date: Sun, 15 Apr 2012 00:41:04 +0200
Subject: [Python-Dev] Require loaders set __package__ and __loader__
In-Reply-To: <CAP7+vJ+gd4piFLzJsJ6OTw2zKykQ7VihBJmeyMWu2-KiBy7YNw@mail.gmail.com>
References: <CAP1=2W4oX6qM78cRto9bqTVfy29CqVvqnTgvvOXSjBiPteQEeA@mail.gmail.com>
	<CALFfu7CPUmdJ4CMmYHLXCjkWD_+fxM8_Y6YuCAROPfOv8nCYwg@mail.gmail.com>
	<CAP7+vJ+gd4piFLzJsJ6OTw2zKykQ7VihBJmeyMWu2-KiBy7YNw@mail.gmail.com>
Message-ID: <jmcue0$qs3$1@dough.gmane.org>

On 15.04.2012 00:32, Guido van Rossum wrote:
> Funny, I was just thinking about having a simple standard API that
> will let you open files (and list directories) relative to a given
> module or package regardless of how the thing is loaded. If we
> guarantee that there's always a __loader__ that's a first step, though
> I think we may need to do a little more to get people who currently do
> things like open(os.path.join(os.path.basename(__file__),
> 'some_file_name') to switch. I was thinking of having a stdlib
> function that you give a module/package object, a relative filename,
> and optionally a mode ('b' or 't') and returns a stream -- and sibling
> functions that return a string or bytes object (depending on what API
> the user is using either the stream or the data can be more useful).
> What would we call thos functions and where would the live?

pkg_resources has a similar API [1] that supports dotted names.
pkg_resources also does some caching for files that aren't stored on a
local file system (database, ZIP file, you name it). It should be
trivial to support both dotted names and module instances.
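
For reference, the ResourceManager calls in question look roughly like this
(setuptools/distribute must be installed; the package and file names are
invented):

    import pkg_resources

    # Raw bytes of a data file inside the 'monty' package.
    data = pkg_resources.resource_string('monty', 'data/spam.txt')

    # File-like object, or a real extracted filename (cached as needed).
    stream = pkg_resources.resource_stream('monty', 'data/spam.txt')
    path = pkg_resources.resource_filename('monty', 'data/spam.txt')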

Christian

[1]
http://packages.python.org/distribute/pkg_resources.html#resourcemanager-api


From brett at python.org  Sun Apr 15 00:50:30 2012
From: brett at python.org (Brett Cannon)
Date: Sat, 14 Apr 2012 18:50:30 -0400
Subject: [Python-Dev] Require loaders set __package__ and __loader__
In-Reply-To: <CAP7+vJ+gd4piFLzJsJ6OTw2zKykQ7VihBJmeyMWu2-KiBy7YNw@mail.gmail.com>
References: <CAP1=2W4oX6qM78cRto9bqTVfy29CqVvqnTgvvOXSjBiPteQEeA@mail.gmail.com>
	<CALFfu7CPUmdJ4CMmYHLXCjkWD_+fxM8_Y6YuCAROPfOv8nCYwg@mail.gmail.com>
	<CAP7+vJ+gd4piFLzJsJ6OTw2zKykQ7VihBJmeyMWu2-KiBy7YNw@mail.gmail.com>
Message-ID: <CAP1=2W6JHP6eMskU=CxjCTbGY_zAww5coyj7nYGOzPRuHT0n2Q@mail.gmail.com>

On Sat, Apr 14, 2012 at 18:32, Guido van Rossum <guido at python.org> wrote:

> On Sat, Apr 14, 2012 at 2:15 PM, Eric Snow <ericsnowcurrently at gmail.com>
> wrote:
> > On Sat, Apr 14, 2012 at 2:56 PM, Brett Cannon <brett at python.org> wrote:
> >> An open issue in PEP 302 is whether to require __loader__ attributes on
> >> modules. The claimed worry is memory consumption, but considering
> importlib
> >> and zipimport are already doing this that seems like a red herring.
> >> Requiring it, though, opens the door to people relying on its existence
> and
> >> thus starting to do things like loading assets with
> >> ``__loader__.get_data(path_to_internal_package_file)`` which allows
> code to
> >> not care how modules are stored (e.g. zip file, sqlite database, etc.).
> >>
> >> What I would like to do is update the PEP to state that loaders are
> expected
> >> to set __loader__. Now importlib will get updated to do that implicitly
> so
> >> external code can expect it post-import, but requiring loaders to set it
> >> would mean that code executed during import can rely on it as well.
> >>
> >> As for __package__, PEP 366 states that modules should set it but it
> isn't
> >> referenced by PEP 302. What I want to do is add a reference and make it
> >> required like __loader__. Importlib already sets it implicitly
> post-import,
> >> but once again it would be nice to do this pre-import.
> >>
> >> To help facilitate both new requirements, I would update the
> >> importlib.util.module_for_loader decorator to set both on a module that
> >> doesn't have them before passing the module down to the decorated
> method.
> >> That way people already using the decorator don't have to worry about
> >> anything and it is one less detail to have to worry about. I would also
> >> update the docs on importlib.util.set_package and
> importlib.util.set_loader
> >> to suggest people use importlib.util.module_for_loader and only use the
> >> other two decorators for backwards-compatibility.
> >
> > +1
>
> Funny, I was just thinking about having a simple standard API that
> will let you open files (and list directories) relative to a given
> module or package regardless of how the thing is loaded. If we
> guarantee that there's always a __loader__ that's a first step, though
> I think we may need to do a little more to get people who currently do
> things like open(os.path.join(os.path.basename(__file__),
> 'some_file_name') to switch. I was thinking of having a stdlib
> function that you give a module/package object, a relative filename,
> and optionally a mode ('b' or 't') and returns a stream -- and sibling
> functions that return a string or bytes object (depending on what API
> the user is using either the stream or the data can be more useful).
> What would we call thos functions and where would the live?


IOW, go one level lower than get_data() and return the stream, and then just
have helper functions which I guess just exhaust the stream for you and
return bytes or str? Or are you thinking that somehow providing a function
that can get an explicit bytes or str object will be more optimized than
doing something with the stream? Either way you will need new methods on
loaders to make it work more efficiently, since loaders only have get_data(),
which returns bytes and not a stream object. Plus there is currently no API
for listing the contents of a directory.

As for what to call such functions, I really don't know since they are
essentially abstract functions above the OS which work on whatever storage
backend a module uses.

For where they should live, it depends on whether you are viewing this as more
of a file abstraction or something that ties into modules. For the former it
seems like shutil or something else that deals with higher-order file
manipulation. If it's the latter I would say importlib.util.
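
As a concrete (and purely hypothetical) sketch of those helper functions,
built only on the get_data() method PEP 302 loaders already have:

    import io

    def read_bytes(loader, path):
        # get_data() is the only storage-neutral read API loaders offer
        # today; it returns the raw bytes for *path*.
        return loader.get_data(path)

    def read_text(loader, path, encoding='utf-8'):
        return read_bytes(loader, path).decode(encoding)

    def open_stream(loader, path):
        # No get_stream() exists on loaders, so a helper can only wrap the
        # bytes after the fact -- and nothing like a listdir() exists at all.
        return io.BytesIO(read_bytes(loader, path))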

From guido at python.org  Sun Apr 15 00:56:49 2012
From: guido at python.org (Guido van Rossum)
Date: Sat, 14 Apr 2012 15:56:49 -0700
Subject: [Python-Dev] Require loaders set __package__ and __loader__
In-Reply-To: <CAP1=2W6JHP6eMskU=CxjCTbGY_zAww5coyj7nYGOzPRuHT0n2Q@mail.gmail.com>
References: <CAP1=2W4oX6qM78cRto9bqTVfy29CqVvqnTgvvOXSjBiPteQEeA@mail.gmail.com>
	<CALFfu7CPUmdJ4CMmYHLXCjkWD_+fxM8_Y6YuCAROPfOv8nCYwg@mail.gmail.com>
	<CAP7+vJ+gd4piFLzJsJ6OTw2zKykQ7VihBJmeyMWu2-KiBy7YNw@mail.gmail.com>
	<CAP1=2W6JHP6eMskU=CxjCTbGY_zAww5coyj7nYGOzPRuHT0n2Q@mail.gmail.com>
Message-ID: <CAP7+vJ+KdkHky=+mNp9907_+HBou0u4TwnjO63PZVNJWmWU-mA@mail.gmail.com>

On Sat, Apr 14, 2012 at 3:50 PM, Brett Cannon <brett at python.org> wrote:
> On Sat, Apr 14, 2012 at 18:32, Guido van Rossum <guido at python.org> wrote:
>> Funny, I was just thinking about having a simple standard API that
>> will let you open files (and list directories) relative to a given
>> module or package regardless of how the thing is loaded. If we
>> guarantee that there's always a __loader__ that's a first step, though
>> I think we may need to do a little more to get people who currently do
>> things like open(os.path.join(os.path.basename(__file__),
>> 'some_file_name') to switch. I was thinking of having a stdlib
>> function that you give a module/package object, a relative filename,
>> and optionally a mode ('b' or 't') and returns a stream -- and sibling
>> functions that return a string or bytes object (depending on what API
>> the user is using either the stream or the data can be more useful).
>> What would we call thos functions and where would the live?

> IOW go one level lower than get_data() and return the stream and then just
> have helper functions which I guess just exhaust the stream for you to
> return bytes or str? Or are you thinking that somehow providing a function
> that can get an explicit bytes or str object will be more optimized than
> doing something with the stream? Either way you will need new methods on
> loaders to make it work more efficiently since loaders only have get_data()
> which returns bytes and not a stream object. Plus there is currently no API
> for listing the contents of a directory.

Well, if it's a real file, and you need a stream, that's efficient,
and if you need the data, you can read it. But if it comes from a
loader, and you need a stream, you'd have to wrap it in a StringIO
instance. So having two APIs, one to get a stream, and one to get the
data, allows the implementation to be more optimal -- it would be bad
to wrap a StringIO instance around data only so you can read the data
from the stream again...
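
A tiny sketch of the point, with hypothetical helpers that assume only the
PEP 302 get_data() method:

    import io

    def get_stream(module, path):
        loader = getattr(module, '__loader__', None)
        if loader is None or not hasattr(loader, 'get_data'):
            # Real file on disk: hand back the open file object directly.
            return open(path, 'rb')
        # Loader-backed storage: read everything and wrap it.
        return io.BytesIO(loader.get_data(path))

    def get_bytes(module, path):
        loader = getattr(module, '__loader__', None)
        if loader is not None and hasattr(loader, 'get_data'):
            return loader.get_data(path)
        # Real file: read it directly, no BytesIO detour needed.
        with open(path, 'rb') as f:
            return f.read()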

> As for what to call such functions, I really don't know since they are
> essentially abstract functions above the OS which work on whatever storage
> backend a module uses.
>
> For where they should live, it depends if you are viewing this as more of a
> file abstraction or something that ties into modules. For the former it
> seems like shutil or something that dealt with higher order file
> manipulation. If it's the latter I would say importlib.util.

if pkg_resources is in the stdlib that would be a fine place to put it.

-- 
--Guido van Rossum (python.org/~guido)

From brett at python.org  Sun Apr 15 00:58:00 2012
From: brett at python.org (Brett Cannon)
Date: Sat, 14 Apr 2012 18:58:00 -0400
Subject: [Python-Dev] Require loaders set __package__ and __loader__
In-Reply-To: <jmcue0$qs3$1@dough.gmane.org>
References: <CAP1=2W4oX6qM78cRto9bqTVfy29CqVvqnTgvvOXSjBiPteQEeA@mail.gmail.com>
	<CALFfu7CPUmdJ4CMmYHLXCjkWD_+fxM8_Y6YuCAROPfOv8nCYwg@mail.gmail.com>
	<CAP7+vJ+gd4piFLzJsJ6OTw2zKykQ7VihBJmeyMWu2-KiBy7YNw@mail.gmail.com>
	<jmcue0$qs3$1@dough.gmane.org>
Message-ID: <CAP1=2W5xWMLsy-axJ70RG6O7jAsKXMs6CzyT5ktGUo6d2k6Zig@mail.gmail.com>

On Sat, Apr 14, 2012 at 18:41, Christian Heimes <lists at cheimes.de> wrote:

> On 15.04.2012 00:32, Guido van Rossum wrote:
> > Funny, I was just thinking about having a simple standard API that
> > will let you open files (and list directories) relative to a given
> > module or package regardless of how the thing is loaded. If we
> > guarantee that there's always a __loader__ that's a first step, though
> > I think we may need to do a little more to get people who currently do
> > things like open(os.path.join(os.path.basename(__file__),
> > 'some_file_name') to switch. I was thinking of having a stdlib
> > function that you give a module/package object, a relative filename,
> > and optionally a mode ('b' or 't') and returns a stream -- and sibling
> > functions that return a string or bytes object (depending on what API
> > the user is using either the stream or the data can be more useful).
> > What would we call thos functions and where would the live?
>
> pkg_resources has a similar API [1] that supports dotted names.
> pkg_resources also does some caching for files that aren't stored on a
> local file system (database, ZIP file, you name it). It should be
> trivial to support both dotted names and module instances.
>
>
But that raises the question of whether this API should conflate module
hierarchies with file directories. Are we trying to support reading files
from within packages w/o caring about storage details but still
fundamentally working with files, or are we trying to abstract away the
concept of files and deal more with stored bytes inside packages? For the
former you would essentially want the root package and then simply specify
some file path. But for the latter you would want the module or package
that is next to or containing the data and grab it from there.

And I just realized that we would have to be quite clear that for namespace
packages it is what is in __file__ that people care about, else people
might expect some search to be performed on their behalf. Namespace
packages also dictate that you would want the module closest to the data in
the hierarchy to make sure you went down the right directory (e.g. if you
had the namespace package monty with modules spam and bacon but from
different directories, you really want to make sure you grab the right
module). I would argue that you can only go next to/within
modules/packages; going up would just cause confusion on where you were
grabbing from and going down could be done but makes things a little
messier.
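
A small sketch of the "go next to the module closest to the data" rule
(the helper is hypothetical):

    import os

    def path_next_to(module, filename):
        # Resolve *filename* against the directory that actually contains
        # *module*; for a namespace package split across directories this
        # picks the portion the given module came from, with no searching.
        return os.path.join(os.path.dirname(module.__file__), filename)

    # e.g. with namespace package 'monty' spread over two directories,
    # resolving against monty.spam uses spam's directory, not bacon's:
    # cfg_path = path_next_to(monty.spam, 'spam.cfg')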

-Brett


> Christian
>
> [1]
>
> http://packages.python.org/distribute/pkg_resources.html#resourcemanager-api
>

From brett at python.org  Sun Apr 15 00:59:39 2012
From: brett at python.org (Brett Cannon)
Date: Sat, 14 Apr 2012 18:59:39 -0400
Subject: [Python-Dev] Require loaders set __package__ and __loader__
In-Reply-To: <CAP7+vJ+KdkHky=+mNp9907_+HBou0u4TwnjO63PZVNJWmWU-mA@mail.gmail.com>
References: <CAP1=2W4oX6qM78cRto9bqTVfy29CqVvqnTgvvOXSjBiPteQEeA@mail.gmail.com>
	<CALFfu7CPUmdJ4CMmYHLXCjkWD_+fxM8_Y6YuCAROPfOv8nCYwg@mail.gmail.com>
	<CAP7+vJ+gd4piFLzJsJ6OTw2zKykQ7VihBJmeyMWu2-KiBy7YNw@mail.gmail.com>
	<CAP1=2W6JHP6eMskU=CxjCTbGY_zAww5coyj7nYGOzPRuHT0n2Q@mail.gmail.com>
	<CAP7+vJ+KdkHky=+mNp9907_+HBou0u4TwnjO63PZVNJWmWU-mA@mail.gmail.com>
Message-ID: <CAP1=2W7=wgKtok7Fb51Mr4VE2sX2C2w3W13_oisWW26XaqSdug@mail.gmail.com>

On Sat, Apr 14, 2012 at 18:56, Guido van Rossum <guido at python.org> wrote:

> On Sat, Apr 14, 2012 at 3:50 PM, Brett Cannon <brett at python.org> wrote:
> > On Sat, Apr 14, 2012 at 18:32, Guido van Rossum <guido at python.org>
> wrote:
> >> Funny, I was just thinking about having a simple standard API that
> >> will let you open files (and list directories) relative to a given
> >> module or package regardless of how the thing is loaded. If we
> >> guarantee that there's always a __loader__ that's a first step, though
> >> I think we may need to do a little more to get people who currently do
> >> things like open(os.path.join(os.path.basename(__file__),
> >> 'some_file_name') to switch. I was thinking of having a stdlib
> >> function that you give a module/package object, a relative filename,
> >> and optionally a mode ('b' or 't') and returns a stream -- and sibling
> >> functions that return a string or bytes object (depending on what API
> >> the user is using either the stream or the data can be more useful).
> >> What would we call thos functions and where would the live?
>
> > IOW go one level lower than get_data() and return the stream and then
> just
> > have helper functions which I guess just exhaust the stream for you to
> > return bytes or str? Or are you thinking that somehow providing a
> function
> > that can get an explicit bytes or str object will be more optimized than
> > doing something with the stream? Either way you will need new methods on
> > loaders to make it work more efficiently since loaders only have
> get_data()
> > which returns bytes and not a stream object. Plus there is currently no
> API
> > for listing the contents of a directory.
>
> Well, if it's a real file, and you need a stream, that's efficient,
> and if you need the data, you can read it. But if it comes from a
> loader, and you need a stream, you'd have to wrap it in a StringIO
> instance. So having two APIs, one to get a stream, and one to get the
> data, allows the implementation to be more optimal -- it would be bad
> to wrap a StringIO instance around data only so you can read the data
> from the stream again...
>

Right, so you would need to grow the loader API, which is fine and can be
done in a backwards-compatible way using io.BytesIO and io.StringIO.
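
For instance, a hypothetical get_stream() could get a default implementation
built on the existing get_data(), so current loaders keep working (sketch
only, not an agreed interface):

    import io

    class StreamLoaderMixin:
        """Loaders that implement just the existing get_data() still gain a
        working, if less efficient, get_stream()."""

        def get_stream(self, path):
            # File-backed loaders would override this with a plain open();
            # the fallback merely wraps the bytes get_data() returns.
            return io.BytesIO(self.get_data(path))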


>
> > As for what to call such functions, I really don't know since they are
> > essentially abstract functions above the OS which work on whatever
> storage
> > backend a module uses.
> >
> > For where they should live, it depends if you are viewing this as more
> of a
> > file abstraction or something that ties into modules. For the former it
> > seems like shutil or something that dealt with higher order file
> > manipulation. If it's the latter I would say importlib.util.
>
> if pkg_resources is in the stdlib that would be a fine place to put it.
>

It's not.

-Brett


>
> --
> --Guido van Rossum (python.org/~guido)
>

From ericsnowcurrently at gmail.com  Sun Apr 15 01:38:07 2012
From: ericsnowcurrently at gmail.com (Eric Snow)
Date: Sat, 14 Apr 2012 17:38:07 -0600
Subject: [Python-Dev] making the import machinery explicit
In-Reply-To: <CAP1=2W7QSndNbqRvw4VrzxwkeGOuQJUoB3XEcaFrvdSx1cmCcA@mail.gmail.com>
References: <CAP1=2W5x0i3uzggpK5tK3Ve=97_-5n-zUJJx9VODA94wUN_DUQ@mail.gmail.com>
	<CALFfu7Aeuv2rzZpZFCmsfneF7VY79eXJ33jh6o_3sdiij=2mGw@mail.gmail.com>
	<CAP1=2W7QSndNbqRvw4VrzxwkeGOuQJUoB3XEcaFrvdSx1cmCcA@mail.gmail.com>
Message-ID: <CALFfu7Coge4Fj7V_qRCs8G1frgnWaRUvQXT4GchddDriaBhiwQ@mail.gmail.com>

On Sat, Apr 14, 2012 at 4:16 PM, Brett Cannon <brett at python.org> wrote:
> Once again, it's just code that needs updating to run on Python 3.3 so I
> don't view it as a concern. Going from list.append() to list.insert() (even
> if its ``list.insert(hook, len(list)-2)``) is not exactly difficult.

I'm fine with that.  It's not a big deal either way, especially with
how few people it directly impacts.

-eric

From lists at cheimes.de  Sun Apr 15 02:06:47 2012
From: lists at cheimes.de (Christian Heimes)
Date: Sun, 15 Apr 2012 02:06:47 +0200
Subject: [Python-Dev] Require loaders set __package__ and __loader__
In-Reply-To: <CAP7+vJ+KdkHky=+mNp9907_+HBou0u4TwnjO63PZVNJWmWU-mA@mail.gmail.com>
References: <CAP1=2W4oX6qM78cRto9bqTVfy29CqVvqnTgvvOXSjBiPteQEeA@mail.gmail.com>
	<CALFfu7CPUmdJ4CMmYHLXCjkWD_+fxM8_Y6YuCAROPfOv8nCYwg@mail.gmail.com>
	<CAP7+vJ+gd4piFLzJsJ6OTw2zKykQ7VihBJmeyMWu2-KiBy7YNw@mail.gmail.com>
	<CAP1=2W6JHP6eMskU=CxjCTbGY_zAww5coyj7nYGOzPRuHT0n2Q@mail.gmail.com>
	<CAP7+vJ+KdkHky=+mNp9907_+HBou0u4TwnjO63PZVNJWmWU-mA@mail.gmail.com>
Message-ID: <4F8A1117.5010708@cheimes.de>

On 15.04.2012 00:56, Guido van Rossum wrote:
> Well, if it's a real file, and you need a stream, that's efficient,
> and if you need the data, you can read it. But if it comes from a
> loader, and you need a stream, you'd have to wrap it in a StringIO
> instance. So having two APIs, one to get a stream, and one to get the
> data, allows the implementation to be more optimal -- it would be bad
> to wrap a StringIO instance around data only so you can read the data
> from the stream again...

We need a third way to access a file. The two methods get_data() and
get_stream() aren't sufficient for libraries that need a real file that
lives on the file system. In order to have real files the loader (or
some other abstraction layer) needs to create a temporary directory for
the current process and clean it up when the process ends. The file is
saved to the temporary directory the first time it's accessed.

The get_file() feature has a neat benefit. Since it transparently
extracts files from the loader, users can ship binary extensions and
shared libraries (DLLs) in a ZIP file and use them without too much hassle.
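
A minimal sketch of that idea; get_file() here is a hypothetical helper
layered on get_data(), not an existing loader method:

    import atexit
    import os
    import shutil
    import tempfile

    _extract_dir = None

    def get_file(loader, path):
        """Materialize *path* from *loader* as a real file in a per-process
        temporary directory and return its filesystem path."""
        global _extract_dir
        if _extract_dir is None:
            _extract_dir = tempfile.mkdtemp(prefix='py-resources-')
            atexit.register(shutil.rmtree, _extract_dir, True)
        target = os.path.join(_extract_dir, os.path.basename(path))
        if not os.path.exists(target):
            with open(target, 'wb') as f:
                f.write(loader.get_data(path))
        return target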

Christian

From brett at python.org  Sun Apr 15 04:03:43 2012
From: brett at python.org (Brett Cannon)
Date: Sat, 14 Apr 2012 22:03:43 -0400
Subject: [Python-Dev] [Python-checkins] cpython: Handle importing
	pkg.mod by executing
In-Reply-To: <E1SJEbQ-0005Om-Lb@dinsdale.python.org>
References: <E1SJEbQ-0005Om-Lb@dinsdale.python.org>
Message-ID: <CAP1=2W6papQh0br6h1ge7GcSTYZ_V7egGH0qFmBseH23zSfFDg@mail.gmail.com>

That commit message should have said "Handle importing pkg.mod -- by
executing ``__import__('mod', {'__packaging__': 'pkg'}, level=1)`` --
properly (and thus not segfaulting)."

Although honestly I'm not sure whether the semantics make sense, since this is
the equivalent of ``import .mod`` from within pkg and I'm not sure what
it should return: pkg or pkg.mod (currently it's the latter). Not sure I
even really care, since it's such a messed up way of specifying it and you
should be using importlib.import_module() anyway, which lacks these issues.
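
For comparison, the importlib spelling being recommended (the package and
module names are taken from the new test case):

    import importlib

    # Equivalent to ``from . import mod`` executed inside the 'crash'
    # package; unlike a raw __import__(..., level=1) call there is no
    # ambiguity about the return value -- you always get crash.mod itself.
    mod = importlib.import_module('.mod', package='crash')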

On Sat, Apr 14, 2012 at 21:50, brett.cannon <python-checkins at python.org>wrote:

> http://hg.python.org/cpython/rev/9e8cbf07068a
> changeset:   76312:9e8cbf07068a
> user:        Brett Cannon <brett at python.org>
> date:        Sat Apr 14 21:50:00 2012 -0400
> summary:
>  Handle importing pkg.mod by executing
> __import__('mod', {'__packaging__': 'pkg', level=1) w/o properly (and
> thus not segfaulting).
>
> files:
>  Lib/importlib/_bootstrap.py                         |    2 +-
>  Lib/importlib/test/import_/test_relative_imports.py |    9 +
>  Python/import.c                                     |   21 +-
>  Python/importlib.h                                  |  497 +++++----
>  4 files changed, 276 insertions(+), 253 deletions(-)
>
>
> diff --git a/Lib/importlib/_bootstrap.py b/Lib/importlib/_bootstrap.py
> --- a/Lib/importlib/_bootstrap.py
> +++ b/Lib/importlib/_bootstrap.py
> @@ -1083,7 +1083,7 @@
>             return module
>         else:
>             cut_off = len(name) - len(name.partition('.')[0])
> -            return sys.modules[module.__name__[:-cut_off]]
> +            return
> sys.modules[module.__name__[:len(module.__name__)-cut_off]]
>     else:
>         return _handle_fromlist(module, fromlist, _gcd_import)
>
> diff --git a/Lib/importlib/test/import_/test_relative_imports.py
> b/Lib/importlib/test/import_/test_relative_imports.py
> --- a/Lib/importlib/test/import_/test_relative_imports.py
> +++ b/Lib/importlib/test/import_/test_relative_imports.py
> @@ -193,6 +193,15 @@
>             self.assertEqual(module.__name__, '__runpy_pkg__.uncle.cousin')
>         self.relative_import_test(create, globals_, callback)
>
> +    def test_import_relative_import_no_fromlist(self):
> +        # Import a relative module w/ no fromlist.
> +        create = ['crash.__init__', 'crash.mod']
> +        globals_ = [{'__package__': 'crash', '__name__': 'crash'}]
> +        def callback(global_):
> +            import_util.import_('crash')
> +            mod = import_util.import_('mod', global_, {}, [], 1)
> +            self.assertEqual(mod.__name__, 'crash.mod')
> +        self.relative_import_test(create, globals_, callback)
>
>
>  def test_main():
> diff --git a/Python/import.c b/Python/import.c
> --- a/Python/import.c
> +++ b/Python/import.c
> @@ -3016,20 +3016,33 @@
>             Py_DECREF(partition);
>
>             if (level == 0) {
> -                final_mod = PyDict_GetItemWithError(interp->modules,
> front);
> +                final_mod = PyDict_GetItem(interp->modules, front);
>                 Py_DECREF(front);
> -                Py_XINCREF(final_mod);
> +                if (final_mod == NULL) {
> +                    PyErr_Format(PyExc_KeyError,
> +                                 "%R not in sys.modules as expected",
> front);
> +                }
> +                else {
> +                    Py_INCREF(final_mod);
> +                }
>             }
>             else {
>                 Py_ssize_t cut_off = PyUnicode_GetLength(name) -
>                                         PyUnicode_GetLength(front);
>                 Py_ssize_t abs_name_len = PyUnicode_GetLength(abs_name);
> -                PyObject *to_return = PyUnicode_Substring(name, 0,
> +                PyObject *to_return = PyUnicode_Substring(abs_name, 0,
>                                                         abs_name_len -
> cut_off);
>
>                 final_mod = PyDict_GetItem(interp->modules, to_return);
> -                Py_INCREF(final_mod);
>                 Py_DECREF(to_return);
> +                if (final_mod == NULL) {
> +                    PyErr_Format(PyExc_KeyError,
> +                                 "%R not in sys.modules as expected",
> +                                 to_return);
> +                }
> +                else {
> +                    Py_INCREF(final_mod);
> +                }
>             }
>         }
>         else {
> diff --git a/Python/importlib.h b/Python/importlib.h
> --- a/Python/importlib.h
> +++ b/Python/importlib.h
> @@ -2826,262 +2826,263 @@
>     0,115,12,0,0,0,0,7,15,1,12,1,10,1,12,1,
>     25,1,117,17,0,0,0,95,99,97,108,99,95,95,95,112,
>     97,99,107,97,103,101,95,95,99,5,0,0,0,0,0,0,
> -    0,8,0,0,0,4,0,0,0,67,0,0,0,115,192,0,
> +    0,8,0,0,0,5,0,0,0,67,0,0,0,115,204,0,
>     0,0,124,4,0,100,1,0,107,2,0,114,27,0,116,0,
>     0,124,0,0,131,1,0,125,5,0,110,30,0,116,1,0,
>     124,1,0,131,1,0,125,6,0,116,0,0,124,0,0,124,
> -    6,0,124,4,0,131,3,0,125,5,0,124,3,0,115,172,
> +    6,0,124,4,0,131,3,0,125,5,0,124,3,0,115,184,
>     0,124,4,0,100,1,0,107,2,0,114,99,0,116,2,0,
>     106,3,0,124,0,0,106,4,0,100,2,0,131,1,0,100,
>     1,0,25,25,83,124,0,0,115,109,0,124,5,0,83,116,
>     5,0,124,0,0,131,1,0,116,5,0,124,0,0,106,4,
>     0,100,2,0,131,1,0,100,1,0,25,131,1,0,24,125,
>     7,0,116,2,0,106,3,0,124,5,0,106,6,0,100,3,
> -    0,124,7,0,11,133,2,0,25,25,83,110,16,0,116,7,
> -    0,124,5,0,124,3,0,116,0,0,131,3,0,83,100,3,
> -    0,83,40,4,0,0,0,117,214,1,0,0,73,109,112,111,
> -    114,116,32,97,32,109,111,100,117,108,101,46,10,10,32,32,
> -    32,32,84,104,101,32,39,103,108,111,98,97,108,115,39,32,
> -    97,114,103,117,109,101,110,116,32,105,115,32,117,115,101,100,
> -    32,116,111,32,105,110,102,101,114,32,119,104,101,114,101,32,
> -    116,104,101,32,105,109,112,111,114,116,32,105,115,32,111,99,
> -    99,117,114,105,110,103,32,102,114,111,109,10,32,32,32,32,
> -    116,111,32,104,97,110,100,108,101,32,114,101,108,97,116,105,
> -    118,101,32,105,109,112,111,114,116,115,46,32,84,104,101,32,
> -    39,108,111,99,97,108,115,39,32,97,114,103,117,109,101,110,
> -    116,32,105,115,32,105,103,110,111,114,101,100,46,32,84,104,
> -    101,10,32,32,32,32,39,102,114,111,109,108,105,115,116,39,
> -    32,97,114,103,117,109,101,110,116,32,115,112,101,99,105,102,
> -    105,101,115,32,119,104,97,116,32,115,104,111,117,108,100,32,
> -    101,120,105,115,116,32,97,115,32,97,116,116,114,105,98,117,
> -    116,101,115,32,111,110,32,116,104,101,32,109,111,100,117,108,
> -    101,10,32,32,32,32,98,101,105,110,103,32,105,109,112,111,
> -    114,116,101,100,32,40,101,46,103,46,32,96,96,102,114,111,
> -    109,32,109,111,100,117,108,101,32,105,109,112,111,114,116,32,
> -    60,102,114,111,109,108,105,115,116,62,96,96,41,46,32,32,
> -    84,104,101,32,39,108,101,118,101,108,39,10,32,32,32,32,
> -    97,114,103,117,109,101,110,116,32,114,101,112,114,101,115,101,
> -    110,116,115,32,116,104,101,32,112,97,99,107,97,103,101,32,
> -    108,111,99,97,116,105,111,110,32,116,111,32,105,109,112,111,
> -    114,116,32,102,114,111,109,32,105,110,32,97,32,114,101,108,
> -    97,116,105,118,101,10,32,32,32,32,105,109,112,111,114,116,
> -    32,40,101,46,103,46,32,96,96,102,114,111,109,32,46,46,
> -    112,107,103,32,105,109,112,111,114,116,32,109,111,100,96,96,
> -    32,119,111,117,108,100,32,104,97,118,101,32,97,32,39,108,
> -    101,118,101,108,39,32,111,102,32,50,41,46,10,10,32,32,
> -    32,32,105,0,0,0,0,117,1,0,0,0,46,78,40,8,
> -    0,0,0,117,11,0,0,0,95,103,99,100,95,105,109,112,
> -    111,114,116,117,17,0,0,0,95,99,97,108,99,95,95,95,
> -    112,97,99,107,97,103,101,95,95,117,3,0,0,0,115,121,
> -    115,117,7,0,0,0,109,111,100,117,108,101,115,117,9,0,
> -    0,0,112,97,114,116,105,116,105,111,110,117,3,0,0,0,
> -    108,101,110,117,8,0,0,0,95,95,110,97,109,101,95,95,
> -    117,16,0,0,0,95,104,97,110,100,108,101,95,102,114,111,
> -    109,108,105,115,116,40,8,0,0,0,117,4,0,0,0,110,
> -    97,109,101,117,7,0,0,0,103,108,111,98,97,108,115,117,
> -    6,0,0,0,108,111,99,97,108,115,117,8,0,0,0,102,
> -    114,111,109,108,105,115,116,117,5,0,0,0,108,101,118,101,
> -    108,117,6,0,0,0,109,111,100,117,108,101,117,7,0,0,
> -    0,112,97,99,107,97,103,101,117,7,0,0,0,99,117,116,
> -    95,111,102,102,40,0,0,0,0,40,0,0,0,0,117,29,
> -    0,0,0,60,102,114,111,122,101,110,32,105,109,112,111,114,
> -    116,108,105,98,46,95,98,111,111,116,115,116,114,97,112,62,
> -    117,10,0,0,0,95,95,105,109,112,111,114,116,95,95,37,
> -    4,0,0,115,24,0,0,0,0,11,12,1,15,2,12,1,
> -    18,1,6,3,12,1,24,1,6,1,4,2,35,1,28,2,
> -    117,10,0,0,0,95,95,105,109,112,111,114,116,95,95,99,
> -    2,0,0,0,0,0,0,0,9,0,0,0,12,0,0,0,
> -    67,0,0,0,115,109,1,0,0,124,1,0,97,0,0,124,
> -    0,0,97,1,0,120,47,0,116,0,0,116,1,0,102,2,
> -    0,68,93,33,0,125,2,0,116,2,0,124,2,0,100,1,
> -    0,131,2,0,115,25,0,116,3,0,124,2,0,95,4,0,
> -    113,25,0,113,25,0,87,116,1,0,106,5,0,116,6,0,
> -    25,125,3,0,120,76,0,100,17,0,68,93,68,0,125,4,
> -    0,124,4,0,116,1,0,106,5,0,107,7,0,114,121,0,
> -    116,3,0,106,7,0,124,4,0,131,1,0,125,5,0,110,
> -    13,0,116,1,0,106,5,0,124,4,0,25,125,5,0,116,
> -    8,0,124,3,0,124,4,0,124,5,0,131,3,0,1,113,
> -    82,0,87,120,153,0,100,18,0,100,19,0,100,20,0,103,
> -    3,0,68,93,124,0,92,2,0,125,6,0,125,7,0,124,
> -    6,0,116,1,0,106,5,0,107,6,0,114,214,0,116,1,
> -    0,106,5,0,124,6,0,25,125,8,0,80,113,170,0,121,
> -    56,0,116,3,0,106,7,0,124,6,0,131,1,0,125,8,
> -    0,124,6,0,100,10,0,107,2,0,114,12,1,100,11,0,
> -    116,1,0,106,9,0,107,6,0,114,12,1,100,7,0,125,
> -    7,0,110,0,0,80,87,113,170,0,4,116,10,0,107,10,
> -    0,114,37,1,1,1,1,119,170,0,89,113,170,0,88,113,
> -    170,0,87,116,10,0,100,12,0,131,1,0,130,1,0,116,
> -    8,0,124,3,0,100,13,0,124,8,0,131,3,0,1,116,
> -    8,0,124,3,0,100,14,0,124,7,0,131,3,0,1,116,
> -    8,0,124,3,0,100,15,0,116,11,0,131,0,0,131,3,
> -    0,1,100,16,0,83,40,21,0,0,0,117,249,0,0,0,
> -    83,101,116,117,112,32,105,109,112,111,114,116,108,105,98,32,
> -    98,121,32,105,109,112,111,114,116,105,110,103,32,110,101,101,
> -    100,101,100,32,98,117,105,108,116,45,105,110,32,109,111,100,
> -    117,108,101,115,32,97,110,100,32,105,110,106,101,99,116,105,
> -    110,103,32,116,104,101,109,10,32,32,32,32,105,110,116,111,
> -    32,116,104,101,32,103,108,111,98,97,108,32,110,97,109,101,
> -    115,112,97,99,101,46,10,10,32,32,32,32,65,115,32,115,
> -    121,115,32,105,115,32,110,101,101,100,101,100,32,102,111,114,
> -    32,115,121,115,46,109,111,100,117,108,101,115,32,97,99,99,
> -    101,115,115,32,97,110,100,32,105,109,112,32,105,115,32,110,
> -    101,101,100,101,100,32,116,111,32,108,111,97,100,32,98,117,
> -    105,108,116,45,105,110,10,32,32,32,32,109,111,100,117,108,
> -    101,115,44,32,116,104,111,115,101,32,116,119,111,32,109,111,
> -    100,117,108,101,115,32,109,117,115,116,32,98,101,32,101,120,
> -    112,108,105,99,105,116,108,121,32,112,97,115,115,101,100,32,
> -    105,110,46,10,10,32,32,32,32,117,10,0,0,0,95,95,
> -    108,111,97,100,101,114,95,95,117,3,0,0,0,95,105,111,
> -    117,9,0,0,0,95,119,97,114,110,105,110,103,115,117,8,
> -    0,0,0,98,117,105,108,116,105,110,115,117,7,0,0,0,
> -    109,97,114,115,104,97,108,117,5,0,0,0,112,111,115,105,
> -    120,117,1,0,0,0,47,117,2,0,0,0,110,116,117,1,
> -    0,0,0,92,117,3,0,0,0,111,115,50,117,7,0,0,
> -    0,69,77,88,32,71,67,67,117,30,0,0,0,105,109,112,
> -    111,114,116,108,105,98,32,114,101,113,117,105,114,101,115,32,
> -    112,111,115,105,120,32,111,114,32,110,116,117,3,0,0,0,
> -    95,111,115,117,8,0,0,0,112,97,116,104,95,115,101,112,
> -    117,11,0,0,0,95,114,101,108,97,120,95,99,97,115,101,
> -    78,40,4,0,0,0,117,3,0,0,0,95,105,111,117,9,
> -    0,0,0,95,119,97,114,110,105,110,103,115,117,8,0,0,
> -    0,98,117,105,108,116,105,110,115,117,7,0,0,0,109,97,
> -    114,115,104,97,108,40,2,0,0,0,117,5,0,0,0,112,
> -    111,115,105,120,117,1,0,0,0,47,40,2,0,0,0,117,
> -    2,0,0,0,110,116,117,1,0,0,0,92,40,2,0,0,
> -    0,117,3,0,0,0,111,115,50,117,1,0,0,0,92,40,
> -    12,0,0,0,117,3,0,0,0,105,109,112,117,3,0,0,
> -    0,115,121,115,117,7,0,0,0,104,97,115,97,116,116,114,
> -    117,15,0,0,0,66,117,105,108,116,105,110,73,109,112,111,
> -    114,116,101,114,117,10,0,0,0,95,95,108,111,97,100,101,
> -    114,95,95,117,7,0,0,0,109,111,100,117,108,101,115,117,
> -    8,0,0,0,95,95,110,97,109,101,95,95,117,11,0,0,
> -    0,108,111,97,100,95,109,111,100,117,108,101,117,7,0,0,
> -    0,115,101,116,97,116,116,114,117,7,0,0,0,118,101,114,
> -    115,105,111,110,117,11,0,0,0,73,109,112,111,114,116,69,
> -    114,114,111,114,117,16,0,0,0,95,109,97,107,101,95,114,
> -    101,108,97,120,95,99,97,115,101,40,9,0,0,0,117,10,
> -    0,0,0,115,121,115,95,109,111,100,117,108,101,117,10,0,
> -    0,0,105,109,112,95,109,111,100,117,108,101,117,6,0,0,
> -    0,109,111,100,117,108,101,117,11,0,0,0,115,101,108,102,
> -    95,109,111,100,117,108,101,117,12,0,0,0,98,117,105,108,
> -    116,105,110,95,110,97,109,101,117,14,0,0,0,98,117,105,
> -    108,116,105,110,95,109,111,100,117,108,101,117,10,0,0,0,
> -    98,117,105,108,116,105,110,95,111,115,117,8,0,0,0,112,
> -    97,116,104,95,115,101,112,117,9,0,0,0,111,115,95,109,
> -    111,100,117,108,101,40,0,0,0,0,40,0,0,0,0,117,
> -    29,0,0,0,60,102,114,111,122,101,110,32,105,109,112,111,
> -    114,116,108,105,98,46,95,98,111,111,116,115,116,114,97,112,
> -    62,117,6,0,0,0,95,115,101,116,117,112,67,4,0,0,
> -    115,52,0,0,0,0,9,6,1,6,2,19,1,15,1,16,
> -    2,13,1,13,1,15,1,18,2,13,1,20,2,28,1,15,
> -    1,13,1,4,2,3,1,15,2,27,1,9,1,5,1,13,
> -    1,12,2,12,1,16,1,16,2,117,6,0,0,0,95,115,
> -    101,116,117,112,99,2,0,0,0,0,0,0,0,3,0,0,
> -    0,3,0,0,0,67,0,0,0,115,44,0,0,0,116,0,
> -    0,124,0,0,124,1,0,131,2,0,1,116,1,0,106,2,
> -    0,125,2,0,116,2,0,116,1,0,95,2,0,124,2,0,
> -    116,1,0,95,3,0,100,1,0,83,40,2,0,0,0,117,
> -    201,0,0,0,73,110,115,116,97,108,108,32,105,109,112,111,
> -    114,116,108,105,98,32,97,115,32,116,104,101,32,105,109,112,
> -    108,101,109,101,110,116,97,116,105,111,110,32,111,102,32,105,
> -    109,112,111,114,116,46,10,10,32,32,32,32,73,116,32,105,
> -    115,32,97,115,115,117,109,101,100,32,116,104,97,116,32,105,
> -    109,112,32,97,110,100,32,115,121,115,32,104,97,118,101,32,
> -    98,101,101,110,32,105,109,112,111,114,116,101,100,32,97,110,
> -    100,32,105,110,106,101,99,116,101,100,32,105,110,116,111,32,
> -    116,104,101,10,32,32,32,32,103,108,111,98,97,108,32,110,
> -    97,109,101,115,112,97,99,101,32,102,111,114,32,116,104,101,
> -    32,109,111,100,117,108,101,32,112,114,105,111,114,32,116,111,
> -    32,99,97,108,108,105,110,103,32,116,104,105,115,32,102,117,
> -    110,99,116,105,111,110,46,10,10,32,32,32,32,78,40,4,
> -    0,0,0,117,6,0,0,0,95,115,101,116,117,112,117,8,
> -    0,0,0,98,117,105,108,116,105,110,115,117,10,0,0,0,
> -    95,95,105,109,112,111,114,116,95,95,117,19,0,0,0,95,
> -    95,111,114,105,103,105,110,97,108,95,105,109,112,111,114,116,
> -    95,95,40,3,0,0,0,117,10,0,0,0,115,121,115,95,
> -    109,111,100,117,108,101,117,10,0,0,0,105,109,112,95,109,
> -    111,100,117,108,101,117,11,0,0,0,111,114,105,103,95,105,
> -    109,112,111,114,116,40,0,0,0,0,40,0,0,0,0,117,
> -    29,0,0,0,60,102,114,111,122,101,110,32,105,109,112,111,
> -    114,116,108,105,98,46,95,98,111,111,116,115,116,114,97,112,
> -    62,117,8,0,0,0,95,105,110,115,116,97,108,108,112,4,
> -    0,0,115,8,0,0,0,0,7,13,1,9,1,9,1,117,
> -    8,0,0,0,95,105,110,115,116,97,108,108,78,40,3,0,
> -    0,0,117,3,0,0,0,119,105,110,117,6,0,0,0,99,
> -    121,103,119,105,110,117,6,0,0,0,100,97,114,119,105,110,
> -    40,55,0,0,0,117,7,0,0,0,95,95,100,111,99,95,
> -    95,117,26,0,0,0,67,65,83,69,95,73,78,83,69,78,
> -    83,73,84,73,86,69,95,80,76,65,84,70,79,82,77,83,
> +    0,116,5,0,124,5,0,106,6,0,131,1,0,124,7,0,
> +    24,133,2,0,25,25,83,110,16,0,116,7,0,124,5,0,
> +    124,3,0,116,0,0,131,3,0,83,100,3,0,83,40,4,
> +    0,0,0,117,214,1,0,0,73,109,112,111,114,116,32,97,
> +    32,109,111,100,117,108,101,46,10,10,32,32,32,32,84,104,
> +    101,32,39,103,108,111,98,97,108,115,39,32,97,114,103,117,
> +    109,101,110,116,32,105,115,32,117,115,101,100,32,116,111,32,
> +    105,110,102,101,114,32,119,104,101,114,101,32,116,104,101,32,
> +    105,109,112,111,114,116,32,105,115,32,111,99,99,117,114,105,
> +    110,103,32,102,114,111,109,10,32,32,32,32,116,111,32,104,
> +    97,110,100,108,101,32,114,101,108,97,116,105,118,101,32,105,
> +    109,112,111,114,116,115,46,32,84,104,101,32,39,108,111,99,
> +    97,108,115,39,32,97,114,103,117,109,101,110,116,32,105,115,
> +    32,105,103,110,111,114,101,100,46,32,84,104,101,10,32,32,
> +    32,32,39,102,114,111,109,108,105,115,116,39,32,97,114,103,
> +    117,109,101,110,116,32,115,112,101,99,105,102,105,101,115,32,
> +    119,104,97,116,32,115,104,111,117,108,100,32,101,120,105,115,
> +    116,32,97,115,32,97,116,116,114,105,98,117,116,101,115,32,
> +    111,110,32,116,104,101,32,109,111,100,117,108,101,10,32,32,
> +    32,32,98,101,105,110,103,32,105,109,112,111,114,116,101,100,
> +    32,40,101,46,103,46,32,96,96,102,114,111,109,32,109,111,
> +    100,117,108,101,32,105,109,112,111,114,116,32,60,102,114,111,
> +    109,108,105,115,116,62,96,96,41,46,32,32,84,104,101,32,
> +    39,108,101,118,101,108,39,10,32,32,32,32,97,114,103,117,
> +    109,101,110,116,32,114,101,112,114,101,115,101,110,116,115,32,
> +    116,104,101,32,112,97,99,107,97,103,101,32,108,111,99,97,
> +    116,105,111,110,32,116,111,32,105,109,112,111,114,116,32,102,
> +    114,111,109,32,105,110,32,97,32,114,101,108,97,116,105,118,
> +    101,10,32,32,32,32,105,109,112,111,114,116,32,40,101,46,
> +    103,46,32,96,96,102,114,111,109,32,46,46,112,107,103,32,
> +    105,109,112,111,114,116,32,109,111,100,96,96,32,119,111,117,
> +    108,100,32,104,97,118,101,32,97,32,39,108,101,118,101,108,
> +    39,32,111,102,32,50,41,46,10,10,32,32,32,32,105,0,
> +    0,0,0,117,1,0,0,0,46,78,40,8,0,0,0,117,
> +    11,0,0,0,95,103,99,100,95,105,109,112,111,114,116,117,
> +    17,0,0,0,95,99,97,108,99,95,95,95,112,97,99,107,
> +    97,103,101,95,95,117,3,0,0,0,115,121,115,117,7,0,
> +    0,0,109,111,100,117,108,101,115,117,9,0,0,0,112,97,
> +    114,116,105,116,105,111,110,117,3,0,0,0,108,101,110,117,
> +    8,0,0,0,95,95,110,97,109,101,95,95,117,16,0,0,
> +    0,95,104,97,110,100,108,101,95,102,114,111,109,108,105,115,
> +    116,40,8,0,0,0,117,4,0,0,0,110,97,109,101,117,
> +    7,0,0,0,103,108,111,98,97,108,115,117,6,0,0,0,
> +    108,111,99,97,108,115,117,8,0,0,0,102,114,111,109,108,
> +    105,115,116,117,5,0,0,0,108,101,118,101,108,117,6,0,
> +    0,0,109,111,100,117,108,101,117,7,0,0,0,112,97,99,
> +    107,97,103,101,117,7,0,0,0,99,117,116,95,111,102,102,
> +    40,0,0,0,0,40,0,0,0,0,117,29,0,0,0,60,
> +    102,114,111,122,101,110,32,105,109,112,111,114,116,108,105,98,
> +    46,95,98,111,111,116,115,116,114,97,112,62,117,10,0,0,
> +    0,95,95,105,109,112,111,114,116,95,95,37,4,0,0,115,
> +    24,0,0,0,0,11,12,1,15,2,12,1,18,1,6,3,
> +    12,1,24,1,6,1,4,2,35,1,40,2,117,10,0,0,
> +    0,95,95,105,109,112,111,114,116,95,95,99,2,0,0,0,
> +    0,0,0,0,9,0,0,0,12,0,0,0,67,0,0,0,
> +    115,109,1,0,0,124,1,0,97,0,0,124,0,0,97,1,
> +    0,120,47,0,116,0,0,116,1,0,102,2,0,68,93,33,
> +    0,125,2,0,116,2,0,124,2,0,100,1,0,131,2,0,
> +    115,25,0,116,3,0,124,2,0,95,4,0,113,25,0,113,
> +    25,0,87,116,1,0,106,5,0,116,6,0,25,125,3,0,
> +    120,76,0,100,17,0,68,93,68,0,125,4,0,124,4,0,
> +    116,1,0,106,5,0,107,7,0,114,121,0,116,3,0,106,
> +    7,0,124,4,0,131,1,0,125,5,0,110,13,0,116,1,
> +    0,106,5,0,124,4,0,25,125,5,0,116,8,0,124,3,
> +    0,124,4,0,124,5,0,131,3,0,1,113,82,0,87,120,
> +    153,0,100,18,0,100,19,0,100,20,0,103,3,0,68,93,
> +    124,0,92,2,0,125,6,0,125,7,0,124,6,0,116,1,
> +    0,106,5,0,107,6,0,114,214,0,116,1,0,106,5,0,
> +    124,6,0,25,125,8,0,80,113,170,0,121,56,0,116,3,
> +    0,106,7,0,124,6,0,131,1,0,125,8,0,124,6,0,
> +    100,10,0,107,2,0,114,12,1,100,11,0,116,1,0,106,
> +    9,0,107,6,0,114,12,1,100,7,0,125,7,0,110,0,
> +    0,80,87,113,170,0,4,116,10,0,107,10,0,114,37,1,
> +    1,1,1,119,170,0,89,113,170,0,88,113,170,0,87,116,
> +    10,0,100,12,0,131,1,0,130,1,0,116,8,0,124,3,
> +    0,100,13,0,124,8,0,131,3,0,1,116,8,0,124,3,
> +    0,100,14,0,124,7,0,131,3,0,1,116,8,0,124,3,
> +    0,100,15,0,116,11,0,131,0,0,131,3,0,1,100,16,
> +    0,83,40,21,0,0,0,117,249,0,0,0,83,101,116,117,
> +    112,32,105,109,112,111,114,116,108,105,98,32,98,121,32,105,
> +    109,112,111,114,116,105,110,103,32,110,101,101,100,101,100,32,
> +    98,117,105,108,116,45,105,110,32,109,111,100,117,108,101,115,
> +    32,97,110,100,32,105,110,106,101,99,116,105,110,103,32,116,
> +    104,101,109,10,32,32,32,32,105,110,116,111,32,116,104,101,
> +    32,103,108,111,98,97,108,32,110,97,109,101,115,112,97,99,
> +    101,46,10,10,32,32,32,32,65,115,32,115,121,115,32,105,
> +    115,32,110,101,101,100,101,100,32,102,111,114,32,115,121,115,
> +    46,109,111,100,117,108,101,115,32,97,99,99,101,115,115,32,
> +    97,110,100,32,105,109,112,32,105,115,32,110,101,101,100,101,
> +    100,32,116,111,32,108,111,97,100,32,98,117,105,108,116,45,
> +    105,110,10,32,32,32,32,109,111,100,117,108,101,115,44,32,
> +    116,104,111,115,101,32,116,119,111,32,109,111,100,117,108,101,
> +    115,32,109,117,115,116,32,98,101,32,101,120,112,108,105,99,
> +    105,116,108,121,32,112,97,115,115,101,100,32,105,110,46,10,
> +    10,32,32,32,32,117,10,0,0,0,95,95,108,111,97,100,
> +    101,114,95,95,117,3,0,0,0,95,105,111,117,9,0,0,
> +    0,95,119,97,114,110,105,110,103,115,117,8,0,0,0,98,
> +    117,105,108,116,105,110,115,117,7,0,0,0,109,97,114,115,
> +    104,97,108,117,5,0,0,0,112,111,115,105,120,117,1,0,
> +    0,0,47,117,2,0,0,0,110,116,117,1,0,0,0,92,
> +    117,3,0,0,0,111,115,50,117,7,0,0,0,69,77,88,
> +    32,71,67,67,117,30,0,0,0,105,109,112,111,114,116,108,
> +    105,98,32,114,101,113,117,105,114,101,115,32,112,111,115,105,
> +    120,32,111,114,32,110,116,117,3,0,0,0,95,111,115,117,
> +    8,0,0,0,112,97,116,104,95,115,101,112,117,11,0,0,
> +    0,95,114,101,108,97,120,95,99,97,115,101,78,40,4,0,
> +    0,0,117,3,0,0,0,95,105,111,117,9,0,0,0,95,
> +    119,97,114,110,105,110,103,115,117,8,0,0,0,98,117,105,
> +    108,116,105,110,115,117,7,0,0,0,109,97,114,115,104,97,
> +    108,40,2,0,0,0,117,5,0,0,0,112,111,115,105,120,
> +    117,1,0,0,0,47,40,2,0,0,0,117,2,0,0,0,
> +    110,116,117,1,0,0,0,92,40,2,0,0,0,117,3,0,
> +    0,0,111,115,50,117,1,0,0,0,92,40,12,0,0,0,
> +    117,3,0,0,0,105,109,112,117,3,0,0,0,115,121,115,
> +    117,7,0,0,0,104,97,115,97,116,116,114,117,15,0,0,
> +    0,66,117,105,108,116,105,110,73,109,112,111,114,116,101,114,
> +    117,10,0,0,0,95,95,108,111,97,100,101,114,95,95,117,
> +    7,0,0,0,109,111,100,117,108,101,115,117,8,0,0,0,
> +    95,95,110,97,109,101,95,95,117,11,0,0,0,108,111,97,
> +    100,95,109,111,100,117,108,101,117,7,0,0,0,115,101,116,
> +    97,116,116,114,117,7,0,0,0,118,101,114,115,105,111,110,
> +    117,11,0,0,0,73,109,112,111,114,116,69,114,114,111,114,
>     117,16,0,0,0,95,109,97,107,101,95,114,101,108,97,120,
> -    95,99,97,115,101,117,7,0,0,0,95,119,95,108,111,110,
> -    103,117,7,0,0,0,95,114,95,108,111,110,103,117,10,0,
> -    0,0,95,112,97,116,104,95,106,111,105,110,117,12,0,0,
> -    0,95,112,97,116,104,95,101,120,105,115,116,115,117,18,0,
> -    0,0,95,112,97,116,104,95,105,115,95,109,111,100,101,95,
> -    116,121,112,101,117,12,0,0,0,95,112,97,116,104,95,105,
> -    115,102,105,108,101,117,11,0,0,0,95,112,97,116,104,95,
> -    105,115,100,105,114,117,17,0,0,0,95,112,97,116,104,95,
> -    119,105,116,104,111,117,116,95,101,120,116,117,14,0,0,0,
> -    95,112,97,116,104,95,97,98,115,111,108,117,116,101,117,13,
> -    0,0,0,95,119,114,105,116,101,95,97,116,111,109,105,99,
> -    117,5,0,0,0,95,119,114,97,112,117,4,0,0,0,116,
> -    121,112,101,117,8,0,0,0,95,95,99,111,100,101,95,95,
> -    117,9,0,0,0,99,111,100,101,95,116,121,112,101,117,15,
> -    0,0,0,118,101,114,98,111,115,101,95,109,101,115,115,97,
> -    103,101,117,11,0,0,0,115,101,116,95,112,97,99,107,97,
> -    103,101,117,10,0,0,0,115,101,116,95,108,111,97,100,101,
> -    114,117,17,0,0,0,109,111,100,117,108,101,95,102,111,114,
> -    95,108,111,97,100,101,114,117,11,0,0,0,95,99,104,101,
> -    99,107,95,110,97,109,101,117,17,0,0,0,95,114,101,113,
> -    117,105,114,101,115,95,98,117,105,108,116,105,110,117,16,0,
> -    0,0,95,114,101,113,117,105,114,101,115,95,102,114,111,122,
> -    101,110,117,12,0,0,0,95,115,117,102,102,105,120,95,108,
> -    105,115,116,117,15,0,0,0,66,117,105,108,116,105,110,73,
> -    109,112,111,114,116,101,114,117,14,0,0,0,70,114,111,122,
> -    101,110,73,109,112,111,114,116,101,114,117,13,0,0,0,95,
> -    76,111,97,100,101,114,66,97,115,105,99,115,117,12,0,0,
> -    0,83,111,117,114,99,101,76,111,97,100,101,114,117,11,0,
> -    0,0,95,70,105,108,101,76,111,97,100,101,114,117,17,0,
> -    0,0,95,83,111,117,114,99,101,70,105,108,101,76,111,97,
> -    100,101,114,117,21,0,0,0,95,83,111,117,114,99,101,108,
> -    101,115,115,70,105,108,101,76,111,97,100,101,114,117,20,0,
> -    0,0,95,69,120,116,101,110,115,105,111,110,70,105,108,101,
> -    76,111,97,100,101,114,117,10,0,0,0,80,97,116,104,70,
> -    105,110,100,101,114,117,11,0,0,0,95,70,105,108,101,70,
> -    105,110,100,101,114,117,20,0,0,0,95,83,111,117,114,99,
> -    101,70,105,110,100,101,114,68,101,116,97,105,108,115,117,24,
> -    0,0,0,95,83,111,117,114,99,101,108,101,115,115,70,105,
> -    110,100,101,114,68,101,116,97,105,108,115,117,23,0,0,0,
> -    95,69,120,116,101,110,115,105,111,110,70,105,110,100,101,114,
> -    68,101,116,97,105,108,115,117,15,0,0,0,95,102,105,108,
> -    101,95,112,97,116,104,95,104,111,111,107,117,18,0,0,0,
> -    95,68,69,70,65,85,76,84,95,80,65,84,72,95,72,79,
> -    79,75,117,18,0,0,0,95,68,101,102,97,117,108,116,80,
> -    97,116,104,70,105,110,100,101,114,117,18,0,0,0,95,73,
> -    109,112,111,114,116,76,111,99,107,67,111,110,116,101,120,116,
> -    117,13,0,0,0,95,114,101,115,111,108,118,101,95,110,97,
> -    109,101,117,12,0,0,0,95,102,105,110,100,95,109,111,100,
> -    117,108,101,117,13,0,0,0,95,115,97,110,105,116,121,95,
> -    99,104,101,99,107,117,19,0,0,0,95,73,77,80,76,73,
> -    67,73,84,95,77,69,84,65,95,80,65,84,72,117,8,0,
> -    0,0,95,69,82,82,95,77,83,71,117,14,0,0,0,95,
> -    102,105,110,100,95,97,110,100,95,108,111,97,100,117,4,0,
> -    0,0,78,111,110,101,117,11,0,0,0,95,103,99,100,95,
> -    105,109,112,111,114,116,117,16,0,0,0,95,104,97,110,100,
> -    108,101,95,102,114,111,109,108,105,115,116,117,17,0,0,0,
> -    95,99,97,108,99,95,95,95,112,97,99,107,97,103,101,95,
> -    95,117,10,0,0,0,95,95,105,109,112,111,114,116,95,95,
> -    117,6,0,0,0,95,115,101,116,117,112,117,8,0,0,0,
> -    95,105,110,115,116,97,108,108,40,0,0,0,0,40,0,0,
> -    0,0,40,0,0,0,0,117,29,0,0,0,60,102,114,111,
> -    122,101,110,32,105,109,112,111,114,116,108,105,98,46,95,98,
> -    111,111,116,115,116,114,97,112,62,117,8,0,0,0,60,109,
> -    111,100,117,108,101,62,8,0,0,0,115,102,0,0,0,6,
> -    14,6,3,12,13,12,16,12,15,12,6,12,10,12,10,12,
> -    6,12,7,12,9,12,13,12,21,12,8,15,4,12,8,12,
> -    13,12,11,12,32,12,16,12,11,12,11,12,8,19,53,19,
> -    47,19,77,22,114,19,22,25,38,25,24,19,45,19,68,19,
> -    77,19,8,19,9,19,11,12,10,6,2,22,21,19,13,12,
> -    9,12,15,12,17,15,2,6,2,12,41,18,25,12,23,12,
> -    15,24,30,12,45,
> +    95,99,97,115,101,40,9,0,0,0,117,10,0,0,0,115,
> +    121,115,95,109,111,100,117,108,101,117,10,0,0,0,105,109,
> +    112,95,109,111,100,117,108,101,117,6,0,0,0,109,111,100,
> +    117,108,101,117,11,0,0,0,115,101,108,102,95,109,111,100,
> +    117,108,101,117,12,0,0,0,98,117,105,108,116,105,110,95,
> +    110,97,109,101,117,14,0,0,0,98,117,105,108,116,105,110,
> +    95,109,111,100,117,108,101,117,10,0,0,0,98,117,105,108,
> +    116,105,110,95,111,115,117,8,0,0,0,112,97,116,104,95,
> +    115,101,112,117,9,0,0,0,111,115,95,109,111,100,117,108,
> +    101,40,0,0,0,0,40,0,0,0,0,117,29,0,0,0,
> +    60,102,114,111,122,101,110,32,105,109,112,111,114,116,108,105,
> +    98,46,95,98,111,111,116,115,116,114,97,112,62,117,6,0,
> +    0,0,95,115,101,116,117,112,67,4,0,0,115,52,0,0,
> +    0,0,9,6,1,6,2,19,1,15,1,16,2,13,1,13,
> +    1,15,1,18,2,13,1,20,2,28,1,15,1,13,1,4,
> +    2,3,1,15,2,27,1,9,1,5,1,13,1,12,2,12,
> +    1,16,1,16,2,117,6,0,0,0,95,115,101,116,117,112,
> +    99,2,0,0,0,0,0,0,0,3,0,0,0,3,0,0,
> +    0,67,0,0,0,115,44,0,0,0,116,0,0,124,0,0,
> +    124,1,0,131,2,0,1,116,1,0,106,2,0,125,2,0,
> +    116,2,0,116,1,0,95,2,0,124,2,0,116,1,0,95,
> +    3,0,100,1,0,83,40,2,0,0,0,117,201,0,0,0,
> +    73,110,115,116,97,108,108,32,105,109,112,111,114,116,108,105,
> +    98,32,97,115,32,116,104,101,32,105,109,112,108,101,109,101,
> +    110,116,97,116,105,111,110,32,111,102,32,105,109,112,111,114,
> +    116,46,10,10,32,32,32,32,73,116,32,105,115,32,97,115,
> +    115,117,109,101,100,32,116,104,97,116,32,105,109,112,32,97,
> +    110,100,32,115,121,115,32,104,97,118,101,32,98,101,101,110,
> +    32,105,109,112,111,114,116,101,100,32,97,110,100,32,105,110,
> +    106,101,99,116,101,100,32,105,110,116,111,32,116,104,101,10,
> +    32,32,32,32,103,108,111,98,97,108,32,110,97,109,101,115,
> +    112,97,99,101,32,102,111,114,32,116,104,101,32,109,111,100,
> +    117,108,101,32,112,114,105,111,114,32,116,111,32,99,97,108,
> +    108,105,110,103,32,116,104,105,115,32,102,117,110,99,116,105,
> +    111,110,46,10,10,32,32,32,32,78,40,4,0,0,0,117,
> +    6,0,0,0,95,115,101,116,117,112,117,8,0,0,0,98,
> +    117,105,108,116,105,110,115,117,10,0,0,0,95,95,105,109,
> +    112,111,114,116,95,95,117,19,0,0,0,95,95,111,114,105,
> +    103,105,110,97,108,95,105,109,112,111,114,116,95,95,40,3,
> +    0,0,0,117,10,0,0,0,115,121,115,95,109,111,100,117,
> +    108,101,117,10,0,0,0,105,109,112,95,109,111,100,117,108,
> +    101,117,11,0,0,0,111,114,105,103,95,105,109,112,111,114,
> +    116,40,0,0,0,0,40,0,0,0,0,117,29,0,0,0,
> +    60,102,114,111,122,101,110,32,105,109,112,111,114,116,108,105,
> +    98,46,95,98,111,111,116,115,116,114,97,112,62,117,8,0,
> +    0,0,95,105,110,115,116,97,108,108,112,4,0,0,115,8,
> +    0,0,0,0,7,13,1,9,1,9,1,117,8,0,0,0,
> +    95,105,110,115,116,97,108,108,78,40,3,0,0,0,117,3,
> +    0,0,0,119,105,110,117,6,0,0,0,99,121,103,119,105,
> +    110,117,6,0,0,0,100,97,114,119,105,110,40,55,0,0,
> +    0,117,7,0,0,0,95,95,100,111,99,95,95,117,26,0,
> +    0,0,67,65,83,69,95,73,78,83,69,78,83,73,84,73,
> +    86,69,95,80,76,65,84,70,79,82,77,83,117,16,0,0,
> +    0,95,109,97,107,101,95,114,101,108,97,120,95,99,97,115,
> +    101,117,7,0,0,0,95,119,95,108,111,110,103,117,7,0,
> +    0,0,95,114,95,108,111,110,103,117,10,0,0,0,95,112,
> +    97,116,104,95,106,111,105,110,117,12,0,0,0,95,112,97,
> +    116,104,95,101,120,105,115,116,115,117,18,0,0,0,95,112,
> +    97,116,104,95,105,115,95,109,111,100,101,95,116,121,112,101,
> +    117,12,0,0,0,95,112,97,116,104,95,105,115,102,105,108,
> +    101,117,11,0,0,0,95,112,97,116,104,95,105,115,100,105,
> +    114,117,17,0,0,0,95,112,97,116,104,95,119,105,116,104,
> +    111,117,116,95,101,120,116,117,14,0,0,0,95,112,97,116,
> +    [frozen importlib._bootstrap bytecode table: several hundred lines of byte values elided]
>  };
>
> --
> Repository URL: http://hg.python.org/cpython
>
> _______________________________________________
> Python-checkins mailing list
> Python-checkins at python.org
> http://mail.python.org/mailman/listinfo/python-checkins
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20120414/bcec6f82/attachment-0001.html>

From guido at python.org  Sun Apr 15 04:59:33 2012
From: guido at python.org (Guido van Rossum)
Date: Sat, 14 Apr 2012 19:59:33 -0700
Subject: [Python-Dev] Require loaders set __package__ and __loader__
In-Reply-To: <4F8A1117.5010708@cheimes.de>
References: <CAP1=2W4oX6qM78cRto9bqTVfy29CqVvqnTgvvOXSjBiPteQEeA@mail.gmail.com>
	<CALFfu7CPUmdJ4CMmYHLXCjkWD_+fxM8_Y6YuCAROPfOv8nCYwg@mail.gmail.com>
	<CAP7+vJ+gd4piFLzJsJ6OTw2zKykQ7VihBJmeyMWu2-KiBy7YNw@mail.gmail.com>
	<CAP1=2W6JHP6eMskU=CxjCTbGY_zAww5coyj7nYGOzPRuHT0n2Q@mail.gmail.com>
	<CAP7+vJ+KdkHky=+mNp9907_+HBou0u4TwnjO63PZVNJWmWU-mA@mail.gmail.com>
	<4F8A1117.5010708@cheimes.de>
Message-ID: <CAP7+vJ+tqPWbzw9gk5Ad1Lt8cR3R65XcHYXapR9c8xPxo7QBag@mail.gmail.com>

On Sat, Apr 14, 2012 at 5:06 PM, Christian Heimes <lists at cheimes.de> wrote:
> Am 15.04.2012 00:56, schrieb Guido van Rossum:
>> Well, if it's a real file, and you need a stream, that's efficient,
>> and if you need the data, you can read it. But if it comes from a
>> loader, and you need a stream, you'd have to wrap it in a StringIO
>> instance. So having two APIs, one to get a stream, and one to get the
>> data, allows the implementation to be more optimal -- it would be bad
>> to wrap a StringIO instance around data only so you can read the data
>> from the stream again...
>
> We need a third way to access a file. The two methods get_data() and
> get_stream() aren't sufficient for libraries that need a real file that
> lives on the file system. In order to have real files the loader (or
> some other abstraction layer) needs to create a temporary directory for
> the current process and clean it up when the process ends. The file is
> saved to the temporary directory the first time it's accessed.
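
For concreteness, a minimal sketch of that temporary-extraction idea; the
get_file() helper and its caching scheme are hypothetical, not an existing
API (only the PEP 302 get_data() call is assumed):

    import atexit
    import os
    import shutil
    import tempfile

    _extract_dir = None  # per-process cache directory, created lazily

    def get_file(loader, package, resource):
        # Hypothetical helper: return a real filesystem path for *resource*,
        # extracting it via the loader's PEP 302 get_data() on first use.
        global _extract_dir
        if _extract_dir is None:
            _extract_dir = tempfile.mkdtemp(prefix="py-extract-")
            atexit.register(shutil.rmtree, _extract_dir, True)
        target = os.path.join(_extract_dir, package, resource)
        if not os.path.exists(target):
            os.makedirs(os.path.dirname(target), exist_ok=True)
            with open(target, "wb") as f:
                f.write(loader.get_data(resource))
        return target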

Hm... Can you give an example of a library that needs a real file?
That sounds like a poorly designed API.

Perhaps you're talking about APIs that take a filename instead of a
stream? Maybe for those it would be best to start getting serious
about a virtual filesystem... (Sorry, probably python-ideas stuff).

> The get_file() feature has a neat benefit. Since it transparently
> extracts files from the loader, users can ship binary extensions and
> shared libraries (dlls) in a ZIP file and use them without too much hassle.

Yeah, DLLs are about the only example I can think of where even a
virtual filesystem doesn't help...

-- 
--Guido van Rossum (python.org/~guido)

From anacrolix at gmail.com  Sun Apr 15 05:38:24 2012
From: anacrolix at gmail.com (Matt Joiner)
Date: Sun, 15 Apr 2012 11:38:24 +0800
Subject: [Python-Dev] making the import machinery explicit
In-Reply-To: <CAP1=2W5x0i3uzggpK5tK3Ve=97_-5n-zUJJx9VODA94wUN_DUQ@mail.gmail.com>
References: <CAP1=2W5x0i3uzggpK5tK3Ve=97_-5n-zUJJx9VODA94wUN_DUQ@mail.gmail.com>
Message-ID: <CAB4yi1NHVNb0UinmY8zBx23wB_SBu8zknJHQ4L4LQ8JhBiReMg@mail.gmail.com>

+1! Thanks for pushing this.
On Apr 15, 2012 4:04 AM, "Brett Cannon" <brett at python.org> wrote:

> To start off, what I am about to propose was brought up at the PyCon
> language summit and the whole room agreed with what I want to do here, so I
> honestly don't expect much of an argument (famous last words).
>
> In the "ancient" import.c days, a lot of import's stuff was hidden deep in
> the C code and in no way exposed to the user. But with importlib finishing
> PEP 302's phase 2 plans of getting import to be properly refactored to use
> importers, path hooks, etc., this need no longer be the case.
>
> So what I propose to do is stop having import have any kind of implicit
> machinery. This means sys.meta_path gets a path finder that does the heavy
> lifting for import and sys.path_hooks gets a hook which provides a default
> finder. As of right now those two pieces of machinery are entirely implicit
> in importlib and can't be modified, stopped, etc.
>
> If this happens, what changes? First, more of importlib will get publicly
> exposed (e.g. the meta path finder would become public instead of private
> like it is along with everything else that is publicly exposed). Second,
> import itself technically becomes much simpler since it really then is
> about resolving module names, traversing sys.meta_path, and then handling
> fromlist w/ everything else coming from how the path finder and path hook
> work.
>
> What also changes is that sys.meta_path and sys.path_hooks cannot be
> blindly reset w/o blowing out import. I doubt anyone is even touching those
> attributes in the common case, and the few that do can easily just stop
> wiping out those two lists. If people really care we can do a warning in
> 3.3 if they are found to be empty and then fall back to old semantics, but
> I honestly don't see this being an issue as backwards-compatibility would
> just require being more careful about what you delete (which I have been
> warning people to do for years now), which is a minor code change in
> line with what any new Python version entails.
>
> And lastly, sticking None in sys.path_importer_cache would no longer mean
> "do the implicit thing" and instead would mean the same as NullImporter
> does now (which also means import can put None into sys.path_importer_cache
> instead of NullImporter): no finder is available for an entry on sys.path
> when None is found. Once again, I don't see anyone explicitly sticking None
> into sys.path_importer_cache, and if they are they can easily stick what
> will be the newly exposed finder in there instead. The more common case
> would be people wiping out all entries of NullImporter so as to have a new
> sys.path_hook entry take effect. That code would instead need to clear out
> None on top of NullImporter as well (in Python 3.2 and earlier this would
> just be a performance loss, not a semantic change). So this too could
> change in Python 3.3 as long as people update their code like they do with
> any other new version of Python.
>
> In summary, I want no more magic "behind the curtain" for Python 3.3 and
> import: sys.meta_path and sys.path_hooks contain what they should and if
> they are emptied then imports will fail and None in sys.path_importer_cache
> means "no finder" instead of "use magical, implicit stuff".
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20120415/1cf9dfcd/attachment.html>

From brett at python.org  Sun Apr 15 07:51:39 2012
From: brett at python.org (Brett Cannon)
Date: Sun, 15 Apr 2012 01:51:39 -0400
Subject: [Python-Dev] [Python-checkins] Daily reference leaks
	(556b9bafdee8): sum=1144
In-Reply-To: <E1SJGMq-0005hL-3C@ap.vmr.nerim.net>
References: <E1SJGMq-0005hL-3C@ap.vmr.nerim.net>
Message-ID: <CAP1=2W5XCm86GrAbCEg1-y_2dKdXCGbwf1q2Ep9bY8NfZ9xxPA@mail.gmail.com>

I'm going to guess my bootstrap patch caused most of these. =) test_capi is
now plugged, so I'm going to assume Python/pythonrun.c:import_init() is
taken care of. The real question is where in
http://hg.python.org/cpython/rev/2dd046be2c88 are the other leaks coming
from. Any help would be great as I have been staring at this code for so
long I really don't want to have to go hunting for refleaks right now.

On Sat, Apr 14, 2012 at 23:43, <solipsis at pitrou.net> wrote:

> results for 556b9bafdee8 on branch "default"
> --------------------------------------------
>
> test_support leaked [-2, 2, 0] references, sum=0
> test_bz2 leaked [-1, -1, -1] references, sum=-3
> test_capi leaked [78, 78, 78] references, sum=234
> test_concurrent_futures leaked [120, 120, 120] references, sum=360
> test_hashlib leaked [-1, -1, -1] references, sum=-3
> test_import leaked [4, 4, 4] references, sum=12
> test_lib2to3 leaked [14, 14, 14] references, sum=42
> test_multiprocessing leaked [149, 149, 150] references, sum=448
> test_runpy leaked [18, 18, 18] references, sum=54
>
>
> Command line was: ['./python', '-m', 'test.regrtest', '-uall', '-R',
> '3:3:/home/antoine/cpython/refleaks/reflogBFPz19', '-x']
> _______________________________________________
> Python-checkins mailing list
> Python-checkins at python.org
> http://mail.python.org/mailman/listinfo/python-checkins
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20120415/5d92ed77/attachment.html>

From techtonik at gmail.com  Sun Apr 15 08:57:58 2012
From: techtonik at gmail.com (anatoly techtonik)
Date: Sun, 15 Apr 2012 09:57:58 +0300
Subject: [Python-Dev] Security issue with the tracker
In-Reply-To: <4F88763F.707@netwok.org>
References: <CAPkN8xK36QfLpd6XN845YdPQ-aA_g-fOh+vWNW2jK-B8Lne4Bg@mail.gmail.com>
	<CAPkN8xKSoVLgiKZSL_9kn6uLzJ3SqjC463p6nov9ye_3dZCsbA@mail.gmail.com>
	<4F88763F.707@netwok.org>
Message-ID: <CAPkN8xJOWQUBR-bEJ-by3h7xQHTGJdoc0VSrqJOtMrRAY01_1g@mail.gmail.com>

On Fri, Apr 13, 2012 at 9:53 PM, Éric Araujo <eric at netwok.org> wrote:
> bugs.python.org already sanitizes the ok_message and Ezio already posted a
> patch to the upstream bug tracker, so I don't see what else we could do.

I am +1 with Glyph that XSS protection in Roundup is an unreliable
hack. Ezio's patch just prolongs the agony - it doesn't make it
better, and the code becomes less maintainable. There are a few
alternatives:

1. Use a specialized library such as
http://pypi.python.org/pypi/MarkupSafe/ - the benefit is easier
maintenance, since future fixes arrive without waiting for somebody to
find the time to test attacks on Roundup (see the sketch after this list)
2. Quote all HTML on server side and use alternative (wiki) markup for
message decorations
3. Do not allow HTML content to be injected through the URL
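
For illustration, option 1 boils down to something like this sketch (it
uses MarkupSafe's documented escape() helper; the surrounding variable
names are made up):

    from markupsafe import escape

    # Escape any user-controlled text before embedding it in HTML, so
    # injected markup is rendered inert instead of being executed.
    ok_message = escape(user_supplied_message)
    page_fragment = "<p class='ok-message'>%s</p>" % ok_message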

> Also note that the Firefox extension NoScript blocks the XSS in this case.

NoScript blocks everything, doesn't it?

From urban.dani+py at gmail.com  Sun Apr 15 11:36:29 2012
From: urban.dani+py at gmail.com (Daniel Urban)
Date: Sun, 15 Apr 2012 11:36:29 +0200
Subject: [Python-Dev] Providing a mechanism for PEP 3115 compliant
 dynamic class creation
In-Reply-To: <BANLkTi=any_UMyHx76r-VxD4frV7Te16XQ@mail.gmail.com>
References: <BANLkTi=any_UMyHx76r-VxD4frV7Te16XQ@mail.gmail.com>
Message-ID: <CACoLFeS9JMj-JoQT2utU-9B6NqvLntBa3z-XXpfNSVSXPDd41g@mail.gmail.com>

On Tue, Apr 19, 2011 at 16:10, Nick Coghlan <ncoghlan at gmail.com> wrote:
> In reviewing a fix for the metaclass calculation in __build_class__
> [1], I realised that PEP 3115 poses a potential problem for the common
> practice of using "type(name, bases, ns)" for dynamic class creation.
>
> Specifically, if one of the base classes has a metaclass with a
> significant __prepare__() method, then the current idiom will do the
> wrong thing (and most likely fail as a result), since "ns" will
> probably be an ordinary dictionary instead of whatever __prepare__()
> would have returned.
>
> Initially I was going to suggest making __build_class__ part of the
> language definition rather than a CPython implementation detail, but
> then I realised that various CPython specific elements in its
> signature made that a bad idea.

Are you referring to the first 'func' argument? (Which is basically
the body of the "class" statement, if I'm not mistaken).

> Instead, I'm thinking along the lines of an
> "operator.prepare(metaclass, bases)" function that does the metaclass
> calculation dance, invoking __prepare__() and returning the result if
> it exists, otherwise returning an ordinary dict. Under the hood we
> would refactor this so that operator.prepare and __build_class__ were
> using a shared implementation of the functionality at the C level - it
> may even be advisable to expose that implementation via the C API as
> PyType_PrepareNamespace().

__prepare__ also needs the name and optional keyword arguments.  So it
probably should be something like "operator.prepare(name, bases,
metaclass, **kw)". But this way it would need almost the same
arguments as __build_class__(func, name, *bases, metaclass=None,
**kwds).

> The correct idiom for dynamic type creation in a PEP 3115 world would then be:
>
>     from operator import prepare
>     cls = type(name, bases, prepare(type, bases))
>
> Thoughts?

When creating a dynamic type, we may want to do it with a non-empty
namespace. Maybe like this (with the extra arguments mentioned above):

   from operator import prepare
   ns = prepare(name, bases, type, **kwargs)
   ns.update(my_ns)  # add the attributes we want
   cls = type(name, bases, ns)

What about an "operator.build_class(name, bases, ns, **kw)" function?
It would work like this:

   def build_class(name, bases, ns, **kw):
       metaclass = kw.pop('metaclass', type)
       pns = prepare(name, bases, metaclass, **kw)
       pns.update(ns)
       return metaclass(name, bases, pns)

(Where 'prepare' is the same as above).
This way we wouldn't even need to make 'prepare' public, and the new
way to create a dynamic type would be:

   from operator import build_class
   cls = build_class(name, bases, ns, **my_kwargs)


Daniel

From ncoghlan at gmail.com  Sun Apr 15 13:10:53 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 15 Apr 2012 21:10:53 +1000
Subject: [Python-Dev] Require loaders set __package__ and __loader__
In-Reply-To: <CAP7+vJ+tqPWbzw9gk5Ad1Lt8cR3R65XcHYXapR9c8xPxo7QBag@mail.gmail.com>
References: <CAP1=2W4oX6qM78cRto9bqTVfy29CqVvqnTgvvOXSjBiPteQEeA@mail.gmail.com>
	<CALFfu7CPUmdJ4CMmYHLXCjkWD_+fxM8_Y6YuCAROPfOv8nCYwg@mail.gmail.com>
	<CAP7+vJ+gd4piFLzJsJ6OTw2zKykQ7VihBJmeyMWu2-KiBy7YNw@mail.gmail.com>
	<CAP1=2W6JHP6eMskU=CxjCTbGY_zAww5coyj7nYGOzPRuHT0n2Q@mail.gmail.com>
	<CAP7+vJ+KdkHky=+mNp9907_+HBou0u4TwnjO63PZVNJWmWU-mA@mail.gmail.com>
	<4F8A1117.5010708@cheimes.de>
	<CAP7+vJ+tqPWbzw9gk5Ad1Lt8cR3R65XcHYXapR9c8xPxo7QBag@mail.gmail.com>
Message-ID: <CADiSq7d+Dt3a3U7MBGCzi4pKpDgk_SgCphn=2OwcBoGcRJj1=g@mail.gmail.com>

On Sun, Apr 15, 2012 at 12:59 PM, Guido van Rossum <guido at python.org> wrote:
> Hm... Can you give an example of a library that needs a real file?
> That sounds like a poorly designed API.

If you're invoking a separate utility (e.g. via its command line
interface), you may need a real filesystem path that you can pass
along.

>> The get_file() feature has a neat benefit. Since it transparently
>> extracts files from the loader, users can ship binary extensions and
>> shared libraries (dlls) in a ZIP file and use them without too much hassle.
>
> Yeah, DLLs are about the only example I can think of where even a
> virtual filesystem doesn't help...

An important example, though. However, I still don't believe it is
something we should necessarily be rushing into implementing in the
standard library in the *same* release that finally completes the
conversion started so long ago with PEP 302.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From ncoghlan at gmail.com  Sun Apr 15 13:26:30 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 15 Apr 2012 21:26:30 +1000
Subject: [Python-Dev] making the import machinery explicit
In-Reply-To: <CAP1=2W7QSndNbqRvw4VrzxwkeGOuQJUoB3XEcaFrvdSx1cmCcA@mail.gmail.com>
References: <CAP1=2W5x0i3uzggpK5tK3Ve=97_-5n-zUJJx9VODA94wUN_DUQ@mail.gmail.com>
	<CALFfu7Aeuv2rzZpZFCmsfneF7VY79eXJ33jh6o_3sdiij=2mGw@mail.gmail.com>
	<CAP1=2W7QSndNbqRvw4VrzxwkeGOuQJUoB3XEcaFrvdSx1cmCcA@mail.gmail.com>
Message-ID: <CADiSq7eCW90u7ipu_3WQH8Eb6+M-9M4Cquzwa6XvszGZbONv=g@mail.gmail.com>

Hooray for finally having this to the point where it has been pushed to trunk :)

On Sun, Apr 15, 2012 at 8:16 AM, Brett Cannon <brett at python.org> wrote:
> Once again, it's just code that needs updating to run on Python 3.3 so I
> don't view it as a concern. Going from list.append() to list.insert() (even
> if it's ``list.insert(len(list)-2, hook)``) is not exactly difficult.

I'm not sure you can so blithely wave away the "check this before the
standard hooks" problem. If the recommended approach becomes to insert
new hooks at the *start* of path_hooks and meta_path, then that should
work fairly well, since the new additions will take precedence
regardless of what other changes have already been made. However,
trying to be clever and say "before the standard hooks, but after
everything else" is fraught with peril, since there may be hooks
present in the lists *after* the standard ones so naive counting
wouldn't work.

As far as the guidelines for managing the import state go, it may be
worth having public "importlib.default_path_hooks" and
"importlib.default_meta_path" attributes.

Then "clearing" the hooks would just be a matter of resetting them
back to defaults: "sys.path_hooks[:] = importlib.default_path_hooks".
You could also locate them in the hooks list correctly by checking
"for i, hook in enumerate(sys.path_hooks): if hook is
importlib.default_path_hooks[0]: break"
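
Roughly, with that hypothetical importlib.default_path_hooks attribute, a
library wanting to run ahead of the standard hooks could do something like
this (a sketch, not an existing API):

    import sys
    import importlib  # default_path_hooks is the suggested attribute, not real yet

    def insert_before_defaults(hook):
        # Insert *hook* just ahead of the first standard path hook, or
        # append it if the defaults have been removed entirely.
        for i, existing in enumerate(sys.path_hooks):
            if existing is importlib.default_path_hooks[0]:
                sys.path_hooks.insert(i, hook)
                break
        else:
            sys.path_hooks.append(hook)
        sys.path_importer_cache.clear()  # let the new hook take effect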

Alternatively, it may be simpler to just expose a less granular
"reset_import_hooks()" function that restores meta_path and path_hooks
back to their default state (the defaults could then be private
attributes rather than public ones) and invalidates all the caches.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From ncoghlan at gmail.com  Sun Apr 15 13:48:06 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 15 Apr 2012 21:48:06 +1000
Subject: [Python-Dev] Providing a mechanism for PEP 3115 compliant
 dynamic class creation
In-Reply-To: <CACoLFeS9JMj-JoQT2utU-9B6NqvLntBa3z-XXpfNSVSXPDd41g@mail.gmail.com>
References: <BANLkTi=any_UMyHx76r-VxD4frV7Te16XQ@mail.gmail.com>
	<CACoLFeS9JMj-JoQT2utU-9B6NqvLntBa3z-XXpfNSVSXPDd41g@mail.gmail.com>
Message-ID: <CADiSq7eJf4FtRZSfSqZCJA+u=77Ziyty9okhFotTFX8k8Ye6Rg@mail.gmail.com>

/me pages thoughts from 12 months ago back into brain...

On Sun, Apr 15, 2012 at 7:36 PM, Daniel Urban <urban.dani+py at gmail.com> wrote:
> On Tue, Apr 19, 2011 at 16:10, Nick Coghlan <ncoghlan at gmail.com> wrote:
>> Initially I was going to suggest making __build_class__ part of the
>> language definition rather than a CPython implementation detail, but
>> then I realised that various CPython specific elements in its
>> signature made that a bad idea.
>
> Are you referring to the first 'func' argument? (Which is basically
> the body of the "class" statement, if I'm not mistaken).

Yup, I believe that was my main objection to exposing __build_class__
directly. There's no obligation for implementations to build a
throwaway function to evaluate a class body.

> __prepare__ also needs the name and optional keyword arguments.  So it
> probably should be something like "operator.prepare(name, bases,
> metaclass, **kw)". But this way it would need almost the same
> arguments as __build_class__(func, name, *bases, metaclass=None,
> **kwds).

True.

>> The correct idiom for dynamic type creation in a PEP 3115 world would then be:
>>
>>     from operator import prepare
>>     cls = type(name, bases, prepare(type, bases))
>>
>> Thoughts?
>
> When creating a dynamic type, we may want to do it with a non-empty
> namespace. Maybe like this (with the extra arguments mentioned above):
>
>    from operator import prepare
>    ns = prepare(name, bases, type, **kwargs)
>    ns.update(my_ns)  # add the attributes we want
>    cls = type(name, bases, ns)
>
> What about an "operator.build_class(name, bases, ns, **kw)" function?
> It would work like this:
>
>    def build_class(name, bases, ns, **kw):
>        metaclass = kw.pop('metaclass', type)
>        pns = prepare(name, bases, metaclass, **kw)
>        pns.update(ns)
>        return metaclass(name, bases, pns)
>
> (Where 'prepare' is the same as above).
> This way we wouldn't even need to make 'prepare' public, and the new
> way to create a dynamic type would be:
>
>    from operator import build_class
>    cls = build_class(name, bases, ns, **my_kwargs)

No, I think we would want to expose the created namespace directly -
that way people can use update(), direct assignment, exec(), eval(), or
whatever other mechanism they choose to handle the task of populating
the namespace. However, a potentially cleaner way to do that might be
to offer an optional callback API rather than exposing a separate
public prepare() function. Something like:

    def build_class(name, bases=(), kwds=None, eval_body=None):
        metaclass, ns = _prepare(name, bases, kwds)
        if eval_body is not None:
            eval_body(ns)
        return metaclass(name, bases, ns)
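
Usage of that sketch might then look like this (the names are made up; the
class body is populated through the namespace handed to the callback):

    def body(ns):
        ns['greeting'] = 'hello'
        ns['greet'] = lambda self: self.greeting

    Example = build_class('Example', (object,), eval_body=body)
    assert Example().greet() == 'hello'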

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From ncoghlan at gmail.com  Sun Apr 15 13:55:54 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 15 Apr 2012 21:55:54 +1000
Subject: [Python-Dev] Require loaders set __package__ and __loader__
In-Reply-To: <CAP7+vJ+gd4piFLzJsJ6OTw2zKykQ7VihBJmeyMWu2-KiBy7YNw@mail.gmail.com>
References: <CAP1=2W4oX6qM78cRto9bqTVfy29CqVvqnTgvvOXSjBiPteQEeA@mail.gmail.com>
	<CALFfu7CPUmdJ4CMmYHLXCjkWD_+fxM8_Y6YuCAROPfOv8nCYwg@mail.gmail.com>
	<CAP7+vJ+gd4piFLzJsJ6OTw2zKykQ7VihBJmeyMWu2-KiBy7YNw@mail.gmail.com>
Message-ID: <CADiSq7cXkMBnrHrNpZw-8u2gnosH-WE8U4Rcr1LoPk-PjN3gzQ@mail.gmail.com>

On Sun, Apr 15, 2012 at 8:32 AM, Guido van Rossum <guido at python.org> wrote:
> Funny, I was just thinking about having a simple standard API that
> will let you open files (and list directories) relative to a given
> module or package regardless of how the thing is loaded. If we
> guarantee that there's always a __loader__ that's a first step, though
> I think we may need to do a little more to get people who currently do
> things like open(os.path.join(os.path.dirname(__file__),
> 'some_file_name')) to switch. I was thinking of having a stdlib
> function that you give a module/package object, a relative filename,
> and optionally a mode ('b' or 't') and returns a stream -- and sibling
> functions that return a string or bytes object (depending on what API
> the user is using either the stream or the data can be more useful).
> What would we call those functions and where would they live?

We already offer pkgutil.get_data() for the latter API:
http://docs.python.org/library/pkgutil#pkgutil.get_data
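
For example (the package and resource names here are made up):

    import pkgutil

    # Returns the resource contents as bytes (or None if the loader has
    # no get_data() support), however 'mypkg' happens to be stored.
    data = pkgutil.get_data('mypkg', 'templates/page.html')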

There's no get_file() or get_filename() equivalent, since there's no
relevant API formally defined for PEP 302 loader objects (the closest
we have is get_filename(), which is only defined for the actual module
objects, not for arbitrary colocated files).

Now that importlib is the official import implementation, and is fully
PEP 302 compliant, large sections of pkgutil should either be
deprecated (the import emulation) or updated to be thin wrappers
around importlib (the package walking components and other utility
functions).

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From solipsis at pitrou.net  Sun Apr 15 14:53:34 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sun, 15 Apr 2012 14:53:34 +0200
Subject: [Python-Dev] cpython: Rebuild importlib.h to incorporate added
	comments.
References: <E1SJEbO-0005Nm-9K@dinsdale.python.org>
Message-ID: <20120415145334.7e8ae874@pitrou.net>

On Sun, 15 Apr 2012 03:50:06 +0200
brett.cannon <python-checkins at python.org> wrote:
> http://hg.python.org/cpython/rev/6a77697d2a63
> changeset:   76311:6a77697d2a63
> user:        Brett Cannon <brett at python.org>
> date:        Sat Apr 14 21:18:48 2012 -0400
> summary:
>   Rebuild importlib.h to incorporate added comments.

Isn't there a Makefile rule to rebuild it automatically?

Regards

Antoine.



From g.brandl at gmx.net  Sun Apr 15 16:42:17 2012
From: g.brandl at gmx.net (Georg Brandl)
Date: Sun, 15 Apr 2012 16:42:17 +0200
Subject: [Python-Dev] cpython: Rebuild importlib.h to incorporate added
	comments.
In-Reply-To: <20120415145334.7e8ae874@pitrou.net>
References: <E1SJEbO-0005Nm-9K@dinsdale.python.org>
	<20120415145334.7e8ae874@pitrou.net>
Message-ID: <jmemnj$o9p$1@dough.gmane.org>

On 15.04.2012 14:53, Antoine Pitrou wrote:
> On Sun, 15 Apr 2012 03:50:06 +0200
> brett.cannon<python-checkins at python.org>  wrote:
>>  http://hg.python.org/cpython/rev/6a77697d2a63
>>  changeset:   76311:6a77697d2a63
>>  user:        Brett Cannon<brett at python.org>
>>  date:        Sat Apr 14 21:18:48 2012 -0400
>>  summary:
>>    Rebuild importlib.h to incorporate added comments.
>
> Isn't there a Makefile rule to rebuild it automatically?

See the "importlib is now bootstrapped" thread for some problems with that.

Georg


From victor.stinner at gmail.com  Sun Apr 15 17:15:15 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Sun, 15 Apr 2012 17:15:15 +0200
Subject: [Python-Dev] [RFC] PEP 418: Add monotonic time,
 performance counter and process time functions
Message-ID: <CAMpsgwZxZiNcqTznROd6MiPnkhmy1XtKzVPCozWp+sSiZ8dUTg@mail.gmail.com>

Hi,

Here is a simplified version of the first draft of the PEP 418. The
full version can be read online.
http://www.python.org/dev/peps/pep-0418/

The implementation of the PEP can be found in this issue:
http://bugs.python.org/issue14428

I post a simplified version for readability and to focus on changes
introduced by the PEP. Removed sections: Existing Functions,
Deprecated Function, Glossary, Hardware clocks, Operating system time
functions, System Standby, Links.

---

PEP: 418
Title: Add monotonic time, performance counter and process time functions
Version: f2bb3f74298a
Last-Modified: 2012-04-15 17:06:07 +0200 (Sun, 15 Apr 2012)
Author: Cameron Simpson <cs at zip.com.au>, Jim Jewett
<jimjjewett at gmail.com>, Victor Stinner <victor.stinner at gmail.com>
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 26-March-2012
Python-Version: 3.3

Abstract
========

This PEP proposes to add ``time.get_clock_info(name)``,
``time.monotonic()``, ``time.perf_counter()`` and
``time.process_time()`` functions to Python 3.3.

Rationale
=========

If a program uses the system time to schedule events or to implement
a timeout, it will not run events at the right moment or stop the
timeout too early or too late when the system time is set manually or
adjusted automatically by NTP.  A monotonic clock should be used
instead to not be affected by system time updates:
``time.monotonic()``.

To measure the performance of a function, ``time.clock()`` can be used
but it is very different on Windows and on Unix.  On Windows,
``time.clock()`` includes time elapsed during sleep, whereas it does
not on Unix.  ``time.clock()`` precision is very good on Windows, but
very bad on Unix.  The new ``time.perf_counter()`` function should be
used instead to always get the most precise performance counter with a
portable behaviour (e.g. it includes time spent during sleep).

To measure CPU time, Python does not directly provide a portable
function.  ``time.clock()`` can be used on Unix, but its precision is
poor.  ``resource.getrusage()`` can also be used on Unix, but it
requires reading fields of a structure and computing the sum of time
spent in kernel space and user space.  The new ``time.process_time()``
function acts as a portable counter that always measures CPU time
(doesn't include time elapsed during sleep) and has the best available
precision.

Each operating system implements clocks and performance counters
differently, and it is useful to know exactly which function is used
and some properties of the clock like its resolution and its
precision.  The new ``time.get_clock_info()`` function gives access to
all available information of each Python time function.

New functions:

* ``time.monotonic()``: timeout and scheduling, not affected by system
  clock updates
* ``time.perf_counter()``: benchmarking, most precise clock for short
  period
* ``time.process_time()``: profiling, CPU time of the process

Users of new functions:

* time.monotonic(): concurrent.futures, multiprocessing, queue, subprocess,
  telnet and threading modules to implement timeout
* time.perf_counter(): trace and timeit modules, pybench program
* time.process_time(): profile module
* time.get_clock_info(): pybench program to display information about the
  timer like the precision or the resolution

The ``time.clock()`` function is deprecated because it is not
portable: it behaves differently depending on the operating system.
``time.perf_counter()`` or ``time.process_time()`` should be used
instead, depending on your requirements. ``time.clock()`` is marked as
deprecated but is not planned for removal.


Python functions
================

New Functions
-------------

time.get_clock_info(name)
^^^^^^^^^^^^^^^^^^^^^^^^^

Get information on the specified clock.  Supported clock names:

* ``"clock"``: ``time.clock()``
* ``"monotonic"``: ``time.monotonic()``
* ``"perf_counter"``: ``time.perf_counter()``
* ``"process_time"``: ``time.process_time()``
* ``"time"``: ``time.time()``

Return a dictionary with the following keys:

* Mandatory keys:

  * ``"implementation"`` (str): name of the underlying operating system
    function.  Examples: ``"QueryPerformanceCounter()"``,
    ``"clock_gettime(CLOCK_REALTIME)"``.
  * ``"resolution"`` (float): resolution in seconds of the clock.
  * ``"is_monotonic"`` (bool): True if the clock cannot go backward.

* Optional keys:

  * ``"precision"`` (float): precision in seconds of the clock
    reported by the operating system.
  * ``"is_adjusted"`` (bool): True if the clock is adjusted (e.g. by a
    NTP daemon).
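
Example usage as specified by this draft (the returned values are
illustrative only)::

    >>> import time
    >>> info = time.get_clock_info("monotonic")
    >>> info["implementation"]
    'clock_gettime(CLOCK_MONOTONIC)'
    >>> info["is_monotonic"]
    True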


time.monotonic()
^^^^^^^^^^^^^^^^

Monotonic clock, i.e. cannot go backward.  It is not affected by system
clock updates.  The reference point of the returned value is
undefined, so that only the difference between the results of
consecutive calls is valid and is a number of seconds.

On Windows versions older than Vista, ``time.monotonic()`` detects
``GetTickCount()`` integer overflow (32 bits, roll-over after 49.7
days): it increases a delta by 2\ :sup:`32` each time an overflow
is detected.  The delta is stored in the process-local state and so
the value of ``time.monotonic()`` may be different in two Python
processes running for more than 49 days. On more recent versions of
Windows and on other operating systems, ``time.monotonic()`` is
system-wide.

Availability: Windows, Mac OS X, Unix, Solaris. Not available on
GNU/Hurd.

Pseudo-code [#pseudo]_::

    if os.name == 'nt':
        # GetTickCount64() requires Windows Vista, Server 2008 or later
        if hasattr(time, '_GetTickCount64'):
            def monotonic():
                return _time.GetTickCount64() * 1e-3
        else:
            def monotonic():
                ticks = _time.GetTickCount()
                if ticks < monotonic.last:
                    # Integer overflow detected
                    monotonic.delta += 2**32
                monotonic.last = ticks
                return (ticks + monotonic.delta) * 1e-3
            monotonic.last = 0
            monotonic.delta = 0

    elif sys.platform == 'darwin':
        def monotonic():
            if monotonic.factor is None:
                timebase = _time.mach_timebase_info()
                monotonic.factor = timebase[0] / timebase[1]
            return _time.mach_absolute_time() * monotonic.factor
        monotonic.factor = None

    elif hasattr(time, "clock_gettime") and hasattr(time, "CLOCK_HIGHRES"):
        def monotonic():
            return time.clock_gettime(time.CLOCK_HIGHRES)

    elif hasattr(time, "clock_gettime") and hasattr(time, "CLOCK_MONOTONIC"):
        def monotonic():
            return time.clock_gettime(time.CLOCK_MONOTONIC)


On Windows, ``QueryPerformanceCounter()`` is not used even though it
has a better precision than ``GetTickCount()``.  It is not reliable
and has too many issues.
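
Example: a timeout loop built on the proposed function, immune to system
clock updates (a sketch; ``condition`` and the polling interval are
placeholders)::

    def wait_for(condition, timeout):
        deadline = time.monotonic() + timeout
        while not condition():
            if time.monotonic() > deadline:
                raise RuntimeError("timed out after %.1f seconds" % timeout)
            time.sleep(0.01)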


time.perf_counter()
^^^^^^^^^^^^^^^^^^^

Performance counter with the highest available precision to measure a
duration.  It does include time elapsed during sleep and is
system-wide.  The reference point of the returned value is undefined,
so that only the difference between the results of consecutive calls
is valid and is a number of seconds.

Pseudo-code::

    def perf_counter():
        if perf_counter.use_performance_counter:
            if perf_counter.performance_frequency is None:
                try:
                    perf_counter.performance_frequency = (
                        _time.QueryPerformanceFrequency())
                except OSError:
                    # QueryPerformanceFrequency() fails if the installed
                    # hardware does not support a high-resolution performance
                    # counter
                    perf_counter.use_performance_counter = False
                else:
                    return (_time.QueryPerformanceCounter() /
                            perf_counter.performance_frequency)
            else:
                return (_time.QueryPerformanceCounter() /
                        perf_counter.performance_frequency)
        if perf_counter.use_monotonic:
            # The monotonic clock is preferred over the system time
            try:
                return time.monotonic()
            except OSError:
                perf_counter.use_monotonic = False
        return time.time()
    perf_counter.use_performance_counter = (os.name == 'nt')
    if perf_counter.use_performance_counter:
        perf_counter.performance_frequency = None
    perf_counter.use_monotonic = hasattr(time, 'monotonic')
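
Typical benchmarking usage of the proposed function (``do_something()``
stands for any code under test)::

    start = time.perf_counter()
    do_something()
    elapsed = time.perf_counter() - start   # wall-clock-like duration,
                                            # sleeps included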


time.process_time()
^^^^^^^^^^^^^^^^^^^

Sum of the system and user CPU time of the current process. It does
not include time elapsed during sleep. It is process-wide by
definition.  The reference point of the returned value is undefined,
so that only the difference between the results of consecutive calls
is valid.

It is available on all platforms.

Pseudo-code [#pseudo]_::

    if os.name == 'nt':
        def process_time():
            handle = win32process.GetCurrentProcess()
            process_times = win32process.GetProcessTimes(handle)
            return (process_times['UserTime'] +
                    process_times['KernelTime']) * 1e-7
    else:
        import os
        try:
            import resource
        except ImportError:
            has_resource = False
        else:
            has_resource = True

        def process_time():
            if process_time.use_process_cputime:
                try:
                    return time.clock_gettime(time.CLOCK_PROCESS_CPUTIME_ID)
                except OSError:
                    process_time.use_process_cputime = False
            if process_time.use_getrusage:
                try:
                    usage = resource.getrusage(resource.RUSAGE_SELF)
                    return usage[0] + usage[1]
                except OSError:
                    process_time.use_getrusage = False
            if process_time.use_times:
                try:
                    times = os.times()
                    return times[0] + times[1]
                except OSError:
                    process_time.use_times = False
            return _time.clock()
        process_time.use_process_cputime = (
            hasattr(time, 'clock_gettime')
            and hasattr(time, 'CLOCK_PROCESS_CPUTIME_ID'))
        process_time.use_getrusage = has_resource
        # On OS/2, only the 5th field of os.times() is set, others are zeros
        process_time.use_times = (hasattr(os, 'times') and os.name != 'os2')
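
Example, illustrating that sleeping does not advance this clock
(``busy_loop()`` stands for any CPU-bound code)::

    start = time.process_time()
    time.sleep(1.0)    # not counted: the process is idle
    busy_loop()        # counted: CPU time spent in user space
    cpu_seconds = time.process_time() - start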


Alternatives: API design
========================

Other names for time.monotonic()
--------------------------------

* time.counter()
* time.metronomic()
* time.seconds()
* time.steady(): "steady" is ambiguous: it means different things to
  different people. For example, on Linux, CLOCK_MONOTONIC is
  adjusted. If we use the real time as the reference clock, we may
  say that CLOCK_MONOTONIC is steady.  But CLOCK_MONOTONIC gets
  suspended on system suspend, whereas real time includes any time
  spent in suspend.
* time.timeout_clock()
* time.wallclock(): time.monotonic() is not the system time aka the
  "wall clock", but a monotonic clock with an unspecified starting
  point.

The name "time.try_monotonic()" was also proposed for an older
proposition of time.monotonic() which was falling back to the system
time when no monotonic clock was available.

Other names for time.perf_counter()
-----------------------------------

* time.hires()
* time.highres()
* time.timer()

Only expose operating system clocks
-----------------------------------

To not have to define high-level clocks, which is a difficult task, a
simpler approach is to only expose operating system clocks.
time.clock_gettime() and related clock identifiers were already added
to Python 3.3 for example.


time.monotonic(): Fallback to system time
-----------------------------------------

If no monotonic clock is available, time.monotonic() falls back to the
system time.

Issues:

* It is hard to correctly define such a function in the documentation:
  is it monotonic? Is it steady? Is it adjusted?
* Some users want to decide what to do when no monotonic clock is
  available: use another clock, display an error, or do something
  else?

Different APIs were proposed to define such a function.

One function with a flag: time.monotonic(fallback=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

* time.monotonic(fallback=True) falls back to the system time if no
  monotonic clock is available or if the monotonic clock failed.
* time.monotonic(fallback=False) raises OSError if monotonic clock
  fails and NotImplementedError if the system does not provide a
  monotonic clock

A keyword argument that gets passed as a constant in the caller is
usually poor API.

Raising NotImplementedError for a function is something uncommon in
Python and should be avoided.


One time.monotonic() function, no flag
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

time.monotonic() returns (time: float, is_monotonic: bool).

An alternative is to use a function attribute:
time.monotonic.is_monotonic.  The attribute value would be None before
the first call to time.monotonic().


Choosing the clock from a list of constraints
---------------------------------------------

The PEP as proposed offers a few new clocks, but their guarantees
are deliberately loose in order to offer useful clocks on different
platforms. This inherently embeds policy in the calls, and the
caller must thus choose a policy.

The "choose a clock" approach suggests an additional API to let
callers implement their own policy if necessary
by making most platform clocks available and letting the caller pick
amongst them.
The PEP's suggested clocks are still expected to be available for the common
simple use cases.

To do this two facilities are needed:
an enumeration of clocks, and metadata on the clocks to enable the user to
evaluate their suitability.

The primary interface is a function to make simple choices easy:
the caller can use ``time.get_clock(*flags)`` with some combination of flags.
These include at least:

* time.MONOTONIC: clock cannot go backward
* time.STEADY: clock rate is steady
* time.ADJUSTED: clock may be adjusted, for example by NTP
* time.HIGHRES: clock with the highest precision

It returns a clock object with a .now() method returning the current time.
The clock object is annotated with metadata describing the clock feature set;
its .flags field will contain at least all the requested flags.

time.get_clock() returns None if no matching clock is found and so calls can
be chained using the or operator.  Example of a simple policy decision::

    T = get_clock(MONOTONIC) or get_clock(STEADY) or get_clock()
    t = T.now()

The available clocks always at least include a wrapper for ``time.time()``,
so a final call with no flags can always be used to obtain a working clock.

Example of flags of system clocks:

* QueryPerformanceCounter: MONOTONIC | HIGHRES
* GetTickCount: MONOTONIC | STEADY
* CLOCK_MONOTONIC: MONOTONIC | STEADY (or only MONOTONIC on Linux)
* CLOCK_MONOTONIC_RAW: MONOTONIC | STEADY
* gettimeofday(): (no flag)

The clock objects contain other metadata, including the clock flags
(with additional feature flags beyond those listed above), the name
of the underlying OS facility, and clock precisions.

``time.get_clock()`` still chooses a single clock; an enumeration
facility is also required.
The most obvious method is to offer ``time.get_clocks()`` with the
same signature as ``time.get_clock()``, but returning a sequence
of all clocks matching the requested flags.
Requesting no flags would thus enumerate all available clocks,
allowing the caller to make an arbitrary choice amongst them based
on their metadata.

Example partial implementation:
`clockutils.py <http://hg.python.org/peps/file/tip/pep-0418/clockutils.py>`_.

Working around operating system bugs?
-------------------------------------

Should Python manually ensure that a monotonic clock is truly
monotonic by computing the maximum of the current clock value and the
previous value?

Since it's relatively straightforward to cache the last value returned
using a static variable, it might be interesting to use this to make
sure that the values returned are indeed monotonic.
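
Such a workaround could be as small as the following sketch (it assumes a
single thread or external locking around the cached value)::

    def strictly_monotonic(_last=[None]):
        value = time.monotonic()
        if _last[0] is not None and value < _last[0]:
            value = _last[0]    # clamp: never report a smaller value
        _last[0] = value
        return value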

* Virtual machines provide less reliable clocks.
* QueryPerformanceCounter() has known bugs (only one is not fixed yet)

Python may only work around a specific known operating system bug:
`KB274323`_ contains a code example to work around the bug (use
GetTickCount() to detect QueryPerformanceCounter() leap).

Issues of a hacked monotonic function:

* if the clock is accidentally set forward by an hour and then back
  again, you wouldn't have a useful clock for an hour
* the cache is not shared between processes so different processes
  wouldn't see the same clock value

From victor.stinner at gmail.com  Sun Apr 15 17:18:35 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Sun, 15 Apr 2012 17:18:35 +0200
Subject: [Python-Dev] [RFC] PEP 418: Add monotonic time,
 performance counter and process time functions
In-Reply-To: <CAMpsgwZxZiNcqTznROd6MiPnkhmy1XtKzVPCozWp+sSiZ8dUTg@mail.gmail.com>
References: <CAMpsgwZxZiNcqTznROd6MiPnkhmy1XtKzVPCozWp+sSiZ8dUTg@mail.gmail.com>
Message-ID: <CAMpsgwZvQq0ZyrM8ozDpxS4joAspmY0rPXHQKNz2j3jDDWUigA@mail.gmail.com>

> Here is a simplified version of the first draft of the PEP 418. The
> full version can be read online.
> http://www.python.org/dev/peps/pep-0418/

FYI there is no time.thread_time() function. It would only be
available on Windows and Linux. It does not use seconds but CPU
cycles. No module or program in the Python source code needs such a
function, whereas all other functions added by the PEP already have
users in the Python source code, see the Rationale section. For Linux,
CLOCK_THREAD_CPUTIME_ID is already available in Python 3.3. For
Windows, you can get GetThreadTimes() using ctypes or win32process.
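
For example, on Linux with Python 3.3 per-thread CPU time is already
reachable without a new function (a sketch):

    import time

    start = time.clock_gettime(time.CLOCK_THREAD_CPUTIME_ID)
    # ... work performed by the current thread ...
    thread_cpu = time.clock_gettime(time.CLOCK_THREAD_CPUTIME_ID) - start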

> time.process_time()
> ^^^^^^^^^^^^^^^^^^^
>
> Pseudo-code [#pseudo]_::
>
>     if os.name == 'nt':
>         def process_time():
>             handle = win32process.GetCurrentProcess()
>             process_times = win32process.GetProcessTimes(handle)
>             return (process_times['UserTime'] +
>                     process_times['KernelTime']) * 1e-7
>     else:
>         import os
>         ...
>
>         def process_time():
>             ...
>             return _time.clock()

Is the C clock() function available on all platforms? timemodule.c
checks for HAVE_CLOCK, but test_time checks that time.clock() is
defined and does not fail since the changeset 4de05cbf5ad1, Dec 06
1996. If clock() is not available on all platforms,
time.process_time() documentation should be fixed.

Victor

From mal at egenix.com  Sun Apr 15 17:36:03 2012
From: mal at egenix.com (M.-A. Lemburg)
Date: Sun, 15 Apr 2012 17:36:03 +0200
Subject: [Python-Dev] [RFC] PEP 418: Add monotonic time,
 performance counter and process time functions
In-Reply-To: <CAMpsgwZxZiNcqTznROd6MiPnkhmy1XtKzVPCozWp+sSiZ8dUTg@mail.gmail.com>
References: <CAMpsgwZxZiNcqTznROd6MiPnkhmy1XtKzVPCozWp+sSiZ8dUTg@mail.gmail.com>
Message-ID: <4F8AEAE3.7000509@egenix.com>

Victor Stinner wrote:
> Hi,
> 
> Here is a simplified version of the first draft of the PEP 418. The
> full version can be read online.
> http://www.python.org/dev/peps/pep-0418/
> 
> The implementation of the PEP can be found in this issue:
> http://bugs.python.org/issue14428
> 
> I post a simplified version for readability and to focus on changes
> introduced by the PEP. Removed sections: Existing Functions,
> Deprecated Function, Glossary, Hardware clocks, Operating system time
> functions, System Standby, Links.

Looks good.

I'd suggest also including a tool or API to determine the
real resolution of a time function (as opposed to the advertised
one). See pybench's clockres.py helper as an example. You often
find large differences between the advertised resolution and
the available one; e.g. while process timers often advertise
very good resolution, they are in fact often only updated
at very coarse rates.
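
Measuring the effective (rather than advertised) resolution is essentially
what clockres.py does; a rough sketch of the idea (not its actual code):

    def effective_resolution(clock, samples=1000):
        # Smallest observed non-zero step between consecutive readings.
        smallest = None
        previous = clock()
        for _ in range(samples):
            current = clock()
            if current != previous:
                step = current - previous
                if smallest is None or step < smallest:
                    smallest = step
                previous = current
        return smallest

    # e.g. effective_resolution(time.time) or effective_resolution(time.clock)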

E.g. compare the results of clockres.py on Linux:

Clock resolution of various timer implementations:
time.clock:            10000.000us
time.time:                 0.954us
systimes.processtime:    999.000us

and FreeBSD:

Clock resolution of various timer implementations:
time.clock:             7812.500us
time.time:                 1.907us
systimes.processtime:      1.000us

and Mac OS X:

Clock resolution of various timer implementations:
time.clock:                1.000us
time.time:                 0.954us
systimes.processtime:      1.000us

Regarding changing pybench:
pybench has to stay backwards compatible with
earlier releases to make it possible to compare timings.
You can add support for new timers to pybench, but please leave
the existing timers and defaults in place.

> ---
> 
> PEP: 418
> Title: Add monotonic time, performance counter and process time functions
> Version: f2bb3f74298a
> Last-Modified: 2012-04-15 17:06:07 +0200 (Sun, 15 Apr 2012)
> Author: Cameron Simpson <cs at zip.com.au>, Jim Jewett
> <jimjjewett at gmail.com>, Victor Stinner <victor.stinner at gmail.com>
> Status: Draft
> Type: Standards Track
> Content-Type: text/x-rst
> Created: 26-March-2012
> Python-Version: 3.3
> 
> Abstract
> ========
> 
> This PEP proposes to add ``time.get_clock_info(name)``,
> ``time.monotonic()``, ``time.perf_counter()`` and
> ``time.process_time()`` functions to Python 3.3.
> 
> Rationale
> =========
> 
> If a program uses the system time to schedule events or to implement
> a timeout, it will not run events at the right moment or stop the
> timeout too early or too late when the system time is set manually or
> adjusted automatically by NTP.  A monotonic clock should be used
> instead to not be affected by system time updates:
> ``time.monotonic()``.
> 
> To measure the performance of a function, ``time.clock()`` can be used
> but it is very different on Windows and on Unix.  On Windows,
> ``time.clock()`` includes time elapsed during sleep, whereas it does
> not on Unix.  ``time.clock()`` precision is very good on Windows, but
> very bad on Unix.  The new ``time.perf_counter()`` function should be
> used instead to always get the most precise performance counter with a
> portable behaviour (e.g. it includes time spent during sleep).
> 
> To measure CPU time, Python does not directly provide a portable
> function.  ``time.clock()`` can be used on Unix, but its precision is
> poor.  ``resource.getrusage()`` can also be used on Unix, but it
> requires reading fields of a structure and computing the sum of time
> spent in kernel space and user space.  The new ``time.process_time()``
> function acts as a portable counter that always measures CPU time
> (doesn't include time elapsed during sleep) and has the best available
> precision.
> 
> Each operating system implements clocks and performance counters
> differently, and it is useful to know exactly which function is used
> and some properties of the clock like its resolution and its
> precision.  The new ``time.get_clock_info()`` function gives access to
> all available information of each Python time function.
> 
> New functions:
> 
> * ``time.monotonic()``: timeout and scheduling, not affected by system
>   clock updates
> * ``time.perf_counter()``: benchmarking, most precise clock for short
>   period
> * ``time.process_time()``: profiling, CPU time of the process
> 
> Users of new functions:
> 
> * time.monotonic(): concurrent.futures, multiprocessing, queue, subprocess,
>   telnet and threading modules to implement timeout
> * time.perf_counter(): trace and timeit modules, pybench program
> * time.process_time(): profile module
> * time.get_clock_info(): pybench program to display information about the
>   timer like the precision or the resolution
> 
> The ``time.clock()`` function is deprecated because it is not
> portable: it behaves differently depending on the operating system.
> ``time.perf_counter()`` or ``time.process_time()`` should be used
> instead, depending on your requirements. ``time.clock()`` is marked as
> deprecated but is not planned for removal.
> 
> 
> Python functions
> ================
> 
> New Functions
> -------------
> 
> time.get_clock_info(name)
> ^^^^^^^^^^^^^^^^^^^^^^^^^
> 
> Get information on the specified clock.  Supported clock names:
> 
> * ``"clock"``: ``time.clock()``
> * ``"monotonic"``: ``time.monotonic()``
> * ``"perf_counter"``: ``time.perf_counter()``
> * ``"process_time"``: ``time.process_time()``
> * ``"time"``: ``time.time()``
> 
> Return a dictionary with the following keys:
> 
> * Mandatory keys:
> 
>   * ``"implementation"`` (str): name of the underlying operating system
>     function.  Examples: ``"QueryPerformanceCounter()"``,
>     ``"clock_gettime(CLOCK_REALTIME)"``.
>   * ``"resolution"`` (float): resolution in seconds of the clock.
>   * ``"is_monotonic"`` (bool): True if the clock cannot go backward.
> 
> * Optional keys:
> 
>   * ``"precision"`` (float): precision in seconds of the clock
>     reported by the operating system.
>   * ``"is_adjusted"`` (bool): True if the clock is adjusted (e.g. by a
>     NTP daemon).
> 
> 
> time.monotonic()
> ^^^^^^^^^^^^^^^^
> 
> Monotonic clock, i.e. cannot go backward.  It is not affected by system
> clock updates.  The reference point of the returned value is
> undefined, so that only the difference between the results of
> consecutive calls is valid and is a number of seconds.
> 
> On Windows versions older than Vista, ``time.monotonic()`` detects
> ``GetTickCount()`` integer overflow (32 bits, roll-over after 49.7
> days): it increases a delta by 2\ :sup:`32` each time an overflow
> is detected.  The delta is stored in the process-local state and so
> the value of ``time.monotonic()`` may be different in two Python
> processes running for more than 49 days. On more recent versions of
> Windows and on other operating systems, ``time.monotonic()`` is
> system-wide.
> 
> Availability: Windows, Mac OS X, Unix, Solaris. Not available on
> GNU/Hurd.
> 
> Pseudo-code [#pseudo]_::
> 
>     if os.name == 'nt':
>         # GetTickCount64() requires Windows Vista, Server 2008 or later
>         if hasattr(time, '_GetTickCount64'):
>             def monotonic():
>                 return _time.GetTickCount64() * 1e-3
>         else:
>             def monotonic():
>                 ticks = _time.GetTickCount()
>                 if ticks < monotonic.last:
>                     # Integer overflow detected
>                     monotonic.delta += 2**32
>                 monotonic.last = ticks
>                 return (ticks + monotonic.delta) * 1e-3
>             monotonic.last = 0
>             monotonic.delta = 0
> 
>     elif sys.platform == 'darwin':
>         def monotonic():
>             if monotonic.factor is None:
>                 timebase = _time.mach_timebase_info()
>                 monotonic.factor = timebase[0] / timebase[1]
>             return _time.mach_absolute_time() * monotonic.factor
>         monotonic.factor = None
> 
>     elif hasattr(time, "clock_gettime") and hasattr(time, "CLOCK_HIGHRES"):
>         def monotonic():
>             return time.clock_gettime(time.CLOCK_HIGHRES)
> 
>     elif hasattr(time, "clock_gettime") and hasattr(time, "CLOCK_MONOTONIC"):
>         def monotonic():
>             return time.clock_gettime(time.CLOCK_MONOTONIC)
> 
> 
> On Windows, ``QueryPerformanceCounter()`` is not used even though it
> has a better precision than ``GetTickCount()``.  It is not reliable
> and has too many issues.
> 
> 
> time.perf_counter()
> ^^^^^^^^^^^^^^^^^^^
> 
> Performance counter with the highest available precision to measure a
> duration.  It does include time elapsed during sleep and is
> system-wide.  The reference point of the returned value is undefined,
> so that only the difference between the results of consecutive calls
> is valid and is a number of seconds.
> 
> Pseudo-code::
> 
>     def perf_counter():
>         if perf_counter.use_performance_counter:
>             if perf_counter.performance_frequency is None:
>                 try:
>                     perf_counter.performance_frequency = (
>                         _time.QueryPerformanceFrequency())
>                 except OSError:
>                     # QueryPerformanceFrequency() fails if the installed
>                     # hardware does not support a high-resolution performance
>                     # counter
>                     perf_counter.use_performance_counter = False
>                 else:
>                     return (_time.QueryPerformanceCounter() /
>                             perf_counter.performance_frequency)
>             else:
>                 return (_time.QueryPerformanceCounter() /
>                         perf_counter.performance_frequency)
>         if perf_counter.use_monotonic:
>             # The monotonic clock is preferred over the system time
>             try:
>                 return time.monotonic()
>             except OSError:
>                 perf_counter.use_monotonic = False
>         return time.time()
>     perf_counter.use_performance_counter = (os.name == 'nt')
>     if perf_counter.use_performance_counter:
>         perf_counter.performance_frequency = None
>     perf_counter.use_monotonic = hasattr(time, 'monotonic')
> 
> 
> time.process_time()
> ^^^^^^^^^^^^^^^^^^^
> 
> Sum of the system and user CPU time of the current process. It does
> not include time elapsed during sleep. It is process-wide by
> definition.  The reference point of the returned value is undefined,
> so that only the difference between the results of consecutive calls
> is valid.
> 
> It is available on all platforms.
> 
> Pseudo-code [#pseudo]_::
> 
>     if os.name == 'nt':
>         def process_time():
>             handle = win32process.GetCurrentProcess()
>             process_times = win32process.GetProcessTimes(handle)
>             return (process_times['UserTime'] +
>                     process_times['KernelTime']) * 1e-7
>     else:
>         import os
>         try:
>             import resource
>         except ImportError:
>             has_resource = False
>         else:
>             has_resource = True
> 
>         def process_time():
>             if process_time.use_process_cputime:
>                 try:
>                     return time.clock_gettime(time.CLOCK_PROCESS_CPUTIME_ID)
>                 except OSError:
>                     process_time.use_process_cputime = False
>             if process_time.use_getrusage:
>                 try:
>                     usage = resource.getrusage(resource.RUSAGE_SELF)
>                     return usage[0] + usage[1]
>                 except OSError:
>                     process_time.use_getrusage = False
>             if process_time.use_times:
>                 try:
>                     times = os.times()
>                     return times[0] + times[1]
>                 except OSError:
>                     process_time.use_times = False
>             return _time.clock()
>         process_time.use_process_cputime = (
>             hasattr(time, 'clock_gettime')
>             and hasattr(time, 'CLOCK_PROCESS_CPUTIME_ID'))
>         process_time.use_getrusage = has_resource
>         # On OS/2, only the 5th field of os.times() is set, others are zeros
>         process_time.use_times = (hasattr(os, 'times') and os.name != 'os2')
> 
> 
> Alternatives: API design
> ========================
> 
> Other names for time.monotonic()
> --------------------------------
> 
> * time.counter()
> * time.metronomic()
> * time.seconds()
> * time.steady(): "steady" is ambiguous: it means different things to
>   different people. For example, on Linux, CLOCK_MONOTONIC is
>   adjusted. If we use the real time as the reference clock, we may
>   say that CLOCK_MONOTONIC is steady.  But CLOCK_MONOTONIC gets
>   suspended on system suspend, whereas real time includes any time
>   spent in suspend.
> * time.timeout_clock()
> * time.wallclock(): time.monotonic() is not the system time aka the
>   "wall clock", but a monotonic clock with an unspecified starting
>   point.
> 
> The name "time.try_monotonic()" was also proposed for an older
> proposition of time.monotonic() which was falling back to the system
> time when no monotonic clock was available.
> 
> Other names for time.perf_counter()
> -----------------------------------
> 
> * time.hires()
> * time.highres()
> * time.timer()
> 
> Only expose operating system clocks
> -----------------------------------
> 
> To avoid having to define high-level clocks, which is a difficult task, a
> simpler approach is to only expose operating system clocks.  For example,
> time.clock_gettime() and related clock identifiers were already added
> to Python 3.3.
> 
> 
> time.monotonic(): Fallback to system time
> -----------------------------------------
> 
> If no monotonic clock is available, time.monotonic() falls back to the
> system time.
> 
> Issues:
> 
> * It is hard to correctly define such a function in the documentation:
>   is it monotonic? Is it steady? Is it adjusted?
> * Some users want to decide what to do when no monotonic clock is
>   available: use another clock, display an error, or do something
>   else?
> 
> Different APIs were proposed to define such a function.
> 
> One function with a flag: time.monotonic(fallback=True)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> 
> * time.monotonic(fallback=True) falls back to the system time if no
>   monotonic clock is available or if the monotonic clock failed.
> * time.monotonic(fallback=False) raises OSError if monotonic clock
>   fails and NotImplementedError if the system does not provide a
>   monotonic clock
> 
> A keyword argument that gets passed as a constant by the caller is
> usually a poor API.
>
> Raising NotImplementedError from a function is uncommon in
> Python and should be avoided.
> 
> 
> One time.monotonic() function, no flag
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> 
> time.monotonic() returns (time: float, is_monotonic: bool).
> 
> An alternative is to use a function attribute:
> time.monotonic.is_monotonic.  The attribute value would be None before
> the first call to time.monotonic().
> 
> 
> Choosing the clock from a list of constraints
> ---------------------------------------------
> 
> The PEP as proposed offers a few new clocks, but their guarantees
> are deliberately loose in order to offer useful clocks on different
> platforms. This inherently embeds policy in the calls, and the
> caller must thus choose a policy.
> 
> The "choose a clock" approach suggests an additional API to let
> callers implement their own policy if necessary
> by making most platform clocks available and letting the caller pick
> amongst them.
> The PEP's suggested clocks are still expected to be available for the common
> simple use cases.
> 
> To do this two facilities are needed:
> an enumeration of clocks, and metadata on the clocks to enable the user to
> evaluate their suitability.
> 
> The primary interface is a function to make simple choices easy:
> the caller can use ``time.get_clock(*flags)`` with some combination of flags.
> The flags include at least:
> 
> * time.MONOTONIC: clock cannot go backward
> * time.STEADY: clock rate is steady
> * time.ADJUSTED: clock may be adjusted, for example by NTP
> * time.HIGHRES: clock with the highest precision
> 
> It returns a clock object with a .now() method returning the current time.
> The clock object is annotated with metadata describing the clock feature set;
> its .flags field will contain at least all the requested flags.
> 
> time.get_clock() returns None if no matching clock is found and so calls can
> be chained using the or operator.  Example of a simple policy decision::
> 
>     T = get_clock(MONOTONIC) or get_clock(STEADY) or get_clock()
>     t = T.now()
> 
> The available clocks always at least include a wrapper for ``time.time()``,
> so a final call with no flags can always be used to obtain a working clock.
> 
> Example of flags of system clocks:
> 
> * QueryPerformanceCounter: MONOTONIC | HIGHRES
> * GetTickCount: MONOTONIC | STEADY
> * CLOCK_MONOTONIC: MONOTONIC | STEADY (or only MONOTONIC on Linux)
> * CLOCK_MONOTONIC_RAW: MONOTONIC | STEADY
> * gettimeofday(): (no flag)
> 
> The clock objects contain other metadata, including the clock flags
> (with additional feature flags beyond those listed above), the name
> of the underlying OS facility, and the clock precisions.
> 
> ``time.get_clock()`` still chooses a single clock; an enumeration
> facility is also required.
> The most obvious method is to offer ``time.get_clocks()`` with the
> same signature as ``time.get_clock()``, but returning a sequence
> of all clocks matching the requested flags.
> Requesting no flags would thus enumerate all available clocks,
> allowing the caller to make an arbitrary choice amongst them based
> on their metadata.
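>
> A rough sketch of how ``time.get_clock()`` could work, for illustration
> only (the clock registry and flag values below are made up, not part of
> the proposal)::
>
>     import time
>
>     MONOTONIC, STEADY, ADJUSTED, HIGHRES = 1, 2, 4, 8   # assumed flag values
>
>     class _Clock:
>         def __init__(self, flags, now):
>             self.flags = flags      # feature flags of this clock
>             self.now = now          # callable returning the current time
>
>     # Hypothetical registry; a real implementation would list the clocks
>     # actually available on the platform with their real feature flags.
>     _clocks = [_Clock(0, time.time)]                    # fallback wrapper
>     if hasattr(time, 'monotonic'):
>         _clocks.insert(0, _Clock(MONOTONIC | STEADY, time.monotonic))
>
>     def get_clock(*flags):
>         wanted = 0
>         for flag in flags:
>             wanted |= flag
>         for clock in _clocks:
>             if (clock.flags & wanted) == wanted:
>                 return clock
>         return None                                     # no matching clock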
> 
> Example partial implementation:
> `clockutils.py <http://hg.python.org/peps/file/tip/pep-0418/clockutils.py>`_.
> 
> Working around operating system bugs?
> -------------------------------------
> 
> Should Python manually ensure that a monotonic clock is truly
> monotonic by computing the maximum of the current clock value and the
> previous value?
> 
> Since it's relatively straightforward to cache the last value returned
> using a static variable, it might be interesting to use this to make
> sure that the values returned are indeed monotonic (see the sketch at
> the end of this section).
> 
> * Virtual machines provide less reliable clocks.
> * QueryPerformanceCounter() has known bugs (only one is not fixed yet)
> 
> Python may only work around a specific known operating system bug:
> `KB274323`_ contains a code example to work around the bug (use
> GetTickCount() to detect QueryPerformanceCounter() leaps).
> 
> Issues of a hacked monotonic function:
> 
> * if the clock is accidentally set forward by an hour and then back
>   again, you wouldn't have a useful clock for an hour
> * the cache is not shared between processes so different processes
>   wouldn't see the same clock value
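>
> For illustration only, a sketch of such a "hacked" monotonic wrapper
> (not proposed for the stdlib)::
>
>     import threading
>     import time
>
>     _lock = threading.Lock()
>     _last = None
>
>     def hacked_monotonic():
>         # Clamp the clock so it never goes backwards within this process.
>         global _last
>         with _lock:
>             now = time.monotonic()      # or any other mostly-monotonic source
>             if _last is not None and now < _last:
>                 now = _last
>             else:
>                 _last = now
>             return now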

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Apr 15 2012)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________
2012-04-28: PythonCamp 2012, Cologne, Germany              13 days to go

::: Try our new mxODBC.Connect Python Database Interface for free ! ::::


   eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
    D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
           Registered at Amtsgericht Duesseldorf: HRB 46611
               http://www.egenix.com/company/contact/

From brett at python.org  Sun Apr 15 18:31:46 2012
From: brett at python.org (Brett Cannon)
Date: Sun, 15 Apr 2012 12:31:46 -0400
Subject: [Python-Dev] making the import machinery explicit
In-Reply-To: <CADiSq7eCW90u7ipu_3WQH8Eb6+M-9M4Cquzwa6XvszGZbONv=g@mail.gmail.com>
References: <CAP1=2W5x0i3uzggpK5tK3Ve=97_-5n-zUJJx9VODA94wUN_DUQ@mail.gmail.com>
	<CALFfu7Aeuv2rzZpZFCmsfneF7VY79eXJ33jh6o_3sdiij=2mGw@mail.gmail.com>
	<CAP1=2W7QSndNbqRvw4VrzxwkeGOuQJUoB3XEcaFrvdSx1cmCcA@mail.gmail.com>
	<CADiSq7eCW90u7ipu_3WQH8Eb6+M-9M4Cquzwa6XvszGZbONv=g@mail.gmail.com>
Message-ID: <CAP1=2W4bMv1n7+NO+7f7SATKBTT+DL0nOjASXfnEZxpNMXWbiA@mail.gmail.com>

On Sun, Apr 15, 2012 at 07:26, Nick Coghlan <ncoghlan at gmail.com> wrote:

> Hooray for finally having this to the point where it has been pushed to
> trunk :)
>
> On Sun, Apr 15, 2012 at 8:16 AM, Brett Cannon <brett at python.org> wrote:
> > Once again, it's just code that needs updating to run on Python 3.3 so I
> > don't view it as a concern. Going from list.append() to list.insert()
> (even
> > if it's ``list.insert(hook, len(list)-2)``) is not exactly difficult.
>
> I'm not sure you can so blithely wave away the "check this before the
> standard hooks" problem. If the recommended approach becomes to insert
> new hooks at the *start* of path_hooks and meta_path, then that should
> work fairly well, since the new additions will take precedence
> regardless of what other changes have already been made. However,
> trying to be clever and say "before the standard hooks, but after
> everything else" is fraught with peril, since there may be hooks
> present in the lists *after* the standard ones so naive counting
> wouldn't work.
>

Well, I personally say always insert at the front of sys.meta_path because
getting precedence right is always difficult when you are trying to insert
into the middle of a list. I mean, this issue could happen in any
application that has multiple meta path finders and there is an explicit
order that is desired that does not simply fall out of the way the code
is called.
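
Purely as illustration, the pattern I mean is just this (the finder here is
made up):

    import sys

    class NoisyFinder:
        # Hypothetical finder that logs lookups and then declines them.
        def find_module(self, fullname, path=None):
            print('import requested:', fullname)
            return None   # decline; let the remaining finders handle it

    sys.meta_path.insert(0, NoisyFinder())   # front of the list, ahead of the defaults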


>
> As far as the guidelines for managing the import state go, it may be
> worth having public "importlib.default_path_hooks" and
> "importlib.default_meta_path" attributes.
>

I don't think it is. ``importlib.default_meta_path =
[importlib.machinery.PathFinder]`` and ``importlib.default_path_hooks =
[importlib.machinery.some_name_I_have_not_chosen_yet,
zipimport.whatever_its_called]`` is not exactly complicated, and if people
are not reading the docs closely enough to realize that those are the
defaults, they are already asking for trouble when mucking around with
import.


>
> Then "clearing" the hooks would just be a matter of resetting them
> back to defaults: "sys.path_hooks[:] = importlib.default_path_hooks".
> You could also locate them in the hooks list correctly by checking
> "for i, hook in enumerate(sys.path_hooks): if hook is
> importlib.default_path_hooks[0]: break"
>

You do realize people will forget the [:] and end up simply screwing up
that original list, right? =)


>
> Alternatively, it may be simpler to just expose a less granular
> "reset_import_hooks()" function that restores meta_path and path_hooks
> back to their default state (the defaults could then be private
> attributes rather than public ones) and invalidates all the caches.
>

What about sys.path_importer_cache: all of it or just NullImporter/None
entries (or should that be a boolean to this function)? And shouldn't it be
called reset_import() with the level of changes you are proposing the
function make?

From raymond.hettinger at gmail.com  Sun Apr 15 19:13:00 2012
From: raymond.hettinger at gmail.com (Raymond Hettinger)
Date: Sun, 15 Apr 2012 13:13:00 -0400
Subject: [Python-Dev] Suggested addition to PEP 8 for context managers
Message-ID: <8402DB76-DF4F-41BD-BD9B-3689AF3D6159@gmail.com>

We should publish some advice on creating context managers.

Context managers are a general purpose tool but have a primary
use case of creating and releasing resources.  This creates an
expectation that that is what the context managers are doing unless
they explicitly say otherwise.

For example, in the following calls, the context managers are responsible
for acquiring and releasing a resource.  This is a good and clean design:

    with open(filename) as f: ...   # will release the file resource when done

    with lock:                      # will acquire and release the lock

However, if someone wants to create a context manager that does
something other than acquiring and releasing resources, they should
create a separate context manager so that the behavior is named.

In other words, if the behavior is going to be the common and expected case,
it is okay to add __enter__ and __exit__ to existing classes (as was done
for locks and files).  However, if the behavior is going to do something else,
then the __enter__ and __exit__ methods need to be in a new class or
factory function.

For example, given the typical uses of context managers, I would expect the 
following code to automatically close the database connection:

     with sqlite3.connect(filename) as db:
          ...

Instead, the context manager implements a different behavior.  It would
have been better if that behavior had been given a name:

    db = sqlite3.connect(filename)
    with auto_commit_or_rollback(db):
          # do a transaction

Explicit beats implicit whenever the implicit behavior would deviate from expected norms.
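
A minimal sketch of such a named wrapper, just to illustrate the idea (not
a proposed API):

    from contextlib import contextmanager

    @contextmanager
    def auto_commit_or_rollback(db):
        # Commit the transaction on success, roll it back on any error.
        try:
            yield db
        except BaseException:
            db.rollback()
            raise
        else:
            db.commit()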


Raymond
    

From brett at python.org  Sun Apr 15 19:21:59 2012
From: brett at python.org (Brett Cannon)
Date: Sun, 15 Apr 2012 13:21:59 -0400
Subject: [Python-Dev] cpython: Rebuild importlib.h to incorporate added
	comments.
In-Reply-To: <jmemnj$o9p$1@dough.gmane.org>
References: <E1SJEbO-0005Nm-9K@dinsdale.python.org>
	<20120415145334.7e8ae874@pitrou.net> <jmemnj$o9p$1@dough.gmane.org>
Message-ID: <CAP1=2W6QjkS4VSuM-=aXCngqR55JhHA5t2xf3iYuStjQjsdvnQ@mail.gmail.com>

On Sun, Apr 15, 2012 at 10:42, Georg Brandl <g.brandl at gmx.net> wrote:

> On 15.04.2012 14:53, Antoine Pitrou wrote:
>
>> On Sun, 15 Apr 2012 03:50:06 +0200
>> brett.cannon <python-checkins at python.org> wrote:
>>
>>>  http://hg.python.org/cpython/rev/6a77697d2a63
>>>  changeset:   76311:6a77697d2a63
>>>  user:        Brett Cannon<brett at python.org>
>>>  date:        Sat Apr 14 21:18:48 2012 -0400
>>>  summary:
>>>   Rebuild importlib.h to incorporate added comments.
>>>
>>
>> Isn't there a Makefile rule to rebuild it automatically?
>>
>
> See the "importlib is now bootstrapped" thread for some problems with that.
>
>
In this instance it was because I tested using importlib directly instead
of using import, so I just forgot to build before committing.


>  Georg
>
>

From victor.stinner at gmail.com  Sun Apr 15 19:40:43 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Sun, 15 Apr 2012 19:40:43 +0200
Subject: [Python-Dev] [RFC] PEP 418: Add monotonic time,
 performance counter and process time functions
In-Reply-To: <4F8AEAE3.7000509@egenix.com>
References: <CAMpsgwZxZiNcqTznROd6MiPnkhmy1XtKzVPCozWp+sSiZ8dUTg@mail.gmail.com>
	<4F8AEAE3.7000509@egenix.com>
Message-ID: <CAMpsgwZNk75A2giU9ou7NakeoU2SiyKEbQdBvfJL3sLw0yZ-6Q@mail.gmail.com>

2012/4/15 M.-A. Lemburg <mal at egenix.com>:
> I'd suggest to also include a tool or API to determine the
> real resolution of a time function (as opposed to the advertised
> one). See pybench's clockres.py helper as example.

The PEP includes such a tool, but I forgot to mention it in the PEP:
http://hg.python.org/peps/file/tip/pep-0418/clock_precision.py

It is based on clockres.py from pybench. I used this tool to fill the
"Precision in Python" column of the different tables. The "Precision"
is the precision announced by the OS, whereas the "Precision in
Python" is the effective precision measured in Python.

The full PEP includes results of different benchmarks: performance of
hardware clocks and performance of the different OS time functions.

> E.g. compare the results of clockres.py on Linux:
>
> Clock resolution of various timer implementations:
> time.clock:            10000.000us
> time.time:                 0.954us
> systimes.processtime:    999.000us
>
> and FreeBSD:
>
> Clock resolution of various timer implementations:
> time.clock:             7812.500us
> time.time:                 1.907us
> systimes.processtime:      1.000us

Cool, I found similar numbers :-)

> and Mac OS X:
>
> Clock resolution of various timer implementations:
> time.clock:                1.000us
> time.time:                 0.954us
> systimes.processtime:      1.000us

I will add these Mac OS X numbers to the PEP.

> Regarding changing pybench:
> pybench has to stay backwards compatible with
> earlier releases to make it possible to compare timings.
> You can add support for new timers to pybench, but please leave
> the existing timers and defaults in place.

I suppose that you are talking about this change:

-# Choose platform default timer
-if sys.platform[:3] == 'win':
-    # On WinXP this has 2.5ms resolution
-    TIMER_PLATFORM_DEFAULT = TIMER_TIME_CLOCK
-else:
-    # On Linux this has 1ms resolution
-    TIMER_PLATFORM_DEFAULT = TIMER_TIME_TIME
+TIMER_PLATFORM_DEFAULT = TIMER_TIME_PERF_COUNTER

from http://bugs.python.org/file25202/perf_counter_process_time.patch

It does not change the OS clock on Windows, only on Unix:
CLOCK_REALTIME (gettimeofday() for Python 3.2 and earlier) is replaced
with CLOCK_MONOTONIC. This change should only give a different result
if the system time is changed during the benchmark.

I'm OK with keeping the default timer if you consider the change incompatible.

Victor

From urban.dani+py at gmail.com  Sun Apr 15 21:34:12 2012
From: urban.dani+py at gmail.com (Daniel Urban)
Date: Sun, 15 Apr 2012 21:34:12 +0200
Subject: [Python-Dev] Providing a mechanism for PEP 3115 compliant
 dynamic class creation
In-Reply-To: <CADiSq7eJf4FtRZSfSqZCJA+u=77Ziyty9okhFotTFX8k8Ye6Rg@mail.gmail.com>
References: <BANLkTi=any_UMyHx76r-VxD4frV7Te16XQ@mail.gmail.com>
	<CACoLFeS9JMj-JoQT2utU-9B6NqvLntBa3z-XXpfNSVSXPDd41g@mail.gmail.com>
	<CADiSq7eJf4FtRZSfSqZCJA+u=77Ziyty9okhFotTFX8k8Ye6Rg@mail.gmail.com>
Message-ID: <CACoLFeR7B=SMqSttoYuwGW-PhgbnzVuC+tVQmqqFtVzwZhDfkQ@mail.gmail.com>

On Sun, Apr 15, 2012 at 13:48, Nick Coghlan <ncoghlan at gmail.com> wrote:
> /me pages thoughts from 12 months ago back into brain...

Sorry about that, I planned to do this earlier...

> On Sun, Apr 15, 2012 at 7:36 PM, Daniel Urban <urban.dani+py at gmail.com> wrote:
>> On Tue, Apr 19, 2011 at 16:10, Nick Coghlan <ncoghlan at gmail.com> wrote:
>>> Initially I was going to suggest making __build_class__ part of the
>>> language definition rather than a CPython implementation detail, but
>>> then I realised that various CPython specific elements in its
>>> signature made that a bad idea.
>>
>> Are you referring to the first 'func' argument? (Which is basically
>> the body of the "class" statement, if I'm not mistaken).
>
> Yup, I believe that was my main objection to exposing __build_class__
> directly. There's no obligation for implementations to build a
> throwaway function to evaluate a class body.
>
>> __prepare__ also needs the name and optional keyword arguments.  So it
>> probably should be something like "operator.prepare(name, bases,
>> metaclass, **kw)". But this way it would need almost the same
>> arguments as __build_class__(func, name, *bases, metaclass=None,
>> **kwds).
>
> True.
>
>>> The correct idiom for dynamic type creation in a PEP 3115 world would then be:
>>>
>>>     from operator import prepare
>>>     cls = type(name, bases, prepare(type, bases))
>>>
>>> Thoughts?
>>
>> When creating a dynamic type, we may want to do it with a non-empty
>> namespace. Maybe like this (with the extra arguments mentioned above):
>>
>>   from operator import prepare
>>   ns = prepare(name, bases, type, **kwargs)
>>   ns.update(my_ns)  # add the attributes we want
>>   cls = type(name, bases, ns)
>>
>> What about an "operator.build_class(name, bases, ns, **kw)" function?
>> It would work like this:
>>
>>   def build_class(name, bases, ns, **kw):
>>       metaclass = kw.pop('metaclass', type)
>>       pns = prepare(name, bases, metaclass, **kw)
>>       pns.update(ns)
>>       return metaclass(name, bases, pns)
>>
>> (Where 'prepare' is the same as above).
>> This way we wouldn't even need to make 'prepare' public, and the new
>> way to create a dynamic type would be:
>>
>>   from operator import build_class
>>   cls = build_class(name, bases, ns, **my_kwargs)
>
> No, I think we would want to expose the created namespace directly -
> that way people can use update(), direct assignment, exec(), eval(), or
> whatever other mechanism they choose to handle the task of populating
> the namespace. However, a potentially cleaner way to do that might be
> offer use an optional callback API rather than exposing a separate
> public prepare() function. Something like:
>
>     def build_class(name, bases=(), kwds=None, eval_body=None):
>         metaclass, ns = _prepare(name, bases, kwds)
>         if eval_body is not None:
>             eval_body(ns)
>         return metaclass(name, bases, ns)

That seems more flexible indeed. I will try to make a patch next week,
if that's OK.


Daniel

From glyph at twistedmatrix.com  Sun Apr 15 23:12:28 2012
From: glyph at twistedmatrix.com (Glyph)
Date: Sun, 15 Apr 2012 14:12:28 -0700
Subject: [Python-Dev] Require loaders set __package__ and __loader__
In-Reply-To: <CAP7+vJ+gd4piFLzJsJ6OTw2zKykQ7VihBJmeyMWu2-KiBy7YNw@mail.gmail.com>
References: <CAP1=2W4oX6qM78cRto9bqTVfy29CqVvqnTgvvOXSjBiPteQEeA@mail.gmail.com>
	<CALFfu7CPUmdJ4CMmYHLXCjkWD_+fxM8_Y6YuCAROPfOv8nCYwg@mail.gmail.com>
	<CAP7+vJ+gd4piFLzJsJ6OTw2zKykQ7VihBJmeyMWu2-KiBy7YNw@mail.gmail.com>
Message-ID: <A4250AB9-F158-49A5-B16D-0FEEB050C6B1@twistedmatrix.com>


On Apr 14, 2012, at 3:32 PM, Guido van Rossum wrote:

> Funny, I was just thinking about having a simple standard API that
> will let you open files (and list directories) relative to a given
> module or package regardless of how the thing is loaded.


Twisted has such a thing, mostly written by me, called twisted.python.modules.

Sorry if I'm repeating myself here, I know I've brought it up on this list before, but it seems germane to this thread.  I'd be interested in getting feedback from the import-wizards participating in this thread in case it is doing anything bad (in particular I'd like to make sure it will keep working in future versions of Python), but I think it may provide quite a good template for a standard API.

The code's here: <http://twistedmatrix.com/trac/browser/trunk/twisted/python/modules.py>

The API is fairly simple.

>>> from twisted.python.modules import getModule
>>> e = getModule("email") # get an abstract "module" object (un-loaded)
>>> e
PythonModule<'email'>
>>> walker = e.walkModules() # walk the module hierarchy
>>> walker.next()
PythonModule<'email'>
>>> walker.next()
PythonModule<'email._parseaddr'>
>>> walker.next() # et cetera
PythonModule<'email.base64mime'>
>>> charset = e["charset"] # get the 'charset' child module of the 'e' package
>>> charset.filePath
FilePath('.../lib/python2.7/email/charset.py')
>>> charset.filePath.parent().children() # list the directory containing charset.py

Worth pointing out is that although in this example it's a FilePath, it could also be a ZipPath if you imported stuff from a zipfile.  We have an adapter that inspects path_importer_cache and produces appropriately-shaped filesystem-like objects depending on where your module was imported from.  Thank you to authors of PEP 302; that was my religion while writing this code.

You can also, of course, ask to load something once you've identified it with the traversal API:

>>> charset.load()
<module 'email.charset' from '.../lib/python2.7/email/charset.pyc'>

You can also ask questions like this, which are very useful when debugging setup problems:

>>> ifaces = getModule("twisted.internet.interfaces")
>>> ifaces.pathEntry
PathEntry<FilePath('/Domicile/glyph/Projects/Twisted/trunk')>
>>> list(ifaces.pathEntry.iterModules())
[PythonModule<'setup'>, PythonModule<'twisted'>]

This asks what sys.path entry is responsible for twisted.internet.interfaces, and then what other modules could be loaded from there.  Just 'setup' and 'twisted' indicates that this is a development install (not surprising for one of my computers), since site-packages would be much more crowded.

The idiom for saying "there's a file installed near this module, and I'd like to grab it as a string", is pretty straightforward:

from twisted.python.modules import getModule
mod = getModule(__name__).filePath.sibling("my-file").open().read()

And hopefully it's obvious from this idiom how one might get the pathname, or a stream rather than the bytes.

-glyph

From glyph at twistedmatrix.com  Sun Apr 15 23:12:30 2012
From: glyph at twistedmatrix.com (Glyph)
Date: Sun, 15 Apr 2012 14:12:30 -0700
Subject: [Python-Dev] Require loaders set __package__ and __loader__
In-Reply-To: <CAP7+vJ+tqPWbzw9gk5Ad1Lt8cR3R65XcHYXapR9c8xPxo7QBag@mail.gmail.com>
References: <CAP1=2W4oX6qM78cRto9bqTVfy29CqVvqnTgvvOXSjBiPteQEeA@mail.gmail.com>
	<CALFfu7CPUmdJ4CMmYHLXCjkWD_+fxM8_Y6YuCAROPfOv8nCYwg@mail.gmail.com>
	<CAP7+vJ+gd4piFLzJsJ6OTw2zKykQ7VihBJmeyMWu2-KiBy7YNw@mail.gmail.com>
	<CAP1=2W6JHP6eMskU=CxjCTbGY_zAww5coyj7nYGOzPRuHT0n2Q@mail.gmail.com>
	<CAP7+vJ+KdkHky=+mNp9907_+HBou0u4TwnjO63PZVNJWmWU-mA@mail.gmail.com>
	<4F8A1117.5010708@cheimes.de>
	<CAP7+vJ+tqPWbzw9gk5Ad1Lt8cR3R65XcHYXapR9c8xPxo7QBag@mail.gmail.com>
Message-ID: <2D3D87E4-5485-476A-A8A5-C459750AA34A@twistedmatrix.com>


On Apr 14, 2012, at 7:59 PM, Guido van Rossum wrote:

> On Sat, Apr 14, 2012 at 5:06 PM, Christian Heimes <lists at cheimes.de> wrote:
>> Am 15.04.2012 00:56, schrieb Guido van Rossum:
>>> Well, if it's a real file, and you need a stream, that's efficient,
>>> and if you need the data, you can read it. But if it comes from a
>>> loader, and you need a stream, you'd have to wrap it in a StringIO
>>> instance. So having two APIs, one to get a stream, and one to get the
>>> data, allows the implementation to be more optimal -- it would be bad
>>> to wrap a StringIO instance around data only so you can read the data
>>> from the stream again...
>> 
>> We need a third way to access a file. The two methods get_data() and
>> get_stream() aren't sufficient for libraries that need a real file that
>> lives on the file system. In order to have real files the loader (or
>> some other abstraction layer) needs to create a temporary directory for
>> the current process and clean it up when the process ends. The file is
>> saved to the temporary directory the first time it's accessed.
> 
> Hm... Can you give an example of a library that needs a real file?
> That sounds like a poorly designed API.

Lots of C libraries use filenames or FILE*s where they _should_ be using some much more abstract things; i.e., constellations of function pointers that are isomorphic to Python's "file-like objects".  Are these APIs poorly designed?  Sure, but they also exist ;).

> Perhaps you're talking about APIs that take a filename instead of a
> stream? Maybe for those it would be best to start getting serious
> about a virtual filesystem... (Sorry, probably python-ideas stuff).

twisted.python.filepath... ;-)

>> The get_file() feature has a neat benefit. Since it transparently
>> extracts files from the loader, users can ship binary extensions and
>> shared libraries (dlls) in a ZIP file and use them without too much hassle.
> 
> Yeah, DLLs are about the only example I can think of where even a
> virtual filesystem doesn't help...

In a previous life, I was frequently exposed to proprietary game-engine things that could only load resources (3D models, audio files, textures) from actual real files, and I had to do lots of unpacking stuff either from things tacked on to a .exe or inside a zip file.  (I don't know how common this is any more in that world but I suspect "very".)

Unfortunately all the examples I can think of off the top of my head were in proprietary, now defunct code; but this is exactly the sort of polish that open-sourcing tends to apply, so I would guess problematic code in this regard would more often be invisible.

-glyph

From solipsis at pitrou.net  Mon Apr 16 00:02:40 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Mon, 16 Apr 2012 00:02:40 +0200
Subject: [Python-Dev] cpython: Update importlib.h
References: <E1SJXQg-0007BR-JW@dinsdale.python.org>
Message-ID: <20120416000240.63d6d663@pitrou.net>

On Sun, 15 Apr 2012 23:56:18 +0200
brett.cannon <python-checkins at python.org> wrote:
> http://hg.python.org/cpython/rev/096653de404d
> changeset:   76332:096653de404d
> user:        Brett Cannon <brett at python.org>
> date:        Sun Apr 15 17:47:19 2012 -0400
> summary:
>   Update importlib.h

I wonder if we could somehow set importlib.h as binary so that
Mercurial doesn't give us huge diffs each time the Python source is
modified.
Adding a NUL byte in the generated file would probably be sufficient.

Regards

Antoine.



From victor.stinner at gmail.com  Mon Apr 16 00:28:47 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Mon, 16 Apr 2012 00:28:47 +0200
Subject: [Python-Dev] Edit the rejected PEP 416 (frozendict) to mention the
	newly added types.MappingProxyType
Message-ID: <CAMpsgwamGspaGdcvkxriD4Ekc1K9MqPzF0s00sW3_xA-zkXGKQ@mail.gmail.com>

Hi,

The frozendict (PEP 416) was rejected, but I just added the
types.MappingProxyType. This type is not new: it has existed since Python
2.2 as the internal dict_proxy type. See also the issue #14386.

I would like to know if I can edit the rejected PEP, or if Guido
prefers to do it, to mention the new type? The "Rejection Notice"
section currently ends with "On the other hand, exposing the existing
read-only dict proxy as a built-in type sounds good to me. (It would
need to be changed to allow calling the constructor.) GvR."
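
For reference, the new type behaves like this (simple example):

    import types

    d = {'a': 1}
    proxy = types.MappingProxyType(d)
    print(proxy['a'])        # 1: reads go through to the underlying dict
    d['b'] = 2
    print(dict(proxy))       # {'a': 1, 'b': 2}: changes to d are visible
    # proxy['c'] = 3         # would raise TypeError, the proxy is read-only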

Victor

From barry at python.org  Mon Apr 16 00:37:36 2012
From: barry at python.org (Barry Warsaw)
Date: Sun, 15 Apr 2012 18:37:36 -0400
Subject: [Python-Dev] Require loaders set __package__ and __loader__
In-Reply-To: <CAP7+vJ+gd4piFLzJsJ6OTw2zKykQ7VihBJmeyMWu2-KiBy7YNw@mail.gmail.com>
References: <CAP1=2W4oX6qM78cRto9bqTVfy29CqVvqnTgvvOXSjBiPteQEeA@mail.gmail.com>
	<CALFfu7CPUmdJ4CMmYHLXCjkWD_+fxM8_Y6YuCAROPfOv8nCYwg@mail.gmail.com>
	<CAP7+vJ+gd4piFLzJsJ6OTw2zKykQ7VihBJmeyMWu2-KiBy7YNw@mail.gmail.com>
Message-ID: <20120415183736.64375b0d@limelight.wooz.org>

On Apr 14, 2012, at 03:32 PM, Guido van Rossum wrote:

>Funny, I was just thinking about having a simple standard API that
>will let you open files (and list directories) relative to a given
>module or package regardless of how the thing is loaded.

I tend to use the "basic resource access" API of pkg_resources.

http://peak.telecommunity.com/DevCenter/PkgResources#basic-resource-access

I'm not suggesting that we adopt all of pkg_resources, but I think the 5
functions listed there, plus resource_filename() (from the next section)
provide basic functionality I've found very useful.
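
For example (package and resource names made up):

    import pkg_resources

    # Read a data file shipped inside a package, however that package is installed.
    data = pkg_resources.resource_string('mypackage', 'data/defaults.cfg')

    # Or get a real filesystem path (extracted to a cache if needed, e.g. from a zip).
    path = pkg_resources.resource_filename('mypackage', 'data/defaults.cfg')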

-Barry

From barry at python.org  Mon Apr 16 00:38:29 2012
From: barry at python.org (Barry Warsaw)
Date: Sun, 15 Apr 2012 18:38:29 -0400
Subject: [Python-Dev] Require loaders set __package__ and __loader__
In-Reply-To: <A4250AB9-F158-49A5-B16D-0FEEB050C6B1@twistedmatrix.com>
References: <CAP1=2W4oX6qM78cRto9bqTVfy29CqVvqnTgvvOXSjBiPteQEeA@mail.gmail.com>
	<CALFfu7CPUmdJ4CMmYHLXCjkWD_+fxM8_Y6YuCAROPfOv8nCYwg@mail.gmail.com>
	<CAP7+vJ+gd4piFLzJsJ6OTw2zKykQ7VihBJmeyMWu2-KiBy7YNw@mail.gmail.com>
	<A4250AB9-F158-49A5-B16D-0FEEB050C6B1@twistedmatrix.com>
Message-ID: <20120415183829.1883526c@limelight.wooz.org>

On Apr 15, 2012, at 02:12 PM, Glyph wrote:

>Twisted has such a thing, mostly written by me, called
>twisted.python.modules.
>
>Sorry if I'm repeating myself here, I know I've brought it up on this list
>before, but it seems germane to this thread.  I'd be interested in getting
>feedback from the import-wizards participating in this thread in case it is
>doing anything bad (in particular I'd like to make sure it will keep working
>in future versions of Python), but I think it may provide quite a good
>template for a standard API.
>
>The code's here: <http://twistedmatrix.com/trac/browser/trunk/twisted/python/modules.py>
>
>The API is fairly simple.
>
>>>> from twisted.python.modules import getModule
>>>> e = getModule("email") # get an abstract "module" object (un-loaded)

Got a PEP 8 friendly version? :)

-Barry

From guido at python.org  Mon Apr 16 00:54:32 2012
From: guido at python.org (Guido van Rossum)
Date: Sun, 15 Apr 2012 15:54:32 -0700
Subject: [Python-Dev] Edit the rejected PEP 416 (frozendict) to mention
 the newly added types.MappingProxyType
In-Reply-To: <CAMpsgwamGspaGdcvkxriD4Ekc1K9MqPzF0s00sW3_xA-zkXGKQ@mail.gmail.com>
References: <CAMpsgwamGspaGdcvkxriD4Ekc1K9MqPzF0s00sW3_xA-zkXGKQ@mail.gmail.com>
Message-ID: <CAP7+vJJaoJNUffnS-Rgca8o6HoSLP9yov_5E3SPyZkO6PxzVxg@mail.gmail.com>

Go ahead and update the PEP!

On Sunday, April 15, 2012, Victor Stinner wrote:

> Hi,
>
> The frozendict (PEP 416) was rejected, but I just added the
> types.MappingProxyType. This type is not new, it existed since Python
> 2.2 as the internal dict_proxy type. See also the issue #14386.
>
> I would like to know if I can edit the rejected PEP, or if Guido
> prefers to do it, to mention the new type? The "Rejection Notice"
> section currently ends with "On the other hand, exposing the existing
> read-only dict proxy as a built-in type sounds good to me. (It would
> need to be changed to allow calling the constructor.) GvR."
>
> Victor


-- 
--Guido van Rossum (python.org/~guido)

From glyph at twistedmatrix.com  Mon Apr 16 00:58:37 2012
From: glyph at twistedmatrix.com (Glyph)
Date: Sun, 15 Apr 2012 18:58:37 -0400
Subject: [Python-Dev] Require loaders set __package__ and __loader__
In-Reply-To: <20120415183829.1883526c@limelight.wooz.org>
References: <CAP1=2W4oX6qM78cRto9bqTVfy29CqVvqnTgvvOXSjBiPteQEeA@mail.gmail.com>
	<CALFfu7CPUmdJ4CMmYHLXCjkWD_+fxM8_Y6YuCAROPfOv8nCYwg@mail.gmail.com>
	<CAP7+vJ+gd4piFLzJsJ6OTw2zKykQ7VihBJmeyMWu2-KiBy7YNw@mail.gmail.com>
	<A4250AB9-F158-49A5-B16D-0FEEB050C6B1@twistedmatrix.com>
	<20120415183829.1883526c@limelight.wooz.org>
Message-ID: <0ADB7C19-4E14-47F7-A37C-461A38207665@twistedmatrix.com>


On Apr 15, 2012, at 6:38 PM, Barry Warsaw wrote:

> On Apr 15, 2012, at 02:12 PM, Glyph wrote:
> 
>> Twisted has such a thing, mostly written by me, called
>> twisted.python.modules.
>> 
>> Sorry if I'm repeating myself here, I know I've brought it up on this list
>> before, but it seems germane to this thread.  I'd be interested in getting
>> feedback from the import-wizards participating in this thread in case it is
>> doing anything bad (in particular I'd like to make sure it will keep working
>> in future versions of Python), but I think it may provide quite a good
>> template for a standard API.
>> 
>> The code's here: <http://twistedmatrix.com/trac/browser/trunk/twisted/python/modules.py>
>> 
>> The API is fairly simple.
>> 
>>>>> from twisted.python.modules import getModule
>>>>> e = getModule("email") # get an abstract "module" object (un-loaded)
> 
> Got a PEP 8 friendly version? :)

No, but I'd be happy to do the translation manually if people actually prefer the shape of this API!

I am just pointing it out as a source of inspiration for whatever comes next, which I assume will be based on pkg_resources.

-glyph

From victor.stinner at gmail.com  Mon Apr 16 01:25:42 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Mon, 16 Apr 2012 01:25:42 +0200
Subject: [Python-Dev] [RFC] PEP 418: Add monotonic time,
 performance counter and process time functions
In-Reply-To: <CAMpsgwZxZiNcqTznROd6MiPnkhmy1XtKzVPCozWp+sSiZ8dUTg@mail.gmail.com>
References: <CAMpsgwZxZiNcqTznROd6MiPnkhmy1XtKzVPCozWp+sSiZ8dUTg@mail.gmail.com>
Message-ID: <CAMpsgwbZThWMRmXxLNisXAGTZJ1poKGBa3z_EzZPDQdjJwpA5g@mail.gmail.com>

> time.perf_counter()
> ^^^^^^^^^^^^^^^^^^^
>
> Performance counter with the highest available precision to measure a
> duration.  It does include time elapsed during sleep and is
> system-wide.  The reference point of the returned value is undefined,
> so that only the difference between the results of consecutive calls
> is valid and is a number of seconds.

Maybe it's time for bikeshedding! Glyph wrote me in private:
"IMHO, perf_counter should be performance_counter() or
high_precision(); the abbreviation is silly :)"

The time module has other abbreviated names. I don't have a preference
between time.perf_counter() and time.performance_counter().

Solaris provides CLOCK_HIGHRES, "the nonadjustable, high-resolution
clock." If we map CLOCK_xxx names to functions name, we have:

 * CLOCK_MONOTONIC: time.monotonic()
 * CLOCK_HIGHRES: time.highres()

(whereas Windows provides QueryPerformanceCounter -> performance_counter)

I suppose that most people don't care that "resolution" and
"precision" are different things.

Victor

From ncoghlan at gmail.com  Mon Apr 16 04:03:05 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 16 Apr 2012 12:03:05 +1000
Subject: [Python-Dev] making the import machinery explicit
In-Reply-To: <CAP1=2W4bMv1n7+NO+7f7SATKBTT+DL0nOjASXfnEZxpNMXWbiA@mail.gmail.com>
References: <CAP1=2W5x0i3uzggpK5tK3Ve=97_-5n-zUJJx9VODA94wUN_DUQ@mail.gmail.com>
	<CALFfu7Aeuv2rzZpZFCmsfneF7VY79eXJ33jh6o_3sdiij=2mGw@mail.gmail.com>
	<CAP1=2W7QSndNbqRvw4VrzxwkeGOuQJUoB3XEcaFrvdSx1cmCcA@mail.gmail.com>
	<CADiSq7eCW90u7ipu_3WQH8Eb6+M-9M4Cquzwa6XvszGZbONv=g@mail.gmail.com>
	<CAP1=2W4bMv1n7+NO+7f7SATKBTT+DL0nOjASXfnEZxpNMXWbiA@mail.gmail.com>
Message-ID: <CADiSq7dKqcDw6TWEHGudE0UDeTMTNzcCJNwDnNF=8FxrDMW1vg@mail.gmail.com>

On Mon, Apr 16, 2012 at 2:31 AM, Brett Cannon <brett at python.org> wrote:
> What about sys.path_importer_cache: all of it or just NullImporter/None
> entries (or should that be a boolean to this function)? And shouldn't it be
> called reset_import() with the level of changes you are proposing the
> function make?

Hmm, perhaps the simpler suggestion is: "If you want a clean import
state, use multiprocessing or the subprocess module to invoke a new
instance of python" :)

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From ncoghlan at gmail.com  Mon Apr 16 04:17:41 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 16 Apr 2012 12:17:41 +1000
Subject: [Python-Dev] Providing a mechanism for PEP 3115 compliant
 dynamic class creation
In-Reply-To: <CACoLFeR7B=SMqSttoYuwGW-PhgbnzVuC+tVQmqqFtVzwZhDfkQ@mail.gmail.com>
References: <BANLkTi=any_UMyHx76r-VxD4frV7Te16XQ@mail.gmail.com>
	<CACoLFeS9JMj-JoQT2utU-9B6NqvLntBa3z-XXpfNSVSXPDd41g@mail.gmail.com>
	<CADiSq7eJf4FtRZSfSqZCJA+u=77Ziyty9okhFotTFX8k8Ye6Rg@mail.gmail.com>
	<CACoLFeR7B=SMqSttoYuwGW-PhgbnzVuC+tVQmqqFtVzwZhDfkQ@mail.gmail.com>
Message-ID: <CADiSq7d7rBy1arxh5AQz6mq-ftQVWYOMR+fjkowCxmZRC_GvtQ@mail.gmail.com>

On Mon, Apr 16, 2012 at 5:34 AM, Daniel Urban <urban.dani+py at gmail.com> wrote:
> On Sun, Apr 15, 2012 at 13:48, Nick Coghlan <ncoghlan at gmail.com> wrote:
>> /me pages thoughts from 12 months ago back into brain...
>
> Sorry about that, I planned to do this earlier...

No worries - good to have someone following up on it, since it had
completely dropped off my own radar :)

>> No, I think we would want to expose the created namespace directly -
>> that way people can use update(), direct assignment, exec(), eval(), or
>> whatever other mechanism they choose to handle the task of populating
>> the namespace. However, a potentially cleaner way to do that might be
>> to offer an optional callback API rather than exposing a separate
>> public prepare() function. Something like:
>>
>>     def build_class(name, bases=(), kwds=None, eval_body=None):
>>         metaclass, ns = _prepare(name, bases, kwds)
>>         if eval_body is not None:
>>             eval_body(ns)
>>         return metaclass(name, bases, ns)
>
> That seems more flexible indeed. I will try to make a patch next week,
> if that's OK.

Sure, just create a new tracker issue and assign it to me. You already
know better than most what the _prepare() step needs to do :)

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From brett at python.org  Mon Apr 16 04:25:01 2012
From: brett at python.org (Brett Cannon)
Date: Sun, 15 Apr 2012 22:25:01 -0400
Subject: [Python-Dev] making the import machinery explicit
In-Reply-To: <CADiSq7dKqcDw6TWEHGudE0UDeTMTNzcCJNwDnNF=8FxrDMW1vg@mail.gmail.com>
References: <CAP1=2W5x0i3uzggpK5tK3Ve=97_-5n-zUJJx9VODA94wUN_DUQ@mail.gmail.com>
	<CALFfu7Aeuv2rzZpZFCmsfneF7VY79eXJ33jh6o_3sdiij=2mGw@mail.gmail.com>
	<CAP1=2W7QSndNbqRvw4VrzxwkeGOuQJUoB3XEcaFrvdSx1cmCcA@mail.gmail.com>
	<CADiSq7eCW90u7ipu_3WQH8Eb6+M-9M4Cquzwa6XvszGZbONv=g@mail.gmail.com>
	<CAP1=2W4bMv1n7+NO+7f7SATKBTT+DL0nOjASXfnEZxpNMXWbiA@mail.gmail.com>
	<CADiSq7dKqcDw6TWEHGudE0UDeTMTNzcCJNwDnNF=8FxrDMW1vg@mail.gmail.com>
Message-ID: <CAP1=2W6otQVRoPtB-PYVWqOfUZNr3pC-Yd5As-2yFYFvfPhOVA@mail.gmail.com>

On Sun, Apr 15, 2012 at 22:03, Nick Coghlan <ncoghlan at gmail.com> wrote:

> On Mon, Apr 16, 2012 at 2:31 AM, Brett Cannon <brett at python.org> wrote:
> > What about sys.path_importer_cache: all of it or just NullImporter/None
> > entries (or should that be a boolean to this function)? And shouldn't it
> be
> > called reset_import() with the level of changes you are proposing the
> > function make?
>
> Hmm, perhaps the simpler suggestion is: "If you want a clean import
> state, use multiprocessing or the subprocess module to invoke a new
> instance of python" :)
>
>
Yeah, kinda. =) This is why testing import (as you know) is such an utter
pain.

-Brett



>  Cheers,
> Nick.
>
> --
> Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
>

From rosuav at gmail.com  Mon Apr 16 04:37:57 2012
From: rosuav at gmail.com (Chris Angelico)
Date: Mon, 16 Apr 2012 12:37:57 +1000
Subject: [Python-Dev] Suggested addition to PEP 8 for context managers
In-Reply-To: <8402DB76-DF4F-41BD-BD9B-3689AF3D6159@gmail.com>
References: <8402DB76-DF4F-41BD-BD9B-3689AF3D6159@gmail.com>
Message-ID: <CAPTjJmr+4rHOBCS+UcFLn8BgLvBEYex3k2jBzsBN8o4SO=h5Sw@mail.gmail.com>

On Mon, Apr 16, 2012 at 3:13 AM, Raymond Hettinger
<raymond.hettinger at gmail.com> wrote:
> Instead, the context manager implements a different behavior.  It would
> have been better if that behavior had been given a name:
>
>     db = sqlite3.connect(filename)
>     with auto_commit_or_rollback(db):
>         # do a transaction

I agree that it wants a name. If explicitness is the goal, would this
be more suitable?

db = sqlite3.connect(filename)
with db.begin_transaction() as trans:
  # do a transaction

This way, if a database engine supports multiple simultaneous
transactions, the same syntax can be used.

Chris Angelico

From p.f.moore at gmail.com  Mon Apr 16 09:13:47 2012
From: p.f.moore at gmail.com (Paul Moore)
Date: Mon, 16 Apr 2012 08:13:47 +0100
Subject: [Python-Dev] Suggested addition to PEP 8 for context managers
In-Reply-To: <8402DB76-DF4F-41BD-BD9B-3689AF3D6159@gmail.com>
References: <8402DB76-DF4F-41BD-BD9B-3689AF3D6159@gmail.com>
Message-ID: <CACac1F9n_FxQWGjMA1jjm6SrRm_eXc2fww7ko+QjafGmmRz6wA@mail.gmail.com>

On 15 April 2012 18:13, Raymond Hettinger <raymond.hettinger at gmail.com> wrote:
> We should publish some advice on creating context managers.
>
> Context managers are a general purpose tool but have a primary
> use case of creating and releasing resources.  This creates an
> expectation that that is what the context managers are doing unless
> they explicitly say otherwise.

I'd have said this was unnecessary, but the sqlite example shows it
isn't, so +1 from me.

As a database specialist, the sqlite behaviour you show is completely
non-intuitive :-(

Paul.

From g.brandl at gmx.net  Mon Apr 16 09:42:59 2012
From: g.brandl at gmx.net (Georg Brandl)
Date: Mon, 16 Apr 2012 09:42:59 +0200
Subject: [Python-Dev] cpython: Issue #10576: Add a progress callback to
	gcmodule
In-Reply-To: <E1SJNqX-0002eu-K7@dinsdale.python.org>
References: <E1SJNqX-0002eu-K7@dinsdale.python.org>
Message-ID: <jmgihe$bhn$1@dough.gmane.org>

On 15.04.2012 13:42, kristjan.jonsson wrote:
> http://hg.python.org/cpython/rev/88f8ef5785d7
> changeset:   76319:88f8ef5785d7
> user:        Kristján Valur Jónsson <kristjan at ccpgames.com>
> date:        Sun Apr 15 11:41:32 2012 +0000
> summary:
>    Issue #10576: Add a progress callback to gcmodule
>
> files:
>    Doc/library/gc.rst  |   39 ++++++++-
>    Lib/test/test_gc.py |  136 +++++++++++++++++++++++++++++++-
>    Misc/NEWS           |    3 +
>    Modules/gcmodule.c  |   80 +++++++++++++++++-
>    4 files changed, 249 insertions(+), 9 deletions(-)
>
>
> diff --git a/Doc/library/gc.rst b/Doc/library/gc.rst
> --- a/Doc/library/gc.rst
> +++ b/Doc/library/gc.rst
> @@ -153,8 +153,8 @@
>      .. versionadded:: 3.1
>
>
> -The following variable is provided for read-only access (you can mutate its
> -value but should not rebind it):
> +The following variables are provided for read-only access (you can mutate the
> +values but should not rebind them):
>
>   .. data:: garbage
>
> @@ -183,6 +183,41 @@
>         :const:`DEBUG_UNCOLLECTABLE` is set, in addition all uncollectable objects
>         are printed.
>
> +.. data:: callbacks
> +
> +   A list of callbacks that will be invoked by the garbage collector before and
> +   after collection.  The callbacks will be called with two arguments,
> +   :arg:`phase` and :arg:`info`.
> +
> +   :arg:`phase` can one of two values:

There is no role ":arg:".  Please fix it to *phase* etc.

Georg



From stefan_ml at behnel.de  Mon Apr 16 09:54:41 2012
From: stefan_ml at behnel.de (Stefan Behnel)
Date: Mon, 16 Apr 2012 09:54:41 +0200
Subject: [Python-Dev] importlib is now bootstrapped (and what that means)
In-Reply-To: <CAP1=2W5A080aEOd9AYBD9gsTO74jbDh+uU8bha2ozJore1NSvA@mail.gmail.com>
References: <CAP1=2W5A080aEOd9AYBD9gsTO74jbDh+uU8bha2ozJore1NSvA@mail.gmail.com>
Message-ID: <jmgj81$gp7$1@dough.gmane.org>

Brett Cannon, 14.04.2012 20:12:
> My multi-year project -- started in 2006 according to my blog -- to rewrite
> import in pure Python and then bootstrap it into CPython as *the*
> implementation of __import__() is finally over (mostly)! Hopefully I didn't
> break too much code in the process. =)

Well, some at least.

The new import cache broke Cython's load of on-the-fly compiled extension
modules, which naively used "__import__(module_name)" after building them.
I could fix that by moving to "imp.load_dynamic()" (we know where we put
the compiled module anyway), although I just noticed that that's not
actually documented. So I hope that won't break later on.
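
For context, the call we use now looks roughly like this (names and paths
made up):

    import imp

    # Load the freshly compiled extension module straight from where we built it.
    module = imp.load_dynamic('fastmodule', '/path/to/build/fastmodule.so')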

The next thing I noticed is that the old-style level -1 import no longer
works, which presumably breaks a *lot* of Cython compiled modules. It used
to work in the master branch until two days ago; now it raises a
ValueError. We may be able to fix this by copying over CPython's old import
code into Cython, but I actually wonder if this was really intended. If
this feature wasn't deliberately broken in Py3.0, why break it now?

Stefan


From urban.dani+py at gmail.com  Mon Apr 16 10:06:22 2012
From: urban.dani+py at gmail.com (Daniel Urban)
Date: Mon, 16 Apr 2012 10:06:22 +0200
Subject: [Python-Dev] Providing a mechanism for PEP 3115 compliant
 dynamic class creation
In-Reply-To: <CADiSq7d7rBy1arxh5AQz6mq-ftQVWYOMR+fjkowCxmZRC_GvtQ@mail.gmail.com>
References: <BANLkTi=any_UMyHx76r-VxD4frV7Te16XQ@mail.gmail.com>
	<CACoLFeS9JMj-JoQT2utU-9B6NqvLntBa3z-XXpfNSVSXPDd41g@mail.gmail.com>
	<CADiSq7eJf4FtRZSfSqZCJA+u=77Ziyty9okhFotTFX8k8Ye6Rg@mail.gmail.com>
	<CACoLFeR7B=SMqSttoYuwGW-PhgbnzVuC+tVQmqqFtVzwZhDfkQ@mail.gmail.com>
	<CADiSq7d7rBy1arxh5AQz6mq-ftQVWYOMR+fjkowCxmZRC_GvtQ@mail.gmail.com>
Message-ID: <CACoLFeSfLA8zTE1CgJBchOsSNW4YM_zWFC=5BunY7Z3Uv8Pd8g@mail.gmail.com>

On Mon, Apr 16, 2012 at 04:17, Nick Coghlan <ncoghlan at gmail.com> wrote:
> Sure, just create a new tracker issue and assign it to me. You already
> know better than most what the _prepare() step needs to do :)

I've created http://bugs.python.org/issue14588, and attached the first
version of a patch. I can't assign it to you, but you're on the nosy
list.

Thanks,
Daniel

From anacrolix at gmail.com  Mon Apr 16 11:49:02 2012
From: anacrolix at gmail.com (Matt Joiner)
Date: Mon, 16 Apr 2012 17:49:02 +0800
Subject: [Python-Dev] [RFC] PEP 418: Add monotonic time,
 performance counter and process time functions
In-Reply-To: <CAMpsgwbZThWMRmXxLNisXAGTZJ1poKGBa3z_EzZPDQdjJwpA5g@mail.gmail.com>
References: <CAMpsgwZxZiNcqTznROd6MiPnkhmy1XtKzVPCozWp+sSiZ8dUTg@mail.gmail.com>
	<CAMpsgwbZThWMRmXxLNisXAGTZJ1poKGBa3z_EzZPDQdjJwpA5g@mail.gmail.com>
Message-ID: <CAB4yi1MzizM0-MOVN9QHeqdNyHhDXV0_-Usz8CRMMVT6ZabBoQ@mail.gmail.com>

This is becoming the Manhattan Project of bike sheds.

From victor.stinner at gmail.com  Mon Apr 16 12:38:41 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Mon, 16 Apr 2012 12:38:41 +0200
Subject: [Python-Dev] [RFC] PEP 418: Add monotonic time,
 performance counter and process time functions
In-Reply-To: <CAB4yi1MzizM0-MOVN9QHeqdNyHhDXV0_-Usz8CRMMVT6ZabBoQ@mail.gmail.com>
References: <CAMpsgwZxZiNcqTznROd6MiPnkhmy1XtKzVPCozWp+sSiZ8dUTg@mail.gmail.com>
	<CAMpsgwbZThWMRmXxLNisXAGTZJ1poKGBa3z_EzZPDQdjJwpA5g@mail.gmail.com>
	<CAB4yi1MzizM0-MOVN9QHeqdNyHhDXV0_-Usz8CRMMVT6ZabBoQ@mail.gmail.com>
Message-ID: <CAMpsgwY3soHY9KQBK+zxaTXmKh+vuWJzuZ6zryu6FxM9S+-+KQ@mail.gmail.com>

2012/4/16 Matt Joiner <anacrolix at gmail.com>:
> This is becoming the Manhattan Project of bike sheds.

The FreeBSD FAQ contains an entry "Why should I care what color the
bikeshed is?" which mentions a "sleep(1) should take fractional second
arguments" saga from 1999.
http://www.freebsd.org/doc/en/books/faq/misc.html#BIKESHED-PAINTING

Maybe bikeshedding is a common issue in discussions around time
functions? :-)

Victor

From phd at phdru.name  Mon Apr 16 12:46:04 2012
From: phd at phdru.name (Oleg Broytman)
Date: Mon, 16 Apr 2012 14:46:04 +0400
Subject: [Python-Dev] [RFC] PEP 418: Add monotonic time,
 performance counter and process time functions
In-Reply-To: <CAMpsgwY3soHY9KQBK+zxaTXmKh+vuWJzuZ6zryu6FxM9S+-+KQ@mail.gmail.com>
References: <CAMpsgwZxZiNcqTznROd6MiPnkhmy1XtKzVPCozWp+sSiZ8dUTg@mail.gmail.com>
	<CAMpsgwbZThWMRmXxLNisXAGTZJ1poKGBa3z_EzZPDQdjJwpA5g@mail.gmail.com>
	<CAB4yi1MzizM0-MOVN9QHeqdNyHhDXV0_-Usz8CRMMVT6ZabBoQ@mail.gmail.com>
	<CAMpsgwY3soHY9KQBK+zxaTXmKh+vuWJzuZ6zryu6FxM9S+-+KQ@mail.gmail.com>
Message-ID: <20120416104604.GA25634@iskra.aviel.ru>

On Mon, Apr 16, 2012 at 12:38:41PM +0200, Victor Stinner <victor.stinner at gmail.com> wrote:
> Bikeshedding is maybe a common issue with the discussion around time
> function? :-)

   Perhaps because every one of us lives in a different Time-Space
Continuum? ;-)

Oleg.
-- 
     Oleg Broytman            http://phdru.name/            phd at phdru.name
           Programmers don't die, they just GOSUB without RETURN.

From solipsis at pitrou.net  Mon Apr 16 13:13:53 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Mon, 16 Apr 2012 13:13:53 +0200
Subject: [Python-Dev] importlib is now bootstrapped (and what that means)
References: <CAP1=2W5A080aEOd9AYBD9gsTO74jbDh+uU8bha2ozJore1NSvA@mail.gmail.com>
	<jmgj81$gp7$1@dough.gmane.org>
Message-ID: <20120416131353.676c87d5@pitrou.net>

On Mon, 16 Apr 2012 09:54:41 +0200
Stefan Behnel <stefan_ml at behnel.de> wrote:
> 
> The new import cache broke Cython's load of on-the-fly compiled extension
> modules, which naively used "__import__(module_name)" after building them.
> I could fix that by moving to "imp.load_dynamic()" (we know where we put
> the compiled module anyway), although I just noticed that that's not
> actually documented. So I hope that won't break later on.

You can call importlib.invalidate_caches().
http://docs.python.org/dev/library/importlib.html#importlib.invalidate_caches
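
i.e. something like this (module name made up):

    import importlib

    # after writing the newly built module into a directory on sys.path:
    importlib.invalidate_caches()
    module = __import__('freshly_built_module')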

> The next thing I noticed is that the old-style level -1 import no longer
> works, which presumably breaks a *lot* of Cython compiled modules. It used
> to work in the master branch until two days ago, now it raises a
> ValueError. We may be able to fix this by copying over CPython's old import
> code into Cython, but I actually wonder if this was really intended. If
> this feature wasn't deliberately broken in Py3.0, why break it now?

Regressions should be reported on the bug tracker IMHO.

Regards

Antoine.



From solipsis at pitrou.net  Mon Apr 16 13:16:17 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Mon, 16 Apr 2012 13:16:17 +0200
Subject: [Python-Dev] [RFC] PEP 418: Add monotonic time,
 performance counter and process time functions
References: <CAMpsgwZxZiNcqTznROd6MiPnkhmy1XtKzVPCozWp+sSiZ8dUTg@mail.gmail.com>
	<CAMpsgwbZThWMRmXxLNisXAGTZJ1poKGBa3z_EzZPDQdjJwpA5g@mail.gmail.com>
Message-ID: <20120416131617.5a81acb8@pitrou.net>

On Mon, 16 Apr 2012 01:25:42 +0200
Victor Stinner <victor.stinner at gmail.com> wrote:
> 
> I suppose that most people don't care that "resolution" and
> "precision" are different things.

Don't they? Actually, they don't care about resolution since they
receive a Python float.

Regards

Antoine.



From solipsis at pitrou.net  Mon Apr 16 13:19:47 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Mon, 16 Apr 2012 13:19:47 +0200
Subject: [Python-Dev] cpython: Fix #10854. Make use of the new path and
 name attributes on ImportError
References: <E1SJeCt-0004kt-4K@dinsdale.python.org>
Message-ID: <20120416131947.32b859d9@pitrou.net>

On Mon, 16 Apr 2012 07:10:31 +0200
brian.curtin <python-checkins at python.org> wrote:
> PyErr_SetFromImportErrorWithNameAndPath

Apparently this new function isn't documented anywhere.

Regards

Antoine.




From stefan_ml at behnel.de  Mon Apr 16 13:49:37 2012
From: stefan_ml at behnel.de (Stefan Behnel)
Date: Mon, 16 Apr 2012 13:49:37 +0200
Subject: [Python-Dev] importlib is now bootstrapped (and what that means)
In-Reply-To: <20120416131353.676c87d5@pitrou.net>
References: <CAP1=2W5A080aEOd9AYBD9gsTO74jbDh+uU8bha2ozJore1NSvA@mail.gmail.com>
	<jmgj81$gp7$1@dough.gmane.org> <20120416131353.676c87d5@pitrou.net>
Message-ID: <jmh10j$n8n$1@dough.gmane.org>

Antoine Pitrou, 16.04.2012 13:13:
> On Mon, 16 Apr 2012 09:54:41 +0200
> Stefan Behnel wrote:
>>
>> The new import cache broke Cython's load of on-the-fly compiled extension
>> modules, which naively used "__import__(module_name)" after building them.
>> I could fix that by moving to "imp.load_dynamic()" (we know where we put
>> the compiled module anyway), although I just noticed that that's not
>> actually documented. So I hope that won't break later on.
> 
> You can call importlib.invalidate_caches().
> http://docs.python.org/dev/library/importlib.html#importlib.invalidate_caches

Well, yes, but imp.load_dynamic() would be the right thing to do for us. Is
there a reason why it's not documented?

I would like to avoid changing the code to load_dynamic() now and then
having to realise that that's going to die in 3.3 final because it somehow
got in the way of the importlib rewrites and is not being considered a
valuable enough public API.

New doc bug ticket:

http://bugs.python.org/issue14594


>> The next thing I noticed is that the old-style level -1 import no longer
>> works, which presumably breaks a *lot* of Cython compiled modules. It used
>> to work in the master branch until two days ago, now it raises a
>> ValueError. We may be able to fix this by copying over CPython's old import
>> code into Cython, but I actually wonder if this was really intended. If
>> this feature wasn't deliberately broken in Py3.0, why break it now?
> 
> Regressions should be reported on the bug tracker IMHO.

It was meant as more of a question for now, but here it goes:

http://bugs.python.org/issue14592

Stefan


From martin at v.loewis.de  Mon Apr 16 16:07:55 2012
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Mon, 16 Apr 2012 16:07:55 +0200
Subject: [Python-Dev] importlib is now bootstrapped (and what that means)
In-Reply-To: <jmcjln$p41$1@dough.gmane.org>
References: <CAP1=2W5A080aEOd9AYBD9gsTO74jbDh+uU8bha2ozJore1NSvA@mail.gmail.com>
	<jmcjln$p41$1@dough.gmane.org>
Message-ID: <4F8C27BB.1010703@v.loewis.de>

> We have other instances of this (e.g. the Objects/typeslots.inc file
> is generated and checked in), but in the case of importlib, we have
> to use the ./python binary for freezing to avoid bytecode
> incompatibilities, which obviously is a problem if ./python isn't
> built yet.

As for dependencies on byte code: we could consider using Cython instead
of freeze (not sure whether Cython would build the bootstrap correctly;
it may need to be fixed first). With that, we would get semi-readable
source code, which should also play more nicely with hg diffs. On the
down side, we would depend on Cython for evolving it.

As for the timestamp issue: I think we should add a target "make touch"
or some such which checks whether the respective files are unmodified
in the local clone, and if so, arranges to touch generated files that
are older than their sources. If this is done by a plain shell script,
one could also make this a post-update Mercurial hook.
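
A rough sketch of such a hook, written in Python rather than shell (the
file pair below is illustrative, not the full list of generated files):

    import os
    import subprocess

    # (source, generated) pairs that the build would otherwise regenerate
    PAIRS = [("Lib/importlib/_bootstrap.py", "Python/importlib.h")]

    def unmodified(path):
        # empty `hg status` output means the file is unchanged in the local clone
        return subprocess.check_output(["hg", "status", path]) == b""

    for source, generated in PAIRS:
        if unmodified(source) and unmodified(generated):
            if os.path.getmtime(generated) < os.path.getmtime(source):
                os.utime(generated, None)  # bump mtime so make considers it up to date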

Regards,
Martin


From brett at python.org  Mon Apr 16 16:52:23 2012
From: brett at python.org (Brett Cannon)
Date: Mon, 16 Apr 2012 10:52:23 -0400
Subject: [Python-Dev] cpython: Fix #10854. Make use of the new path and
 name attributes on ImportError
In-Reply-To: <20120416131947.32b859d9@pitrou.net>
References: <E1SJeCt-0004kt-4K@dinsdale.python.org>
	<20120416131947.32b859d9@pitrou.net>
Message-ID: <CAP1=2W7Z81n+21-wR6hjiEj17YRjgR1OUttukTG-HDAGm5fuGw@mail.gmail.com>

On Mon, Apr 16, 2012 at 07:19, Antoine Pitrou <solipsis at pitrou.net> wrote:

> On Mon, 16 Apr 2012 07:10:31 +0200
> brian.curtin <python-checkins at python.org> wrote:
> > PyErr_SetFromImportErrorWithNameAndPath
>
> Apparently this new function isn't documented anywhere.
>
>
I forgot to write the docs for it when I committed Brian's code.

Brian, do you mind writing the docs for the two functions?

-Brett



> Regards
>
> Antoine.

From brian at python.org  Mon Apr 16 16:54:03 2012
From: brian at python.org (Brian Curtin)
Date: Mon, 16 Apr 2012 09:54:03 -0500
Subject: [Python-Dev] cpython: Fix #10854. Make use of the new path and
 name attributes on ImportError
In-Reply-To: <CAP1=2W7Z81n+21-wR6hjiEj17YRjgR1OUttukTG-HDAGm5fuGw@mail.gmail.com>
References: <E1SJeCt-0004kt-4K@dinsdale.python.org>
	<20120416131947.32b859d9@pitrou.net>
	<CAP1=2W7Z81n+21-wR6hjiEj17YRjgR1OUttukTG-HDAGm5fuGw@mail.gmail.com>
Message-ID: <CAD+XWwoO_cW1AmEhxsoJtgH5fyti5JAzPCpGgQLvydGySULVrw@mail.gmail.com>

On Mon, Apr 16, 2012 at 09:52, Brett Cannon <brett at python.org> wrote:
>
>
> On Mon, Apr 16, 2012 at 07:19, Antoine Pitrou <solipsis at pitrou.net> wrote:
>>
>> On Mon, 16 Apr 2012 07:10:31 +0200
>> brian.curtin <python-checkins at python.org> wrote:
>> > PyErr_SetFromImportErrorWithNameAndPath
>>
>> Apparently this new function isn't documented anywhere.
>>
>
> I forgot to write the docs for it when I committed Brian's code.
>
> Brian, do you mind writing the docs for the two functions?

I'll take care of it today.

From brett at python.org  Mon Apr 16 17:21:34 2012
From: brett at python.org (Brett Cannon)
Date: Mon, 16 Apr 2012 11:21:34 -0400
Subject: [Python-Dev] importlib is now bootstrapped (and what that means)
In-Reply-To: <4F8C27BB.1010703@v.loewis.de>
References: <CAP1=2W5A080aEOd9AYBD9gsTO74jbDh+uU8bha2ozJore1NSvA@mail.gmail.com>
	<jmcjln$p41$1@dough.gmane.org> <4F8C27BB.1010703@v.loewis.de>
Message-ID: <CAP1=2W5XidMXksfXadKWH6z68dQca0HqPNHuAOtmh6vu9ZdYcg@mail.gmail.com>

On Mon, Apr 16, 2012 at 10:07, "Martin v. Löwis" <martin at v.loewis.de> wrote:

> > We have other instances of this (e.g. the Objects/typeslots.inc file
> > is generated and checked in), but in the case of importlib, we have
> > to use the ./python binary for freezing to avoid bytecode
> > incompatibilities, which obviously is a problem if ./python isn't
> > built yet.
>
> As for dependencies on byte code: we could consider using Cython instead
> of freeze (not sure whether Cython would build the bootstrap correctly;
> it may need to be fixed first). With that, we would get semi-readable
> source code, which should also play more nicely with hg diffs. On the
> down side, we would depend on Cython for evolving .
>

We could also just store the raw source code and use that if we are all
willing to pay the performance cost of parsing and compiling the code at
every startup.


>
> As for the timestamp issue: I think we should add a target "make touch"
> or some such which checks whether the respective files are unmodified
> in the local clone, and if so, arranges to touch generated files that
> are older than their sources. If this is done by a plain shell script,
> one could also make this a post-update Mercurial hook.
>

So like execute hg diff on the dependent files and if nothing changed then
touch the auto-generated file w/ 'touch' to prevent future attempts to
execute the target?

-Brett


>
> Regards,
> Martin

From barry at python.org  Mon Apr 16 17:30:37 2012
From: barry at python.org (Barry Warsaw)
Date: Mon, 16 Apr 2012 11:30:37 -0400
Subject: [Python-Dev] Suggested addition to PEP 8 for context managers
In-Reply-To: <8402DB76-DF4F-41BD-BD9B-3689AF3D6159@gmail.com>
References: <8402DB76-DF4F-41BD-BD9B-3689AF3D6159@gmail.com>
Message-ID: <20120416113037.66e4da6f@limelight.wooz.org>

On Apr 15, 2012, at 01:13 PM, Raymond Hettinger wrote:

>We should publish some advice on creating context managers.

I agree, I'm just not sure PEP 8 is the right place for it.

PEP 8 seems like it is structured more as mechanical guidelines for the look
and feel of code, not so much for the semantic content of the code.  As such,
including best practices for naming context managers would both appear
out-of-place, and possibly get lost in all the whitespace noise :).

Perhaps the contextlib docs are a better place for this?

-Barry

From rdmurray at bitdance.com  Mon Apr 16 18:15:16 2012
From: rdmurray at bitdance.com (R. David Murray)
Date: Mon, 16 Apr 2012 12:15:16 -0400
Subject: [Python-Dev] importlib is now bootstrapped (and what that means)
In-Reply-To: <CAP1=2W5XidMXksfXadKWH6z68dQca0HqPNHuAOtmh6vu9ZdYcg@mail.gmail.com>
References: <CAP1=2W5A080aEOd9AYBD9gsTO74jbDh+uU8bha2ozJore1NSvA@mail.gmail.com>
	<jmcjln$p41$1@dough.gmane.org> <4F8C27BB.1010703@v.loewis.de>
	<CAP1=2W5XidMXksfXadKWH6z68dQca0HqPNHuAOtmh6vu9ZdYcg@mail.gmail.com>
Message-ID: <20120416161516.CAC2D2509BC@webabinitio.net>

On Mon, 16 Apr 2012 11:21:34 -0400, Brett Cannon <brett at python.org> wrote:
> On Mon, Apr 16, 2012 at 10:07, "Martin v. Löwis" <martin at v.loewis.de> wrote:
> 
> > > We have other instances of this (e.g. the Objects/typeslots.inc file
> > > is generated and checked in), but in the case of importlib, we have
> > > to use the ./python binary for freezing to avoid bytecode
> > > incompatibilities, which obviously is a problem if ./python isn't
> > > built yet.
> >
> > As for dependencies on byte code: we could consider using Cython instead
> > of freeze (not sure whether Cython would build the bootstrap correctly;
> > it may need to be fixed first). With that, we would get semi-readable
> > source code, which should also play more nicely with hg diffs. On the
> > down side, we would depend on Cython for evolving .
> >
> 
> We could also just store the raw source code and use that if we are all
> willing to pay the performance cost of parsing and compiling the code at
> every startup.

I don't see how depending on Cython is better than depending on having
an existing Python.  If the only benefit is semi-readable code, surely
we do have source code for the pre-frozen module, and it is just a matter
of convincing hg that the bytecode is binary, not text?

Brett's earlier thought of compiling from source as a *fallback* makes
sense to me.  I'd rather not add overhead to startup that we can avoid.
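
In pseudo-Python, that fallback would amount to something like the
following (the names are mine, not actual CPython identifiers, and it
glosses over how staleness of the frozen bytecode would be detected):

    import marshal

    def bootstrap_code(frozen_bytes, source_path):
        try:
            return marshal.loads(frozen_bytes)   # fast path: frozen bytecode is usable
        except ValueError:                       # stale or incompatible marshal data
            with open(source_path) as f:
                return compile(f.read(), source_path, "exec")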

--David

From solipsis at pitrou.net  Mon Apr 16 18:31:16 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Mon, 16 Apr 2012 18:31:16 +0200
Subject: [Python-Dev] importlib is now bootstrapped (and what that means)
References: <CAP1=2W5A080aEOd9AYBD9gsTO74jbDh+uU8bha2ozJore1NSvA@mail.gmail.com>
	<jmcjln$p41$1@dough.gmane.org> <4F8C27BB.1010703@v.loewis.de>
	<CAP1=2W5XidMXksfXadKWH6z68dQca0HqPNHuAOtmh6vu9ZdYcg@mail.gmail.com>
	<20120416161516.CAC2D2509BC@webabinitio.net>
Message-ID: <20120416183116.49b2ec87@pitrou.net>

On Mon, 16 Apr 2012 12:15:16 -0400
"R. David Murray" <rdmurray at bitdance.com> wrote:
> 
> I don't see how depending on Cython is better than depending on having
> an existing Python.  If the only benefit is semi-readable code, surely
> we do have source code for the pre-frozen module, and it is just a matter
> of convincing hg that the bytecode is binary, not text?
> 
> Brett's earlier thought of compiling from source as a *fallback* makes
> sense to me.  I'd rather not add overhead to startup that we can avoid.

Compiling from source at which point, though?
In essence, that would mean reimplement Python/freeze_importlib.py in C?
We could even compile it to a separate executable that gets built
before the Python executable (like pgen) :-)
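
(For anyone following along: the whole job of freeze_importlib.py is
roughly the few lines below; the hard part is that it has to run under a
bytecode-compatible interpreter, hence ./python.)

    import marshal

    with open("Lib/importlib/_bootstrap.py") as f:
        code = compile(f.read(), "<frozen importlib._bootstrap>", "exec")
    data = marshal.dumps(code)
    # ... then emit `data` as a C byte array into Python/importlib.h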

Regards

Antoine.



From brett at python.org  Mon Apr 16 18:40:42 2012
From: brett at python.org (Brett Cannon)
Date: Mon, 16 Apr 2012 12:40:42 -0400
Subject: [Python-Dev] importlib is now bootstrapped (and what that means)
In-Reply-To: <20120416183116.49b2ec87@pitrou.net>
References: <CAP1=2W5A080aEOd9AYBD9gsTO74jbDh+uU8bha2ozJore1NSvA@mail.gmail.com>
	<jmcjln$p41$1@dough.gmane.org> <4F8C27BB.1010703@v.loewis.de>
	<CAP1=2W5XidMXksfXadKWH6z68dQca0HqPNHuAOtmh6vu9ZdYcg@mail.gmail.com>
	<20120416161516.CAC2D2509BC@webabinitio.net>
	<20120416183116.49b2ec87@pitrou.net>
Message-ID: <CAP1=2W6w_G5W6rALo75mcUDnpbAyG_Eheog7W4cX+J1fJoYf7Q@mail.gmail.com>

On Mon, Apr 16, 2012 at 12:31, Antoine Pitrou <solipsis at pitrou.net> wrote:

> On Mon, 16 Apr 2012 12:15:16 -0400
> "R. David Murray" <rdmurray at bitdance.com> wrote:
> >
> > I don't see how depending on Cython is better than depending on having
> > an existing Python.  If the only benefit is semi-readable code, surely
> > we do have source code for the pre-frozen module, and it is just a matter
> > of convincing hg that the bytecode is binary, not text?
> >
> > Brett's earlier thought of compiling from source as a *fallback* makes
> > sense to me.  I'd rather not add overhead to startup that we can avoid.
>
>
In reply to David, one trick with this, though, is that frozen modules
don't store the magic number of the bytecode, so that would need to change
in order to make this fully feasible.
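
(For ordinary .pyc files that staleness check is just a comparison against
the first four bytes of the file, which is exactly the piece of information
a frozen module currently has no slot for:)

    import imp

    def pyc_is_current(pyc_path):
        with open(pyc_path, "rb") as f:
            return f.read(4) == imp.get_magic()  # the magic number identifies the bytecode format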


> Compiling from source at which point, though?
>

At startup of the interpreter.


> In essence, that would mean reimplement Python/freeze_importlib.py in C?
> We could even compile it to a separate executable that gets built
> before the Python executable (like pgen) :-)
>

So a mini Python that just knew how to compile to bytecode and nothing more?

-Brett


>
> Regards
>
> Antoine.

From martin at v.loewis.de  Mon Apr 16 19:04:25 2012
From: martin at v.loewis.de (=?ISO-8859-15?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Mon, 16 Apr 2012 19:04:25 +0200
Subject: [Python-Dev] importlib is now bootstrapped (and what that means)
In-Reply-To: <20120416161516.CAC2D2509BC@webabinitio.net>
References: <CAP1=2W5A080aEOd9AYBD9gsTO74jbDh+uU8bha2ozJore1NSvA@mail.gmail.com>
	<jmcjln$p41$1@dough.gmane.org> <4F8C27BB.1010703@v.loewis.de>
	<CAP1=2W5XidMXksfXadKWH6z68dQca0HqPNHuAOtmh6vu9ZdYcg@mail.gmail.com>
	<20120416161516.CAC2D2509BC@webabinitio.net>
Message-ID: <4F8C5119.1020901@v.loewis.de>

> I don't see how depending on Cython is better than depending on having
> an existing Python.  If the only benefit is semi-readable code, surely
> we do have source code for the pre-frozen module, and it is just a matter
> of convincing hg that the bytecode is binary, not text?

Cython-generated C code would likely be more stable (and produce
compiler errors if it gets stale), whereas importlib.h needs to be
regenerated whenever the bytecode changes.

Having source code has the advantage that it becomes possible to
single-step through the import process in C debugger. Single-stepping
with pdb would, of course, be better than that, but I doubt it's
feasible.

In addition, there might be a performance gain with Cython over ceval.

Regards,
Martin


From martin at v.loewis.de  Mon Apr 16 19:08:53 2012
From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=)
Date: Mon, 16 Apr 2012 19:08:53 +0200
Subject: [Python-Dev] importlib is now bootstrapped (and what that means)
In-Reply-To: <CAP1=2W5XidMXksfXadKWH6z68dQca0HqPNHuAOtmh6vu9ZdYcg@mail.gmail.com>
References: <CAP1=2W5A080aEOd9AYBD9gsTO74jbDh+uU8bha2ozJore1NSvA@mail.gmail.com>	<jmcjln$p41$1@dough.gmane.org>
	<4F8C27BB.1010703@v.loewis.de>
	<CAP1=2W5XidMXksfXadKWH6z68dQca0HqPNHuAOtmh6vu9ZdYcg@mail.gmail.com>
Message-ID: <4F8C5225.4010205@v.loewis.de>

> So like execute hg diff on the dependent files and if nothing changed
> then touch the auto-generated file w/ 'touch' to prevent future attempts
> to execute the target?

Exactly. There might be something better than hg diff, perhaps some form
of hg status.

Regards,
Martin

From brett at python.org  Mon Apr 16 19:32:59 2012
From: brett at python.org (Brett Cannon)
Date: Mon, 16 Apr 2012 13:32:59 -0400
Subject: [Python-Dev] importlib is now bootstrapped (and what that means)
In-Reply-To: <4F8C5119.1020901@v.loewis.de>
References: <CAP1=2W5A080aEOd9AYBD9gsTO74jbDh+uU8bha2ozJore1NSvA@mail.gmail.com>
	<jmcjln$p41$1@dough.gmane.org> <4F8C27BB.1010703@v.loewis.de>
	<CAP1=2W5XidMXksfXadKWH6z68dQca0HqPNHuAOtmh6vu9ZdYcg@mail.gmail.com>
	<20120416161516.CAC2D2509BC@webabinitio.net>
	<4F8C5119.1020901@v.loewis.de>
Message-ID: <CAP1=2W4ZtkziMh-SC_+WBhqQbGEVM2zZ24d3MarHK5HBfnzqbw@mail.gmail.com>

On Mon, Apr 16, 2012 at 13:04, "Martin v. L?wis" <martin at v.loewis.de> wrote:

> > I don't see how depending on Cython is better than depending on having
> > an existing Python.  If the only benefit is semi-readable code, surely
> > we do have source code for the pre-frozen module, and it is just a matter
> > of convincing hg that the bytecode is binary, not text?
>
> Cython-generated C code would likely be more stable (and produce
> compiler errors if it gets stale), whereas importlib.h needs to be
> regenerated with byte code changes.
>
> Having source code has the advantage that it becomes possible to
> single-step through the import process in C debugger. Single-stepping
> with pdb would, of course, be better than that, but I doubt it's
> feasible.
>
> In addition, there might be a performance gain with Cython over ceval.
>

The other benefit is maintainability. In order to hit my roughly 5% startup
speed target I had to rewrite chunks of __import__() in C code and then
delegate to importlib's Python code in cases where sys.modules was not hit.
Using Cython would mean that can all go away and the differences between
the C and Python code would become (supposedly) non-existent, making tweaks
easier (e.g. when I made the change to hit sys.modules less when a loader
returned the desired module, it was annoying to have to change importlib
*and* import.c).
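
(In Python terms, the shape of that C fast path is roughly the following;
the real code lives in Python/import.c and the delegation target is
spelled differently there:)

    import sys
    import importlib

    def fast_import(name):
        try:
            return sys.modules[name]              # common case, handled entirely in C
        except KeyError:
            return importlib.import_module(name)  # cache miss: fall back to the Python machinery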

From brett at python.org  Mon Apr 16 19:33:45 2012
From: brett at python.org (Brett Cannon)
Date: Mon, 16 Apr 2012 13:33:45 -0400
Subject: [Python-Dev] importlib is now bootstrapped (and what that means)
In-Reply-To: <4F8C5225.4010205@v.loewis.de>
References: <CAP1=2W5A080aEOd9AYBD9gsTO74jbDh+uU8bha2ozJore1NSvA@mail.gmail.com>
	<jmcjln$p41$1@dough.gmane.org> <4F8C27BB.1010703@v.loewis.de>
	<CAP1=2W5XidMXksfXadKWH6z68dQca0HqPNHuAOtmh6vu9ZdYcg@mail.gmail.com>
	<4F8C5225.4010205@v.loewis.de>
Message-ID: <CAP1=2W5n-nxPnB=-yyxJnvo=AUDj0X0c6wsSE80zjOexmiUE-g@mail.gmail.com>

On Mon, Apr 16, 2012 at 13:08, "Martin v. Löwis" <martin at v.loewis.de> wrote:

> > So like execute hg diff on the dependent files and if nothing changed
> > then touch the auto-generated file w/ 'touch' to prevent future attempts
> > to execute the target?
>
> Exactly. There might be something better than hg diff, perhaps some form
> of hg status.
>

Yeah, hg status is probably better. Now someone just needs to write the
shell script. =)

From ericsnowcurrently at gmail.com  Mon Apr 16 19:45:05 2012
From: ericsnowcurrently at gmail.com (Eric Snow)
Date: Mon, 16 Apr 2012 11:45:05 -0600
Subject: [Python-Dev] importlib is now bootstrapped (and what that means)
In-Reply-To: <CAP1=2W4ZtkziMh-SC_+WBhqQbGEVM2zZ24d3MarHK5HBfnzqbw@mail.gmail.com>
References: <CAP1=2W5A080aEOd9AYBD9gsTO74jbDh+uU8bha2ozJore1NSvA@mail.gmail.com>
	<jmcjln$p41$1@dough.gmane.org> <4F8C27BB.1010703@v.loewis.de>
	<CAP1=2W5XidMXksfXadKWH6z68dQca0HqPNHuAOtmh6vu9ZdYcg@mail.gmail.com>
	<20120416161516.CAC2D2509BC@webabinitio.net>
	<4F8C5119.1020901@v.loewis.de>
	<CAP1=2W4ZtkziMh-SC_+WBhqQbGEVM2zZ24d3MarHK5HBfnzqbw@mail.gmail.com>
Message-ID: <CALFfu7A7vHOBjLmAvdKZRrFG6Axbb4E_jcreoVVpRuiQ5=8O9w@mail.gmail.com>

On Mon, Apr 16, 2012 at 11:32 AM, Brett Cannon <brett at python.org> wrote:
>
>
> On Mon, Apr 16, 2012 at 13:04, "Martin v. Löwis" <martin at v.loewis.de> wrote:
>>
>> > I don't see how depending on Cython is better than depending on having
>> > an existing Python.  If the only benefit is semi-readable code, surely
>> > we do have source code for the pre-frozen module, and it is just a
>> > matter
>> > of convincing hg that the bytecode is binary, not text?
>>
>> Cython-generated C code would likely be more stable (and produce
>> compiler errors if it gets stale), whereas importlib.h needs to be
>> regenerated with byte code changes.
>>
>> Having source code has the advantage that it becomes possible to
>> single-step through the import process in C debugger. Single-stepping
>> with pdb would, of course, be better than that, but I doubt it's
>> feasible.
>>
>> In addition, there might be a performance gain with Cython over ceval.
>
>
> The other benefit is maintainability. In order to hit my roughly 5% startup
> speed I had to rewrite chunks of __import__() in C code and then delegate to
> importlib's Python code in cases where sys.modules was not hit. Using Cython
> would mean that can all go away and the differences between the C and Python
> code would become (supposedly) non-existent, making tweaks easier (e.g. when
> I made the change to hit sys.modules less when a loader returned the desired
> module it was annoying to have to change importlib *and* import.c).

+1 on reducing the complexity of the import code.

-eric

From solipsis at pitrou.net  Mon Apr 16 19:44:27 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Mon, 16 Apr 2012 19:44:27 +0200
Subject: [Python-Dev] importlib is now bootstrapped (and what that means)
References: <CAP1=2W5A080aEOd9AYBD9gsTO74jbDh+uU8bha2ozJore1NSvA@mail.gmail.com>
	<jmcjln$p41$1@dough.gmane.org> <4F8C27BB.1010703@v.loewis.de>
	<CAP1=2W5XidMXksfXadKWH6z68dQca0HqPNHuAOtmh6vu9ZdYcg@mail.gmail.com>
	<4F8C5225.4010205@v.loewis.de>
	<CAP1=2W5n-nxPnB=-yyxJnvo=AUDj0X0c6wsSE80zjOexmiUE-g@mail.gmail.com>
Message-ID: <20120416194427.797c9f4b@pitrou.net>

On Mon, 16 Apr 2012 13:33:45 -0400
Brett Cannon <brett at python.org> wrote:
> On Mon, Apr 16, 2012 at 13:08, "Martin v. Löwis" <martin at v.loewis.de> wrote:
> 
> > > So like execute hg diff on the dependent files and if nothing changed
> > > then touch the auto-generated file w/ 'touch' to prevent future attempts
> > > to execute the target?
> >
> > Exactly. There might be something better than hg diff, perhaps some form
> > of hg status.
> >
> 
> Yeah, hg status is probably better. Now someone just needs to write the
> shell script. =)

Wouldn't it be better if Python could compile regardless of the
presence of a hg repository?

Regards

Antoine.



From barry at python.org  Mon Apr 16 19:51:35 2012
From: barry at python.org (Barry Warsaw)
Date: Mon, 16 Apr 2012 13:51:35 -0400
Subject: [Python-Dev] importlib is now bootstrapped (and what that means)
In-Reply-To: <20120416194427.797c9f4b@pitrou.net>
References: <CAP1=2W5A080aEOd9AYBD9gsTO74jbDh+uU8bha2ozJore1NSvA@mail.gmail.com>
	<jmcjln$p41$1@dough.gmane.org> <4F8C27BB.1010703@v.loewis.de>
	<CAP1=2W5XidMXksfXadKWH6z68dQca0HqPNHuAOtmh6vu9ZdYcg@mail.gmail.com>
	<4F8C5225.4010205@v.loewis.de>
	<CAP1=2W5n-nxPnB=-yyxJnvo=AUDj0X0c6wsSE80zjOexmiUE-g@mail.gmail.com>
	<20120416194427.797c9f4b@pitrou.net>
Message-ID: <20120416135135.221f3fe9@resist.wooz.org>

On Apr 16, 2012, at 07:44 PM, Antoine Pitrou wrote:

>Wouldn't it be better if Python could compile regardless of the
>presence of a hg repository?

If you want it in your $DISTRO, yes please!

-Barry

From rdmurray at bitdance.com  Mon Apr 16 20:38:41 2012
From: rdmurray at bitdance.com (R. David Murray)
Date: Mon, 16 Apr 2012 14:38:41 -0400
Subject: [Python-Dev] importlib is now bootstrapped (and what that means)
In-Reply-To: <20120416135135.221f3fe9@resist.wooz.org>
References: <CAP1=2W5A080aEOd9AYBD9gsTO74jbDh+uU8bha2ozJore1NSvA@mail.gmail.com>
	<jmcjln$p41$1@dough.gmane.org> <4F8C27BB.1010703@v.loewis.de>
	<CAP1=2W5XidMXksfXadKWH6z68dQca0HqPNHuAOtmh6vu9ZdYcg@mail.gmail.com>
	<4F8C5225.4010205@v.loewis.de>
	<CAP1=2W5n-nxPnB=-yyxJnvo=AUDj0X0c6wsSE80zjOexmiUE-g@mail.gmail.com>
	<20120416194427.797c9f4b@pitrou.net>
	<20120416135135.221f3fe9@resist.wooz.org>
Message-ID: <20120416183841.78645B14052@webabinitio.net>

On Mon, 16 Apr 2012 13:51:35 -0400, Barry Warsaw <barry at python.org> wrote:
> On Apr 16, 2012, at 07:44 PM, Antoine Pitrou wrote:
> 
> >Wouldn't it be better if Python could compile regardless of the
> >presence of a hg repository?
> 
> If you want it in your $DISTRO, yes please!

My impression is that our usual solution for this is to make sure the
timestamps are correct in distributed tarballs, so that the hg-dependent
machinery is not invoked when building from a release tarball.  Is this
case any different?

--David

From stefan_ml at behnel.de  Mon Apr 16 21:17:54 2012
From: stefan_ml at behnel.de (Stefan Behnel)
Date: Mon, 16 Apr 2012 21:17:54 +0200
Subject: [Python-Dev] importlib is now bootstrapped (and what that means)
In-Reply-To: <4F8C27BB.1010703@v.loewis.de>
References: <CAP1=2W5A080aEOd9AYBD9gsTO74jbDh+uU8bha2ozJore1NSvA@mail.gmail.com>
	<jmcjln$p41$1@dough.gmane.org> <4F8C27BB.1010703@v.loewis.de>
Message-ID: <jmhr93$n78$1@dough.gmane.org>

"Martin v. L?wis", 16.04.2012 16:07:
>> We have other instances of this (e.g. the Objects/typeslots.inc file
>> is generated and checked in), but in the case of importlib, we have
>> to use the ./python binary for freezing to avoid bytecode
>> incompatibilities, which obviously is a problem if ./python isn't
>> built yet.
> 
> As for dependencies on byte code: we could consider using Cython instead
> of freeze (not sure whether Cython would build the bootstrap correctly;
> it may need to be fixed first).

I assume that this would be needed rather early during the interpreter
startup, so you're right that there may be obstacles if the generated C
code expects things that aren't available yet. But I think it's worth a
try. Cython is rather predictable in its requirements given the Python
source code. Just run the module through Cython and see what it gives you.
(Note that you may need the latest github master version to make everything
work nicely in Py3.3; I keep adapting things there as I find them - our CI
server runs daily tests against the latest CPython master.)

One thing Cython does during module initialisation is to import the
builtins module using PyImport_AddModule(). Does that work already at this
point? It uses it mostly to cache builtins that the module uses. That can
be disabled, though, and I also wouldn't mind letting Cython put a
preprocessor guard into that code that would let it do something else based
on a macro that CPython defines at this point, maybe even just
Py_BUILD_CORE. We already do these things for all sorts of purposes.

And, obviously, the module init function can also be called directly
instead of running it as part of an import. That's commonly done when using
Cython modules in an embedded Python runtime.


> With that, we would get semi-readable
> source code, which should also play more nicely with hg diffs.

There's still a Cython patch hanging around on github that aims to keep the
C code from changing all that drastically on Python source code changes
(e.g. due to source line references etc.). Might be worth integrating for
something like this. There's also a switch that disables the (helpful)
reproduction of the Python code context in C code comments, in case that
gets in the way for diffs.


> On the down side, we would depend on Cython for evolving .

Right, although not as a strict dependency. The code would still work just
fine in plain Python. But it would depend on Cython for performance. And we
usually recommend shipping the generated C sources anyway to avoid a user
dependency on Cython, so this use case is quite normal.

Stefan


From brian at python.org  Mon Apr 16 22:27:03 2012
From: brian at python.org (Brian Curtin)
Date: Mon, 16 Apr 2012 15:27:03 -0500
Subject: [Python-Dev] cpython: Fix #10854. Make use of the new path and
 name attributes on ImportError
In-Reply-To: <CAD+XWwoO_cW1AmEhxsoJtgH5fyti5JAzPCpGgQLvydGySULVrw@mail.gmail.com>
References: <E1SJeCt-0004kt-4K@dinsdale.python.org>
	<20120416131947.32b859d9@pitrou.net>
	<CAP1=2W7Z81n+21-wR6hjiEj17YRjgR1OUttukTG-HDAGm5fuGw@mail.gmail.com>
	<CAD+XWwoO_cW1AmEhxsoJtgH5fyti5JAzPCpGgQLvydGySULVrw@mail.gmail.com>
Message-ID: <CAD+XWwqtigMAH9N=3zrEzBi7rZFd-FoKAjJ44t8Y=KeDemTF7Q@mail.gmail.com>

On Mon, Apr 16, 2012 at 09:54, Brian Curtin <brian at python.org> wrote:
> On Mon, Apr 16, 2012 at 09:52, Brett Cannon <brett at python.org> wrote:
>>
>>
>> On Mon, Apr 16, 2012 at 07:19, Antoine Pitrou <solipsis at pitrou.net> wrote:
>>>
>>> On Mon, 16 Apr 2012 07:10:31 +0200
>>> brian.curtin <python-checkins at python.org> wrote:
>>> > PyErr_SetFromImportErrorWithNameAndPath
>>>
>>> Apparently this new function isn't documented anywhere.
>>>
>>
>> I forgot to write the docs for it when I committed Brian's code.
>>
>> Brian, do you mind writing the docs for the two functions?
>
> I'll take care of it today.

Done. http://hg.python.org/cpython/rev/5cc8b717b38c

From solipsis at pitrou.net  Mon Apr 16 22:31:18 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Mon, 16 Apr 2012 22:31:18 +0200
Subject: [Python-Dev] cpython: Fix #10854. Make use of the new path and
 name attributes on ImportError
In-Reply-To: <CAD+XWwqtigMAH9N=3zrEzBi7rZFd-FoKAjJ44t8Y=KeDemTF7Q@mail.gmail.com>
References: <E1SJeCt-0004kt-4K@dinsdale.python.org>
	<20120416131947.32b859d9@pitrou.net>
	<CAP1=2W7Z81n+21-wR6hjiEj17YRjgR1OUttukTG-HDAGm5fuGw@mail.gmail.com>
	<CAD+XWwoO_cW1AmEhxsoJtgH5fyti5JAzPCpGgQLvydGySULVrw@mail.gmail.com>
	<CAD+XWwqtigMAH9N=3zrEzBi7rZFd-FoKAjJ44t8Y=KeDemTF7Q@mail.gmail.com>
Message-ID: <1334608278.3426.8.camel@localhost.localdomain>

On Monday, 16 April 2012 at 15:27 -0500, Brian Curtin wrote:
> On Mon, Apr 16, 2012 at 09:54, Brian Curtin <brian at python.org> wrote:
> > On Mon, Apr 16, 2012 at 09:52, Brett Cannon <brett at python.org> wrote:
> >>
> >>
> >> On Mon, Apr 16, 2012 at 07:19, Antoine Pitrou <solipsis at pitrou.net> wrote:
> >>>
> >>> On Mon, 16 Apr 2012 07:10:31 +0200
> >>> brian.curtin <python-checkins at python.org> wrote:
> >>> > PyErr_SetFromImportErrorWithNameAndPath
> >>>
> >>> Apparently this new function isn't documented anywhere.
> >>>
> >>
> >> I forgot to write the docs for it when I committed Brian's code.
> >>
> >> Brian, do you mind writing the docs for the two functions?
> >
> > I'll take care of it today.
> 
> Done. http://hg.python.org/cpython/rev/5cc8b717b38c

It would be nice if the refleak behaviour of these functions was
documented too (or, better, fixed, if I'm reading the code correctly;
reference-stealing functions are generally a nuisance).

By the way, why is the naming so complicated?
PyErr_SetImportError() would have sounded explicit enough :)

Regards

Antoine.



From amauryfa at gmail.com  Mon Apr 16 22:43:43 2012
From: amauryfa at gmail.com (Amaury Forgeot d'Arc)
Date: Mon, 16 Apr 2012 22:43:43 +0200
Subject: [Python-Dev] importlib is now bootstrapped (and what that means)
In-Reply-To: <jmhr93$n78$1@dough.gmane.org>
References: <CAP1=2W5A080aEOd9AYBD9gsTO74jbDh+uU8bha2ozJore1NSvA@mail.gmail.com>
	<jmcjln$p41$1@dough.gmane.org> <4F8C27BB.1010703@v.loewis.de>
	<jmhr93$n78$1@dough.gmane.org>
Message-ID: <CAGmFidbnpnWVhLw2ksBrYvuZOhx5UwkHc5Tj9q7gwrWoMT-npw@mail.gmail.com>

Hi,

2012/4/16 Stefan Behnel <stefan_ml at behnel.de>

> > On the down side, we would depend on Cython for evolving .
>
> Right, although not as a strict dependency. The code would still work just
> fine in plain Python.


Not quite, we are talking about the imp module here...

-- 
Amaury Forgeot d'Arc

From martin at v.loewis.de  Mon Apr 16 23:12:40 2012
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Mon, 16 Apr 2012 23:12:40 +0200
Subject: [Python-Dev] importlib is now bootstrapped (and what that means)
In-Reply-To: <20120416194427.797c9f4b@pitrou.net>
References: <CAP1=2W5A080aEOd9AYBD9gsTO74jbDh+uU8bha2ozJore1NSvA@mail.gmail.com>	<jmcjln$p41$1@dough.gmane.org>
	<4F8C27BB.1010703@v.loewis.de>	<CAP1=2W5XidMXksfXadKWH6z68dQca0HqPNHuAOtmh6vu9ZdYcg@mail.gmail.com>	<4F8C5225.4010205@v.loewis.de>	<CAP1=2W5n-nxPnB=-yyxJnvo=AUDj0X0c6wsSE80zjOexmiUE-g@mail.gmail.com>
	<20120416194427.797c9f4b@pitrou.net>
Message-ID: <4F8C8B48.2060305@v.loewis.de>

On 16.04.2012 19:44, Antoine Pitrou wrote:
> On Mon, 16 Apr 2012 13:33:45 -0400
> Brett Cannon <brett at python.org> wrote:
>> On Mon, Apr 16, 2012 at 13:08, "Martin v. Löwis" <martin at v.loewis.de> wrote:
>>
>>>> So like execute hg diff on the dependent files and if nothing changed
>>>> then touch the auto-generated file w/ 'touch' to prevent future attempts
>>>> to execute the target?
>>>
>>> Exactly. There might be something better than hg diff, perhaps some form
>>> of hg status.
>>>
>>
>> Yeah, hg status is probably better. Now someone just needs to write the
>> shell script. =)
> 
> Wouldn't it be better if Python could compile regardless of the
> presence of a hg repository?

Who says it couldn't?

Regards,
Martin

From g.brandl at gmx.net  Tue Apr 17 01:02:32 2012
From: g.brandl at gmx.net (Georg Brandl)
Date: Tue, 17 Apr 2012 01:02:32 +0200
Subject: [Python-Dev] cpython: Add documentation for the new
	PyErr_SetFromImport* functions
In-Reply-To: <E1SJsJx-0001oz-26@dinsdale.python.org>
References: <E1SJsJx-0001oz-26@dinsdale.python.org>
Message-ID: <jmi8di$s22$1@dough.gmane.org>

On 16.04.2012 22:14, brian.curtin wrote:
> http://hg.python.org/cpython/rev/5cc8b717b38c
> changeset:   76363:5cc8b717b38c
> user:        Brian Curtin<brian at python.org>
> date:        Mon Apr 16 15:14:36 2012 -0500
> summary:
>    Add documentation for the new PyErr_SetFromImport* functions
>
> files:
>    Doc/c-api/exceptions.rst |  18 ++++++++++++++++++
>    1 files changed, 18 insertions(+), 0 deletions(-)
>
>
> diff --git a/Doc/c-api/exceptions.rst b/Doc/c-api/exceptions.rst
> --- a/Doc/c-api/exceptions.rst
> +++ b/Doc/c-api/exceptions.rst
> @@ -229,6 +229,24 @@
>      Similar to :c:func:`PyErr_SetFromWindowsErrWithFilename`, with an additional
>      parameter specifying the exception type to be raised. Availability: Windows.
>
> +.. c:function:: PyObject* PyErr_SetExcWithArgsKwargs(PyObject *exc, PyObject *args, PyObject *kwargs)
> +
> +   This is a convenience function to set an *exc* with the given *args* and
> +   *kwargs* values. If *args* is ``NULL``, an empty :func:`tuple` will be
> +   created when *exc* is created via :c:func:`PyObject_Call`.
> +
> +.. c:function:: PyObject* PyErr_SetFromImportErrorWithName(PyObject *msg, PyObject *name)
> +
> +   This is a convenience function to raise :exc:`ImportError`. *msg* will be
> +   set as the exception's message string, and *name* will be set as the
> +   :exc:`ImportError`'s ``name`` attribute.
> +
> +.. c:function:: PyObject* PyErr_SetFromImportErrorWithNameAndPath(PyObject *msg, PyObject *name, PyObject *path)
> +
> +   This is a convenience function to raise :exc:`ImportError`. *msg* will be
> +   set as the exception's message string. Both *name* and *path* will be set
> +   as the :exc:`ImportError`'s respective ``name`` and ``path`` attributes.
> +

versionadded please.

Georg


From g.brandl at gmx.net  Tue Apr 17 01:11:14 2012
From: g.brandl at gmx.net (Georg Brandl)
Date: Tue, 17 Apr 2012 01:11:14 +0200
Subject: [Python-Dev] importlib is now bootstrapped (and what that means)
In-Reply-To: <20120416161516.CAC2D2509BC@webabinitio.net>
References: <CAP1=2W5A080aEOd9AYBD9gsTO74jbDh+uU8bha2ozJore1NSvA@mail.gmail.com>
	<jmcjln$p41$1@dough.gmane.org> <4F8C27BB.1010703@v.loewis.de>
	<CAP1=2W5XidMXksfXadKWH6z68dQca0HqPNHuAOtmh6vu9ZdYcg@mail.gmail.com>
	<20120416161516.CAC2D2509BC@webabinitio.net>
Message-ID: <jmi8tt$10f$1@dough.gmane.org>

On 16.04.2012 18:15, R. David Murray wrote:
> On Mon, 16 Apr 2012 11:21:34 -0400, Brett Cannon<brett at python.org>  wrote:
>>  On Mon, Apr 16, 2012 at 10:07, "Martin v. Löwis"<martin at v.loewis.de>  wrote:
>>
>>  >  >  We have other instances of this (e.g. the Objects/typeslots.inc file
>>  >  >  is generated and checked in), but in the case of importlib, we have
>>  >  >  to use the ./python binary for freezing to avoid bytecode
>>  >  >  incompatibilities, which obviously is a problem if ./python isn't
>>  >  >  built yet.
>>  >
>>  >  As for dependencies on byte code: we could consider using Cython instead
>>  >  of freeze (not sure whether Cython would build the bootstrap correctly;
>>  >  it may need to be fixed first). With that, we would get semi-readable
>>  >  source code, which should also play more nicely with hg diffs. On the
>>  >  down side, we would depend on Cython for evolving .

That is an interesting idea.  We already depend on external tools, e.g.
autotools, for updating checked-in files, so why not Cython.

>>  We could also just store the raw source code and use that if we are all
>>  willing to pay the performance cost of parsing and compiling the code at
>>  every startup.
>
> I don't see how depending on Cython is better than depending on having
> an existing Python.

No, it's not just an existing Python, it is (at least currently) the same
version of Python being built.  Therefore I wrote about the bootstrapping
problems when bytecode changes.

Depending on Cython is better in that it breaks the bootstrapping cycle,
but on the other hand the C code may need to be regenerated when the C API
changes in an incompatible way.

>  If the only benefit is semi-readable code, surely
> we do have source code for the pre-frozen module, and it is just a matter
> of convincing hg that the bytecode is binary, not text?

The benefit is also (presumably) better performance.

Georg


From brian at python.org  Tue Apr 17 01:19:05 2012
From: brian at python.org (Brian Curtin)
Date: Mon, 16 Apr 2012 18:19:05 -0500
Subject: [Python-Dev] cpython: Add documentation for the new
 PyErr_SetFromImport* functions
In-Reply-To: <jmi8di$s22$1@dough.gmane.org>
References: <E1SJsJx-0001oz-26@dinsdale.python.org>
	<jmi8di$s22$1@dough.gmane.org>
Message-ID: <CAD+XWwo3NRSD3CdjGyxXTQ5X0xw3K=2hPLxj0dN+4=r8mGfuVA@mail.gmail.com>

On Mon, Apr 16, 2012 at 18:02, Georg Brandl <g.brandl at gmx.net> wrote:
> On 16.04.2012 22:14, brian.curtin wrote:
>>
>> http://hg.python.org/cpython/rev/5cc8b717b38c
>> changeset:   76363:5cc8b717b38c
>> user:        Brian Curtin<brian at python.org>
>> date:        Mon Apr 16 15:14:36 2012 -0500
>> summary:
>>   Add documentation for the new PyErr_SetFromImport* functions
>>
>> files:
>>   Doc/c-api/exceptions.rst |  18 ++++++++++++++++++
>>   1 files changed, 18 insertions(+), 0 deletions(-)
>>
>>
>> diff --git a/Doc/c-api/exceptions.rst b/Doc/c-api/exceptions.rst
>> --- a/Doc/c-api/exceptions.rst
>> +++ b/Doc/c-api/exceptions.rst
>> @@ -229,6 +229,24 @@
>>     Similar to :c:func:`PyErr_SetFromWindowsErrWithFilename`, with an
>> additional
>>     parameter specifying the exception type to be raised. Availability:
>> Windows.
>>
>> +.. c:function:: PyObject* PyErr_SetExcWithArgsKwargs(PyObject *exc,
>> PyObject *args, PyObject *kwargs)
>> +
>> +   This is a convenience function to set an *exc* with the given *args*
>> and
>> +   *kwargs* values. If *args* is ``NULL``, an empty :func:`tuple` will be
>> +   created when *exc* is created via :c:func:`PyObject_Call`.
>> +
>> +.. c:function:: PyObject* PyErr_SetFromImportErrorWithName(PyObject *msg,
>> PyObject *name)
>> +
>> +   This is a convenience function to raise :exc:`ImportError`. *msg* will
>> be
>> +   set as the exception's message string, and *name* will be set as the
>> +   :exc:`ImportError`'s ``name`` attribute.
>> +
>> +.. c:function:: PyObject*
>> PyErr_SetFromImportErrorWithNameAndPath(PyObject *msg, PyObject *name,
>> PyObject *path)
>> +
>> +   This is a convenience function to raise :exc:`ImportError`. *msg* will
>> be
>> +   set as the exception's message string. Both *name* and *path* will be
>> set
>> +   as the :exc:`ImportError`'s respective ``name`` and ``path``
>> attributes.
>> +
>
>
> versionadded please.

http://hg.python.org/cpython/rev/d79aa61ec96d

From victor.stinner at gmail.com  Tue Apr 17 01:22:49 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Tue, 17 Apr 2012 01:22:49 +0200
Subject: [Python-Dev] [RFC] PEP 418: Add monotonic time,
 performance counter and process time functions
In-Reply-To: <CAMpsgwZxZiNcqTznROd6MiPnkhmy1XtKzVPCozWp+sSiZ8dUTg@mail.gmail.com>
References: <CAMpsgwZxZiNcqTznROd6MiPnkhmy1XtKzVPCozWp+sSiZ8dUTg@mail.gmail.com>
Message-ID: <CAMpsgwbwwOBAW6FQD+ATPjp6ma=rFQ0WpVwxKyAz14kFLEZujw@mail.gmail.com>

> Here is a simplified version of the first draft of the PEP 418. The
> full version can be read online.
> http://www.python.org/dev/peps/pep-0418/

I wrote an implementation in pure Python using ctypes for Python < 3.3:

https://bitbucket.org/haypo/misc/src/tip/python/pep418.py

I tested it on Linux, OpenBSD, FreeBSD and Windows. It's more a
proof-of-concept to test the PEP than a library written to be reused
by programs.
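
(The target API, i.e. what code written against the PEP should look like
once 3.3 is out, is simply:)

    import time

    start = time.monotonic()            # not affected by system clock adjustments
    time.sleep(0.1)                     # stand-in for the work being timed
    elapsed = time.monotonic() - start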

Victor

From solipsis at pitrou.net  Tue Apr 17 02:27:28 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Tue, 17 Apr 2012 02:27:28 +0200
Subject: [Python-Dev] importlib is now bootstrapped (and what that means)
References: <CAP1=2W5A080aEOd9AYBD9gsTO74jbDh+uU8bha2ozJore1NSvA@mail.gmail.com>
	<jmcjln$p41$1@dough.gmane.org> <4F8C27BB.1010703@v.loewis.de>
	<CAP1=2W5XidMXksfXadKWH6z68dQca0HqPNHuAOtmh6vu9ZdYcg@mail.gmail.com>
	<20120416161516.CAC2D2509BC@webabinitio.net>
	<jmi8tt$10f$1@dough.gmane.org>
Message-ID: <20120417022728.2e492740@pitrou.net>

On Tue, 17 Apr 2012 01:11:14 +0200
Georg Brandl <g.brandl at gmx.net> wrote:
> 
> No, it's not just an existing Python, it is (at least currently) the same
> version of Python being built.  Therefore I wrote about the bootstrapping
> problems when bytecode changes.
> 
> Depending on Cython is better in that it breaks the bootstrapping cycle,
> but on the other hand the C code may need to be regenerated when the C API
> changes in an incompatible way.

Cython OTOH probably needs Python 2.x, which isn't that great for
building Python 3. And requiring Cython for developing is not very
contributor-friendly.

Regards

Antoine.



From roundup-admin at psf.upfronthosting.co.za  Tue Apr 17 02:48:56 2012
From: roundup-admin at psf.upfronthosting.co.za (Python tracker)
Date: Tue, 17 Apr 2012 00:48:56 +0000
Subject: [Python-Dev] Failed issue tracker submission
Message-ID: <20120417004856.DDBC01CAB1@psf.upfronthosting.co.za>


An unexpected error occurred during the processing
of your message. The tracker administrator is being
notified.
-------------- next part --------------
From: python-dev at python.org
To: report at bugs.python.org
Subject: [issue13959]

New changeset a8895117a38d by Brett Cannon in branch 'default':
Issue #13959: Fix a logic bug.
http://hg.python.org/cpython/rev/a8895117a38d

From brett at python.org  Tue Apr 17 02:41:56 2012
From: brett at python.org (Brett Cannon)
Date: Mon, 16 Apr 2012 20:41:56 -0400
Subject: [Python-Dev] importlib is now bootstrapped (and what that means)
In-Reply-To: <20120417022728.2e492740@pitrou.net>
References: <CAP1=2W5A080aEOd9AYBD9gsTO74jbDh+uU8bha2ozJore1NSvA@mail.gmail.com>
	<jmcjln$p41$1@dough.gmane.org> <4F8C27BB.1010703@v.loewis.de>
	<CAP1=2W5XidMXksfXadKWH6z68dQca0HqPNHuAOtmh6vu9ZdYcg@mail.gmail.com>
	<20120416161516.CAC2D2509BC@webabinitio.net>
	<jmi8tt$10f$1@dough.gmane.org> <20120417022728.2e492740@pitrou.net>
Message-ID: <CAP1=2W4F=RA8=C6ufgX6k6V2abkytLA=mSRx41hp9j0pJpDOyw@mail.gmail.com>

On Mon, Apr 16, 2012 at 20:27, Antoine Pitrou <solipsis at pitrou.net> wrote:

> On Tue, 17 Apr 2012 01:11:14 +0200
> Georg Brandl <g.brandl at gmx.net> wrote:
> >
> > No, it's not just an existing Python, it is (at least currently) the same
> > version of Python being built.  Therefore I wrote about the bootstrapping
> > problems when bytecode changes.
> >
> > Depending on Cython is better in that it breaks the bootstrapping cycle,
> > but on the other hand the C code may need to be regenerated when the C
> API
> > changes in an incompatible way.
>
> Cython OTOH probably needs Python 2.x, which isn't that great for
> building Python 3. And requiring Cython for developing is not very
> contributor-friendly.
>

Well, required to regenerate _frozen_importlib, but nothing else. I mean
making fixes go into importlib directly and get tested that way, not
through __import__(). So really Cython would only be needed when
importlib._bootstrap has been changed and you are making a commit.

-Brett


>
> Regards
>
> Antoine.

From cs at zip.com.au  Tue Apr 17 06:48:22 2012
From: cs at zip.com.au (Cameron Simpson)
Date: Tue, 17 Apr 2012 14:48:22 +1000
Subject: [Python-Dev] PEP 418: Add monotonic time,
 performance counter and process time functions
In-Reply-To: <CAMpsgwbZThWMRmXxLNisXAGTZJ1poKGBa3z_EzZPDQdjJwpA5g@mail.gmail.com>
References: <CAMpsgwbZThWMRmXxLNisXAGTZJ1poKGBa3z_EzZPDQdjJwpA5g@mail.gmail.com>
Message-ID: <20120417044821.GA1979@cskk.homeip.net>

On 16Apr2012 01:25, Victor Stinner <victor.stinner at gmail.com> wrote:
| I suppose that most people don't care that "resolution" and
| "precision" are different things.

If we're using the same definitions we discussed offline, where

  - resolution is the units the clock call (underneath) works in (for
    example, nanoseconds)

  - precision is the effective precision of the results, for example
    milliseconds

I'd say people would care if they knew, and mostly care about
"precision".
-- 
Cameron Simpson <cs at zip.com.au> DoD#743

From stefan_ml at behnel.de  Tue Apr 17 07:05:00 2012
From: stefan_ml at behnel.de (Stefan Behnel)
Date: Tue, 17 Apr 2012 07:05:00 +0200
Subject: [Python-Dev] importlib is now bootstrapped (and what that means)
In-Reply-To: <CAGmFidbnpnWVhLw2ksBrYvuZOhx5UwkHc5Tj9q7gwrWoMT-npw@mail.gmail.com>
References: <CAP1=2W5A080aEOd9AYBD9gsTO74jbDh+uU8bha2ozJore1NSvA@mail.gmail.com>
	<jmcjln$p41$1@dough.gmane.org> <4F8C27BB.1010703@v.loewis.de>
	<jmhr93$n78$1@dough.gmane.org>
	<CAGmFidbnpnWVhLw2ksBrYvuZOhx5UwkHc5Tj9q7gwrWoMT-npw@mail.gmail.com>
Message-ID: <jmitls$v2f$1@dough.gmane.org>

Amaury Forgeot d'Arc, 16.04.2012 22:43:
> 2012/4/16 Stefan Behnel
>>> On the down side, we would depend on Cython for evolving .
>>
>> Right, although not as a strict dependency. The code would still work just
>> fine in plain Python.
> 
> Not quite, we are talking of the imp module here...

Hmm, right, after writing the above, I figured that it would at least have
to do something like "import sys" in order to deal with the import config
(path, meta path, ...). That obviously won't work in pure Python at this point.

Stefan


From stefan_ml at behnel.de  Tue Apr 17 07:12:18 2012
From: stefan_ml at behnel.de (Stefan Behnel)
Date: Tue, 17 Apr 2012 07:12:18 +0200
Subject: [Python-Dev] importlib is now bootstrapped (and what that means)
In-Reply-To: <20120417022728.2e492740@pitrou.net>
References: <CAP1=2W5A080aEOd9AYBD9gsTO74jbDh+uU8bha2ozJore1NSvA@mail.gmail.com>
	<jmcjln$p41$1@dough.gmane.org> <4F8C27BB.1010703@v.loewis.de>
	<CAP1=2W5XidMXksfXadKWH6z68dQca0HqPNHuAOtmh6vu9ZdYcg@mail.gmail.com>
	<20120416161516.CAC2D2509BC@webabinitio.net>
	<jmi8tt$10f$1@dough.gmane.org> <20120417022728.2e492740@pitrou.net>
Message-ID: <jmiu3j$3be$1@dough.gmane.org>

Antoine Pitrou, 17.04.2012 02:27:
> On Tue, 17 Apr 2012 01:11:14 +0200
> Georg Brandl wrote:
>> No, it's not just an existing Python, it is (at least currently) the same
>> version of Python being built.  Therefore I wrote about the bootstrapping
>> problems when bytecode changes.
>>
>> Depending on Cython is better in that it breaks the bootstrapping cycle,
>> but on the other hand the C code may need to be regenerated when the C API
>> changes in an incompatible way.
> 
> Cython OTOH probably needs Python 2.x, which isn't that great for
> building Python 3.

It uses 2to3 at install time, so you get a Py3 version out of it. No need
to have Py2 installed in order to use it.


> And requiring Cython for developing is not very
> contributor-friendly.

Brett Cannon answered that one. If you ship the C sources, developers will
only be impacted when they want to modify source code that gets compiled
with Cython.

Stefan


From bitsink at gmail.com  Mon Apr 16 18:10:48 2012
From: bitsink at gmail.com (Nam Nguyen)
Date: Mon, 16 Apr 2012 09:10:48 -0700
Subject: [Python-Dev] Suggested addition to PEP 8 for context managers
In-Reply-To: <20120416113037.66e4da6f@limelight.wooz.org>
References: <8402DB76-DF4F-41BD-BD9B-3689AF3D6159@gmail.com>
	<20120416113037.66e4da6f@limelight.wooz.org>
Message-ID: <CANG+ZTNCwkHapAxZ87L-fcbPfi8FQwK=kSY7OCEApVoHCryJeA@mail.gmail.com>

On Mon, Apr 16, 2012 at 8:30 AM, Barry Warsaw <barry at python.org> wrote:
> On Apr 15, 2012, at 01:13 PM, Raymond Hettinger wrote:
>
>>We should publish some advice on creating content managers.
>
> I agree, I'm just not sure PEP 8 is the right place for it.
>
> PEP 8 seems like it is structured more as mechanical guidelines for the look
> and feel of code, not so much for the semantic content of the code.  As such,

I'd like to piggyback on this thread for a situation to consider in PEP 8.

PEP 8 suggests no extra spaces after and before square brackets, and
colons. So code like this is appropriate:

a_list[1:3]

But I find it less readable in the case of:

a_list[pos + 1:-1]

The colon is seemingly lost in the right.

Would it be better to read like below?

a_list[pos + 1 : -1]

Any opinion?

Nam

From mcepl at redhat.com  Tue Apr 17 08:53:43 2012
From: mcepl at redhat.com (Matej Cepl)
Date: Tue, 17 Apr 2012 08:53:43 +0200
Subject: [Python-Dev] Suggested addition to PEP 8 for context managers
In-Reply-To: <CANG+ZTNCwkHapAxZ87L-fcbPfi8FQwK=kSY7OCEApVoHCryJeA@mail.gmail.com>
References: <8402DB76-DF4F-41BD-BD9B-3689AF3D6159@gmail.com>
	<20120416113037.66e4da6f@limelight.wooz.org>
	<CANG+ZTNCwkHapAxZ87L-fcbPfi8FQwK=kSY7OCEApVoHCryJeA@mail.gmail.com>
Message-ID: <4F8D1377.5020001@redhat.com>

On 16.4.2012 18:10, Nam Nguyen wrote:
> a_list[pos + 1 : -1]

or the other way around

a_list[pos+1:-1]

?

Matěj

From p.f.moore at gmail.com  Tue Apr 17 09:14:11 2012
From: p.f.moore at gmail.com (Paul Moore)
Date: Tue, 17 Apr 2012 08:14:11 +0100
Subject: [Python-Dev] Suggested addition to PEP 8 for context managers
In-Reply-To: <CANG+ZTNCwkHapAxZ87L-fcbPfi8FQwK=kSY7OCEApVoHCryJeA@mail.gmail.com>
References: <8402DB76-DF4F-41BD-BD9B-3689AF3D6159@gmail.com>
	<20120416113037.66e4da6f@limelight.wooz.org>
	<CANG+ZTNCwkHapAxZ87L-fcbPfi8FQwK=kSY7OCEApVoHCryJeA@mail.gmail.com>
Message-ID: <CACac1F-YDN2jvDuTk_RX2F-7f=Q7u5QAM3NVw3R115LZ8ZccaA@mail.gmail.com>

On 16 April 2012 17:10, Nam Nguyen <bitsink at gmail.com> wrote:
> PEP 8 suggests no extra spaces after and before square brackets, and
> colons. So code like this is appropriate:
>
> a_list[1:3]
>
> But I find it less readable in the case of:
>
> a_list[pos + 1:-1]
>
> The colon is seemingly lost in the right.
>
> Would it be better to read like below?
>
> a_list[pos + 1 : -1]
>
> Any opinion?

It says no space *before* a colon, not after. So the following should
be OK (and is what I'd use):

a_list[pos + 1: -1]

Paul.

From sumerc at gmail.com  Tue Apr 17 09:23:13 2012
From: sumerc at gmail.com (=?UTF-8?Q?S=C3=BCmer_Cip?=)
Date: Tue, 17 Apr 2012 10:23:13 +0300
Subject: [Python-Dev] [RFC] PEP 418: Add monotonic time,
 performance counter and process time functions
In-Reply-To: <CAMpsgwZvQq0ZyrM8ozDpxS4joAspmY0rPXHQKNz2j3jDDWUigA@mail.gmail.com>
References: <CAMpsgwZxZiNcqTznROd6MiPnkhmy1XtKzVPCozWp+sSiZ8dUTg@mail.gmail.com>
	<CAMpsgwZvQq0ZyrM8ozDpxS4joAspmY0rPXHQKNz2j3jDDWUigA@mail.gmail.com>
Message-ID: <CAOkQLaA4dbNG6qEMa=cWbGOw8xU3CQxoAe_Q1b9EH3in4bMejA@mail.gmail.com>

On Sun, Apr 15, 2012 at 6:18 PM, Victor Stinner <victor.stinner at gmail.com> wrote:

> > Here is a simplified version of the first draft of the PEP 418. The
> > full version can be read online.
> > http://www.python.org/dev/peps/pep-0418/
>
> FYI there is no time.thread_time() function. It would only be
> available on Windows and Linux. It does not use seconds but CPU
> cycles. No module or program of the Python source code need such
> function,


Just FYI: on Mac OS X, you can use thread_info() to get that information.
You can also get it on Solaris. In the yappi profiler I use all of these
approaches together to provide an OS-independent thread_times()
functionality. Here is the relevant code:
http://bitbucket.org/sumerc/yappi/src/7c7dc11e8728/timing.c

I also think that you are right about Python not really having any use case
for this functionality, ...


-- 
Sümer Cip

From solipsis at pitrou.net  Tue Apr 17 11:52:01 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Tue, 17 Apr 2012 11:52:01 +0200
Subject: [Python-Dev] importlib is now bootstrapped (and what that means)
In-Reply-To: <CAP1=2W4F=RA8=C6ufgX6k6V2abkytLA=mSRx41hp9j0pJpDOyw@mail.gmail.com>
References: <CAP1=2W5A080aEOd9AYBD9gsTO74jbDh+uU8bha2ozJore1NSvA@mail.gmail.com>
	<jmcjln$p41$1@dough.gmane.org> <4F8C27BB.1010703@v.loewis.de>
	<CAP1=2W5XidMXksfXadKWH6z68dQca0HqPNHuAOtmh6vu9ZdYcg@mail.gmail.com>
	<20120416161516.CAC2D2509BC@webabinitio.net>
	<jmi8tt$10f$1@dough.gmane.org> <20120417022728.2e492740@pitrou.net>
	<CAP1=2W4F=RA8=C6ufgX6k6V2abkytLA=mSRx41hp9j0pJpDOyw@mail.gmail.com>
Message-ID: <20120417115201.42992338@pitrou.net>

On Mon, 16 Apr 2012 20:41:56 -0400
Brett Cannon <brett at python.org> wrote:
> On Mon, Apr 16, 2012 at 20:27, Antoine Pitrou <solipsis at pitrou.net> wrote:
> 
> > On Tue, 17 Apr 2012 01:11:14 +0200
> > Georg Brandl <g.brandl at gmx.net> wrote:
> > >
> > > No, it's not just an existing Python, it is (at least currently) the same
> > > version of Python being built.  Therefore I wrote about the bootstrapping
> > > problems when bytecode changes.
> > >
> > > Depending on Cython is better in that it breaks the bootstrapping cycle,
> > > but on the other hand the C code may need to be regenerated when the C
> > API
> > > changes in an incompatible way.
> >
> > Cython OTOH probably needs Python 2.x, which isn't that great for
> > building Python 3. And requiring Cython for developing is not very
> > contributor-friendly.
> >
> 
> Well, required to regenerate _frozen_importlib, but nothing else. I mean
> making fixes go into importlib directly and get tested that way, not
> through __import__(). So really Cython would only be needed when
> importlib._bootstrap has been changed and you are making a commit.

That's still a large dependency to bring in, while we already have a
working solution.
I'd understand using Cython to develop some new extension module which
requires linking against a C library (and is thus impossible to write
in pure Python). But for importlib it's totally unnecessary.

I guess I'm -1 on it.

Regards

Antoine.

From solipsis at pitrou.net  Tue Apr 17 11:53:38 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Tue, 17 Apr 2012 11:53:38 +0200
Subject: [Python-Dev] cpython: Issue #13959: Re-implement
 imp.load_source() in imp.py.
References: <E1SJxtD-0008Pd-RG@dinsdale.python.org>
Message-ID: <20120417115338.7fae2d8f@pitrou.net>

On Tue, 17 Apr 2012 04:11:31 +0200
brett.cannon <python-checkins at python.org> wrote:
> http://hg.python.org/cpython/rev/3b5b4b4bb43c
> changeset:   76371:3b5b4b4bb43c
> user:        Brett Cannon <brett at python.org>
> date:        Mon Apr 16 22:11:25 2012 -0400
> summary:
>   Issue #13959: Re-implement imp.load_source() in imp.py.
> 
> files:
>   Lib/imp.py      |   29 ++-
>   Python/import.c |  390 ------------------------------------
>   2 files changed, 28 insertions(+), 391 deletions(-)

It's nice to see all that C code go away :-)

Regards

Antoine.



From eric at trueblade.com  Tue Apr 17 12:43:30 2012
From: eric at trueblade.com (Eric V. Smith)
Date: Tue, 17 Apr 2012 06:43:30 -0400
Subject: [Python-Dev] importlib is now bootstrapped (and what that means)
In-Reply-To: <20120417115201.42992338@pitrou.net>
References: <CAP1=2W5A080aEOd9AYBD9gsTO74jbDh+uU8bha2ozJore1NSvA@mail.gmail.com>
	<jmcjln$p41$1@dough.gmane.org> <4F8C27BB.1010703@v.loewis.de>
	<CAP1=2W5XidMXksfXadKWH6z68dQca0HqPNHuAOtmh6vu9ZdYcg@mail.gmail.com>
	<20120416161516.CAC2D2509BC@webabinitio.net>
	<jmi8tt$10f$1@dough.gmane.org> <20120417022728.2e492740@pitrou.net>
	<CAP1=2W4F=RA8=C6ufgX6k6V2abkytLA=mSRx41hp9j0pJpDOyw@mail.gmail.com>
	<20120417115201.42992338@pitrou.net>
Message-ID: <4F8D4952.603@trueblade.com>

On 4/17/2012 5:52 AM, Antoine Pitrou wrote:
> On Mon, 16 Apr 2012 20:41:56 -0400
> Brett Cannon <brett at python.org> wrote:
>> On Mon, Apr 16, 2012 at 20:27, Antoine Pitrou <solipsis at pitrou.net> wrote:
>>
>>> On Tue, 17 Apr 2012 01:11:14 +0200
>>> Georg Brandl <g.brandl at gmx.net> wrote:
>>>>
>>>> No, it's not just an existing Python, it is (at least currently) the same
>>>> version of Python being built.  Therefore I wrote about the bootstrapping
>>>> problems when bytecode changes.
>>>>
>>>> Depending on Cython is better in that it breaks the bootstrapping cycle,
>>>> but on the other hand the C code may need to be regenerated when the C
>>> API
>>>> changes in an incompatible way.
>>>
>>> Cython OTOH probably needs Python 2.x, which isn't that great for
>>> building Python 3. And requiring Cython for developing is not very
>>> contributor-friendly.
>>>
>>
>> Well, required to regenerate _frozen_importlib, but nothing else. I mean
>> making fixes go into importlib directly and get tested that way, not
>> through __import__(). So really Cython would only be needed when
>> importlib._bootstrap has been changed and you are making a commit.
> 
> That's still a large dependency to bring in, while we already have a
> working solution.
> I'd understand using Cython to develop some new extension module which
> requires linking against a C library (and is thus impossible to write
> in pure Python). But for importlib that's totally non-necessary.
> 
> I guess I'm -1 on it.

I agree. If the problem we're trying to solve is that the generated file
isn't always rebuilt, bringing in a large dependency like Cython seems
like overkill to me.

We basically have a working solution now (thanks, Brett). I think we
should focus on getting it polished. Maybe we can bring in Cython in a
later release, if in the 3.4 timeframe it still seems like we have a
problem to solve. I suspect things will be working fine.

Eric.


From kristjan at ccpgames.com  Tue Apr 17 12:55:23 2012
From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=)
Date: Tue, 17 Apr 2012 10:55:23 +0000
Subject: [Python-Dev] issue 9141, finalizers and gc module
Message-ID: <EFE3877620384242A686D52278B7CCD339509E@RKV-IT-EXCH104.ccp.ad.local>

Hello there.
For those familiar with the intricacies of gcmodule.c, I would like to draw your attention to http://bugs.python.org/issue9141.

I would like to consult with you to find out more about finalizers/gc in order to improve the in-file documentation.

Traditionally, it has not been possible to collect objects that have __del__ methods or, more generally, finalizers.  Instead, they, and any objects that are reachable from them, are put in gc.garbage.
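
For concreteness, a minimal sketch of that behaviour with the current
collector (throwaway class name, nothing from the real code base):

    import gc

    class Finalized:
        def __del__(self):
            pass

    a = Finalized()
    b = Finalized()
    a.other = b          # build a reference cycle whose members
    b.other = a          # both carry a __del__ finalizer
    del a, b
    gc.collect()
    print(gc.garbage)    # the two objects end up here, uncollected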

What I want to know is, why is this limitation in place?  Here are two possibilities:

1)      "The order of calling finalizers in a cycle is undefined, so it is not a solvable problem".  But this would still allow a single object with only internal cycles to be collected, and currently that is not the case.

2)      "During collection, the interpreter is in a fragile state (linked lists of gc objects with refcount bookkeeping in place) and no unknown code can be allowed to run".  This is the one I personally think is the true reason.

The reason I'm asking is that Python has traditionally tested for finalizers by checking the tp_del slot of the object's type.  This slot will be set if the object has a __del__ method.  Since generators were added, they use the tp_del slot for their own finalizers, but special code was put in place so that it could be determined whether a generator's finalizer was "trivial" or not (trivial meaning "just doing Py_DECREF()").
This allowed generators to be collected too, if they were in a common, trivial state; otherwise, they had to be put in gc.garbage.

Yesterday, I stumbled upon the fact that tp_dealloc of iobase objects also calls an internal finalizer, one that isn't exposed in any tp_del slot:  It will invoke a PyObject_CallMethod(self, "close", "") on itself.  This will happen whenever iobase objects are part of a cycle that needs to be cleared.  This can cause arbitrary code to run.  There are even provisions made for the resurrection of the iobase objects based on the action of this close() call.

Clearly, this has the potential to be non-trivial, and therefore, again, I see this as an argument for my proposed patch in issue 9141.  But others have voiced worries that if we stop collecting iobase objects, that would be a regression.

So, I ask you:  What is allowed during tp_clear()?  Is this a hard rule?  What is the reason?

Kristján


From rdmurray at bitdance.com  Tue Apr 17 14:18:38 2012
From: rdmurray at bitdance.com (R. David Murray)
Date: Tue, 17 Apr 2012 08:18:38 -0400
Subject: [Python-Dev] importlib is now bootstrapped (and what that means)
In-Reply-To: <jmi8tt$10f$1@dough.gmane.org>
References: <CAP1=2W5A080aEOd9AYBD9gsTO74jbDh+uU8bha2ozJore1NSvA@mail.gmail.com>
	<jmcjln$p41$1@dough.gmane.org> <4F8C27BB.1010703@v.loewis.de>
	<CAP1=2W5XidMXksfXadKWH6z68dQca0HqPNHuAOtmh6vu9ZdYcg@mail.gmail.com>
	<20120416161516.CAC2D2509BC@webabinitio.net>
	<jmi8tt$10f$1@dough.gmane.org>
Message-ID: <20120417121838.5CBD22509E8@webabinitio.net>

On Tue, 17 Apr 2012 01:11:14 +0200, Georg Brandl <g.brandl at gmx.net> wrote:
> On 16.04.2012 18:15, R. David Murray wrote:
> > I don't see how depending on Cython is better than depending on having
> > an existing Python.
> 
> No, it's not just an existing Python, it is (at least currently) the same
> version of Python being built.  Therefore I wrote about the bootstrapping
> problems when bytecode changes.

Ah, yes, I had missed that subtlety.

--David

From rdmurray at bitdance.com  Tue Apr 17 14:25:01 2012
From: rdmurray at bitdance.com (R. David Murray)
Date: Tue, 17 Apr 2012 08:25:01 -0400
Subject: [Python-Dev] Suggested addition to PEP 8 for context managers
In-Reply-To: <4F8D1377.5020001@redhat.com>
References: <8402DB76-DF4F-41BD-BD9B-3689AF3D6159@gmail.com>
	<20120416113037.66e4da6f@limelight.wooz.org>
	<CANG+ZTNCwkHapAxZ87L-fcbPfi8FQwK=kSY7OCEApVoHCryJeA@mail.gmail.com>
	<4F8D1377.5020001@redhat.com>
Message-ID: <20120417122502.0B9D82509E8@webabinitio.net>

On Tue, 17 Apr 2012 08:53:43 +0200, Matej Cepl <mcepl at redhat.com> wrote:
> On 16.4.2012 18:10, Nam Nguyen wrote:
> > a_list[pos + 1 : -1]
> 
> or other way around
> 
> a_list[pos+1:-1]


That's what I always use.  No spaces inside the brackets for me :)

If the expression gets unreadable that way, factor it out.

--David

From rdmurray at bitdance.com  Tue Apr 17 14:35:44 2012
From: rdmurray at bitdance.com (R. David Murray)
Date: Tue, 17 Apr 2012 08:35:44 -0400
Subject: [Python-Dev] PEP 418: Add monotonic time,
	performance counter and process time functions
In-Reply-To: <20120417044821.GA1979@cskk.homeip.net>
References: <CAMpsgwbZThWMRmXxLNisXAGTZJ1poKGBa3z_EzZPDQdjJwpA5g@mail.gmail.com>
	<20120417044821.GA1979@cskk.homeip.net>
Message-ID: <20120417123545.3DF842509E8@webabinitio.net>

On Tue, 17 Apr 2012 14:48:22 +1000, Cameron Simpson <cs at zip.com.au> wrote:
> On 16Apr2012 01:25, Victor Stinner <victor.stinner at gmail.com> wrote:
> | I suppose that most people don't care that "resolution" and
> | "precision" are different things.
> 
> If we're using the same definitions we discussed offline, where
> 
>   - resolution is the units the clock call (underneath) works in (for
>     example, nanoseconds)
> 
>   - precision is the effective precision of the results, for example
>     milliseconds
> 
> I'd say people would care if they knew, and mostly care about
> "precision".

I think what the user cares about is "what is the smallest tick that
this clock result will faithfully represent?".  If the number of bits
returned is larger than the clock accuracy, you want the clock accuracy.
If the number of bits returned is smaller than the clock accuracy,
you want the number of bits.

(Yes, I'm using accuracy in a slightly different sense here...I think
we don't have the right words for this.)

To use other words, what the users cares about are the error bars on
the returned result.

--David

From zuo at chopin.edu.pl  Tue Apr 17 15:29:54 2012
From: zuo at chopin.edu.pl (Jan Kaliszewski)
Date: Tue, 17 Apr 2012 15:29:54 +0200
Subject: [Python-Dev] Suggested addition to PEP 8 for context managers
In-Reply-To: <CACac1F-YDN2jvDuTk_RX2F-7f=Q7u5QAM3NVw3R115LZ8ZccaA@mail.gmail.com>
References: <8402DB76-DF4F-41BD-BD9B-3689AF3D6159@gmail.com>
	<20120416113037.66e4da6f@limelight.wooz.org>
	<CANG+ZTNCwkHapAxZ87L-fcbPfi8FQwK=kSY7OCEApVoHCryJeA@mail.gmail.com>
	<CACac1F-YDN2jvDuTk_RX2F-7f=Q7u5QAM3NVw3R115LZ8ZccaA@mail.gmail.com>
Message-ID: <20120417132954.GD1788@chopin.edu.pl>

Paul Moore dixit (2012-04-17, 08:14):

> On 16 April 2012 17:10, Nam Nguyen <bitsink at gmail.com> wrote:
> > PEP 8 suggests no extra spaces after and before square brackets, and
> > colons. So code like this is appropriate:
> >
> > a_list[1:3]
> >
> > But I find it less readable in the case of:
> >
> > a_list[pos + 1:-1]
> >
> > The colon is seemingly lost in the right.
> >
> > Would it be better to read like below?
> >
> > a_list[pos + 1 : -1]
> >
> > Any opinion?
> 
> It says no space *before* a colon, not after. So the following should
> be OK (and is what I'd use):
> 
> a_list[pos + 1: -1]

I'd prefer either:

    a_list[pos+1:-1]

or

    a_list[(pos + 1):-1]

Regards.
*j


From rosuav at gmail.com  Tue Apr 17 16:18:41 2012
From: rosuav at gmail.com (Chris Angelico)
Date: Wed, 18 Apr 2012 00:18:41 +1000
Subject: [Python-Dev] PEP 418: Add monotonic time,
 performance counter and process time functions
In-Reply-To: <20120417044821.GA1979@cskk.homeip.net>
References: <CAMpsgwbZThWMRmXxLNisXAGTZJ1poKGBa3z_EzZPDQdjJwpA5g@mail.gmail.com>
	<20120417044821.GA1979@cskk.homeip.net>
Message-ID: <CAPTjJmq8O_yNvJDK56S4W3=an9RQHdpES-HuiiieX_JLt4s8Yw@mail.gmail.com>

On Tue, Apr 17, 2012 at 2:48 PM, Cameron Simpson <cs at zip.com.au> wrote:
> On 16Apr2012 01:25, Victor Stinner <victor.stinner at gmail.com> wrote:
> | I suppose that most people don't care that "resolution" and
> | "precision" are different things.
>
> If we're using the same definitions we discussed offline, where
>
>  - resolution is the units the clock call (underneath) works in (for
>    example, nanoseconds)
>
>  - precision is the effective precision of the results, for example
>    milliseconds
>
> I'd say people would care if they knew, and mostly care about
> "precision".

Meaning that resolution is a matter of format and API, not of clock.
If you take a C clock API that returns a value in nanoseconds and
return it as a Python float, you've changed the resolution. I don't
think resolution matters much, beyond that (for example) nanosecond
resolution allows a clock to be subsequently upgraded as far as
nanosecond precision without breaking existing code, even if currently
it's only providing microsecond precision. But it makes just as much
sense for your resolution to be 2**64ths-of-a-second or
quarters-of-the-internal-CPU-clock-speed as it does for nanoseconds.
As long as it's some fraction of the SI second, every different
resolution is just a fixed ratio away from every other one.
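
A quick illustration of the float point (made-up example values): an integer
nanosecond count keeps nanosecond resolution, but once the value is an
epoch-sized number of seconds a Python float cannot represent nanosecond
steps any more:

    t = 1334000000.0        # a 2012-era POSIX timestamp, as a float
    print(t + 1e-9 == t)    # True: a 1 ns step is below the float's ulp here
    print(t + 1e-6 == t)    # False: microsecond steps still survive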

ChrisA

From guido at python.org  Tue Apr 17 16:21:46 2012
From: guido at python.org (Guido van Rossum)
Date: Tue, 17 Apr 2012 07:21:46 -0700
Subject: [Python-Dev] Suggested addition to PEP 8 for context managers
In-Reply-To: <CACac1F-YDN2jvDuTk_RX2F-7f=Q7u5QAM3NVw3R115LZ8ZccaA@mail.gmail.com>
References: <8402DB76-DF4F-41BD-BD9B-3689AF3D6159@gmail.com>
	<20120416113037.66e4da6f@limelight.wooz.org>
	<CANG+ZTNCwkHapAxZ87L-fcbPfi8FQwK=kSY7OCEApVoHCryJeA@mail.gmail.com>
	<CACac1F-YDN2jvDuTk_RX2F-7f=Q7u5QAM3NVw3R115LZ8ZccaA@mail.gmail.com>
Message-ID: <CAP7+vJKJd0vQH9Jdx_5eWzu3=swb9uHBAqEeJtrsktEeLWSs1w@mail.gmail.com>

On Tue, Apr 17, 2012 at 12:14 AM, Paul Moore <p.f.moore at gmail.com> wrote:
> On 16 April 2012 17:10, Nam Nguyen <bitsink at gmail.com> wrote:
>> PEP 8 suggests no extra spaces after and before square brackets, and
>> colons. So code like this is appropriate:
>>
>> a_list[1:3]
>>
>> But I find it less readable in the case of:
>>
>> a_list[pos + 1:-1]
>>
>> The colon is seemingly lost in the right.
>>
>> Would it be better to read like below?
>>
>> a_list[pos + 1 : -1]
>>
>> Any opinion?
>
> It says no space *before* a colon, not after. So the following should
> be OK (and is what I'd use):
>
> a_list[pos + 1: -1]

I hope that's not what it says about slices -- that was meant for dict
displays. For slices it should be symmetrical. In this case I would
remove the spaces around the +, but it's okay to add spaces around the
: too. It does look odd to have an operator that binds tighter (the +)
surrounded by spaces while the operator that binds less tightly (:) is
not. (And in this context, : should be considered an operator.)
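
In other words (illustrative snippets, not text from PEP 8):

    a_list = list(range(10))
    pos = 3

    a_list[pos+1:-1]        # fine: no spaces at all
    a_list[pos + 1 : -1]    # fine: the ':' spaced symmetrically
    a_list[pos + 1:-1]      # odd: the tighter-binding '+' is spaced
                            # while the lower-priority ':' is not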

-- 
--Guido van Rossum (python.org/~guido)

From martin at v.loewis.de  Tue Apr 17 16:45:36 2012
From: martin at v.loewis.de (martin at v.loewis.de)
Date: Tue, 17 Apr 2012 16:45:36 +0200
Subject: [Python-Dev] issue 9141, finalizers and gc module
In-Reply-To: <EFE3877620384242A686D52278B7CCD339509E@RKV-IT-EXCH104.ccp.ad.local>
References: <EFE3877620384242A686D52278B7CCD339509E@RKV-IT-EXCH104.ccp.ad.local>
Message-ID: <20120417164536.Horde.8HL-ZFNNcXdPjYIQOrmmTmA@webmail.df.eu>

> What I want to know is, why is this limitation in place?  Here are  
> two possibilities:
>
> 1)      "The order of calling finalizers in a cycle is undefined so  
> it is not a solvable problem".  But this would allow a single  
> object, with only internal cycles to be collected.  Currently this  
> is not the case.

It's similar to this, but not exactly: "A finalizer in a cycle may
try to refer back to an object that was already cleared, and fail
because of that; this may cause arbitrary failures changing from
run to run".

It's true that a cycle involving only a single object with __del__
could be safely collected. This special case was not implemented.

> 2)      "During collection, the interpreter is in a fragile state  
> (linked lists of gc objects with refcount bookkeeping in place) and  
> no unknown code can be allowed to run".  This is the reason I  
> personally think is the true reason.

No, that's not the case at all. As Antoine explains in the issue,
there are plenty of ways in which Python code can be run when breaking
a cycle. Not only weakrefs, but also objects released as a consequence
of tp_clear which weren't *in* the cycle (but hung from it).

> So, I ask you:  What is allowed during tp_clear()?  Is this a hard  
> rule?  What is the reason?

We are all consenting adults. Everything is allowed - you just have to
live with the consequences.

Regards,
Martin



From barry at python.org  Tue Apr 17 17:36:31 2012
From: barry at python.org (Barry Warsaw)
Date: Tue, 17 Apr 2012 11:36:31 -0400
Subject: [Python-Dev] Suggested addition to PEP 8 for context managers
In-Reply-To: <20120417122502.0B9D82509E8@webabinitio.net>
References: <8402DB76-DF4F-41BD-BD9B-3689AF3D6159@gmail.com>
	<20120416113037.66e4da6f@limelight.wooz.org>
	<CANG+ZTNCwkHapAxZ87L-fcbPfi8FQwK=kSY7OCEApVoHCryJeA@mail.gmail.com>
	<4F8D1377.5020001@redhat.com>
	<20120417122502.0B9D82509E8@webabinitio.net>
Message-ID: <20120417113631.7fb1b543@resist.wooz.org>

On Apr 17, 2012, at 08:25 AM, R. David Murray wrote:

>On Tue, 17 Apr 2012 08:53:43 +0200, Matej Cepl <mcepl at redhat.com> wrote:
>> On 16.4.2012 18:10, Nam Nguyen wrote:
>> > a_list[pos + 1 : -1]
>> 
>> or other way around
>> 
>> a_list[pos+1:-1]
>
>
>That's what I always use.  No spaces inside the brackets for me :)
>
>If the expression gets unreadable that way, factor it out.

+1

-Barry

From brett at python.org  Tue Apr 17 17:36:20 2012
From: brett at python.org (Brett Cannon)
Date: Tue, 17 Apr 2012 11:36:20 -0400
Subject: [Python-Dev] cpython: Issue #13959: Re-implement
 imp.load_source() in imp.py.
In-Reply-To: <20120417115338.7fae2d8f@pitrou.net>
References: <E1SJxtD-0008Pd-RG@dinsdale.python.org>
	<20120417115338.7fae2d8f@pitrou.net>
Message-ID: <CAP1=2W7Dum-5TUXy5msU1F8cxdKuDHH=2hbUqhEAvU-RWgt3VQ@mail.gmail.com>

On Tue, Apr 17, 2012 at 05:53, Antoine Pitrou <solipsis at pitrou.net> wrote:

> On Tue, 17 Apr 2012 04:11:31 +0200
> brett.cannon <python-checkins at python.org> wrote:
> > http://hg.python.org/cpython/rev/3b5b4b4bb43c
> > changeset:   76371:3b5b4b4bb43c
> > user:        Brett Cannon <brett at python.org>
> > date:        Mon Apr 16 22:11:25 2012 -0400
> > summary:
> >   Issue #13959: Re-implement imp.load_source() in imp.py.
> >
> > files:
> >   Lib/imp.py      |   29 ++-
> >   Python/import.c |  390 ------------------------------------
> >   2 files changed, 28 insertions(+), 391 deletions(-)
>
> It's nice to see all that C code go away :-)
>

Oh yes. =) It's definitely acting as motivation to put up with imp's crappy
APIs.

From brett at python.org  Tue Apr 17 17:41:32 2012
From: brett at python.org (Brett Cannon)
Date: Tue, 17 Apr 2012 11:41:32 -0400
Subject: [Python-Dev] importlib is now bootstrapped (and what that means)
In-Reply-To: <4F8D4952.603@trueblade.com>
References: <CAP1=2W5A080aEOd9AYBD9gsTO74jbDh+uU8bha2ozJore1NSvA@mail.gmail.com>
	<jmcjln$p41$1@dough.gmane.org> <4F8C27BB.1010703@v.loewis.de>
	<CAP1=2W5XidMXksfXadKWH6z68dQca0HqPNHuAOtmh6vu9ZdYcg@mail.gmail.com>
	<20120416161516.CAC2D2509BC@webabinitio.net>
	<jmi8tt$10f$1@dough.gmane.org> <20120417022728.2e492740@pitrou.net>
	<CAP1=2W4F=RA8=C6ufgX6k6V2abkytLA=mSRx41hp9j0pJpDOyw@mail.gmail.com>
	<20120417115201.42992338@pitrou.net> <4F8D4952.603@trueblade.com>
Message-ID: <CAP1=2W7YCDrNc7EXJT4Sh7L4rmr9SRLpB5vHdL8dgrs1_Nwj2A@mail.gmail.com>

On Tue, Apr 17, 2012 at 06:43, Eric V. Smith <eric at trueblade.com> wrote:

> On 4/17/2012 5:52 AM, Antoine Pitrou wrote:
> > On Mon, 16 Apr 2012 20:41:56 -0400
> > Brett Cannon <brett at python.org> wrote:
> >> On Mon, Apr 16, 2012 at 20:27, Antoine Pitrou <solipsis at pitrou.net>
> wrote:
> >>
> >>> On Tue, 17 Apr 2012 01:11:14 +0200
> >>> Georg Brandl <g.brandl at gmx.net> wrote:
> >>>>
> >>>> No, it's not just an existing Python, it is (at least currently) the
> same
> >>>> version of Python being built.  Therefore I wrote about the
> bootstrapping
> >>>> problems when bytecode changes.
> >>>>
> >>>> Depending on Cython is better in that it breaks the bootstrapping
> cycle,
> >>>> but on the other hand the C code may need to be regenerated when the C
> >>> API
> >>>> changes in an incompatible way.
> >>>
> >>> Cython OTOH probably needs Python 2.x, which isn't that great for
> >>> building Python 3. And requiring Cython for developing is not very
> >>> contributor-friendly.
> >>>
> >>
> >> Well, required to regenerate _frozen_importlib, but nothing else. I mean
> >> making fixes go into importlib directly and get tested that way, not
> >> through __import__(). So really Cython would only be needed when
> >> importlib._bootstrap has been changed and you are making a commit.
> >
> > That's still a large dependency to bring in, while we already have a
> > working solution.
> > I'd understand using Cython to develop some new extension module which
> > requires linking against a C library (and is thus impossible to write
> > in pure Python). But for importlib that's totally non-necessary.
> >
> > I guess I'm -1 on it.
>
> I agree. If the problem we're trying to solve is that the generated file
> isn't always rebuilt, bringing in a large dependency like Cython seems
> like overkill to me.
>

Actually, Cython would help with a subtle maintenance burden: maintaining
*any* C code for import. Right now,
Python/import.c:PyImport_ImportModuleLevelObject() is an accelerated C
version of importlib.__import__(): it checks sys.modules first and only
then calls into the Python code. Cython would do away with that C
acceleration code (which I have already had to modify once, and in which
Antoine found a couple of refleaks).
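
Roughly, that C fast path amounts to something like this in Python (a
sketch, not the actual import.c logic):

    import sys
    import importlib

    def import_fast(name):
        # Cheap case currently handled in C: the module is already cached.
        try:
            return sys.modules[name]
        except KeyError:
            # Everything else is delegated to the pure-Python machinery.
            return importlib.import_module(name)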


>
> We basically have a working solution now (thanks, Brett). I think we
> should focus on getting it polished. Maybe we can bring in Cython in a
> later release, if in the 3.4 timeframe it still seems like we have a
> problem to solve. I suspect things will be working fine.


I don't view this discussion as work/not work but more as work/work better
(possibly; I have a severe bias here toward cutting the C code down to zilch,
since I don't want to write any more of it, so I'm definitely not going to
make any final call on this topic).

From brett at python.org  Tue Apr 17 17:58:26 2012
From: brett at python.org (Brett Cannon)
Date: Tue, 17 Apr 2012 11:58:26 -0400
Subject: [Python-Dev] making the import machinery explicit
In-Reply-To: <CAP1=2W5x0i3uzggpK5tK3Ve=97_-5n-zUJJx9VODA94wUN_DUQ@mail.gmail.com>
References: <CAP1=2W5x0i3uzggpK5tK3Ve=97_-5n-zUJJx9VODA94wUN_DUQ@mail.gmail.com>
Message-ID: <CAP1=2W7qjVHBXetHApHM+efbPZd6YDAPmwYteyfex0ZCHZdT4w@mail.gmail.com>

The only people to bring up worries about this thread were Eric and Nick
and they both seem fine with making stuff explicit and changing the meaning
of None in sys.path_importer_cache, so I have created
http://bugs.python.org/issue14605 and will plan on implementing the ideas
for it before Python 3.3 goes out.

On Sat, Apr 14, 2012 at 16:03, Brett Cannon <brett at python.org> wrote:

> To start off, what I am about to propose was brought up at the PyCon
> language summit and the whole room agreed with what I want to do here, so I
> honestly don't expect much of an argument (famous last words).
>
> In the "ancient" import.c days, a lot of import's stuff was hidden deep in
> the C code and in no way exposed to the user. But with importlib finishing
> PEP 302's phase 2 plans of getting imoprt to be properly refactored to use
> importers, path hooks, etc., this need no longer be the case.
>
> So what I propose to do is stop having import have any kind of implicit
> machinery. This means sys.meta_path gets a path finder that does the heavy
> lifting for import and sys.path_hooks gets a hook which provides a default
> finder. As of right now those two pieces of machinery are entirely implicit
> in importlib and can't be modified, stopped, etc.
>
> If this happens, what changes? First, more of importlib will get publicly
> exposed (e.g. the meta path finder would become public instead of private
> like it is along with everything else that is publicly exposed). Second,
> import itself technically becomes much simpler since it really then is
> about resolving module names, traversing sys.meta_path, and then handling
> fromlist w/ everything else coming from how the path finder and path hook
> work.
>
> What also changes is that sys.meta_path and sys.path_hooks cannot be
> blindly reset w/o blowing out import. I doubt anyone is even touching those
> attributes in the common case, and the few that do can easily just stop
> wiping out those two lists. If people really care we can do a warning in
> 3.3 if they are found to be empty and then fall back to old semantics, but
> I honestly don't see this being an issue as backwards-compatibility would
> just require being more careful of what you delete (which I have been
> warning people to do for years now) which is a minor code change which
> falls in line with what goes along with any new Python version.
>
> And lastly, sticking None in sys.path_importer_cache would no longer mean
> "do the implicit thing" and instead would mean the same as NullImporter
> does now (which also means import can put None into sys.path_importer_cache
> instead of NullImporter): no finder is available for an entry on sys.path
> when None is found. Once again, I don't see anyone explicitly sticking None
> into sys.path_importer_cache, and if they are they can easily stick what
> will be the newly exposed finder in there instead. The more common case
> would be people wiping out all entries of NullImporter so as to have a new
> sys.path_hook entry take effect. That code would instead need to clear out
> None on top of NullImporter as well (in Python 3.2 and earlier this would
> just be a performance loss, not a semantic change). So this too could
> change in Python 3.3 as long as people update their code like they do with
> any other new version of Python.
>
> In summary, I want no more magic "behind the curtain" for Python 3.3 and
> import: sys.meta_path and sys.path_hooks contain what they should and if
> they are emptied then imports will fail and None in sys.path_importer_cache
> means "no finder" instead of "use magical, implicit stuff".
>

From brett at python.org  Tue Apr 17 17:59:23 2012
From: brett at python.org (Brett Cannon)
Date: Tue, 17 Apr 2012 11:59:23 -0400
Subject: [Python-Dev] Require loaders set __package__ and __loader__
In-Reply-To: <CAP1=2W4oX6qM78cRto9bqTVfy29CqVvqnTgvvOXSjBiPteQEeA@mail.gmail.com>
References: <CAP1=2W4oX6qM78cRto9bqTVfy29CqVvqnTgvvOXSjBiPteQEeA@mail.gmail.com>
Message-ID: <CAP1=2W6-b62Uuk8qrp=Ba_FqQ3Z2vK8+9RNODL4ETzAY8HdN3w@mail.gmail.com>

Does anyone other than Eric have something to say on this proposal? The
discussion obviously went off on a tangent before I saw a clear consensus
that what I was proposing was fine with people.

On Sat, Apr 14, 2012 at 16:56, Brett Cannon <brett at python.org> wrote:

> An open issue in PEP 302 is whether to require __loader__ attributes on
> modules. The claimed worry is memory consumption, but considering importlib
> and zipimport are already doing this that seems like a red herring.
> Requiring it, though, opens the door to people relying on its existence and
> thus starting to do things like loading assets with
> ``__loader__.get_data(path_to_internal_package_file)`` which allows code to
> not care how modules are stored (e.g. zip file, sqlite database, etc.).
>
> What I would like to do is update the PEP to state that loaders are
> expected to set __loader__. Now importlib will get updated to do that
> implicitly so external code can expect it post-import, but requiring
> loaders to set it would mean that code executed during import can rely on
> it as well.
>
> As for __package__, PEP 366 states that modules should set it but it isn't
> referenced by PEP 302. What I want to do is add a reference and make it
> required like __loader__. Importlib already sets it implicitly post-import,
> but once again it would be nice to do this pre-import.
>
> To help facilitate both new requirements, I would update the
> importlib.util.module_for_loader decorator to set both on a module that
> doesn't have them before passing the module down to the decorated method.
> That way people already using the decorator don't have to worry about
> anything and it is one less detail to have to worry about. I would also
> update the docs on importlib.util.set_package and importlib.util.set_loader
> to suggest people use importlib.util.module_for_loader and only use the
> other two decorators for backwards-compatibility.
>
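
For illustration, a skeleton loader under this proposal might look like the
following (made-up names; it assumes module_for_loader sets __loader__ and
__package__ before the decorated method runs, as proposed above):

    from importlib import util

    class StringLoader:
        """Toy loader that executes modules from an in-memory mapping."""

        def __init__(self, sources):
            self._sources = sources      # {module name: source string}

        @util.module_for_loader
        def load_module(self, module):
            # Under the proposal, module.__loader__ and module.__package__
            # have already been filled in by the decorator at this point.
            source = self._sources[module.__name__]
            exec(source, module.__dict__)
            return module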

From solipsis at pitrou.net  Tue Apr 17 19:39:01 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Tue, 17 Apr 2012 19:39:01 +0200
Subject: [Python-Dev] importlib is now bootstrapped (and what that means)
References: <CAP1=2W5A080aEOd9AYBD9gsTO74jbDh+uU8bha2ozJore1NSvA@mail.gmail.com>
	<jmcjln$p41$1@dough.gmane.org> <4F8C27BB.1010703@v.loewis.de>
	<CAP1=2W5XidMXksfXadKWH6z68dQca0HqPNHuAOtmh6vu9ZdYcg@mail.gmail.com>
	<20120416161516.CAC2D2509BC@webabinitio.net>
	<jmi8tt$10f$1@dough.gmane.org> <20120417022728.2e492740@pitrou.net>
	<CAP1=2W4F=RA8=C6ufgX6k6V2abkytLA=mSRx41hp9j0pJpDOyw@mail.gmail.com>
	<20120417115201.42992338@pitrou.net> <4F8D4952.603@trueblade.com>
	<CAP1=2W7YCDrNc7EXJT4Sh7L4rmr9SRLpB5vHdL8dgrs1_Nwj2A@mail.gmail.com>
Message-ID: <20120417193901.08687fb0@pitrou.net>

On Tue, 17 Apr 2012 11:41:32 -0400
Brett Cannon <brett at python.org> wrote:
> 
> Actually Cython would help with a subtle maintenance burden of maintaining
> *any* C code for import. Right now,
> Python/import.c:PyImport_ImportModuleLevelObject() is an accelerated C
> version of importlib.__import__() through checking sys.modules, after which
> it calls into the Python code. Cython would do away with that C
> acceleration code (which I have already had to modify once and Antoine
> found a couple refleaks in).

Would it? That's assuming Cython would be smart enough to do the
required optimizations.

Regards

Antoine.



From pjenvey at underboss.org  Tue Apr 17 19:45:24 2012
From: pjenvey at underboss.org (Philip Jenvey)
Date: Tue, 17 Apr 2012 10:45:24 -0700
Subject: [Python-Dev] making the import machinery explicit
In-Reply-To: <CAP1=2W5x0i3uzggpK5tK3Ve=97_-5n-zUJJx9VODA94wUN_DUQ@mail.gmail.com>
References: <CAP1=2W5x0i3uzggpK5tK3Ve=97_-5n-zUJJx9VODA94wUN_DUQ@mail.gmail.com>
Message-ID: <D60A5DA6-B21F-49B2-9AC6-9FE92EF232B4@underboss.org>


On Apr 14, 2012, at 1:03 PM, Brett Cannon wrote:

> And lastly, sticking None in sys.path_importer_cache would no longer mean "do the implicit thing" and instead would mean the same as NullImporter does now (which also means import can put None into sys.path_importer_cache instead of NullImporter): no finder is available for an entry on sys.path when None is found.

Isn't it more explicit to cache the NullImporter instead of None?

--
Philip Jenvey


From brett at python.org  Tue Apr 17 20:01:18 2012
From: brett at python.org (Brett Cannon)
Date: Tue, 17 Apr 2012 14:01:18 -0400
Subject: [Python-Dev] making the import machinery explicit
In-Reply-To: <D60A5DA6-B21F-49B2-9AC6-9FE92EF232B4@underboss.org>
References: <CAP1=2W5x0i3uzggpK5tK3Ve=97_-5n-zUJJx9VODA94wUN_DUQ@mail.gmail.com>
	<D60A5DA6-B21F-49B2-9AC6-9FE92EF232B4@underboss.org>
Message-ID: <CAP1=2W7JDguyR6NM+1Byw5qc3Riv2WUGOFfGuAmLdZECVAiawg@mail.gmail.com>

On Tue, Apr 17, 2012 at 13:45, Philip Jenvey <pjenvey at underboss.org> wrote:

>
> On Apr 14, 2012, at 1:03 PM, Brett Cannon wrote:
>
> > And lastly, sticking None in sys.path_importer_cache would no longer
> mean "do the implicit thing" and instead would mean the same as
> NullImporter does now (which also means import can put None into
> sys.path_importer_cache instead of NullImporter): no finder is available
> for an entry on sys.path when None is found.
>
> Isn't it more explicit to cache the NullImporter instead of None?
>

I disagree. NullImporter is just another finder that happens to always
fail. None is explicitly not a finder and thus obviously not going to do
anything. Isn't it clearer to say ``sys.path_importer_cache[path] is
None`` than ``isinstance(sys.path_importer_cache[path],
imp.NullImporter)``? We have None to represent that something is nothing,
which is exactly what I want to convey; None in sys.path_importer_cache
means there is no finder for that path.
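
Concretely, the cache-clearing pattern mentioned in the original proposal
would become something like this sketch (illustrative only, assuming the
proposed None semantics):

    import sys

    # After adding a new entry to sys.path_hooks, drop the negative cache
    # entries so the new hook gets a chance to handle those paths.
    for path, finder in list(sys.path_importer_cache.items()):
        if finder is None:
            del sys.path_importer_cache[path]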

From kristjan at ccpgames.com  Tue Apr 17 19:22:57 2012
From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=)
Date: Tue, 17 Apr 2012 17:22:57 +0000
Subject: [Python-Dev] issue 9141, finalizers and gc module
In-Reply-To: <20120417164536.Horde.8HL-ZFNNcXdPjYIQOrmmTmA@webmail.df.eu>
References: <EFE3877620384242A686D52278B7CCD339509E@RKV-IT-EXCH104.ccp.ad.local>
	<20120417164536.Horde.8HL-ZFNNcXdPjYIQOrmmTmA@webmail.df.eu>
Message-ID: <EFE3877620384242A686D52278B7CCD33958A7@RKV-IT-EXCH104.ccp.ad.local>



> -----Original Message-----
> 
> No, that's not the case at all. As Antoine explains in the issue, there are
> plenty of ways in which Python code can be run when breaking a cycle. Not
> only weakrefs, but also objects released as a consequence of tp_clear which
> weren't *in* the cycle (but hung from it).
I see, that makes sense.  The rule, then, is that we cannot delete objects with finalizers that can reach other garbage, simply because the finalizer may find those objects in an unexpected (cleared) state and thus cause weird errors.
(Weakrefs are a special case, apparently dealt with separately, and the callback cannot refer back to the referent.)
This reasoning belongs in gcmodule.c, I think.
> 
> > So, I ask you:  What is allowed during tp_clear()?  Is this a hard
> > rule?  What is the reason?
> 
> We are all consenting adults. Everything is allowed - you just have to live with
> the consequences.

Well, we specifically decided that the __del__ methods of objects that are part of a cycle cannot be run.
The same reasoning was applied to generators, if they are in a certain state.
What makes iobase so special that its 'close' method can be run even if it is part of a cycle?
Why not allow it for all objects, then?

At the very least, I think this behaviour (this exception for iobase) merits being explicitly documented.

Kristján


From solipsis at pitrou.net  Tue Apr 17 20:30:55 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Tue, 17 Apr 2012 20:30:55 +0200
Subject: [Python-Dev] issue 9141, finalizers and gc module
References: <EFE3877620384242A686D52278B7CCD339509E@RKV-IT-EXCH104.ccp.ad.local>
	<20120417164536.Horde.8HL-ZFNNcXdPjYIQOrmmTmA@webmail.df.eu>
	<EFE3877620384242A686D52278B7CCD33958A7@RKV-IT-EXCH104.ccp.ad.local>
Message-ID: <20120417203055.689dd7ad@pitrou.net>

On Tue, 17 Apr 2012 17:22:57 +0000
Kristján Valur Jónsson <kristjan at ccpgames.com> wrote:
> > 
> > We are all consenting adults. Everything is allowed - you just have to live with
> > the consequences.
> 
> Well, we specifically decided that objects with __del__ methods that are part of a cycle cannot be run.
> The same reasoning was applied to generators, if they are in a certain state.
> What makes iobase so special that its 'close' method can be run even if it is part of a cycle?

The reason is that making file objects uncollectable when they are part
of a reference cycle would be a PITA and a serious regression for many
applications, I think.

> Why not allow it for all objects, then?

I'm not the author of the original GC design. Perhaps it was
deliberately conservative at the time? I think PyPy has a more tolerant
solution for finalizers in reference cycles, perhaps they can explain it
here.

Regards

Antoine.



From brett at python.org  Tue Apr 17 21:52:43 2012
From: brett at python.org (Brett Cannon)
Date: Tue, 17 Apr 2012 15:52:43 -0400
Subject: [Python-Dev] importlib is now bootstrapped (and what that means)
In-Reply-To: <20120417193901.08687fb0@pitrou.net>
References: <CAP1=2W5A080aEOd9AYBD9gsTO74jbDh+uU8bha2ozJore1NSvA@mail.gmail.com>
	<jmcjln$p41$1@dough.gmane.org> <4F8C27BB.1010703@v.loewis.de>
	<CAP1=2W5XidMXksfXadKWH6z68dQca0HqPNHuAOtmh6vu9ZdYcg@mail.gmail.com>
	<20120416161516.CAC2D2509BC@webabinitio.net>
	<jmi8tt$10f$1@dough.gmane.org> <20120417022728.2e492740@pitrou.net>
	<CAP1=2W4F=RA8=C6ufgX6k6V2abkytLA=mSRx41hp9j0pJpDOyw@mail.gmail.com>
	<20120417115201.42992338@pitrou.net> <4F8D4952.603@trueblade.com>
	<CAP1=2W7YCDrNc7EXJT4Sh7L4rmr9SRLpB5vHdL8dgrs1_Nwj2A@mail.gmail.com>
	<20120417193901.08687fb0@pitrou.net>
Message-ID: <CAP1=2W6kDDhdKzG5c0Du9nzbJ+ymar72=_gbck-q5FLFHOvzpg@mail.gmail.com>

On Tue, Apr 17, 2012 at 13:39, Antoine Pitrou <solipsis at pitrou.net> wrote:

> On Tue, 17 Apr 2012 11:41:32 -0400
> Brett Cannon <brett at python.org> wrote:
> >
> > Actually Cython would help with a subtle maintenance burden of
> maintaining
> > *any* C code for import. Right now,
> > Python/import.c:PyImport_ImportModuleLevelObject() is an accelerated C
> > version of importlib.__import__() through checking sys.modules, after
> which
> > it calls into the Python code. Cython would do away with that C
> > acceleration code (which I have already had to modify once and Antoine
> > found a couple refleaks in).
>
> Would it? That's assuming Cython would be smart enough to do the
> required optimizations.
>

Yes, it is an assumption I'm making. I also assume we wouldn't make a
change like this w/o taking the time to run importlib through Cython and
seeing how the performance numbers come out.

-Brett


>
> Regards
>
> Antoine.
>
>

From fijall at gmail.com  Tue Apr 17 23:29:19 2012
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Tue, 17 Apr 2012 23:29:19 +0200
Subject: [Python-Dev] issue 9141, finalizers and gc module
In-Reply-To: <20120417203055.689dd7ad@pitrou.net>
References: <EFE3877620384242A686D52278B7CCD339509E@RKV-IT-EXCH104.ccp.ad.local>
	<20120417164536.Horde.8HL-ZFNNcXdPjYIQOrmmTmA@webmail.df.eu>
	<EFE3877620384242A686D52278B7CCD33958A7@RKV-IT-EXCH104.ccp.ad.local>
	<20120417203055.689dd7ad@pitrou.net>
Message-ID: <CAK5idxSf=oZNyGjHnCeUyVdBdx8Jb4B4rPjs=BzkcQG=2ROCdg@mail.gmail.com>

On Tue, Apr 17, 2012 at 8:30 PM, Antoine Pitrou <solipsis at pitrou.net> wrote:

> On Tue, 17 Apr 2012 17:22:57 +0000
> Kristján Valur Jónsson <kristjan at ccpgames.com> wrote:
> > >
> > > We are all consenting adults. Everything is allowed - you just have to
> live with
> > > the consequences.
> >
> > Well, we specifically decided that objects with __del__ methods that are
> part of a cycle cannot be run.
> > The same reasoning was applied to generators, if they are in a certain
> state.
> > What makes iobase so special that its 'close' method can be run even if
> it is part of a cycle?
>
> The reason is that making file objects uncollectable when they are part
> of a reference cycle would be a PITA and a serious regression for many
> applications, I think.
>
> > Why not allow it for all objects, then?
>
> I'm not the author of the original GC design. Perhaps it was
> deliberately conservative at the time? I think PyPy has a more tolerant
> solution for finalizers in reference cycles, perhaps they can explain it
> here.
>
> Regards
>
> Antoine.


PyPy breaks cycles randomly. I think a pretty comprehensive description of
what happens is here:

http://morepypy.blogspot.com/2008/02/python-finalizers-semantics-part-1.html
http://morepypy.blogspot.com/2008/02/python-finalizers-semantics-part-2.html

Cheers,
fijal

From andrew.svetlov at gmail.com  Wed Apr 18 00:38:50 2012
From: andrew.svetlov at gmail.com (Andrew Svetlov)
Date: Wed, 18 Apr 2012 01:38:50 +0300
Subject: [Python-Dev] Require loaders set __package__ and __loader__
In-Reply-To: <CAP1=2W6-b62Uuk8qrp=Ba_FqQ3Z2vK8+9RNODL4ETzAY8HdN3w@mail.gmail.com>
References: <CAP1=2W4oX6qM78cRto9bqTVfy29CqVvqnTgvvOXSjBiPteQEeA@mail.gmail.com>
	<CAP1=2W6-b62Uuk8qrp=Ba_FqQ3Z2vK8+9RNODL4ETzAY8HdN3w@mail.gmail.com>
Message-ID: <CAL3CFcV8XxN_sPU7507D=D1Q2tCfVWVkzjerXqfUF4o=ijXfGg@mail.gmail.com>

+1 for initial proposition.

On Tue, Apr 17, 2012 at 6:59 PM, Brett Cannon <brett at python.org> wrote:
> Anyone other than Eric have something to say on this proposal? Obviously the
> discussion went tangential before I saw a clear consensus that what I was
> proposing was fine with people.
>
>
> On Sat, Apr 14, 2012 at 16:56, Brett Cannon <brett at python.org> wrote:
>>
>> An open issue in PEP 302 is whether to require __loader__ attributes on
>> modules. The claimed worry is memory consumption, but considering importlib
>> and zipimport are already doing this that seems like a red herring.
>> Requiring it, though, opens the door to people relying on its existence and
>> thus starting to do things like loading assets with
>> ``__loader__.get_data(path_to_internal_package_file)`` which allows code to
>> not care how modules are stored (e.g. zip file, sqlite database, etc.).
>>
>> What I would like to do is update the PEP to state that loaders are
>> expected to set __loader__. Now importlib will get updated to do that
>> implicitly so external code can expect it post-import, but requiring loaders
>> to set it would mean that code executed during import can rely on it as
>> well.
>>
>> As for __package__, PEP 366 states that modules should set it but it isn't
>> referenced by PEP 302. What I want to do is add a reference and make it
>> required like __loader__. Importlib already sets it implicitly post-import,
>> but once again it would be nice to do this pre-import.
>>
>> To help facilitate both new requirements, I would update the
>> importlib.util.module_for_loader decorator to set both on a module that
>> doesn't have them before passing the module down to the decorated method.
>> That way people already using the decorator don't have to worry about
>> anything and it is one less detail to have to worry about. I would also
>> update the docs on importlib.util.set_package and importlib.util.set_loader
>> to suggest people use importlib.util.module_for_loader and only use the
>> other two decorators for backwards-compatibility.
>
>
>



-- 
Thanks,
Andrew Svetlov

From tjreedy at udel.edu  Wed Apr 18 00:51:22 2012
From: tjreedy at udel.edu (Terry Reedy)
Date: Tue, 17 Apr 2012 18:51:22 -0400
Subject: [Python-Dev] making the import machinery explicit
In-Reply-To: <CAP1=2W7JDguyR6NM+1Byw5qc3Riv2WUGOFfGuAmLdZECVAiawg@mail.gmail.com>
References: <CAP1=2W5x0i3uzggpK5tK3Ve=97_-5n-zUJJx9VODA94wUN_DUQ@mail.gmail.com>
	<D60A5DA6-B21F-49B2-9AC6-9FE92EF232B4@underboss.org>
	<CAP1=2W7JDguyR6NM+1Byw5qc3Riv2WUGOFfGuAmLdZECVAiawg@mail.gmail.com>
Message-ID: <jmks5e$bor$1@dough.gmane.org>

On 4/17/2012 2:01 PM, Brett Cannon wrote:
> Isn't it clearer to say
> ``sys.path_importer_cache[path] is None`` than
> ``isinstance(sys.path_importer_cache[path], imp.NullImporter)``?

Yes. Great work. Thanks for helping with the Idle breakage.

-- 
Terry Jan Reedy


From ncoghlan at gmail.com  Wed Apr 18 00:58:03 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 18 Apr 2012 08:58:03 +1000
Subject: [Python-Dev] Require loaders set __package__ and __loader__
In-Reply-To: <CAP1=2W6-b62Uuk8qrp=Ba_FqQ3Z2vK8+9RNODL4ETzAY8HdN3w@mail.gmail.com>
References: <CAP1=2W4oX6qM78cRto9bqTVfy29CqVvqnTgvvOXSjBiPteQEeA@mail.gmail.com>
	<CAP1=2W6-b62Uuk8qrp=Ba_FqQ3Z2vK8+9RNODL4ETzAY8HdN3w@mail.gmail.com>
Message-ID: <CADiSq7cDRRJvKtt=ASZ1t=W_BQLg5S4GyztP3dTDmvo_7h5OBQ@mail.gmail.com>

+1 here. Previously, it wasn't a reasonable requirement, since CPython
itself didn't comply with it.

--
Sent from my phone, thus the relative brevity :)

From victor.stinner at gmail.com  Wed Apr 18 01:25:03 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Wed, 18 Apr 2012 01:25:03 +0200
Subject: [Python-Dev] PEP 418: Add monotonic time,
 performance counter and process time functions
In-Reply-To: <20120417123545.3DF842509E8@webabinitio.net>
References: <CAMpsgwbZThWMRmXxLNisXAGTZJ1poKGBa3z_EzZPDQdjJwpA5g@mail.gmail.com>
	<20120417044821.GA1979@cskk.homeip.net>
	<20120417123545.3DF842509E8@webabinitio.net>
Message-ID: <CAMpsgwa95_cU_WTDgpQo11rFyM2qyfymsgY-kzUr-8vF8UTHFw@mail.gmail.com>

> I think what the user cares about is "what is the smallest tick that
> this clock result will faithfully represent?".  If the number of bits
> returned is larger than the clock accuracy, you want the clock accuracy.
> If the number of bits returned is smaller than the clock accuracy,
> you want the number of bits.
>
> (Yes, I'm using accuracy in a slightly different sense here...I think
> we don't have the right words for this.)
>
> To use other words, what the users cares about are the error bars on
> the returned result.

OK, OK: resolution / accuracy / precision are confusing (or at least
not well-known concepts). So it's better to keep the name:
time.perf_counter() :-)

Victor

From cs at zip.com.au  Wed Apr 18 01:34:14 2012
From: cs at zip.com.au (Cameron Simpson)
Date: Wed, 18 Apr 2012 09:34:14 +1000
Subject: [Python-Dev] PEP 418: Add monotonic time,
 performance counter and process time functions
In-Reply-To: <CAPTjJmq8O_yNvJDK56S4W3=an9RQHdpES-HuiiieX_JLt4s8Yw@mail.gmail.com>
References: <CAPTjJmq8O_yNvJDK56S4W3=an9RQHdpES-HuiiieX_JLt4s8Yw@mail.gmail.com>
Message-ID: <20120417233414.GA9417@cskk.homeip.net>

On 18Apr2012 00:18, Chris Angelico <rosuav at gmail.com> wrote:
| On Tue, Apr 17, 2012 at 2:48 PM, Cameron Simpson <cs at zip.com.au> wrote:
| > On 16Apr2012 01:25, Victor Stinner <victor.stinner at gmail.com> wrote:
| > | I suppose that most people don't care that "resolution" and
| > | "precision" are different things.
| >
| > If we're using the same definitions we discussed offline, where
| >
| >  - resolution is the units the clock call (underneath) works in (for
| >    example, nanoseconds)
| >
| >  - precision is the effective precision of the results, for example
| >    milliseconds
| >
| > I'd say people would care if they knew, and mostly care about
| > "precision".
| 
| Meaning that resolution is a matter of format and API, not of clock.
| If you take a C clock API that returns a value in nanoseconds and
| return it as a Python float, you've changed the resolution. I don't
| think resolution matters much, beyond that (for example) nanosecond
| resolution allows a clock to be subsequently upgraded as far as
| nanosecond precision without breaking existing code, even if currently
| it's only providing microsecond precision.

Yes; as stated, resolution is largely irrelevant to the user; it really
only places an upper bound on the precision. But it _is_ easy to know
from the underlying API documentation, so it is easy to annotate the
clocks with this metadata.

Annoyingly, the more useful precision value is often harder to know.
-- 
Cameron Simpson <cs at zip.com.au> DoD#743
http://www.cskk.ezoshosting.com/cs/

If anyone disagrees with anything I say, I am quite prepared not only
to retract it, but also to deny under oath that I ever said it.
        - Tom Lehrer

From cs at zip.com.au  Wed Apr 18 01:36:30 2012
From: cs at zip.com.au (Cameron Simpson)
Date: Wed, 18 Apr 2012 09:36:30 +1000
Subject: [Python-Dev] PEP 418: Add monotonic time,
 performance counter and process time functions
In-Reply-To: <20120417123545.3DF842509E8@webabinitio.net>
References: <20120417123545.3DF842509E8@webabinitio.net>
Message-ID: <20120417233630.GA9694@cskk.homeip.net>

On 17Apr2012 08:35, R. David Murray <rdmurray at bitdance.com> wrote:
| On Tue, 17 Apr 2012 14:48:22 +1000, Cameron Simpson <cs at zip.com.au> wrote:
| > On 16Apr2012 01:25, Victor Stinner <victor.stinner at gmail.com> wrote:
| > | I suppose that most people don't care that "resolution" and
| > | "precision" are different things.
| > 
| > If we're using the same definitions we discussed offline, where
| > 
| >   - resolution is the units the clock call (underneath) works in (for
| >     example, nanoseconds)
| > 
| >   - precision is the effective precision of the results, for example
| >     milliseconds
| > 
| > I'd say people would care if they knew, and mostly care about
| > "precision".
| 
| I think what the user cares about is "what is the smallest tick that
| this clock result will faithfully represent?".

That is what "precision" is supposed to mean above. I suspect we're all in
agreement here about its purpose.

| To use other words, what the users cares about are the error bars on
| the returned result.

Yes. And your discussion about the hardware clock exceeding the API resolution
means we mean "the error bars as they escape from the OS API".

I still think we're all in agreement about the meaning here.
-- 
Cameron Simpson <cs at zip.com.au> DoD#743
http://www.cskk.ezoshosting.com/cs/

Often the good of the many is worth more than the good of the few. Saying
"if they have saved one life then they are worthwhile" places the good of the
few above the good of the many and past a certain threshold that's a
reprehensible attitude, for which I have utter contempt.

From victor.stinner at gmail.com  Wed Apr 18 02:06:39 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Wed, 18 Apr 2012 02:06:39 +0200
Subject: [Python-Dev] [RFC] PEP 418: Add monotonic time,
 performance counter and process time functions
In-Reply-To: <CAMpsgwZxZiNcqTznROd6MiPnkhmy1XtKzVPCozWp+sSiZ8dUTg@mail.gmail.com>
References: <CAMpsgwZxZiNcqTznROd6MiPnkhmy1XtKzVPCozWp+sSiZ8dUTg@mail.gmail.com>
Message-ID: <CAMpsgwZ0b5YARrPwUi-kCm7R5NDN-qx1CiTJ8oivHp8CKfPJ8w@mail.gmail.com>

> Here is a simplified version of the first draft of the PEP 418. The
> full version can be read online.
> http://www.python.org/dev/peps/pep-0418/
>
> The implementation of the PEP can be found in this issue:
> http://bugs.python.org/issue14428

The PEP is now fully ready: I just finished the implementation.

It looks like the people who complained about older versions of the PEP
have no new complaints. Am I wrong? Does everybody agree with PEP 418?

I created the http://hg.python.org/features/pep418/ repository for the
implementation. I tested it on Linux 3.3, FreeBSD 8.2, OpenBSD 5.0 and
Windows 7. The implementation is now awaiting your review!

There is also the toy implementation in pure Python for Python < 3.3:
https://bitbucket.org/haypo/misc/src/tip/python/pep418.py
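
The spirit of such a fallback is roughly this (a rough sketch only, not
pep418.py and not the PEP implementation): pick the best widely available
clock and clamp it so it never appears to go backwards within the process.

    import sys
    import time

    _clock = time.clock if sys.platform == "win32" else time.time
    _last = None

    def pseudo_monotonic():
        global _last
        now = _clock()
        if _last is not None and now < _last:
            now = _last          # refuse to go backwards
        _last = now
        return now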

Antoine asked "Is there a designated dictator for this PEP?". Nobody
answered. Maybe Guido van Rossum?

Victor

From guido at python.org  Wed Apr 18 02:50:50 2012
From: guido at python.org (Guido van Rossum)
Date: Tue, 17 Apr 2012 17:50:50 -0700
Subject: [Python-Dev] [RFC] PEP 418: Add monotonic time,
 performance counter and process time functions
In-Reply-To: <CAMpsgwZ0b5YARrPwUi-kCm7R5NDN-qx1CiTJ8oivHp8CKfPJ8w@mail.gmail.com>
References: <CAMpsgwZxZiNcqTznROd6MiPnkhmy1XtKzVPCozWp+sSiZ8dUTg@mail.gmail.com>
	<CAMpsgwZ0b5YARrPwUi-kCm7R5NDN-qx1CiTJ8oivHp8CKfPJ8w@mail.gmail.com>
Message-ID: <CAP7+vJK9kHN7bG-HXN5g9sUGnJ2D3y4JoZbvrkdntjvkkCrjjw@mail.gmail.com>

I'll do it. Give me a few days (tomorrow is fully booked with horrible
meetings).

On Tue, Apr 17, 2012 at 5:06 PM, Victor Stinner
<victor.stinner at gmail.com> wrote:
>> Here is a simplified version of the first draft of the PEP 418. The
>> full version can be read online.
>> http://www.python.org/dev/peps/pep-0418/
>>
>> The implementation of the PEP can be found in this issue:
>> http://bugs.python.org/issue14428
>
> The PEP is now fully ready: I just finished the implementation.
>
> It looks like people, who complained on older versions of the PEP,
> don't have new complain. Am I wrong? Everybody agree with the PEP 418?
>
> I created http://hg.python.org/features/pep418/ repository for the
> implementation. I tested it on Linux 3.3, FreeBSD 8.2, OpenBSD 5.0 and
> Windows Seven. The implementation is now waiting your review!
>
> There is also the toy implementation in pure Python for Python < 3.3:
> https://bitbucket.org/haypo/misc/src/tip/python/pep418.py
>
> Antoine asked "Is there a designated dictator for this PEP?". Nobody
> answered. Maybe Guido van Rossum?
>
> Victor



-- 
--Guido van Rossum (python.org/~guido)

From ezio.melotti at gmail.com  Wed Apr 18 05:19:50 2012
From: ezio.melotti at gmail.com (Ezio Melotti)
Date: Tue, 17 Apr 2012 21:19:50 -0600
Subject: [Python-Dev] [Python-checkins] cpython (2.7): Clean-up the
	SQLite introduction.
In-Reply-To: <E1SKKwI-0002gD-Lc@dinsdale.python.org>
References: <E1SKKwI-0002gD-Lc@dinsdale.python.org>
Message-ID: <CACBhJdGuNzq2wj_vZqLcW-yTysHvUATjYMcxHxsydEDtJDkvcQ@mail.gmail.com>

Hi,

On Tue, Apr 17, 2012 at 8:48 PM, raymond.hettinger
<python-checkins at python.org> wrote:
> http://hg.python.org/cpython/rev/d229032dc213
> changeset:   76387:d229032dc213
> branch:      2.7
> user:        Raymond Hettinger <python at rcn.com>
> date:        Tue Apr 17 22:48:06 2012 -0400
> summary:
>   Clean-up the SQLite introduction.
>
> files:
>   Doc/library/sqlite3.rst |  52 ++++++++++++++--------------
>   1 files changed, 26 insertions(+), 26 deletions(-)
>
>
> diff --git a/Doc/library/sqlite3.rst b/Doc/library/sqlite3.rst
> --- a/Doc/library/sqlite3.rst
> +++ b/Doc/library/sqlite3.rst
> @@ -23,7 +23,7 @@
> :file:`/tmp/example` file::
>

The filename here should be updated too.

>    import sqlite3
> -   conn = sqlite3.connect('/tmp/example')
> +   conn = sqlite3.connect('example.db')

Best Regards,
Ezio Melotti

From stephen at xemacs.org  Wed Apr 18 08:45:31 2012
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Wed, 18 Apr 2012 15:45:31 +0900
Subject: [Python-Dev] PEP 418: Add monotonic time,
 performance counter and process time functions
In-Reply-To: <CAMpsgwa95_cU_WTDgpQo11rFyM2qyfymsgY-kzUr-8vF8UTHFw@mail.gmail.com>
References: <CAMpsgwbZThWMRmXxLNisXAGTZJ1poKGBa3z_EzZPDQdjJwpA5g@mail.gmail.com>
	<20120417044821.GA1979@cskk.homeip.net>
	<20120417123545.3DF842509E8@webabinitio.net>
	<CAMpsgwa95_cU_WTDgpQo11rFyM2qyfymsgY-kzUr-8vF8UTHFw@mail.gmail.com>
Message-ID: <CAL_0O1_LRTNjPzpWdyGYKyCAKEGwxNJ4+VHFvgc7vTyptRmJ5A@mail.gmail.com>

On Wed, Apr 18, 2012 at 8:25 AM, Victor Stinner
<victor.stinner at gmail.com> wrote:

> Ok ok, resolution / accuracy / precision are confusing (or at least
> not well known concepts).

Maybe not to us, but in fields like astronomy and mechanical
engineering there are commonly accepted definitions:

Resolution: the smallest difference between two physical values that
results in a different measurement by a given instrument.

Precision: the amount of deviation among measurements of the same
physical value by a single instrument.

Accuracy: the amount of deviation of measurements by a given
instrument from true values.

As usual there are issues of average vs. worst case, different
resolution/precision/accuracy over the instrument's range, etc. which
need to be considered in reporting values for these properties.

A typical application to clocks would be the duration of one tick.  If
the clock ticks once per second and time values are reported in
nanoseconds, the /resolution/ is *1 billion* nanoseconds, not *1*
nanosecond.    /Precision/ corresponds to the standard deviation of
tick durations.  It is not necessarily the case that a precise
instrument will be accurate; if every tick is *exactly* 59/60 seconds,
the clock is infinitely precise but horribly inaccurate for most
purposes (it loses an hour every three days, and you'll miss your
favorite TV show!)  And two /accurate/ clocks should report the same
times and the same durations when measuring the same things.

I don't really care if Python decides to use idiosyncratic
definitions, but the above are easy enough to find (eg
http://en.wikipedia.org/wiki/Accuracy_and_precision).

From martin at v.loewis.de  Wed Apr 18 09:11:15 2012
From: martin at v.loewis.de (martin at v.loewis.de)
Date: Wed, 18 Apr 2012 09:11:15 +0200
Subject: [Python-Dev] issue 9141, finalizers and gc module
In-Reply-To: <EFE3877620384242A686D52278B7CCD33958A7@RKV-IT-EXCH104.ccp.ad.local>
References: <EFE3877620384242A686D52278B7CCD339509E@RKV-IT-EXCH104.ccp.ad.local>
	<20120417164536.Horde.8HL-ZFNNcXdPjYIQOrmmTmA@webmail.df.eu>
	<EFE3877620384242A686D52278B7CCD33958A7@RKV-IT-EXCH104.ccp.ad.local>
Message-ID: <20120418091115.Horde.O35uQ9jz9kRPjmkTkHhisvA@webmail.df.eu>

> Well, we specifically decided that objects with __del__ methods that  
> are part of a cycle cannot be run.
> The same reasoning was applied to generators, if they are in a certain state.
> What makes iobase so special that its 'close' method can be run even  
> if it is part of a cycle?

It's a hack, and I find it well-documented in iobase.c. It explains what
tricks it has to go through to still invoke methods from tp_del.

Invoking methods in tp_clear I find fairly harmless, in comparison. My only
concern is that errors are silently ignored. However, I don't think this
matters in practice, since io objects typically are not part of cycles,
anyway.
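
Schematically, the opt-in looks something like the sketch below. This is
not the actual iobase.c code; the type and method name are made up, it
just shows the shape of calling a method from tp_clear and swallowing
any error:

    #include <Python.h>

    typedef struct {
        PyObject_HEAD
        PyObject *dict;
    } MyObject;          /* hypothetical type, for illustration only */

    static int
    my_clear(PyObject *self)
    {
        /* Invoke a Python-level cleanup method from tp_clear and
           silently ignore whatever error it might raise. */
        PyObject *res = PyObject_CallMethod(self, "close", NULL);
        if (res == NULL)
            PyErr_Clear();          /* nowhere to report the error */
        else
            Py_DECREF(res);
        Py_CLEAR(((MyObject *)self)->dict);
        return 0;
    }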

> Why not allow it for all objects, then?

It's *allowed* for all objects. Why do you think it is not?

It must be opt-in, though. In the IO case, there are certain drawbacks;
not being able to report errors is the most prominent one. Any other object
implementation will have to evaluate whether to follow the iobase approach,
or implement a regular __del__. I personally consider the resurrection in
tp_del a much more serious problem, though, as this goes explicitly against
the design of the release procedure. For iobase, it's ok since it can evolve
along with the rest of the code base. Any third-party author would have to
accept that such an approach can break from one Python release to the next.

I wonder why Python couldn't promise to always invoke tp_clear on GC
objects; ISTM that this would remove the need for resurrection in tp_del.

> At the very least, I think this behaviour (this exception for  
> iobase) merits being explicitly documented.

I find all of this well-documented in iobase.c. If you think anything
else needs to be said, please submit patches.

Regards,
Martin



From hrvoje.niksic at avl.com  Wed Apr 18 09:31:51 2012
From: hrvoje.niksic at avl.com (Hrvoje Niksic)
Date: Wed, 18 Apr 2012 09:31:51 +0200
Subject: [Python-Dev] Suggested addition to PEP 8 for context managers
In-Reply-To: <CAP7+vJKJd0vQH9Jdx_5eWzu3=swb9uHBAqEeJtrsktEeLWSs1w@mail.gmail.com>
References: <8402DB76-DF4F-41BD-BD9B-3689AF3D6159@gmail.com>	<20120416113037.66e4da6f@limelight.wooz.org>	<CANG+ZTNCwkHapAxZ87L-fcbPfi8FQwK=kSY7OCEApVoHCryJeA@mail.gmail.com>	<CACac1F-YDN2jvDuTk_RX2F-7f=Q7u5QAM3NVw3R115LZ8ZccaA@mail.gmail.com>
	<CAP7+vJKJd0vQH9Jdx_5eWzu3=swb9uHBAqEeJtrsktEeLWSs1w@mail.gmail.com>
Message-ID: <4F8E6DE7.1080100@avl.com>

On 04/17/2012 04:21 PM, Guido van Rossum wrote:
> I hope that's not what it says about slices -- that was meant for dict
> displays. For slices it should be symmetrical. In this case I would
> remove the spaces around the +, but it's okay to add spaces around the
> : too. It does look odd to have an operator that binds tighter (the +)
> surrounded by spaces while the operator that binds less tight (:) is
> not.

The same oddity occurs with expressions in kwargs calls:

func(pos1, pos2, keyword=foo + bar)

I find myself wanting to add parentheses around the + to make the code
clearer.

From flub at devork.be  Wed Apr 18 10:29:24 2012
From: flub at devork.be (Floris Bruynooghe)
Date: Wed, 18 Apr 2012 09:29:24 +0100
Subject: [Python-Dev] Suggested addition to PEP 8 for context managers
In-Reply-To: <20120417113631.7fb1b543@resist.wooz.org>
References: <8402DB76-DF4F-41BD-BD9B-3689AF3D6159@gmail.com>
	<20120416113037.66e4da6f@limelight.wooz.org>
	<CANG+ZTNCwkHapAxZ87L-fcbPfi8FQwK=kSY7OCEApVoHCryJeA@mail.gmail.com>
	<4F8D1377.5020001@redhat.com>
	<20120417122502.0B9D82509E8@webabinitio.net>
	<20120417113631.7fb1b543@resist.wooz.org>
Message-ID: <CAAWk_DzUK2gRriLp3DYe1PRC8qU3N3Kkes75XSA2hMSKpUvzmQ@mail.gmail.com>

On 17 April 2012 16:36, Barry Warsaw <barry at python.org> wrote:
> On Apr 17, 2012, at 08:25 AM, R. David Murray wrote:
>
>>On Tue, 17 Apr 2012 08:53:43 +0200, Matej Cepl <mcepl at redhat.com> wrote:
>>> On 16.4.2012 18:10, Nam Nguyen wrote:
>>> > a_list[pos + 1 : -1]
>>>
>>> or other way around
>>>
>>> a_list[pos+1:-1]
>>
>>
>>That's what I always use.  No spaces inside the brackets for me :)
>>
>>If the expression gets unreadable that way, factor it out.

Ditto here.

And since this is OT by now, one of the other pep8 annoyances I
have[0] is the blanket whitespace around arithmetic operators,
including **.  To me the first just looks ugly:

>>> 1024 ** 2
>>> 1024**2

Certainly when the expressions are larger.

Regards,
Floris

-- 
Debian GNU/Linux -- The Power of Freedom
www.debian.org | www.gnu.org | www.kernel.org

From victor.stinner at gmail.com  Wed Apr 18 12:29:45 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Wed, 18 Apr 2012 12:29:45 +0200
Subject: [Python-Dev] PEP 418: Add monotonic time,
 performance counter and process time functions
In-Reply-To: <CAL_0O1_LRTNjPzpWdyGYKyCAKEGwxNJ4+VHFvgc7vTyptRmJ5A@mail.gmail.com>
References: <CAMpsgwbZThWMRmXxLNisXAGTZJ1poKGBa3z_EzZPDQdjJwpA5g@mail.gmail.com>
	<20120417044821.GA1979@cskk.homeip.net>
	<20120417123545.3DF842509E8@webabinitio.net>
	<CAMpsgwa95_cU_WTDgpQo11rFyM2qyfymsgY-kzUr-8vF8UTHFw@mail.gmail.com>
	<CAL_0O1_LRTNjPzpWdyGYKyCAKEGwxNJ4+VHFvgc7vTyptRmJ5A@mail.gmail.com>
Message-ID: <CAMpsgwZ3eAkK814R=725LUBas8PVFqdwmOw-iz-FYL6ZvJ3ukw@mail.gmail.com>

>> Ok ok, resolution / accuracy / precision are confusing (or at least
>> not well known concepts).
>
> Maybe not to us, but in fields like astronomy and mechanical
> engineering there are commonly accepted definitions:

I was just talking about the name of the time.perf_counter() function:
"perf_counter" vs "high_precision" vs "high_resolution" (or even
"highres"). For the definition of these words, see the Glossary in
the PEP.
http://www.python.org/dev/peps/pep-0418/#glossary

It already contains a link to the  Wikipedia article "Accuracy_and_precision".

I don't want to spend days on this glossary. If anyone is motivated to
write a perfect (or at least better) glossary, please do it! And send
me the diff of the pep-0418.txt file. I don't really feel qualified
(nor motivated) to write or maintain such a glossary.

Victor

From ncoghlan at gmail.com  Wed Apr 18 13:19:03 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 18 Apr 2012 21:19:03 +1000
Subject: [Python-Dev] [Python-checkins] cpython: Fix #14600. Correct
 reference handling and naming of ImportError convenience
In-Reply-To: <E1SKGOq-0003E2-3B@dinsdale.python.org>
References: <E1SKGOq-0003E2-3B@dinsdale.python.org>
Message-ID: <CADiSq7cgghjewZeACKOWJL9t+aah3ay7jus7qy0Jmts-XSs1xg@mail.gmail.com>

On Wed, Apr 18, 2012 at 7:57 AM, brian.curtin
<python-checkins at python.org> wrote:
> diff --git a/Python/errors.c b/Python/errors.c
> --- a/Python/errors.c
> +++ b/Python/errors.c
> @@ -586,50 +586,43 @@
>  #endif /* MS_WINDOWS */
>
>  PyObject *
> -PyErr_SetExcWithArgsKwargs(PyObject *exc, PyObject *args, PyObject *kwargs)
> +PyErr_SetImportError(PyObject *msg, PyObject *name, PyObject *path)
>  {
> -    PyObject *val;
> +    PyObject *args, *kwargs, *error;
> +
> +    args = PyTuple_New(1);
> +    if (args == NULL)
> +        return NULL;
> +
> +    kwargs = PyDict_New();
> +    if (args == NULL)
> +        return NULL;
> +
> +    if (name == NULL)
> +        name = Py_None;
> +
> +    if (path == NULL)
> +        path = Py_None;

Py_INCREF's?

Regards,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From phd at phdru.name  Wed Apr 18 13:28:49 2012
From: phd at phdru.name (Oleg Broytman)
Date: Wed, 18 Apr 2012 15:28:49 +0400
Subject: [Python-Dev] [Python-checkins] cpython: Fix #14600. Correct
 reference handling and naming of ImportError convenience
In-Reply-To: <CADiSq7cgghjewZeACKOWJL9t+aah3ay7jus7qy0Jmts-XSs1xg@mail.gmail.com>
References: <E1SKGOq-0003E2-3B@dinsdale.python.org>
	<CADiSq7cgghjewZeACKOWJL9t+aah3ay7jus7qy0Jmts-XSs1xg@mail.gmail.com>
Message-ID: <20120418112849.GB3434@iskra.aviel.ru>

On Wed, Apr 18, 2012 at 09:19:03PM +1000, Nick Coghlan <ncoghlan at gmail.com> wrote:
> On Wed, Apr 18, 2012 at 7:57 AM, brian.curtin
> <python-checkins at python.org> wrote:
> > diff --git a/Python/errors.c b/Python/errors.c
> > --- a/Python/errors.c
> > +++ b/Python/errors.c
> > @@ -586,50 +586,43 @@
> > +  args = PyTuple_New(1);
> > +  if (args == NULL)
> > +    return NULL;
> > +
> > +  kwargs = PyDict_New();
> > +  if (args == NULL)
> > +    return NULL;

   Shouldn't the second test be
if (kwargs == NULL)
   ???
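
Presumably something along these lines (just a sketch, with the tuple
created a few lines earlier released on the error path so it does not
leak):

    kwargs = PyDict_New();
    if (kwargs == NULL) {
        Py_DECREF(args);    /* drop the freshly created args tuple */
        return NULL;
    }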

Oleg.
-- 
     Oleg Broytman            http://phdru.name/            phd at phdru.name
           Programmers don't die, they just GOSUB without RETURN.

From ncoghlan at gmail.com  Wed Apr 18 14:37:31 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 18 Apr 2012 22:37:31 +1000
Subject: [Python-Dev] Setting up a RHEL6 buildbot
In-Reply-To: <CADiSq7cT-mouSB7pGoocA52+Yz7JxqT4aGm=9ex1V+_KLRkELA@mail.gmail.com>
References: <CADiSq7cT-mouSB7pGoocA52+Yz7JxqT4aGm=9ex1V+_KLRkELA@mail.gmail.com>
Message-ID: <CADiSq7f_F8AxZg170z4UiJP8R92rjP1dmG_Qj2AdSjMfdSHYyQ@mail.gmail.com>

On Fri, Mar 23, 2012 at 1:48 PM, Nick Coghlan <ncoghlan at gmail.com> wrote:
> I'm looking into getting a RHEL6 system set up to add to the buildbot
> fleet.

This is getting closer to being ready to go. Could one of the BB
admins contact me off-list to set up the slave name and password?

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From sdl.web at gmail.com  Wed Apr 18 14:54:28 2012
From: sdl.web at gmail.com (Leo)
Date: Wed, 18 Apr 2012 20:54:28 +0800
Subject: [Python-Dev] webbrowser no longer support 'internet-config' on Mac
Message-ID: <m1ipgxgpyz.fsf@gmail.com>

The doc says supported as in
http://docs.python.org/library/webbrowser.html

but the code has been deleted in
http://hg.python.org/cpython/rev/66b3eda6283f

Leo


From ncoghlan at gmail.com  Wed Apr 18 15:39:34 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 18 Apr 2012 23:39:34 +1000
Subject: [Python-Dev] [Python-checkins] cpython: Fix email post-commit
	review comments.
In-Reply-To: <E1SKUyU-00070j-C9@dinsdale.python.org>
References: <E1SKUyU-00070j-C9@dinsdale.python.org>
Message-ID: <CADiSq7dmSZ6KP+Zjx7npOKMDzK1B8wc7v3RmrP9uR6SiLjqSVA@mail.gmail.com>

On Wed, Apr 18, 2012 at 11:31 PM, brian.curtin
<python-checkins at python.org> wrote:
> -    if (name == NULL)
> +    if (name == NULL) {
> +        Py_INCREF(Py_None);
>          name = Py_None;
> +    }

A slightly more traditional way to write that would be:

    name = Py_None;
    Py_INCREF(name);

> -    if (path == NULL)
> +    if (path == NULL) {
> +        Py_INCREF(Py_None);
>          path = Py_None;
> +    }

Ditto.

>
>      Py_INCREF(msg);
> -    PyTuple_SetItem(args, 0, msg);
> +    PyTuple_SetItem(args, 0, NULL);//msg);

However, *this* looks a lot more suspicious... accidental commit of
debugging code?

(if not for spotting this last problem, I wouldn't have even mentioned
the first two)

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From stephen at xemacs.org  Wed Apr 18 15:58:16 2012
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Wed, 18 Apr 2012 22:58:16 +0900
Subject: [Python-Dev] PEP 418: Add monotonic time,
 performance counter and process time functions
In-Reply-To: <CAMpsgwZ3eAkK814R=725LUBas8PVFqdwmOw-iz-FYL6ZvJ3ukw@mail.gmail.com>
References: <CAMpsgwbZThWMRmXxLNisXAGTZJ1poKGBa3z_EzZPDQdjJwpA5g@mail.gmail.com>
	<20120417044821.GA1979@cskk.homeip.net>
	<20120417123545.3DF842509E8@webabinitio.net>
	<CAMpsgwa95_cU_WTDgpQo11rFyM2qyfymsgY-kzUr-8vF8UTHFw@mail.gmail.com>
	<CAL_0O1_LRTNjPzpWdyGYKyCAKEGwxNJ4+VHFvgc7vTyptRmJ5A@mail.gmail.com>
	<CAMpsgwZ3eAkK814R=725LUBas8PVFqdwmOw-iz-FYL6ZvJ3ukw@mail.gmail.com>
Message-ID: <CAL_0O1-Bs_7kZ2kTktLTyimCpAoPDfoX3GsZe-zvBjLbBJd0Lw@mail.gmail.com>

On Wed, Apr 18, 2012 at 7:29 PM, Victor Stinner
<victor.stinner at gmail.com> wrote:

> If anyone is motivated to write a perfect (or at least better) glossary, please do it!

We don't want a perfect glossary, we want one we agree on, that
defines terms consistently with the way they're used in the PEP.
However, what I read in this thread is that the PEP protagonist
doesn't feel qualified or motivated to maintain the glossary, and
others posting that surely we agree on what we're talking about even
though the definitions in the PEP are controversial and at least one
(resolution) is close to meaningless in actual use.  It bothers me
that we are writing code without having agreement about the terms that
define what it's trying to accomplish.  Especially when an important
subset of users who I think are likely to care (viz, the scientific
and engineering community) seems likely to use different definitions.

Has anybody asked people on the scipy channels what they think about all this?

> It already contains a link to the  Wikipedia article "Accuracy_and_precision".

Well, its definitions of precision and resolution differ from the
PEP's.  I'm disturbed that the PEP does not remark on this despite
citing it.

From rdmurray at bitdance.com  Wed Apr 18 16:08:10 2012
From: rdmurray at bitdance.com (R. David Murray)
Date: Wed, 18 Apr 2012 10:08:10 -0400
Subject: [Python-Dev] [Python-checkins] cpython: Fix email post-commit
	review comments.
In-Reply-To: <CADiSq7dmSZ6KP+Zjx7npOKMDzK1B8wc7v3RmrP9uR6SiLjqSVA@mail.gmail.com>
References: <E1SKUyU-00070j-C9@dinsdale.python.org>
	<CADiSq7dmSZ6KP+Zjx7npOKMDzK1B8wc7v3RmrP9uR6SiLjqSVA@mail.gmail.com>
Message-ID: <20120418140811.2B8F62509E4@webabinitio.net>

We're seeing segfaults on the buildbots now.  Example:

http://www.python.org/dev/buildbot/all/builders/x86%20Ubuntu%20Shared%203.x/builds/5715

On Wed, 18 Apr 2012 23:39:34 +1000, Nick Coghlan <ncoghlan at gmail.com> wrote:
> On Wed, Apr 18, 2012 at 11:31 PM, brian.curtin
> <python-checkins at python.org> wrote:
> > -    if (name == NULL)
> > +    if (name == NULL) {
> > +        Py_INCREF(Py_None);
> >          name = Py_None;
> > +    }
> 
> A slightly more traditional way to write that would be:
> 
>     name = Py_None;
>     Py_INCREF(name);
> 
> > -    if (path == NULL)
> > +    if (path == NULL) {
> > +        Py_INCREF(Py_None);
> >          path = Py_None;
> > +    }
> 
> Ditto.
> 
> >
> >      Py_INCREF(msg);
> > -    PyTuple_SetItem(args, 0, msg);
> > +    PyTuple_SetItem(args, 0, NULL);//msg);
> 
> However, *this* looks a lot more suspicious... accidental commit of
> debugging code?
> 
> (if not for spotting this last problem, I wouldn't have even mentioned
> the first two)
> 
> Cheers,
> Nick.
> 
> -- 
> Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/rdmurray%40bitdance.com

From rdmurray at bitdance.com  Wed Apr 18 16:09:35 2012
From: rdmurray at bitdance.com (R. David Murray)
Date: Wed, 18 Apr 2012 10:09:35 -0400
Subject: [Python-Dev] webbrowser no longer support 'internet-config' on
	Mac
In-Reply-To: <m1ipgxgpyz.fsf@gmail.com>
References: <m1ipgxgpyz.fsf@gmail.com>
Message-ID: <20120418140936.21DD72509E4@webabinitio.net>

Please submit a bug report at bugs.python.org.  Bugs posted to this
mailing list tend to get forgotten unless a tracker issue is created.

On Wed, 18 Apr 2012 20:54:28 +0800, Leo <sdl.web at gmail.com> wrote:
> The doc says supported as in
> http://docs.python.org/library/webbrowser.html
> 
> but the code has been deleted in
> http://hg.python.org/cpython/rev/66b3eda6283f
> 
> Leo
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/rdmurray%40bitdance.com

From brett at python.org  Wed Apr 18 16:11:17 2012
From: brett at python.org (Brett Cannon)
Date: Wed, 18 Apr 2012 10:11:17 -0400
Subject: [Python-Dev] webbrowser no longer support 'internet-config' on
	Mac
In-Reply-To: <m1ipgxgpyz.fsf@gmail.com>
References: <m1ipgxgpyz.fsf@gmail.com>
Message-ID: <CAP1=2W4r_-_XNKqqU0Xdayd_ZQQdTzq+PS4_EZbjvStOCZyriQ@mail.gmail.com>

Please file a bug report at bugs.python.org so this isn't lost.

On Wed, Apr 18, 2012 at 08:54, Leo <sdl.web at gmail.com> wrote:

> The doc says supported as in
> http://docs.python.org/library/webbrowser.html
>
> but the code has been deleted in
> http://hg.python.org/cpython/rev/66b3eda6283f
>
> Leo
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/brett%40python.org
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20120418/634cfde3/attachment.html>

From solipsis at pitrou.net  Wed Apr 18 16:21:50 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Wed, 18 Apr 2012 16:21:50 +0200
Subject: [Python-Dev] cpython: Fix email post-commit review comments.
References: <E1SKUyU-00070j-C9@dinsdale.python.org>
Message-ID: <20120418162150.182c5b26@pitrou.net>

On Wed, 18 Apr 2012 15:31:10 +0200
brian.curtin <python-checkins at python.org> wrote:
> http://hg.python.org/cpython/rev/bf23a6c215f6
> changeset:   76388:bf23a6c215f6
> parent:      76385:6762b943ee59
> user:        Brian Curtin <brian at python.org>
> date:        Wed Apr 18 08:30:51 2012 -0500
> summary:
>   Fix email post-commit review comments.
> 
> Add INCREFs, fix args->kwargs, and a second args==NULL check was removed,
> left over from a merger with another function. Instead, checking msg==NULL
> does what that used to do in a roundabout way.

I don't think INCREFs were necessary, actually.
PyDict_SetItemString doesn't steal a reference.

(and here we see why reference-stealing APIs are a nuisance: because
you never know in advance whether a function will steal a reference or
not, and you have to read the docs for each and every C API call you
make)
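
To make the asymmetry concrete, a minimal sketch (not the actual
errors.c code; the helper name is made up for illustration):

    #include <Python.h>

    /* Fill a 1-tuple and a dict with a borrowed msg and name.
       Returns 0 on success, -1 on error.  Sketch only. */
    static int
    fill_args(PyObject *args, PyObject *kwargs, PyObject *msg, PyObject *name)
    {
        /* PyTuple_SetItem *steals* the reference it is given, so a
           borrowed object must be INCREF'ed first. */
        Py_INCREF(msg);
        if (PyTuple_SetItem(args, 0, msg) < 0)
            return -1;

        /* PyDict_SetItemString does NOT steal: it adds its own
           reference, so no INCREF is needed here. */
        return PyDict_SetItemString(kwargs, "name", name);
    }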

Regards

Antoine.



From guido at python.org  Wed Apr 18 16:47:13 2012
From: guido at python.org (Guido van Rossum)
Date: Wed, 18 Apr 2012 07:47:13 -0700
Subject: [Python-Dev] Suggested addition to PEP 8 for context managers
In-Reply-To: <CAAWk_DzUK2gRriLp3DYe1PRC8qU3N3Kkes75XSA2hMSKpUvzmQ@mail.gmail.com>
References: <8402DB76-DF4F-41BD-BD9B-3689AF3D6159@gmail.com>
	<20120416113037.66e4da6f@limelight.wooz.org>
	<CANG+ZTNCwkHapAxZ87L-fcbPfi8FQwK=kSY7OCEApVoHCryJeA@mail.gmail.com>
	<4F8D1377.5020001@redhat.com>
	<20120417122502.0B9D82509E8@webabinitio.net>
	<20120417113631.7fb1b543@resist.wooz.org>
	<CAAWk_DzUK2gRriLp3DYe1PRC8qU3N3Kkes75XSA2hMSKpUvzmQ@mail.gmail.com>
Message-ID: <CAP7+vJ+i-Qk7uOO_=XxBykMK3G01uTKODWKmpkhUc9+-eFZZWA@mail.gmail.com>

On Wed, Apr 18, 2012 at 1:29 AM, Floris Bruynooghe <flub at devork.be> wrote:
> And since this is OT by now, one of the other pep8 annoyances I
> have[0] is the blanket whitespace around arithmetic operators,
> including **.  To me the first just looks ugly:
>
>>>> 1024 ** 2
>>>> 1024**2
>
> Certainly when the expressions are larger.

I don't believe PEP 8 requires whitespace around all binary operators.
Where do you read that?

-- 
--Guido van Rossum (python.org/~guido)

From rosuav at gmail.com  Wed Apr 18 17:47:36 2012
From: rosuav at gmail.com (Chris Angelico)
Date: Thu, 19 Apr 2012 01:47:36 +1000
Subject: [Python-Dev] Suggested addition to PEP 8 for context managers
In-Reply-To: <CAP7+vJ+i-Qk7uOO_=XxBykMK3G01uTKODWKmpkhUc9+-eFZZWA@mail.gmail.com>
References: <8402DB76-DF4F-41BD-BD9B-3689AF3D6159@gmail.com>
	<20120416113037.66e4da6f@limelight.wooz.org>
	<CANG+ZTNCwkHapAxZ87L-fcbPfi8FQwK=kSY7OCEApVoHCryJeA@mail.gmail.com>
	<4F8D1377.5020001@redhat.com>
	<20120417122502.0B9D82509E8@webabinitio.net>
	<20120417113631.7fb1b543@resist.wooz.org>
	<CAAWk_DzUK2gRriLp3DYe1PRC8qU3N3Kkes75XSA2hMSKpUvzmQ@mail.gmail.com>
	<CAP7+vJ+i-Qk7uOO_=XxBykMK3G01uTKODWKmpkhUc9+-eFZZWA@mail.gmail.com>
Message-ID: <CAPTjJmr8SnjwDd=03TOH+T-vnQhh3ZCjMkKqJkctozX3q=JeYQ@mail.gmail.com>

On Thu, Apr 19, 2012 at 12:47 AM, Guido van Rossum <guido at python.org> wrote:
> I don't believe PEP 8 requires whitespace around all binary operators.
> Where do you read that?

Quoting from http://www.python.org/dev/peps/pep-0008/#other-recommendations
(with elision):

Use spaces around arithmetic operators:
   No:
      i=i+1
      submitted +=1
      x = x*2 - 1
      hypot2 = x*x + y*y
      c = (a+b) * (a-b)

End quote.

In my code, whether Python or any other language, I tend to follow the
principle that whitespace is completely optional in these expressions,
but if spaces surround any operator, they should (generally) also
surround all operators of lower precedence in the same expression. So
I would quite happily accept all of the expressions above (except
'submitted', which is inconsistent), but would prefer not to see
something like:

c=(a + b)*(a - b)

which is also forbidden by PEP 8.

ChrisA

From g.brandl at gmx.net  Wed Apr 18 18:25:16 2012
From: g.brandl at gmx.net (Georg Brandl)
Date: Wed, 18 Apr 2012 18:25:16 +0200
Subject: [Python-Dev] Suggested addition to PEP 8 for context managers
In-Reply-To: <CAPTjJmr8SnjwDd=03TOH+T-vnQhh3ZCjMkKqJkctozX3q=JeYQ@mail.gmail.com>
References: <8402DB76-DF4F-41BD-BD9B-3689AF3D6159@gmail.com>
	<20120416113037.66e4da6f@limelight.wooz.org>
	<CANG+ZTNCwkHapAxZ87L-fcbPfi8FQwK=kSY7OCEApVoHCryJeA@mail.gmail.com>
	<4F8D1377.5020001@redhat.com>
	<20120417122502.0B9D82509E8@webabinitio.net>
	<20120417113631.7fb1b543@resist.wooz.org>
	<CAAWk_DzUK2gRriLp3DYe1PRC8qU3N3Kkes75XSA2hMSKpUvzmQ@mail.gmail.com>
	<CAP7+vJ+i-Qk7uOO_=XxBykMK3G01uTKODWKmpkhUc9+-eFZZWA@mail.gmail.com>
	<CAPTjJmr8SnjwDd=03TOH+T-vnQhh3ZCjMkKqJkctozX3q=JeYQ@mail.gmail.com>
Message-ID: <jmmpsn$1k1$1@dough.gmane.org>

On 18.04.2012 17:47, Chris Angelico wrote:
> On Thu, Apr 19, 2012 at 12:47 AM, Guido van Rossum<guido at python.org>  wrote:
>>  I don't believe PEP 8 requires whitespace around all binary operators.
>>  Where do you read that?
>
> Quoting from http://www.python.org/dev/peps/pep-0008/#other-recommendations
> (with elision):
>
> Use spaces around arithmetic operators:
>     No:
>        i=i+1
>        submitted +=1
>        x = x*2 - 1
>        hypot2 = x*x + y*y
>        c = (a+b) * (a-b)
>
> End quote.

I agree that this could be reworded.  Especially when the operands are
as short as in the examples, the last three "No"s read better to me than
the "Yes" entries.  In this case, spacing serves for visually grouping
expressions by precedence, which otherwise could also be indicated by
(semantically unnecessary) parens.

But in all cases discussed here, PEP8 should not be seen as a law.
Its second section ("A Foolish Consistency is the Hobgoblin of Little
Minds") is quite prominent for a reason.

Georg


From guido at python.org  Wed Apr 18 19:38:41 2012
From: guido at python.org (Guido van Rossum)
Date: Wed, 18 Apr 2012 10:38:41 -0700
Subject: [Python-Dev] Suggested addition to PEP 8 for context managers
In-Reply-To: <jmmpsn$1k1$1@dough.gmane.org>
References: <8402DB76-DF4F-41BD-BD9B-3689AF3D6159@gmail.com>
	<20120416113037.66e4da6f@limelight.wooz.org>
	<CANG+ZTNCwkHapAxZ87L-fcbPfi8FQwK=kSY7OCEApVoHCryJeA@mail.gmail.com>
	<4F8D1377.5020001@redhat.com>
	<20120417122502.0B9D82509E8@webabinitio.net>
	<20120417113631.7fb1b543@resist.wooz.org>
	<CAAWk_DzUK2gRriLp3DYe1PRC8qU3N3Kkes75XSA2hMSKpUvzmQ@mail.gmail.com>
	<CAP7+vJ+i-Qk7uOO_=XxBykMK3G01uTKODWKmpkhUc9+-eFZZWA@mail.gmail.com>
	<CAPTjJmr8SnjwDd=03TOH+T-vnQhh3ZCjMkKqJkctozX3q=JeYQ@mail.gmail.com>
	<jmmpsn$1k1$1@dough.gmane.org>
Message-ID: <CAP7+vJL3v=SKzhJD8P40hB94wRiBE3MfNY9P7VtY2itC=ZcHag@mail.gmail.com>

On Wed, Apr 18, 2012 at 9:25 AM, Georg Brandl <g.brandl at gmx.net> wrote:
> On 18.04.2012 17:47, Chris Angelico wrote:
>>
>> On Thu, Apr 19, 2012 at 12:47 AM, Guido van Rossum<guido at python.org>
>> wrote:
>>>
>>> I don't believe PEP 8 requires whitespace around all binary operators.
>>> Where do you read that?
>>
>>
>> Quoting from
>> http://www.python.org/dev/peps/pep-0008/#other-recommendations
>> (with elision):
>>
>> Use spaces around arithmetic operators:
>>    No:
>>       i=i+1
>>       submitted +=1
>>       x = x*2 - 1
>>       hypot2 = x*x + y*y
>>       c = (a+b) * (a-b)
>>
>> End quote.
>
>
> I agree that this could be reworded.  Especially when the operands are
> as short as in the examples, the last three "No"s read better to me than
> the "Yes" entries.  In this case, spacing serves for visually grouping
> expressions by precedence, which otherwise could also be indicated by
> (semantically unnecessary) parens.

Indeed. I don't know who put that in, it wasn't me.

> But in all cases discussed here, PEP8 should not be seen as a law.
> Its second section ("A Foolish Consistency is the Hobgoblin of Little
> Minds") is quite prominent for a reason.

I think whoever put that blanket rule in the PEP fell prey to this.

Let's change this to something more reasonable, e.g.

"""
If operators with different priorities are used, consider adding
whitespace around the operators with the lowest priority(ies). This is
very much to taste; however, never use more than one space, and always
have the same amount of whitespace on both sides of a binary operator.
"""

-- 
--Guido van Rossum (python.org/~guido)

From ethan at stoneleaf.us  Wed Apr 18 20:07:26 2012
From: ethan at stoneleaf.us (Ethan Furman)
Date: Wed, 18 Apr 2012 11:07:26 -0700
Subject: [Python-Dev] __hash__ documentation
Message-ID: <4F8F02DE.9020309@stoneleaf.us>

http://bugs.python.org/issue14617

Patch attached to issue.

~Ethan~

From brian at python.org  Wed Apr 18 21:03:27 2012
From: brian at python.org (Brian Curtin)
Date: Wed, 18 Apr 2012 14:03:27 -0500
Subject: [Python-Dev] __hash__ documentation
In-Reply-To: <4F8F02DE.9020309@stoneleaf.us>
References: <4F8F02DE.9020309@stoneleaf.us>
Message-ID: <CAD+XWwo0fndJxkR25YiG9ygZfU0kGE4PxA0njCw4Y94wBVKeoQ@mail.gmail.com>

On Wed, Apr 18, 2012 at 13:07, Ethan Furman <ethan at stoneleaf.us> wrote:
> http://bugs.python.org/issue14617
>
> Patch attached to issue.

Can I request that you not immediately post issues to python-dev?
Those who follow the bug tracker will see the issue and act
accordingly.

(unless I missed some explicit request that this be posted here, in
which case I apologize)

From ethan at stoneleaf.us  Wed Apr 18 21:19:39 2012
From: ethan at stoneleaf.us (Ethan Furman)
Date: Wed, 18 Apr 2012 12:19:39 -0700
Subject: [Python-Dev] __hash__ documentation
In-Reply-To: <CAD+XWwo0fndJxkR25YiG9ygZfU0kGE4PxA0njCw4Y94wBVKeoQ@mail.gmail.com>
References: <4F8F02DE.9020309@stoneleaf.us>
	<CAD+XWwo0fndJxkR25YiG9ygZfU0kGE4PxA0njCw4Y94wBVKeoQ@mail.gmail.com>
Message-ID: <4F8F13CB.6040602@stoneleaf.us>

Brian Curtin wrote:
> On Wed, Apr 18, 2012 at 13:07, Ethan Furman <ethan at stoneleaf.us> wrote:
> Those who follow the bug tracker will see the issue and act
> accordingly.

How does one follow the bug tracker?

~Ethan~

From benjamin at python.org  Wed Apr 18 21:11:30 2012
From: benjamin at python.org (Benjamin Peterson)
Date: Wed, 18 Apr 2012 15:11:30 -0400
Subject: [Python-Dev] __hash__ documentation
In-Reply-To: <4F8F13CB.6040602@stoneleaf.us>
References: <4F8F02DE.9020309@stoneleaf.us>
	<CAD+XWwo0fndJxkR25YiG9ygZfU0kGE4PxA0njCw4Y94wBVKeoQ@mail.gmail.com>
	<4F8F13CB.6040602@stoneleaf.us>
Message-ID: <CAPZV6o90c572j6ddMtMqQyXA8=kWpBWDVmqXfXynncQ1seToEg@mail.gmail.com>

2012/4/18 Ethan Furman <ethan at stoneleaf.us>:
> Brian Curtin wrote:
>>
>> On Wed, Apr 18, 2012 at 13:07, Ethan Furman <ethan at stoneleaf.us> wrote:
>> Those who follow the bug tracker will see the issue and act
>> accordingly.
>
>
> How does one follow the bug tracker?

One checks occasionally to see if anything interesting has popped up
or, for the insane, subscribes to python-bugs.


-- 
Regards,
Benjamin

From solipsis at pitrou.net  Wed Apr 18 21:14:04 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Wed, 18 Apr 2012 21:14:04 +0200
Subject: [Python-Dev] __hash__ documentation
References: <4F8F02DE.9020309@stoneleaf.us>
	<CAD+XWwo0fndJxkR25YiG9ygZfU0kGE4PxA0njCw4Y94wBVKeoQ@mail.gmail.com>
	<4F8F13CB.6040602@stoneleaf.us>
Message-ID: <20120418211404.2700d7f7@pitrou.net>

On Wed, 18 Apr 2012 12:19:39 -0700
Ethan Furman <ethan at stoneleaf.us> wrote:
> Brian Curtin wrote:
> > On Wed, Apr 18, 2012 at 13:07, Ethan Furman <ethan at stoneleaf.us> wrote:
> > Those who follow the bug tracker will see the issue and act
> > accordingly.
> 
> How does one follow the bug tracker?

Checking it frequently is a possibility.
Reading http://mail.python.org/mailman/listinfo/new-bugs-announce is
another.

In any case, announcing new issues on python-dev would only flood the
mailing list and infuriate all readers. You should keep it to really
important issues, or if you have a specific question to ask.

Regards

Antoine.



From martin at v.loewis.de  Wed Apr 18 21:29:00 2012
From: martin at v.loewis.de ("Martin v. Löwis")
Date: Wed, 18 Apr 2012 21:29:00 +0200
Subject: [Python-Dev] [Python-checkins] cpython: Issue #11750: The
 Windows API functions scattered in the _subprocess and
In-Reply-To: <E1SKZzK-0001QB-GS@dinsdale.python.org>
References: <E1SKZzK-0001QB-GS@dinsdale.python.org>
Message-ID: <4F8F15FC.1010705@v.loewis.de>

On 18.04.2012 20:52, antoine.pitrou wrote:
> http://hg.python.org/cpython/rev/f3a27d11101a
> changeset:   76405:f3a27d11101a
> user:        Antoine Pitrou <solipsis at pitrou.net>
> date:        Wed Apr 18 20:51:15 2012 +0200
> summary:
>   Issue #11750: The Windows API functions scattered in the _subprocess and
> _multiprocessing.win32 modules now live in a single module "_winapi".
> Patch by sbt.

Can we use Real Names, please?

Regards,
Martin

From solipsis at pitrou.net  Wed Apr 18 21:30:14 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Wed, 18 Apr 2012 21:30:14 +0200
Subject: [Python-Dev] cpython: Issue #11750: The Windows API functions
 scattered in the _subprocess and
References: <E1SKZzK-0001QB-GS@dinsdale.python.org>
	<4F8F15FC.1010705@v.loewis.de>
Message-ID: <20120418213014.00d36cc0@pitrou.net>

On Wed, 18 Apr 2012 21:29:00 +0200
"Martin v. L?wis" <martin at v.loewis.de> wrote:
> Am 18.04.2012 20:52, schrieb antoine.pitrou:
> > http://hg.python.org/cpython/rev/f3a27d11101a
> > changeset:   76405:f3a27d11101a
> > user:        Antoine Pitrou <solipsis at pitrou.net>
> > date:        Wed Apr 18 20:51:15 2012 +0200
> > summary:
> >   Issue #11750: The Windows API functions scattered in the _subprocess and
> > _multiprocessing.win32 modules now live in a single module "_winapi".
> > Patch by sbt.
> 
> Can we use Real Names, please?

Do we have a policy about that? sbt seems happy using a pseudonym (and
I personally don't have a problem with it).

Regards

Antoine.



From martin at v.loewis.de  Wed Apr 18 21:34:01 2012
From: martin at v.loewis.de ("Martin v. Löwis")
Date: Wed, 18 Apr 2012 21:34:01 +0200
Subject: [Python-Dev] __hash__ documentation
In-Reply-To: <4F8F13CB.6040602@stoneleaf.us>
References: <4F8F02DE.9020309@stoneleaf.us>	<CAD+XWwo0fndJxkR25YiG9ygZfU0kGE4PxA0njCw4Y94wBVKeoQ@mail.gmail.com>
	<4F8F13CB.6040602@stoneleaf.us>
Message-ID: <4F8F1729.2020903@v.loewis.de>

On 18.04.2012 21:19, Ethan Furman wrote:
> Brian Curtin wrote:
>> On Wed, Apr 18, 2012 at 13:07, Ethan Furman <ethan at stoneleaf.us> wrote:
>> Those who follow the bug tracker will see the issue and act
>> accordingly.
> 
> How does one follow the bug tracker?

I do by subscribing to new-bugs-announce.

Regards,
Martin

From ethan at stoneleaf.us  Wed Apr 18 21:18:28 2012
From: ethan at stoneleaf.us (Ethan Furman)
Date: Wed, 18 Apr 2012 12:18:28 -0700
Subject: [Python-Dev] __hash__ documentation
In-Reply-To: <CAD+XWwo0fndJxkR25YiG9ygZfU0kGE4PxA0njCw4Y94wBVKeoQ@mail.gmail.com>
References: <4F8F02DE.9020309@stoneleaf.us>
	<CAD+XWwo0fndJxkR25YiG9ygZfU0kGE4PxA0njCw4Y94wBVKeoQ@mail.gmail.com>
Message-ID: <4F8F1384.3080200@stoneleaf.us>

Brian Curtin wrote:
> On Wed, Apr 18, 2012 at 13:07, Ethan Furman <ethan at stoneleaf.us> wrote:
>> http://bugs.python.org/issue14617
>>
>> Patch attached to issue.
> 
> Can I request that you not immediately post issues to python-dev?
> Those who follow the bug tracker will see the issue and act
> accordingly.
> 
> (unless I missed some explicit request that this be posted here, in
> which case I apologize)

No problem, still learning how things work.  :)

~Ethan~

From tjreedy at udel.edu  Wed Apr 18 21:50:13 2012
From: tjreedy at udel.edu (Terry Reedy)
Date: Wed, 18 Apr 2012 15:50:13 -0400
Subject: [Python-Dev] PEP 418: Add monotonic time,
 performance counter and process time functions
In-Reply-To: <CAL_0O1_LRTNjPzpWdyGYKyCAKEGwxNJ4+VHFvgc7vTyptRmJ5A@mail.gmail.com>
References: <CAMpsgwbZThWMRmXxLNisXAGTZJ1poKGBa3z_EzZPDQdjJwpA5g@mail.gmail.com>
	<20120417044821.GA1979@cskk.homeip.net>
	<20120417123545.3DF842509E8@webabinitio.net>
	<CAMpsgwa95_cU_WTDgpQo11rFyM2qyfymsgY-kzUr-8vF8UTHFw@mail.gmail.com>
	<CAL_0O1_LRTNjPzpWdyGYKyCAKEGwxNJ4+VHFvgc7vTyptRmJ5A@mail.gmail.com>
Message-ID: <jmn5tr$4eg$1@dough.gmane.org>

On 4/18/2012 2:45 AM, Stephen J. Turnbull wrote:
> On Wed, Apr 18, 2012 at 8:25 AM, Victor Stinner
> <victor.stinner at gmail.com>  wrote:
>
>> Ok ok, resolution / accuracy / precision are confusing (or at least
>> not well known concepts).
>
> Maybe not to us, but in fields like astronomy and mechanical
> engineering there are commonly accepted definitions:
>
> Resolution: the smallest difference between two physical values that
> results in a different measurement by a given instrument.
>
> Precision: the amount of deviation among measurements of the same
> physical value by a single instrument.
>
> Accuracy: the amount of deviation of measurements by a given
> instrument from true values.

These are standard definitions in US English that I learned in physics 
and statistics decades ago.

-- 
Terry Jan Reedy


From tjreedy at udel.edu  Wed Apr 18 21:56:42 2012
From: tjreedy at udel.edu (Terry Reedy)
Date: Wed, 18 Apr 2012 15:56:42 -0400
Subject: [Python-Dev] __hash__ documentation
In-Reply-To: <4F8F13CB.6040602@stoneleaf.us>
References: <4F8F02DE.9020309@stoneleaf.us>
	<CAD+XWwo0fndJxkR25YiG9ygZfU0kGE4PxA0njCw4Y94wBVKeoQ@mail.gmail.com>
	<4F8F13CB.6040602@stoneleaf.us>
Message-ID: <jmn69v$4eg$2@dough.gmane.org>

On 4/18/2012 3:19 PM, Ethan Furman wrote:
> Brian Curtin wrote:
>> On Wed, Apr 18, 2012 at 13:07, Ethan Furman <ethan at stoneleaf.us> wrote:
>> Those who follow the bug tracker will see the issue and act
>> accordingly.
>
> How does one follow the bug tracker?

I look at the Friday summary, paying particular attention to issues with 
no responses from those who follow it more frequently.

-- 
Terry Jan Reedy


From rdmurray at bitdance.com  Wed Apr 18 21:58:20 2012
From: rdmurray at bitdance.com (R. David Murray)
Date: Wed, 18 Apr 2012 15:58:20 -0400
Subject: [Python-Dev] PEP 418: Add monotonic time,
	performance counter and process time functions
In-Reply-To: <jmn5tr$4eg$1@dough.gmane.org>
References: <CAMpsgwbZThWMRmXxLNisXAGTZJ1poKGBa3z_EzZPDQdjJwpA5g@mail.gmail.com>
	<20120417044821.GA1979@cskk.homeip.net>
	<20120417123545.3DF842509E8@webabinitio.net>
	<CAMpsgwa95_cU_WTDgpQo11rFyM2qyfymsgY-kzUr-8vF8UTHFw@mail.gmail.com>
	<CAL_0O1_LRTNjPzpWdyGYKyCAKEGwxNJ4+VHFvgc7vTyptRmJ5A@mail.gmail.com>
	<jmn5tr$4eg$1@dough.gmane.org>
Message-ID: <20120418195821.2419F2509E4@webabinitio.net>

On Wed, 18 Apr 2012 15:50:13 -0400, Terry Reedy <tjreedy at udel.edu> wrote:
> On 4/18/2012 2:45 AM, Stephen J. Turnbull wrote:
> > On Wed, Apr 18, 2012 at 8:25 AM, Victor Stinner
> > <victor.stinner at gmail.com>  wrote:
> >
> >> Ok ok, resolution / accuracy / precision are confusing (or at least
> >> not well known concepts).
> >
> > Maybe not to us, but in fields like astronomy and mechanical
> > engineering there are commonly accepted definitions:
> >
> > Resolution: the smallest difference between two physical values that
> > results in a different measurement by a given instrument.
> >
> > Precision: the amount of deviation among measurements of the same
> > physical value by a single instrument.
> >
> > Accuracy: the amount of deviation of measurements by a given
> > instrument from true values.
> 
> These are standard definitions in US English that I learned in physics 
> and statistics decades ago.

My problem was that I was confusing this definition of precision with
the "precision" of the computer representation of the number (that is,
the number of digits in the returned result).

--David

From nad at acm.org  Wed Apr 18 22:04:57 2012
From: nad at acm.org (Ned Deily)
Date: Wed, 18 Apr 2012 13:04:57 -0700
Subject: [Python-Dev] webbrowser no longer support 'internet-config' on
	Mac
References: <m1ipgxgpyz.fsf@gmail.com>
	<20120418140936.21DD72509E4@webabinitio.net>
Message-ID: <nad-F9E429.13045718042012@news.gmane.org>

In article <20120418140936.21DD72509E4 at webabinitio.net>,
 "R. David Murray" <rdmurray at bitdance.com> wrote:
> Please submit a bug report at bugs.python.org.  Bugs posted to this
> mailing list tend to get forgotten unless a tracker issue is created.
> 
> On Wed, 18 Apr 2012 20:54:28 +0800, Leo <sdl.web at gmail.com> wrote:
> > The doc says supported as in
> > http://docs.python.org/library/webbrowser.html
> > 
> > but the code has been deleted in
> > http://hg.python.org/cpython/rev/66b3eda6283f

Thanks for the report: the documentation for the obsolete 
internet-config option is now removed (2.7 - dd23333b579a, 3.2 - 
292cbd59dbe0, 3.3 - b5e6cbacd6ab).

-- 
 Ned Deily,
 nad at acm.org


From nad at acm.org  Wed Apr 18 22:16:01 2012
From: nad at acm.org (Ned Deily)
Date: Wed, 18 Apr 2012 13:16:01 -0700
Subject: [Python-Dev] __hash__ documentation
References: <4F8F02DE.9020309@stoneleaf.us>
	<CAD+XWwo0fndJxkR25YiG9ygZfU0kGE4PxA0njCw4Y94wBVKeoQ@mail.gmail.com>
	<4F8F13CB.6040602@stoneleaf.us>
	<20120418211404.2700d7f7@pitrou.net>
Message-ID: <nad-84D831.13160118042012@news.gmane.org>

In article <20120418211404.2700d7f7 at pitrou.net>,
 Antoine Pitrou <solipsis at pitrou.net> wrote:
> On Wed, 18 Apr 2012 12:19:39 -0700
> Ethan Furman <ethan at stoneleaf.us> wrote:
> > Brian Curtin wrote:
> > > On Wed, Apr 18, 2012 at 13:07, Ethan Furman <ethan at stoneleaf.us> wrote:
> > > Those who follow the bug tracker will see the issue and act
> > > accordingly.
> > 
> > How does one follow the bug tracker?
> 
> Checking it frequently is a possibility.
> Reading http://mail.python.org/mailman/listinfo/new-bugs-announce is
> another.

Another is following changes via the gmane.org mirror of the bugs list.  
gmane.org provides web, NNTP newsreader, and RSS feeds of all of the 
mailing lists mirrored there:

http://dir.gmane.org/gmane.comp.python.bugs

Many of the other python.org-hosted mailing lists are mirrored at gmane 
as well.

-- 
 Ned Deily,
 nad at acm.org


From ericsnowcurrently at gmail.com  Thu Apr 19 00:22:57 2012
From: ericsnowcurrently at gmail.com (Eric Snow)
Date: Wed, 18 Apr 2012 16:22:57 -0600
Subject: [Python-Dev] cpython: Fix email post-commit review comments.
In-Reply-To: <20120418162150.182c5b26@pitrou.net>
References: <E1SKUyU-00070j-C9@dinsdale.python.org>
	<20120418162150.182c5b26@pitrou.net>
Message-ID: <CALFfu7Czc22=_uKzdtZMaj70PcPs_HhiwWey2HPEim3MH27krg@mail.gmail.com>

On Wed, Apr 18, 2012 at 8:21 AM, Antoine Pitrou <solipsis at pitrou.net> wrote:
> (and here we see why reference-stealing APIs are a nuisance: because
> you never know in advance whether a function will steal a reference or
> not, and you have to read the docs for each and every C API call you
> make)

+1

-eric

From greg.ewing at canterbury.ac.nz  Thu Apr 19 00:48:01 2012
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Thu, 19 Apr 2012 10:48:01 +1200
Subject: [Python-Dev] cpython: Fix email post-commit review comments.
In-Reply-To: <20120418162150.182c5b26@pitrou.net>
References: <E1SKUyU-00070j-C9@dinsdale.python.org>
	<20120418162150.182c5b26@pitrou.net>
Message-ID: <4F8F44A1.40902@canterbury.ac.nz>

Antoine Pitrou wrote:

> (and here we see why reference-stealing APIs are a nuisance: because
> you never know in advance whether a function will steal a reference or
> not, and you have to read the docs for each and every C API call you
> make)

Fortunately, they're very rare, so you don't encounter
them often.

Unfortunately, they're very rare, so you're all the more
likely to forget about them and get bitten.

Functions with ref-stealing APIs really ought to have
a naming convention that makes them stand out and remind
you to consult the documentation.

-- 
Greg

From victor.stinner at gmail.com  Thu Apr 19 01:15:43 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Thu, 19 Apr 2012 01:15:43 +0200
Subject: [Python-Dev] PEP 418: Add monotonic time,
 performance counter and process time functions
In-Reply-To: <CAL_0O1-Bs_7kZ2kTktLTyimCpAoPDfoX3GsZe-zvBjLbBJd0Lw@mail.gmail.com>
References: <CAMpsgwbZThWMRmXxLNisXAGTZJ1poKGBa3z_EzZPDQdjJwpA5g@mail.gmail.com>
	<20120417044821.GA1979@cskk.homeip.net>
	<20120417123545.3DF842509E8@webabinitio.net>
	<CAMpsgwa95_cU_WTDgpQo11rFyM2qyfymsgY-kzUr-8vF8UTHFw@mail.gmail.com>
	<CAL_0O1_LRTNjPzpWdyGYKyCAKEGwxNJ4+VHFvgc7vTyptRmJ5A@mail.gmail.com>
	<CAMpsgwZ3eAkK814R=725LUBas8PVFqdwmOw-iz-FYL6ZvJ3ukw@mail.gmail.com>
	<CAL_0O1-Bs_7kZ2kTktLTyimCpAoPDfoX3GsZe-zvBjLbBJd0Lw@mail.gmail.com>
Message-ID: <CAMpsgwZ2zuC4BULeQ9Dt1v-t3gzeHnVx2nqv6P311OYMVABgVw@mail.gmail.com>

>> If anyone is motivated to write a perfect (or at least better) glossary, please do it!
>
> We don't want a perfect glossary, we want one we agree on, that
> defines terms consistently with the way they're used in the PEP.
> However, what I read in this thread is that the PEP protagonist
> doesn't feel qualified or motivated to maintain the glossary, and
> others posting that surely we agree on what we're talking about even
> though the definitions in the PEP are controversial and at least one
> (resolution) is close to meaningless in actual use.  It bothers me
> that we are writing code without having agreement about the terms that
> define what it's trying to accomplish.  Especially when an important
> subset of users who I think are likely to care (viz, the scientific
> and engineering community) seems likely to use different definitions.

Well, I asked on IRC what I should do for these definitions because
I'm too tired to decide what to do. Ezio Melotti (Taggnostr) and R.
David Murray (bitdancer) prefer your definition over the current
definitions of accuracy, precision and resolution in the PEP. So I
replaced these definitions with yours.

Victor

From tseaver at palladion.com  Thu Apr 19 01:22:36 2012
From: tseaver at palladion.com (Tres Seaver)
Date: Wed, 18 Apr 2012 19:22:36 -0400
Subject: [Python-Dev] cpython: Fix email post-commit review comments.
In-Reply-To: <4F8F44A1.40902@canterbury.ac.nz>
References: <E1SKUyU-00070j-C9@dinsdale.python.org>
	<20120418162150.182c5b26@pitrou.net>
	<4F8F44A1.40902@canterbury.ac.nz>
Message-ID: <jmnibr$225$1@dough.gmane.org>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On 04/18/2012 06:48 PM, Greg Ewing wrote:

> Functions with ref-stealing APIs really ought to have a naming
> convention that makes them stand out and remind you to consult the
> documentation.

Maybe we should mandate that their names end with '_rtfm'.


Tres.
- -- 
===================================================================
Tres Seaver          +1 540-429-0999          tseaver at palladion.com
Palladion Software   "Excellence by Design"    http://palladion.com
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.10 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iEYEARECAAYFAk+PTLwACgkQ+gerLs4ltQ5YgACg17rdlCVf8YJmGoYP2eANC8ya
RhoAnimJr/5FzR59IELHAyhdXOO1c+uJ
=uWHZ
-----END PGP SIGNATURE-----


From dmalcolm at redhat.com  Thu Apr 19 02:01:30 2012
From: dmalcolm at redhat.com (David Malcolm)
Date: Wed, 18 Apr 2012 20:01:30 -0400
Subject: [Python-Dev] Highlighting reference-stealing APIs [was Re: cpython:
 Fix email post-commit review comments.]
In-Reply-To: <4F8F44A1.40902@canterbury.ac.nz>
References: <E1SKUyU-00070j-C9@dinsdale.python.org>
	<20120418162150.182c5b26@pitrou.net> <4F8F44A1.40902@canterbury.ac.nz>
Message-ID: <1334793691.31525.137.camel@surprise>

On Thu, 2012-04-19 at 10:48 +1200, Greg Ewing wrote:
> Antoine Pitrou wrote:
> 
> > (and here we see why reference-stealing APIs are a nuisance: because
> > you never know in advance whether a function will steal a reference or
> > not, and you have to read the docs for each and every C API call you
> > make)
> 
> Fortunately, they're very rare, so you don't encounter
> them often.
> 
> Unfortunately, they're very rare, so you're all the more
> likely to forget about them and get bitten.
> 
> Functions with ref-stealing APIs really ought to have
> a naming convention that makes them stand out and remind
> you to consult the documentation.
FWIW my refcount static analyzer adds various new compile-time
attributes to gcc:
http://gcc-python-plugin.readthedocs.org/en/latest/cpychecker.html#marking-functions-that-steal-references-to-their-arguments
so you can write declarations like these:

extern void bar(int i, PyObject *obj, int j, PyObject *other)
  CPYCHECKER_STEALS_REFERENCE_TO_ARG(2)
  CPYCHECKER_STEALS_REFERENCE_TO_ARG(4);

There's a similar attribute for functions that return borrowed
references:

  PyObject *foo(void)
    CPYCHECKER_RETURNS_BORROWED_REF;

Perhaps we should add such attributes to the headers for Python 3.3?
(perhaps with a different naming convention?)
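
For reference, the docs define the macros so that they expand to custom
GCC attributes only when building under the checker and to nothing
otherwise; roughly like the sketch below (quoted from memory, so treat
the exact guard name as an assumption):

  /* Sketch; the guard macro name is an assumption, not verified
     against the plugin headers. */
  #if defined(WITH_CPYCHECKER_STEALS_REFERENCE_TO_ARG_ATTRIBUTE)
    #define CPYCHECKER_STEALS_REFERENCE_TO_ARG(n) \
      __attribute__((cpychecker_steals_reference_to_arg(n)))
  #else
    #define CPYCHECKER_STEALS_REFERENCE_TO_ARG(n)
  #endif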

Hope this is helpful
Dave


From steve at pearwood.info  Thu Apr 19 02:16:05 2012
From: steve at pearwood.info (Steven D'Aprano)
Date: Thu, 19 Apr 2012 10:16:05 +1000
Subject: [Python-Dev] Suggested addition to PEP 8 for context managers
In-Reply-To: <4F8E6DE7.1080100@avl.com>
References: <8402DB76-DF4F-41BD-BD9B-3689AF3D6159@gmail.com>	<20120416113037.66e4da6f@limelight.wooz.org>	<CANG+ZTNCwkHapAxZ87L-fcbPfi8FQwK=kSY7OCEApVoHCryJeA@mail.gmail.com>	<CACac1F-YDN2jvDuTk_RX2F-7f=Q7u5QAM3NVw3R115LZ8ZccaA@mail.gmail.com>	<CAP7+vJKJd0vQH9Jdx_5eWzu3=swb9uHBAqEeJtrsktEeLWSs1w@mail.gmail.com>
	<4F8E6DE7.1080100@avl.com>
Message-ID: <4F8F5945.8010900@pearwood.info>

Hrvoje Niksic wrote:

> The same oddity occurs with expressions in kwargs calls:
> 
> func(pos1, pos2, keyword=foo + bar)
> 
> I find myself wanting to add parentheses arround the + to make the code 
> clearer.

Then why don't you?

In the above example, spaces around the + are not only optional but 
discouraged, this would be preferred:

func(pos1, pos2, keyword=foo+bar)

but if you insist on using spaces (perhaps because it is part of a larger 
expression) just use parentheses.

func(pos1, pos2, keyword=(foo*spam*ham*eggs + bar/spam**cheese))


Strictly speaking they're not needed, but if they make it easier to read (and 
I think they do) then why would you not use them?



-- 
Steven

From greg.ewing at canterbury.ac.nz  Thu Apr 19 02:23:41 2012
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Thu, 19 Apr 2012 12:23:41 +1200
Subject: [Python-Dev] cpython: Fix email post-commit review comments.
In-Reply-To: <jmnibr$225$1@dough.gmane.org>
References: <E1SKUyU-00070j-C9@dinsdale.python.org>
	<20120418162150.182c5b26@pitrou.net> <4F8F44A1.40902@canterbury.ac.nz>
	<jmnibr$225$1@dough.gmane.org>
Message-ID: <4F8F5B0D.2070602@canterbury.ac.nz>

On 19/04/12 11:22, Tres Seaver wrote:

> Maybe we should mandate that their names end with '_rtfm'.

+1

-- 
Greg

From greg at krypto.org  Thu Apr 19 03:04:07 2012
From: greg at krypto.org (Gregory P. Smith)
Date: Wed, 18 Apr 2012 18:04:07 -0700
Subject: [Python-Dev] Highlighting reference-stealing APIs [was Re:
 cpython: Fix email post-commit review comments.]
In-Reply-To: <1334793691.31525.137.camel@surprise>
References: <E1SKUyU-00070j-C9@dinsdale.python.org>
	<20120418162150.182c5b26@pitrou.net>
	<4F8F44A1.40902@canterbury.ac.nz> <1334793691.31525.137.camel@surprise>
Message-ID: <CAGE7PN+tvKA6_M4w1a8g9v3Rw34oO13xpSmyqcoK+1U9Lp3PPg@mail.gmail.com>

On Wed, Apr 18, 2012 at 5:01 PM, David Malcolm <dmalcolm at redhat.com> wrote:

> On Thu, 2012-04-19 at 10:48 +1200, Greg Ewing wrote:
> > Antoine Pitrou wrote:
> >
> > > (and here we see why reference-stealing APIs are a nuisance: because
> > > you never know in advance whether a function will steal a reference or
> > > not, and you have to read the docs for each and every C API call you
> > > make)
> >
> > Fortunately, they're very rare, so you don't encounter
> > them often.
> >
> > Unfortunately, they're very rare, so you're all the more
> > likely to forget about them and get bitten.
> >
> > Functions with ref-stealing APIs really ought to have
> > a naming convention that makes them stand out and remind
> > you to consult the documentation.
> FWIW my refcount static analyzer adds various new compile-time
> attributes to gcc:
>
> http://gcc-python-plugin.readthedocs.org/en/latest/cpychecker.html#marking-functions-that-steal-references-to-their-arguments
> so you can write declarations like these:
>
> extern void bar(int i, PyObject *obj, int j, PyObject *other)
>  CPYCHECKER_STEALS_REFERENCE_TO_ARG(2)
>  CPYCHECKER_STEALS_REFERENCE_TO_ARG(4);
>
> There's a similar attribute for functions that return borrowed
> references:
>
>  PyObject *foo(void)
>    CPYCHECKER_RETURNS_BORROWED_REF;
>
> Perhaps we should add such attributes to the headers for Python 3.3?
> (perhaps with a different naming convention?)
>

+1  Adding these annotations and setting up a buildbot that builds using
cpychecker would be great.

-gps


>
> Hope this is helpful
> Dave
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/greg%40krypto.org
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20120418/6517afd5/attachment.html>

From ncoghlan at gmail.com  Thu Apr 19 03:20:03 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 19 Apr 2012 11:20:03 +1000
Subject: [Python-Dev] cpython: Fix email post-commit review comments.
In-Reply-To: <20120418162150.182c5b26@pitrou.net>
References: <E1SKUyU-00070j-C9@dinsdale.python.org>
	<20120418162150.182c5b26@pitrou.net>
Message-ID: <CADiSq7eQ17H9w-UAb3qgJDVJgeD02Y9iyXdFSRsjtw=b+tLz7Q@mail.gmail.com>

On Thu, Apr 19, 2012 at 12:21 AM, Antoine Pitrou <solipsis at pitrou.net> wrote:
> I don't think INCREFs were necessary, actually.
> PyDict_SetItemString doesn't steal a reference.

Yes, I was tired when that checkin went by and my brain didn't
register that the function was otherwise using borrowed refs for name
and path, so it was also correct to use borrowed refs to Py_None.

I should have been less cryptic and written out my full question
"Should there be Py_INCREF's here?" rather than using the shorthand (i
genuinely wasn't sure at the time, but that wasn't clear from what I
wrote).

> (and here we see why reference-stealing APIs are a nuisance: because
> you never know in advance whether a function will steal a reference or
> not, and you have to read the docs for each and every C API call you
> make)

Yeah, it would have been nice if there was an explicit hint in the API
names when reference stealing was involved, but I guess it's far too
late now :(

Regards,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From ncoghlan at gmail.com  Thu Apr 19 03:25:13 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 19 Apr 2012 11:25:13 +1000
Subject: [Python-Dev] Highlighting reference-stealing APIs [was Re:
 cpython: Fix email post-commit review comments.]
In-Reply-To: <CAGE7PN+tvKA6_M4w1a8g9v3Rw34oO13xpSmyqcoK+1U9Lp3PPg@mail.gmail.com>
References: <E1SKUyU-00070j-C9@dinsdale.python.org>
	<20120418162150.182c5b26@pitrou.net>
	<4F8F44A1.40902@canterbury.ac.nz>
	<1334793691.31525.137.camel@surprise>
	<CAGE7PN+tvKA6_M4w1a8g9v3Rw34oO13xpSmyqcoK+1U9Lp3PPg@mail.gmail.com>
Message-ID: <CADiSq7f11svXsFDoVGevz+TZ=G3tb6Lf8AA2BpQnpowqvWtAJw@mail.gmail.com>

On Thu, Apr 19, 2012 at 11:04 AM, Gregory P. Smith <greg at krypto.org> wrote:
> +1  Adding these annotations and setting up a buildbot that builds using
> cpychecker would be great.

Even without the extra annotations, running cpychecker on at least one
of the buildbots might be helpful.

I'm in the process of setting up a buildbot for RHEL 6, once I get it
up and running normally, I'll look into what it would take to install
and enable cpychecker for the builds. (Or, alternatively, I may make
it a separate cron job, similar to the daily refcount leak detection
run).

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From stephen at xemacs.org  Thu Apr 19 03:35:49 2012
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Thu, 19 Apr 2012 10:35:49 +0900
Subject: [Python-Dev] __hash__ documentation
In-Reply-To: <nad-84D831.13160118042012@news.gmane.org>
References: <4F8F02DE.9020309@stoneleaf.us>
	<CAD+XWwo0fndJxkR25YiG9ygZfU0kGE4PxA0njCw4Y94wBVKeoQ@mail.gmail.com>
	<4F8F13CB.6040602@stoneleaf.us>
	<20120418211404.2700d7f7@pitrou.net>
	<nad-84D831.13160118042012@news.gmane.org>
Message-ID: <CAL_0O1_Hzc40e023JOTtCfse6=DEYqdyUg-_W896wWMtSmYyXw@mail.gmail.com>

On Thu, Apr 19, 2012 at 5:16 AM, Ned Deily <nad at acm.org> wrote:

>> Ethan Furman <ethan at stoneleaf.us> wrote:

>> > How does one follow the bug tracker?

[informative and useful answers elided]

I would like to summarize this thread and add it to the dev
documentation.  Where should it go?  (If nobody bothers to answer,
I'll assume the answer is "figure it out for yourself" and do that.
The only answers I can't figure out for myself are "Bad idea, don't"
and "I did it already, don't". :-)

Steve

From stephen at xemacs.org  Thu Apr 19 03:47:45 2012
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Thu, 19 Apr 2012 10:47:45 +0900
Subject: [Python-Dev] PEP 418: Add monotonic time,
 performance counter and process time functions
In-Reply-To: <CAMpsgwZ2zuC4BULeQ9Dt1v-t3gzeHnVx2nqv6P311OYMVABgVw@mail.gmail.com>
References: <CAMpsgwbZThWMRmXxLNisXAGTZJ1poKGBa3z_EzZPDQdjJwpA5g@mail.gmail.com>
	<20120417044821.GA1979@cskk.homeip.net>
	<20120417123545.3DF842509E8@webabinitio.net>
	<CAMpsgwa95_cU_WTDgpQo11rFyM2qyfymsgY-kzUr-8vF8UTHFw@mail.gmail.com>
	<CAL_0O1_LRTNjPzpWdyGYKyCAKEGwxNJ4+VHFvgc7vTyptRmJ5A@mail.gmail.com>
	<CAMpsgwZ3eAkK814R=725LUBas8PVFqdwmOw-iz-FYL6ZvJ3ukw@mail.gmail.com>
	<CAL_0O1-Bs_7kZ2kTktLTyimCpAoPDfoX3GsZe-zvBjLbBJd0Lw@mail.gmail.com>
	<CAMpsgwZ2zuC4BULeQ9Dt1v-t3gzeHnVx2nqv6P311OYMVABgVw@mail.gmail.com>
Message-ID: <CAL_0O18PexBE5_i1+V67=G4kAA_41AeY=dR6GQA6xfPzCpmY3w@mail.gmail.com>

On Thu, Apr 19, 2012 at 8:15 AM, Victor Stinner
<victor.stinner at gmail.com> wrote:

> Well, I asked on IRC what I should do for these definitions because
> I'm too tired to decide what to do. [[...]] I replaced these definitions with yours.

That was nice of you.  In return, I'll go over the PEP to check that
usage is appropriate (eg, in some places "resolution" was used in the
sense of computer science's "precision" == reported digits).  Please
give me 24 hours.

BTW, this is not a criticism; you did a great job of putting all that
information together.  But it's worth checking, and that is best done
by a second pair of eyes.

Thanks for all your work on this!

Regards,
Steve

From ncoghlan at gmail.com  Thu Apr 19 03:50:39 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 19 Apr 2012 11:50:39 +1000
Subject: [Python-Dev] __hash__ documentation
In-Reply-To: <CAL_0O1_Hzc40e023JOTtCfse6=DEYqdyUg-_W896wWMtSmYyXw@mail.gmail.com>
References: <4F8F02DE.9020309@stoneleaf.us>
	<CAD+XWwo0fndJxkR25YiG9ygZfU0kGE4PxA0njCw4Y94wBVKeoQ@mail.gmail.com>
	<4F8F13CB.6040602@stoneleaf.us>
	<20120418211404.2700d7f7@pitrou.net>
	<nad-84D831.13160118042012@news.gmane.org>
	<CAL_0O1_Hzc40e023JOTtCfse6=DEYqdyUg-_W896wWMtSmYyXw@mail.gmail.com>
Message-ID: <CADiSq7eYvyzd_bYP8OkG0OLCdkRqWeVnSDewcjWW-5Kak3FL1A@mail.gmail.com>

On Thu, Apr 19, 2012 at 11:35 AM, Stephen J. Turnbull
<stephen at xemacs.org> wrote:
> On Thu, Apr 19, 2012 at 5:16 AM, Ned Deily <nad at acm.org> wrote:
>
>>> Ethan Furman <ethan at stoneleaf.us> wrote:
>
>>> > How does one follow the bug tracker?
>
> [informative and useful answers elided]
>
> I would like to summarize this thread and add it to the dev
> documentation.  Where should it go?  (If nobody bothers to answer,
> I'll assume the answer is "figure it out for yourself" and do that.
> The only answers I can't figure out for myself are "Bad idea, don't"
> and "I did it already, don't". :-)

Separating out a dedicated "Issue Tracker" section from the general
"Mailing Lists" section here would probably be a good place to start:
http://docs.python.org/devguide/communication.html

A new entry in the Communications section of the dev FAQ that
references the updated section may also be worthwhile:
http://docs.python.org/devguide/faq.html#communications

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From victor.stinner at gmail.com  Thu Apr 19 03:50:43 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Thu, 19 Apr 2012 03:50:43 +0200
Subject: [Python-Dev] PEP 418: Add monotonic time,
 performance counter and process time functions
In-Reply-To: <CAL_0O18PexBE5_i1+V67=G4kAA_41AeY=dR6GQA6xfPzCpmY3w@mail.gmail.com>
References: <CAMpsgwbZThWMRmXxLNisXAGTZJ1poKGBa3z_EzZPDQdjJwpA5g@mail.gmail.com>
	<20120417044821.GA1979@cskk.homeip.net>
	<20120417123545.3DF842509E8@webabinitio.net>
	<CAMpsgwa95_cU_WTDgpQo11rFyM2qyfymsgY-kzUr-8vF8UTHFw@mail.gmail.com>
	<CAL_0O1_LRTNjPzpWdyGYKyCAKEGwxNJ4+VHFvgc7vTyptRmJ5A@mail.gmail.com>
	<CAMpsgwZ3eAkK814R=725LUBas8PVFqdwmOw-iz-FYL6ZvJ3ukw@mail.gmail.com>
	<CAL_0O1-Bs_7kZ2kTktLTyimCpAoPDfoX3GsZe-zvBjLbBJd0Lw@mail.gmail.com>
	<CAMpsgwZ2zuC4BULeQ9Dt1v-t3gzeHnVx2nqv6P311OYMVABgVw@mail.gmail.com>
	<CAL_0O18PexBE5_i1+V67=G4kAA_41AeY=dR6GQA6xfPzCpmY3w@mail.gmail.com>
Message-ID: <CAMpsgwYLFwwtJkmvJ8E_om4PeE8=fi015-gkOPB957+pqH=FLw@mail.gmail.com>

> That was nice of you.  In return, I'll go over the PEP to check that
> usage is appropriate (eg, in some places "resolution" was used in the
> sense of computer science's "precision" == reported digits).

Oh, this is very likely :-)

> BTW, this is not a criticism; you did a great job of putting all that
> information together.  But it's worth checking, and that is best done
> by a second pair of eyes.

What? The subject of the initial mail contains [RFC]: I posted the PEP
to get as many reviews as possible! I know that it is not a criticism
:-)

Victor

From eliben at gmail.com  Thu Apr 19 05:06:35 2012
From: eliben at gmail.com (Eli Bendersky)
Date: Thu, 19 Apr 2012 05:06:35 +0200
Subject: [Python-Dev] Suggested addition to PEP 8 for context managers
In-Reply-To: <CAP7+vJL3v=SKzhJD8P40hB94wRiBE3MfNY9P7VtY2itC=ZcHag@mail.gmail.com>
References: <8402DB76-DF4F-41BD-BD9B-3689AF3D6159@gmail.com>
	<20120416113037.66e4da6f@limelight.wooz.org>
	<CANG+ZTNCwkHapAxZ87L-fcbPfi8FQwK=kSY7OCEApVoHCryJeA@mail.gmail.com>
	<4F8D1377.5020001@redhat.com>
	<20120417122502.0B9D82509E8@webabinitio.net>
	<20120417113631.7fb1b543@resist.wooz.org>
	<CAAWk_DzUK2gRriLp3DYe1PRC8qU3N3Kkes75XSA2hMSKpUvzmQ@mail.gmail.com>
	<CAP7+vJ+i-Qk7uOO_=XxBykMK3G01uTKODWKmpkhUc9+-eFZZWA@mail.gmail.com>
	<CAPTjJmr8SnjwDd=03TOH+T-vnQhh3ZCjMkKqJkctozX3q=JeYQ@mail.gmail.com>
	<jmmpsn$1k1$1@dough.gmane.org>
	<CAP7+vJL3v=SKzhJD8P40hB94wRiBE3MfNY9P7VtY2itC=ZcHag@mail.gmail.com>
Message-ID: <CAF-Rda8Uqf0e80oPQ4JvmqFtHjsCgt6_Q7dxPo8dJw7CX+QCsA@mail.gmail.com>

>>> Quoting from
>>> http://www.python.org/dev/peps/pep-0008/#other-recommendations
>>> (with elision):
>>>
>>> Use spaces around arithmetic operators:
>>>    No:
>>>       i=i+1
>>>       submitted +=1
>>>       x = x*2 - 1
>>>       hypot2 = x*x + y*y
>>>       c = (a+b) * (a-b)
>>>
>>> End quote.
>>
>>
>> I agree that this could be reworded.  Especially when the operands are
>> as short as in the examples, the last three "No"s read better to me than
>> the "Yes" entries.  In this case, spacing serves for visually grouping
>> expressions by precedence, which otherwise could also be indicated by
>> (semantically unnecessary) parens.
>
> Indeed. I don't know who put that in, it wasn't me.
>
>> But in all cases discussed here, PEP8 should not be seen as a law.
>> Its second section ("A Foolish Consistency is the Hobgoblin of Little
>> Minds") is quite prominent for a reason.
>
> I think whoever put that blanket rule in the PEP fell prey to this.
>
> Let's change this to something more reasonable, e.g.
>
> """
> If operators with different priorities are used, consider adding
> whitespace around the operators with the lowest priority(ies). This is
> very much to taste, however, never use more than one space, and always
> have the same amount of whitespace on both sides of a binary operator.
> """

+1, a very welcome change to a piece of PEP8 I've always felt
uncomfortable with. Tiny nitpick: I'd just replace the comma following
"however" with a period or semicolon.

Eli

From rosuav at gmail.com  Thu Apr 19 05:14:17 2012
From: rosuav at gmail.com (Chris Angelico)
Date: Thu, 19 Apr 2012 13:14:17 +1000
Subject: [Python-Dev] Suggested addition to PEP 8 for context managers
In-Reply-To: <CAF-Rda8Uqf0e80oPQ4JvmqFtHjsCgt6_Q7dxPo8dJw7CX+QCsA@mail.gmail.com>
References: <8402DB76-DF4F-41BD-BD9B-3689AF3D6159@gmail.com>
	<20120416113037.66e4da6f@limelight.wooz.org>
	<CANG+ZTNCwkHapAxZ87L-fcbPfi8FQwK=kSY7OCEApVoHCryJeA@mail.gmail.com>
	<4F8D1377.5020001@redhat.com>
	<20120417122502.0B9D82509E8@webabinitio.net>
	<20120417113631.7fb1b543@resist.wooz.org>
	<CAAWk_DzUK2gRriLp3DYe1PRC8qU3N3Kkes75XSA2hMSKpUvzmQ@mail.gmail.com>
	<CAP7+vJ+i-Qk7uOO_=XxBykMK3G01uTKODWKmpkhUc9+-eFZZWA@mail.gmail.com>
	<CAPTjJmr8SnjwDd=03TOH+T-vnQhh3ZCjMkKqJkctozX3q=JeYQ@mail.gmail.com>
	<jmmpsn$1k1$1@dough.gmane.org>
	<CAP7+vJL3v=SKzhJD8P40hB94wRiBE3MfNY9P7VtY2itC=ZcHag@mail.gmail.com>
	<CAF-Rda8Uqf0e80oPQ4JvmqFtHjsCgt6_Q7dxPo8dJw7CX+QCsA@mail.gmail.com>
Message-ID: <CAPTjJmpZ7fZu+do3akgvWaE=Yin+gMoxxybbrCnESKFZBhxu3Q@mail.gmail.com>

On Thu, Apr 19, 2012 at 1:06 PM, Eli Bendersky <eliben at gmail.com> wrote:
> (quoting GvR)
>> Let's change this to something more reasonable, e.g.
>>
>> """
>> If operators with different priorities are used, consider adding
>> whitespace around the operators with the lowest priority(ies). This is
>> very much to taste, however, never use more than one space, and always
>> have the same amount of whitespace on both sides of a binary operator.
>> """
>
> +1, a very welcome change to a piece of PEP8 I've always felt
> uncomfortable with. Tiny nitpick: I'd just replace the comma following
> "however" with a period or semicolon.

Following or preceding? Either works, but there's a slight shift of
meaning depending on which punctuation gets the upgrade. What was the
original intent of the paragraph?

Chris Angelico

From guido at python.org  Thu Apr 19 06:26:46 2012
From: guido at python.org (Guido van Rossum)
Date: Wed, 18 Apr 2012 21:26:46 -0700
Subject: [Python-Dev] Suggested addition to PEP 8 for context managers
In-Reply-To: <CAPTjJmpZ7fZu+do3akgvWaE=Yin+gMoxxybbrCnESKFZBhxu3Q@mail.gmail.com>
References: <8402DB76-DF4F-41BD-BD9B-3689AF3D6159@gmail.com>
	<20120416113037.66e4da6f@limelight.wooz.org>
	<CANG+ZTNCwkHapAxZ87L-fcbPfi8FQwK=kSY7OCEApVoHCryJeA@mail.gmail.com>
	<4F8D1377.5020001@redhat.com>
	<20120417122502.0B9D82509E8@webabinitio.net>
	<20120417113631.7fb1b543@resist.wooz.org>
	<CAAWk_DzUK2gRriLp3DYe1PRC8qU3N3Kkes75XSA2hMSKpUvzmQ@mail.gmail.com>
	<CAP7+vJ+i-Qk7uOO_=XxBykMK3G01uTKODWKmpkhUc9+-eFZZWA@mail.gmail.com>
	<CAPTjJmr8SnjwDd=03TOH+T-vnQhh3ZCjMkKqJkctozX3q=JeYQ@mail.gmail.com>
	<jmmpsn$1k1$1@dough.gmane.org>
	<CAP7+vJL3v=SKzhJD8P40hB94wRiBE3MfNY9P7VtY2itC=ZcHag@mail.gmail.com>
	<CAF-Rda8Uqf0e80oPQ4JvmqFtHjsCgt6_Q7dxPo8dJw7CX+QCsA@mail.gmail.com>
	<CAPTjJmpZ7fZu+do3akgvWaE=Yin+gMoxxybbrCnESKFZBhxu3Q@mail.gmail.com>
Message-ID: <CAP7+vJLqy+Agqfs2G-JRnmkLk1R905yU=wB444UurrgKr1j3gQ@mail.gmail.com>

On Wed, Apr 18, 2012 at 8:14 PM, Chris Angelico <rosuav at gmail.com> wrote:
> On Thu, Apr 19, 2012 at 1:06 PM, Eli Bendersky <eliben at gmail.com> wrote:
>> (quoting GvR)
>>> Let's change this to something more reasonable, e.g.
>>>
>>> """
>>> If operators with different priorities are used, consider adding
>>> whitespace around the operators with the lowest priority(ies). This is
>>> very much to taste, however, never use more than one space, and always
>>> have the same amount of whitespace on both sides of a binary operator.
>>> """
>>
>> +1, a very welcome change to a piece of PEP8 I've always felt
>> uncomfortable with. Tiny nitpick: I'd just replace the comma following
>> "however" with a period or semicolon.
>
> Following or preceding? Either works, but there's a slight shift of
> meaning depending on which punctuation gets the upgrade. What was the
> original intent of the paragraph?

I meant the semicolon to be before 'however'.

-- 
--Guido van Rossum (python.org/~guido)

From raymond.hettinger at gmail.com  Thu Apr 19 06:54:47 2012
From: raymond.hettinger at gmail.com (Raymond Hettinger)
Date: Thu, 19 Apr 2012 00:54:47 -0400
Subject: [Python-Dev] Suggested addition to PEP 8 for context managers
In-Reply-To: <CAP7+vJL3v=SKzhJD8P40hB94wRiBE3MfNY9P7VtY2itC=ZcHag@mail.gmail.com>
References: <8402DB76-DF4F-41BD-BD9B-3689AF3D6159@gmail.com>
	<20120416113037.66e4da6f@limelight.wooz.org>
	<CANG+ZTNCwkHapAxZ87L-fcbPfi8FQwK=kSY7OCEApVoHCryJeA@mail.gmail.com>
	<4F8D1377.5020001@redhat.com>
	<20120417122502.0B9D82509E8@webabinitio.net>
	<20120417113631.7fb1b543@resist.wooz.org>
	<CAAWk_DzUK2gRriLp3DYe1PRC8qU3N3Kkes75XSA2hMSKpUvzmQ@mail.gmail.com>
	<CAP7+vJ+i-Qk7uOO_=XxBykMK3G01uTKODWKmpkhUc9+-eFZZWA@mail.gmail.com>
	<CAPTjJmr8SnjwDd=03TOH+T-vnQhh3ZCjMkKqJkctozX3q=JeYQ@mail.gmail.com>
	<jmmpsn$1k1$1@dough.gmane.org>
	<CAP7+vJL3v=SKzhJD8P40hB94wRiBE3MfNY9P7VtY2itC=ZcHag@mail.gmail.com>
Message-ID: <C42B7D23-A870-4FCE-A11B-C7FAA08F47FC@gmail.com>


On Apr 18, 2012, at 1:38 PM, Guido van Rossum wrote:

> 
> Let's change this to something more reasonable, e.g.
> 
> """
> If operators with different priorities are used, consider adding
> whitespace around the operators with the lowest priority(ies). This is
> very much to taste, however, never use more than one space, and always
> have the same amount of whitespace on both sides of a binary operator.
> """

+1


Raymond
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20120419/52a60745/attachment.html>

From ericsnowcurrently at gmail.com  Thu Apr 19 10:00:51 2012
From: ericsnowcurrently at gmail.com (Eric Snow)
Date: Thu, 19 Apr 2012 02:00:51 -0600
Subject: [Python-Dev] (no subject)
Message-ID: <CALFfu7A1O2biaPJmMkzrucGFhLF+1cm2A3RA02GiBym9O=uEEA@mail.gmail.com>

How closely is tokenize.detect_encoding() supposed to match
PyTokenizer_FindEncoding()?  From what I can tell, there is a subtle
difference in their behavior that has bearing on PEP 263 handling
during import. [1]  Should any difference be considered a bug, or
should I work around it?  Thanks.
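
For reference, the Python side of what I'm comparing boils down to this
(the input bytes and the expected output are just illustrative; the
normalized name comes from my reading of Lib/tokenize.py):

import io
import tokenize

# Illustrative source with a PEP 263 coding cookie on the first line.
source = b"# -*- coding: latin-1 -*-\nx = 1\n"
encoding, lines = tokenize.detect_encoding(io.BytesIO(source).readline)
print(encoding)  # normalized codec name, e.g. 'iso-8859-1'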

-eric


[1] http://www.python.org/dev/peps/pep-0263/

From ericsnowcurrently at gmail.com  Thu Apr 19 10:04:22 2012
From: ericsnowcurrently at gmail.com (Eric Snow)
Date: Thu, 19 Apr 2012 02:04:22 -0600
Subject: [Python-Dev] support for encoding detection and PEP 263
Message-ID: <CALFfu7Cnf-iDYhu4DdGVRyFkZQpgoJwTcMtjs5XN4oN=A1Rg4w@mail.gmail.com>

Forgot the subject (going to bed now).

On Thu, Apr 19, 2012 at 2:00 AM, Eric Snow <ericsnowcurrently at gmail.com> wrote:
> How closely is tokenize.detect_encoding() supposed to match
> PyTokenizer_FindEncoding()? ?From what I can tell, there is a subtle
> difference in their behavior that has bearing on PEP 263 handling
> during import. [1] ?Should any difference be considered a bug, or
> should I work around it? ?Thanks.
>
> -eric
>
>
> [1] http://www.python.org/dev/peps/pep-0263/

From stefan_ml at behnel.de  Thu Apr 19 10:55:24 2012
From: stefan_ml at behnel.de (Stefan Behnel)
Date: Thu, 19 Apr 2012 10:55:24 +0200
Subject: [Python-Dev] Cython for cPickle?
Message-ID: <jmojtt$ole$1@dough.gmane.org>

Hi,

I noticed that there is a PEP (3154) and a GSoC proposal about improving
Pickle. Given the recent discussion on this list about using Cython for the
import module, I wonder if it wouldn't make even more sense to switch from
a C (accelerator) implementation to Cython for _pickle.

The rationale is that C code that deals a lot with object operations tends
to be rather verbose, and _pickle specifically looks very verbose in many
places. Some of this is optimised I/O, ok, but most of it seems to take its
complexity from code specialisations for builtin types and a lot of error
handling code. A Cython reimplementation would take a lot of weight out of
this.

Note that the approach won't be as simple as compiling pickle.py. _pickle
uses a lot of optimisations that only work at the C level, at least
efficiently. So the idea would be to rewrite _pickle in Cython instead.
It's currently about 6500 lines of C. Even if we divide that only by a
rather conservative factor of 3, we'd end up with some 2000 lines of Cython
code, all extracted straight from the existing C code. That sounds like
less than two weeks of work, maybe even if we add the marshal module to it.
In less than a month of GSoC time, this could easily reach a point where
it's "close to the speed of what we have" and "fast enough", but a lot more
accessible and maintainable, thus also making it easier to add the
extensions described in the PEP.

What do you think?

Stefan


From martin at v.loewis.de  Thu Apr 19 12:28:39 2012
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Thu, 19 Apr 2012 12:28:39 +0200
Subject: [Python-Dev] (no subject)
In-Reply-To: <CALFfu7A1O2biaPJmMkzrucGFhLF+1cm2A3RA02GiBym9O=uEEA@mail.gmail.com>
References: <CALFfu7A1O2biaPJmMkzrucGFhLF+1cm2A3RA02GiBym9O=uEEA@mail.gmail.com>
Message-ID: <4F8FE8D7.4020508@v.loewis.de>

Am 19.04.2012 10:00, schrieb Eric Snow:
> How closely is tokenize.detect_encoding() supposed to match
> PyTokenizer_FindEncoding()?  From what I can tell, there is a subtle
> difference in their behavior that has bearing on PEP 263 handling
> during import. [1]  Should any difference be considered a bug, or
> should I work around it?  Thanks.

If there is such a difference, it's a bug. The authority should be the
PEP.

Regards,
Martin

From martin at v.loewis.de  Thu Apr 19 12:31:51 2012
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Thu, 19 Apr 2012 12:31:51 +0200
Subject: [Python-Dev] Cython for cPickle?
In-Reply-To: <jmojtt$ole$1@dough.gmane.org>
References: <jmojtt$ole$1@dough.gmane.org>
Message-ID: <4F8FE997.4070907@v.loewis.de>

> What do you think?

I think I know what Jim Fulton thinks (as we talked about something
like this at PyCon): don't. He is already sad that cPickle grew so many
pickle features when it was designed as a really fast implementation.
Pickle speed is really important to some users, and any loss of
performance needs serious justification. Easier maintenance is not
a sufficient reason.

Regards,
Martin

From ncoghlan at gmail.com  Thu Apr 19 12:38:45 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 19 Apr 2012 20:38:45 +1000
Subject: [Python-Dev] Cython for cPickle?
In-Reply-To: <jmojtt$ole$1@dough.gmane.org>
References: <jmojtt$ole$1@dough.gmane.org>
Message-ID: <CADiSq7dMMKyoSJA5_tGocWYUHQmpkfev-gsxbVOv=ZyLOjeG6g@mail.gmail.com>

On Thu, Apr 19, 2012 at 6:55 PM, Stefan Behnel <stefan_ml at behnel.de> wrote:
> What do you think?

I think the possible use of Cython for standard library extension
modules is potentially worth looking into for the 3.4 timeframe (c.f.
the recent multiple checkins sorting out the refcounts for the new
ImportError helper function). There are obviously a lot of factors to
consider before actually proceeding with such an approach (even for
the extension modules), but a side-by-side comparison of pickle.py,
the existing C accelerated pickle module and a Cython accelerated
pickle module (including benchmark numbers) would be a valuable data
point in any such discussion.

However, it would definitely have to be pitched to any interested
students as a proof-of-concept exercise, with a real possibility that
the outcome will end up supporting MvL's reply.

Regards,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From sam.partington at gmail.com  Thu Apr 19 12:42:05 2012
From: sam.partington at gmail.com (Sam Partington)
Date: Thu, 19 Apr 2012 11:42:05 +0100
Subject: [Python-Dev] Highlighting reference-stealing APIs [was Re:
 cpython: Fix email post-commit review comments.]
In-Reply-To: <CADiSq7f11svXsFDoVGevz+TZ=G3tb6Lf8AA2BpQnpowqvWtAJw@mail.gmail.com>
References: <E1SKUyU-00070j-C9@dinsdale.python.org>
	<20120418162150.182c5b26@pitrou.net>
	<4F8F44A1.40902@canterbury.ac.nz>
	<1334793691.31525.137.camel@surprise>
	<CAGE7PN+tvKA6_M4w1a8g9v3Rw34oO13xpSmyqcoK+1U9Lp3PPg@mail.gmail.com>
	<CADiSq7f11svXsFDoVGevz+TZ=G3tb6Lf8AA2BpQnpowqvWtAJw@mail.gmail.com>
Message-ID: <CABuPkmR4GN44AJYHpzC9_F11JdnYOzjBRYpDR+zJ3Up=e1pE-g@mail.gmail.com>

On 19 April 2012 02:20, Nick Coghlan <ncoghlan at gmail.com> wrote:
> On Thu, Apr 19, 2012 at 12:21 AM, Antoine Pitrou <solipsis at pitrou.net> wrote:
>> (and here we see why reference-stealing APIs are a nuisance: because
>> you never know in advance whether a function will steal a reference or
>> not, and you have to read the docs for each and every C API call you
>> make)
>
> Yeah, it would have been nice if there was an explicit hint in the API
> names when reference stealing was involved, but I guess it's far too
> late now :(

It's too late to change the function names, sure, but you could change the
argument names in question for reference-stealing APIs with some kind
of markup.

That would make it fairly easy to write a script that did the checking for you:

int PyTuple_SetItem(PyObject *p, Py_ssize_t pos, PyObject *stolen_o)

Or better yet, mark the types:

int PyTuple_SetItem(PyObject *p, Py_ssize_t pos, PyStolenObject* o)
PyBorrowedObject* PyTuple_GetItem(PyObject *p, Py_ssize_t pos)

PyStolenObject and PyBorrowedObject would normally just be typedefs for
PyObject. But a consenting user could define PyENABLE_CHECKED_REFS
before including Python.h, which would give

#if defined(PyENABLE_CHECKED_REFS)
/* Distinct incomplete types: passing a plain PyObject * where a stolen
   or borrowed one is expected is diagnosed by the compiler. */
typedef struct _PyStolenObject PyStolenObject;
typedef struct _PyBorrowedObject PyBorrowedObject;
#define PyYesIKnowItsStolen(o) ((PyStolenObject *)(o))
#define PyYesIKnowItsBorrowed(o) ((PyObject *)(o))
#else
/* Normal builds: plain aliases, and the macros are no-ops. */
typedef PyObject PyStolenObject;
typedef PyObject PyBorrowedObject;
#define PyYesIKnowItsStolen(o) (o)
#define PyYesIKnowItsBorrowed(o) (o)
#endif

Forcing the user to use

PyTuple_SetItem(p, pos, PyYesIKnowItsStolen(o));
PyObject *ref = PyYesIKnowItsBorrowed(PyTuple_GetItem(p, pos));

Or else it would fail to compile.  The user could even add her own:

PyStolenObject *IncRefBecauseIKnowItsStolen(PyObject *o) {
    Py_INCREF(o); return (PyStolenObject *)o; }
PyObject *IncRefBecauseIKnowItsBorrowed(PyBorrowedObject *o) {
    Py_INCREF((PyObject *)o); return (PyObject *)o; }

This would not require a special gcc build and would be available to
anyone who wanted it. I use a similar, C++-based trick in my Python
extension code to avoid the whole issue of ref leaking, but I have to
be careful at the point of calling the Python API; having it automatic
would be great.

On a similar note, I have just implemented a wrapper around Python.h
that checks at runtime that the GIL is held around every call to the
Python API, or else fails very noisily. This was done because it turns
out that wxPython had a ton of non-GIL calls to the API, causing random
sporadic segfaults in our app.  We now use it on several of our
extensions.  It doesn't require any changes to Python.h; you just
need to add an include path before the Python include path. Would
there be any interest in this?

Sam

From martin at v.loewis.de  Thu Apr 19 12:59:56 2012
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Thu, 19 Apr 2012 12:59:56 +0200
Subject: [Python-Dev] Highlighting reference-stealing APIs [was Re:
 cpython: Fix email post-commit review comments.]
In-Reply-To: <CABuPkmR4GN44AJYHpzC9_F11JdnYOzjBRYpDR+zJ3Up=e1pE-g@mail.gmail.com>
References: <E1SKUyU-00070j-C9@dinsdale.python.org>	<20120418162150.182c5b26@pitrou.net>	<4F8F44A1.40902@canterbury.ac.nz>	<1334793691.31525.137.camel@surprise>	<CAGE7PN+tvKA6_M4w1a8g9v3Rw34oO13xpSmyqcoK+1U9Lp3PPg@mail.gmail.com>	<CADiSq7f11svXsFDoVGevz+TZ=G3tb6Lf8AA2BpQnpowqvWtAJw@mail.gmail.com>
	<CABuPkmR4GN44AJYHpzC9_F11JdnYOzjBRYpDR+zJ3Up=e1pE-g@mail.gmail.com>
Message-ID: <4F8FF02C.1060008@v.loewis.de>

Am 19.04.2012 12:42, schrieb Sam Partington:
> On 19 April 2012 02:20, Nick Coghlan <ncoghlan at gmail.com> wrote:
>> On Thu, Apr 19, 2012 at 12:21 AM, Antoine Pitrou <solipsis at pitrou.net> wrote:
>>> (and here we see why reference-stealing APIs are a nuisance: because
>>> you never know in advance whether a function will steal a reference or
>>> not, and you have to read the docs for each and every C API call you
>>> make)
>>
>> Yeah, it would have been nice if there was an explicit hint in the API
>> names when reference stealing was involved, but I guess it's far too
>> late now :(
> 
> It's too late to change the fn names sure, but you could change the
> argument names in question for reference stealing apis with some kind
> of markup.

While it may be too late to change the names, it's not too late to remove
these functions entirely. It will take some time, but it would be
possible to add parallel APIs that neither borrow nor steal references,
and have them preferred over the existing APIs. Then, with Python 4,
the old APIs could go away.

Regards,
Martin

From martin at v.loewis.de  Thu Apr 19 13:19:02 2012
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Thu, 19 Apr 2012 13:19:02 +0200
Subject: [Python-Dev] cpython: Issue #11750: The Windows API functions
 scattered in the _subprocess and
In-Reply-To: <20120418213014.00d36cc0@pitrou.net>
References: <E1SKZzK-0001QB-GS@dinsdale.python.org>	<4F8F15FC.1010705@v.loewis.de>
	<20120418213014.00d36cc0@pitrou.net>
Message-ID: <4F8FF4A6.7000704@v.loewis.de>

>>>   Issue #11750: The Windows API functions scattered in the _subprocess and
>>> _multiprocessing.win32 modules now live in a single module "_winapi".
>>> Patch by sbt.
>>
>> Can we use Real Names, please?
> 
> Do we have a policy about that? sbt seems happy using a pseudonym (and
> I personally don't have a problem with it).

We would have to ask a lawyer. Apparently, he signed a form, and
presumably, that can be traced to a real person. However, we need to
be extremely careful not to accept anonymous contributions, as the
barrier to contributing stolen code is then much lower. It took Linux
a ten-year copyright lawsuit to go through this; I don't want this to
happen to Python.

In any case, the real policy is that we should not accept significant
changes without a contributor form.

I, myself, feel extremely uncomfortable dealing with pseudonyms on the
net, more so since I committed code from (and, IIRC, gave commit rights
to) Reinhold Birkenfeld. Of course, the issue is different when you
*know* it's a pseudonym (and no, I have no bad feelings towards Georg
about this at all).

Regards,
Martin

From solipsis at pitrou.net  Thu Apr 19 14:44:06 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Thu, 19 Apr 2012 14:44:06 +0200
Subject: [Python-Dev] Cython for cPickle?
References: <jmojtt$ole$1@dough.gmane.org>
Message-ID: <20120419144406.7e650e29@pitrou.net>

On Thu, 19 Apr 2012 10:55:24 +0200
Stefan Behnel <stefan_ml at behnel.de> wrote:
> 
> I noticed that there is a PEP (3154) and a GSoC proposal about improving
> Pickle. Given the recent discussion on this list about using Cython for the
> import module, I wonder if it wouldn't make even more sense to switch from
> a C (accelerator) implementation to Cython for _pickle.

I think that's quite orthogonal to PEP 3154 (which shouldn't add a lot
of new code IMHO).

> Note that the approach won't be as simple as compiling pickle.py. _pickle
> uses a lot of optimisations that only work at the C level, at least
> efficiently. So the idea would be to rewrite _pickle in Cython instead.
> It's currently about 6500 lines of C. Even if we divide that only by a
> rather conservative factor of 3, we'd end up with some 2000 lines of Cython
> code, all extracted straight from the existing C code. That sounds like
> less than two weeks of work, maybe even if we add the marshal module to it.

I think this all needs someone to demonstrate the benefits, in
terms of both readability/maintainability, and performance.
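
(For the performance half of that, here is a minimal sketch of the kind
of harness that could produce such numbers -- assuming CPython 3.x,
where pickle._Pickler is the pure-Python pickler and pickle.Pickler the
_pickle accelerator; a Cython-compiled variant would just be a third
entry in the loop:)

import io
import pickle
import timeit

# Illustrative payload; any reasonably nested structure will do.
data = [{"id": i, "name": "item-%d" % i, "tags": list(range(10))}
        for i in range(1000)]

def dump_with(pickler_class):
    buf = io.BytesIO()
    pickler_class(buf, protocol=2).dump(data)

for cls in (pickle._Pickler, pickle.Pickler):
    elapsed = timeit.timeit(lambda: dump_with(cls), number=100)
    print("%-12s %.3f s" % (cls.__name__, elapsed))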

Also, while C is a low-level language, Cython is a different language
than Python when you start using its optimization features. This means
core developers have to learn that language.

Regards

Antoine.



From brian at python.org  Thu Apr 19 15:23:44 2012
From: brian at python.org (Brian Curtin)
Date: Thu, 19 Apr 2012 08:23:44 -0500
Subject: [Python-Dev] Cython for cPickle?
In-Reply-To: <CADiSq7dMMKyoSJA5_tGocWYUHQmpkfev-gsxbVOv=ZyLOjeG6g@mail.gmail.com>
References: <jmojtt$ole$1@dough.gmane.org>
	<CADiSq7dMMKyoSJA5_tGocWYUHQmpkfev-gsxbVOv=ZyLOjeG6g@mail.gmail.com>
Message-ID: <CAD+XWwoFgEp_VckSerD=Bna+FqO0T7jGwhC0DTiMeiWSULGmTQ@mail.gmail.com>

On Thu, Apr 19, 2012 at 05:38, Nick Coghlan <ncoghlan at gmail.com> wrote:
> On Thu, Apr 19, 2012 at 6:55 PM, Stefan Behnel <stefan_ml at behnel.de> wrote:
>> What do you think?
>
> I think the possible use of Cython for standard library extension
> modules is potentially worth looking into for the 3.4 timeframe (c.f.
> the recent multiple checkins sorting out the refcounts for the new
> ImportError helper function).

I'd rather just "rtfm" as was suggested and get it right than switch
everything around to Cython.

From rdmurray at bitdance.com  Thu Apr 19 15:28:21 2012
From: rdmurray at bitdance.com (R. David Murray)
Date: Thu, 19 Apr 2012 09:28:21 -0400
Subject: [Python-Dev] Cython for cPickle?
In-Reply-To: <20120419144406.7e650e29@pitrou.net>
References: <jmojtt$ole$1@dough.gmane.org> <20120419144406.7e650e29@pitrou.net>
Message-ID: <20120419132822.4B8702509E4@webabinitio.net>

On Thu, 19 Apr 2012 14:44:06 +0200, Antoine Pitrou <solipsis at pitrou.net> wrote:
> Also, while C is a low-level language, Cython is a different language
> than Python when you start using its optimization features. This means
> core developers have to learn that language.

Hmm.  On the other hand, perhaps some core developers (present or
future) would prefer to learn Cython over learning C [*].

--David

[*] For this you may actually want to read "learning to modify the Python
C codebase", since in fact I know how to program in C, I just prefer to
do as little of it as possible, and so haven't really learned the Python
C codebase.

From anacrolix at gmail.com  Thu Apr 19 16:13:31 2012
From: anacrolix at gmail.com (Matt Joiner)
Date: Thu, 19 Apr 2012 22:13:31 +0800
Subject: [Python-Dev] Cython for cPickle?
In-Reply-To: <20120419132822.4B8702509E4@webabinitio.net>
References: <jmojtt$ole$1@dough.gmane.org> <20120419144406.7e650e29@pitrou.net>
	<20120419132822.4B8702509E4@webabinitio.net>
Message-ID: <CAB4yi1Pm8i25VmJ_XQgAW7ddRqHiK7DxfG72L8rHhBZBM3b-VA@mail.gmail.com>

Personally I find the unholy product of C and Python that is Cython to be
more complex than the sum of the complexities of its parts. Is it really
wise to be learning Cython without already knowing C, Python, and the
CPython object model?

While code generation alleviates the burden of tedious languages, it's also
infinitely more complex, makes debugging very difficult and adds to
prerequisite knowledge, among other drawbacks.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20120419/1bfb4c9f/attachment.html>

From barry at python.org  Thu Apr 19 16:55:34 2012
From: barry at python.org (Barry Warsaw)
Date: Thu, 19 Apr 2012 10:55:34 -0400
Subject: [Python-Dev] Suggested addition to PEP 8 for context managers
In-Reply-To: <CAP7+vJLqy+Agqfs2G-JRnmkLk1R905yU=wB444UurrgKr1j3gQ@mail.gmail.com>
References: <8402DB76-DF4F-41BD-BD9B-3689AF3D6159@gmail.com>
	<20120416113037.66e4da6f@limelight.wooz.org>
	<CANG+ZTNCwkHapAxZ87L-fcbPfi8FQwK=kSY7OCEApVoHCryJeA@mail.gmail.com>
	<4F8D1377.5020001@redhat.com>
	<20120417122502.0B9D82509E8@webabinitio.net>
	<20120417113631.7fb1b543@resist.wooz.org>
	<CAAWk_DzUK2gRriLp3DYe1PRC8qU3N3Kkes75XSA2hMSKpUvzmQ@mail.gmail.com>
	<CAP7+vJ+i-Qk7uOO_=XxBykMK3G01uTKODWKmpkhUc9+-eFZZWA@mail.gmail.com>
	<CAPTjJmr8SnjwDd=03TOH+T-vnQhh3ZCjMkKqJkctozX3q=JeYQ@mail.gmail.com>
	<jmmpsn$1k1$1@dough.gmane.org>
	<CAP7+vJL3v=SKzhJD8P40hB94wRiBE3MfNY9P7VtY2itC=ZcHag@mail.gmail.com>
	<CAF-Rda8Uqf0e80oPQ4JvmqFtHjsCgt6_Q7dxPo8dJw7CX+QCsA@mail.gmail.com>
	<CAPTjJmpZ7fZu+do3akgvWaE=Yin+gMoxxybbrCnESKFZBhxu3Q@mail.gmail.com>
	<CAP7+vJLqy+Agqfs2G-JRnmkLk1R905yU=wB444UurrgKr1j3gQ@mail.gmail.com>
Message-ID: <20120419105534.7f90fc29@rivendell>

On Apr 18, 2012, at 09:26 PM, Guido van Rossum wrote:

>On Wed, Apr 18, 2012 at 8:14 PM, Chris Angelico <rosuav at gmail.com> wrote:
>> On Thu, Apr 19, 2012 at 1:06 PM, Eli Bendersky <eliben at gmail.com> wrote:
>>> (quoting GvR)
>>>> Let's change this to something more reasonable, e.g.
>>>>
>>>> """
>>>> If operators with different priorities are used, consider adding
>>>> whitespace around the operators with the lowest priority(ies). This is
>>>> very much to taste, however, never use more than one space, and always
>>>> have the same amount of whitespace on both sides of a binary operator.
>>>> """
>>>
>>> +1, a very welcome change to a piece of PEP8 I've always felt
>>> uncomfortable with. Tiny nitpick: I'd just replace the comma following
>>> "however" with a period or semicolon.
>>
>> Following or preceding? Either works, but there's a slight shift of
>> meaning depending on which punctuation gets the upgrade. What was the
>> original intent of the paragraph?
>
>I meant the semicolon to be before 'however'.

I'll make this change to the PEP.  I'm not entirely sure the Yes/No examples
are great illustrations of this change in wording though.  Here's the diff so
far (uncommitted):

diff -r 34076bfed420 pep-0008.txt
--- a/pep-0008.txt	Thu Apr 19 10:32:50 2012 +0200
+++ b/pep-0008.txt	Thu Apr 19 10:53:15 2012 -0400
@@ -305,7 +305,11 @@
   ``>=``, ``in``, ``not in``, ``is``, ``is not``), Booleans (``and``,
   ``or``, ``not``).
 
-- Use spaces around arithmetic operators:
+- If operators with different priorities are used, consider adding
+  whitespace around the operators with the lowest priority(ies). This
+  is very much to taste; however, never use more than one space, and
+  always have the same amount of whitespace on both sides of a binary
+  operator.
 
   Yes::
 
Cheers,
-Barry
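
P.S. Purely for illustration (these are my own strawman examples, not
part of the diff above), replacement Yes/No blocks matching the new
wording might look like:

  Yes::

      i = i + 1
      hypot2 = x*x + y*y
      c = (a+b) * (a-b)
      x = x*2 - 1

  No::

      submitted +=1
      x = y  *  2
      c = (a+b) * (a -b)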

From merwok at netwok.org  Thu Apr 19 17:00:11 2012
From: merwok at netwok.org (=?UTF-8?B?w4lyaWMgQXJhdWpv?=)
Date: Thu, 19 Apr 2012 11:00:11 -0400
Subject: [Python-Dev] Suggested addition to PEP 8 for context managers
In-Reply-To: <20120419105534.7f90fc29@rivendell>
References: <8402DB76-DF4F-41BD-BD9B-3689AF3D6159@gmail.com>
	<20120416113037.66e4da6f@limelight.wooz.org>
	<CANG+ZTNCwkHapAxZ87L-fcbPfi8FQwK=kSY7OCEApVoHCryJeA@mail.gmail.com>
	<4F8D1377.5020001@redhat.com>
	<20120417122502.0B9D82509E8@webabinitio.net>
	<20120417113631.7fb1b543@resist.wooz.org>
	<CAAWk_DzUK2gRriLp3DYe1PRC8qU3N3Kkes75XSA2hMSKpUvzmQ@mail.gmail.com>
	<CAP7+vJ+i-Qk7uOO_=XxBykMK3G01uTKODWKmpkhUc9+-eFZZWA@mail.gmail.com>
	<CAPTjJmr8SnjwDd=03TOH+T-vnQhh3ZCjMkKqJkctozX3q=JeYQ@mail.gmail.com>
	<jmmpsn$1k1$1@dough.gmane.org>
	<CAP7+vJL3v=SKzhJD8P40hB94wRiBE3MfNY9P7VtY2itC=ZcHag@mail.gmail.com>
	<CAF-Rda8Uqf0e80oPQ4JvmqFtHjsCgt6_Q7dxPo8dJw7CX+QCsA@mail.gmail.com>
	<CAPTjJmpZ7fZu+do3akgvWaE=Yin+gMoxxybbrCnESKFZBhxu3Q@mail.gmail.com>
	<CAP7+vJLqy+Agqfs2G-JRnmkLk1R905yU=wB444UurrgKr1j3gQ@mail.gmail.com>
	<20120419105534.7f90fc29@rivendell>
Message-ID: <4F90287B.7030002@netwok.org>

Hi,

> +- If operators with different priorities are used, consider adding
> +  whitespace around the operators with the lowest priority(ies). This
> +  is very much to taste; however, never use more than one space, and
> +  always have the same amount of whitespace on both sides of a binary
> +  operator.

Does "this is very much to taste" mean that it's a style judgment where
each team or individual may make different choices?  I'm not a native
speaker and I'm not sure about the intended meaning.

Cheers

From rosuav at gmail.com  Thu Apr 19 17:05:24 2012
From: rosuav at gmail.com (Chris Angelico)
Date: Fri, 20 Apr 2012 01:05:24 +1000
Subject: [Python-Dev] Suggested addition to PEP 8 for context managers
In-Reply-To: <4F90287B.7030002@netwok.org>
References: <8402DB76-DF4F-41BD-BD9B-3689AF3D6159@gmail.com>
	<20120416113037.66e4da6f@limelight.wooz.org>
	<CANG+ZTNCwkHapAxZ87L-fcbPfi8FQwK=kSY7OCEApVoHCryJeA@mail.gmail.com>
	<4F8D1377.5020001@redhat.com>
	<20120417122502.0B9D82509E8@webabinitio.net>
	<20120417113631.7fb1b543@resist.wooz.org>
	<CAAWk_DzUK2gRriLp3DYe1PRC8qU3N3Kkes75XSA2hMSKpUvzmQ@mail.gmail.com>
	<CAP7+vJ+i-Qk7uOO_=XxBykMK3G01uTKODWKmpkhUc9+-eFZZWA@mail.gmail.com>
	<CAPTjJmr8SnjwDd=03TOH+T-vnQhh3ZCjMkKqJkctozX3q=JeYQ@mail.gmail.com>
	<jmmpsn$1k1$1@dough.gmane.org>
	<CAP7+vJL3v=SKzhJD8P40hB94wRiBE3MfNY9P7VtY2itC=ZcHag@mail.gmail.com>
	<CAF-Rda8Uqf0e80oPQ4JvmqFtHjsCgt6_Q7dxPo8dJw7CX+QCsA@mail.gmail.com>
	<CAPTjJmpZ7fZu+do3akgvWaE=Yin+gMoxxybbrCnESKFZBhxu3Q@mail.gmail.com>
	<CAP7+vJLqy+Agqfs2G-JRnmkLk1R905yU=wB444UurrgKr1j3gQ@mail.gmail.com>
	<20120419105534.7f90fc29@rivendell> <4F90287B.7030002@netwok.org>
Message-ID: <CAPTjJmr2T1=mOU5hrJQJRW1SuiBfHoCrxY14cfWKOCr3aqX8tA@mail.gmail.com>

On Fri, Apr 20, 2012 at 1:00 AM, Éric Araujo <merwok at netwok.org> wrote:
> Hi,
>
>> +- If operators with different priorities are used, consider adding
>> +  whitespace around the operators with the lowest priority(ies). This
>> +  is very much to taste; however, never use more than one space, and
>> +  always have the same amount of whitespace on both sides of a binary
>> +  operator.
>
> Does "this is very much to taste" mean that it's a style judgment where
> each team or individual may make different choices?  I'm not a native
> speaker and I'm not sure about the intended meaning.

Yes. It's like writing instructions for how to make a cup of tea. You
might want to put in one spoon of sugar; someone else might prefer
two. In the instructions, you simply write "Add sugar to taste", and
that's the analogy being drawn here. With the proposed wording, the PEP
would happily accept all of these:

x = y*2+z*3
x = y*2 + z*3
x = y * 2 + z * 3

but would advise against:

x =y*2  +  z* 3

ChrisA

From barry at python.org  Thu Apr 19 17:15:38 2012
From: barry at python.org (Barry Warsaw)
Date: Thu, 19 Apr 2012 11:15:38 -0400
Subject: [Python-Dev] Suggested addition to PEP 8 for context managers
In-Reply-To: <4F90287B.7030002@netwok.org>
References: <8402DB76-DF4F-41BD-BD9B-3689AF3D6159@gmail.com>
	<20120416113037.66e4da6f@limelight.wooz.org>
	<CANG+ZTNCwkHapAxZ87L-fcbPfi8FQwK=kSY7OCEApVoHCryJeA@mail.gmail.com>
	<4F8D1377.5020001@redhat.com>
	<20120417122502.0B9D82509E8@webabinitio.net>
	<20120417113631.7fb1b543@resist.wooz.org>
	<CAAWk_DzUK2gRriLp3DYe1PRC8qU3N3Kkes75XSA2hMSKpUvzmQ@mail.gmail.com>
	<CAP7+vJ+i-Qk7uOO_=XxBykMK3G01uTKODWKmpkhUc9+-eFZZWA@mail.gmail.com>
	<CAPTjJmr8SnjwDd=03TOH+T-vnQhh3ZCjMkKqJkctozX3q=JeYQ@mail.gmail.com>
	<jmmpsn$1k1$1@dough.gmane.org>
	<CAP7+vJL3v=SKzhJD8P40hB94wRiBE3MfNY9P7VtY2itC=ZcHag@mail.gmail.com>
	<CAF-Rda8Uqf0e80oPQ4JvmqFtHjsCgt6_Q7dxPo8dJw7CX+QCsA@mail.gmail.com>
	<CAPTjJmpZ7fZu+do3akgvWaE=Yin+gMoxxybbrCnESKFZBhxu3Q@mail.gmail.com>
	<CAP7+vJLqy+Agqfs2G-JRnmkLk1R905yU=wB444UurrgKr1j3gQ@mail.gmail.com>
	<20120419105534.7f90fc29@rivendell> <4F90287B.7030002@netwok.org>
Message-ID: <20120419111538.52bd0506@rivendell>

On Apr 19, 2012, at 11:00 AM, Éric Araujo wrote:

>Hi,
>
>> +- If operators with different priorities are used, consider adding
>> +  whitespace around the operators with the lowest priority(ies). This
>> +  is very much to taste; however, never use more than one space, and
>> +  always have the same amount of whitespace on both sides of a binary
>> +  operator.
>
>Does "this is very much to taste" mean that it's a style judgment where each
>team or individual may make different choices?  I'm not a native speaker and
>I'm not sure about the intended meaning.

If I change that phrase to "Use your own judgement" does that help?

-Barry

From phd at phdru.name  Thu Apr 19 17:21:59 2012
From: phd at phdru.name (Oleg Broytman)
Date: Thu, 19 Apr 2012 19:21:59 +0400
Subject: [Python-Dev] Suggested addition to PEP 8 for context managers
In-Reply-To: <20120419111538.52bd0506@rivendell>
References: <CAP7+vJ+i-Qk7uOO_=XxBykMK3G01uTKODWKmpkhUc9+-eFZZWA@mail.gmail.com>
	<CAPTjJmr8SnjwDd=03TOH+T-vnQhh3ZCjMkKqJkctozX3q=JeYQ@mail.gmail.com>
	<jmmpsn$1k1$1@dough.gmane.org>
	<CAP7+vJL3v=SKzhJD8P40hB94wRiBE3MfNY9P7VtY2itC=ZcHag@mail.gmail.com>
	<CAF-Rda8Uqf0e80oPQ4JvmqFtHjsCgt6_Q7dxPo8dJw7CX+QCsA@mail.gmail.com>
	<CAPTjJmpZ7fZu+do3akgvWaE=Yin+gMoxxybbrCnESKFZBhxu3Q@mail.gmail.com>
	<CAP7+vJLqy+Agqfs2G-JRnmkLk1R905yU=wB444UurrgKr1j3gQ@mail.gmail.com>
	<20120419105534.7f90fc29@rivendell> <4F90287B.7030002@netwok.org>
	<20120419111538.52bd0506@rivendell>
Message-ID: <20120419152159.GA11958@iskra.aviel.ru>

On Thu, Apr 19, 2012 at 11:15:38AM -0400, Barry Warsaw <barry at python.org> wrote:
> On Apr 19, 2012, at 11:00 AM, Éric Araujo wrote:
> >> +- If operators with different priorities are used, consider adding
> >> +  whitespace around the operators with the lowest priority(ies). This
> >> +  is very much to taste; however, never use more than one space, and
> >> +  always have the same amount of whitespace on both sides of a binary
> >> +  operator.
> >
> >Does "this is very much to taste" mean that it's a style judgment where each
> >team or individual may make different choices?  I'm not a native speaker and
> >I'm not sure about the intended meaning.
> 
> If I change that phrase to "Use your own judgement" does that help?

   Yes, in my opinion.

Oleg.
-- 
     Oleg Broytman            http://phdru.name/            phd at phdru.name
           Programmers don't die, they just GOSUB without RETURN.

From merwok at netwok.org  Thu Apr 19 17:28:20 2012
From: merwok at netwok.org (=?UTF-8?B?w4lyaWMgQXJhdWpv?=)
Date: Thu, 19 Apr 2012 11:28:20 -0400
Subject: [Python-Dev] Suggested addition to PEP 8 for context managers
In-Reply-To: <20120419111538.52bd0506@rivendell>
References: <8402DB76-DF4F-41BD-BD9B-3689AF3D6159@gmail.com>
	<20120416113037.66e4da6f@limelight.wooz.org>
	<CANG+ZTNCwkHapAxZ87L-fcbPfi8FQwK=kSY7OCEApVoHCryJeA@mail.gmail.com>
	<4F8D1377.5020001@redhat.com>
	<20120417122502.0B9D82509E8@webabinitio.net>
	<20120417113631.7fb1b543@resist.wooz.org>
	<CAAWk_DzUK2gRriLp3DYe1PRC8qU3N3Kkes75XSA2hMSKpUvzmQ@mail.gmail.com>
	<CAP7+vJ+i-Qk7uOO_=XxBykMK3G01uTKODWKmpkhUc9+-eFZZWA@mail.gmail.com>
	<CAPTjJmr8SnjwDd=03TOH+T-vnQhh3ZCjMkKqJkctozX3q=JeYQ@mail.gmail.com>
	<jmmpsn$1k1$1@dough.gmane.org>
	<CAP7+vJL3v=SKzhJD8P40hB94wRiBE3MfNY9P7VtY2itC=ZcHag@mail.gmail.com>
	<CAF-Rda8Uqf0e80oPQ4JvmqFtHjsCgt6_Q7dxPo8dJw7CX+QCsA@mail.gmail.com>
	<CAPTjJmpZ7fZu+do3akgvWaE=Yin+gMoxxybbrCnESKFZBhxu3Q@mail.gmail.com>
	<CAP7+vJLqy+Agqfs2G-JRnmkLk1R905yU=wB444UurrgKr1j3gQ@mail.gmail.com>
	<20120419105534.7f90fc29@rivendell> <4F90287B.7030002@netwok.org>
	<20120419111538.52bd0506@rivendell>
Message-ID: <4F902F14.3050008@netwok.org>

> If I change that phrase to "Use your own judgement" does that help?

It does.  It may also help fight the mindset that PEP 8 is a Law.

Regards

From tshepang at gmail.com  Thu Apr 19 17:32:11 2012
From: tshepang at gmail.com (Tshepang Lekhonkhobe)
Date: Thu, 19 Apr 2012 17:32:11 +0200
Subject: [Python-Dev] Suggested addition to PEP 8 for context managers
In-Reply-To: <CAA77j2DhPgytiRw0kG1v1NM+12dgE4Mh2o5N92=_d+YSUA+Mgg@mail.gmail.com>
References: <8402DB76-DF4F-41BD-BD9B-3689AF3D6159@gmail.com>
	<20120416113037.66e4da6f@limelight.wooz.org>
	<CANG+ZTNCwkHapAxZ87L-fcbPfi8FQwK=kSY7OCEApVoHCryJeA@mail.gmail.com>
	<4F8D1377.5020001@redhat.com>
	<20120417122502.0B9D82509E8@webabinitio.net>
	<20120417113631.7fb1b543@resist.wooz.org>
	<CAAWk_DzUK2gRriLp3DYe1PRC8qU3N3Kkes75XSA2hMSKpUvzmQ@mail.gmail.com>
	<CAP7+vJ+i-Qk7uOO_=XxBykMK3G01uTKODWKmpkhUc9+-eFZZWA@mail.gmail.com>
	<CAPTjJmr8SnjwDd=03TOH+T-vnQhh3ZCjMkKqJkctozX3q=JeYQ@mail.gmail.com>
	<jmmpsn$1k1$1@dough.gmane.org>
	<CAP7+vJL3v=SKzhJD8P40hB94wRiBE3MfNY9P7VtY2itC=ZcHag@mail.gmail.com>
	<CAF-Rda8Uqf0e80oPQ4JvmqFtHjsCgt6_Q7dxPo8dJw7CX+QCsA@mail.gmail.com>
	<CAPTjJmpZ7fZu+do3akgvWaE=Yin+gMoxxybbrCnESKFZBhxu3Q@mail.gmail.com>
	<CAP7+vJLqy+Agqfs2G-JRnmkLk1R905yU=wB444UurrgKr1j3gQ@mail.gmail.com>
	<20120419105534.7f90fc29@rivendell> <4F90287B.7030002@netwok.org>
	<20120419111538.52bd0506@rivendell>
	<CAA77j2DhPgytiRw0kG1v1NM+12dgE4Mh2o5N92=_d+YSUA+Mgg@mail.gmail.com>
Message-ID: <CAA77j2BDxuwdTCz82y=RDSio=72m8gvwO97=eTBOY85BOCb2kQ@mail.gmail.com>

(This was sent to Barry only by mistake.)

On Thu, Apr 19, 2012 at 17:20, Tshepang Lekhonkhobe <tshepang at gmail.com> wrote:
> On Thu, Apr 19, 2012 at 17:15, Barry Warsaw <barry at python.org> wrote:
>> If I change that phrase to "Use your own judgement" does that help?
>
> I would prefer "This is a matter of taste...". It's much closer to the
> original meaning, and I think it's a more common phrase.

From guido at python.org  Thu Apr 19 17:51:40 2012
From: guido at python.org (Guido van Rossum)
Date: Thu, 19 Apr 2012 08:51:40 -0700
Subject: [Python-Dev] cpython: Issue #11750: The Windows API functions
 scattered in the _subprocess and
In-Reply-To: <4F8FF4A6.7000704@v.loewis.de>
References: <E1SKZzK-0001QB-GS@dinsdale.python.org>
	<4F8F15FC.1010705@v.loewis.de>
	<20120418213014.00d36cc0@pitrou.net> <4F8FF4A6.7000704@v.loewis.de>
Message-ID: <CAP7+vJ+Y2t4cCPsvNcHGg-qFCHXog_+=5oOQ5K-S5PYAdSvEoA@mail.gmail.com>

On Thu, Apr 19, 2012 at 4:19 AM, "Martin v. Löwis" <martin at v.loewis.de> wrote:
>>>>   Issue #11750: The Windows API functions scattered in the _subprocess and
>>>> _multiprocessing.win32 modules now live in a single module "_winapi".
>>>> Patch by sbt.
>>>
>>> Can we use Real Names, please?
>>
>> Do we have a policy about that? sbt seems happy using a pseudonym (and
>> I personally don't have a problem with it).
>
> We would have to ask a lawyer. Apparently, he signed a form, and
> presumably, that can be traced to a real person. However, we need to
> be extremely careful not to accept anonymous contributions, as then
> barrier to contribute stolen code is much lower. It took Linux a ten
> year copyright lawsuit to go through this; I don't want this to happen
> for Python.
>
> In any case, the real policy is that we should not accept significant
> changes without a contributor form.
>
> I, myself, feel extremely uncomfortable dealing with pseudonyms in the
> net, more so since I committed code from (and, IIRC, gave commit rights
> to) Reinhold Birkenfeld. Of course, the issue is different when you
> *know* it's pseudonym (and no, I have no bad feelings towards Georg
> about this at all).

I'd like to copy for posterity what I wrote off-list about this incident:

I'm against accepting anonymous patches, period, unless the core
developer who accepts them vets them *very* carefully and can vouch
for them as if the core developer wrote the patch personally. Giving
an anonymous person commit rights does not meet my standard for good
stewardship of the code base. (But... see below.)

Of course, knowing the name is not *sufficient* to give a person
commit rights -- we know what's needed there, which includes a trust
relationship with the contributor over a long time and with multiple
core committers.

This *process* of vetting committers in turn is necessary so that
others, way outside our community, will trust us. When open source was
new, I got regular requests from lawyers working for large companies
wanting to see the list of contributors. Eventually this stopped,
because the lawyers started understanding open source, but part of
that understanding included the idea that a typical open source
project actually has a high moral code of conduct (written or not).

That said, I can think of plenty of reasons why a contributor does not
want their real name published. Some of those are bad -- e.g. if you
worry that you'll be embarrassed by your contributions in the future
I'm not sure I'd want to see your code in the repository; if you don't
want your employer to find out that you're contributing, you might be
violating your employment contract and the PSF could get into trouble
for e.g. incorporating patented code; and I'm not sure we'd like to
accept code from convicted felons (though I'd consider that a gray
area). But some might be acceptable. E.g. someone who is regularly in
the news might not want to attract gawkers or reveal their personal
email address; someone who is hiding from the law in an oppressive
country (even the US, depending on which law we're talking about)
might need to be protected; someone might have fears for their
personal safety.

In all those cases I think there should be some core contributors who
know the real identity of the contributor. These must also know the
reason for the anonymity and agree that it's important to maintain it.
It must also be known to the community at large that the contributor
is using a pseudonym. If the contributor is not comfortable revealing
their identity to any core contributors, I don't think there is enough
of a trust relationship to build on for a successful career as a
contributor to Python.

-- 
--Guido van Rossum (python.org/~guido)

From tshepang at gmail.com  Thu Apr 19 18:02:05 2012
From: tshepang at gmail.com (Tshepang Lekhonkhobe)
Date: Thu, 19 Apr 2012 18:02:05 +0200
Subject: [Python-Dev] cpython: Issue #11750: The Windows API functions
 scattered in the _subprocess and
In-Reply-To: <CAP7+vJ+Y2t4cCPsvNcHGg-qFCHXog_+=5oOQ5K-S5PYAdSvEoA@mail.gmail.com>
References: <E1SKZzK-0001QB-GS@dinsdale.python.org>
	<4F8F15FC.1010705@v.loewis.de>
	<20120418213014.00d36cc0@pitrou.net> <4F8FF4A6.7000704@v.loewis.de>
	<CAP7+vJ+Y2t4cCPsvNcHGg-qFCHXog_+=5oOQ5K-S5PYAdSvEoA@mail.gmail.com>
Message-ID: <CAA77j2C5681oeeXn3Pq8iWOgG284f8BXJ-PWfHCNKqQQOBPCog@mail.gmail.com>

On Thu, Apr 19, 2012 at 17:51, Guido van Rossum <guido at python.org> wrote:
> and I'm not sure we'd like to
> accept code from convicted felons (though I'd consider that a gray
> area).

This makes me curious... why would that be a problem at all (assuming
the felony is not related to the computing field)?

From glyph at twistedmatrix.com  Thu Apr 19 18:06:51 2012
From: glyph at twistedmatrix.com (Glyph)
Date: Thu, 19 Apr 2012 12:06:51 -0400
Subject: [Python-Dev] cpython: Issue #11750: The Windows API functions
	scattered in the _subprocess and
In-Reply-To: <CAP7+vJ+Y2t4cCPsvNcHGg-qFCHXog_+=5oOQ5K-S5PYAdSvEoA@mail.gmail.com>
References: <E1SKZzK-0001QB-GS@dinsdale.python.org>
	<4F8F15FC.1010705@v.loewis.de> <20120418213014.00d36cc0@pitrou.net>
	<4F8FF4A6.7000704@v.loewis.de>
	<CAP7+vJ+Y2t4cCPsvNcHGg-qFCHXog_+=5oOQ5K-S5PYAdSvEoA@mail.gmail.com>
Message-ID: <1D100562-07F0-499B-B927-FA9DD2BA8656@twistedmatrix.com>

On Apr 19, 2012, at 11:51 AM, Guido van Rossum wrote:

> In all those cases I think there should be some core contributors who
> know the real identity of the contributor. These must also know the
> reason for the anonymity and agree that it's important to maintain it.
> It must also be known to the community at large that the contributor
> is using a pseudonym. If the contributor is not comfortable revealing
> their identity to any core contributors, I don't think there is enough
> of a trust relationship to build on for a successful career as a
> contributor to Python.

I do think that python-dev should be clear that by "real" identity you mean "legal" identity.

There are plenty of cases where the name a person is known by in more "real" situations is not in fact their legal name.  There are also cases where legal names are different in different jurisdictions; especially people with CJK names may have different orthographies of the "same" name in different jurisdictions or even completely different names in different places, if they have immigrated to a different country.

So there should be a legal name on file somewhere for copyright provenance purposes, but this should not need to be the same name that is present in commit logs, as long as there's a mapping recorded that can be made available to any interested lawyer.

(Hopefully this is not a practical issue, but this is one of my pet peeves - for obvious reasons.)

-glyph

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20120419/5002f08b/attachment.html>

From guido at python.org  Thu Apr 19 18:55:45 2012
From: guido at python.org (Guido van Rossum)
Date: Thu, 19 Apr 2012 09:55:45 -0700
Subject: [Python-Dev] cpython: Issue #11750: The Windows API functions
 scattered in the _subprocess and
In-Reply-To: <CAA77j2C5681oeeXn3Pq8iWOgG284f8BXJ-PWfHCNKqQQOBPCog@mail.gmail.com>
References: <E1SKZzK-0001QB-GS@dinsdale.python.org>
	<4F8F15FC.1010705@v.loewis.de>
	<20120418213014.00d36cc0@pitrou.net> <4F8FF4A6.7000704@v.loewis.de>
	<CAP7+vJ+Y2t4cCPsvNcHGg-qFCHXog_+=5oOQ5K-S5PYAdSvEoA@mail.gmail.com>
	<CAA77j2C5681oeeXn3Pq8iWOgG284f8BXJ-PWfHCNKqQQOBPCog@mail.gmail.com>
Message-ID: <CAP7+vJJRE95Te47g4mpLGT6g+5A4AwnSHAeZSp0X2Rhd57RPdg@mail.gmail.com>

On Thu, Apr 19, 2012 at 9:02 AM, Tshepang Lekhonkhobe
<tshepang at gmail.com> wrote:
> On Thu, Apr 19, 2012 at 17:51, Guido van Rossum <guido at python.org> wrote:
>> and I'm not sure we'd like to
>> accept code from convicted felons (though I'd consider that a gray
>> area).
>
> This makes me curious... why would that be a problem at all (assuming
> the felony is not related to the computing field)?

Because the person might not be trustworthy, period. Or it might
reflect badly upon Python's reputation. But yes, I could also see
cases where we'd choose to trust the person anyway. This is why I said
it's a gray area -- it can only be determined on a case-by-case basis.
The most likely case might actually be someone like Aaron Swartz.

-- 
--Guido van Rossum (python.org/~guido)

From guido at python.org  Thu Apr 19 18:58:59 2012
From: guido at python.org (Guido van Rossum)
Date: Thu, 19 Apr 2012 09:58:59 -0700
Subject: [Python-Dev] cpython: Issue #11750: The Windows API functions
 scattered in the _subprocess and
In-Reply-To: <1D100562-07F0-499B-B927-FA9DD2BA8656@twistedmatrix.com>
References: <E1SKZzK-0001QB-GS@dinsdale.python.org>
	<4F8F15FC.1010705@v.loewis.de>
	<20120418213014.00d36cc0@pitrou.net> <4F8FF4A6.7000704@v.loewis.de>
	<CAP7+vJ+Y2t4cCPsvNcHGg-qFCHXog_+=5oOQ5K-S5PYAdSvEoA@mail.gmail.com>
	<1D100562-07F0-499B-B927-FA9DD2BA8656@twistedmatrix.com>
Message-ID: <CAP7+vJLDMxHG9uhAnZ=1ExedYwm-KxiyOtaf3ibXpAkUdqRDew@mail.gmail.com>

On Thu, Apr 19, 2012 at 9:06 AM, Glyph <glyph at twistedmatrix.com> wrote:
> On Apr 19, 2012, at 11:51 AM, Guido van Rossum wrote:
>
> In all those cases I think there should be some core contributors who
> know the real identity of the contributor. These must also know the
> reason for the anonymity and agree that it's important to maintain it.
> It must also be known to the community at large that the contributor
> is using a pseudonym. If the contributor is not comfortable revealing
> their identity to any core contributors, I don't think there is enough
> of a trust relationship to build on for a successful career as a
> contributor to Python.
>
>
> I do think that python-dev should be clear that by "real" identity you mean
> "legal" identity.
>
> There are plenty of cases where the name a person is known by in more "real"
> situations is not in fact their legal name.  There are also cases where
> legal names are different in different jurisdictions; especially people with
> CJK names may have different orthographies of the "same" name in different
> jurisdictions or even completely different names in different places, if
> they have immigrated to a different country.
>
> So there should be a legal name on file somewhere for copyright provenance
> purposes, but this should not need to be the same name that is present in
> commit logs, as long as there's a mapping recorded that can be made
> available to any interested lawyer.
>
> (Hopefully this is not a practical issue, but this is one of my pet peeves -
> for obvious reasons.)

Heh. I was hoping to avoid too much legal wrangling. Note that we
don't require legal proof of identity; that would be an undue burden
and more than I would personally put up with as a contributor. The
primary concept here is trust, and identity can be seen as an
approximation of that at best.

-- 
--Guido van Rossum (python.org/~guido)

From tshepang at gmail.com  Thu Apr 19 19:13:57 2012
From: tshepang at gmail.com (Tshepang Lekhonkhobe)
Date: Thu, 19 Apr 2012 19:13:57 +0200
Subject: [Python-Dev] cpython: Issue #11750: The Windows API functions
 scattered in the _subprocess and
In-Reply-To: <CAP7+vJJRE95Te47g4mpLGT6g+5A4AwnSHAeZSp0X2Rhd57RPdg@mail.gmail.com>
References: <E1SKZzK-0001QB-GS@dinsdale.python.org>
	<4F8F15FC.1010705@v.loewis.de>
	<20120418213014.00d36cc0@pitrou.net> <4F8FF4A6.7000704@v.loewis.de>
	<CAP7+vJ+Y2t4cCPsvNcHGg-qFCHXog_+=5oOQ5K-S5PYAdSvEoA@mail.gmail.com>
	<CAA77j2C5681oeeXn3Pq8iWOgG284f8BXJ-PWfHCNKqQQOBPCog@mail.gmail.com>
	<CAP7+vJJRE95Te47g4mpLGT6g+5A4AwnSHAeZSp0X2Rhd57RPdg@mail.gmail.com>
Message-ID: <CAA77j2D59jAHfYnXaHptv-RdKJMw6QfvgPG_sbfkxB-LdvAhrg@mail.gmail.com>

On Thu, Apr 19, 2012 at 18:55, Guido van Rossum <guido at python.org> wrote:
> On Thu, Apr 19, 2012 at 9:02 AM, Tshepang Lekhonkhobe
> <tshepang at gmail.com> wrote:
>> On Thu, Apr 19, 2012 at 17:51, Guido van Rossum <guido at python.org> wrote:
>>> and I'm not sure we'd like to
>>> accept code from convicted felons (though I'd consider that a gray
>>> area).
>>
>> This makes me curious... why would that be a problem at all (assuming
>> the felony is not related to the computing field)?
>
> Because the person might not be trustworthy, period. Or it might
> reflect badly upon Python's reputation. But yes, I could also see
> cases where we'd choose to trust the person anyway. This is why I said
> it's a gray area -- it can only be determined on a case-by-case basis.
> The most likely case might actually be someone like Aaron Swartz.

Even if Aaron submits typo fixes for documentation :)

I would think that being a core developer would be the only thing that
would require trust. As for a random contributor, their patches are
always reviewed by core developers before going in, so I don't see any
need for trust there. Identity is another matter, of course, but no one
even checks whether I'm the real Tshepang Lekhonkhobe.

From guido at python.org  Thu Apr 19 19:21:00 2012
From: guido at python.org (Guido van Rossum)
Date: Thu, 19 Apr 2012 10:21:00 -0700
Subject: [Python-Dev] cpython: Issue #11750: The Windows API functions
 scattered in the _subprocess and
In-Reply-To: <CAA77j2D59jAHfYnXaHptv-RdKJMw6QfvgPG_sbfkxB-LdvAhrg@mail.gmail.com>
References: <E1SKZzK-0001QB-GS@dinsdale.python.org>
	<4F8F15FC.1010705@v.loewis.de>
	<20120418213014.00d36cc0@pitrou.net> <4F8FF4A6.7000704@v.loewis.de>
	<CAP7+vJ+Y2t4cCPsvNcHGg-qFCHXog_+=5oOQ5K-S5PYAdSvEoA@mail.gmail.com>
	<CAA77j2C5681oeeXn3Pq8iWOgG284f8BXJ-PWfHCNKqQQOBPCog@mail.gmail.com>
	<CAP7+vJJRE95Te47g4mpLGT6g+5A4AwnSHAeZSp0X2Rhd57RPdg@mail.gmail.com>
	<CAA77j2D59jAHfYnXaHptv-RdKJMw6QfvgPG_sbfkxB-LdvAhrg@mail.gmail.com>
Message-ID: <CAP7+vJJ-ADgtx+3uoo1NdWxKZ5h6h5LHmjE9cFr6NG+vhR4jzQ@mail.gmail.com>

On Thu, Apr 19, 2012 at 10:13 AM, Tshepang Lekhonkhobe
<tshepang at gmail.com> wrote:
> On Thu, Apr 19, 2012 at 18:55, Guido van Rossum <guido at python.org> wrote:
>> On Thu, Apr 19, 2012 at 9:02 AM, Tshepang Lekhonkhobe
>> <tshepang at gmail.com> wrote:
>>> On Thu, Apr 19, 2012 at 17:51, Guido van Rossum <guido at python.org> wrote:
>>>> and I'm not sure we'd like to
>>>> accept code from convicted felons (though I'd consider that a gray
>>>> area).
>>>
>>> This makes me curious... why would that be a problem at all (assuming
>>> the felony is not related to the computing field)?
>>
>> Because the person might not be trustworthy, period. Or it might
>> reflect badly upon Python's reputation. But yes, I could also see
>> cases where we'd choose to trust the person anyway. This is why I said
>> it's a gray area -- it can only be determined on a case-by-case basis.
>> The most likely case might actually be someone like Aaron Swartz.
>
> Even if Aaron submits typo fixes for documentation :)
>
> I would think that being core developer would be the only thing that
> would require trust. As for a random contributor, their patches are
> always reviewed by core developers before going in, so I don't see any
> need for trust there. Identity is another matter of course, but no one
> even checks if I'm the real Tshepang Lekhonkhobe.

I don't think you're a core contributor, right? Even if a core
developer reviews the code, it requires a certain level of trust,
especially for complex patches.

-- 
--Guido van Rossum (python.org/~guido)

From solipsis at pitrou.net  Thu Apr 19 19:30:23 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Thu, 19 Apr 2012 19:30:23 +0200
Subject: [Python-Dev] cpython: Issue #11750: The Windows API functions
 scattered in the _subprocess and
In-Reply-To: <CAP7+vJJ-ADgtx+3uoo1NdWxKZ5h6h5LHmjE9cFr6NG+vhR4jzQ@mail.gmail.com>
References: <E1SKZzK-0001QB-GS@dinsdale.python.org>
	<4F8F15FC.1010705@v.loewis.de> <20120418213014.00d36cc0@pitrou.net>
	<4F8FF4A6.7000704@v.loewis.de>
	<CAP7+vJ+Y2t4cCPsvNcHGg-qFCHXog_+=5oOQ5K-S5PYAdSvEoA@mail.gmail.com>
	<CAA77j2C5681oeeXn3Pq8iWOgG284f8BXJ-PWfHCNKqQQOBPCog@mail.gmail.com>
	<CAP7+vJJRE95Te47g4mpLGT6g+5A4AwnSHAeZSp0X2Rhd57RPdg@mail.gmail.com>
	<CAA77j2D59jAHfYnXaHptv-RdKJMw6QfvgPG_sbfkxB-LdvAhrg@mail.gmail.com>
	<CAP7+vJJ-ADgtx+3uoo1NdWxKZ5h6h5LHmjE9cFr6NG+vhR4jzQ@mail.gmail.com>
Message-ID: <20120419193023.468069b6@pitrou.net>

On Thu, 19 Apr 2012 10:21:00 -0700
Guido van Rossum <guido at python.org> wrote:
> On Thu, Apr 19, 2012 at 10:13 AM, Tshepang Lekhonkhobe
> <tshepang at gmail.com> wrote:
> > On Thu, Apr 19, 2012 at 18:55, Guido van Rossum <guido at python.org> wrote:
> >> On Thu, Apr 19, 2012 at 9:02 AM, Tshepang Lekhonkhobe
> >> <tshepang at gmail.com> wrote:
> >>> On Thu, Apr 19, 2012 at 17:51, Guido van Rossum <guido at python.org> wrote:
> >>>> and I'm not sure we'd like to
> >>>> accept code from convicted felons (though I'd consider that a gray
> >>>> area).
> >>>
> >>> This makes me curious... why would that be a problem at all (assuming
> >>> the felony is not related to the computing field)?
> >>
> >> Because the person might not be trustworthy, period. Or it might
> >> reflect badly upon Python's reputation. But yes, I could also see
> >> cases where we'd choose to trust the person anyway. This is why I said
> >> it's a gray area -- it can only be determined on a case-by-case basis.
> >> The most likely case might actually be someone like Aaron Swartz.
> >
> > Even if Aaron submits typo fixes for documentation :)
> >
> > I would think that being core developer would be the only thing that
> > would require trust. As for a random contributor, their patches are
> > always reviewed by core developers before going in, so I don't see any
> > need for trust there. Identity is another matter of course, but no one
> > even checks if I'm the real Tshepang Lekhonkhobe.
> 
> I don't think you're a core contributor, right? Even if a core
> developer reviews the code, it requires a certain level of trust,
> especially for complex patches.

I would say trust is gained through previous patches, not through
personal knowledge of the contributor, though.

Regards

Antoine.

From guido at python.org  Thu Apr 19 19:40:00 2012
From: guido at python.org (Guido van Rossum)
Date: Thu, 19 Apr 2012 10:40:00 -0700
Subject: [Python-Dev] cpython: Issue #11750: The Windows API functions
 scattered in the _subprocess and
In-Reply-To: <20120419193023.468069b6@pitrou.net>
References: <E1SKZzK-0001QB-GS@dinsdale.python.org>
	<4F8F15FC.1010705@v.loewis.de>
	<20120418213014.00d36cc0@pitrou.net> <4F8FF4A6.7000704@v.loewis.de>
	<CAP7+vJ+Y2t4cCPsvNcHGg-qFCHXog_+=5oOQ5K-S5PYAdSvEoA@mail.gmail.com>
	<CAA77j2C5681oeeXn3Pq8iWOgG284f8BXJ-PWfHCNKqQQOBPCog@mail.gmail.com>
	<CAP7+vJJRE95Te47g4mpLGT6g+5A4AwnSHAeZSp0X2Rhd57RPdg@mail.gmail.com>
	<CAA77j2D59jAHfYnXaHptv-RdKJMw6QfvgPG_sbfkxB-LdvAhrg@mail.gmail.com>
	<CAP7+vJJ-ADgtx+3uoo1NdWxKZ5h6h5LHmjE9cFr6NG+vhR4jzQ@mail.gmail.com>
	<20120419193023.468069b6@pitrou.net>
Message-ID: <CAP7+vJJi2FXTXNwsaepY5W4MgV4G5PdoDDsc1AOfn2QpH5U63A@mail.gmail.com>

On Thu, Apr 19, 2012 at 10:30 AM, Antoine Pitrou <solipsis at pitrou.net> wrote:
> On Thu, 19 Apr 2012 10:21:00 -0700
> Guido van Rossum <guido at python.org> wrote:
>> On Thu, Apr 19, 2012 at 10:13 AM, Tshepang Lekhonkhobe
>> <tshepang at gmail.com> wrote:
>> > On Thu, Apr 19, 2012 at 18:55, Guido van Rossum <guido at python.org> wrote:
>> >> On Thu, Apr 19, 2012 at 9:02 AM, Tshepang Lekhonkhobe
>> >> <tshepang at gmail.com> wrote:
>> >>> On Thu, Apr 19, 2012 at 17:51, Guido van Rossum <guido at python.org> wrote:
>> >>>> and I'm not sure we'd like to
>> >>>> accept code from convicted felons (though I'd consider that a gray
>> >>>> area).
>> >>>
>> >>> This makes me curious... why would that be a problem at all (assuming
>> >>> the felony is not related to the computing field)?
>> >>
>> >> Because the person might not be trustworthy, period. Or it might
>> >> reflect badly upon Python's reputation. But yes, I could also see
>> >> cases where we'd choose to trust the person anyway. This is why I said
>> >> it's a gray area -- it can only be determined on a case-by-case basis.
>> >> The most likely case might actually be someone like Aaron Swartz.
>> >
>> > Even if Aaron submits typo fixes for documentation :)
>> >
>> > I would think that being core developer would be the only thing that
>> > would require trust. As for a random contributor, their patches are
>> > always reviewed by core developers before going in, so I don't see any
>> > need for trust there. Identity is another matter of course, but no one
>> > even checks if I'm the real Tshepang Lekhonkhobe.
>>
>> I don't think you're a core contributor, right? Even if a core
>> developer reviews the code, it requires a certain level of trust,
>> especially for complex patches.
>
> I would say trust is gained through previous patches, not through
> personal knowledge of the contributor, though.

You don't have to have face-to-face meetings (I didn't meet most Python
contributors face-to-face until many years later, and some I've never
met), but you do gain insight into their personality through the
interaction *around* patches. To me, that counts just as much as the
objective quality of their patches.

-- 
--Guido van Rossum (python.org/~guido)

From solipsis at pitrou.net  Thu Apr 19 19:43:20 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Thu, 19 Apr 2012 19:43:20 +0200
Subject: [Python-Dev] cpython: Issue #11750: The Windows API functions
 scattered in the _subprocess and
In-Reply-To: <CAP7+vJJi2FXTXNwsaepY5W4MgV4G5PdoDDsc1AOfn2QpH5U63A@mail.gmail.com>
References: <E1SKZzK-0001QB-GS@dinsdale.python.org>
	<4F8F15FC.1010705@v.loewis.de> <20120418213014.00d36cc0@pitrou.net>
	<4F8FF4A6.7000704@v.loewis.de>
	<CAP7+vJ+Y2t4cCPsvNcHGg-qFCHXog_+=5oOQ5K-S5PYAdSvEoA@mail.gmail.com>
	<CAA77j2C5681oeeXn3Pq8iWOgG284f8BXJ-PWfHCNKqQQOBPCog@mail.gmail.com>
	<CAP7+vJJRE95Te47g4mpLGT6g+5A4AwnSHAeZSp0X2Rhd57RPdg@mail.gmail.com>
	<CAA77j2D59jAHfYnXaHptv-RdKJMw6QfvgPG_sbfkxB-LdvAhrg@mail.gmail.com>
	<CAP7+vJJ-ADgtx+3uoo1NdWxKZ5h6h5LHmjE9cFr6NG+vhR4jzQ@mail.gmail.com>
	<20120419193023.468069b6@pitrou.net>
	<CAP7+vJJi2FXTXNwsaepY5W4MgV4G5PdoDDsc1AOfn2QpH5U63A@mail.gmail.com>
Message-ID: <1334857400.3345.7.camel@localhost.localdomain>

On Thursday, 19 April 2012 at 10:40 -0700, Guido van Rossum wrote:
> >>
> >> I don't think you're a core contributor, right? Even if a core
> >> developer reviews the code, it requires a certain level of trust,
> >> especially for complex patches.
> >
> > I would say trust is gained through previous patches, not through
> > personal knowledge of the contributor, though.
> 
> You don't have to have face-to-face meetings (I didn't meet most Python
> contributors face-to-face until many years later, and some I've never
> met) but you do gain insight into their personality through the
> interaction *around* patches. To me, that counts just as much as the
> objective quality of their patches.

Agreed.

Regards

Antoine.



From stefan_ml at behnel.de  Thu Apr 19 23:08:20 2012
From: stefan_ml at behnel.de (Stefan Behnel)
Date: Thu, 19 Apr 2012 23:08:20 +0200
Subject: [Python-Dev] Cython for cPickle?
In-Reply-To: <CAB4yi1Pm8i25VmJ_XQgAW7ddRqHiK7DxfG72L8rHhBZBM3b-VA@mail.gmail.com>
References: <jmojtt$ole$1@dough.gmane.org> <20120419144406.7e650e29@pitrou.net>
	<20120419132822.4B8702509E4@webabinitio.net>
	<CAB4yi1Pm8i25VmJ_XQgAW7ddRqHiK7DxfG72L8rHhBZBM3b-VA@mail.gmail.com>
Message-ID: <jmpus4$t36$1@dough.gmane.org>

Matt Joiner, 19.04.2012 16:13:
> Personally I find the unholy product of C and Python that is Cython to be
> more complex than the sum of the complexities of its parts. Is it really
> wise to be learning Cython without already knowing C, Python, and the
> CPython object model?

The main obstacle that I regularly see for users of the C-API is actually
reference counting and an understanding of what borrowed references and
owned references imply in a given code context. In fact, I can't remember
seeing any C extension code getting posted on Python mailing lists (core
developers excluded) that has no ref-counting bugs or at least a severe
lack of error handling. Usually, such code is also accompanied by a comment
that the author is not sure if everything is correct and asks for advice,
and that's rather independent of the functional complexity of the code
snippet. OTOH, I've also seen a couple of really dangerous code snippets
that their posters apparently meant to show off with, so not everyone is
aware of these obstacles.

Also, the C code by inexperienced programmers tends to be fairly
inefficient because they simply do not know what impact some convenience
functions have. So they tend to optimise prematurely in places where they
feel more comfortable, but that can never make up for the overhead that
simple and very convenient-looking C-API functions introduce in other
places. Value packing comes to mind.

So, from my experience, there is a serious learning curve beyond knowing C,
right from the start when trying to work on C extensions, including
CPython's own code, because the C-API is far from trivial.

And that's the kind of learning curve that Cython tries to lower. It makes
it substantially easier to write correct code, simply by letting you write
Python code instead of C plus C-API code. And once it works, you can start
making it explicitly faster by applying "I know what I'm doing" schemes to
proven hot spots or by partially rewriting it. And if you do not know yet
what you're doing, then *that's* where the learning curve begins. But by
then, your code is basically written, works more or less and can be
benchmarked.
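
(To make that workflow concrete, here is a minimal sketch of my own, with
made-up names: the function below is ordinary Python, and since plain
Python is also valid Cython input, it should compile unchanged first and
only be annotated with static types later, once profiling shows it is a
hot spot.)

    # integrate.py -- plain Python; should also compile unchanged with Cython.
    def integrate(f, a, b, n=1000):
        """Approximate the integral of f over [a, b] with the midpoint rule."""
        h = (b - a) / n
        total = 0.0
        for i in range(n):
            total += f(a + (i + 0.5) * h)
        return total * h

    if __name__ == "__main__":
        # Works as pure Python today; a Cython build could later speed it up
        # by adding type declarations to this proven hot spot.
        print(integrate(lambda x: x * x, 0.0, 1.0))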


> While code generation alleviates the burden of tedious languages, it's also
> infinitely more complex, makes debugging very difficult and adds to
> prerequisite knowledge, among other drawbacks.

You can use gdb for source level debugging of Cython code and cProfile to
profile it. Try that with C-API code.

Stefan


From victor.stinner at gmail.com  Thu Apr 19 23:10:23 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Thu, 19 Apr 2012 23:10:23 +0200
Subject: [Python-Dev] PEP-419: Protecting cleanup statements from
	interruptions
In-Reply-To: <CAA0gF6qYNRx7BGwo7LjRA8xBWRj0N==hNtSVThF210VaLPRhjQ@mail.gmail.com>
References: <CAA0gF6qYNRx7BGwo7LjRA8xBWRj0N==hNtSVThF210VaLPRhjQ@mail.gmail.com>
Message-ID: <CAMpsgwYz9dQTV6yJVAHYerxEOiOQpe4k5WYFaHDTqS3q_41syQ@mail.gmail.com>

> PEP: 419
> Title: Protecting cleanup statements from interruptions
> Version: $Revision$
> Last-Modified: $Date$
> Author: Paul Colomiets <paul at colomiets.name>
> Status: Draft
> Type: Standards Track
> Content-Type: text/x-rst
> Created: 06-Apr-2012
> Python-Version: 3.3

Hi, I think your PEP should at least mention that
signal.pthread_sigmask() exists; the function was added in Python 3.3.

signal.pthread_sigmask() is maybe less practical than your proposal
(each finally block has to be patched), but it is the best way to
temporarily block signals. Using signal.pthread_sigmask(), you
guarantee that EINTR will not occur... if you use it in all threads!

http://mail.python.org/pipermail/python-ideas/2012-April/014749.html
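
As an illustration, a minimal sketch of that idea (my own example, not
from the PEP; it assumes a POSIX platform, where Python 3.3's signal
module exposes pthread_sigmask):

    import signal

    def cleanup():
        pass  # critical cleanup that should not be interrupted

    # Block SIGINT in the current thread while cleaning up, then restore
    # the previous mask so pending signals get delivered afterwards.
    old_mask = signal.pthread_sigmask(signal.SIG_BLOCK, {signal.SIGINT})
    try:
        cleanup()
    finally:
        signal.pthread_sigmask(signal.SIG_SETMASK, old_mask)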

Victor

From brian at python.org  Thu Apr 19 23:19:41 2012
From: brian at python.org (Brian Curtin)
Date: Thu, 19 Apr 2012 16:19:41 -0500
Subject: [Python-Dev] Cython for cPickle?
In-Reply-To: <jmpus4$t36$1@dough.gmane.org>
References: <jmojtt$ole$1@dough.gmane.org> <20120419144406.7e650e29@pitrou.net>
	<20120419132822.4B8702509E4@webabinitio.net>
	<CAB4yi1Pm8i25VmJ_XQgAW7ddRqHiK7DxfG72L8rHhBZBM3b-VA@mail.gmail.com>
	<jmpus4$t36$1@dough.gmane.org>
Message-ID: <CAD+XWwq-a55viKcj+btVVtY_nTUBL8DkB_ywEowYsXsoXKdU5w@mail.gmail.com>

On Thu, Apr 19, 2012 at 16:08, Stefan Behnel
>> While code generation alleviates the burden of tedious languages, it's also
>> infinitely more complex, makes debugging very difficult and adds to
>> prerequisite knowledge, among other drawbacks.
>
> You can use gdb for source level debugging of Cython code and cProfile to
> profile it. Try that with C-API code.

I know I'm in the minority of committers being on Windows, but we do
receive a good amount of reports and contributions from Windows users
who dive into the C code. The outside contributors actually gave the
strongest indication that we needed to move to VS2010.

Visual Studio by itself makes debugging unbelievably easy, and with
the Python Tools for VS plugin it even allows Visual Studio's built-in
profiler to work. I know Windows is not on most people's maps, but if
we have to scrap the debugger, that's another learning curve
attachment to evaluate.

From cs at zip.com.au  Fri Apr 20 00:07:28 2012
From: cs at zip.com.au (Cameron Simpson)
Date: Fri, 20 Apr 2012 08:07:28 +1000
Subject: [Python-Dev] PEP 418: Add monotonic time,
 performance counter and process time functions
In-Reply-To: <CAL_0O18PexBE5_i1+V67=G4kAA_41AeY=dR6GQA6xfPzCpmY3w@mail.gmail.com>
References: <CAL_0O18PexBE5_i1+V67=G4kAA_41AeY=dR6GQA6xfPzCpmY3w@mail.gmail.com>
Message-ID: <20120419220727.GA15941@cskk.homeip.net>

On 19Apr2012 10:47, Stephen J. Turnbull <stephen at xemacs.org> wrote:
| On Thu, Apr 19, 2012 at 8:15 AM, Victor Stinner
| <victor.stinner at gmail.com> wrote:
| > Well, I asked on IRC what I should do for these definitions because
| > I'm too tired to decide what to do. [[...]] I replaced these definitions with yours.
| 
| That was nice of you.  In return, I'll go over the PEP to check that
| usage is appropriate (eg, in some places "resolution" was used in the
| sense of computer science's "precision" == reported digits).

Hmm. Let me know when you're done too; my counterproposal example
implementation uses .resolution as the name for the metadata specifying
the fineness of the OS call API (not the accuracy of the clock). So I
would like to adjust my metadata to match and send Victor updated code
for the snapshot he has in the PEP.

Cheers,
-- 
Cameron Simpson <cs at zip.com.au> DoD#743
http://www.cskk.ezoshosting.com/cs/

The Few. The Proud. The Politically Incorrect.  - Steve Masticola

From stefan_ml at behnel.de  Fri Apr 20 00:21:25 2012
From: stefan_ml at behnel.de (Stefan Behnel)
Date: Fri, 20 Apr 2012 00:21:25 +0200
Subject: [Python-Dev] Cython for cPickle?
In-Reply-To: <CAD+XWwq-a55viKcj+btVVtY_nTUBL8DkB_ywEowYsXsoXKdU5w@mail.gmail.com>
References: <jmojtt$ole$1@dough.gmane.org> <20120419144406.7e650e29@pitrou.net>
	<20120419132822.4B8702509E4@webabinitio.net>
	<CAB4yi1Pm8i25VmJ_XQgAW7ddRqHiK7DxfG72L8rHhBZBM3b-VA@mail.gmail.com>
	<jmpus4$t36$1@dough.gmane.org>
	<CAD+XWwq-a55viKcj+btVVtY_nTUBL8DkB_ywEowYsXsoXKdU5w@mail.gmail.com>
Message-ID: <jmq356$t9f$1@dough.gmane.org>

Brian Curtin, 19.04.2012 23:19:
> On Thu, Apr 19, 2012 at 16:08, Stefan Behnel
>>> While code generation alleviates the burden of tedious languages, it's also
>>> infinitely more complex, makes debugging very difficult and adds to
>>> prerequisite knowledge, among other drawbacks.
>>
>> You can use gdb for source level debugging of Cython code and cProfile to
>> profile it. Try that with C-API code.
> 
> I know I'm in the minority of committers being on Windows, but we do
> receive a good amount of reports and contributions from Windows users
> who dive into the C code.

Doesn't match my experience at all - different software target audiences, I
guess.


> Visual Studio by itself makes debugging unbelievably easy, and with
> the Python Tools for VS plugin it even allows Visual Studio's built-in
> profiler to work. I know Windows is not on most people's maps, but if
> we have to scrap the debugger, that's another learning curve
> attachment to evaluate.

What I meant was that there's pdb for debugging Python code (which doesn't
know about the C code it executes) and gdb (or VS) for debugging C code,
from which you can barely infer the Python code it executes. For Cython
code, you can use gdb for both Cython and C, and within limits also for
Python code. Here's a quick intro to see what I mean:

http://docs.cython.org/src/userguide/debugging.html

For profiling, you can use cProfile for Python code (which doesn't tell you
about the C code it executes) and oprofile, callgrind, etc. (incl. VS) for
C code, from which it's non-trivial to infer the relation to the Python
code. With Cython, you can use cProfile for both Cython and Python code as
long as you stay at the source code level, and only need to descend to a
low-level profiler when you care about the exact details, usually assembly
jumps and branches.
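
As a rough sketch of the cProfile side (my own example; "work" is just a
stand-in, and the point is that a Cython module compiled with its
profile=True directive shows up in these stats like any Python function):

    import cProfile
    import pstats

    def work(n=100000):
        # Stand-in for a hot spot; a Cython-compiled version of this module
        # (built with profiling support enabled) would appear here the same way.
        return sum(i * i for i in range(n))

    cProfile.run("work()", "profile.out")
    pstats.Stats("profile.out").sort_stats("cumulative").print_stats(10)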

Anyway, I guess this is getting off-topic for this list.

Stefan


From brian at python.org  Fri Apr 20 01:35:32 2012
From: brian at python.org (Brian Curtin)
Date: Thu, 19 Apr 2012 18:35:32 -0500
Subject: [Python-Dev] Cython for cPickle?
In-Reply-To: <jmq356$t9f$1@dough.gmane.org>
References: <jmojtt$ole$1@dough.gmane.org> <20120419144406.7e650e29@pitrou.net>
	<20120419132822.4B8702509E4@webabinitio.net>
	<CAB4yi1Pm8i25VmJ_XQgAW7ddRqHiK7DxfG72L8rHhBZBM3b-VA@mail.gmail.com>
	<jmpus4$t36$1@dough.gmane.org>
	<CAD+XWwq-a55viKcj+btVVtY_nTUBL8DkB_ywEowYsXsoXKdU5w@mail.gmail.com>
	<jmq356$t9f$1@dough.gmane.org>
Message-ID: <CAD+XWwrF0i7jtD=0kzMPD4YyEmGKQd6OCY2UEAnNoDuvizhk0w@mail.gmail.com>

On Thu, Apr 19, 2012 at 17:21, Stefan Behnel <stefan_ml at behnel.de> wrote:
> Brian Curtin, 19.04.2012 23:19:
>> On Thu, Apr 19, 2012 at 16:08, Stefan Behnel
>>>> While code generation alleviates the burden of tedious languages, it's also
>>>> infinitely more complex, makes debugging very difficult and adds to
>>>> prerequisite knowledge, among other drawbacks.
>>>
>>> You can use gdb for source level debugging of Cython code and cProfile to
>>> profile it. Try that with C-API code.
>>
>> I know I'm in the minority of committers being on Windows, but we do
>> receive a good amount of reports and contributions from Windows users
>> who dive into the C code.
>
> Doesn't match my experience at all - different software target audiences, I
> guess.

I don't know what this means. I work on CPython, which is the target
audience at hand, and I come across reports and contributions from
Windows users for C extensions.

>> Visual Studio by itself makes debugging unbelievably easy, and with
>> the Python Tools for VS plugin it even allows Visual Studio's built-in
>> profiler to work. I know Windows is not on most people's maps, but if
>> we have to scrap the debugger, that's another learning curve
>> attachment to evaluate.
>
> What I meant was that there's pdb for debugging Python code (which doesn't
> know about the C code it executes) and gdb (or VS) for debugging C code,
> from which you can barely infer the Python code it executes. For Cython
> code, you can use gdb for both Cython and C, and within limits also for
> Python code. Here's a quick intro to see what I mean:
>
> http://docs.cython.org/src/userguide/debugging.html

I know what you meant. What I meant is "easy debugging on Windows goes
away, now I have to set up and learn GDB on Windows". *I* can do that.
Does the rest of the community want to have to do that as well? We
should also take into consideration how something like this affects
the third-party IDEs and their debugger support.

From eric at trueblade.com  Thu Apr 19 22:18:34 2012
From: eric at trueblade.com (Eric V. Smith)
Date: Thu, 19 Apr 2012 16:18:34 -0400
Subject: [Python-Dev] PEP 420: Implicit Namespace Packages
Message-ID: <4F90731A.1080604@trueblade.com>

If you have any comments, please join the discussion over in import-sig.

Eric.

From brett at python.org  Fri Apr 20 04:59:18 2012
From: brett at python.org (Brett Cannon)
Date: Thu, 19 Apr 2012 22:59:18 -0400
Subject: [Python-Dev] [Python-checkins] peps: Note that ImportError will
 no longer be raised due to a missing __init__.py
In-Reply-To: <E1SL0H4-0005LQ-GT@dinsdale.python.org>
References: <E1SL0H4-0005LQ-GT@dinsdale.python.org>
Message-ID: <CAP1=2W7yCFaUVwMqb-fOe=NWtNMkb_rnTNM492yTab6nBp+ykw@mail.gmail.com>

It's actually an ImportWarning, not an ImportError (or at least that's what I
meant on import-sig). If the module is eventually found, then there is no error.

On Thu, Apr 19, 2012 at 18:56, eric.smith <python-checkins at python.org>wrote:

> http://hg.python.org/peps/rev/af61fe9a56fb
> changeset:   4281:af61fe9a56fb
> user:        Eric V. Smith <eric at trueblade.com>
> date:        Thu Apr 19 18:56:22 2012 -0400
> summary:
>  Note that ImportError will no longer be raised due to a missing
> __init__.py file.
>
> files:
>  pep-0420.txt |  5 +++++
>  1 files changed, 5 insertions(+), 0 deletions(-)
>
>
> diff --git a/pep-0420.txt b/pep-0420.txt
> --- a/pep-0420.txt
> +++ b/pep-0420.txt
> @@ -148,6 +148,11 @@
>  path. With namespace packages, all entries in the path must be
>  scanned.
>
> +Note that an ImportError will no longer be raised for a directory
> +lacking an ``__init__.py`` file. Such a directory will now be imported
> +as a namespace package, whereas in prior Python versions an
> +ImportError would be raised.
> +
>  At PyCon 2012, we had a discussion about namespace packages at which
>  PEP 382 and PEP 402 were rejected, to be replaced by this PEP [1]_.
>
>
> --
> Repository URL: http://hg.python.org/peps
>
> _______________________________________________
> Python-checkins mailing list
> Python-checkins at python.org
> http://mail.python.org/mailman/listinfo/python-checkins
>
>

From eric at trueblade.com  Fri Apr 20 10:52:33 2012
From: eric at trueblade.com (Eric V. Smith)
Date: Fri, 20 Apr 2012 04:52:33 -0400
Subject: [Python-Dev] [Python-checkins] peps: Note that ImportError will
 no longer be raised due to a missing __init__.py
In-Reply-To: <CAP1=2W7yCFaUVwMqb-fOe=NWtNMkb_rnTNM492yTab6nBp+ykw@mail.gmail.com>
References: <E1SL0H4-0005LQ-GT@dinsdale.python.org>
	<CAP1=2W7yCFaUVwMqb-fOe=NWtNMkb_rnTNM492yTab6nBp+ykw@mail.gmail.com>
Message-ID: <4F9123D1.7040106@trueblade.com>

On 4/19/2012 10:59 PM, Brett Cannon wrote:
> It's actually an ImportWarning, not Error (or at least that's what I
> meant on import-sig). If the module is eventually found then there is no
> error.

My error. Fixed.

Eric.

> 
> On Thu, Apr 19, 2012 at 18:56, eric.smith <python-checkins at python.org
> <mailto:python-checkins at python.org>> wrote:
> 
>     http://hg.python.org/peps/rev/af61fe9a56fb
>     changeset:   4281:af61fe9a56fb
>     user:        Eric V. Smith <eric at trueblade.com
>     <mailto:eric at trueblade.com>>
>     date:        Thu Apr 19 18:56:22 2012 -0400
>     summary:
>      Note that ImportError will no longer be raised due to a missing
>     __init__.py file.
> 
>     files:
>      pep-0420.txt |  5 +++++
>      1 files changed, 5 insertions(+), 0 deletions(-)
> 
> 
>     diff --git a/pep-0420.txt b/pep-0420.txt
>     --- a/pep-0420.txt
>     +++ b/pep-0420.txt
>     @@ -148,6 +148,11 @@
>      path. With namespace packages, all entries in the path must be
>      scanned.
> 
>     +Note that an ImportError will no longer be raised for a directory
>     +lacking an ``__init__.py`` file. Such a directory will now be imported
>     +as a namespace package, whereas in prior Python versions an
>     +ImportError would be raised.
>     +
>      At PyCon 2012, we had a discussion about namespace packages at which
>      PEP 382 and PEP 402 were rejected, to be replaced by this PEP [1]_.
> 
> 
>     --
>     Repository URL: http://hg.python.org/peps
> 
>     _______________________________________________
>     Python-checkins mailing list
>     Python-checkins at python.org <mailto:Python-checkins at python.org>
>     http://mail.python.org/mailman/listinfo/python-checkins
> 
> 
> 
> 
> _______________________________________________
> Python-checkins mailing list
> Python-checkins at python.org
> http://mail.python.org/mailman/listinfo/python-checkins


From solipsis at pitrou.net  Fri Apr 20 13:29:07 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Fri, 20 Apr 2012 13:29:07 +0200
Subject: [Python-Dev] OS X buildbots missing
Message-ID: <20120420132907.569f189c@pitrou.net>


Hello,

For the record, we don't have any stable OS X buildbots anymore.
If you want to contribute a build slave (I hear we may have Apple
employees reading this list), please take a look at
http://wiki.python.org/moin/BuildBot

Regards

Antoine.



From kristjan at ccpgames.com  Fri Apr 20 15:28:38 2012
From: kristjan at ccpgames.com (Kristján Valur Jónsson)
Date: Fri, 20 Apr 2012 13:28:38 +0000
Subject: [Python-Dev] issue 9141, finalizers and gc module
In-Reply-To: <20120418091115.Horde.O35uQ9jz9kRPjmkTkHhisvA@webmail.df.eu>
References: <EFE3877620384242A686D52278B7CCD339509E@RKV-IT-EXCH104.ccp.ad.local>
	<20120417164536.Horde.8HL-ZFNNcXdPjYIQOrmmTmA@webmail.df.eu>
	<EFE3877620384242A686D52278B7CCD33958A7@RKV-IT-EXCH104.ccp.ad.local>
	<20120418091115.Horde.O35uQ9jz9kRPjmkTkHhisvA@webmail.df.eu>
Message-ID: <EFE3877620384242A686D52278B7CCD33B7E43@RKV-IT-EXCH104.ccp.ad.local>



> -----Original Message-----
> From: python-dev-bounces+kristjan=ccpgames.com at python.org
> [mailto:python-dev-bounces+kristjan=ccpgames.com at python.org] On
> Behalf Of martin at v.loewis.de
> Sent: 18. April 2012 07:11
> To: python-dev at python.org
> Subject: Re: [Python-Dev] issue 9141, finalizers and gc module
> 
> Invoking methods in tp_clear I find fairly harmless, in comparison. My only
> concern is that errors are silently ignored. However, I don't think this matters
> in practice, since io objects typically are not part of cycles, anyway.
> 
> > Why not allow it for all objects, then?
> 
> It's *allowed* for all objects. Why do you think it is not?
> 
Oh, because dynamic classes with __del__ methods are deliberately not collected but put in gc.garbage.  And the special case of the generator object, etc. etc.
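
As a quick illustration of that first point (my own sketch, reflecting the
behaviour of CPython up to 3.3 as I understand it): a reference cycle
between instances whose class defines __del__ is not collected and is
parked in gc.garbage instead.

    import gc

    class Node:
        def __init__(self):
            self.other = None
        def __del__(self):
            pass

    a, b = Node(), Node()
    a.other, b.other = b, a   # create a reference cycle
    del a, b
    gc.collect()
    print(gc.garbage)         # the two Node instances end up stranded here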

iobase.c probably documents its own needs well enough.  The fact that I had to raise this question here, though, means that the source code  for gcmodule.c doesn't have enough information to explain exactly the problem that it has with calling finalizers.
It seems to me that it worries that __del__ methods may not run to completion because of attribute errors, and that it would have to silence such errors to not cause unexpected noise.
That is the impression I get from this discussion.  Correctness over memory conservation, or something like that.

Btw, regarding object resurrection, I was working on a patch to get that to work better, particularly with subclasses.
You may want to check out issue 8212, whence this discussion originates.

K



From kristjan at ccpgames.com  Fri Apr 20 15:33:35 2012
From: kristjan at ccpgames.com (Kristján Valur Jónsson)
Date: Fri, 20 Apr 2012 13:33:35 +0000
Subject: [Python-Dev] issue 9141, finalizers and gc module
In-Reply-To: <CAK5idxSf=oZNyGjHnCeUyVdBdx8Jb4B4rPjs=BzkcQG=2ROCdg@mail.gmail.com>
References: <EFE3877620384242A686D52278B7CCD339509E@RKV-IT-EXCH104.ccp.ad.local>
	<20120417164536.Horde.8HL-ZFNNcXdPjYIQOrmmTmA@webmail.df.eu>
	<EFE3877620384242A686D52278B7CCD33958A7@RKV-IT-EXCH104.ccp.ad.local>
	<20120417203055.689dd7ad@pitrou.net>
	<CAK5idxSf=oZNyGjHnCeUyVdBdx8Jb4B4rPjs=BzkcQG=2ROCdg@mail.gmail.com>
Message-ID: <EFE3877620384242A686D52278B7CCD33B7E57@RKV-IT-EXCH104.ccp.ad.local>

Thanks. I wonder if these semantics might not belong in CPython too, us being consenting adults and all that :)

K

From: python-dev-bounces+kristjan=ccpgames.com at python.org [mailto:python-dev-bounces+kristjan=ccpgames.com at python.org] On Behalf Of Maciej Fijalkowski
Sent: 17. April 2012 21:29
To: Antoine Pitrou
Cc: python-dev at python.org
Subject: Re: [Python-Dev] issue 9141, finalizers and gc module


PyPy breaks cycles randomly. I think a pretty comprehensive description of what happens is here:

http://morepypy.blogspot.com/2008/02/python-finalizers-semantics-part-1.html
http://morepypy.blogspot.com/2008/02/python-finalizers-semantics-part-2.html

Cheers,
fijal

From fijall at gmail.com  Fri Apr 20 15:35:07 2012
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Fri, 20 Apr 2012 15:35:07 +0200
Subject: [Python-Dev] issue 9141, finalizers and gc module
In-Reply-To: <EFE3877620384242A686D52278B7CCD33B7E57@RKV-IT-EXCH104.ccp.ad.local>
References: <EFE3877620384242A686D52278B7CCD339509E@RKV-IT-EXCH104.ccp.ad.local>
	<20120417164536.Horde.8HL-ZFNNcXdPjYIQOrmmTmA@webmail.df.eu>
	<EFE3877620384242A686D52278B7CCD33958A7@RKV-IT-EXCH104.ccp.ad.local>
	<20120417203055.689dd7ad@pitrou.net>
	<CAK5idxSf=oZNyGjHnCeUyVdBdx8Jb4B4rPjs=BzkcQG=2ROCdg@mail.gmail.com>
	<EFE3877620384242A686D52278B7CCD33B7E57@RKV-IT-EXCH104.ccp.ad.local>
Message-ID: <CAK5idxR05-m2Bsjm+mM=NYdnUS=6MvedeWJEbwqTKuKfBXnOvg@mail.gmail.com>

On Fri, Apr 20, 2012 at 3:33 PM, Kristján Valur Jónsson <
kristjan at ccpgames.com> wrote:

>  Thanks. I wonder if these semantics might not belong in CPython too, us
> being consenting adults and all that :)
>

I would say it's saner, but it's just my opinion :)

Cheers,
fijal


>
> K
>
> From: python-dev-bounces+kristjan=ccpgames.com at python.org [mailto:
> python-dev-bounces+kristjan=ccpgames.com at python.org] On Behalf Of Maciej
> Fijalkowski
> Sent: 17. April 2012 21:29
> To: Antoine Pitrou
> Cc: python-dev at python.org
> Subject: Re: [Python-Dev] issue 9141, finalizers and gc module
>
> PyPy breaks cycles randomly. I think a pretty comprehensive description of
> what happens is here:
>
> http://morepypy.blogspot.com/2008/02/python-finalizers-semantics-part-1.html
> http://morepypy.blogspot.com/2008/02/python-finalizers-semantics-part-2.html
>
> Cheers,
> fijal
>

From eric at trueblade.com  Fri Apr 20 15:54:28 2012
From: eric at trueblade.com (Eric V. Smith)
Date: Fri, 20 Apr 2012 09:54:28 -0400
Subject: [Python-Dev] importlib is now bootstrapped (and what that means)
In-Reply-To: <CAP1=2W5A080aEOd9AYBD9gsTO74jbDh+uU8bha2ozJore1NSvA@mail.gmail.com>
References: <CAP1=2W5A080aEOd9AYBD9gsTO74jbDh+uU8bha2ozJore1NSvA@mail.gmail.com>
Message-ID: <4F916A94.40309@trueblade.com>

On 04/14/2012 02:12 PM, Brett Cannon wrote:
> My multi-year project -- started in 2006 according to my blog -- to
> rewrite import in pure Python and then bootstrap it into CPython as
> *the* implementation of __import__() is finally over (mostly)!

Maybe I'm missing something, but it seems that I need to run
importlib._bootstrap._install(sys, _imp) manually in order to make
__import__ be importlib's version. Is that not supposed to happen
automatically?

Eric.

From brett at python.org  Fri Apr 20 16:59:25 2012
From: brett at python.org (Brett Cannon)
Date: Fri, 20 Apr 2012 10:59:25 -0400
Subject: [Python-Dev] importlib is now bootstrapped (and what that means)
In-Reply-To: <4F916A94.40309@trueblade.com>
References: <CAP1=2W5A080aEOd9AYBD9gsTO74jbDh+uU8bha2ozJore1NSvA@mail.gmail.com>
	<4F916A94.40309@trueblade.com>
Message-ID: <CAP1=2W4tD5BCSZ9JsBCgdP_6KFQNZ==A5LJ4v8m30v_BKhBjQg@mail.gmail.com>

On Fri, Apr 20, 2012 at 09:54, Eric V. Smith <eric at trueblade.com> wrote:

> On 04/14/2012 02:12 PM, Brett Cannon wrote:
> > My multi-year project -- started in 2006 according to my blog -- to
> > rewrite import in pure Python and then bootstrap it into CPython as
> > *the* implementation of __import__() is finally over (mostly)!
>
> Maybe I'm missing something, but it seems that I need to run
> importlib._bootstrap._install(sys, _imp) manually in order to make
> __import__ be importlib's version. Is that not supposed to happen
> automatically?


It's happening automatically. If you look in Python/import.c you will
notice that the code that __import__() eventually calls is calling out into
the Python code. There is still some C code in order to accelerate the case
of hitting sys.modules.

From ericsnowcurrently at gmail.com  Fri Apr 20 17:02:14 2012
From: ericsnowcurrently at gmail.com (Eric Snow)
Date: Fri, 20 Apr 2012 09:02:14 -0600
Subject: [Python-Dev] importlib is now bootstrapped (and what that means)
In-Reply-To: <4F916A94.40309@trueblade.com>
References: <CAP1=2W5A080aEOd9AYBD9gsTO74jbDh+uU8bha2ozJore1NSvA@mail.gmail.com>
	<4F916A94.40309@trueblade.com>
Message-ID: <CALFfu7C7ovVJaSwiCKXtTawtKuPeTHKpydH2XESiFSTEHn1ewg@mail.gmail.com>

On Fri, Apr 20, 2012 at 7:54 AM, Eric V. Smith <eric at trueblade.com> wrote:
> On 04/14/2012 02:12 PM, Brett Cannon wrote:
>> My multi-year project -- started in 2006 according to my blog -- to
>> rewrite import in pure Python and then bootstrap it into CPython as
>> *the* implementation of __import__() is finally over (mostly)!
>
> Maybe I'm missing something, but it seems that I need to run
> importlib._bootstrap._install(sys, _imp) manually in order to make
> __import__ be importlib's version. Is that not supposed to happen
> automatically?

In the default tip (3.3a2+), importlib.__import__ is already
bootstrapped, so you don't need to mess with anything.  As well, in any
of the 3.x versions you can bind builtins.__import__ to
importlib.__import__.
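
For concreteness, that rebinding is just (a minimal sketch):

    import builtins
    import importlib

    # From here on, import statements go through importlib's pure Python
    # implementation instead of the interpreter's default __import__.
    builtins.__import__ = importlib.__import__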

If you are making changes to importlib (essentially, changes in
Lib/importlib/_bootstrap.py), you must re-build (make) cpython in
order for your changes to get pulled into the frozen copy of
importlib.  Until you do that, the built-in import machinery will be
the one that existed before your changes.  You could also re-bind
builtins.__import__ to try out the changes without having to re-build,
but ultimately your changes will have to get frozen (into
Python/importlib.h) and will be part of the commit of your changes to
importlib.

Likely you already know all this, but just in case...  :)

-eric

From brett at python.org  Fri Apr 20 17:04:13 2012
From: brett at python.org (Brett Cannon)
Date: Fri, 20 Apr 2012 11:04:13 -0400
Subject: [Python-Dev] importlib is now bootstrapped (and what that means)
In-Reply-To: <CALFfu7C7ovVJaSwiCKXtTawtKuPeTHKpydH2XESiFSTEHn1ewg@mail.gmail.com>
References: <CAP1=2W5A080aEOd9AYBD9gsTO74jbDh+uU8bha2ozJore1NSvA@mail.gmail.com>
	<4F916A94.40309@trueblade.com>
	<CALFfu7C7ovVJaSwiCKXtTawtKuPeTHKpydH2XESiFSTEHn1ewg@mail.gmail.com>
Message-ID: <CAP1=2W7OzRbA_Vm4yocJAOdQX_aSPQcKESApsbmWb_0W2q_efg@mail.gmail.com>

On Fri, Apr 20, 2012 at 11:02, Eric Snow <ericsnowcurrently at gmail.com>wrote:

> On Fri, Apr 20, 2012 at 7:54 AM, Eric V. Smith <eric at trueblade.com> wrote:
> > On 04/14/2012 02:12 PM, Brett Cannon wrote:
> >> My multi-year project -- started in 2006 according to my blog -- to
> >> rewrite import in pure Python and then bootstrap it into CPython as
> >> *the* implementation of __import__() is finally over (mostly)!
> >
> > Maybe I'm missing something, but it seems that I need to run
> > importlib._bootstrap._install(sys, _imp) manually in order to make
> > __import__ be importlib's version. Is that not supposed to happen
> > automatically?
>
> In the default tip (3.3a2+), importlib.__import__ is already
> bootstrapped, so you don't need to mess with anything.  As well, in any
> of the 3.x versions you can bind builtins.__import__ to
> importlib.__import__.
>
> If you are making changes to importlib (essentially, changes in
> Lib/importlib/_bootstrap.py), you must re-build (make) cpython in
> order for your changes to get pulled into the frozen copy of
> importlib.  Until you do that, the built-in import machinery will be
> the one that existed before your changes.  You could also re-bind
> builtins.__import__ to try out the changes without having to re-build,
> but ultimately your changes will have to get frozen (into
> Python/importlib.h) and will be part of the commit of your changes to
> importlib.
>
> Likely you already know all this, but just in case...  :)


And if you want to run a test using importlib instead of the frozen code
you can use importlib.test.regrtest to handle the injection for you.

From eric at trueblade.com  Fri Apr 20 17:07:32 2012
From: eric at trueblade.com (Eric V. Smith)
Date: Fri, 20 Apr 2012 11:07:32 -0400
Subject: [Python-Dev] importlib is now bootstrapped (and what that means)
In-Reply-To: <CAP1=2W4tD5BCSZ9JsBCgdP_6KFQNZ==A5LJ4v8m30v_BKhBjQg@mail.gmail.com>
References: <CAP1=2W5A080aEOd9AYBD9gsTO74jbDh+uU8bha2ozJore1NSvA@mail.gmail.com>
	<4F916A94.40309@trueblade.com>
	<CAP1=2W4tD5BCSZ9JsBCgdP_6KFQNZ==A5LJ4v8m30v_BKhBjQg@mail.gmail.com>
Message-ID: <4F917BB4.7010508@trueblade.com>

On 04/20/2012 10:59 AM, Brett Cannon wrote:
> 
> 
> On Fri, Apr 20, 2012 at 09:54, Eric V. Smith <eric at trueblade.com
> <mailto:eric at trueblade.com>> wrote:
> 
>     On 04/14/2012 02:12 PM, Brett Cannon wrote:
>     > My multi-year project -- started in 2006 according to my blog -- to
>     > rewrite import in pure Python and then bootstrap it into CPython as
>     > *the* implementation of __import__() is finally over (mostly)!
> 
>     Maybe I'm missing something, but it seems that I need to run
>     importlib._bootstrap._install(sys, _imp) manually in order to make
>     __import__ be importlib's version. Is that not supposed to happen
>     automatically?
> 
> 
> It's happening automatically. If you look in Python/import.c you will
> notice that the code that __import__() eventually calls is calling out
> into the Python code. There is still some C code in order to accelerate
> the case of hitting sys.modules.

Okay. But I'm running make, and that's succeeding (and it looks like it
does the right thing), yet it doesn't appear to be picking up my changes
to _bootstrap.py automatically. I'll keep investigating.

Eric.

From status at bugs.python.org  Fri Apr 20 18:07:16 2012
From: status at bugs.python.org (Python tracker)
Date: Fri, 20 Apr 2012 18:07:16 +0200 (CEST)
Subject: [Python-Dev] Summary of Python tracker Issues
Message-ID: <20120420160716.2967E1C994@psf.upfronthosting.co.za>


ACTIVITY SUMMARY (2012-04-13 - 2012-04-20)
Python tracker at http://bugs.python.org/

To view or respond to any of the issues listed below, click on the issue.
Do NOT respond to this message.

Issues counts and deltas:
  open    3396 (+19)
  closed 23015 (+44)
  total  26411 (+63)

Open issues with patches: 1440 


Issues opened (45)
==================

#12947: Examples in library/doctest.html lack the flags
http://bugs.python.org/issue12947  reopened by eric.araujo

#13994: incomplete revert in 2.7 Distutils left two copies of customiz
http://bugs.python.org/issue13994  reopened by lemburg

#14572: 2.7.3: sqlite module does not build on centos 5
http://bugs.python.org/issue14572  opened by Joakim.Sernbrant

#14573: json iterencode can not handle general iterators
http://bugs.python.org/issue14573  opened by Aaron.Staley

#14574: SocketServer doesn't handle client disconnects properly
http://bugs.python.org/issue14574  opened by vdjeric

#14576: IDLE cannot connect to subprocess - New solution
http://bugs.python.org/issue14576  opened by clikkeb

#14578: importlib doesn't check Windows registry for paths
http://bugs.python.org/issue14578  opened by brett.cannon

#14579: Vulnerability in the utf-16 decoder after error handling
http://bugs.python.org/issue14579  opened by storchaka

#14580: imp.reload can fail for sub-modules
http://bugs.python.org/issue14580  opened by paul_ollis

#14581: Support case-insensitive file extensions on Windows in importl
http://bugs.python.org/issue14581  opened by brett.cannon

#14583: try/except import fails --without-threads
http://bugs.python.org/issue14583  opened by skrah

#14584: Add gzip support the XMLRPC Server
http://bugs.python.org/issue14584  opened by rhettinger

#14585: Have test_import run more importlib tests
http://bugs.python.org/issue14585  opened by brett.cannon

#14586: TypeError: truncate() takes no keyword arguments
http://bugs.python.org/issue14586  opened by TheBiggerGuy

#14588: PEP 3115 compliant dynamic class creation
http://bugs.python.org/issue14588  opened by durban

#14590: ConfigParser doesn't strip inline comment when delimiter occur
http://bugs.python.org/issue14590  opened by grahamd

#14591: Value returned by random.random() out of valid range
http://bugs.python.org/issue14591  opened by Dave.Reid

#14594: document imp.load_dynamic()
http://bugs.python.org/issue14594  opened by scoder

#14596: struct.unpack memory leak
http://bugs.python.org/issue14596  opened by Robert.Elsner

#14597: Cannot unload dll in ctypes until script exits
http://bugs.python.org/issue14597  opened by plynch76

#14598: _cursesmodule.c fails with ncurses-5.9 on Linux
http://bugs.python.org/issue14598  opened by phaering

#14599: Windows test_import failure
http://bugs.python.org/issue14599  opened by r.david.murray

#14600: Change ImportError reference handling, naming
http://bugs.python.org/issue14600  opened by brian.curtin

#14604: spurious stat() calls in importlib
http://bugs.python.org/issue14604  opened by pitrou

#14605: Make import machinery explicit
http://bugs.python.org/issue14605  opened by brett.cannon

#14606: Memory leak subprocess on Windows
http://bugs.python.org/issue14606  opened by rfs

#14610: configure script hangs on pthread verification and PTHREAD_SCO
http://bugs.python.org/issue14610  opened by Raphael.Cruzeiro

#14611: inspect.getargs fails on some anonymous tuples
http://bugs.python.org/issue14611  opened by taschini

#14613: time.time can return NaN
http://bugs.python.org/issue14613  opened by michael.foord

#14614: PyTuple_SET_ITEM could check bounds in debug mode
http://bugs.python.org/issue14614  opened by pitrou

#14616: subprocess docs should mention pipes.quote/shlex.quote
http://bugs.python.org/issue14616  opened by eric.araujo

#14617: confusing docs with regard to __hash__
http://bugs.python.org/issue14617  opened by stoneleaf

#14618: remove modules_reloading from the interpreter state
http://bugs.python.org/issue14618  opened by eric.snow

#14619: Enhanced variable substitution for databases
http://bugs.python.org/issue14619  opened by rhettinger

#14620: Fatal Python error: Cannot recover from stack overflow.
http://bugs.python.org/issue14620  opened by The-Compiler

#14621: Hash function is not randomized properly
http://bugs.python.org/issue14621  opened by Vlado.Boza

#14624: Faster utf-16 decoder
http://bugs.python.org/issue14624  opened by storchaka

#14625: Faster utf-32 decoder
http://bugs.python.org/issue14625  opened by storchaka

#14626: os module: use keyword-only arguments for dir_fd and nofollow 
http://bugs.python.org/issue14626  opened by larry

#14627: Fatal Python Error when Python startup is interrupted by CTRL+
http://bugs.python.org/issue14627  opened by haypo

#14628: Clarify import statement documentation regarding what gets bou
http://bugs.python.org/issue14628  opened by eric.snow

#14630: non-deterministic behavior of int subclass
http://bugs.python.org/issue14630  opened by brechtm

#14631: Instance methods and WeakRefs don't mix.
http://bugs.python.org/issue14631  opened by Sundance

#14632: Race condition in WatchedFileHandler leads to unhandled except
http://bugs.python.org/issue14632  opened by phlogistonjohn

#14633: test_find_module_encoding should test for a less specific mess
http://bugs.python.org/issue14633  opened by eric.snow



Most recent 15 issues with no replies (15)
==========================================

#14620: Fatal Python error: Cannot recover from stack overflow.
http://bugs.python.org/issue14620

#14616: subprocess docs should mention pipes.quote/shlex.quote
http://bugs.python.org/issue14616

#14610: configure script hangs on pthread verification and PTHREAD_SCO
http://bugs.python.org/issue14610

#14605: Make import machinery explicit
http://bugs.python.org/issue14605

#14604: spurious stat() calls in importlib
http://bugs.python.org/issue14604

#14584: Add gzip support the XMLRPC Server
http://bugs.python.org/issue14584

#14572: 2.7.3: sqlite module does not build on centos 5
http://bugs.python.org/issue14572

#14570: Document json "sort_keys" parameter properly
http://bugs.python.org/issue14570

#14566: run_cgi reverts to using unnormalized path
http://bugs.python.org/issue14566

#14561: python-2.7.2-r3 suffers test failure at test_mhlib
http://bugs.python.org/issue14561

#14558: Documentation for unittest.main does not describe some keyword
http://bugs.python.org/issue14558

#14530: distutils's build_wininst command fails to correctly interpret
http://bugs.python.org/issue14530

#14529: distutils's build_msi command ignores the data_files argument
http://bugs.python.org/issue14529

#14517: Recompilation of sources with Distutils
http://bugs.python.org/issue14517

#14504: Suggestion to improve argparse's help messages for "store_cons
http://bugs.python.org/issue14504



Most recent 15 issues waiting for review (15)
=============================================

#14632: Race condition in WatchedFileHandler leads to unhandled except
http://bugs.python.org/issue14632

#14631: Instance methods and WeakRefs don't mix.
http://bugs.python.org/issue14631

#14625: Faster utf-32 decoder
http://bugs.python.org/issue14625

#14624: Faster utf-16 decoder
http://bugs.python.org/issue14624

#14617: confusing docs with regard to __hash__
http://bugs.python.org/issue14617

#14611: inspect.getargs fails on some anonymous tuples
http://bugs.python.org/issue14611

#14600: Change ImportError reference handling, naming
http://bugs.python.org/issue14600

#14598: _cursesmodule.c fails with ncurses-5.9 on Linux
http://bugs.python.org/issue14598

#14596: struct.unpack memory leak
http://bugs.python.org/issue14596

#14591: Value returned by random.random() out of valid range
http://bugs.python.org/issue14591

#14588: PEP 3115 compliant dynamic class creation
http://bugs.python.org/issue14588

#14586: TypeError: truncate() takes no keyword arguments
http://bugs.python.org/issue14586

#14580: imp.reload can fail for sub-modules
http://bugs.python.org/issue14580

#14579: Vulnerability in the utf-16 decoder after error handling
http://bugs.python.org/issue14579

#14568: HP-UX local libraries not included
http://bugs.python.org/issue14568



Top 10 most discussed issues (10)
=================================

#13959: Re-implement parts of imp in pure Python
http://bugs.python.org/issue13959  19 msgs

#14507: Segfault with deeply nested starmap calls
http://bugs.python.org/issue14507  15 msgs

#14596: struct.unpack memory leak
http://bugs.python.org/issue14596  14 msgs

#2377: Replace __import__ w/ importlib.__import__
http://bugs.python.org/issue2377  13 msgs

#13994: incomplete revert in 2.7 Distutils left two copies of customiz
http://bugs.python.org/issue13994  12 msgs

#14586: TypeError: truncate() takes no keyword arguments
http://bugs.python.org/issue14586  11 msgs

#8212: A tp_dealloc of a subclassed class cannot resurrect an object
http://bugs.python.org/issue8212  10 msgs

#14428: Implementation of the PEP 418
http://bugs.python.org/issue14428  10 msgs

#14621: Hash function is not randomized properly
http://bugs.python.org/issue14621  10 msgs

#10941: imaplib: Internaldate2tuple produces wrong result if date is n
http://bugs.python.org/issue10941   9 msgs



Issues closed (42)
==================

#3493: No Backslash (\) in IDLE 1.2.2
http://bugs.python.org/issue3493  closed by ned.deily

#5113: 2.5.4.3 / test_posix failing on HPUX systems
http://bugs.python.org/issue5113  closed by neologix

#6380: Deadlock during the "import" in the fork()'ed child process if
http://bugs.python.org/issue6380  closed by pitrou

#6657: Copy documentation section
http://bugs.python.org/issue6657  closed by r.david.murray

#8820: IDLE not launching correctly
http://bugs.python.org/issue8820  closed by serwy

#9403: cElementTree: replace PyObject_DEL() by Py_DECREF() to fix a c
http://bugs.python.org/issue9403  closed by haypo

#9803: IDLE closes with save while breakpoint open
http://bugs.python.org/issue9803  closed by serwy

#10576: Add a progress callback to gcmodule
http://bugs.python.org/issue10576  closed by kristjan.jonsson

#11750: Mutualize win32 functions
http://bugs.python.org/issue11750  closed by pitrou

#12599: Use 'is not None' where appropriate in importlib
http://bugs.python.org/issue12599  closed by brett.cannon

#12723: Provide an API in tkSimpleDialog for defining custom validatio
http://bugs.python.org/issue12723  closed by asvetlov

#13496: bisect module: Overflow at index computation
http://bugs.python.org/issue13496  closed by mark.dickinson

#13889: str(float) and round(float) issues with FPU precision
http://bugs.python.org/issue13889  closed by mark.dickinson

#14032: test_cmd_line_script prints undefined 'data' variable
http://bugs.python.org/issue14032  closed by python-dev

#14087: multiprocessing.Condition.wait_for missing
http://bugs.python.org/issue14087  closed by neologix

#14098: provide public C-API for reading/setting sys.exc_info()
http://bugs.python.org/issue14098  closed by loewis

#14308: '_DummyThread' object has no attribute '_Thread__block'
http://bugs.python.org/issue14308  closed by pitrou

#14385: Support other types than dict for __builtins__
http://bugs.python.org/issue14385  closed by haypo

#14386: Expose dict_proxy internal type as types.MappingProxyType
http://bugs.python.org/issue14386  closed by python-dev

#14535: three code examples in docs are not syntax highlighted
http://bugs.python.org/issue14535  closed by ezio.melotti

#14538: HTMLParser: parsing error
http://bugs.python.org/issue14538  closed by ezio.melotti

#14571: float argument required, not NoneType
http://bugs.python.org/issue14571  closed by amaury.forgeotdarc

#14575: IDLE crashes after file open in OS X
http://bugs.python.org/issue14575  closed by ned.deily

#14577: pickling uses __class__ so you can't pickle proxy/mock objects
http://bugs.python.org/issue14577  closed by michael.foord

#14582: Have importlib use return value from a loader's load_module()
http://bugs.python.org/issue14582  closed by brett.cannon

#14587: Certain diacritical marks can and should be capitalized... e.g
http://bugs.python.org/issue14587  closed by r.david.murray

#14589: test_algorithms() of test_ssl fails: certificate of sha256.tbs
http://bugs.python.org/issue14589  closed by pitrou

#14592: old-style (level=-1) importing broken after importlib changes
http://bugs.python.org/issue14592  closed by brett.cannon

#14593: PyErr_SetFromImportErrorWithNameAndPath lacks error checking
http://bugs.python.org/issue14593  closed by pitrou

#14595: Complete your registration to Python tracker -- key	25rVzaHLDO
http://bugs.python.org/issue14595  closed by r.david.murray

#14601: PEP sources not available as documented
http://bugs.python.org/issue14601  closed by benjamin.peterson

#14602: Python build fails on OS X with "$MACOSX_DEPLOYMENT_TARGET mis
http://bugs.python.org/issue14602  closed by ned.deily

#14603: List comprehension in zipfile.namelist
http://bugs.python.org/issue14603  closed by ezio.melotti

#14607: method with special keyword-only argument gives error
http://bugs.python.org/issue14607  closed by python-dev

#14608: Python 2.7.3 x86 msi - msvcr90.dll version mismatch
http://bugs.python.org/issue14608  closed by alexandrul

#14609: can't modify sys.modules during import with importlib
http://bugs.python.org/issue14609  closed by benjamin.peterson

#14612: Crash after modifying f_lineno
http://bugs.python.org/issue14612  closed by python-dev

#14615: pull some import state out of the interpreter state
http://bugs.python.org/issue14615  closed by eric.snow

#14622: Python http.server is dead slow using gethostbyaddr/getfqdn fo
http://bugs.python.org/issue14622  closed by pitrou

#14623: Shutdown exception in daemon thread
http://bugs.python.org/issue14623  closed by pitrou

#14629: discrepency between tokenize.detect_encoding() and PyTokenizer
http://bugs.python.org/issue14629  closed by loewis

#755660: allow HTMLParser to continue after a parse error
http://bugs.python.org/issue755660  closed by ezio.melotti

From janssen at parc.com  Fri Apr 20 18:58:52 2012
From: janssen at parc.com (Bill Janssen)
Date: Fri, 20 Apr 2012 09:58:52 PDT
Subject: [Python-Dev] OS X buildbots missing
In-Reply-To: <20120420132907.569f189c@pitrou.net>
References: <20120420132907.569f189c@pitrou.net>
Message-ID: <47404.1334941132@parc.com>

Antoine Pitrou <solipsis at pitrou.net> wrote:

> For the record, we don't have any stable OS X buildbots anymore.

Sigh.  That's me again.  We are currently installing a virtual private
cloud at our workspace, and I'm seeing a lot of intermittent failures in
that server room.  I need to work out a way in which buildbot restarts
automatically either when the machine reboots, or when it hangs up
(which happens every couple of weeks).

Bill



From roundup-admin at psf.upfronthosting.co.za  Fri Apr 20 19:00:10 2012
From: roundup-admin at psf.upfronthosting.co.za (Python tracker)
Date: Fri, 20 Apr 2012 17:00:10 +0000
Subject: [Python-Dev] Failed issue tracker submission
Message-ID: <20120420170010.0333C1CBB3@psf.upfronthosting.co.za>


An unexpected error occurred during the processing
of your message. The tracker administrator is being
notified.
-------------- next part --------------
Return-Path: <python-dev at python.org>
X-Original-To: report at bugs.python.org
Delivered-To: roundup+tracker at psf.upfronthosting.co.za
Received: from mail.python.org (mail.python.org [82.94.164.166])
	by psf.upfronthosting.co.za (Postfix) with ESMTPS id 833861CBB0
	for <report at bugs.python.org>; Fri, 20 Apr 2012 19:00:09 +0200 (CEST)
Received: from albatross.python.org (localhost [127.0.0.1])
	by mail.python.org (Postfix) with ESMTP id 3VZ3GK10QGzN51
	for <report at bugs.python.org>; Fri, 20 Apr 2012 19:00:09 +0200 (CEST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=python.org; s=200901;
	t=1334941209; bh=PuPHOLI7fb95kba5/ecLxSLCC9UpM27v8bYaw31epzE=;
	h=Date:Message-Id:Content-Type:MIME-Version:
	 Content-Transfer-Encoding:From:To:Subject;
	b=ZfbTowau33LvKWnJHYtZ8Fy/cAslebBopL912urudimFDYNg5n7CHpPwxlMLlLTv5
	 tR2OZmCp3w90e6h937L7R6g7mew3xHaxeRbzP06cEK0JTyOQaekSKHBxivVMuU2hjL
	 AE1J6MRlKrxJoqE8dQMyzP7+wM5o39unn76WD6bE=
Received: from localhost (HELO mail.python.org) (127.0.0.1)
  by albatross.python.org with SMTP; 20 Apr 2012 19:00:09 +0200
Received: from dinsdale.python.org (svn.python.org [IPv6:2001:888:2000:d::a4])
	(using TLSv1 with cipher AES256-SHA (256/256 bits))
	(No client certificate requested)
	by mail.python.org (Postfix) with ESMTPS
	for <report at bugs.python.org>; Fri, 20 Apr 2012 19:00:09 +0200 (CEST)
Received: from localhost
	([127.0.0.1] helo=dinsdale.python.org ident=hg)
	by dinsdale.python.org with esmtp (Exim 4.72)
	(envelope-from <python-dev at python.org>)
	id 1SLHBo-00063N-NT
	for report at bugs.python.org; Fri, 20 Apr 2012 19:00:08 +0200
Date: Fri, 20 Apr 2012 19:00:08 +0200
Message-Id: <E1SLHBo-00063N-NT at dinsdale.python.org>
Content-Type: text/plain; charset="utf8"
MIME-Version: 1.0
Content-Transfer-Encoding: base64
From: python-dev at python.org
To: report at bugs.python.org
Subject: [issue14633]

TmV3IGNoYW5nZXNldCBhMjgxYTY2MjI3MTQgYnkgQnJldHQgQ2Fubm9uIGluIGJyYW5jaCAnZGVm
YXVsdCc6Cklzc3VlICMxNDYzMzogU2ltcGxpZnkgaW1wLmZpbmRfbW9kdWUoKSB0ZXN0IGFmdGVy
IGZpeGVzIGZyb20gaXNzdWUKaHR0cDovL2hnLnB5dGhvbi5vcmcvY3B5dGhvbi9yZXYvYTI4MWE2
NjIyNzE0Cg==

From tjreedy at udel.edu  Fri Apr 20 20:09:16 2012
From: tjreedy at udel.edu (Terry Reedy)
Date: Fri, 20 Apr 2012 14:09:16 -0400
Subject: [Python-Dev] [Python-checkins] cpython: Issue #14581: Windows
 users are allowed to import modules w/o taking
In-Reply-To: <E1SLH5F-0005FE-C6@dinsdale.python.org>
References: <E1SLH5F-0005FE-C6@dinsdale.python.org>
Message-ID: <4F91A64C.2030204@udel.edu>

On 4/20/2012 12:53 PM, brett.cannon wrote:
> http://hg.python.org/cpython/rev/a32be109bd86
> changeset:   76428:a32be109bd86
> user:        Brett Cannon<brett at python.org>
> date:        Fri Apr 20 12:53:14 2012 -0400
> summary:
>    Issue #14581: Windows users are allowed to import modules w/o taking
> the file suffix's case into account, even when doing a case-sensitive
> import.

> +                name, dot, suffix = item.partition('.')

Should this be .rpartition in case there is more than one . in the name?
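
For illustration, a quick interactive sketch (the filename is made up)
showing how the two differ once a name contains an extra dot:

    >>> 'spam.tar.py'.partition('.')
    ('spam', '.', 'tar.py')
    >>> 'spam.tar.py'.rpartition('.')
    ('spam.tar', '.', 'py')

With partition() the suffix compared would be 'tar.py' rather than 'py'.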

tjr

From victor.stinner at gmail.com  Sat Apr 21 02:38:35 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Sat, 21 Apr 2012 02:38:35 +0200
Subject: [Python-Dev] [RFC] PEP 418: Add monotonic time,
 performance counter and process time functions
In-Reply-To: <CAMpsgwZvQq0ZyrM8ozDpxS4joAspmY0rPXHQKNz2j3jDDWUigA@mail.gmail.com>
References: <CAMpsgwZxZiNcqTznROd6MiPnkhmy1XtKzVPCozWp+sSiZ8dUTg@mail.gmail.com>
	<CAMpsgwZvQq0ZyrM8ozDpxS4joAspmY0rPXHQKNz2j3jDDWUigA@mail.gmail.com>
Message-ID: <CAMpsgwZ4fDAC4z_YMhWTg5B0EHP3GJNxXsPPcfgQEK2LEwohdA@mail.gmail.com>

2012/4/15 Victor Stinner <victor.stinner at gmail.com>:
>> Here is a simplified version of the first draft of the PEP 418. The
>> full version can be read online.
>> http://www.python.org/dev/peps/pep-0418/
>
> FYI there is no time.thread_time() function. It would only be
> available on Windows and Linux. It does not use seconds but CPU
> cycles. No module or program of the Python source code need such
> function, whereas all other functions added by the PEP already have
> users in the Python source code, see the Rationale section. For Linux,
> CLOCK_THREAD_CPUTIME_ID is already available in Python 3.3. For
> Windows, you can get GetThreadTimes() using ctypes or win32process.

"Explicit is better than implicit" ! I listed the limitations of the
PEP directly in the PEP:
-------------
Limitations:

* The behaviour of clocks after a system suspend is not defined in the
  documentation of new functions. The behaviour depends on the
  operating system: see the `Monotonic Clocks`_ section below. Some
  recent operating systems provide two clocks, one including time
  elapsed during system suspend, one not including this time. Most
  operating systems only provide one kind of clock.
* time.monotonic() and time.perf_counter() may or may not be adjusted.
  For example, ``CLOCK_MONOTONIC`` is slewed on Linux, whereas
  ``GetTickCount()`` is not adjusted on Windows.
  ``time.get_clock_info('monotonic')['is_adjusted']`` can be used to check
  if the monotonic clock is adjusted or not.
* No time.thread_time() function is proposed by this PEP because it is
  neither needed by the Python standard library nor a commonly requested
  feature. Such a function would only be available on Windows and Linux.
  On Linux, it is possible to use
  ``time.clock_gettime(CLOCK_THREAD_CPUTIME_ID)``. On Windows, ctypes or
  another module can be used to call the ``GetThreadTimes()``
  function. (A minimal sketch follows just after this list.)
-------------
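
As a minimal illustration of that last point (not part of the PEP text;
the call below is the Linux-only spelling that already exists in 3.3):

    import time

    # Per-thread CPU time, Linux, Python 3.3+
    thread_cpu = time.clock_gettime(time.CLOCK_THREAD_CPUTIME_ID)

On Windows the equivalent would go through GetThreadTimes() via ctypes
or win32process, as noted above.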

Victor

From anacrolix at gmail.com  Sat Apr 21 02:54:56 2012
From: anacrolix at gmail.com (Matt Joiner)
Date: Sat, 21 Apr 2012 08:54:56 +0800
Subject: [Python-Dev] Failed issue tracker submission
In-Reply-To: <20120420170010.0333C1CBB3@psf.upfronthosting.co.za>
References: <20120420170010.0333C1CBB3@psf.upfronthosting.co.za>
Message-ID: <CAB4yi1O5mgdcWR-2sdWxQ02PnHZGE5=xRFbiYKkaybc2vyh7xQ@mail.gmail.com>

I'm getting one of these every couple of days. What's the deal?
On Apr 21, 2012 1:03 AM, "Python tracker" <
roundup-admin at psf.upfronthosting.co.za> wrote:

>
> An unexpected error occurred during the processing
> of your message. The tracker administrator is being
> notified.
>
> Return-Path: <python-dev at python.org>
> X-Original-To: report at bugs.python.org
> Delivered-To: roundup+tracker at psf.upfronthosting.co.za
> Received: from mail.python.org (mail.python.org [82.94.164.166])
>        by psf.upfronthosting.co.za (Postfix) with ESMTPS id 833861CBB0
>        for <report at bugs.python.org>; Fri, 20 Apr 2012 19:00:09 +0200
> (CEST)
> Received: from albatross.python.org (localhost [127.0.0.1])
>        by mail.python.org (Postfix) with ESMTP id 3VZ3GK10QGzN51
>        for <report at bugs.python.org>; Fri, 20 Apr 2012 19:00:09 +0200
> (CEST)
> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=python.org;
> s=200901;
>        t=1334941209; bh=PuPHOLI7fb95kba5/ecLxSLCC9UpM27v8bYaw31epzE=;
>        h=Date:Message-Id:Content-Type:MIME-Version:
>         Content-Transfer-Encoding:From:To:Subject;
>        b=ZfbTowau33LvKWnJHYtZ8Fy/cAslebBopL912urudimFDYNg5n7CHpPwxlMLlLTv5
>         tR2OZmCp3w90e6h937L7R6g7mew3xHaxeRbzP06cEK0JTyOQaekSKHBxivVMuU2hjL
>         AE1J6MRlKrxJoqE8dQMyzP7+wM5o39unn76WD6bE=
> Received: from localhost (HELO mail.python.org) (127.0.0.1)
>  by albatross.python.org with SMTP; 20 Apr 2012 19:00:09 +0200
> Received: from dinsdale.python.org (svn.python.org[IPv6:2001:888:2000:d::a4])
>        (using TLSv1 with cipher AES256-SHA (256/256 bits))
>        (No client certificate requested)
>        by mail.python.org (Postfix) with ESMTPS
>        for <report at bugs.python.org>; Fri, 20 Apr 2012 19:00:09 +0200
> (CEST)
> Received: from localhost
>        ([127.0.0.1] helo=dinsdale.python.org ident=hg)
>        by dinsdale.python.org with esmtp (Exim 4.72)
>        (envelope-from <python-dev at python.org>)
>        id 1SLHBo-00063N-NT
>        for report at bugs.python.org; Fri, 20 Apr 2012 19:00:08 +0200
> Date: Fri, 20 Apr 2012 19:00:08 +0200
> Message-Id: <E1SLHBo-00063N-NT at dinsdale.python.org>
> Content-Type: text/plain; charset="utf8"
> MIME-Version: 1.0
> Content-Transfer-Encoding: base64
> From: python-dev at python.org
> To: report at bugs.python.org
> Subject: [issue14633]
>
>
> TmV3IGNoYW5nZXNldCBhMjgxYTY2MjI3MTQgYnkgQnJldHQgQ2Fubm9uIGluIGJyYW5jaCAnZGVm
>
> YXVsdCc6Cklzc3VlICMxNDYzMzogU2ltcGxpZnkgaW1wLmZpbmRfbW9kdWUoKSB0ZXN0IGFmdGVy
>
> IGZpeGVzIGZyb20gaXNzdWUKaHR0cDovL2hnLnB5dGhvbi5vcmcvY3B5dGhvbi9yZXYvYTI4MWE2
> NjIyNzE0Cg==
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20120421/db7a9ef0/attachment.html>

From brett at python.org  Sat Apr 21 03:59:21 2012
From: brett at python.org (Brett Cannon)
Date: Fri, 20 Apr 2012 21:59:21 -0400
Subject: [Python-Dev] Handling deprecations in the face of PEP 384
Message-ID: <CAP1=2W6xyPKZ3dECUSW=J1KR7msUoDpEL_EfnpPZQv9p4BO0iw@mail.gmail.com>

As I clean up Python/import.c and move much of its functionality into
Lib/imp.py, I am about to run into some stuff that was not kept private to
the file. Specifically, I have PyImport_GetMagicTag() and NullImporter_Type
which I would like to chop out and move to Lib/imp.py.

From my reading of PEP 384 that means I would need to at least deprecate
PyImport_GetMagicTag(), correct (assuming I follow through with this; I
might not bother)? What about NullImporter_Type (it lacks a Py prefix so I
am not sure if this is considered public or not)?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20120420/033de166/attachment.html>

From guido at python.org  Sat Apr 21 04:16:13 2012
From: guido at python.org (Guido van Rossum)
Date: Fri, 20 Apr 2012 19:16:13 -0700
Subject: [Python-Dev] Handling deprecations in the face of PEP 384
In-Reply-To: <CAP1=2W6xyPKZ3dECUSW=J1KR7msUoDpEL_EfnpPZQv9p4BO0iw@mail.gmail.com>
References: <CAP1=2W6xyPKZ3dECUSW=J1KR7msUoDpEL_EfnpPZQv9p4BO0iw@mail.gmail.com>
Message-ID: <CAP7+vJ+G2h-80iCEcR_4hrvEOgC0dihF6eMaTHXb44daCaWxjA@mail.gmail.com>

On Fri, Apr 20, 2012 at 6:59 PM, Brett Cannon <brett at python.org> wrote:
> As I clean up Python/import.c and move much of its functionality into
> Lib/imp.py, I am about to run into some stuff that was not kept private to
> the file. Specifically, I have PyImport_GetMagicTag() and NullImporter_Type
> which I would like to chop out and move to Lib/imp.py.
>
> From my reading of PEP 384 that means I would need to at least deprecate
> PyImport_GetMagicTag(), correct (assuming I follow through with this; I
> might not bother)? What about NullImporter_Type (it lacks a Py prefix so I
> am not sure if this is considered public or not)?

Yeah, PyImport_GetMagicTag() looks like a public API, parallel with
PyImport_GetMagicNumber(). Maybe it was accidentally not documented?
I'm not sure when it was introduced. Should we even deprecate it? I'd
say do the same thing you're doing for GetMagicNumber().

NullImporter_Type looks like it was accidentally not made static, so
don't fret about that.

-- 
--Guido van Rossum (python.org/~guido)

From rdmurray at bitdance.com  Sat Apr 21 04:25:07 2012
From: rdmurray at bitdance.com (R. David Murray)
Date: Fri, 20 Apr 2012 22:25:07 -0400
Subject: [Python-Dev] Failed issue tracker submission
In-Reply-To: <CAB4yi1O5mgdcWR-2sdWxQ02PnHZGE5=xRFbiYKkaybc2vyh7xQ@mail.gmail.com>
References: <20120420170010.0333C1CBB3@psf.upfronthosting.co.za>
	<CAB4yi1O5mgdcWR-2sdWxQ02PnHZGE5=xRFbiYKkaybc2vyh7xQ@mail.gmail.com>
Message-ID: <20120421022508.3D31E2509E5@webabinitio.net>


On Sat, 21 Apr 2012 08:54:56 +0800, Matt Joiner <anacrolix at gmail.com> wrote:
> I'm getting one of these every couple of days. What's the deal?
> On Apr 21, 2012 1:03 AM, "Python tracker" <
> roundup-admin at psf.upfronthosting.co.za> wrote:

There is a bug in the interface between roundup and hg that is new
since roundup was switched to using xapian for indexing.  When an hg
commit mentions more than one issue number, the second (or subsequent,
presumably) issue number triggers a write conflict and results in the
email doing the second issue update being rejected.  Since the email
address associated with the hg update email is python-dev, the bounce
gets sent here.

There are currently only three people who do maintenance work on the
tracker (it used to be just one), and none of us have found time to
try to figure out a fix yet.

--David

From ncoghlan at gmail.com  Sat Apr 21 07:00:42 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sat, 21 Apr 2012 15:00:42 +1000
Subject: [Python-Dev] Handling deprecations in the face of PEP 384
In-Reply-To: <CAP7+vJ+G2h-80iCEcR_4hrvEOgC0dihF6eMaTHXb44daCaWxjA@mail.gmail.com>
References: <CAP1=2W6xyPKZ3dECUSW=J1KR7msUoDpEL_EfnpPZQv9p4BO0iw@mail.gmail.com>
	<CAP7+vJ+G2h-80iCEcR_4hrvEOgC0dihF6eMaTHXb44daCaWxjA@mail.gmail.com>
Message-ID: <CADiSq7eRuPUazga02+1Xv42pdNWqAi24HE9zwH9J621qrW8SLQ@mail.gmail.com>

On Sat, Apr 21, 2012 at 12:16 PM, Guido van Rossum <guido at python.org> wrote:
> Yeah, PyImport_GetMagicTag() looks like a public API, parallel with
> PyImport_GetMagicNumber(). Maybe it was accidentally not documented?
> I'm not sure when it was introduced. Should we even deprecate it? I'd
> say do the same thing you're doing for GetMagicNumber().

I'd keep it and just make it a convenience wrapper for the call back
into the Python code.

> NullImporter_Type looks like it was accidentally not made static, so
> don't fret about that.

Yeah, the lack of the Py_ prefix suggests this one being visible is
just an accident of the implementation, and the name is unusual enough
that it never caused a symbol collision for any third parties.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From anacrolix at gmail.com  Sat Apr 21 08:15:16 2012
From: anacrolix at gmail.com (Matt Joiner)
Date: Sat, 21 Apr 2012 14:15:16 +0800
Subject: [Python-Dev] Failed issue tracker submission
In-Reply-To: <20120421022508.3D31E2509E5@webabinitio.net>
References: <20120420170010.0333C1CBB3@psf.upfronthosting.co.za>
	<CAB4yi1O5mgdcWR-2sdWxQ02PnHZGE5=xRFbiYKkaybc2vyh7xQ@mail.gmail.com>
	<20120421022508.3D31E2509E5@webabinitio.net>
Message-ID: <CAB4yi1OYmDzVxJa73yqYAy0O=vEkbr1Apzy_xUGi_KtD16ZY1g@mail.gmail.com>

Cheers
On Apr 21, 2012 10:25 AM, "R. David Murray" <rdmurray at bitdance.com> wrote:

>
> On Sat, 21 Apr 2012 08:54:56 +0800, Matt Joiner <anacrolix at gmail.com>
> wrote:
> > I'm getting one of these every couple of days. What's the deal?
> > On Apr 21, 2012 1:03 AM, "Python tracker" <
> > roundup-admin at psf.upfronthosting.co.za> wrote:
>
> There is a bug in the interface between roundup and hg that is new
> since roundup was switched to using xapian for indexing.  When an hg
> commit mentions more than one issue number, the second (or subsequent,
> presumably) issue number triggers a write conflict and results in the
> email doing the second issue update being rejected.  Since the email
> address associated with the hg update email is python-dev, the bounce
> gets sent here.
>
> There are currently only three people who do maintenance work on the
> tracker (it used to be just one), and none of us have found time to
> try to figure out a fix yet.
>
> --David
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20120421/aa039392/attachment.html>

From ncoghlan at gmail.com  Sat Apr 21 15:09:08 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sat, 21 Apr 2012 23:09:08 +1000
Subject: [Python-Dev] Expose dictproxy through collections rather than the
	types module?
Message-ID: <CADiSq7dMCe0ux+FB=cX=bQzJg=bxPZum2QpMADaz0P0sEgMfhA@mail.gmail.com>

The internal dictproxy class was recently exposed as types.MappingProxyType.
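
For anyone who hasn't bumped into it yet, a quick sketch of what it
gives you (the dict here is just an example):

    >>> import types
    >>> proxy = types.MappingProxyType({'x': 1})
    >>> proxy['x']
    1
    >>> proxy['x'] = 2    # raises TypeError; the proxy is read-only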

Since it's not very discoverable that way, would anyone object if I
moved things around so it was exposed as collections.MappingProxy
instead? The main benefit to doing so is to get it into the table of
specialised container types at the top of the collections module docs
[1].

[1] http://docs.python.org/dev/library/collections

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From ericsnowcurrently at gmail.com  Sat Apr 21 16:09:31 2012
From: ericsnowcurrently at gmail.com (Eric Snow)
Date: Sat, 21 Apr 2012 08:09:31 -0600
Subject: [Python-Dev] Expose dictproxy through collections rather than
 the types module?
In-Reply-To: <CADiSq7dMCe0ux+FB=cX=bQzJg=bxPZum2QpMADaz0P0sEgMfhA@mail.gmail.com>
References: <CADiSq7dMCe0ux+FB=cX=bQzJg=bxPZum2QpMADaz0P0sEgMfhA@mail.gmail.com>
Message-ID: <CALFfu7CfNN9PjPRn6qN5SkqrQDyK9vYB6AebRozpPaUOGakqTA@mail.gmail.com>

On Apr 21, 2012 7:11 AM, "Nick Coghlan" <ncoghlan at gmail.com> wrote:
>
> The internal dictproxy class was recently exposed as
types.MappingProxyType.
>
> Since it's not very discoverable that way, would anyone object if I
> moved things around so it was exposed as collections.MappingProxy
> instead? The main benefit to doing so is to get it into the table of
> specialised container types at the top of the collections module docs
> [1].

A discussion on this played out in http://bugs.python.org/issue14386.

-eric

>
> [1] http://docs.python.org/dev/library/collections
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
http://mail.python.org/mailman/options/python-dev/ericsnowcurrently%40gmail.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20120421/ac90eed2/attachment.html>

From rdmurray at bitdance.com  Sat Apr 21 16:43:16 2012
From: rdmurray at bitdance.com (R. David Murray)
Date: Sat, 21 Apr 2012 10:43:16 -0400
Subject: [Python-Dev] Expose dictproxy through collections rather than
	the types module?
In-Reply-To: <CADiSq7dMCe0ux+FB=cX=bQzJg=bxPZum2QpMADaz0P0sEgMfhA@mail.gmail.com>
References: <CADiSq7dMCe0ux+FB=cX=bQzJg=bxPZum2QpMADaz0P0sEgMfhA@mail.gmail.com>
Message-ID: <20120421144317.6E066250147@webabinitio.net>

On Sat, 21 Apr 2012 23:09:08 +1000, Nick Coghlan <ncoghlan at gmail.com> wrote:
> Since it's not very discoverable that way, would anyone object if I
> moved things around so it was exposed as collections.MappingProxy
> instead? The main benefit to doing so is to get it into the table of
> specialised container types at the top of the collections module docs

The short answer is yes, someone would mind, which is why it is where it
is.  Read the ticket for more: http://bugs.python.org/issue14386.

--David

From pje at telecommunity.com  Sat Apr 21 16:55:55 2012
From: pje at telecommunity.com (PJ Eby)
Date: Sat, 21 Apr 2012 10:55:55 -0400
Subject: [Python-Dev] Providing a mechanism for PEP 3115 compliant
 dynamic class creation
In-Reply-To: <CADiSq7eJf4FtRZSfSqZCJA+u=77Ziyty9okhFotTFX8k8Ye6Rg@mail.gmail.com>
References: <BANLkTi=any_UMyHx76r-VxD4frV7Te16XQ@mail.gmail.com>
	<CACoLFeS9JMj-JoQT2utU-9B6NqvLntBa3z-XXpfNSVSXPDd41g@mail.gmail.com>
	<CADiSq7eJf4FtRZSfSqZCJA+u=77Ziyty9okhFotTFX8k8Ye6Rg@mail.gmail.com>
Message-ID: <CALeMXf7yiPnLGTb5KZCB8tUHXr7Hc+u_bccORs7gRp8Eh0NQ0g@mail.gmail.com>

(Sorry I'm so late to this discussion.)

I think that it's important to take into account the fact that PEP 3115
doesn't require namespaces to implement anything more than __setitem__ and
__getitem__ (with the latter not even needing to do anything but raise
KeyError).

Among other things, this means that .update() is right out as a
general-purpose solution to initializing a 3115-compatible class: you have
to loop and set items explicitly.  So, if we're providing helper functions,
there should be a helper that handles this common case by taking the
keywords (or perhaps an ordered sequence of pairs) and doing the looping
for you.
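
(Something like this rough sketch; the name and signature are purely
illustrative:

    def populate_namespace(ns, items):
        # Can't rely on ns.update(): PEP 3115 only guarantees
        # __setitem__, plus a __getitem__ that may simply raise KeyError.
        for key, value in items:
            ns[key] = value

where items is an ordered sequence of (name, value) pairs.)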

Of course, once you're doing that, you might as well implement it by
passing a closure into __build_class__...

More below:

On Sun, Apr 15, 2012 at 7:48 AM, Nick Coghlan <ncoghlan at gmail.com> wrote:

>
> Yup, I believe that was my main objection to exposing __build_class__
> directly. There's no obligation for implementations to build a
> throwaway function to evaluate a class body.
>

Thing is, though, if an implementation is dynamic enough to be capable of
supporting PEP 3115 *at all*  (not to mention standard exec/eval
semantics), it's going to have no problem mimicking __build_class__.

I mean, to implement PEP 3115 namespaces, you *have* to support exec/eval
with arbitrary namespaces.  From that, it's only the tiniest of steps to
wrapping that exec/eval in a function object to pass to __build_class__.

Really, making that function is probably the *least* of the troubles an
alternate implementation is going to have with supporting PEP 3115 (by
far).  Hell, supporting *metaclasses* is the first big hurdle an alternate
implementation has to get over, followed by the exec/eval with arbitrary
namespaces.

Personally, I think __build_class__ should be explicitly exposed and
supported, if for no other reason than that it allows one to re-implement
old-style __metaclass__ support in 2.x modules that rely on it...  and I
have a lot of those to port.  (Which is why I also think the convenience
API for PEP 3115-compatible class creation should actually call
__build_class__ itself.  That way, if it's been replaced, then the replaced
semantics would *also* apply to dynamically-created classes.)

IOW, there'd be two functions: one that's basically "call __build_class__",
and the other that's "call __build_class__ with a convenience function to
inject these values into the prepared dictionary".

Having other convenience functions that reimplement lower-level features
than __build_class__ (like the prepare thing) sounds like a good idea, but
I think we should encourage common cases to just call something that keeps
the __setitem__ issue out of the way.

Thoughts?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20120421/ed37cee6/attachment.html>

From ncoghlan at gmail.com  Sat Apr 21 17:30:34 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 22 Apr 2012 01:30:34 +1000
Subject: [Python-Dev] Providing a mechanism for PEP 3115 compliant
 dynamic class creation
In-Reply-To: <CALeMXf7yiPnLGTb5KZCB8tUHXr7Hc+u_bccORs7gRp8Eh0NQ0g@mail.gmail.com>
References: <BANLkTi=any_UMyHx76r-VxD4frV7Te16XQ@mail.gmail.com>
	<CACoLFeS9JMj-JoQT2utU-9B6NqvLntBa3z-XXpfNSVSXPDd41g@mail.gmail.com>
	<CADiSq7eJf4FtRZSfSqZCJA+u=77Ziyty9okhFotTFX8k8Ye6Rg@mail.gmail.com>
	<CALeMXf7yiPnLGTb5KZCB8tUHXr7Hc+u_bccORs7gRp8Eh0NQ0g@mail.gmail.com>
Message-ID: <CADiSq7ez91PwYUfkwc3V-EjC-asCLjBctLupdrCzMDKAtKbB_g@mail.gmail.com>

On Sun, Apr 22, 2012 at 12:55 AM, PJ Eby <pje at telecommunity.com> wrote:
> (Sorry I'm so late to this discussion.)
>
> I think that it's important to take into account the fact that PEP 3115
> doesn't require namespaces to implement anything more than __setitem__ and
> __getitem__ (with the latter not even needing to do anything but raise
> KeyError).
>
> Among other things, this means that .update() is right out as a
> general-purpose solution to initializing a 3115-compatible class: you have
> to loop and set items explicitly.  So, if we're providing helper functions,
> there should be a helper that handles this common case by taking the
> keywords (or perhaps an ordered sequence of pairs) and doing the looping for
> you.
>
> Of course, once you're doing that, you might as well implement it by passing
> a closure into __build_class__...

Yeah, the "operator.build_class" in the tracker issue ended up looking
a whole lot like the signature of CPython's __build_class__. The main
difference is that the class body evaluation argument moves to the end
and becomes optional in order to bring the first two arguments in line
with those of type(). The signature ends up being effectively:

    def build_class(name, bases=(), kwds={}, exec_body=None):
        ...

Accepting an optional callback that is given the prepared namespace as
an argument just makes a lot more sense than either exposing a
separate prepare function or using the existing __build_class__
signature directly (which was designed with the compiler in mind, not
humans).
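
Usage would then look something like this (just a sketch against the
draft signature above, nothing final):

    def body(ns):
        ns['greeting'] = 'Hello'
        ns['greet'] = lambda self: self.greeting

    Greeter = build_class('Greeter', exec_body=body)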

> Personally, I think __build_class__ should be explicitly exposed and
> supported, if for no other reason than that it allows one to re-implement
> old-style __metaclass__ support in 2.x modules that rely on it...  and I
> have a lot of those to port.  (Which is why I also think the convenience API
> for PEP 3115-compatible class creation should actually call __build_class__
> itself.  That way, if it's been replaced, then the replaced semantics would
> *also* apply to dynamically-created classes.)

No, we already have one replaceable-per-module PITA like that (i.e.
__import__). I don't want to see us add another one.

> Having other convenience functions that reimplement lower-level features
> than __build_class__ (like the prepare thing) sounds like a good idea, but I
> think we should encourage common cases to just call something that keeps the
> __setitem__ issue out of the way.
>
> Thoughts?

Agreed on the use of a callback to avoid making too many assumptions
about the API provided by the prepared namespace.

Definitely *not* agreed on making __build_class__ part of the language
spec (or even officially supporting people that decide to replace it
with their own alternative in CPython).

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From ncoghlan at gmail.com  Sat Apr 21 17:42:47 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 22 Apr 2012 01:42:47 +1000
Subject: [Python-Dev] Expose dictproxy through collections rather than
 the types module?
In-Reply-To: <20120421144317.6E066250147@webabinitio.net>
References: <CADiSq7dMCe0ux+FB=cX=bQzJg=bxPZum2QpMADaz0P0sEgMfhA@mail.gmail.com>
	<20120421144317.6E066250147@webabinitio.net>
Message-ID: <CADiSq7chDtPZNCbX7AR5DT=PAXaM7c7o-g-pRGkfkV+4++o3jQ@mail.gmail.com>

On Sun, Apr 22, 2012 at 12:43 AM, R. David Murray <rdmurray at bitdance.com> wrote:
> On Sat, 21 Apr 2012 23:09:08 +1000, Nick Coghlan <ncoghlan at gmail.com> wrote:
>> Since it's not very discoverable that way, would anyone object if I
>> moved things around so it was exposed as collections.MappingProxy
>> instead? The main benefit to doing so is to get it into the table of
>> specialised container types at the top of the collections module docs
>
> The short answer is yes, someone would mind, which is why it is where it
> is.  Read the ticket for more: http://bugs.python.org/issue14386.

No worries. Someone was asking on python-ideas about creating an
immutable ChainMap instance, and I was going to suggest
collections.MappingProxy as the answer (for future versions, of
course). I was surprised to find it squirrelled away in the types
module instead of being somewhere anyone other than a core dev is
likely to find it.

I personally suspect the lack of demand Raymond describes comes from
people just using mutable dicts and treating them as immutable by
convention - the same way Python programs may have "immutable by
convention" objects which don't actually go to the effort needed to
fully prevent mutation of internal state after creation. Some objects
would be more correct if they did that, but in practice, it's not
worth the hassle to make sure you've implemented it correctly.

Still, it doesn't bother me enough for me to try to persuade Raymond
its sufficiently valuable to make it public through the collections
module.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From barry at python.org  Sat Apr 21 18:10:14 2012
From: barry at python.org (Barry Warsaw)
Date: Sat, 21 Apr 2012 12:10:14 -0400
Subject: [Python-Dev] Handling deprecations in the face of PEP 384
In-Reply-To: <CAP1=2W6xyPKZ3dECUSW=J1KR7msUoDpEL_EfnpPZQv9p4BO0iw@mail.gmail.com>
References: <CAP1=2W6xyPKZ3dECUSW=J1KR7msUoDpEL_EfnpPZQv9p4BO0iw@mail.gmail.com>
Message-ID: <20120421121014.1c53f194@limelight.wooz.org>

On Apr 20, 2012, at 09:59 PM, Brett Cannon wrote:

>As I clean up Python/import.c and move much of its functionality into
>Lib/imp.py, I am about to run into some stuff that was not kept private to
>the file. Specifically, I have PyImport_GetMagicTag() and NullImporter_Type
>which I would like to chop out and move to Lib/imp.py.
>
>From my reading of PEP 384 that means I would need to at least deprecate
>PyImport_GetMagicTag(), correct (assuming I follow through with this; I
>might not bother)? What about NullImporter_Type (it lacks a Py prefix so I
>am not sure if this is considered public or not)?

I'd have to go back into my archives for the discussions about the PEP, but my
recollection is that we intentionally made PyImport_GetMagicTag() a public API
method.  Thus no leading underscore.  It's a bug that it's not documented, but
OTOH, it's unlikely there are, or would be, many consumers for it.

Strictly speaking, I do think you need to deprecate the APIs.  I like Nick's
suggestion to make them C wrappers which just call back into Python.

-Barry

From pje at telecommunity.com  Sat Apr 21 19:21:14 2012
From: pje at telecommunity.com (PJ Eby)
Date: Sat, 21 Apr 2012 13:21:14 -0400
Subject: [Python-Dev] Providing a mechanism for PEP 3115 compliant
 dynamic class creation
In-Reply-To: <CADiSq7ez91PwYUfkwc3V-EjC-asCLjBctLupdrCzMDKAtKbB_g@mail.gmail.com>
References: <BANLkTi=any_UMyHx76r-VxD4frV7Te16XQ@mail.gmail.com>
	<CACoLFeS9JMj-JoQT2utU-9B6NqvLntBa3z-XXpfNSVSXPDd41g@mail.gmail.com>
	<CADiSq7eJf4FtRZSfSqZCJA+u=77Ziyty9okhFotTFX8k8Ye6Rg@mail.gmail.com>
	<CALeMXf7yiPnLGTb5KZCB8tUHXr7Hc+u_bccORs7gRp8Eh0NQ0g@mail.gmail.com>
	<CADiSq7ez91PwYUfkwc3V-EjC-asCLjBctLupdrCzMDKAtKbB_g@mail.gmail.com>
Message-ID: <CALeMXf7JK-9a4a+UjRw5U+cAbpECEJrgKRt4YXOB+M6QOzg=Og@mail.gmail.com>

On Sat, Apr 21, 2012 at 11:30 AM, Nick Coghlan <ncoghlan at gmail.com> wrote:

> On Sun, Apr 22, 2012 at 12:55 AM, PJ Eby <pje at telecommunity.com> wrote:
> > Personally, I think __build_class__ should be explicitly exposed and
> > supported, if for no other reason than that it allows one to re-implement
> > old-style __metaclass__ support in 2.x modules that rely on it...  and I
> > have a lot of those to port.  (Which is why I also think the convenience
> API
> > for PEP 3115-compatible class creation should actually call
> __build_class__
> > itself.  That way, if it's been replaced, then the replaced semantics
> would
> > *also* apply to dynamically-created classes.)
>
> No, we already have one replaceable-per-module PITA like that (i.e.
> __import__). I don't want to see us add another one.
>

Well, it's more like replacing than adding; __metaclass__ has this job in
2.x.  PEP 3115 removed what is (IMO) an important feature: the ability for
method-level decorators to affect the class, without needing user-specified
metaclasses or class decorators.

This is important for e.g. registering methods that are generic functions,
without requiring the addition of redundant metaclass or class-decorator
statements, and it's something that's possible in 2.x using __metaclass__,
but *not* possible under PEP 3115 without hooking __build_class__.
 Replacing builtins.__build_class__ allows the restoration of __metaclass__
support at the class level, which in turn allows porting 2.x code that uses
this facility.

To try to be more concrete, here's an example of sorts:

class Foo:
    @decorate(blah, fah)
    def widget(self, spam):
         ...

If @decorate needs access to the 'Foo' class object, this is not possible
under PEP 3115 without adding an explicit metaclass or class decorator to
support it.  And if you are using such method-level decorators from more
than one source, you will have to combine their class decorators or
metaclasses in some way to get this to work.  Further, if somebody forgets
to add the extra metaclass(es) and/or class decorator(s), things will
quietly break.

However, under 2.x, a straightforward solution is possible (well, to me
it's straightforward): method decorators can replace the class'
__metaclass__ and chain to the previous one, if it existed.  It's like
giving method decorators a chance to *also* be a class decorator.
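
Roughly like this 2.x sketch (untested here, and the names are
invented; real implementations also need to cope with classic-class
corner cases):

    import sys

    def class_aware(callback):
        """Method decorator that also gets a look at the finished class."""
        def decorate(func):
            frame = sys._getframe(1)   # the class body being executed
            previous = frame.f_locals.get('__metaclass__', type)
            def __metaclass__(name, bases, ns):
                cls = previous(name, bases, ns)
                callback(cls, func)    # e.g. register func against cls
                return cls
            frame.f_locals['__metaclass__'] = __metaclass__
            return func
        return decorate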

Without some *other* way to do this in 3.x, I don't have much of a choice
besides replacing __build_class__ to accomplish this use case.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20120421/9ce44fdc/attachment.html>

From db3l.net at gmail.com  Sat Apr 21 21:50:03 2012
From: db3l.net at gmail.com (David Bolen)
Date: Sat, 21 Apr 2012 15:50:03 -0400
Subject: [Python-Dev] OS X buildbots missing
References: <20120420132907.569f189c@pitrou.net>
Message-ID: <m2ty0ckgpg.fsf@valheru.db3l.homeip.net>

Antoine Pitrou <solipsis at pitrou.net> writes:

> For the record, we don't have any stable OS X buildbots anymore.
> If you want to contribute a build slave (I hear we may have Apple
> employees reading this list), please take a look at
> http://wiki.python.org/moin/BuildBot

I realize it may not qualify for the official stable list as it's a
Tiger-based buildbot, but osx-tiger is an OS X buildbot that's still
chugging along quite nicely (including doing the daily DMG builds).

-- David


From martin at v.loewis.de  Sat Apr 21 22:55:52 2012
From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=)
Date: Sat, 21 Apr 2012 22:55:52 +0200
Subject: [Python-Dev] Handling deprecations in the face of PEP 384
In-Reply-To: <CAP1=2W6xyPKZ3dECUSW=J1KR7msUoDpEL_EfnpPZQv9p4BO0iw@mail.gmail.com>
References: <CAP1=2W6xyPKZ3dECUSW=J1KR7msUoDpEL_EfnpPZQv9p4BO0iw@mail.gmail.com>
Message-ID: <4F931ED8.5010508@v.loewis.de>

> From my reading of PEP 384 that means I would need to at least deprecate
> PyImport_GetMagicTag(), correct (assuming I follow through with this; I
> might not bother)? 

All that PEP 384 gives you is that you MAY deprecate certain API
(namely, all API not guaranteed as stable). If an API is not in the
restricted set, this doesn't mean that it SHOULD be deprecated at
some point. So there is no need to deprecate anything.

OTOH, if the new implementation cannot readily support the
API anymore, it can certainly go away. If it was truly private
(i.e. _Py_*), it can go away immediately. Otherwise, it should be
deprecated-then-removed.

Regards,
Martin


From brett at python.org  Sun Apr 22 00:13:57 2012
From: brett at python.org (Brett Cannon)
Date: Sat, 21 Apr 2012 18:13:57 -0400
Subject: [Python-Dev] Handling deprecations in the face of PEP 384
In-Reply-To: <4F931ED8.5010508@v.loewis.de>
References: <CAP1=2W6xyPKZ3dECUSW=J1KR7msUoDpEL_EfnpPZQv9p4BO0iw@mail.gmail.com>
	<4F931ED8.5010508@v.loewis.de>
Message-ID: <CAP1=2W4oSLVCPH9pV-F+5SEtr-0qXS19NaE0z+GUrWKP6h=E8Q@mail.gmail.com>

On Sat, Apr 21, 2012 at 16:55, "Martin v. Löwis" <martin at v.loewis.de> wrote:

> > From my reading of PEP 384 that means I would need to at least deprecate
> > PyImport_GetMagicTag(), correct (assuming I follow through with this; I
> > might not bother)?
>
> All that PEP 384 gives you is that you MAY deprecate certain API
> (namely, all API not guaranteed as stable). If an API is not in the
> restricted set, this doesn't mean that it SHOULD be deprecated at
> some point. So there is no need to deprecate anything.
>

I meant "at least deprecate" as in "I can't just remove it from Python 3.3".

-Brett


>
> OTOH, if the new implementation cannot readily support the
> API anymore, it can certainly go away. If it was truly private
> (i.e. _Py_*), it can go away immediately. Otherwise, it should be
> deprecated-then-removed.
>
> Regards,
> Martin
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20120421/f8cad7d0/attachment.html>

From brett at python.org  Sun Apr 22 00:17:19 2012
From: brett at python.org (Brett Cannon)
Date: Sat, 21 Apr 2012 18:17:19 -0400
Subject: [Python-Dev] Handling deprecations in the face of PEP 384
In-Reply-To: <20120421121014.1c53f194@limelight.wooz.org>
References: <CAP1=2W6xyPKZ3dECUSW=J1KR7msUoDpEL_EfnpPZQv9p4BO0iw@mail.gmail.com>
	<20120421121014.1c53f194@limelight.wooz.org>
Message-ID: <CAP1=2W6bWXEmo3v8A9rf7AymjGfQW9Xds57CYiGMDx-B1MRfMw@mail.gmail.com>

On Sat, Apr 21, 2012 at 12:10, Barry Warsaw <barry at python.org> wrote:

> On Apr 20, 2012, at 09:59 PM, Brett Cannon wrote:
>
> >As I clean up Python/import.c and move much of its functionality into
> >Lib/imp.py, I am about to run into some stuff that was not kept private to
> >the file. Specifically, I have PyImport_GetMagicTag() and
> NullImporter_Type
> >which I would like to chop out and move to Lib/imp.py.
> >
> >From my reading of PEP 384 that means I would need to at least deprecate
> >PyImport_GetMagicTag(), correct (assuming I follow through with this; I
> >might not bother)? What about NullImporter_Type (it lacks a Py prefix so I
> >am not sure if this is considered public or not)?
>
> I'd have to go back into my archives for the discussions about the PEP,
> but my
> recollection is that we intentionally made PyImport_GetMagicTag() a public
> API
> method.  Thus no leading underscore.  It's a bug that it's not documented,
> but
> OTOH, it's unlikely there are, or would be, many consumers for it.
>
> Strictly speaking, I do think you need to deprecate the APIs.  I like
> Nick's
> suggestion to make them C wrappers which just call back into Python.
>

That was my plan, but the amount of code it will take to wrap them is
making me not care. =) For PyImport_GetMagicTag() I would need to expose a
new attribute on sys or somewhere which specifies the VM name. For
PyImport_GetMagicNumber() I have to do a bunch of bit twiddling to convert
a bytes object into a long which I am just flat-out not in the mood to
figure out how to do. And all of this will lead to the same amount of C
code as there currently is for what is already implemented, so I just don't
care anymore. =)

But I'm glad the clarifications are there about the stable ABI and how we
are handling it.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20120421/4781809b/attachment.html>

From ericsnowcurrently at gmail.com  Sun Apr 22 02:54:55 2012
From: ericsnowcurrently at gmail.com (Eric Snow)
Date: Sat, 21 Apr 2012 18:54:55 -0600
Subject: [Python-Dev] Handling deprecations in the face of PEP 384
In-Reply-To: <CAP1=2W6bWXEmo3v8A9rf7AymjGfQW9Xds57CYiGMDx-B1MRfMw@mail.gmail.com>
References: <CAP1=2W6xyPKZ3dECUSW=J1KR7msUoDpEL_EfnpPZQv9p4BO0iw@mail.gmail.com>
	<20120421121014.1c53f194@limelight.wooz.org>
	<CAP1=2W6bWXEmo3v8A9rf7AymjGfQW9Xds57CYiGMDx-B1MRfMw@mail.gmail.com>
Message-ID: <CALFfu7BoUUHBe16C8F_J4hUo3iJaQ3nC+sKEFTeK5_nTvG_B5w@mail.gmail.com>

On Sat, Apr 21, 2012 at 4:17 PM, Brett Cannon <brett at python.org> wrote:
> On Sat, Apr 21, 2012 at 12:10, Barry Warsaw <barry at python.org> wrote:
>> Strictly speaking, I do think you need to deprecate the APIs.  I like
>> Nick's
>> suggestion to make them C wrappers which just call back into Python.
>
>
> That was my plan, but the amount of code it will take to wrap them is making
> me not care. =) For PyImport_GetMagicTag() I would need to expose a new
> attribute on sys or somewhere which specifies the VM name. For
> PyImport_GetMagicNumber() I have to do a bunch of bit twiddling to convert a
> bytes object into a long which I am just flat-out not in the mood to figure
> out how to do. And all of this will lead to the same amount of C code as
> there currently is for what is already implemented, so I just don't care
> anymore. =)

I thought I already (mostly) worked it all out in that patch on
issue13959.  I felt really good about the approach for the magic tag
and magic bytes.

Once find_module() and reload() are done in imp.py, I'm hoping to
follow up on a few things.  That includes the unresolved mailing list
thread about sys.implementation (or whatever it was), which will help
with the magic tag.  Anyway, I don't want to curtail the gutting of
import.c quite yet (as he hears cries of "bring out your dead!").

-eric


p.s.  I understand your sentiment here, considering that mothers are
often exhausted by childbirth and the importlib bootstrap was a big
baby.  You were in labor for, what, 6 years.  <wink>  [There's an
analogy that could keep on giving. :) ]

From ericsnowcurrently at gmail.com  Sun Apr 22 03:02:17 2012
From: ericsnowcurrently at gmail.com (Eric Snow)
Date: Sat, 21 Apr 2012 19:02:17 -0600
Subject: [Python-Dev] isolating import state during tests
Message-ID: <CALFfu7CTnPZZAAWD_5Qrp9eYioN=YtPeOuXW-F9PnZ3YLg7S+g@mail.gmail.com>

It looks like the test suite accommodates a stable import state to
some extent, but would it be worth having a PEP-405-esque context
manager to help with this?  For example, something along these lines:


import site
import sys


class ImportState:
    # sys.modules is part of the interpreter state, so
    # repopulate (don't replace)
    def __enter__(self):
        self.path = sys.path[:]
        self.modules = sys.modules.copy()
        self.meta_path = sys.meta_path[:]
        self.path_hooks = sys.path_hooks[:]
        self.path_importer_cache = sys.path_importer_cache.copy()

        sys.path = site.getsitepackages()
        sys.modules.clear()
        sys.meta_path = []
        sys.path_hooks = []
        sys.path_importer_cache = {}

    def __exit__(self, *args, **kwargs):
        sys.path = self.path
        sys.modules.clear()
        sys.modules.update(self.modules)
        sys.meta_path = self.meta_path
        sys.path_hooks = self.path_hooks
        sys.path_importer_cache = self.path_importer_cache



# in some unit test:
with ImportState():
    ...  # tests


-eric

From brett at python.org  Sun Apr 22 03:13:50 2012
From: brett at python.org (Brett Cannon)
Date: Sat, 21 Apr 2012 21:13:50 -0400
Subject: [Python-Dev] Handling deprecations in the face of PEP 384
In-Reply-To: <CALFfu7BoUUHBe16C8F_J4hUo3iJaQ3nC+sKEFTeK5_nTvG_B5w@mail.gmail.com>
References: <CAP1=2W6xyPKZ3dECUSW=J1KR7msUoDpEL_EfnpPZQv9p4BO0iw@mail.gmail.com>
	<20120421121014.1c53f194@limelight.wooz.org>
	<CAP1=2W6bWXEmo3v8A9rf7AymjGfQW9Xds57CYiGMDx-B1MRfMw@mail.gmail.com>
	<CALFfu7BoUUHBe16C8F_J4hUo3iJaQ3nC+sKEFTeK5_nTvG_B5w@mail.gmail.com>
Message-ID: <CAP1=2W5jR8nSKqaMUzyHXS6OXHimLnkJJjekJPHRnVCSELsq9Q@mail.gmail.com>

On Sat, Apr 21, 2012 at 20:54, Eric Snow <ericsnowcurrently at gmail.com>wrote:

> On Sat, Apr 21, 2012 at 4:17 PM, Brett Cannon <brett at python.org> wrote:
> > On Sat, Apr 21, 2012 at 12:10, Barry Warsaw <barry at python.org> wrote:
> >> Strictly speaking, I do think you need to deprecate the APIs.  I like
> >> Nick's
> >> suggestion to make them C wrappers which just call back into Python.
> >
> >
> > That was my plan, but the amount of code it will take to wrap them is
> making
> > me not care. =) For PyImport_GetMagicTag() I would need to expose a new
> > attribute on sys or somewhere which specifies the VM name. For
> > PyImport_GetMagicNumber() I have to do a bunch of bit twiddling to
> convert a
> > bytes object into a long which I am just flat-out not in the mood to
> figure
> > out how to do. And all of this will lead to the same amount of C code as
> > there currently is for what is already implemented, so I just don't care
> > anymore. =)
>
> I thought I already (mostly) worked it all out in that patch on
> issue13959.  I felt really good about the approach for the magic tag
> and magic bytes.
>

You didn't update Python/import.c in your patches so that the public C API
continued to function. That's what is going to take a bunch of C code to
continue to maintain, not the Python side of it.


>
> Once find_module() and reload() are done in imp.py, I'm hoping to
> follow up on a few things.  That includes the unresolved mailing list
> thread about sys.implementation (or whatever it was), which will help
> with the magic tag.  Anyway, I don't want to curtail the gutting of
> import.c quite yet (as he hears cries of "bring out your dead!").
>

Even w/ all of that gutted, a decent chunk of code is holding on for dear
life thanks to PyImport_ExecCodeModuleObject() (and those that call it).
IOW the C API as it is currently exposed is going to end up being the
limiting factor of how many lines get deleted in the very end.


>
> -eric
>
>
> p.s.  I understand your sentiment here, considering that mothers are
> often exhausted by childbirth and the importlib bootstrap was a big
> baby.  You were in labor for, what, 6 years.  <wink>  [There's an
> analogy that could keep on giving. :) ]
>

It's also about maintainability. It isn't worth upping complexity just to
shift some stuff into Python code, especially when it is such simple stuff
as the magic number and tag which places practically zero burden on other
VMs to implement.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20120421/6d11aab3/attachment.html>

From brett at python.org  Sun Apr 22 03:25:44 2012
From: brett at python.org (Brett Cannon)
Date: Sat, 21 Apr 2012 21:25:44 -0400
Subject: [Python-Dev] isolating import state during tests
In-Reply-To: <CALFfu7CTnPZZAAWD_5Qrp9eYioN=YtPeOuXW-F9PnZ3YLg7S+g@mail.gmail.com>
References: <CALFfu7CTnPZZAAWD_5Qrp9eYioN=YtPeOuXW-F9PnZ3YLg7S+g@mail.gmail.com>
Message-ID: <CAP1=2W7--Gm2F3f1eW0AsTbDOQQ9UgG62CkgbUvrG=Zd56hvaQ@mail.gmail.com>

On Sat, Apr 21, 2012 at 21:02, Eric Snow <ericsnowcurrently at gmail.com>wrote:

> It looks like the test suite accommodates a stable import state to
> some extent, but would it be worth having a PEP-405-esque context
> manager to help with this?  For example, something along these lines:
>
>
> class ImportState:
>    # sys.modules is part of the interpreter state, so
>    # repopulate (don't replace)
>    def __enter__(self):
>        self.path = sys.path[:]
>        self.modules = sys.modules.copy()
>        self.meta_path = sys.meta_path[:]
>        self.path_hooks = sys.path_hooks[:]
>        self.path_importer_cache = sys.path_importer_cache.copy()
>
>        sys.path = site.getsitepackages()
>        sys.modules.clear()
>        sys.meta_path = []
>        sys.path_hooks = []
>        sys.path_importer_cache = {}
>
>    def __exit__(self, *args, **kwargs):
>        sys.path = self.path
>        sys.modules.clear()
>        sys.modules.update(self.modules)
>        sys.meta_path = self.meta_path
>        sys.path_hooks = self.path_hooks
>        sys.path_importer_cache = self.path_importer_cache
>
>
>
> # in some unit test:
> with ImportState():
>    ...  # tests
>

That's practically all done for you with a combination of
importlib.test.util.uncache and importlib.test.util.import_state.
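
Something along these lines (from memory, so treat the exact helper
signatures as assumptions and check importlib/test/util.py; my_finder
is a stand-in for whatever finder the test needs):

    from importlib.test import util

    with util.uncache('spam'), util.import_state(meta_path=[my_finder]):
        import spam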
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20120421/5d2762b9/attachment.html>

From brett at python.org  Sun Apr 22 05:53:57 2012
From: brett at python.org (Brett Cannon)
Date: Sat, 21 Apr 2012 23:53:57 -0400
Subject: [Python-Dev] path joining on Windows and imp.cache_from_source()
Message-ID: <CAP1=2W7HMqtwyfozJJ2WN-=fJLP3PfzFz8rks2YBkrp+2CNzbA@mail.gmail.com>

imp.cache_from_source() (and thus also imp.source_from_cache()) has special
semantics compared to how os.path.join() works. For instance, if you look
at test_imp you will notice it tries to reuse whichever path separator
appears farthest right in the path it is given::

  self.assertEqual(imp.cache_from_source('\\foo\\bar/baz/qux.py', True),
                   '\\foo\\bar\\baz/__pycache__/qux.{}.pyc'.format(self.tag))

But if you do the same basic operation using ntpath, you will notice it
simply doesn't care::

  >>> ntpath.join(ntpath.split('a\\b/c/d.py')[0], '__pycache__',
  ...             'd.cpython-32.pyc')
  'a\\b/c\\__pycache__\\d.cpython-32.pyc'

Basically imp.cache_from_source() goes to a bunch of effort to reuse the
farthest-right separator when an alternative separator appears *before*
the point where the path gets split. But if you look at ntpath.join(), it
doesn't even attempt that much.

Now that we can reuse os.path.join() (directly for source_from_cache(),
indirectly through easy algorithmic copying in cache_from_source()) do we
want to keep the "special" semantics, or can I change it to match what
ntpath would do when there can be more than one path separator on an OS
(i.e. not do anything special)?
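
A sketch of what the "non-special" version would boil down to (the tag
is invented, and this is just my reading of the simplification):

    import os

    head, tail = os.path.split('\\foo\\bar/baz/qux.py')
    base = tail.rpartition('.')[0]
    cached = os.path.join(head, '__pycache__',
                          '{}.{}.pyc'.format(base, 'cpython-33'))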
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20120421/eab3949c/attachment.html>

From martin at v.loewis.de  Sun Apr 22 07:45:27 2012
From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=)
Date: Sun, 22 Apr 2012 07:45:27 +0200
Subject: [Python-Dev] path joining on Windows and imp.cache_from_source()
In-Reply-To: <CAP1=2W7HMqtwyfozJJ2WN-=fJLP3PfzFz8rks2YBkrp+2CNzbA@mail.gmail.com>
References: <CAP1=2W7HMqtwyfozJJ2WN-=fJLP3PfzFz8rks2YBkrp+2CNzbA@mail.gmail.com>
Message-ID: <4F939AF7.1080600@v.loewis.de>

> Now that we can reuse os.path.join() (directly for source_from_cache(),
> indirectly through easy algorithmic copying in cache_from_source()) do
> we want to keep the "special" semantics, or can I change it to match
> what ntpath would do when there can be more than one path separator on
> an OS (i.e. not do anything special)?

This goes back to

http://codereview.appspot.com/842043/diff/1/3#newcode787

where Antoine points out that the code needs to look for altsep.

He then suggests "keep the right-most of both". I don't think he
literally meant that the right-most separator should then also be
used to separate __pycache__, but only that the right-most of
either SEP or ALTSEP is what separates the module name.

In any case, Barry apparently took this comment to mean that the
rightmost separator should be preserved.

So I don't think this is an important feature.

Regards,
Martin

From v+python at g.nevcal.com  Sun Apr 22 07:44:19 2012
From: v+python at g.nevcal.com (Glenn Linderman)
Date: Sat, 21 Apr 2012 22:44:19 -0700
Subject: [Python-Dev] path joining on Windows and imp.cache_from_source()
In-Reply-To: <CAP1=2W7HMqtwyfozJJ2WN-=fJLP3PfzFz8rks2YBkrp+2CNzbA@mail.gmail.com>
References: <CAP1=2W7HMqtwyfozJJ2WN-=fJLP3PfzFz8rks2YBkrp+2CNzbA@mail.gmail.com>
Message-ID: <4F939AB3.4020900@g.nevcal.com>

On 4/21/2012 8:53 PM, Brett Cannon wrote:
> imp.cache_from_source() (and thus also imp.source_from_cache()) has 
> special semantics compared to how os.path.join() works. For instance, 
> if you look at test_imp you will notice it tries to use the same path 
> separator as is the farthest right in the path it is given::
>
>   self.assertEqual(imp.cache_from_source('\\foo\\bar/baz/qux.py', 
> True), '\\foo\\bar\\baz/__pycache__/qux.{}.pyc'.format(self.tag))
>
> But if you do the same basic operation using ntpath, you will notice 
> it simply doesn't care::
>
> >>> ntpath.join(ntpath.split('a\\b/c/d.py')[0], '__pycache__', 
> 'd.cpython-32.pyc')
>   'a\\b/c\\__pycache__\\d.cpython-32.pyc
>
> Basically imp.cache_from_source() goes to a bunch of effort to reuse 
> the farthest right separator when there is an alternative separator 
> *before* and path splitting is done. But if you look at ntpath.join(), 
> it doesn't even attempt that much effort.
>
> Now that we can reuse os.path.join() (directly for 
> source_from_cache(), indirectly through easy algorithmic copying in 
> cache_from_source()) do we want to keep the "special" semantics, or 
> can I change it to match what ntpath would do when there can be more 
> than one path separator on an OS (i.e. not do anything special)?

Is there an issue here with importing from zip files, which use / 
separator, versus importing from the file system, which on Windows can 
use either / or \ ?  I don't know if imp.cache_from_source cares or is 
aware, but it is the only thing I can think of that might have an impact 
on such semantics.  (Well, the other is command line usage, but I don't 
think you are dealing with command lines at that point.)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20120421/7f9ea2e4/attachment-0001.html>

From brett at python.org  Sun Apr 22 08:00:11 2012
From: brett at python.org (Brett Cannon)
Date: Sun, 22 Apr 2012 02:00:11 -0400
Subject: [Python-Dev] path joining on Windows and imp.cache_from_source()
In-Reply-To: <4F939AB3.4020900@g.nevcal.com>
References: <CAP1=2W7HMqtwyfozJJ2WN-=fJLP3PfzFz8rks2YBkrp+2CNzbA@mail.gmail.com>
	<4F939AB3.4020900@g.nevcal.com>
Message-ID: <CAP1=2W5CDBQ86cAhz=LMPfZGqSzTd3oHTVQER93Nh7qsJWHoSQ@mail.gmail.com>

On Sun, Apr 22, 2012 at 01:44, Glenn Linderman <v+python at g.nevcal.com>wrote:

>  On 4/21/2012 8:53 PM, Brett Cannon wrote:
>
> imp.cache_from_source() (and thus also imp.source_from_cache()) has
> special semantics compared to how os.path.join() works. For instance, if
> you look at test_imp you will notice it tries to use the same path
> separator as is the farthest right in the path it is given::
>
>    self.assertEqual(imp.cache_from_source('\\foo\\bar/baz/qux.py',
> True), '\\foo\\bar\\baz/__pycache__/qux.{}.pyc'.format(self.tag))
>
>  But if you do the same basic operation using ntpath, you will notice it
> simply doesn't care::
>
>    >>> ntpath.join(ntpath.split('a\\b/c/d.py')[0], '__pycache__',
> 'd.cpython-32.pyc')
>   'a\\b/c\\__pycache__\\d.cpython-32.pyc
>
>  Basically imp.cache_from_source() goes to a bunch of effort to reuse the
> farthest right separator when there is an alternative separator *before*
> and path splitting is done. But if you look at ntpath.join(), it doesn't
> even attempt that much effort.
>
>  Now that we can reuse os.path.join() (directly for source_from_cache(),
> indirectly through easy algorithmic copying in cache_from_source()) do we
> want to keep the "special" semantics, or can I change it to match what
> ntpath would do when there can be more than one path separator on an OS
> (i.e. not do anything special)?
>
>
> Is there an issue here with importing from zip files, which use /
> separator, versus importing from the file system, which on Windows can use
> either / or \ ?  I don't know if imp.cache_from_source cares or is aware,
> but it is the only thing I can think of that might have an impact on such
> semantics.  (Well, the other is command line usage, but I don't think you
> are dealing with command lines at that point.)
>

Right now zipimport doesn't even support __pycache__ (I think). Besides,
zipimport already does a string substitution of os.altsep with os.sep (see
Modules/zipimport.c:90 amongst other places) so it also doesn't care in the
end.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20120422/c3c50694/attachment.html>

From brett at python.org  Sun Apr 22 08:09:31 2012
From: brett at python.org (Brett Cannon)
Date: Sun, 22 Apr 2012 02:09:31 -0400
Subject: [Python-Dev] path joining on Windows and imp.cache_from_source()
In-Reply-To: <4F939AF7.1080600@v.loewis.de>
References: <CAP1=2W7HMqtwyfozJJ2WN-=fJLP3PfzFz8rks2YBkrp+2CNzbA@mail.gmail.com>
	<4F939AF7.1080600@v.loewis.de>
Message-ID: <CAP1=2W7dHEpoQXjCCsqvpAW72ZBczv=8VG6XjsiioHj7eQ2-6Q@mail.gmail.com>

On Sun, Apr 22, 2012 at 01:45, "Martin v. Löwis" <martin at v.loewis.de> wrote:

> > Now that we can reuse os.path.join() (directly for source_from_cache(),
> > indirectly through easy algorithmic copying in cache_from_source()) do
> > we want to keep the "special" semantics, or can I change it to match
> > what ntpath would do when there can be more than one path separator on
> > an OS (i.e. not do anything special)?
>
> This goes back to
>
> http://codereview.appspot.com/842043/diff/1/3#newcode787
>
> where Antoine points out that the code needs to look for altsep.
>
> He then suggests "keep the right-most of both". I don't think he
> literally meant that the right-most separator should then also be
> used to separate __pycache__, but only that the right-most of
> either SEP or ALTSEP is what separates the module name.
>
> In any case, Barry apparently took this comment to mean that the
> rightmost separator should be preserved.
>
> So I don't think this is an important feature.
>

OK, then I'll go back to the ntpath.join()/split() semantics of caring about
altsep on split but not on join, to keep it consistent w/ os.path and what
people are used to.

From ncoghlan at gmail.com  Sun Apr 22 09:17:52 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 22 Apr 2012 17:17:52 +1000
Subject: [Python-Dev] [Python-checkins] cpython: issue2193 - Update docs
 about the legal characters allowed in Cookie name
In-Reply-To: <E1SLmb2-0002Lh-3t@dinsdale.python.org>
References: <E1SLmb2-0002Lh-3t@dinsdale.python.org>
Message-ID: <CADiSq7cPrPV8wTeeD3JTuTO-m_5-S63sLgEUYk_MW113v=L2Vw@mail.gmail.com>

On Sun, Apr 22, 2012 at 12:32 PM, senthil.kumaran
<python-checkins at python.org> wrote:
> http://hg.python.org/cpython/rev/8cae3ee7f691
> changeset:   76465:8cae3ee7f691
> parent:      76462:0a63868c5e95
> user:        Senthil Kumaran <senthil at uthcode.com>
> date:        Sun Apr 22 10:31:52 2012 +0800
> summary:
> issue2193 - Update docs about the legal characters allowed in Cookie name

You missed the dummy merge from 3.2 to indicate that this change had
been applied to both branches independently.

Should be fixed in my commit for issue #14026

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From solipsis at pitrou.net  Sun Apr 22 11:20:11 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sun, 22 Apr 2012 11:20:11 +0200
Subject: [Python-Dev] path joining on Windows and imp.cache_from_source()
References: <CAP1=2W7HMqtwyfozJJ2WN-=fJLP3PfzFz8rks2YBkrp+2CNzbA@mail.gmail.com>
	<4F939AF7.1080600@v.loewis.de>
Message-ID: <20120422112011.2b7f4812@pitrou.net>

On Sun, 22 Apr 2012 07:45:27 +0200
"Martin v. L?wis" <martin at v.loewis.de> wrote:
> 
> This goes back to
> 
> http://codereview.appspot.com/842043/diff/1/3#newcode787
> 
> where Antoine points out that the code needs to look for altsep.
> 
> He then suggests "keep the right-most of both". I don't think he
> literally meant that the right-most separator should then also be
> used to separate __pycache__, but only that the right-most of
> either SEP or ALTSEP is what separates the module name.

Indeed :-)

Thanks

Antoine.



From solipsis at pitrou.net  Sun Apr 22 14:30:16 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sun, 22 Apr 2012 14:30:16 +0200
Subject: [Python-Dev] OS X buildbots missing
References: <20120420132907.569f189c@pitrou.net>
	<m2ty0ckgpg.fsf@valheru.db3l.homeip.net>
Message-ID: <20120422143016.4af091e5@pitrou.net>

On Sat, 21 Apr 2012 15:50:03 -0400
David Bolen <db3l.net at gmail.com> wrote:
> Antoine Pitrou <solipsis at pitrou.net> writes:
> 
> > For the record, we don't have any stable OS X buildbots anymore.
> > If you want to contribute a build slave (I hear we may have Apple
> > employees reading this list), please take a look at
> > http://wiki.python.org/moin/BuildBot
> 
> I realize it may not qualify for the official stable list as it's a
> Tiger-based buildbot, but osx-tiger is an OS X buildbot that's still
> chugging along quite nicely (including doing the daily DMG builds).

Well, the reason it can't qualify for the stable list right now is that
there's a recurrent test_logging failure on it:
http://bugs.python.org/issue14644

If that failure gets fixed, we could see if it's consistently green,
and then put it in the stable bunch.

Regards

Antoine.



From alexandre at peadrop.com  Sun Apr 22 21:27:52 2012
From: alexandre at peadrop.com (Alexandre Vassalotti)
Date: Sun, 22 Apr 2012 15:27:52 -0400
Subject: [Python-Dev] Cython for cPickle?
In-Reply-To: <jmojtt$ole$1@dough.gmane.org>
References: <jmojtt$ole$1@dough.gmane.org>
Message-ID: <CANcUUed6MWLQ69Xd=64Ga8FiAo=gGdp8Kn+22NFSTV=P4sABZw@mail.gmail.com>

On Thu, Apr 19, 2012 at 4:55 AM, Stefan Behnel <stefan_ml at behnel.de> wrote:
>
> That sounds like less than two weeks of work, maybe even if we add the
> marshal module to it.
> In less than a month of GSoC time, this could easily reach a point where
> it's "close to the speed of what we have" and "fast enough", but a lot more
> accessible and maintainable, thus also making it easier to add the
> extensions described in the PEP.
>
> What do you think?


As others have pointed out, many users of pickle depend on its performance.
The main reason why _pickle.c is so big is all the low-level optimizations
we have in there. We have custom stack and dictionary implementations just
for the sake of speed. We also have fast paths for I/O operations and
function calls. These optimizations alone easily take 2000 lines of
code, and they are not micro-optimizations. Each of them was shown to give
speedups of one to several orders of magnitude.

So I disagree that we could easily reach the point where it's "close to the
speed of what we have." And if we were to attempt this, it would be a
multiple months undertaking. I would rather see that time spent on
improving pickle than on yet another reimplementation.
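
To make the stakes concrete, here is a rough measurement sketch (it assumes
the Python 3.x layout in which the pure-Python fallback is exposed as
pickle._Pickler next to the C-accelerated pickle.Pickler; the numbers will
vary with the payload)::

    import io
    import pickle
    import timeit

    # A moderately nested payload; treat the timings as indicative only.
    data = [{'a': i, 'b': str(i), 'c': [i] * 3} for i in range(10000)]

    def dump_with(pickler_class):
        buf = io.BytesIO()
        pickler_class(buf, protocol=2).dump(data)

    print('C pickle:    %.3fs' % timeit.timeit(
        lambda: dump_with(pickle.Pickler), number=20))
    print('pure Python: %.3fs' % timeit.timeit(
        lambda: dump_with(pickle._Pickler), number=20))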

-- Alexandre

From db3l.net at gmail.com  Mon Apr 23 00:06:36 2012
From: db3l.net at gmail.com (David Bolen)
Date: Sun, 22 Apr 2012 18:06:36 -0400
Subject: [Python-Dev] OS X buildbots missing
References: <20120420132907.569f189c@pitrou.net>
	<m2ty0ckgpg.fsf@valheru.db3l.homeip.net>
	<20120422143016.4af091e5@pitrou.net>
Message-ID: <m2mx63juab.fsf@valheru.db3l.homeip.net>

Antoine Pitrou <solipsis at pitrou.net> writes:

> Well, the reason it can't qualify for the stable list right now is that
> there's a recurrent test_logging failure on it:
> http://bugs.python.org/issue14644

Yeah, I wasn't necessarily suggesting it be in the stable set, just
mentioning that there is at least an OS X buildbot available, if only
as a reference until another one comes on-line.

In the past I think it was tough to keep Tiger green, though several
people put a decent amount of effort into cleaning things up and I
believe it's been pretty good for a while now, the current 3.x issue
notwithstanding (which is probably at least in part a chicken-and-egg
problem in terms of awareness).  The other branches look good, at
least for the limited history on the web interface, and it looks like
the 3.x change worked.

I know I still build some of my OS X applications under Tiger (since
it's easier to support later OS X versions from an earlier build than
vice versa), but I don't know if Python necessarily wants to require
that for releases, ala stable.

-- David


From martin at v.loewis.de  Mon Apr 23 00:12:57 2012
From: martin at v.loewis.de (martin at v.loewis.de)
Date: Mon, 23 Apr 2012 00:12:57 +0200
Subject: [Python-Dev] Cython for cPickle?
In-Reply-To: <CANcUUed6MWLQ69Xd=64Ga8FiAo=gGdp8Kn+22NFSTV=P4sABZw@mail.gmail.com>
References: <jmojtt$ole$1@dough.gmane.org>
	<CANcUUed6MWLQ69Xd=64Ga8FiAo=gGdp8Kn+22NFSTV=P4sABZw@mail.gmail.com>
Message-ID: <20120423001257.Horde.DoABDbuWis5PlIJpvCYj37A@webmail.df.eu>

> So I disagree that we could easily reach the point where it's "close to the
> speed of what we have." And if we were to attempt this, it would be a
> multiple months undertaking. I would rather see that time spent on
> improving pickle than on yet another reimplementation.

Of course, this being free software, anybody can spend time on whatever they
please, and this should not make anybody feel sad. You just don't get merits
if you work on stuff that nobody cares about.

Regards,
Martin



From greg.ewing at canterbury.ac.nz  Mon Apr 23 00:48:16 2012
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Mon, 23 Apr 2012 10:48:16 +1200
Subject: [Python-Dev] Cython for cPickle?
In-Reply-To: <CANcUUed6MWLQ69Xd=64Ga8FiAo=gGdp8Kn+22NFSTV=P4sABZw@mail.gmail.com>
References: <jmojtt$ole$1@dough.gmane.org>
	<CANcUUed6MWLQ69Xd=64Ga8FiAo=gGdp8Kn+22NFSTV=P4sABZw@mail.gmail.com>
Message-ID: <4F948AB0.9090707@canterbury.ac.nz>

Alexandre Vassalotti wrote:
> 
> We have custom stack and 
> dictionary implementations just for the sake of speed. We also have fast 
> paths for I/O operations and function calls.

All of that could very likely be carried over almost
unchanged into a Cython version. I don't see why it
should take multiple months. It's not a matter of
rewriting it from scratch, just translating it from
one dialect (C) to another (the C subset of Cython).

-- 
Greg

From alexandre at peadrop.com  Mon Apr 23 01:27:25 2012
From: alexandre at peadrop.com (Alexandre Vassalotti)
Date: Sun, 22 Apr 2012 19:27:25 -0400
Subject: [Python-Dev] Cython for cPickle?
In-Reply-To: <20120423001257.Horde.DoABDbuWis5PlIJpvCYj37A@webmail.df.eu>
References: <jmojtt$ole$1@dough.gmane.org>
	<CANcUUed6MWLQ69Xd=64Ga8FiAo=gGdp8Kn+22NFSTV=P4sABZw@mail.gmail.com>
	<20120423001257.Horde.DoABDbuWis5PlIJpvCYj37A@webmail.df.eu>
Message-ID: <CANcUUecLJEckovqbsJMecP+PyAaZYY=NZMXnUKSefJzp_j49YQ@mail.gmail.com>

On Sun, Apr 22, 2012 at 6:12 PM, <martin at v.loewis.de> wrote:

>  So I disagree that we could easily reach the point where it's "close to
>> the
>> speed of what we have." And if we were to attempt this, it would be a
>> multiple months undertaking. I would rather see that time spent on
>> improving pickle than on yet another reimplementation.
>>
>
> Of course, this being free software, anybody can spend time on whatever
> they
> please, and this should not make anybody feel sad. You just don't get
> merits
> if you work on stuff that nobody cares about.


Yes, of course. I don't want to discourage anyone from investigating this
option; in fact, I would very much like to see myself proven wrong. But, if
I understood Stefan correctly, he is proposing to have a GSoC student do
the work, which I would feel uneasy about since we have no idea how
valuable this would be as a contribution.

-- Alexandre

From ncoghlan at gmail.com  Mon Apr 23 03:34:30 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 23 Apr 2012 11:34:30 +1000
Subject: [Python-Dev] Cython for cPickle?
In-Reply-To: <CANcUUecLJEckovqbsJMecP+PyAaZYY=NZMXnUKSefJzp_j49YQ@mail.gmail.com>
References: <jmojtt$ole$1@dough.gmane.org>
	<CANcUUed6MWLQ69Xd=64Ga8FiAo=gGdp8Kn+22NFSTV=P4sABZw@mail.gmail.com>
	<20120423001257.Horde.DoABDbuWis5PlIJpvCYj37A@webmail.df.eu>
	<CANcUUecLJEckovqbsJMecP+PyAaZYY=NZMXnUKSefJzp_j49YQ@mail.gmail.com>
Message-ID: <CADiSq7fDtsEfcVdS=g95OP2_yYwbC=MFx1joLxUQDouHToJeGQ@mail.gmail.com>

On Mon, Apr 23, 2012 at 9:27 AM, Alexandre Vassalotti
<alexandre at peadrop.com> wrote:
> On Sun, Apr 22, 2012 at 6:12 PM, <martin at v.loewis.de> wrote:
>> Of course, this being free software, anybody can spend time on whatever
>> they
>> please, and this should not make anybody feel sad. You just don't get
>> merits
>> if you work on stuff that nobody cares about.
>
>
> Yes, of course. I don't want to discourage anyone to investigate this
> option - in fact, I would very much like to see myself proven wrong. But, if I
> understood Stefan correctly, he is proposing to have a GSoC student to do
> the work, to which I would feel uneasy about since we have no idea how
> valuable this would be as a contribution.

So long as it's made clear to the students applying that it's a proof
of concept that may return a negative result (i.e. "it was tried, it
proved to be a bad idea") I don't see a problem with it. The freedom
to try out multiple ideas in parallel is one of the great strengths of
open source.

We've had GSoC students try unsuccessful experiments in the past and
have gained useful information as a result (e.g. the main reason I
know the Import Engine API proposed in the deferred PEP 406 isn't
adequate as currently written is because of the design level problems
Greg found when implementing it last summer. The currently documented
design simply doesn't achieve the full objectives of the PEP)

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From guido at python.org  Mon Apr 23 04:09:35 2012
From: guido at python.org (Guido van Rossum)
Date: Sun, 22 Apr 2012 19:09:35 -0700
Subject: [Python-Dev] Cython for cPickle?
In-Reply-To: <CADiSq7fDtsEfcVdS=g95OP2_yYwbC=MFx1joLxUQDouHToJeGQ@mail.gmail.com>
References: <jmojtt$ole$1@dough.gmane.org>
	<CANcUUed6MWLQ69Xd=64Ga8FiAo=gGdp8Kn+22NFSTV=P4sABZw@mail.gmail.com>
	<20120423001257.Horde.DoABDbuWis5PlIJpvCYj37A@webmail.df.eu>
	<CANcUUecLJEckovqbsJMecP+PyAaZYY=NZMXnUKSefJzp_j49YQ@mail.gmail.com>
	<CADiSq7fDtsEfcVdS=g95OP2_yYwbC=MFx1joLxUQDouHToJeGQ@mail.gmail.com>
Message-ID: <CAP7+vJLoEJ6Cu=djWxOmDV-1fSDQNsu5+D_4DqTb1PQzVnAk7A@mail.gmail.com>

On Sun, Apr 22, 2012 at 6:34 PM, Nick Coghlan <ncoghlan at gmail.com> wrote:
> On Mon, Apr 23, 2012 at 9:27 AM, Alexandre Vassalotti
> <alexandre at peadrop.com> wrote:
>> On Sun, Apr 22, 2012 at 6:12 PM, <martin at v.loewis.de> wrote:
>>> Of course, this being free software, anybody can spend time on whatever
>>> they
>>> please, and this should not make anybody feel sad. You just don't get
>>> merits
>>> if you work on stuff that nobody cares about.
>>
>>
>> Yes, of course. I don't want to discourage anyone to investigate this
>> option - in fact, I would very much like to see myself proven wrong. But, if I
>> understood Stefan correctly, he is proposing to have a GSoC student to do
>> the work, to which I would feel uneasy about since we have no idea how
>> valuable this would be as a contribution.
>
> So long as it's made clear to the students applying that it's a proof
> of concept that may return a negative result (i.e. "it was tried, it
> proved to be a bad idea") I don't see a problem with it. The freedom
> to try out multiple ideas in parallel is one of the great strengths of
> open source.
>
> We've had GSoC students try unsuccessful experiments in the past and
> have gained useful information as a result (e.g. the main reason I
> know the Import Engine API proposed in the deferred PEP 406 isn't
> adequate as currently written is because of the design level problems
> Greg found when implementing it last summer. The currently documented
> design simply doesn't achieve the full objectives of the PEP)

However, I think that in this case the success may be predetermined,
or at least not determined by technical success alone. I have a lot of
respect for Cython, but I don't think it is right to have any part of
core Python depend on it. Cython is an incredibly complex and
relatively young (and still fast evolving) piece of technology, while
I think that core dependencies should be minimized and limited to
absolutely fundamental building blocks.

-- 
--Guido van Rossum (python.org/~guido)

From mark at hotpy.org  Mon Apr 23 10:32:04 2012
From: mark at hotpy.org (Mark Shannon)
Date: Mon, 23 Apr 2012 09:32:04 +0100
Subject: [Python-Dev] What do PyAPI_FUNC & PyAPI_DATA mean?
Message-ID: <4F951384.5040705@hotpy.org>

Many (most?) of the function declarations in the CPython header files
are annotated with the PyAPI_FUNC declaration.
Similarly for data declarations and PyAPI_DATA

What do they mean, exactly? From the name I would expect that they are a 
way of declaring a function or datum to be part of the API, but their 
usage seems to be more to do with linkage.

The reason I am asking is that the new dictionary implementation 
declares a few functions used to communicate between dictobject.c, 
typeobject.c and ceval.c which cannot be static functions, but are not 
intended to be part of the API.

Cheers,
Mark.

From flub at devork.be  Mon Apr 23 12:15:31 2012
From: flub at devork.be (Floris Bruynooghe)
Date: Mon, 23 Apr 2012 11:15:31 +0100
Subject: [Python-Dev] Suggested addition to PEP 8 for context managers
In-Reply-To: <CAAWk_Dx=KSc8SzXpGQyAx-wbbUdvEODYrHj7QcF=cxQ7=Fbm0Q@mail.gmail.com>
References: <8402DB76-DF4F-41BD-BD9B-3689AF3D6159@gmail.com>
	<20120416113037.66e4da6f@limelight.wooz.org>
	<CANG+ZTNCwkHapAxZ87L-fcbPfi8FQwK=kSY7OCEApVoHCryJeA@mail.gmail.com>
	<4F8D1377.5020001@redhat.com>
	<20120417122502.0B9D82509E8@webabinitio.net>
	<20120417113631.7fb1b543@resist.wooz.org>
	<CAAWk_DzUK2gRriLp3DYe1PRC8qU3N3Kkes75XSA2hMSKpUvzmQ@mail.gmail.com>
	<CAP7+vJ+i-Qk7uOO_=XxBykMK3G01uTKODWKmpkhUc9+-eFZZWA@mail.gmail.com>
	<CAPTjJmr8SnjwDd=03TOH+T-vnQhh3ZCjMkKqJkctozX3q=JeYQ@mail.gmail.com>
	<jmmpsn$1k1$1@dough.gmane.org>
	<CAP7+vJL3v=SKzhJD8P40hB94wRiBE3MfNY9P7VtY2itC=ZcHag@mail.gmail.com>
	<CAF-Rda8Uqf0e80oPQ4JvmqFtHjsCgt6_Q7dxPo8dJw7CX+QCsA@mail.gmail.com>
	<CAPTjJmpZ7fZu+do3akgvWaE=Yin+gMoxxybbrCnESKFZBhxu3Q@mail.gmail.com>
	<CAP7+vJLqy+Agqfs2G-JRnmkLk1R905yU=wB444UurrgKr1j3gQ@mail.gmail.com>
	<20120419105534.7f90fc29@rivendell>
	<CAAWk_Dx=KSc8SzXpGQyAx-wbbUdvEODYrHj7QcF=cxQ7=Fbm0Q@mail.gmail.com>
Message-ID: <CAAWk_Dy8H8wc0ZSG7vaiYLLggiS17ujTwhqhLfwKk_VPGhSdSw@mail.gmail.com>

[resent since I accidentally dropped the list]

Hi,

On 19 April 2012 15:55, Barry Warsaw <barry at python.org> wrote:
> I'll make this change to the PEP.  I'm not entirely sure the Yes/No examples
> are great illustrations of this change in wording though.  Here's the diff so
> far (uncommitted):
>
> diff -r 34076bfed420 pep-0008.txt
> --- a/pep-0008.txt      Thu Apr 19 10:32:50 2012 +0200
> +++ b/pep-0008.txt      Thu Apr 19 10:53:15 2012 -0400
> @@ -305,7 +305,11 @@
>   ``>=``, ``in``, ``not in``, ``is``, ``is not``), Booleans (``and``,
>   ``or``, ``not``).
>
> -- Use spaces around arithmetic operators:
> +- If operators with different priorities are used, consider adding
> +  whitespace around the operators with the lowest priority(ies). This
> +  is very much to taste; however, never use more than one space, and
> +  always have the same amount of whitespace on both sides of a binary
> +  operator.

While the text is certainly an improvement, it seems to me that some
of the examples currently listed under "No:" should be moved to
"Yes:"

"""
No:
i=i+1
submitted +=1
x = x*2 - 1
hypot2 = x*x + y*y
c = (a+b) * (a-b)
"""

In particular "x = x*2 - 1" and "hypot2 = x*x + y*y" sound like they
should be under "Yes".
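
For comparison, here is roughly how those lines read when spaced by priority
(an illustration only, not text from the PEP diff; dummy values are assigned
so the snippet actually runs)::

    i = submitted = x = y = a = b = 1

    i = i + 1
    submitted += 1
    x = x*2 - 1
    hypot2 = x*x + y*y
    c = (a+b) * (a-b)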

Regards,
Floris


-- 
Debian GNU/Linux -- The Power of Freedom
www.debian.org | www.gnu.org | www.kernel.org

From senthil at uthcode.com  Mon Apr 23 14:02:36 2012
From: senthil at uthcode.com (Senthil Kumaran)
Date: Mon, 23 Apr 2012 20:02:36 +0800
Subject: [Python-Dev] [Python-checkins] cpython: issue2193 - Update docs
 about the legal characters allowed in Cookie name
In-Reply-To: <CADiSq7cPrPV8wTeeD3JTuTO-m_5-S63sLgEUYk_MW113v=L2Vw@mail.gmail.com>
References: <E1SLmb2-0002Lh-3t@dinsdale.python.org>
	<CADiSq7cPrPV8wTeeD3JTuTO-m_5-S63sLgEUYk_MW113v=L2Vw@mail.gmail.com>
Message-ID: <CAPOVWORtkjJbFm_vkpkJHK8a22+02KF9L307Z0A-R8OXZ6_Y3g@mail.gmail.com>

On Sun, Apr 22, 2012 at 3:17 PM, Nick Coghlan <ncoghlan at gmail.com> wrote:
>> ?issue2193 - Update docs about the legal characters allowed in Cookie name
>
> You missed the dummy merge from 3.2 to indicate that this change had
> been applied to both branches independently.

Yes, sorry about that. I was being a little cautious about having the
correct message in each version and forgot the merge.

> Should be fixed in my commit for issue #14026

Thanks for this, Nick.

-- 
Senthil

From benjamin at python.org  Mon Apr 23 14:58:50 2012
From: benjamin at python.org (Benjamin Peterson)
Date: Mon, 23 Apr 2012 08:58:50 -0400
Subject: [Python-Dev] What do PyAPI_FUNC & PyAPI_DATA mean?
In-Reply-To: <4F951384.5040705@hotpy.org>
References: <4F951384.5040705@hotpy.org>
Message-ID: <CAPZV6o-1C30emhGuzisdi07FqXfxCkJdosO52cBGy3EXPKtSbw@mail.gmail.com>

2012/4/23 Mark Shannon <mark at hotpy.org>:
> Many (most?) of the function declarations in the CPython header files
> are annotated with the PyAPI_FUNC declaration.
> Similarly for data declarations and PyAPI_DATA
>
> What do they mean, exactly? From the name I would expect that they are a way
> of declaring a function or datum to be part of the API, but their usage
> seems to be more to do with linkage.

They define linkage on Windows. I actually don't know if they should
be applied to internal functions.



-- 
Regards,
Benjamin

From kristjan at ccpgames.com  Mon Apr 23 15:05:35 2012
From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=)
Date: Mon, 23 Apr 2012 13:05:35 +0000
Subject: [Python-Dev] What do PyAPI_FUNC & PyAPI_DATA mean?
In-Reply-To: <CAPZV6o-1C30emhGuzisdi07FqXfxCkJdosO52cBGy3EXPKtSbw@mail.gmail.com>
References: <4F951384.5040705@hotpy.org>
	<CAPZV6o-1C30emhGuzisdi07FqXfxCkJdosO52cBGy3EXPKtSbw@mail.gmail.com>
Message-ID: <EFE3877620384242A686D52278B7CCD33BBF14@RKV-IT-EXCH104.ccp.ad.local>

IMHO, we are _much_ too generous in applying this to almost anything that gets exposed between .c files.
I have created something called the "restricted" API for our custom python27.dll where I use different
macros (PyAPI_RFUNC, PyAPI_RDATA) to mean that things aren't exported for "restricted" builds.  We
use it to remove some of the easier access points to the dll for hackers to exploit.

Also, once declared exported this way, things become more bothersome to remove again, since one could always argue that someone out there is using these things.

K

> -----Original Message-----
> From: python-dev-bounces+kristjan=ccpgames.com at python.org
> [mailto:python-dev-bounces+kristjan=ccpgames.com at python.org] On
> Behalf Of Benjamin Peterson
> Sent: 23. apríl 2012 12:59
> To: Mark Shannon
> Cc: Python Dev
> Subject: Re: [Python-Dev] What do PyAPI_FUNC & PyAPI_DATA mean?
> 
> 2012/4/23 Mark Shannon <mark at hotpy.org>:
> > Many (most?) of the function declarations in the CPython header files
> > are annotated with the PyAPI_FUNC declaration.
> > Similarly for data declarations and PyAPI_DATA
> >
> > What do they mean, exactly? From the name I would expect that they are
> > a way of declaring a function or datum to be part of the API, but
> > their usage seems to be more to do with linkage.
> 
> They define linkage on Windows. I actually don't know if they should be
> applied to internal functions.
> 
> 
> 
> --
> Regards,
> Benjamin
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-
> dev/kristjan%40ccpgames.com



From solipsis at pitrou.net  Mon Apr 23 22:22:18 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Mon, 23 Apr 2012 22:22:18 +0200
Subject: [Python-Dev] cpython: Implement PEP 412: Key-sharing
 dictionaries (closes #13903)
References: <E1SML8L-0002LG-NT@dinsdale.python.org>
Message-ID: <20120423222218.4015b13e@pitrou.net>

On Mon, 23 Apr 2012 17:24:57 +0200
benjamin.peterson <python-checkins at python.org> wrote:
> http://hg.python.org/cpython/rev/6e5855854a2e
> changeset:   76485:6e5855854a2e
> user:        Benjamin Peterson <benjamin at python.org>
> date:        Mon Apr 23 11:24:50 2012 -0400
> summary:
>   Implement PEP 412: Key-sharing dictionaries (closes #13903)

I hope someone can measure the results of this change on real-world
code. Benchmark results with http://hg.python.org/benchmarks/ are not
overly promising.

Regards

Antoine.



From rdmurray at bitdance.com  Mon Apr 23 23:55:57 2012
From: rdmurray at bitdance.com (R. David Murray)
Date: Mon, 23 Apr 2012 17:55:57 -0400
Subject: [Python-Dev] cpython: Implement PEP 412: Key-sharing
	dictionaries (closes #13903)
In-Reply-To: <20120423222218.4015b13e@pitrou.net>
References: <E1SML8L-0002LG-NT@dinsdale.python.org>
	<20120423222218.4015b13e@pitrou.net>
Message-ID: <20120423215558.092532509E3@webabinitio.net>

On Mon, 23 Apr 2012 22:22:18 +0200, Antoine Pitrou <solipsis at pitrou.net> wrote:
> On Mon, 23 Apr 2012 17:24:57 +0200
> benjamin.peterson <python-checkins at python.org> wrote:
> > http://hg.python.org/cpython/rev/6e5855854a2e
> > changeset:   76485:6e5855854a2e
> > user:        Benjamin Peterson <benjamin at python.org>
> > date:        Mon Apr 23 11:24:50 2012 -0400
> > summary:
> >   Implement PEP 412: Key-sharing dictionaries (closes #13903)
> 
> I hope someone can measure the results of this change on real-world
> code. Benchmark results with http://hg.python.org/benchmarks/ are not
> overly promising.

I'm pretty sure that anything heavily using sqlalchemy will benefit,
so that would be a good place to look for a real-world benchmark.

--David

From jimjjewett at gmail.com  Tue Apr 24 03:58:13 2012
From: jimjjewett at gmail.com (Jim Jewett)
Date: Mon, 23 Apr 2012 21:58:13 -0400
Subject: [Python-Dev] (time) PEP 418 glossary V2
Message-ID: <CA+OGgf71sVzVYHBGmCD841KFDZbj7tJbQ+n6Xq2V1AoJYPzWYg@mail.gmail.com>

Glossary
========

Absolute Time
-------------

A measurement of time since a specific Epoch_, typically far in the
past.  Civil Time is the most common example.  Typically contrasted
with a `Duration`_, as (now - epoch) is generally much larger than
any duration that can be appropriately measured with the clock in
question.

Accuracy
--------

The amount of deviation of measurements by a given instrument from true
values. See also the wikipedia article on `Accuracy and precision
<http://en.wikipedia.org/wiki/Accuracy_and_precision>`_.

Inaccuracy in clocks may be caused by lack of `Precision`_, by
`Drift`_, or by an incorrect initial setting of the clock (e.g., timing
of threads is inherently inaccurate because perfect synchronization in
resetting counters is quite difficult).

Adjusted
--------

Resetting a clock, presumably to the correct time.  This may be done
either with a `Step`_ or with `Slew`_.  Adjusting a clock normally
makes it more accurate with respect to the `Absolute Time`_.  The cost
is that any durations currently being measured will show a `Bias`_.
(17 ticks is not the same Duration_ as 17 ticks plus an adjustment.)

Bias
----

Lack of accuracy that is systematically in one direction, as opposed to
random errors.  When a clock is `Adjusted`_, durations overlapping the
adjustment will show a Bias.

Civil Time
----------

Time of day; external to the system.  10:45:13am is a Civil time;
a Duration_ like "45 seconds" is not a Civil time.  Provided by
existing functions ``time.localtime()`` and ``time.gmtime()``, which
are not changed by this PEP.

Clock
-----

An instrument for measuring time.  Different clocks have different
characteristics; for example, a clock with nanosecond Precision_ may
start to Drift_ after a few minutes, while a less precise clock
remains accurate for days.

This PEP is primarily concerned with clocks which use a unit of
seconds, rather than years, or arbitrary units such as a Tick_.

Counter
-------

A clock which increments each time a certain event occurs.  A counter
is strictly monotonic in the mathematical sense, but does not meet
the typical definitions of Monotonic_ when used of a computer clock.
It can be used to generate a unique (and ordered) timestamp, but these
timestamps cannot be mapped to `Civil Time`_; Tick_ creation may well
be bursty, with several advances in the same millisecond followed
by several days without any advance.

CPU Time
--------

A measure of how much CPU effort has been spent on a certain task.
CPU seconds are often normalized (so that a variable number can
occur in the same actual second).  CPU seconds can be important
when profiling, but they do not map directly to user response time,
nor are they directly comparable to (real time) seconds.

Drift
-----

The accumulated error against "true" time, as defined externally to the
system.  Drift may be due to imprecision, or to a difference between
the average rate at which clock time advances and that of real time.

Drift does not include intentional adjustments, but clocks providing
`Absolute Time`_ will eventually have to be Adjusted_ to compensate
for drift.

Duration
--------

Elapsed time.  The difference between the starting and ending times.
Also called Relative Time.  Normally contrasted with `Absolute Time`_.

While a defined Epoch_ technically creates an implicit duration, this
duration is normally too large to be of practical use.

Computers can often supply a clock with better Precision_ or higher
Resolution_ if they do not have to guarantee meaningful comparisons to
any times not generated by the clock itself.

Epoch
-----

The reference point of a clock.  For clocks providing `Civil Time`_,
this is often midnight as the day (and year) rolled over to
January 1, 1970.  A Monotonic_ clock will typically have an undefined
epoch (represented as None).

Latency
-------

Delay.  By the time a call to a clock function returns, `Real Time`_
has advanced, possibly by more than the precision of the clock.

Monotonic
---------

This is a particularly tricky term, as there are several subtly
incompatible definitions in use.  C++ followed the mathematical
definition, so that a monotonic clock only promises not to go
backwards.  In practice, that is not sufficient to be useful, and no
Operating System provides such a weak guarantee.  Most discussions
of a "Monotonic *Clock*" will also assume several additional
guarantees, some of which are explicitly required by the POSIX
specification.

Within this PEP (and Python), the intended meaning is closer to
"the characteristics expected of a monotonic clock in practice".
In addition to not moving backward, a Monotonic Clock should also be
Steady_, and should be convertible to a unit of seconds.  The tradeoffs
often include lack of a defined Epoch_ or mapping to `Civil Time`_,
and being more expensive (in `Latency`_, power usage, or duration spent
within calls to the clock itself) to use.  For example, the clock may
represent (a constant multiplied by) ticks of a specific quartz timer
on a specific CPU core, and calls would therefore require
synchronization between cores.

Precision
---------

This is another tricky term, as there are several different meanings
which are relevant.

This PEP (and Python) uses the most common meaning from natural
sciences:  "The amount of deviation among measurements of the same
physical value by a single instrument."  Imprecision in clocks may be
caused by a fluctuation of the rate at which clock time advances
relative to `Real Time`_, including intentional clock adjustments
by slewing.

Note that this is different from the typical computer language meaning
of how many digits to show (perhaps better called resolution).

Note that this is also different from at least one time-related meaning
of precision used in at least some sources.  That usage assumes that a
clock is an oscillator with a given frequency, and measures the
precision with which that oscillator tracks its target frequency,
irrespective of how precisely the computer can read the resulting time.

Note that "precision" as reported by the clock itself may use yet
another definition, and may differ between clocks.

Process Time
------------

Time elapsed since the process began.  It is typically measured in
`CPU time`_ rather than `Real Time`_, and typically does not advance
while the process is suspended.

Real Time
---------

Time in the real world.  This differs from `Civil time`_ in that it is
not `Adjusted`_, but they should otherwise advance in lockstep.

It is not related to the "real time" of "Real Time [Operating]
Systems".  It is sometimes called "wall clock time" to avoid that
ambiguity; unfortunately, that introduces different ambiguities.

Resolution
----------

The smallest difference between two physical values that results
in a different measurement by a given instrument.

Note that the above is in the ideal case; computer clocks in particular
are often prone to reporting more resolution than they can actually
distinguish.

Slew
----

A slight change to a clock's speed, usually intended to correct Drift_
with respect to an external authority.  In other words, the Precision_
is (temporarily) intentionally reduced by some `Bias`_, and short
Duration_ measurements become less comparable, in return for providing
a more accurate `Absolute Time`_.

Stability
---------

Persistence of accuracy.  A measure of expected `Drift`_.

Steady
------

A clock with high Stability_ and relatively high Accuracy_ and
`Precision`_.  In practice, it is often used to indicate a Monotonic_
clock.  In theory, "steady" places greater emphasis on the consistency
of the duration between subsequent ticks; in practice it may simply
indicate more familiarity with C++ than with Posix.

Step
----

An instantaneous change in the represented time.  Instead of speeding
or slowing the clock (`Slew`_), a single offset is permanently added.

System Time
-----------

Time as represented by the Operating System.

Thread Time
-----------

Time elapsed since the thread began.  It is typically measured in
`CPU time`_ rather than `Real Time`_, and typically does not advance
while the thread is idle.

Tick
----

A single increment of a `Counter`_.  Generally used to indicate what
the raw hardware provides, before multiplying by a constant to get
seconds.

Wallclock
---------

Also Wall Clock, Wall Time.  What the clock on the wall says.  This is
typically used as a synonym for `Real Time`_; unfortunately, wall time
is itself ambiguous, particularly (but not only) between Real Time and
`Civil Time`_.
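
To tie a few of these terms together, here is a small sketch of measuring a
Duration_ without picking up Bias_ from a clock adjustment (it assumes the
time.monotonic() proposed by PEP 418 is available)::

    import time

    start_wall = time.time()       # Absolute/Civil Time: subject to Adjustment
    start_mono = time.monotonic()  # Monotonic: undefined Epoch, never steps back

    time.sleep(0.5)                # stand-in for real work

    # If the system clock were stepped during the sleep, the wall-clock
    # duration would show a Bias; the monotonic duration would not.
    print('wall:      %.3f s' % (time.time() - start_wall))
    print('monotonic: %.3f s' % (time.monotonic() - start_mono))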

From victor.stinner at gmail.com  Tue Apr 24 01:24:07 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Tue, 24 Apr 2012 01:24:07 +0200
Subject: [Python-Dev] PEP 418: Add monotonic time,
 performance counter and process time functions
In-Reply-To: <CAL_0O18PexBE5_i1+V67=G4kAA_41AeY=dR6GQA6xfPzCpmY3w@mail.gmail.com>
References: <CAMpsgwbZThWMRmXxLNisXAGTZJ1poKGBa3z_EzZPDQdjJwpA5g@mail.gmail.com>
	<20120417044821.GA1979@cskk.homeip.net>
	<20120417123545.3DF842509E8@webabinitio.net>
	<CAMpsgwa95_cU_WTDgpQo11rFyM2qyfymsgY-kzUr-8vF8UTHFw@mail.gmail.com>
	<CAL_0O1_LRTNjPzpWdyGYKyCAKEGwxNJ4+VHFvgc7vTyptRmJ5A@mail.gmail.com>
	<CAMpsgwZ3eAkK814R=725LUBas8PVFqdwmOw-iz-FYL6ZvJ3ukw@mail.gmail.com>
	<CAL_0O1-Bs_7kZ2kTktLTyimCpAoPDfoX3GsZe-zvBjLbBJd0Lw@mail.gmail.com>
	<CAMpsgwZ2zuC4BULeQ9Dt1v-t3gzeHnVx2nqv6P311OYMVABgVw@mail.gmail.com>
	<CAL_0O18PexBE5_i1+V67=G4kAA_41AeY=dR6GQA6xfPzCpmY3w@mail.gmail.com>
Message-ID: <CAMpsgwbiWrao90Q-YM7CyHh0PptZoMG7TmAYmif+xwr6wj56AA@mail.gmail.com>

>> Well, I asked on IRC what I should do for these definitions because
>> I'm too tired to decide what to do. [[...]] I replaced these definitions with yours.
>
> That was nice of you.  In return, I'll go over the PEP to check that
> usage is appropriate (eg, in some places "resolution" was used in the
> sense of computer science's "precision" == reported digits).  Please
> give me 24 hours.

As you asked me in private, I replaced "Precision" with "Resolution"
almost everywhere in the PEP. I removed the "precision" key of
time.get_clock_info(): use the "resolution" key instead.

No OS provides any information on the precision, accuracy, drift or
anything else of a specific clock. Only the resolution of a clock is
known.
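
For illustration, the introspection call as currently drafted looks roughly
like this (names follow the draft PEP; whether the result is a dict or a
namespace object is still a detail that may change, attribute access is
assumed here)::

    import time

    info = time.get_clock_info('monotonic')
    print(info.implementation)    # e.g. 'clock_gettime(CLOCK_MONOTONIC)'
    print(info.resolution)        # resolution announced by the OS, in seconds
    print(info.monotonic, info.adjustable)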

Victor

From ncoghlan at gmail.com  Tue Apr 24 06:31:04 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 24 Apr 2012 14:31:04 +1000
Subject: [Python-Dev] (time) PEP 418 glossary V2
In-Reply-To: <CA+OGgf71sVzVYHBGmCD841KFDZbj7tJbQ+n6Xq2V1AoJYPzWYg@mail.gmail.com>
References: <CA+OGgf71sVzVYHBGmCD841KFDZbj7tJbQ+n6Xq2V1AoJYPzWYg@mail.gmail.com>
Message-ID: <CADiSq7f4srMHazFp=YmZU03NL9yy6iBs_7ih-OdU7bFbuRARKQ@mail.gmail.com>

I like the updated glossary - very good summary of the relevant
terminology and common points of confusion. One minor gripe below (and
it *is* minor, despite the amount of text explaining my point of
view...)

On Tue, Apr 24, 2012 at 11:58 AM, Jim Jewett <jimjjewett at gmail.com> wrote:
> Real Time
> ---------
>
> Time in the real world.  This differs from `Civil time`_ in that it is
> not `Adjusted`_, but they should otherwise advance in lockstep.
>
> It is not related to the "real time" of "Real Time [Operating]
> Systems". ?It is sometimes called "wall clock time" to avoid that
> ambiguity; unfortunately, that introduces different ambiguities.

"Not related" is simply not true, as this is the exact meaning of
"Real Time" in the term "Real Time Operating System". In the PEP's
terms, a Real Time OS is simply an operating system specifically
designed to allow developers to meet deadlines expressed as the
maximum permitted Real Time Duration between an event occurring and
the system responding to that event.

The power an RTOS gives you over an ordinary OS is sufficiently low
level control over the scheduler (if there's even an entity worthy of
the name "scheduler" at all) such that you can *demonstrably* meet
hard real time deadlines (down to a certain lower limit, generally
constrained by hardware). It's a pain to program that way though (and
adequately demonstrating correctness gets harder as the code gets more
complicated), so you often want to use a dedicated processor for the
RTOS bits and a separate processor (with an ordinary OS) for
everything else. (There's a good explanation of many of these
concepts, including separating the hard realtime parts from everything
else, in the Giant Robots of Doom talk from PyCon AU 2010:
http://pyvideo.org/video/481/pyconau-2010--hard-real-time-python--or--giant-ro)

One interesting aspect of using a normal OS is that you can *never*
reliably read a Real Time clock in user level code - the scheduler
doesn't provide adequate guarantees of responsiveness, so there's
always going to be some scheduling jitter in the results. This
generally doesn't matter for measuring durations within a machine
(since the jitter will, with a sufficiently large number of samples,
cancel out between the two measurements), but can be a problem for
absolute time measurements that are intended to be compared with high
precision across different machines.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From martin at v.loewis.de  Tue Apr 24 09:27:33 2012
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Tue, 24 Apr 2012 09:27:33 +0200
Subject: [Python-Dev] What do PyAPI_FUNC & PyAPI_DATA mean?
In-Reply-To: <4F951384.5040705@hotpy.org>
References: <4F951384.5040705@hotpy.org>
Message-ID: <4F9655E5.70307@v.loewis.de>

> What do they mean, exactly? From the name I would expect that they are a
> way of declaring a function or datum to be part of the API, but their
> usage seems to be more to do with linkage.

It means that they will be exported from the pythonXY.dll on Windows. In
Windows DLLs, it's not sufficient to make a symbol global (non-static)
in order to use it from an application; you also have to declare it as
__declspec(dllexport), or list it in the linker definition file (which
is not used anymore today).

Likewise, to use a symbol from a DLL, you also need to declare it
as __declspec(dllimport) in the using application. This will, in
particular, arrange for a slot in the indirect-jump assembler section
of the using DLL, so that the resulting executable will be position-
independent (except for this procedure linkage section).

As we have the same header files both for the implementation and the
usage, this macro trickery is necessary to sometimes say dllexport and
sometimes dllimport.

Even though it's strictly needed on Windows, and strictly only for
API that we do want to expose, we apply it to all API that is
public on Unix (i.e. all Py* API), in order to avoid API being available
on Unix but not on Windows.
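
One rough way to observe the effect from Python itself: symbols declared
with PyAPI_FUNC are exported from the interpreter binary/DLL and can
therefore be resolved through ctypes, while purely internal (static)
helpers cannot::

    import ctypes

    # ctypes.pythonapi resolves symbols from the running interpreter (the
    # pythonXY DLL on Windows, the executable/libpython on Unix).
    print(ctypes.pythonapi.PyLong_FromLong)

    # A name that is not exported (a made-up internal helper here) is not found:
    try:
        ctypes.pythonapi.some_internal_static_helper
    except AttributeError:
        print('not exported')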

Regards,
Martin

From martin at v.loewis.de  Tue Apr 24 09:31:20 2012
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Tue, 24 Apr 2012 09:31:20 +0200
Subject: [Python-Dev] What do PyAPI_FUNC & PyAPI_DATA mean?
In-Reply-To: <EFE3877620384242A686D52278B7CCD33BBF14@RKV-IT-EXCH104.ccp.ad.local>
References: <4F951384.5040705@hotpy.org>	<CAPZV6o-1C30emhGuzisdi07FqXfxCkJdosO52cBGy3EXPKtSbw@mail.gmail.com>
	<EFE3877620384242A686D52278B7CCD33BBF14@RKV-IT-EXCH104.ccp.ad.local>
Message-ID: <4F9656C8.30900@v.loewis.de>

On 23.04.2012 15:05, Kristján Valur Jónsson wrote:
> IMHO, we are _much_ too generous at applying this to almost whatever
> gets exposed between .c files. I have created something called the
> "restricted" api for our custom python27.dll where I use different 
> macros (PyAPI_RFUNC, pyAPI_RDATA) to mean that things aren't exported
> for "restricted" builds.  We use it to remove some of the easier
> access points to the dll for hackers to exploit.
> 
> Also, once declared exported this way, things become more bothersome
> to remove again, since once could always argue that someone out there
> is using these thigns.

For this, PyAPI_FUNC doesn't really matter. A symbol that is listed in
the header file is available on Unix even without such a declaration,
so listing it in the public header file is already the step that makes
it public, not specifying it as PyAPI_FUNC.

I agree that too much API is public, but the right approach is to rename
such API to _Py*, indicating to users that we don't want them to use it.
For existing API, that's tricky; for new API, I think it should be
private by default.

See also PEP 384.

Regards,
Martin

From kristjan at ccpgames.com  Tue Apr 24 12:16:08 2012
From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=)
Date: Tue, 24 Apr 2012 10:16:08 +0000
Subject: [Python-Dev] What do PyAPI_FUNC & PyAPI_DATA mean?
In-Reply-To: <4F9656C8.30900@v.loewis.de>
References: <4F951384.5040705@hotpy.org>
	<CAPZV6o-1C30emhGuzisdi07FqXfxCkJdosO52cBGy3EXPKtSbw@mail.gmail.com>
	<EFE3877620384242A686D52278B7CCD33BBF14@RKV-IT-EXCH104.ccp.ad.local>
	<4F9656C8.30900@v.loewis.de>
Message-ID: <EFE3877620384242A686D52278B7CCD33BE088@RKV-IT-EXCH104.ccp.ad.local>

You know that I'm speaking of Windows, right?
IMHO, we shouldn't put the PyAPI* stuff on functions unless they are actual API functions.
I don't know how the export tables for ELF .so objects are generated, but they surely can't
export _everything_.  Anyway, marking stuff that is part of the API makes sense, but marking
functions as part of the API when they are not makes no sense and is wasteful.
We might even have something similar for the stable API.

> -----Original Message-----
> From: "Martin v. L?wis" [mailto:martin at v.loewis.de]
> Sent: 24. apr?l 2012 07:31
> To: Kristj?n Valur J?nsson
> Cc: Benjamin Peterson; Mark Shannon; Python Dev
> Subject: Re: [Python-Dev] What do PyAPI_FUNC & PyAPI_DATA mean?
 
> For this, PyAPI_FUNC doesn't really matter. A symbol that is listed in the
> header file is available on Unix even without such a declaration, so listing it in
> the public header file is already the step that makes it public, not specifying it
> as PyAPI_FUNC.
> 
> 
> I agree that too much API is public, but the right approach is to rename such
> API to _Py*, indicating to users that we don't want them to use it.
> For existing API, that's tricky; for new API, I think it should be private by
> default.



From stephen at xemacs.org  Tue Apr 24 09:26:35 2012
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Tue, 24 Apr 2012 16:26:35 +0900
Subject: [Python-Dev] (time) PEP 418 glossary V2
In-Reply-To: <CA+OGgf71sVzVYHBGmCD841KFDZbj7tJbQ+n6Xq2V1AoJYPzWYg@mail.gmail.com>
References: <CA+OGgf71sVzVYHBGmCD841KFDZbj7tJbQ+n6Xq2V1AoJYPzWYg@mail.gmail.com>
Message-ID: <CAL_0O19PwxbDiAuJ+2xRvNAXhXSDSA_7Q0mFuQX9hPmQiOSchw@mail.gmail.com>

Very nice!  Two possible clarifications:

On Tue, Apr 24, 2012 at 10:58 AM, Jim Jewett <jimjjewett at gmail.com> wrote:
> Glossary
> ========
> Bias
> ----
>
> Lack of accuracy that is systematically in one direction, as opposed to
> random errors.  When a clock is `Adjusted`_, durations overlapping the
> adjustment will show a Bias.

"Conversely, if the clock has experienced `Drift`_, its reports of
`Absolute Time`_ will show Bias until the adjustment takes place."


> Counter
> -------
>
> A clock which increments each time a certain event occurs.  A counter
> is strictly monotonic in the mathematical sense, but does not meet
> the typical definitions of Monotonic_ when used of a computer clock.
> It can be used to generate a unique (and ordered) timestamp, but these
> timestamps cannot be mapped to `Civil Time`_; Tick_ creation may well

"mapped" -> "algorithmically mapped"

> be bursty, with several advances in the same millisecond followed
> by several days without any advance.
> Duration_ measurements become less comparable, in return for providing
> a more accurate `Absolute Time`_.

From kristjan at ccpgames.com  Tue Apr 24 12:24:16 2012
From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=)
Date: Tue, 24 Apr 2012 10:24:16 +0000
Subject: [Python-Dev] cpython: Implement PEP 412:
	Key-sharing	dictionaries (closes #13903)
In-Reply-To: <20120423215558.092532509E3@webabinitio.net>
References: <E1SML8L-0002LG-NT@dinsdale.python.org>
	<20120423222218.4015b13e@pitrou.net>
	<20120423215558.092532509E3@webabinitio.net>
Message-ID: <EFE3877620384242A686D52278B7CCD33BE13F@RKV-IT-EXCH104.ccp.ad.local>

Probably any benchmark involving a large number of object instances with non-trivial dictionaries.

Benchmarks should measure memory usage too, of course.  Sadly that is not possible in standard
cPython.  Our 2.7 branch has extensive patching to allow custom memory allocators to be used
(it even eliminates the explicit "malloc" calls used here and there in the code) and exposes some
functions, such as sys.getpymalloced(), useful for memory benchmarking.

Perhaps I should write about this on my blog.  Updating the memory allocation macro layer in
cPython for embedding is something I'd be inclined to contribute, but it will involve a large amount
of bikeshedding, I'm sure :)

Btw, this is of great interest to me at the moment; our Shanghai engineers are screaming at the
memory waste incurred by dictionaries.  A 10-item dictionary consumes 1/2 KB on 32 bits, did you
know this?
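
A quick way to see the kind of numbers involved (sys.getsizeof reports the
dict object only, and the exact figures depend on the build; the ~1/2 KB
figure is for a 32-bit build)::

    import sys

    d = dict((i, i) for i in range(10))
    print(sys.getsizeof(d))            # the dict itself, not its keys/values

    class Point(object):
        def __init__(self, x, y):
            self.x = x
            self.y = y

    p = Point(1, 2)
    print(sys.getsizeof(p.__dict__))   # the per-instance dict that PEP 412's
                                       # key-sharing is meant to shrink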

K

> -----Original Message-----
> From: python-dev-bounces+kristjan=ccpgames.com at python.org
> [mailto:python-dev-bounces+kristjan=ccpgames.com at python.org] On
> Behalf Of R. David Murray
> Sent: 23. apríl 2012 21:56
> To: Antoine Pitrou
> Cc: python-dev at python.org
> Subject: Re: [Python-Dev] cpython: Implement PEP 412: Key-sharing
> dictionaries (closes #13903)
> 
> On Mon, 23 Apr 2012 22:22:18 +0200, Antoine Pitrou <solipsis at pitrou.net>
> wrote:
> > On Mon, 23 Apr 2012 17:24:57 +0200
> > benjamin.peterson <python-checkins at python.org> wrote:
> > > http://hg.python.org/cpython/rev/6e5855854a2e
> > > changeset:   76485:6e5855854a2e
> > > user:        Benjamin Peterson <benjamin at python.org>
> > > date:        Mon Apr 23 11:24:50 2012 -0400
> > > summary:
> > >   Implement PEP 412: Key-sharing dictionaries (closes #13903)
> >
> > I hope someone can measure the results of this change on real-world
> > code. Benchmark results with http://hg.python.org/benchmarks/ are not
> > overly promising.
> 
> I'm pretty sure that anything heavily using sqlalchemy will benefit, so that
> would be a good place to look for a real-world benchmark.
> 
> --David
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-
> dev/kristjan%40ccpgames.com



From victor.stinner at gmail.com  Tue Apr 24 01:30:43 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Tue, 24 Apr 2012 01:30:43 +0200
Subject: [Python-Dev] [RFC] PEP 418: Add monotonic time,
 performance counter and process time functions
In-Reply-To: <CAMpsgwZxZiNcqTznROd6MiPnkhmy1XtKzVPCozWp+sSiZ8dUTg@mail.gmail.com>
References: <CAMpsgwZxZiNcqTznROd6MiPnkhmy1XtKzVPCozWp+sSiZ8dUTg@mail.gmail.com>
Message-ID: <CAMpsgwa=2fJY8nSnLzeX8VgRFpFbWjJAX5LUK0bzd3amGOQ+AA@mail.gmail.com>

> Here is a simplified version of the first draft of the PEP 418. The
> full version can be read online.
> http://www.python.org/dev/peps/pep-0418/

Thanks to everyone who helped me to work on this PEP!

I integrated the last comments. There are no more open questions. (Or did I
miss something?)

I didn't know that it would be so hard to add such a simple function
as time.monotonic()!

Victor

From kristjan at ccpgames.com  Tue Apr 24 12:19:03 2012
From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=)
Date: Tue, 24 Apr 2012 10:19:03 +0000
Subject: [Python-Dev] What do PyAPI_FUNC & PyAPI_DATA mean?
In-Reply-To: <4F9655E5.70307@v.loewis.de>
References: <4F951384.5040705@hotpy.org> <4F9655E5.70307@v.loewis.de>
Message-ID: <EFE3877620384242A686D52278B7CCD33BE0A9@RKV-IT-EXCH104.ccp.ad.local>

Aha, so that is the rationale.  Because the export table on Unix is so
generous, we force ourselves to be generous on Windows too?
I did some Unix programming back in the day.  IRIX, actually (a Sys V derivative).  I'm pretty
sure we had to explicitly specify our .so exports.  But I might be mistaken.

K

> -----Original Message-----
> From: python-dev-bounces+kristjan=ccpgames.com at python.org
> [mailto:python-dev-bounces+kristjan=ccpgames.com at python.org] On
> Behalf Of "Martin v. L?wis"
> Sent: 24. apr?l 2012 07:28
> To: Mark Shannon
> Cc: Python Dev

> Subject: Re: [Python-Dev] What do PyAPI_FUNC & PyAPI_DATA mean?
> Even though it's strictly needed on Windows, and strictly only for API that we
> do want to expose, we apply it to all API that is public on Unix (i.e. all Py*
> API), in order to avoid API being available on Unix but not on Windows.



From victor.stinner at gmail.com  Tue Apr 24 12:38:21 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Tue, 24 Apr 2012 12:38:21 +0200
Subject: [Python-Dev] (time) PEP 418 glossary V2
In-Reply-To: <CA+OGgf71sVzVYHBGmCD841KFDZbj7tJbQ+n6Xq2V1AoJYPzWYg@mail.gmail.com>
References: <CA+OGgf71sVzVYHBGmCD841KFDZbj7tJbQ+n6Xq2V1AoJYPzWYg@mail.gmail.com>
Message-ID: <CAMpsgwabRAcN-DgJX56kA5z7JsrL=jJF=oWTiCYp1q-vA1hmEg@mail.gmail.com>

> Monotonic
> ---------
>
> This is a particularly tricky term, as there are several subtly
> incompatible definitions in use.

Is it a definition for the glossary?

>  C++ followed the mathematical
> definition, so that a monotonic clock only promises not to go
> backwards.

The "C++ Timeout Specification" doesn't have any monotonic anymore. It
has a steady_clock, but it's something different.
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2010/n3128.html#time.clock.monotonic

>  In practice, that is not sufficient to be useful, and no
> Operating System provides such a weak guarantee.  Most discussions
> of a "Monotonic *Clock*" will also assume several additional
> guarantees, some of which are explicitly required by the POSIX
> specification.

What do you mean for POSIX? The definition of CLOCK_MONOTONIC by the
POSIX specification is:

"The identifier for the system-wide monotonic clock, which is defined
as a clock whose value cannot be set via clock_settime() and which
cannot have backward clock jumps. The maximum possible clock jump
shall be implementation-defined."
http://pubs.opengroup.org/onlinepubs/000095399/basedefs/time.h.html

time.monotonic() of the PEP 418 gives the same guarantee (cannot go
backward, cannot be set), except for "system-wide" (Python cannot give
this guarantee because of Windows older than Vista).

>  The tradeoffs
> often include lack of a defined Epoch_ or mapping to `Civil Time`_,

I don't know of any monotonic clock with a defined epoch, or one mappable to civil time.

> and being more expensive (in `Latency`_, power usage, or duration spent
> within calls to the clock itself) to use.

CLOCK_MONOTONIC and CLOCK_REALTIME have the same performance on Linux
and FreeBSD. Why would a monotonic clock be more expensive?

>  For example, the clock may
> represent (a constant multiplied by) ticks of a specific quartz timer
> on a specific CPU core, and calls would therefore require
> synchronization between cores.

I don't think that synchronizing a counter between CPU cores is
something expensive. See the following tables for details:
http://www.python.org/dev/peps/pep-0418/#performance

CLOCK_MONOTONIC and CLOCK_REALTIME use the same hardware clocksource
and so have the same latency depending on the hardware.

Victor

From solipsis at pitrou.net  Tue Apr 24 12:37:46 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Tue, 24 Apr 2012 12:37:46 +0200
Subject: [Python-Dev] cpython: Implement PEP 412:
 Key-sharing	dictionaries (closes #13903)
In-Reply-To: <EFE3877620384242A686D52278B7CCD33BE13F@RKV-IT-EXCH104.ccp.ad.local>
References: <E1SML8L-0002LG-NT@dinsdale.python.org>
	<20120423222218.4015b13e@pitrou.net>
	<20120423215558.092532509E3@webabinitio.net>
	<EFE3877620384242A686D52278B7CCD33BE13F@RKV-IT-EXCH104.ccp.ad.local>
Message-ID: <20120424123746.14173691@pitrou.net>

On Tue, 24 Apr 2012 10:24:16 +0000
Kristján Valur Jónsson <kristjan at ccpgames.com> wrote:
> 
> Btw, this is of great interest to me at the moment, our Shanghai engineers are screaming at the
> memory waste incurred by dictionaries.  A 10 item dictionary consumes 1/2k on 32 bits, did you 
> know this?

The sparseness of hash tables is a well-known time/space tradeoff.
See e.g. http://bugs.python.org/issue10408

Regards

Antoine.

From victor.stinner at gmail.com  Tue Apr 24 12:47:27 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Tue, 24 Apr 2012 12:47:27 +0200
Subject: [Python-Dev] (time) PEP 418 glossary V2
In-Reply-To: <CA+OGgf71sVzVYHBGmCD841KFDZbj7tJbQ+n6Xq2V1AoJYPzWYg@mail.gmail.com>
References: <CA+OGgf71sVzVYHBGmCD841KFDZbj7tJbQ+n6Xq2V1AoJYPzWYg@mail.gmail.com>
Message-ID: <CAMpsgwYg3xGjS1MJMmzg7=n_9WYfP1nonKehRziWDUO0OFEF+w@mail.gmail.com>

> Precision
> ---------
>
> This is another tricky term,

This is a good reason why it is no longer used in the PEP :-)

> Note that "precision" as reported by the clock itself may use yet
> another definition, and may differ between clocks.

Some C functions provide the frequency of the clock (and so its
resolution), or the resolution directly, but I don't know of any function
providing the precision.

I thought that clock_getres() gave the precision, but I was wrong.
clock_getres() really is the resolution announced by the OS, even if
the OS may be pessimistic (and so wrong, e.g. OpenBSD and Solaris). But
Python should not try to work around OS "bugs".

Victor

From solipsis at pitrou.net  Tue Apr 24 13:24:47 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Tue, 24 Apr 2012 13:24:47 +0200
Subject: [Python-Dev] Daily reference leaks (8dbcedfd13f8): sum=15528
References: <E1SMWYT-0003Mg-7z@ap.vmr.nerim.net>
Message-ID: <20120424132447.5f25d241@pitrou.net>

On Tue, 24 Apr 2012 05:36:41 +0200
solipsis at pitrou.net wrote:
> results for 8dbcedfd13f8 on branch "default"
> --------------------------------------------
> 
> test_itertools leaked [44, 44, 44] references, sum=132
> test_robotparser leaked [103, 103, 103] references, sum=309
> test_ssl leaked [103, 103, 103] references, sum=309
> test_tempfile leaked [2, 2, 2] references, sum=6
> test_urllib leaked [103, 103, 103] references, sum=309
> test_urllib2 leaked [3208, 3208, 3208] references, sum=9624
> test_urllib2_localnet leaked [1078, 1078, 1078] references, sum=3234
> test_urllib2net leaked [432, 432, 432] references, sum=1296
> test_urllibnet leaked [103, 103, 103] references, sum=309

These seem to have been introduced by changeset 6e5855854a2e: "Implement
PEP 412: Key-sharing dictionaries (closes #13903)".

Regards

Antoine.




From mark at hotpy.org  Tue Apr 24 13:29:55 2012
From: mark at hotpy.org (Mark Shannon)
Date: Tue, 24 Apr 2012 12:29:55 +0100
Subject: [Python-Dev] Daily reference leaks (8dbcedfd13f8): sum=15528
In-Reply-To: <20120424132447.5f25d241@pitrou.net>
References: <E1SMWYT-0003Mg-7z@ap.vmr.nerim.net>
	<20120424132447.5f25d241@pitrou.net>
Message-ID: <4F968EB3.7070307@hotpy.org>

Antoine Pitrou wrote:
> On Tue, 24 Apr 2012 05:36:41 +0200
> solipsis at pitrou.net wrote:
>> results for 8dbcedfd13f8 on branch "default"
>> --------------------------------------------
>>
>> test_itertools leaked [44, 44, 44] references, sum=132
>> test_robotparser leaked [103, 103, 103] references, sum=309
>> test_ssl leaked [103, 103, 103] references, sum=309
>> test_tempfile leaked [2, 2, 2] references, sum=6
>> test_urllib leaked [103, 103, 103] references, sum=309
>> test_urllib2 leaked [3208, 3208, 3208] references, sum=9624
>> test_urllib2_localnet leaked [1078, 1078, 1078] references, sum=3234
>> test_urllib2net leaked [432, 432, 432] references, sum=1296
>> test_urllibnet leaked [103, 103, 103] references, sum=309
> 
> These seem to have been introduced by changeset 6e5855854a2e: "Implement
> PEP 412: Key-sharing dictionaries (closes #13903)".
> 

I'm investigating at the moment.

Cheers,
Mark.

From ncoghlan at gmail.com  Tue Apr 24 13:41:33 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 24 Apr 2012 21:41:33 +1000
Subject: [Python-Dev] cpython: Implement PEP 412: Key-sharing
 dictionaries (closes #13903)
In-Reply-To: <EFE3877620384242A686D52278B7CCD33BE13F@RKV-IT-EXCH104.ccp.ad.local>
References: <E1SML8L-0002LG-NT@dinsdale.python.org>
	<20120423222218.4015b13e@pitrou.net>
	<20120423215558.092532509E3@webabinitio.net>
	<EFE3877620384242A686D52278B7CCD33BE13F@RKV-IT-EXCH104.ccp.ad.local>
Message-ID: <CADiSq7eBawDjjcN9x_k8_ggori4QCOn1N8-y50X-6OZshhpOEA@mail.gmail.com>

On Tue, Apr 24, 2012 at 8:24 PM, Kristján Valur Jónsson
<kristjan at ccpgames.com> wrote:
> Perhaps I should write about this on my blog.  Updating the memory allocation macro layer in
> cPython for embedding is something I'd be inclined to contribute, but it will involve a large amount
> of bikeshedding, I'm sure :)

Trawl the tracker before you do - I'm pretty sure there's a patch
(from the Nokia S60 port, IIRC) that adds a couple of macro
definitions so that platform ports and embedding applications can
intercept malloc() and free() calls.

It would be way out of date by now, but I seem to recall thinking it
looked reasonable at a quick glance.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From solipsis at pitrou.net  Tue Apr 24 14:00:37 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Tue, 24 Apr 2012 14:00:37 +0200
Subject: [Python-Dev] cpython (2.7): Reorder the entries to put the type
 specific technique last.
References: <E1SMXLH-0002tI-6z@dinsdale.python.org>
Message-ID: <20120424140037.4748face@pitrou.net>

On Tue, 24 Apr 2012 06:27:07 +0200
raymond.hettinger <python-checkins at python.org> wrote:
> http://hg.python.org/cpython/rev/e2a3260f1718
> changeset:   76513:e2a3260f1718
> branch:      2.7
> parent:      76480:db26c4daecbb
> user:        Raymond Hettinger <python at rcn.com>
> date:        Mon Apr 23 21:24:15 2012 -0700
> summary:
>   Reorder the entries to put the type specific technique last.

Do you intend to port all your doc changes to 3.2 and 3.x?

Regards

Antoine.



From mark at hotpy.org  Tue Apr 24 17:26:14 2012
From: mark at hotpy.org (Mark Shannon)
Date: Tue, 24 Apr 2012 16:26:14 +0100
Subject: [Python-Dev] [Python-checkins] cpython (3.2): don't use a slot
 wrapper from a different special method (closes #14658)
In-Reply-To: <E1SMhMu-0000xh-93@dinsdale.python.org>
References: <E1SMhMu-0000xh-93@dinsdale.python.org>
Message-ID: <4F96C616.5020906@hotpy.org>

I'm not happy with this fix.

Admittedly code like:

class S(str):
    __getattr__ = str.__add__
s = S('a')
print(S.b)

is a little weird.
But I think it should work (ie print 'ab') properly.

This works without the patch.

class S(str):
    __getattribute__ = str.__add__
s = S('a')
print(S.b)

(Prints 'ab')

Also "slot wrapper" is a low-level implementation detail and
shouldn't impact the language semantics.

dict.__setitem__ is a slot wrapper; dict.__getitem__ is not.
str.__getitem__ is a slot wrapper; list.__getitem__ is not.
If any of these change then the semantics of the language changes.
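
For readers who have not met the distinction before, a quick CPython-specific
check of the descriptor types (these are implementation details and may vary
between versions):

print(type(dict.__setitem__))  # wrapper_descriptor, i.e. a "slot wrapper"
print(type(dict.__getitem__))  # method_descriptor -- not a slot wrapper
print(type(str.__getitem__))   # wrapper_descriptor
print(type(list.__getitem__))  # method_descriptor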

Cheers,
Mark

benjamin.peterson wrote:
> http://hg.python.org/cpython/rev/971865f12377
> changeset:   76518:971865f12377
> branch:      3.2
> parent:      76506:f7b002e5cac7
> user:        Benjamin Peterson <benjamin at python.org>
> date:        Tue Apr 24 11:06:25 2012 -0400
> summary:
>   don't use a slot wrapper from a different special method (closes #14658)
> 
> This also alters the fix to #11603. Specifically, setting __repr__ to
> object.__str__ now raises a recursion RuntimeError when str() or repr() is
> called instead of silently bypassing the recursion. I believe this behavior is
> more correct.
> 
> files:
>   Lib/test/test_descr.py |  10 +++++++++-
>   Misc/NEWS              |   6 ++++++
>   Objects/typeobject.c   |   5 +++--
>   3 files changed, 18 insertions(+), 3 deletions(-)
> 
> 
> diff --git a/Lib/test/test_descr.py b/Lib/test/test_descr.py
> --- a/Lib/test/test_descr.py
> +++ b/Lib/test/test_descr.py
> @@ -4430,7 +4430,15 @@
>              pass
>          Foo.__repr__ = Foo.__str__
>          foo = Foo()
> -        str(foo)
> +        self.assertRaises(RuntimeError, str, foo)
> +        self.assertRaises(RuntimeError, repr, foo)
> +
> +    def test_mixing_slot_wrappers(self):
> +        class X(dict):
> +            __setattr__ = dict.__setitem__
> +        x = X()
> +        x.y = 42
> +        self.assertEqual(x["y"], 42)
>  
>      def test_cycle_through_dict(self):
>          # See bug #1469629
> diff --git a/Misc/NEWS b/Misc/NEWS
> --- a/Misc/NEWS
> +++ b/Misc/NEWS
> @@ -10,6 +10,12 @@
>  Core and Builtins
>  -----------------
>  
> +- Issue #11603 (again): Setting __repr__ to __str__ now raises a RuntimeError
> +  when repr() or str() is called on such an object.
> +
> +- Issue #14658: Fix binding a special method to a builtin implementation of a
> +  special method with a different name.
> +
>  - Issue #14630: Fix a memory access bug for instances of a subclass of int
>    with value 0.
>  
> diff --git a/Objects/typeobject.c b/Objects/typeobject.c
> --- a/Objects/typeobject.c
> +++ b/Objects/typeobject.c
> @@ -2928,7 +2928,7 @@
>      unaryfunc f;
>  
>      f = Py_TYPE(self)->tp_repr;
> -    if (f == NULL || f == object_str)
> +    if (f == NULL)
>          f = object_repr;
>      return f(self);
>  }
> @@ -5757,7 +5757,8 @@
>              }
>              continue;
>          }
> -        if (Py_TYPE(descr) == &PyWrapperDescr_Type) {
> +        if (Py_TYPE(descr) == &PyWrapperDescr_Type &&
> +            ((PyWrapperDescrObject *)descr)->d_base->name_strobj == p->name_strobj) {
>              void **tptr = resolve_slotdups(type, p->name_strobj);
>              if (tptr == NULL || tptr == ptr)
>                  generic = p->function;
> 
> 
> 
> ------------------------------------------------------------------------
> 
> _______________________________________________
> Python-checkins mailing list
> Python-checkins at python.org
> http://mail.python.org/mailman/listinfo/python-checkins


From benjamin at python.org  Tue Apr 24 17:30:40 2012
From: benjamin at python.org (Benjamin Peterson)
Date: Tue, 24 Apr 2012 11:30:40 -0400
Subject: [Python-Dev] [Python-checkins] cpython (3.2): don't use a slot
 wrapper from a different special method (closes #14658)
In-Reply-To: <4F96C616.5020906@hotpy.org>
References: <E1SMhMu-0000xh-93@dinsdale.python.org>
	<4F96C616.5020906@hotpy.org>
Message-ID: <CAPZV6o_So9LwwFF6-CGVNBctYStU7PUdo215juJoMyVW2DoP4A@mail.gmail.com>

2012/4/24 Mark Shannon <mark at hotpy.org>:
> I'm not happy with this fix.

It's not perfect, but it's an improvement.

>
> Admittedly code like:
>
> class S(str):
>   __getattr__ = str.__add__
> s = S('a')
> print(S.b)
>
> is a little weird.
> But I think it should work (ie print 'ab') properly.
>
> This works without the patch.
>
> class S(str):
>   __getattribute__ = str.__add__
> s = S('a')
> print(S.b)

Does it?

$ cat > x.py
class S(str):
  __getattribute__ = str.__add__
s = S('a')
print(S.b)
$ python3 x.py
Traceback (most recent call last):
  File "x.py", line 4, in <module>
    print(S.b)
AttributeError: type object 'S' has no attribute 'b'

>
> (Prints 'ab')
>
> Also "slot wrapper" is a low-level implementation detail and
> shouldn't impact the language semantics.
>
>> dict.__setitem__ is a slot wrapper; dict.__getitem__ is not.
> str.__getitem__ is a slot wrapper; list.__getitem__ is not.
> If any of these change then the semantics of the language changes.



-- 
Regards,
Benjamin

From mark at hotpy.org  Tue Apr 24 17:36:31 2012
From: mark at hotpy.org (Mark Shannon)
Date: Tue, 24 Apr 2012 16:36:31 +0100
Subject: [Python-Dev] [Python-checkins] cpython (3.2): don't use a slot
 wrapper from a different special method (closes #14658)
In-Reply-To: <CAPZV6o_So9LwwFF6-CGVNBctYStU7PUdo215juJoMyVW2DoP4A@mail.gmail.com>
References: <E1SMhMu-0000xh-93@dinsdale.python.org>	<4F96C616.5020906@hotpy.org>
	<CAPZV6o_So9LwwFF6-CGVNBctYStU7PUdo215juJoMyVW2DoP4A@mail.gmail.com>
Message-ID: <4F96C87F.4050602@hotpy.org>

Benjamin Peterson wrote:
> 2012/4/24 Mark Shannon <mark at hotpy.org>:
>> I'm not happy with this fix.
> 
> It's not perfect, but it's an improvement.
> 
>> Admittedly code like:
>>
>> class S(str):
>>   __getattr__ = str.__add__
>> s = S('a')
>> print(S.b)

My typo, should be:
print(s.b)
(Instance not class)
This doesn't work.

>>
>> is a little weird.
>> But I think it should work (ie print 'ab') properly.
>>
>> This works without the patch.
>>
>> class S(str):
>>   __getattribute__ = str.__add__
>> s = S('a')
>> print(S.b)

Same typo,
this does work (with correct spelling :) )
> 
> Does it?
> 
> $ cat > x.py
> class S(str):
>   __getattribute__ = str.__add__
> s = S('a')
> print(S.b)

> $ python3 x.py
> Traceback (most recent call last):
>   File "x.py", line 4, in <module>
>     print(S.b)
> AttributeError: type object 'S' has no attribute 'b'
> 
>> (Prints 'ab')
>>
>> Also "slot wrapper" is a low-level implementation detail and
>> shouldn't impact the language semantics.
>>
>> dict.__setitem__ is a slot wrapper; dict.__getitem__ is not.
>> str.__getitem__ is a slot wrapper; list.__getitem__ is not.
>> If any of these change then the semantics of the language changes.
> 
> 
> 


From jimjjewett at gmail.com  Tue Apr 24 18:19:49 2012
From: jimjjewett at gmail.com (Jim Jewett)
Date: Tue, 24 Apr 2012 12:19:49 -0400
Subject: [Python-Dev] (time) PEP 418 glossary V2
In-Reply-To: <CAMpsgwabRAcN-DgJX56kA5z7JsrL=jJF=oWTiCYp1q-vA1hmEg@mail.gmail.com>
References: <CA+OGgf71sVzVYHBGmCD841KFDZbj7tJbQ+n6Xq2V1AoJYPzWYg@mail.gmail.com>
	<CAMpsgwabRAcN-DgJX56kA5z7JsrL=jJF=oWTiCYp1q-vA1hmEg@mail.gmail.com>
Message-ID: <CA+OGgf7u0_Dmv7wbEa8bND4zs0s9HTmGz9ZvuqK71Ndws+Hx5Q@mail.gmail.com>

On Tue, Apr 24, 2012 at 6:38 AM, Victor Stinner
<victor.stinner at gmail.com> wrote:
>> Monotonic
>> ---------

>> This is a particularly tricky term, as there are several subtly
>> incompatible definitions in use.

> Is it a definition for the glossary?

One use case for a PEP is that someone who does *not* have a
background in the area wants to start learning about it.  Even
excluding the general service of education, these people can be
valuable contributors, because they have a fresh perspective.  They
will almost certainly waste some time retracing dead ends, but I would
prefer it be out of a need to prove things to themselves, instead of
just because they misunderstood.

Given the amount of noise we already went through arguing over what
"Monotonic" should mean, I think we have an obligation to provide
these people with a heads-up, even if we don't end up using the term
ourselves.  And I think we *will* use the terms ourselves, if only as
some of the raw os_clock_* choices.

>>  C++ followed the mathematical definition
>>  ... a monotonic clock only promises not to go backwards.
>> ... additional guarantees, some ... required by the POSIX

Confession:

I based the above statements strictly on posts to python-dev, from
people who seemed to have some experience caring about clock details.

I did not find the relevant portions of either specification.[1]
Every time I started to search, I got pulled back to other tasks, and
the update was just delayed even longer.  I still felt it was worth
consolidating the state of the discussion.  Anyone who feels confident
in this domain is welcome to correct me, and encouraged to send
replacement text.

[1]  Can I assume that Victor's links here are the relevant ones, or
is someone aware of additional/more complete references for these
specifications?
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2010/n3128.html#time.clock.monotonic
http://pubs.opengroup.org/onlinepubs/000095399/basedefs/time.h.html

>>  The tradeoffs often include lack of a defined Epoch_
>> or mapping to `Civil Time`_,

> I don't know any monotonic with a defined epoch or
> mappable to the civil time.

The very basic "seconds (not even milliseconds) since the beginning of
1970" fits that definition, but doesn't seem to fit what most people
mean by "Monotonic Clock".

I'm still a little fuzzy on *why* it shouldn't count as a monotonic
clock.  Is it technically valid, but a lousy implementation because of
insufficient precision or resolution?  Is it because the functions
used in practice (on a modern OS) to retrieve timestamps don't
guarantee to ignore changes to the system clock?

>> and being more expensive (in `Latency`_, power usage, or duration spent
>> within calls to the clock itself) to use.

> CLOCK_MONOTONIC and CLOCK_REALTIME have the same performances on Linux
> and FreeBSD. Why would a monotonic clock be more expensive?

>>  For example, the clock may
>> represent (a constant multiplied by) ticks of a specific quartz timer
>> on a specific CPU core, and calls would therefore require
>> synchronization between cores.

> I don't think that synchronizing a counter between CPU cores is
> something expensive. See the following tables for details:
> http://www.python.org/dev/peps/pep-0418/#performance

Synchronization is always relatively expensive.  How expensive depends
on a lot of things decided before Python was installed.

Looking at the first table there (Linux 3.3 with Intel Core i7-2600 at
3.40GHz (8 cores)), CLOCK_MONOTONIC can be hundreds of times slower
than time(), and over 50 times slower than CLOCK_MONOTONIC_COARSE.  I
would assume that CLOCK_MONOTONIC_COARSE meets the technical
requirements for a monotonic clock, but does less well at meeting the
actual expectations for some combination of
(precision/stability/resolution).


> CLOCK_MONOTONIC and CLOCK_REALTIME use the same hardware clocksource
> and so have the same latency depending on the hardware.

Is this a rule of thumb or a requirement of some standard?

Does the fact that Windows, Mac OS X, and GNU/Hurd don't support
CLOCK_MONOTONIC indicate that there is a (perhaps informal?)
specification that none of their clocks meet, or does it only indicate
that they didn't like the name?

-jJ

From ethan at stoneleaf.us  Tue Apr 24 18:33:38 2012
From: ethan at stoneleaf.us (Ethan Furman)
Date: Tue, 24 Apr 2012 09:33:38 -0700
Subject: [Python-Dev] [Python-checkins] cpython (3.2): don't use a slot
 wrapper from a different special method (closes #14658)
In-Reply-To: <4F96C87F.4050602@hotpy.org>
References: <E1SMhMu-0000xh-93@dinsdale.python.org>	<4F96C616.5020906@hotpy.org>	<CAPZV6o_So9LwwFF6-CGVNBctYStU7PUdo215juJoMyVW2DoP4A@mail.gmail.com>
	<4F96C87F.4050602@hotpy.org>
Message-ID: <4F96D5E2.9030901@stoneleaf.us>

Mark Shannon wrote:
> Benjamin Peterson wrote:
>> 2012/4/24 Mark Shannon <mark at hotpy.org>:
>>> I'm not happy with this fix.
>>
>> It's not perfect, but it's an improvement.
>>
>>> Admittedly code like:
>>>
>>> class S(str):
>>>   __getattr__ = str.__add__
>>> s = S('a')
>>> print(S.b)
> 
> My typo, should be:
> print(s.b)
> (Instance not class)
> 
>>> is a little weird.
>>> But I think it should work (ie print 'ab') properly.

I can easily believe I'm missing something, but here are the results 
with the patch in place:

{'x': 42} 42
{'x': 42} 42
ab

and here's the code:

class Foo1(dict):
     def __getattr__(self, key): return self[key]
     def __setattr__(self, key, value): self[key] = value

class Foo2(dict):
     __getattr__ = dict.__getitem__
     __setattr__ = dict.__setitem__

o1 = Foo1()
o1.x = 42
print(o1, o1.x)

o2 = Foo2()
o2.x = 42
print(o2, o2.x)

class S(str):
    __getattr__ = str.__add__
s = S('a')
print(s.b)

~Ethan~

From victor.stinner at gmail.com  Tue Apr 24 18:35:45 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Tue, 24 Apr 2012 18:35:45 +0200
Subject: [Python-Dev] (time) PEP 418 glossary V2
In-Reply-To: <CA+OGgf7u0_Dmv7wbEa8bND4zs0s9HTmGz9ZvuqK71Ndws+Hx5Q@mail.gmail.com>
References: <CA+OGgf71sVzVYHBGmCD841KFDZbj7tJbQ+n6Xq2V1AoJYPzWYg@mail.gmail.com>
	<CAMpsgwabRAcN-DgJX56kA5z7JsrL=jJF=oWTiCYp1q-vA1hmEg@mail.gmail.com>
	<CA+OGgf7u0_Dmv7wbEa8bND4zs0s9HTmGz9ZvuqK71Ndws+Hx5Q@mail.gmail.com>
Message-ID: <CAMpsgwYpzwQj9GvcTfTOS=Op3Z_REVDjArz=pz8iGY5C8w747w@mail.gmail.com>

>> I don't know any monotonic with a defined epoch or
>> mappable to the civil time.
>
> The very basic "seconds (not even milliseconds) since the beginning of
> 1970" fits that definition, but doesn't seem to fit what most people
> mean by "Monotonic Clock".
>
> I'm still a little fuzzy on *why* it shouldn't count as a monotonic
> clock.  Is it technically valid, but a lousy implementation because of
> insufficient precision or resolution?  Is it because the functions
> used in practice (on a modern OS) to retrieve timestamps don't
> guarantee to ignore changes to the system clock?

You mean the time() function? It is the system clock and the system
clock is not monotonic because it can jump backward. It is also
affected when... the system clock is changed :-)
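
A small sketch of the difference (assuming Python 3.3+, where the PEP 418
time.monotonic() function exists; on platforms without it the call fails):

import time

wall0, mono0 = time.time(), time.monotonic()
time.sleep(1)
print("wall delta:", time.time() - wall0)            # may even be negative
print("monotonic delta:", time.monotonic() - mono0)  # never goes backward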

> Looking at the first table there (Linux 3.3 with Intel Core i7-2600 at
> 3.40GHz (8 cores)), CLOCK_MONOTONIC can be hundreds of times slower
> than time(), and over 50 times slower than CLOCK_MONOTONIC_COARSE.  I
> would assume that CLOCK_MONOTONIC_COARSE meets the technical
> requirements for a monotonic clock, but does less well at meeting the
> actual expectations for some combination of
> (precision/stability/resolution).

I chose CLOCK_MONOTONIC instead of CLOCK_MONOTONIC_COARSE because I
bet that most people prefer a clock with a higher precision over a
faster clock. Once issue #14555 (Add more clock identifiers) is done,
you will be able to call
time.clock_gettime(time.CLOCK_MONOTONIC_COARSE) in Python if you need
a faster monotonic clock.
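
As a rough way to compare the per-call overhead once that lands, something
like this sketch could be used (CLOCK_MONOTONIC_COARSE is an assumption
here: it only exists on Linux, and only once issue #14555 is applied):

import time, timeit

for name in ("CLOCK_MONOTONIC", "CLOCK_MONOTONIC_COARSE"):
    clk = getattr(time, name, None)
    if clk is None:
        print(name, "is not available on this build")
        continue
    t = timeit.timeit(lambda: time.clock_gettime(clk), number=100000)
    print(name, "%.0f ns per call" % (t / 100000 * 1e9))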

>> CLOCK_MONOTONIC and CLOCK_REALTIME use the same hardware clocksource
>> and so have the same latency depending on the hardware.
>
> Is this a rule of thumb or a requirement of some standard?

That is how these clocks are implemented on Linux. I don't know how they
are implemented on other OSes. My point was just that their
performance should be *very close* on Linux.

> Does that fact that Windows, Mac OS X, and GNU/Hurd don't support
> CLOCK_MONOTONIC indicate that there is a (perhaps informal?)
> specification that none of their clocks meet, or does it only indicate
> that they didn't like the name?

CLOCK_MONOTONIC requires the clock_gettime() function: clock_gettime()
is not available on Windows or Mac OS X. For Hurd, see:
http://www.gnu.org/software/hurd/open_issues/clock_gettime.html

PEP 418 uses other monotonic clocks for Windows and Mac OS X, but
GNU/Hurd is the only OS not supporting the new time.monotonic()
function.

Victor

From jimjjewett at gmail.com  Tue Apr 24 18:56:50 2012
From: jimjjewett at gmail.com (Jim Jewett)
Date: Tue, 24 Apr 2012 12:56:50 -0400
Subject: [Python-Dev] [Python-checkins] peps: Note that ImportError will
 no longer be raised due to a missing __init__.py
In-Reply-To: <CAP1=2W7yCFaUVwMqb-fOe=NWtNMkb_rnTNM492yTab6nBp+ykw@mail.gmail.com>
References: <E1SL0H4-0005LQ-GT@dinsdale.python.org>
	<CAP1=2W7yCFaUVwMqb-fOe=NWtNMkb_rnTNM492yTab6nBp+ykw@mail.gmail.com>
Message-ID: <CA+OGgf4ueteOZaaZSTF1n8X+dmgrw2VG80YRNmuOPVe7j8CJAA@mail.gmail.com>

On Thu, Apr 19, 2012 at 18:56, eric.smith wrote:

> +Note that an ImportError will no longer be raised for a directory
> +lacking an ``__init__.py`` file. Such a directory will now be imported
> +as a namespace package, whereas in prior Python versions an
> +ImportError would be raised.

Given that there is no way to modify the __path__ of a namespace
package (short of restarting python?), *should* it be an error if
there is exactly one directory?

Or is that just a case of "other tools out there, didn't happen to
install them"?

-jJ

From eric at trueblade.com  Tue Apr 24 19:02:30 2012
From: eric at trueblade.com (Eric Smith)
Date: Tue, 24 Apr 2012 13:02:30 -0400 (EDT)
Subject: [Python-Dev] [Python-checkins] peps: Note that ImportError will
 no longer be raised due to a missing __init__.py
In-Reply-To: <CA+OGgf4ueteOZaaZSTF1n8X+dmgrw2VG80YRNmuOPVe7j8CJAA@mail.gmail.com>
References: <E1SL0H4-0005LQ-GT@dinsdale.python.org>
	<CAP1=2W7yCFaUVwMqb-fOe=NWtNMkb_rnTNM492yTab6nBp+ykw@mail.gmail.com>
	<CA+OGgf4ueteOZaaZSTF1n8X+dmgrw2VG80YRNmuOPVe7j8CJAA@mail.gmail.com>
Message-ID: <ae9a4e834507d632e7396795238e6f58.squirrel@mail.trueblade.com>

> On Thu, Apr 19, 2012 at 18:56, eric.smith wrote:
>
>> +Note that an ImportError will no longer be raised for a directory
>> +lacking an ``__init__.py`` file. Such a directory will now be imported
>> +as a namespace package, whereas in prior Python versions an
>> +ImportError would be raised.
>
> Given that there is no way to modify the __path__ of a namespace
> package (short of restarting python?), *should* it be an error if
> there is exactly one directory?
>
> Or is that just a case of "other tools out there, didn't happen to
> install them"?

Right. If I just install zope.interfaces and no other zope packages, that
shouldn't be an error.

Eric.


From g.brandl at gmx.net  Tue Apr 24 19:13:51 2012
From: g.brandl at gmx.net (Georg Brandl)
Date: Tue, 24 Apr 2012 19:13:51 +0200
Subject: [Python-Dev] cpython (2.7): #14538: HTMLParser can now parse
 correctly start tags that contain a bare /.
In-Reply-To: <E1SKgIP-0006wX-2a@dinsdale.python.org>
References: <E1SKgIP-0006wX-2a@dinsdale.python.org>
Message-ID: <jn6mvh$ev3$1@dough.gmane.org>

On 19.04.2012 03:36, ezio.melotti wrote:
> http://hg.python.org/cpython/rev/36c901fcfcda
> changeset:   76413:36c901fcfcda
> branch:      2.7
> user:        Ezio Melotti <ezio.melotti at gmail.com>
> date:        Wed Apr 18 19:08:41 2012 -0600
> summary:
>   #14538: HTMLParser can now parse correctly start tags that contain a bare /.

> diff --git a/Misc/NEWS b/Misc/NEWS
> --- a/Misc/NEWS
> +++ b/Misc/NEWS
> @@ -50,6 +50,9 @@
>  Library
>  -------
>  
> +- Issue #14538: HTMLParser can now parse correctly start tags that contain
> +  a bare '/'.
> +

I think that's misleading: there's no way to "correctly" parse malformed HTML.

Georg


From martin at v.loewis.de  Tue Apr 24 19:21:32 2012
From: martin at v.loewis.de (martin at v.loewis.de)
Date: Tue, 24 Apr 2012 19:21:32 +0200
Subject: [Python-Dev] What do PyAPI_FUNC & PyAPI_DATA mean?
In-Reply-To: <EFE3877620384242A686D52278B7CCD33BE088@RKV-IT-EXCH104.ccp.ad.local>
References: <4F951384.5040705@hotpy.org>
	<CAPZV6o-1C30emhGuzisdi07FqXfxCkJdosO52cBGy3EXPKtSbw@mail.gmail.com>
	<EFE3877620384242A686D52278B7CCD33BBF14@RKV-IT-EXCH104.ccp.ad.local>
	<4F9656C8.30900@v.loewis.de>
	<EFE3877620384242A686D52278B7CCD33BE088@RKV-IT-EXCH104.ccp.ad.local>
Message-ID: <20120424192132.Horde.BVdnRML8999PluEcpdXyOdA@webmail.df.eu>


Quoting Kristján Valur Jónsson <kristjan at ccpgames.com>:

> You know that I'm speaking of Windows, right?

Yes, but this may only be valid for CCP; for CPython, we certainly
have to consider Unix as well.

> IMHO, we shouldn't put the PyAPI* stuff on functions unless they are  
> actual API functions.
> I don't know how the export tables for ELF .so objects is generated,  
> but it surely can't export _everything_.

It certainly does. Any global symbol in an ELF shared library gets
exported. There are recent (10 years or so) ways to restrict this,
by declaring symbols as hidden in the object file, but exporting
everything is the default that Python uses.
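
One way to observe this from Python itself -- a sketch for Linux only; the
probe names below are arbitrary examples, not anything to rely on:

import ctypes

# CDLL(None) dlopens the running process, so any global symbol in the ELF
# export table can be resolved by name.
proc = ctypes.CDLL(None)
for sym in ("PyObject_Repr", "this_symbol_does_not_exist"):
    try:
        proc[sym]
        print(sym, "is exported")
    except AttributeError:
        print(sym, "is not exported")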

> Anyway, marking stuff as part of the API makes  sense, and marking
> functions as being part of the API makes no sense and is wasteful  
> when they are not.

There are cases where it's necessary: when an extension module
uses a function that is not in the API.

> We might even have something similar for the stable api.

I don't understand. Everything in the stable api is part
of the API, and thus needs to be exported from the Python DLL.

Regards,
Martin



From martin at v.loewis.de  Tue Apr 24 19:25:09 2012
From: martin at v.loewis.de (martin at v.loewis.de)
Date: Tue, 24 Apr 2012 19:25:09 +0200
Subject: [Python-Dev] What do PyAPI_FUNC & PyAPI_DATA mean?
In-Reply-To: <EFE3877620384242A686D52278B7CCD33BE0A9@RKV-IT-EXCH104.ccp.ad.local>
References: <4F951384.5040705@hotpy.org> <4F9655E5.70307@v.loewis.de>
	<EFE3877620384242A686D52278B7CCD33BE0A9@RKV-IT-EXCH104.ccp.ad.local>
Message-ID: <20120424192509.Horde.7jQ3MML8999PluH1jRaSOBA@webmail.df.eu>


Quoting Kristján Valur Jónsson <kristjan at ccpgames.com>:

> Aha, so that is the rationale.  Because the export table on unix is so
> generous, we force ourselves to be generous on windows too?

Yes. If the code compiles and links on Unix, it shall also compile and
link on Windows.

> I did some unix programming back in the day.  IRIX, actually (a Sys  
> V derivative).  I'm pretty
> sure we had to explicitly specify our .so exports.  But I might be mistaken.

Maybe on IRIX, probably in a way that predates ELF. In the old days, on Linux,
you had to globally request address space from Linus Torvalds for  
shared libraries.
These days are long gone. ELF shared libraries are designed to give the same
experience (roughly) as static libraries, wrt. source compatibility.

Regards,
Martin



From edcjones at comcast.net  Tue Apr 24 18:05:46 2012
From: edcjones at comcast.net (Edward C. Jones)
Date: Tue, 24 Apr 2012 12:05:46 -0400
Subject: [Python-Dev] Repeated hangs during "make test"
Message-ID: <4F96CF5A.5020100@comcast.net>

CPython 3.3.0a2 (default, Apr 24 2012, 10:47:03) [GCC 4.4.5]
Linux-2.6.32-5-amd64-x86_64-with-debian-6.0.4 little-endian

Ran "make test".  Hung during test_socket.  Used CNTL-C to exit the test.
test_ssl failed.  Ran "./python -m test -v test_ssl".  Test ok. Ran
"./python -m test -v test_socket" which was ok.

Ran "make test" again.  Hung during test_concurrent_futures.  Used CNTL-C to
exit test_concurrent_futures.  test_ssl failed.  Ran
"./python -m test -v test_ssl".  Test ok.

Ran "make test" a third time.  Hung during test_io.  Used CNTL-C to
exit test_io.  test_ssl failed.  Ran "./python -m test -v test_ssl".  
Test ok.

Did it again.  Same behavior except the hang is in test_buffer.

And again for test_httpservers.

What is going on?


From martin at v.loewis.de  Tue Apr 24 19:43:30 2012
From: martin at v.loewis.de (martin at v.loewis.de)
Date: Tue, 24 Apr 2012 19:43:30 +0200
Subject: [Python-Dev] cpython: Implement PEP 412: Key-sharing
 dictionaries (closes #13903)
In-Reply-To: <EFE3877620384242A686D52278B7CCD33BE13F@RKV-IT-EXCH104.ccp.ad.local>
References: <E1SML8L-0002LG-NT@dinsdale.python.org>
	<20120423222218.4015b13e@pitrou.net>
	<20120423215558.092532509E3@webabinitio.net>
	<EFE3877620384242A686D52278B7CCD33BE13F@RKV-IT-EXCH104.ccp.ad.local>
Message-ID: <20120424194330.Horde.bSQePsL8999PluZC5dJCcCA@webmail.df.eu>

> Benchmarks should measure memory usage too, of course.  Sadly that  
> is not possible in standard cPython.

It's actually very easy in standard CPython, using sys.getsizeof.

> Btw, this is of great interest to me at the moment, our Shanghai  
> engineers are screaming at the
> memory waste incurred by dictionaries.  A 10 item dictionary  
> consumes 1/2k on 32 bits, did you know this?

I did.

In Python 3.3, this now goes down to 248 bytes (32 bits).
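
For reference, this is the kind of quick check sys.getsizeof makes possible
(the exact numbers depend on the platform, pointer width and Python version,
so treat them as illustrative only):

import sys

d = dict((str(i), i) for i in range(10))   # a 10-item dict
print(sys.getsizeof(d))                    # the dict object plus its table

class C(object):
    def __init__(self):
        for i in range(10):
            setattr(self, "a%d" % i, i)

print(sys.getsizeof(C().__dict__))         # a 10-entry instance dict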

Regards,
Martin



From benjamin at python.org  Tue Apr 24 20:34:55 2012
From: benjamin at python.org (Benjamin Peterson)
Date: Tue, 24 Apr 2012 14:34:55 -0400
Subject: [Python-Dev] cpython (2.7): #14538: HTMLParser can now parse
 correctly start tags that contain a bare /.
In-Reply-To: <jn6mvh$ev3$1@dough.gmane.org>
References: <E1SKgIP-0006wX-2a@dinsdale.python.org>
	<jn6mvh$ev3$1@dough.gmane.org>
Message-ID: <CAPZV6o8bf0zR7Y5GUD=Td+E44wx4gtn3jtTV2qJ2Zq34dc_dHg@mail.gmail.com>

2012/4/24 Georg Brandl <g.brandl at gmx.net>:
> On 19.04.2012 03:36, ezio.melotti wrote:
>> http://hg.python.org/cpython/rev/36c901fcfcda
>> changeset:   76413:36c901fcfcda
>> branch:      2.7
>> user:        Ezio Melotti <ezio.melotti at gmail.com>
>> date:        Wed Apr 18 19:08:41 2012 -0600
>> summary:
>>   #14538: HTMLParser can now parse correctly start tags that contain a bare /.
>
>> diff --git a/Misc/NEWS b/Misc/NEWS
>> --- a/Misc/NEWS
>> +++ b/Misc/NEWS
>> @@ -50,6 +50,9 @@
>> ?Library
>> ?-------
>>
>> +- Issue #14538: HTMLParser can now parse correctly start tags that contain
>> +  a bare '/'.
>> +
>
> I think that's misleading: there's no way to "correctly" parse malformed HTML.

There is in the since that you can follow the HTML5 algorithm, which
can "parse" any junk you throw at it.



-- 
Regards,
Benjamin

From fdrake at acm.org  Tue Apr 24 21:00:13 2012
From: fdrake at acm.org (Fred Drake)
Date: Tue, 24 Apr 2012 15:00:13 -0400
Subject: [Python-Dev] cpython (2.7): #14538: HTMLParser can now parse
 correctly start tags that contain a bare /.
In-Reply-To: <CAPZV6o8bf0zR7Y5GUD=Td+E44wx4gtn3jtTV2qJ2Zq34dc_dHg@mail.gmail.com>
References: <E1SKgIP-0006wX-2a@dinsdale.python.org>
	<jn6mvh$ev3$1@dough.gmane.org>
	<CAPZV6o8bf0zR7Y5GUD=Td+E44wx4gtn3jtTV2qJ2Zq34dc_dHg@mail.gmail.com>
Message-ID: <CAFT4OTFx9XS4KLESKLGhxqYJU+iAshjLNYfHD=5Xdex6SseNNA@mail.gmail.com>

On Tue, Apr 24, 2012 at 2:34 PM, Benjamin Peterson <benjamin at python.org> wrote:
> There is in the since that you can follow the HTML5 algorithm, which
> can "parse" any junk you throw at it.

This whole can of worms is why I gave up on HTML years ago (well, one
reason among many).

There are markup languages, and there's soup.


  -Fred

-- 
Fred L. Drake, Jr.    <fdrake at acm.org>
"A person who won't read has no advantage over one who can't read."
   --Samuel Langhorne Clemens

From g.brandl at gmx.net  Tue Apr 24 21:02:43 2012
From: g.brandl at gmx.net (Georg Brandl)
Date: Tue, 24 Apr 2012 21:02:43 +0200
Subject: [Python-Dev] cpython (2.7): #14538: HTMLParser can now parse
 correctly start tags that contain a bare /.
In-Reply-To: <CAPZV6o8bf0zR7Y5GUD=Td+E44wx4gtn3jtTV2qJ2Zq34dc_dHg@mail.gmail.com>
References: <E1SKgIP-0006wX-2a@dinsdale.python.org>
	<jn6mvh$ev3$1@dough.gmane.org>
	<CAPZV6o8bf0zR7Y5GUD=Td+E44wx4gtn3jtTV2qJ2Zq34dc_dHg@mail.gmail.com>
Message-ID: <jn6tbl$5qa$1@dough.gmane.org>

On 24.04.2012 20:34, Benjamin Peterson wrote:
> 2012/4/24 Georg Brandl <g.brandl at gmx.net>:
>> On 19.04.2012 03:36, ezio.melotti wrote:
>>> http://hg.python.org/cpython/rev/36c901fcfcda
>>> changeset:   76413:36c901fcfcda
>>> branch:      2.7
>>> user:        Ezio Melotti <ezio.melotti at gmail.com>
>>> date:        Wed Apr 18 19:08:41 2012 -0600
>>> summary:
>>>   #14538: HTMLParser can now parse correctly start tags that contain a bare /.
>>
>>> diff --git a/Misc/NEWS b/Misc/NEWS
>>> --- a/Misc/NEWS
>>> +++ b/Misc/NEWS
>>> @@ -50,6 +50,9 @@
>>>  Library
>>>  -------
>>>
>>> +- Issue #14538: HTMLParser can now parse correctly start tags that contain
>>> +  a bare '/'.
>>> +
>>
>> I think that's misleading: there's no way to "correctly" parse malformed HTML.
> 
> There is in the since that you can follow the HTML5 algorithm, which
> can "parse" any junk you throw at it.

Ah, good. Then I hope we are following the algorithm here (and are slowly
coming to use it for htmllib in general).

Georg


From benjamin at python.org  Tue Apr 24 21:05:48 2012
From: benjamin at python.org (Benjamin Peterson)
Date: Tue, 24 Apr 2012 15:05:48 -0400
Subject: [Python-Dev] cpython (2.7): #14538: HTMLParser can now parse
 correctly start tags that contain a bare /.
In-Reply-To: <CAPZV6o8bf0zR7Y5GUD=Td+E44wx4gtn3jtTV2qJ2Zq34dc_dHg@mail.gmail.com>
References: <E1SKgIP-0006wX-2a@dinsdale.python.org>
	<jn6mvh$ev3$1@dough.gmane.org>
	<CAPZV6o8bf0zR7Y5GUD=Td+E44wx4gtn3jtTV2qJ2Zq34dc_dHg@mail.gmail.com>
Message-ID: <CAPZV6o9cwz6Ccri9uLSH2+jv3UzO7P0iMqRfnuYk=BUh4CS=1g@mail.gmail.com>

2012/4/24 Benjamin Peterson <benjamin at python.org>:
> There is in the since

This is confusing, since I meant "sense".


-- 
Regards,
Benjamin

From solipsis at pitrou.net  Tue Apr 24 21:06:22 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Tue, 24 Apr 2012 21:06:22 +0200
Subject: [Python-Dev] cpython: Closes Issue #14661: posix module: add
 O_EXEC, O_SEARCH, O_TTY_INIT (I add some
References: <E1SMkyn-0001Fc-Uv@dinsdale.python.org>
Message-ID: <20120424210622.2d120010@pitrou.net>

On Tue, 24 Apr 2012 21:00:49 +0200
jesus.cea <python-checkins at python.org> wrote:
> http://hg.python.org/cpython/rev/2023f48b32b6
> changeset:   76537:2023f48b32b6
> user:        Jesus Cea <jcea at jcea.es>
> date:        Tue Apr 24 20:59:17 2012 +0200
> summary:
>   Closes Issue #14661: posix module: add O_EXEC, O_SEARCH, O_TTY_INIT (I add some Solaris constants too)

Could you please add a Misc/NEWS entry for all this?

Thank you

Antoine.



From merwok at netwok.org  Tue Apr 24 21:34:59 2012
From: merwok at netwok.org (Éric Araujo)
Date: Tue, 24 Apr 2012 15:34:59 -0400
Subject: [Python-Dev] cpython (2.7): #14538: HTMLParser can now parse
 correctly start tags that contain a bare /.
In-Reply-To: <jn6tbl$5qa$1@dough.gmane.org>
References: <E1SKgIP-0006wX-2a@dinsdale.python.org>
	<jn6mvh$ev3$1@dough.gmane.org>
	<CAPZV6o8bf0zR7Y5GUD=Td+E44wx4gtn3jtTV2qJ2Zq34dc_dHg@mail.gmail.com>
	<jn6tbl$5qa$1@dough.gmane.org>
Message-ID: <4F970063.7000800@netwok.org>

On 24/04/2012 15:02, Georg Brandl wrote:
> On 24.04.2012 20:34, Benjamin Peterson wrote:
>> 2012/4/24 Georg Brandl<g.brandl at gmx.net>:
>>> I think that's misleading: there's no way to "correctly" parse malformed HTML.
>> There is in the since that you can follow the HTML5 algorithm, which
>> can "parse" any junk you throw at it.
> Ah, good. Then I hope we are following the algorithm here (and are slowly
> coming to use it for htmllib in general).

Yes, Ezio's commits on html.parser/HTMLParser in the last months have 
been following the HTML5 spec.  Ezio, RDM and I have had some discussion 
about that on some bug reports, IRC and private mail and reached the 
agreement to do the useful thing, that is follow HTML5 and not pretend 
that the stdlib parser is strict or validating.

Ezio was thinking about a blog.python.org post to advertise this.

Regards

From brian at python.org  Tue Apr 24 21:41:54 2012
From: brian at python.org (Brian Curtin)
Date: Tue, 24 Apr 2012 14:41:54 -0500
Subject: [Python-Dev] cpython (2.7): #14538: HTMLParser can now parse
 correctly start tags that contain a bare /.
In-Reply-To: <4F970063.7000800@netwok.org>
References: <E1SKgIP-0006wX-2a@dinsdale.python.org>
	<jn6mvh$ev3$1@dough.gmane.org>
	<CAPZV6o8bf0zR7Y5GUD=Td+E44wx4gtn3jtTV2qJ2Zq34dc_dHg@mail.gmail.com>
	<jn6tbl$5qa$1@dough.gmane.org> <4F970063.7000800@netwok.org>
Message-ID: <CAD+XWwq1dyTNfbt_GyDQUTEAYosU=6QftL_Ah9vOGbKDXvoxng@mail.gmail.com>

On Tue, Apr 24, 2012 at 14:34, Éric Araujo <merwok at netwok.org> wrote:
> On 24/04/2012 15:02, Georg Brandl wrote:
>>
>> On 24.04.2012 20:34, Benjamin Peterson wrote:
>>>
>>> 2012/4/24 Georg Brandl<g.brandl at gmx.net>:
>>>>
>>>> I think that's misleading: there's no way to "correctly" parse malformed
>>>> HTML.
>>>
>>> There is in the since that you can follow the HTML5 algorithm, which
>>> can "parse" any junk you throw at it.
>>
>> Ah, good. Then I hope we are following the algorithm here (and are slowly
>> coming to use it for htmllib in general).
>
>
> Yes, Ezio's commits on html.parser/HTMLParser in the last months have been
> following the HTML5 spec.  Ezio, RDM and I have had some discussion about
> that on some bug reports, IRC and private mail and reached the agreement to
> do the useful thing, that is follow HTML5 and not pretend that the stdlib
> parser is strict or validating.
>
> Ezio was thinking about a blog.python.org post to advertise this.

Please do this, and I welcome anyone else who wants to write about
their work on the blog to do so. Contact me for info.

From cf.natali at gmail.com  Tue Apr 24 21:52:37 2012
From: cf.natali at gmail.com (Charles-François Natali)
Date: Tue, 24 Apr 2012 21:52:37 +0200
Subject: [Python-Dev] cpython: Closes Issue #14661: posix module: add
 O_EXEC, O_SEARCH, O_TTY_INIT (I add some
In-Reply-To: <20120424210622.2d120010@pitrou.net>
References: <E1SMkyn-0001Fc-Uv@dinsdale.python.org>
	<20120424210622.2d120010@pitrou.net>
Message-ID: <CAH_1eM0rVa=3V5U06rZcU39-QZjtoT_N=OKJ_Jd3k2vBfrQQxw@mail.gmail.com>

> jesus.cea <python-checkins at python.org> wrote:
>> http://hg.python.org/cpython/rev/2023f48b32b6
>> changeset:   76537:2023f48b32b6
>> user:        Jesus Cea <jcea at jcea.es>
>> date:        Tue Apr 24 20:59:17 2012 +0200
>> summary:
>>   Closes Issue #14661: posix module: add O_EXEC, O_SEARCH, O_TTY_INIT (I
>> add some Solaris constants too)
>
> Could you please add a Misc/NEWS entry for all this?

I also tend to always update Misc/ACKS too, even for "trivial" patches.

From mark at hotpy.org  Tue Apr 24 22:42:22 2012
From: mark at hotpy.org (Mark Shannon)
Date: Tue, 24 Apr 2012 21:42:22 +0100
Subject: [Python-Dev] [Python-checkins] cpython (3.2): don't use a slot
 wrapper from a different special method (closes #14658)
In-Reply-To: <CAPZV6o_So9LwwFF6-CGVNBctYStU7PUdo215juJoMyVW2DoP4A@mail.gmail.com>
References: <E1SMhMu-0000xh-93@dinsdale.python.org>	<4F96C616.5020906@hotpy.org>
	<CAPZV6o_So9LwwFF6-CGVNBctYStU7PUdo215juJoMyVW2DoP4A@mail.gmail.com>
Message-ID: <4F97102E.9090203@hotpy.org>

Benjamin Peterson wrote:
> 2012/4/24 Mark Shannon <mark at hotpy.org>:
>> I'm not happy with this fix.
> 
> It's not perfect, but it's an improvement.
> 
Actually, I think it is probably correct.
I've been trying to break it by assigning various unusual
objects to special attributes and it seems OK so far.

I don't really trust all that slot-wrapper stuff,
but rewriting is a lot of work and would introduce new errors,
so I'll just leave it at that.

[snip]

Cheers,
Mark.




From victor.stinner at gmail.com  Tue Apr 24 22:49:08 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Tue, 24 Apr 2012 22:49:08 +0200
Subject: [Python-Dev] [Python-checkins] cpython: Closes Issue #14661:
 posix module: add O_EXEC, O_SEARCH, O_TTY_INIT (I add some
In-Reply-To: <E1SMkyn-0001Fc-Uv@dinsdale.python.org>
References: <E1SMkyn-0001Fc-Uv@dinsdale.python.org>
Message-ID: <CAMpsgwbKqompkzVYTx0eNAXzbFdMFRYq1yf1xOcNNGRnF6AHAg@mail.gmail.com>

2012/4/24 jesus.cea <python-checkins at python.org>:
> http://hg.python.org/cpython/rev/2023f48b32b6
> changeset:   76537:2023f48b32b6
> user:        Jesus Cea <jcea at jcea.es>
> date:        Tue Apr 24 20:59:17 2012 +0200
> summary:
>   Closes Issue #14661: posix module: add O_EXEC, O_SEARCH, O_TTY_INIT (I add some Solaris constants too)

Don't you want to document these new constants in Doc/library/os.rst?

Victor

From victor.stinner at gmail.com  Tue Apr 24 22:54:04 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Tue, 24 Apr 2012 22:54:04 +0200
Subject: [Python-Dev] Repeated hangs during "make test"
In-Reply-To: <4F96CF5A.5020100@comcast.net>
References: <4F96CF5A.5020100@comcast.net>
Message-ID: <CAMpsgwZTb2xKKH3ob0nqb=74W5eudzS-AOPuZHxTD-XcViL6aA@mail.gmail.com>

2012/4/24 Edward C. Jones <edcjones at comcast.net>:
> CPython 3.3.0a2 (default, Apr 24 2012, 10:47:03) [GCC 4.4.5]
> Linux-2.6.32-5-amd64-x86_64-with-debian-6.0.4 little-endian
>
> Ran "make test". ?Hung during test_socket. ?Used CNTL-C to exit the test.

Can you investigate what is blocked in the test? Can you at least
provide a traceback? You may try the timeout option of -m test.
Example:

$ ./python -m test --timeout=60 # seconds

> What is going on?

I'm unable to reproduce the bug, so I cannot help you :-(

Victor

From solipsis at pitrou.net  Tue Apr 24 23:00:26 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Tue, 24 Apr 2012 23:00:26 +0200
Subject: [Python-Dev] Repeated hangs during "make test"
References: <4F96CF5A.5020100@comcast.net>
Message-ID: <20120424230026.7c70fe8b@pitrou.net>

On Tue, 24 Apr 2012 12:05:46 -0400
"Edward C. Jones" <edcjones at comcast.net> wrote:
> CPython 3.3.0a2 (default, Apr 24 2012, 10:47:03) [GCC 4.4.5]
> Linux-2.6.32-5-amd64-x86_64-with-debian-6.0.4 little-endian
> 
> Ran "make test".  Hung during test_socket.  Used CNTL-C to exit the test.
> test_ssl failed.  Ran "./python -m test -v test_ssl".  Test ok. Ran
> "./python -m test -v test_socket" which was ok.
> 
> Ran "make test" again.  Hung during test_concurrent_futures.  Used CNTL-C to
> exit test_concurrent_futures.  test_ssl failed.  Ran
> "./python -m test -v test_ssl".  Test ok.
> 
> Ran "make test" a third time.  Hung during test_io.  Used CNTL-C to
> exit test_io.  test_ssl failed.  Ran "./python -m test -v test_ssl".  
> Test ok.

Remember to pass "-uall" when running tests with "./python -m test
[something]".
Otherwise some test cases get skipped.

Regards

Antoine.



From ethan at stoneleaf.us  Tue Apr 24 22:46:51 2012
From: ethan at stoneleaf.us (Ethan Furman)
Date: Tue, 24 Apr 2012 13:46:51 -0700
Subject: [Python-Dev] netiquette on py-dev
Message-ID: <4F97113B.70601@stoneleaf.us>

Okay, advice please.

When responding to posts, should the poster to whom I am responding be 
listed as well as python-dev, or should my responses just go to python-dev?

I see both ways occurring, and am not sure if one or the other is preferred.

As a reference point, on python-list I almost never have the previous 
respondent's email in the CC list.

~Ethan~

From solipsis at pitrou.net  Tue Apr 24 23:42:38 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Tue, 24 Apr 2012 23:42:38 +0200
Subject: [Python-Dev] netiquette on py-dev
References: <4F97113B.70601@stoneleaf.us>
Message-ID: <20120424234238.265c014b@pitrou.net>

On Tue, 24 Apr 2012 13:46:51 -0700
Ethan Furman <ethan at stoneleaf.us> wrote:
> Okay, advice please.
> 
> When responding to posts, should the poster to whom I am responding be 
> listed as well as python-dev, or should my responses just go to python-dev?

I prefer responses to python-dev only myself; I am always a bit annoyed
to get responses (half-)privately, since they just duplicate what I
already read on the list via gmane.

Regards

Antoine.



From phd at phdru.name  Tue Apr 24 23:50:39 2012
From: phd at phdru.name (Oleg Broytman)
Date: Wed, 25 Apr 2012 01:50:39 +0400
Subject: [Python-Dev] netiquette on py-dev
In-Reply-To: <4F97113B.70601@stoneleaf.us>
References: <4F97113B.70601@stoneleaf.us>
Message-ID: <20120424215039.GA8126@iskra.aviel.ru>

On Tue, Apr 24, 2012 at 01:46:51PM -0700, Ethan Furman <ethan at stoneleaf.us> wrote:
> When responding to posts, should the poster to whom I am responding
> be listed as well as python-dev, or should my responses just go to
> python-dev?

   I reply to list only, except when I want extra attention (e.g. when I
direct people to comp.lang.python). My MUA has 3 reply commands - reply
to the author, group reply (reply to all) and list reply (mailing lists
are configured) so it's easy for me to choose which way I'm replying.

Oleg.
-- 
     Oleg Broytman            http://phdru.name/            phd at phdru.name
           Programmers don't die, they just GOSUB without RETURN.

From tseaver at palladion.com  Wed Apr 25 00:21:50 2012
From: tseaver at palladion.com (Tres Seaver)
Date: Tue, 24 Apr 2012 18:21:50 -0400
Subject: [Python-Dev] netiquette on py-dev
In-Reply-To: <4F97113B.70601@stoneleaf.us>
References: <4F97113B.70601@stoneleaf.us>
Message-ID: <jn7920$jl2$1@dough.gmane.org>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On 04/24/2012 04:46 PM, Ethan Furman wrote:
> Okay, advice please.
> 
> When responding to posts, should the poster to whom I am responding be
>  listed as well as python-dev, or should my responses just go to
> python-dev?
> 
> I see both ways occurring, and am not sure if one or the other is
> preferred.
> 
> As a reference point, on python-list I almost never have the previous
>  respondent's email in the CC list.

I prefer not to be CC'ed, as I am gonna read the message on the list
anyway.  I almost never CC the author on a list post, unless specifically
asked (e.g., where the list is not open-subscription, as in a security
response list).  I occasionally CC a third user whom I know is
subscribed, intending that as a "poke" / escalation (they might miss or
defer replying to the message).


Tres.
- -- 
===================================================================
Tres Seaver          +1 540-429-0999          tseaver at palladion.com
Palladion Software   "Excellence by Design"    http://palladion.com
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.10 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iEYEARECAAYFAk+XJ34ACgkQ+gerLs4ltQ4zogCeJeqS1eHMJ17FETzUkIQgkw8B
hQkAoNmDcp1WLLAMSqFr9fGDXFtAjO3W
=hKRe
-----END PGP SIGNATURE-----


From ben+python at benfinney.id.au  Wed Apr 25 02:15:17 2012
From: ben+python at benfinney.id.au (Ben Finney)
Date: Wed, 25 Apr 2012 10:15:17 +1000
Subject: [Python-Dev] netiquette on py-dev
References: <4F97113B.70601@stoneleaf.us>
Message-ID: <87r4vcbrai.fsf@benfinney.id.au>

Ethan Furman <ethan at stoneleaf.us> writes:

> When responding to posts, should the poster to whom I am responding be
> listed as well as python-dev, or should my responses just go to
> python-dev?

IMO, the poster to whom you are responding should expect to read your
response in the same forum where their message appeared. So, no need to
send them another copy individually.

-- 
 \     "There is something wonderful in seeing a wrong-headed majority |
  `\           assailed by truth." --John Kenneth Galbraith, 1989-07-28 |
_o__)                                                                  |
Ben Finney


From stephen at xemacs.org  Wed Apr 25 05:08:39 2012
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Wed, 25 Apr 2012 12:08:39 +0900
Subject: [Python-Dev] (time) PEP 418 glossary V2
In-Reply-To: <CA+OGgf7u0_Dmv7wbEa8bND4zs0s9HTmGz9ZvuqK71Ndws+Hx5Q@mail.gmail.com>
References: <CA+OGgf71sVzVYHBGmCD841KFDZbj7tJbQ+n6Xq2V1AoJYPzWYg@mail.gmail.com>
	<CAMpsgwabRAcN-DgJX56kA5z7JsrL=jJF=oWTiCYp1q-vA1hmEg@mail.gmail.com>
	<CA+OGgf7u0_Dmv7wbEa8bND4zs0s9HTmGz9ZvuqK71Ndws+Hx5Q@mail.gmail.com>
Message-ID: <CAL_0O1-2Kod-6WQvd+iR60XGfnedxuxZCq2TyryKx-8G2bTP7A@mail.gmail.com>

On Wed, Apr 25, 2012 at 1:19 AM, Jim Jewett <jimjjewett at gmail.com> wrote:

> I'm still a little fuzzy on *why* it shouldn't count as a monotonic
> clock.

So are the people who say it shouldn't count (unless you're speaking
of the specific implementation on Unix systems, which can go backward
if the admin or NTP decides it should be so).  I think they are in
general mistaking their use case for a general specification, that's
all.  Even Glyph cited "what other people seem to think" in supporting
the usage where "monotonic" implies "high quality" in some informal
sense, although he does have a spec for what high quality means, and
AIUI an API for it in Twisted.

I think we should just accept that "monotonic" is in more or less
common use as a synonym for "high quality", and warn *our* users that
the implementers of such clocks may be working to a different spec.  I
think the revised glossary's description of "monotonic" does that
pretty well.

From stephen at xemacs.org  Wed Apr 25 05:44:06 2012
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Wed, 25 Apr 2012 12:44:06 +0900
Subject: [Python-Dev] netiquette on py-dev
In-Reply-To: <4F97113B.70601@stoneleaf.us>
References: <4F97113B.70601@stoneleaf.us>
Message-ID: <CAL_0O1_Ywhqc=3PU9zVDj36+qV5SZHRXfCuUndwYfeuWi4q2mw@mail.gmail.com>

On Wed, Apr 25, 2012 at 5:46 AM, Ethan Furman <ethan at stoneleaf.us> wrote:

> When responding to posts, should the poster to whom I am responding be
> listed as well as python-dev, or should my responses just go to python-dev?
>
> I see both ways occuring, and am not sure if one or the other is preferred.

I don't know of any webmail implementations that provide
reply-to-list, so a lot of us end up using reply-to-all.  Cleaning up
the headers requires at least deleting the To (which is where the
author ends up), and perhaps moving the list from Cc to To (to make it
pretty, I don't think a nonempty To is actually required by the RFC).
Especially on a mobile device this is a PITA.

So in most cases I suppose that the duplicate going to the author is
just an issue of "energy conservation" on the part of the responder.

Note that people who are really annoyed by the duplicates can set
their Mailman accounts to no-dupes, and Mailman won't send the post to
that person.  (This has its disadvantages in principle -- no List-*
headers and other list-specific info -- and in implementation -- at
best Mailman can change all your lists at one site, so you need to do
this on every site you subscribe to.  But it's an option.)  This won't
work for people who read on Gmane, of course, since they don't own the
subscription where they're reading the list.

From rosuav at gmail.com  Wed Apr 25 05:58:03 2012
From: rosuav at gmail.com (Chris Angelico)
Date: Wed, 25 Apr 2012 13:58:03 +1000
Subject: [Python-Dev] netiquette on py-dev
In-Reply-To: <CAL_0O1_Ywhqc=3PU9zVDj36+qV5SZHRXfCuUndwYfeuWi4q2mw@mail.gmail.com>
References: <4F97113B.70601@stoneleaf.us>
	<CAL_0O1_Ywhqc=3PU9zVDj36+qV5SZHRXfCuUndwYfeuWi4q2mw@mail.gmail.com>
Message-ID: <CAPTjJmr=0k92FCD6DtfuxGsu7so=qKi63cLd2tiNUMVfBFi_GA@mail.gmail.com>

On Wed, Apr 25, 2012 at 1:44 PM, Stephen J. Turnbull <stephen at xemacs.org> wrote:
> I don't know of any webmail implementations that provide
> reply-to-list, so a lot of us end up using reply-to-all.  Cleaning up
> the headers requires at least deleting the To (which is where the
> author ends up), and perhaps moving the list from Cc to To (to make it
> pretty, I don't think a nonempty To is actually required by the RFC).
> Especially on a mobile device this is a PITA.

I go the other way: hit Reply, and then replace the author's address
with the list's. I'd much rather have a Reply List though.
Unfortunately no decent webmail seems to have it, and I'm still
looking for a decent non-web-mail client too.

ChrisA

From ncoghlan at gmail.com  Wed Apr 25 07:45:14 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 25 Apr 2012 15:45:14 +1000
Subject: [Python-Dev] netiquette on py-dev
In-Reply-To: <CAPTjJmr=0k92FCD6DtfuxGsu7so=qKi63cLd2tiNUMVfBFi_GA@mail.gmail.com>
References: <4F97113B.70601@stoneleaf.us>
	<CAL_0O1_Ywhqc=3PU9zVDj36+qV5SZHRXfCuUndwYfeuWi4q2mw@mail.gmail.com>
	<CAPTjJmr=0k92FCD6DtfuxGsu7so=qKi63cLd2tiNUMVfBFi_GA@mail.gmail.com>
Message-ID: <CADiSq7dtKAU6Qbrwzo_arpANQwT=YgGpHy6ymjOvonD9tGrPyg@mail.gmail.com>

On Wed, Apr 25, 2012 at 1:58 PM, Chris Angelico <rosuav at gmail.com> wrote:
> On Wed, Apr 25, 2012 at 1:44 PM, Stephen J. Turnbull <stephen at xemacs.org> wrote:
>> I don't know of any webmail implementations that provide
>> reply-to-list, so a lot of us end up using reply-to-all.  Cleaning up
>> the headers requires at least deleting the To (which is where the
>> author ends up), and perhaps moving the list from Cc to To (to make it
>> pretty, I don't think a nonempty To is actually required by the RFC).
>> Especially on a mobile device this is a PITA.
>
> I go the other way: hit Reply, and then replace the author's address
> with the list's. I'd much rather have a Reply List though.
> Unfortunately no decent webmail seems to have it, and I'm still
> looking for a decent non-web-mail client too.

I used to do that, but switched to using Reply-All instead after
sending too many unintentionally off-list replies.

So yeah, the basic problem is mail clients that don't offer a
"Reply-List" option, with the Gmail web client being a notable
offender.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From ncoghlan at gmail.com  Wed Apr 25 08:12:38 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 25 Apr 2012 16:12:38 +1000
Subject: [Python-Dev] [Python-checkins] peps: Note that ImportError will
 no longer be raised due to a missing __init__.py
In-Reply-To: <CA+OGgf4ueteOZaaZSTF1n8X+dmgrw2VG80YRNmuOPVe7j8CJAA@mail.gmail.com>
References: <E1SL0H4-0005LQ-GT@dinsdale.python.org>
	<CAP1=2W7yCFaUVwMqb-fOe=NWtNMkb_rnTNM492yTab6nBp+ykw@mail.gmail.com>
	<CA+OGgf4ueteOZaaZSTF1n8X+dmgrw2VG80YRNmuOPVe7j8CJAA@mail.gmail.com>
Message-ID: <CADiSq7fqjrFs98-Ybh5YBy7L1TXj20SmyS5drJdqdX+pVdsW0g@mail.gmail.com>

On Wed, Apr 25, 2012 at 2:56 AM, Jim Jewett <jimjjewett at gmail.com> wrote:
> On Thu, Apr 19, 2012 at 18:56, eric.smith wrote:
>
>> +Note that an ImportError will no longer be raised for a directory
>> +lacking an ``__init__.py`` file. Such a directory will now be imported
>> +as a namespace package, whereas in prior Python versions an
>> +ImportError would be raised.
>
> Given that there is no way to modify the __path__ of a namespace
> package (short of restarting python?), *should* it be an error if
> there is exactly one directory?
>
> Or is that just a case of "other tools out there, didn't happen to
> install them"?

Or you installed all of them into the same directory (as distro
packages are likely to do).

Also, a namespace package __path__ is still just a list - quite
amenable to modification after creation. The only thing we're not
currently promising in PEP 420 is a programmatic interface to redo the
scan.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From g.brandl at gmx.net  Wed Apr 25 09:37:06 2012
From: g.brandl at gmx.net (Georg Brandl)
Date: Wed, 25 Apr 2012 09:37:06 +0200
Subject: [Python-Dev] cpython (2.7): Issue #14448: mention pytz;
	patch by Andrew Svetlov
In-Reply-To: <E1SMjqT-0001o8-TV@dinsdale.python.org>
References: <E1SMjqT-0001o8-TV@dinsdale.python.org>
Message-ID: <jn89i4$qm3$1@dough.gmane.org>

On 24.04.2012 19:48, sandro.tosi wrote:
> http://hg.python.org/cpython/rev/e0e421133d0f
> changeset:   76532:e0e421133d0f
> branch:      2.7
> parent:      76527:22767284de99
> user:        Sandro Tosi <sandro.tosi at gmail.com>
> date:        Tue Apr 24 19:43:33 2012 +0200
> summary:
>   Issue #14448: mention pytz; patch by Andrew Svetlov
> 
> files:
>   Doc/library/datetime.rst |  8 ++++++++
>   1 files changed, 8 insertions(+), 0 deletions(-)
> 
> 
> diff --git a/Doc/library/datetime.rst b/Doc/library/datetime.rst
> --- a/Doc/library/datetime.rst
> +++ b/Doc/library/datetime.rst
> @@ -1521,6 +1521,14 @@
>  other fixed-offset :class:`tzinfo` subclass (such as a class representing only
>  EST (fixed offset -5 hours), or only EDT (fixed offset -4 hours)).
>  
> +.. seealso::
> +
> +   `pytz <http://pypi.python.org/pypi/pytz/>`_
> +      The Standard Library has no :class:`tzinfo` instances except for UTC, but

             ^^^^^^^^^^^^^^^^ we don't capitalize "standard library"

> +      it exists a third-party library which brings Olson timezone database to

         ^^ there                                    ^ the

Also, I'm not sure everybody knows what the "Olson database" is, so maybe that
should be explained too.

cheers,
Georg


From mark at hotpy.org  Wed Apr 25 10:12:27 2012
From: mark at hotpy.org (Mark Shannon)
Date: Wed, 25 Apr 2012 09:12:27 +0100
Subject: [Python-Dev] [Python-checkins] Daily reference leaks
	(a2cf07135e4f): sum=6
In-Reply-To: <E1SMt4w-0003sV-C8@ap.vmr.nerim.net>
References: <E1SMt4w-0003sV-C8@ap.vmr.nerim.net>
Message-ID: <4F97B1EB.3060306@hotpy.org>

solipsis at pitrou.net wrote:
> results for a2cf07135e4f on branch "default"
> --------------------------------------------
> 
> test_tempfile leaked [2, 2, 2] references, sum=6
> 

These leaks are due to 6e5855854a2e: "Implement
PEP 412: Key-sharing dictionaries (closes #13903)".

They both occur in tests for tempfile.TemporaryDirectory,
although I don't know what is special about that code.

I'll investigate further when I have time.

Cheers,
Mark.

From sandro.tosi at gmail.com  Wed Apr 25 10:21:48 2012
From: sandro.tosi at gmail.com (Sandro Tosi)
Date: Wed, 25 Apr 2012 10:21:48 +0200
Subject: [Python-Dev] cpython (2.7): Issue #14448: mention pytz;
 patch by Andrew Svetlov
In-Reply-To: <jn89i4$qm3$1@dough.gmane.org>
References: <E1SMjqT-0001o8-TV@dinsdale.python.org>
	<jn89i4$qm3$1@dough.gmane.org>
Message-ID: <CAB4XWXzFLp3KU--gAUSixuHgQH6YKxqDMob3M3jrO5M89mq+Sg@mail.gmail.com>

Hi Georg,
thanks for the review!

On Wed, Apr 25, 2012 at 09:37, Georg Brandl <g.brandl at gmx.net> wrote:
> On 24.04.2012 19:48, sandro.tosi wrote:
>> http://hg.python.org/cpython/rev/e0e421133d0f
>> changeset:   76532:e0e421133d0f
>> branch:      2.7
>> parent:      76527:22767284de99
>> user:        Sandro Tosi <sandro.tosi at gmail.com>
>> date:        Tue Apr 24 19:43:33 2012 +0200
>> summary:
>>   Issue #14448: mention pytz; patch by Andrew Svetlov
>>
>> files:
>>   Doc/library/datetime.rst |  8 ++++++++
>>   1 files changed, 8 insertions(+), 0 deletions(-)
>>
>>
>> diff --git a/Doc/library/datetime.rst b/Doc/library/datetime.rst
>> --- a/Doc/library/datetime.rst
>> +++ b/Doc/library/datetime.rst
>> @@ -1521,6 +1521,14 @@
>>  other fixed-offset :class:`tzinfo` subclass (such as a class representing only
>>  EST (fixed offset -5 hours), or only EDT (fixed offset -4 hours)).
>>
>> +.. seealso::
>> +
>> +   `pytz <http://pypi.python.org/pypi/pytz/>`_
>> +      The Standard Library has no :class:`tzinfo` instances except for UTC, but
>
>              ^^^^^^^^^^^^^^^^ we don't capitalize "standard library"
>
>> +      it exists a third-party library which brings Olson timezone database to
>
>         ^^ there                                    ^ the

sigh, you're right: I'll fix them once the below point is clarified

> Also, I'm not sure everybody knows what the "Olson database" is, so maybe that
> should be explained too.

I had considered that, but then I found another reference to the "Olson
database" in an example right before the seealso note, so I left it as
it is. On second thought, it might be better to clarify what the Olson
db is; do you think a link (e.g. to
http://www.iana.org/time-zones ) would be enough on its own, or is a
brief note needed in addition?

cheers,
-- 
Sandro Tosi (aka morph, morpheus, matrixhasu)
My website: http://matrixhasu.altervista.org/
Me at Debian: http://wiki.debian.org/SandroTosi

From kristjan at ccpgames.com  Wed Apr 25 11:11:51 2012
From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=)
Date: Wed, 25 Apr 2012 09:11:51 +0000
Subject: [Python-Dev] cpython: Implement PEP 412: Key-sharing
 dictionaries (closes #13903)
In-Reply-To: <20120424194330.Horde.bSQePsL8999PluZC5dJCcCA@webmail.df.eu>
References: <E1SML8L-0002LG-NT@dinsdale.python.org>
	<20120423222218.4015b13e@pitrou.net>
	<20120423215558.092532509E3@webabinitio.net>
	<EFE3877620384242A686D52278B7CCD33BE13F@RKV-IT-EXCH104.ccp.ad.local>
	<20120424194330.Horde.bSQePsL8999PluZC5dJCcCA@webmail.df.eu>
Message-ID: <EFE3877620384242A686D52278B7CCD33C02BB@RKV-IT-EXCH104.ccp.ad.local>



> -----Original Message-----
> From: python-dev-bounces+kristjan=ccpgames.com at python.org
> [mailto:python-dev-bounces+kristjan=ccpgames.com at python.org] On
> Behalf Of martin at v.loewis.de
> Sent: 24. apríl 2012 17:44
> To: python-dev at python.org
> Subject: Re: [Python-Dev] cpython: Implement PEP 412: Key-sharing
> dictionaries (closes #13903)
> 
> > Benchmarks should measure memory usage too, of course.  Sadly that is
> > not possible in standard cPython.
> 
> It's actually very easy in standard CPython, using sys.getsizeof.
>
Yes, you can query each python object about how big it thinks it is.
What I'm speaking of is more like:
start_allocs, start_mem = allocator.get_current()
allocator.reset_limits()
run_complicated_tests()

end_allocs, end_mem = allocator.get_current()

print "delta blocks: %d, delta mem: %d" % (end_allocs-start_allocs, end_mem-start_mem)
print "peak blocks: %d, peak mem: %d" % allocator.peak()

 
> > Btw, this is of great interest to me at the moment, our Shanghai
> > engineers are screaming at the memory waste incurred by dictionaries.
> > A 10 item dictionary consumes 1/2k on 32 bits, did you know this?
> 
> I did.
> 
> In Python 3.3, this now goes down to 248 bytes (32 bits).
> 
I'm going to experiment with tunable parameters in 2.7 to trade performance for memory.  In some applications, memory trumps performance.

K


From mark at hotpy.org  Wed Apr 25 11:45:36 2012
From: mark at hotpy.org (Mark Shannon)
Date: Wed, 25 Apr 2012 10:45:36 +0100
Subject: [Python-Dev] cpython: Implement PEP 412: Key-sharing
 dictionaries (closes #13903)
In-Reply-To: <EFE3877620384242A686D52278B7CCD33C02BB@RKV-IT-EXCH104.ccp.ad.local>
References: <E1SML8L-0002LG-NT@dinsdale.python.org>	<20120423222218.4015b13e@pitrou.net>	<20120423215558.092532509E3@webabinitio.net>	<EFE3877620384242A686D52278B7CCD33BE13F@RKV-IT-EXCH104.ccp.ad.local>	<20120424194330.Horde.bSQePsL8999PluZC5dJCcCA@webmail.df.eu>
	<EFE3877620384242A686D52278B7CCD33C02BB@RKV-IT-EXCH104.ccp.ad.local>
Message-ID: <4F97C7C0.9030209@hotpy.org>

Kristján Valur Jónsson wrote:
> 
>> -----Original Message-----
>> From: python-dev-bounces+kristjan=ccpgames.com at python.org
>> [mailto:python-dev-bounces+kristjan=ccpgames.com at python.org] On
>> Behalf Of martin at v.loewis.de
>> Sent: 24. apríl 2012 17:44
>> To: python-dev at python.org
>> Subject: Re: [Python-Dev] cpython: Implement PEP 412: Key-sharing
>> dictionaries (closes #13903)
>>
>>> Benchmarks should measure memory usage too, of course.  Sadly that is
>>> not possible in standard cPython.
>> It's actually very easy in standard CPython, using sys.getsizeof.
>>
> Yes, you can query each python object about how big it thinks it is.
> What I'm speaking of is more like:
> start_allocs, start_mem = allocator.get_current()
> allocator.reset_limits()
> run_complicated_tests()
> 
> end_allocs, end_mem = allocator.get=current()
> 
> Print "delta blocks: %d, delta mem: %d"%(end_allocs-start_allocs, end_mem-start_mem)
> print "peak blocks: %d, peak mem: %d"%allocator.peak()

Take a look at the benchmark suite at
http://hg.python.org/benchmarks/
The test runner has an -m option that profiles memory usage;
you could take a look at how that is implemented.

Cheers,
Mark.

From ncoghlan at gmail.com  Wed Apr 25 11:55:59 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 25 Apr 2012 19:55:59 +1000
Subject: [Python-Dev] cpython (2.7): Issue #14448: mention pytz;
 patch by Andrew Svetlov
In-Reply-To: <CAB4XWXzFLp3KU--gAUSixuHgQH6YKxqDMob3M3jrO5M89mq+Sg@mail.gmail.com>
References: <E1SMjqT-0001o8-TV@dinsdale.python.org>
	<jn89i4$qm3$1@dough.gmane.org>
	<CAB4XWXzFLp3KU--gAUSixuHgQH6YKxqDMob3M3jrO5M89mq+Sg@mail.gmail.com>
Message-ID: <CADiSq7dfZPCNFGqiVnV+KCXNKjmMOmnws1b-XkcX1e8UkxG4UA@mail.gmail.com>

On Wed, Apr 25, 2012 at 6:21 PM, Sandro Tosi <sandro.tosi at gmail.com> wrote:
> On Wed, Apr 25, 2012 at 09:37, Georg Brandl <g.brandl at gmx.net> wrote:
>> Also, I'm not sure everybody knows what the "Olson database" is, so maybe that
>> should be explained too.
>
> I had considered that, but then I found another reference of "Olson
> database" in an example right before the seealso note, so I left it as
> it is. On a second thought, it might be better to clarify what Olson
> db is, do you think a link (f.e to here:
> http://www.iana.org/time-zones ) could be enough or (or in addition)
> also a brief note is needed?

I think another "see also" with a link to that page would be
appropriate. With maintenance of the database transferred to the IANA,
I'd also rephrase the reference as the "IANA timezone database (also
known as the Olson database)"

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From kristjan at ccpgames.com  Wed Apr 25 12:32:36 2012
From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=)
Date: Wed, 25 Apr 2012 10:32:36 +0000
Subject: [Python-Dev] cpython: Implement PEP 412: Key-sharing
 dictionaries (closes #13903)
In-Reply-To: <4F97C7C0.9030209@hotpy.org>
References: <E1SML8L-0002LG-NT@dinsdale.python.org>
	<20120423222218.4015b13e@pitrou.net>
	<20120423215558.092532509E3@webabinitio.net>
	<EFE3877620384242A686D52278B7CCD33BE13F@RKV-IT-EXCH104.ccp.ad.local>
	<20120424194330.Horde.bSQePsL8999PluZC5dJCcCA@webmail.df.eu>
	<EFE3877620384242A686D52278B7CCD33C02BB@RKV-IT-EXCH104.ccp.ad.local>
	<4F97C7C0.9030209@hotpy.org>
Message-ID: <EFE3877620384242A686D52278B7CCD33C0533@RKV-IT-EXCH104.ccp.ad.local>



> -----Original Message-----
> Take a look at the benchmark suite at
> http://hg.python.org/benchmarks/
> The test runner has an -m option that profiles memory usage, you could take
> a look at how that is implemented
>

Yes, that is out-of-process monitoring of memory as reported by the OS.  We do gather those counters as well, on both clients and servers.
But they don't give you the granularity you want when checking for memory leaks and memory usage by certain algorithms.
In the same way that the unittests have reference leak reports, they could just have memory usage reports, if the underlying allocator supported that.

FYI the current state of affairs of the cPython 2.7 branch we use is as follows:
1) We allow the API user to specify the base allocator python uses, both for regular allocs and allocating blocks for the obmalloc one, using:

/* Support for custom allocators */
typedef void *(*PyCCP_Malloc_t)(size_t size, void *arg, const char *file, int line, const char *msg);
typedef void *(*PyCCP_Realloc_t)(void *ptr, size_t size, void *arg, const char *file, int line, const char *msg);
typedef void (*PyCCP_Free_t)(void *ptr, void *arg, const char *file, int line, const char *msg);
typedef size_t (*PyCCP_Msize_t)(void *ptr, void *arg);
typedef struct PyCCP_CustomAllocator_t
{
    PyCCP_Malloc_t  pMalloc;
    PyCCP_Realloc_t pRealloc;
    PyCCP_Free_t    pFree;
    PyCCP_Msize_t   pMsize;    /* can be NULL, or return -1 if no size info is avail. */
    void            *arg;      /* opaque argument for the functions */
} PyCCP_CustomAllocator_t;

/* To set an allocator!  use 0 for the regular allocator, 1 for the block allocator.
 * pass a null pointer to reset to internal default
 */
PyAPI_FUNC(void) PyCCP_SetAllocator(int which, const PyCCP_CustomAllocator_t *); /* for BLUE to set the current context */

/* internal data member */
extern PyCCP_CustomAllocator_t _PyCCP_CustomAllocator[];

2) Using ifdefs, the macros will delegate all final allocations through these allocators.  This includes all the "naked" malloc calls scattered about; they are patched up using #defines.

3) Additionally, there is an internal layer of management, before delegating to the external allocators.  This internal manager provides statistics, exposed through the "sys" module.

The layering is something like this, all more or less definable by pre-processor macros. (Raw malloc() is turned into something else via pre-processor magic and a special "patch_malloc.h" file added to the modules which use raw malloc().)

          PyMem_Malloc()                         PyObject_Malloc()
                |                                           |
                v                                           v
           Mem bookkeeping                           obj bookkeeping
                |                                           |
                |                                           v
 malloc()       |                                     obmallocator
    |           |                                           |
    v           v                                           v
  PyMem_MALLOC_RAW()                             PyObject_MALLOC_RAW
           |                                       |
           v                                       v
     malloc() or vectored allocator specified through API function


Cheers,

K

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20120425/680865b0/attachment.html>

From brian at python.org  Wed Apr 25 15:34:39 2012
From: brian at python.org (Brian Curtin)
Date: Wed, 25 Apr 2012 08:34:39 -0500
Subject: [Python-Dev] [Python-checkins] cpython: Fix #3561. Add an
 option to place the Python installation into the Windows Path
In-Reply-To: <E1SN21s-0000UR-I7@dinsdale.python.org>
References: <E1SN21s-0000UR-I7@dinsdale.python.org>
Message-ID: <CAD+XWwp-E1TeK56iH8Vy336fLy9oMbv8iWJfQAmxcBiqay87xg@mail.gmail.com>

On Wed, Apr 25, 2012 at 08:13, brian.curtin <python-checkins at python.org> wrote:
> http://hg.python.org/cpython/rev/4e9f1017355f
> changeset:   76556:4e9f1017355f
> user:        Brian Curtin <brian at python.org>
> date:        Wed Apr 25 08:12:37 2012 -0500
> summary:
>  Fix #3561. Add an option to place the Python installation into the Windows Path environment variable.
>
> files:
>  Misc/NEWS        |   3 +++
>  Tools/msi/msi.py |  22 +++++++++++++++++++---
>  2 files changed, 22 insertions(+), 3 deletions(-)

http://bugs.python.org/issue14668 was created for updating the
relevant documentation.

I pushed without docs since it's unlikely they'll be done before the
weekend's alpha 3 build, and I didn't want to have this feature wait
an extra month before anyone sees it. Anyone who's installing an alpha
build is probably advanced enough to know what it's doing in the
meantime.

From bkabrda at redhat.com  Wed Apr 25 15:42:53 2012
From: bkabrda at redhat.com (Bohuslav Kabrda)
Date: Wed, 25 Apr 2012 09:42:53 -0400 (EDT)
Subject: [Python-Dev] Building against system expat
In-Reply-To: <e158d0af-3998-4223-8cfe-08aab5ec2c2b@zmail15.collab.prod.int.phx2.redhat.com>
Message-ID: <14a3d93a-ffd2-4884-9145-aea852598fbd@zmail15.collab.prod.int.phx2.redhat.com>

Hi, I'm trying to build Python 3.2.3 against a system expat library that lies outside the ordinary directory structure (under /opt). I also have an older version of the expat library in the standard system location. No matter what shell variables or options I pass to configure and make, pyexpat gets linked against the older system expat, which results in errors during tests:

pyexpat.cpython-32dmu.so: undefined symbol: XML_SetHashSalt

Does anyone have any idea what to pass to configure/make to link pyexpat with the other expat?

Thanks!

-- 
Regards,
Bohuslav "Slavek" Kabrda.

From barry at python.org  Wed Apr 25 16:32:31 2012
From: barry at python.org (Barry Warsaw)
Date: Wed, 25 Apr 2012 10:32:31 -0400
Subject: [Python-Dev] netiquette on py-dev
In-Reply-To: <CAL_0O1_Ywhqc=3PU9zVDj36+qV5SZHRXfCuUndwYfeuWi4q2mw@mail.gmail.com>
References: <4F97113B.70601@stoneleaf.us>
	<CAL_0O1_Ywhqc=3PU9zVDj36+qV5SZHRXfCuUndwYfeuWi4q2mw@mail.gmail.com>
Message-ID: <20120425103231.0b594242@limelight.wooz.org>

On Apr 25, 2012, at 12:44 PM, Stephen J. Turnbull wrote:

>Note that people who are really annoyed by the duplicates can set
>their Mailman accounts to no-dupes, and Mailman won't send the post to
>that person.  (This has its disadvantages in principle -- no List-*
>headers and other list-specific info -- and in implementation -- at
>best Mailman can change all your lists at one site, so you need to do
>this on every site you subscribe to.  But it's an option.)

Exactly.  My MUA has a reply-to-list that really only works if there's a
List-Post header.  If you reply to one of my list messages and include me in
the CC, I won't get the list copy so I won't get the List-Post header.  Then
my response back will include you in the CC.  I generally won't clean these
up, since it's *your* fault you're getting a dupe. :)

If you reply-to-list and don't CC me, then the copy I get will be the list
copy, which will have a List-Post header, and I'll also reply-to-list.  No
dupes in sight, just like this one.  SJT, FTW.

-Barry

From barry at python.org  Wed Apr 25 16:38:38 2012
From: barry at python.org (Barry Warsaw)
Date: Wed, 25 Apr 2012 10:38:38 -0400
Subject: [Python-Dev] netiquette on py-dev
In-Reply-To: <CAPTjJmr=0k92FCD6DtfuxGsu7so=qKi63cLd2tiNUMVfBFi_GA@mail.gmail.com>
References: <4F97113B.70601@stoneleaf.us>
	<CAL_0O1_Ywhqc=3PU9zVDj36+qV5SZHRXfCuUndwYfeuWi4q2mw@mail.gmail.com>
	<CAPTjJmr=0k92FCD6DtfuxGsu7so=qKi63cLd2tiNUMVfBFi_GA@mail.gmail.com>
Message-ID: <20120425103838.4769b6af@limelight.wooz.org>

On Apr 25, 2012, at 01:58 PM, Chris Angelico wrote:

>I go the other way: hit Reply, and then replace the author's address
>with the list's. I'd much rather have a Reply List though.
>Unfortunately no decent webmail seems to have it, and I'm still
>looking for a decent non-web-mail client too.

It's a highly religious and platform-dependent thing.  I'll put in a plug for
Claws Mail, which I use and generally find to be excellent, in that its warts
(which they all have) aren't bad enough to make me want to chuck my laptop out
the window.  It does both IMAP and NNTP pretty well, and can call an external
editor for composition.  It also rarely crashes these days. :)

Oh, and to keep things roughly on topic, it embeds Python so you can write
nice little scripts for a variety of actions.  E.g. I have a little Python
script to automatically pick my python.org address for messages to Python
mailing lists.

That's all I'll say on the subject in this mailing list, but I'm happy to
answer other questions off-line.

-Barry

From steve at pearwood.info  Wed Apr 25 19:20:10 2012
From: steve at pearwood.info (Steven D'Aprano)
Date: Thu, 26 Apr 2012 03:20:10 +1000
Subject: [Python-Dev] (time) PEP 418 glossary V2
In-Reply-To: <CAL_0O1-2Kod-6WQvd+iR60XGfnedxuxZCq2TyryKx-8G2bTP7A@mail.gmail.com>
References: <CA+OGgf71sVzVYHBGmCD841KFDZbj7tJbQ+n6Xq2V1AoJYPzWYg@mail.gmail.com>	<CAMpsgwabRAcN-DgJX56kA5z7JsrL=jJF=oWTiCYp1q-vA1hmEg@mail.gmail.com>	<CA+OGgf7u0_Dmv7wbEa8bND4zs0s9HTmGz9ZvuqK71Ndws+Hx5Q@mail.gmail.com>
	<CAL_0O1-2Kod-6WQvd+iR60XGfnedxuxZCq2TyryKx-8G2bTP7A@mail.gmail.com>
Message-ID: <4F98324A.1030208@pearwood.info>

Stephen J. Turnbull wrote:
> On Wed, Apr 25, 2012 at 1:19 AM, Jim Jewett <jimjjewett at gmail.com> wrote:
> 
>> I'm still a little fuzzy on *why* it shouldn't count as a monotonic
>> clock.
> 
> So are the people who say it shouldn't count (unless you're speaking
> of the specific implementation on Unix systems, which can go backward
> if the admin or NTP decides it should be so).

The fact that the clock is not monotonic is a pretty good reason for it not to 
count as monotonic. I don't think there's anything fuzzy about that.


> I think they are in
> general mistaking their use case for a general specification, that's
> all.

I'm sorry, am I missing something here? What use case are you talking about?


> Even Glyph cited "what other people seem to think" in supporting
> the usage where "monotonic" implies "high quality" in some informal
> sense, although he does have a spec for what high quality means, and
> AIUI an API for it in Twisted.

Who are these people who think monotonic is a synonym for "high quality"?

Why should we pander to their confusion at the cost of those who do understand 
the difference between monotonic and high quality?


> I think we should just accept that "monotonic" is in more or less
> common use as a synonym for "high quality", and warn *our* users that
> the implementers of such clocks may be working to a different spec.  I
> think the revised glossary's description of "monotonic" does that
> pretty well.

Do I understand correctly that you think it is acceptable to call something 
monotonic regardless of whether or not it actually is monotonic?

If not, I'm not sure I understand what you are suggesting here.



-- 
Steven


From sandro.tosi at gmail.com  Wed Apr 25 19:21:30 2012
From: sandro.tosi at gmail.com (Sandro Tosi)
Date: Wed, 25 Apr 2012 19:21:30 +0200
Subject: [Python-Dev] cpython (2.7): Issue #14448: mention pytz;
 patch by Andrew Svetlov
In-Reply-To: <CADiSq7dfZPCNFGqiVnV+KCXNKjmMOmnws1b-XkcX1e8UkxG4UA@mail.gmail.com>
References: <E1SMjqT-0001o8-TV@dinsdale.python.org>
	<jn89i4$qm3$1@dough.gmane.org>
	<CAB4XWXzFLp3KU--gAUSixuHgQH6YKxqDMob3M3jrO5M89mq+Sg@mail.gmail.com>
	<CADiSq7dfZPCNFGqiVnV+KCXNKjmMOmnws1b-XkcX1e8UkxG4UA@mail.gmail.com>
Message-ID: <CAB4XWXyFauWQsh-=_OazjMzhgZssSr5BfuNYEBtHo96pjZUg9A@mail.gmail.com>

On Wed, Apr 25, 2012 at 11:55, Nick Coghlan <ncoghlan at gmail.com> wrote:
> On Wed, Apr 25, 2012 at 6:21 PM, Sandro Tosi <sandro.tosi at gmail.com> wrote:
>> On Wed, Apr 25, 2012 at 09:37, Georg Brandl <g.brandl at gmx.net> wrote:
>>> Also, I'm not sure everybody knows what the "Olson database" is, so maybe that
>>> should be explained too.
>>
>> I had considered that, but then I found another reference of "Olson
>> database" in an example right before the seealso note, so I left it as
>> it is. On a second thought, it might be better to clarify what Olson
>> db is, do you think a link (f.e to here:
>> http://www.iana.org/time-zones ) could be enough or (or in addition)
>> also a brief note is needed?
>
> I think another "see also" with a link to that page would be
> appropriate. With maintenance of the database transferred to the IANA,
> I'd also rephrase the reference as the "IANA timezone database (also
> known as the Olson database)"

Ah yes, I like that; what about this change (where the IANA tz db
section is brutally copied from their website):

diff --git a/Doc/library/datetime.rst b/Doc/library/datetime.rst
--- a/Doc/library/datetime.rst
+++ b/Doc/library/datetime.rst
@@ -1524,12 +1524,19 @@
 .. seealso::

    `pytz <http://pypi.python.org/pypi/pytz/>`_
-      The Standard Library has no :class:`tzinfo` instances except for UTC, but
-      it exists a third-party library which brings Olson timezone database to
-      Python: `pytz`.
+      The standard library has no :class:`tzinfo` instances except for UTC, but
+      there exists a third-party library which brings the `IANA timezone
+      database` (also known as the Olson database) to Python: `pytz`.

       `pytz` contains up-to-date information and its usage is recommended.

+   `IANA timezone database <http://www.iana.org/time-zones>`_
+      The Time Zone Database (often called tz or zoneinfo) contains code and
+      data that represent the history of local time for many representative
+      locations around the globe. It is updated periodically to reflect changes
+      made by political bodies to time zone boundaries, UTC offsets, and
+      daylight-saving rules.
+
 .. _strftime-strptime-behavior:
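
As an aside, for anyone who hasn't used pytz before, a minimal usage
sketch (the zone name and date are just examples and not part of the
patch; assumes pytz is installed):

from datetime import datetime
import pytz  # third-party

eastern = pytz.timezone("US/Eastern")
aware = eastern.localize(datetime(2012, 4, 25, 12, 0))
print(aware.isoformat())   # 2012-04-25T12:00:00-04:00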

Cheers,
-- 
Sandro Tosi (aka morph, morpheus, matrixhasu)
My website: http://matrixhasu.altervista.org/
Me at Debian: http://wiki.debian.org/SandroTosi

From g.brandl at gmx.net  Wed Apr 25 20:40:43 2012
From: g.brandl at gmx.net (Georg Brandl)
Date: Wed, 25 Apr 2012 20:40:43 +0200
Subject: [Python-Dev] cpython (2.7): Issue #14448: mention pytz;
	patch by Andrew Svetlov
In-Reply-To: <CAB4XWXyFauWQsh-=_OazjMzhgZssSr5BfuNYEBtHo96pjZUg9A@mail.gmail.com>
References: <E1SMjqT-0001o8-TV@dinsdale.python.org>
	<jn89i4$qm3$1@dough.gmane.org>
	<CAB4XWXzFLp3KU--gAUSixuHgQH6YKxqDMob3M3jrO5M89mq+Sg@mail.gmail.com>
	<CADiSq7dfZPCNFGqiVnV+KCXNKjmMOmnws1b-XkcX1e8UkxG4UA@mail.gmail.com>
	<CAB4XWXyFauWQsh-=_OazjMzhgZssSr5BfuNYEBtHo96pjZUg9A@mail.gmail.com>
Message-ID: <jn9ged$ljm$1@dough.gmane.org>

On 25.04.2012 19:21, Sandro Tosi wrote:
> On Wed, Apr 25, 2012 at 11:55, Nick Coghlan <ncoghlan at gmail.com> wrote:
>> On Wed, Apr 25, 2012 at 6:21 PM, Sandro Tosi <sandro.tosi at gmail.com> wrote:
>>> On Wed, Apr 25, 2012 at 09:37, Georg Brandl <g.brandl at gmx.net> wrote:
>>>> Also, I'm not sure everybody knows what the "Olson database" is, so maybe that
>>>> should be explained too.
>>>
>>> I had considered that, but then I found another reference of "Olson
>>> database" in an example right before the seealso note, so I left it as
>>> it is. On a second thought, it might be better to clarify what Olson
>>> db is, do you think a link (f.e to here:
>>> http://www.iana.org/time-zones ) could be enough or (or in addition)
>>> also a brief note is needed?
>>
>> I think another "see also" with a link to that page would be
>> appropriate. With maintenance of the database transferred to the IANA,
>> I'd also rephrase the reference as the "IANA timezone database (also
>> known as the Olson database)"
> 
> Ah yes, I like that; what about this change (where the IANA tz db
> section is brutally copied from their website):
> 
> diff --git a/Doc/library/datetime.rst b/Doc/library/datetime.rst
> --- a/Doc/library/datetime.rst
> +++ b/Doc/library/datetime.rst
> @@ -1524,12 +1524,19 @@
>  .. seealso::
> 
>     `pytz <http://pypi.python.org/pypi/pytz/>`_
> -      The Standard Library has no :class:`tzinfo` instances except for UTC, but
> -      it exists a third-party library which brings Olson timezone database to
> -      Python: `pytz`.
> +      The standard library has no :class:`tzinfo` instances except for UTC, but
> +      there exists a third-party library which brings the `IANA timezone
> +      database` (also known as the Olson database) to Python: `pytz`.
> 
>        `pytz` contains up-to-date information and its usage is recommended.

BTW, the single backticks don't do anything usable; use *pytz* to make something
emphasized.

> +   `IANA timezone database <http://www.iana.org/time-zones>`_
> +      The Time Zone Database (often called tz or zoneinfo) contains code and
> +      data that represent the history of local time for many representative
> +      locations around the globe. It is updated periodically to reflect changes
> +      made by political bodies to time zone boundaries, UTC offsets, and
> +      daylight-saving rules.
> +

Maybe it's useful to mention that that database is the one used on Linux (is
it on other Unices?) and Windows has its own?

Georg


From terry_tang2005 at yahoo.com  Wed Apr 25 20:41:33 2012
From: terry_tang2005 at yahoo.com (Terry Tang)
Date: Wed, 25 Apr 2012 11:41:33 -0700 (PDT)
Subject: [Python-Dev] Python 2.7.3 shared library, like _socket.pyd,
	cannot be loaded
Message-ID: <1335379293.79623.YahooMailNeo@web125103.mail.ne1.yahoo.com>

Hi There,

I am integrating Python 2.7.3 into our system on Windows. We embedded the Python 2.7.3 interpreter in our system.

The problem we have met is that our extended Python interpreter cannot load "_socket.pyd" when, for example, "import socket" is executed. Here is the error:

Traceback (most recent call last):
? File "t.py", line 1, in <module>
? ? import socket;
? File "C:\trunk1\third_party\python-2.7.3\win32\lib\socket.py", line 47, in <mo
dule>
? ? import _socket
ImportError: DLL load failed: The specified module could not be found.


I wrote a small program, listed below, to manually load "_socket.pyd" from the Python 2.7.3 binary installation on Windows, and got the same failure.

static void TestDllLoad(const char *dllPath)
{
    HINSTANCE socketh = LoadLibraryEx(dllPath, NULL, LOAD_WITH_ALTERED_SEARCH_PATH);
    if (socketh == NULL) {
        fprintf(stderr, "Failed to load shared library: %s\nError: %d\n",
                dllPath, GetLastError());
    } else {
        fprintf(stderr, "Successfully loaded shared library: %s\n", dllPath);
    }
}

int main()
{
    /* The following load succeeds. */
    TestDllLoad("<PathToPython2.3.3>\\DLLs\\_socket.pyd");
    /* The following load fails. */
    TestDllLoad("<PathToPython2.7.3>\\DLLs\\_socket.pyd");
    return 0;
}

I tried MSVC 2008 and a third-party compiler, and got the same result, even after copying "python27.dll" from the Python 2.7.3 installation on Windows into the testing directory.

There is a similar failure reported in http://bugs.python.org/issue4566, but it is marked as fixed and closed.

Does anyone have any idea what the problem might be?

Thanks a lot.

-Terry
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20120425/32555bf4/attachment-0001.html>

From g.brandl at gmx.net  Wed Apr 25 20:44:22 2012
From: g.brandl at gmx.net (Georg Brandl)
Date: Wed, 25 Apr 2012 20:44:22 +0200
Subject: [Python-Dev] Building against system expat
In-Reply-To: <14a3d93a-ffd2-4884-9145-aea852598fbd@zmail15.collab.prod.int.phx2.redhat.com>
References: <e158d0af-3998-4223-8cfe-08aab5ec2c2b@zmail15.collab.prod.int.phx2.redhat.com>
	<14a3d93a-ffd2-4884-9145-aea852598fbd@zmail15.collab.prod.int.phx2.redhat.com>
Message-ID: <jn9gl9$oul$1@dough.gmane.org>

On 25.04.2012 15:42, Bohuslav Kabrda wrote:
> Hi, I'm trying to build Python 3.2.3 against system expat library, that lies
> out of the ordinary directory structure (under /opt). I also have an older
> version of expat library in the system. No matter what shell variables or
> options I pass to configure and make, pyexpat gets linked against the system
> expat, which results in errors during tests:
> 
> pyexpat.cpython-32dmu.so: undefined symbol: XML_SetHashSalt
> 
> anyone has any idea what to pass to configure/make to link pyexpat with the
> other expat?

You'll have to upgrade your expat.  The XML_SetHashSalt is new in 2.1.0 and
makes it possible to avoid an algorithmic complexity attack; Python uses it
in its newest bugfix releases.  See for example <http://bugs.python.org/issue14234>.

cheers,
Georg


From martin at v.loewis.de  Wed Apr 25 20:57:13 2012
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Wed, 25 Apr 2012 20:57:13 +0200
Subject: [Python-Dev] cpython: Implement PEP 412: Key-sharing
 dictionaries (closes #13903)
In-Reply-To: <EFE3877620384242A686D52278B7CCD33C02BB@RKV-IT-EXCH104.ccp.ad.local>
References: <E1SML8L-0002LG-NT@dinsdale.python.org>
	<20120423222218.4015b13e@pitrou.net>
	<20120423215558.092532509E3@webabinitio.net>
	<EFE3877620384242A686D52278B7CCD33BE13F@RKV-IT-EXCH104.ccp.ad.local>
	<20120424194330.Horde.bSQePsL8999PluZC5dJCcCA@webmail.df.eu>
	<EFE3877620384242A686D52278B7CCD33C02BB@RKV-IT-EXCH104.ccp.ad.local>
Message-ID: <4F984909.8020006@v.loewis.de>

>>> Benchmarks should measure memory usage too, of course.  Sadly that is
>>> not possible in standard cPython.
>>
>> It's actually very easy in standard CPython, using sys.getsizeof.
>>
> Yes, you can query each python object about how big it thinks it is.
> What I'm speaking of is more like:
> start_allocs, start_mem = allocator.get_current()
> allocator.reset_limits()
> run_complicated_tests()
>
> end_allocs, end_mem = allocator.get=current()

This is easy in a debug build, using sys.getobjects(). In a release 
build, you can use pympler:

start = pympler.muppy.get_size(pympler.muppy.get_objects())
run_complicated_tests()
end = pympler.muppy.get_size(pympler.muppy.get_objects())
print "delta mem: %d" % (end-start)

Regards,
Martin

From neologix at free.fr  Wed Apr 25 21:03:39 2012
From: neologix at free.fr (=?ISO-8859-1?Q?Charles=2DFran=E7ois_Natali?=)
Date: Wed, 25 Apr 2012 21:03:39 +0200
Subject: [Python-Dev] [help wanted] - IrDA sockets support
Message-ID: <CAH_1eM0JZ-KEsTJB5PeFwUNF2p8mT25VhJ51sdu-E3TUM5on7A@mail.gmail.com>

Hi,

Issue #1522400 (http://bugs.python.org/issue1522400) has a patch
adding IrDA socket support.
It builds under Linux and Windows, however it cannot go any further
because no developer involved in the issue has access to IrDA capable
devices, which makes testing impossible.
So, if you have access to such devices and are interested, feel free
to chime in and help get this merged.

Cheers,

cf

From g.brandl at gmx.net  Wed Apr 25 21:24:33 2012
From: g.brandl at gmx.net (Georg Brandl)
Date: Wed, 25 Apr 2012 21:24:33 +0200
Subject: [Python-Dev] Building against system expat
In-Reply-To: <jn9gl9$oul$1@dough.gmane.org>
References: <e158d0af-3998-4223-8cfe-08aab5ec2c2b@zmail15.collab.prod.int.phx2.redhat.com>
	<14a3d93a-ffd2-4884-9145-aea852598fbd@zmail15.collab.prod.int.phx2.redhat.com>
	<jn9gl9$oul$1@dough.gmane.org>
Message-ID: <jn9j0i$bkt$1@dough.gmane.org>

On 25.04.2012 20:44, Georg Brandl wrote:
> On 25.04.2012 15:42, Bohuslav Kabrda wrote:
>> Hi, I'm trying to build Python 3.2.3 against system expat library, that lies
>> out of the ordinary directory structure (under /opt). I also have an older
>> version of expat library in the system. No matter what shell variables or
>> options I pass to configure and make, pyexpat gets linked against the system
>> expat, which results in errors during tests:
>> 
>> pyexpat.cpython-32dmu.so: undefined symbol: XML_SetHashSalt
>> 
>> anyone has any idea what to pass to configure/make to link pyexpat with the
>> other expat?
> 
> You'll have to upgrade your expat.  The XML_SetHashSalt is new in 2.1.0 and
> makes it possible to avoid an algorithmic complexity attack; Python uses it
> in its newest bugfix releases.  See for example <http://bugs.python.org/issue14234>.

Sorry, I think I misread your request.  Please ignore the reply.

Georg


From barry at python.org  Thu Apr 26 01:54:11 2012
From: barry at python.org (Barry Warsaw)
Date: Wed, 25 Apr 2012 19:54:11 -0400
Subject: [Python-Dev] Python 3 porting
Message-ID: <20120425195411.26dd9c43@limelight.wooz.org>

I want to take this opportunity to make folks aware of several Python 3
porting initiatives and resources.

In Ubuntu 12.10, we are going to be making a big push to target all the
applications and libraries on the desktop CDs to Python 3.  While this is a
goal of Ubuntu, the intent really is to work with the wider Python community
(i.e. *you*!) to help drive more momentum toward Python 3.

We can't do this alone, and we hope you will participate.  While we on Ubuntu
have our own list of priorities, of course we want to push as much of this as
possible upstream so that everyone can benefit, regardless of platform.  We
also want to help spread the word about Python 3, and how easy it can be to
support it.

One of the best ways to get involved is to join the 'python-porting' mailing
list:

    http://mail.python.org/mailman/listinfo/python-porting

which is *the* forum for discussing issues, getting help, and coordinating
with others on Python 3 ports of your favorite upstream projects.

I've also resurrected the #python3 IRC channel on Freenode, for those of you
who want to provide or receive more real-time help.

Web resources for porters include:

 * http://getpython3.com/
   General Python 3 resources, forkable on github

 * http://python3porting.com/
   Lennart Regebro's excellent in-depth porting guide

 * https://wiki.ubuntu.com/Python/3
   My quick guide for porting

 * https://wiki.ubuntu.com/Python/FoundationsQPythonVersions
   Detailed plans for Python 3 on Ubuntu 12.10

 * http://tinyurl.com/6vm3egu
   My recent blog post on Ubuntu's plans for Python 3

 * http://tinyurl.com/7dsyywo
   Ubuntu's top priorities for porting, as a shared Google doc spreadsheet

Many of these pages have additional links and resources for porting, and can
help you find packages that need resources in getting to Python 3.

At the Ubuntu Developer Summit in Oakland, California, May 7-11, 2012, we'll
also be holding some sessions on Python 3, so if you're in the area, please
come by.

    http://uds.ubuntu.com/

Cheers,
-Barry
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 836 bytes
Desc: not available
URL: <http://mail.python.org/pipermail/python-dev/attachments/20120425/5a50959c/attachment.pgp>

From s.brunthaler at uci.edu  Thu Apr 26 01:56:21 2012
From: s.brunthaler at uci.edu (stefan brunthaler)
Date: Wed, 25 Apr 2012 16:56:21 -0700
Subject: [Python-Dev] Assigning copyright...
Message-ID: <CA+j1x0=c_uF2fD3SAVHeLuBOGcweya0pVhrZhyS7fNhNaj0f0g@mail.gmail.com>

Hi,

I have had only a little time to spend on my open-sourcing efforts, which is
why I could not get back to python-dev any earlier...

Yesterday I forward-ported my patches to revision 76549
(13c30fe3f427), which only took 25 minutes or so (primarily due to the
small changes necessary to Python itself and the stability of those
parts). Thanks to a colleague of mine (Per Larsen) I reimplemented
some of the more ugly parts of the code generator, too.
Guido's answer from the last thread was that I should duly assign the
copyright to the PSF. Unfortunately, I don't really see any other place
than the LICENSE and README files in the Python distribution. Since my
patch basically just adds another subdirectory ("cgen") to the Python
top-level directory, I am not sure if I need to supply other
information to make my code officially PSF compatible.

Am I missing something obvious?

Thanks,
--stefan

From ncoghlan at gmail.com  Thu Apr 26 03:06:35 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 26 Apr 2012 11:06:35 +1000
Subject: [Python-Dev] cpython (2.7): Issue #14448: mention pytz;
 patch by Andrew Svetlov
In-Reply-To: <jn9ged$ljm$1@dough.gmane.org>
References: <E1SMjqT-0001o8-TV@dinsdale.python.org>
	<jn89i4$qm3$1@dough.gmane.org>
	<CAB4XWXzFLp3KU--gAUSixuHgQH6YKxqDMob3M3jrO5M89mq+Sg@mail.gmail.com>
	<CADiSq7dfZPCNFGqiVnV+KCXNKjmMOmnws1b-XkcX1e8UkxG4UA@mail.gmail.com>
	<CAB4XWXyFauWQsh-=_OazjMzhgZssSr5BfuNYEBtHo96pjZUg9A@mail.gmail.com>
	<jn9ged$ljm$1@dough.gmane.org>
Message-ID: <CADiSq7cgLmJWUU4jqVOouKVLNBDV3ejJHN0mju2muxBOau-hWg@mail.gmail.com>

On Thu, Apr 26, 2012 at 4:40 AM, Georg Brandl <g.brandl at gmx.net> wrote:
> Maybe it's useful to mention that that database is the one used on Linux (is
> it on other Unices?) and Windows has its own?

pytz always uses the Olson/IANA database. I don't think we need to
confuse matters further by mentioning the fact that Microsoft invented
their own system without worrying about what anyone else was doing.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From ben+python at benfinney.id.au  Thu Apr 26 04:43:20 2012
From: ben+python at benfinney.id.au (Ben Finney)
Date: Thu, 26 Apr 2012 12:43:20 +1000
Subject: [Python-Dev] netiquette on py-dev
References: <4F97113B.70601@stoneleaf.us>
	<CAL_0O1_Ywhqc=3PU9zVDj36+qV5SZHRXfCuUndwYfeuWi4q2mw@mail.gmail.com>
Message-ID: <87aa1zb4c7.fsf@benfinney.id.au>

"Stephen J. Turnbull" <stephen at xemacs.org> writes:

> I don't know of any webmail implementations that provide
> reply-to-list, so a lot of us end up using reply-to-all.

Right, that puts the responsibility in the right place: the webmail
software vendor needs to add a reply-to-list command, as has been
implemented in many clients for many years and supported explicitly by
standard fields in every message header.

> So in most cases I suppose that the duplicate going to the author is
> just an issue of "energy conservation" on the part of the responder.

I agree that's likely the usual reason. It's saving short-term effort by
passing that effort on to others though, and to that extent is
inconsiderate of other people.

Better is for webmail users to pressure the vendor of the webmail
software to add the "Reply to list" feature and make it clear this is
the recommended way to reply on a mailing list.

> Note that people who are really annoyed by the duplicates can set
> their Mailman accounts to no-dupes, and Mailman won't send the post to
> that person.

Those of us who don't have a Mailman account don't have that option, as
you noted. I'm not participating in this forum by email at all, and
don't expect *any* copies of its messages in my email.

The problem is with the missing feature of the webmail program, and the
users of that program need to agitate for getting it fixed.


Nick Coghlan <ncoghlan at gmail.com> writes:

> So yeah, the basic problem is mail clients that don't offer a
> "Reply-List" option, with the Gmail web client being a notable
> offender.

It doesn't even need to be extra effort for the user. The "Reply to
author" command can change to a "Reply to list" command when the mailing
list fields are present. That's one possible solution; but anything that
gets more people to "Reply to list" when appropriate is acceptable to me.

-- 
 \       ?Don't you try to outweird me, I get stranger things than you |
  `\          free with my breakfast cereal.? ?Zaphod Beeblebrox, _The |
_o__)            Restaurant At The End Of The Universe_, Douglas Adams |
Ben Finney


From merwok at netwok.org  Thu Apr 26 05:11:13 2012
From: merwok at netwok.org (=?UTF-8?B?w4lyaWMgQXJhdWpv?=)
Date: Wed, 25 Apr 2012 23:11:13 -0400
Subject: [Python-Dev] Assigning copyright...
In-Reply-To: <CA+j1x0=c_uF2fD3SAVHeLuBOGcweya0pVhrZhyS7fNhNaj0f0g@mail.gmail.com>
References: <CA+j1x0=c_uF2fD3SAVHeLuBOGcweya0pVhrZhyS7fNhNaj0f0g@mail.gmail.com>
Message-ID: <4F98BCD1.80300@netwok.org>

Hi Stefan,

The PSF does not require copyright assignment (ugh!), only a contributor
agreement.  http://www.python.org/psf/contrib/contrib-form/ should give
you all you need.

Regards

From bkabrda at redhat.com  Thu Apr 26 07:24:58 2012
From: bkabrda at redhat.com (Bohuslav Kabrda)
Date: Thu, 26 Apr 2012 01:24:58 -0400 (EDT)
Subject: [Python-Dev] Building against system expat
In-Reply-To: <jn9gl9$oul$1@dough.gmane.org>
Message-ID: <51cc2df0-4720-482d-9bc0-a3d53c26d7e8@zmail15.collab.prod.int.phx2.redhat.com>

----- Original Message -----
> On 25.04.2012 15:42, Bohuslav Kabrda wrote:
> > Hi, I'm trying to build Python 3.2.3 against system expat library,
> > that lies
> > out of the ordinary directory structure (under /opt). I also have
> > an older
> > version of expat library in the system. No matter what shell
> > variables or
> > options I pass to configure and make, pyexpat gets linked against
> > the system
> > expat, which results in errors during tests:
> > 
> > pyexpat.cpython-32dmu.so: undefined symbol: XML_SetHashSalt
> > 
> > anyone has any idea what to pass to configure/make to link pyexpat
> > with the
> > other expat?
> 
> You'll have to upgrade your expat.  The XML_SetHashSalt is new in
> 2.1.0 and
> makes it possible to avoid an algorithmic complexity attack; Python
> uses it
> in its newest bugfix releases.  See for example
> <http://bugs.python.org/issue14234>.
> 
> cheers,
> Georg
> 

Thanks, actually I found an error in my build script that set the LD_LIBRARY_PATH wrongly, so only the standard .so file was found (that didn't have this symbol), and not the one under /opt.

So, my mistake,
thanks everyone :)

-- 
Regards,
Bohuslav "Slavek" Kabrda.

From ericsnowcurrently at gmail.com  Thu Apr 26 07:31:50 2012
From: ericsnowcurrently at gmail.com (Eric Snow)
Date: Wed, 25 Apr 2012 23:31:50 -0600
Subject: [Python-Dev] sys.implementation
In-Reply-To: <CALFfu7B4GpCNgyxLUmtji1LLKLgFPPvR2Zd3e2V6UW4_0HoR7g@mail.gmail.com>
References: <CALFfu7DYyZMUp40MDR9-vhpOkPvr=cwt5EmMHEGTrmix_kZbYg@mail.gmail.com>
	<CALFfu7B4GpCNgyxLUmtji1LLKLgFPPvR2Zd3e2V6UW4_0HoR7g@mail.gmail.com>
Message-ID: <CALFfu7BRQcKurhxzkmO-NLDoGesvFuW4kozYBOv2ovEyQm5JVA@mail.gmail.com>

The proposal of adding sys.implementation has come up a couple times
over the last few years. [1][2]  While the reaction has been
overwhelmingly positive, nothing has come of it.  I've created a
tracker issue and a patch:

    http://bugs.python.org/issue14673

The patch adds a struct sequence that holds ("name" => "CPython",
"version" => sys.version_info).  If later needs dictate more fields,
we can cross that bridge then.
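
To illustrate, a rough sketch of how consuming code might use it (the
attribute and field names follow the description above; the getattr
fallback is only there for interpreters without the patch):

import sys

impl = getattr(sys, "implementation", None)
if impl is not None:
    print("%s %s" % (impl.name, ".".join(map(str, impl.version[:3]))))
else:
    print("sys.implementation is not available on this interpreter")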

Are there any objections?  Considering the positive reaction and the
scope of the addition, does this need a PEP?

-eric

[1] http://mail.python.org/pipermail/python-dev/2009-October/092893.html
[2] http://mail.python.org/pipermail/python-ideas/2012-April/014878.html

From mark at hotpy.org  Thu Apr 26 08:26:35 2012
From: mark at hotpy.org (Mark Shannon)
Date: Thu, 26 Apr 2012 07:26:35 +0100
Subject: [Python-Dev] Assigning copyright...
In-Reply-To: <CA+j1x0=c_uF2fD3SAVHeLuBOGcweya0pVhrZhyS7fNhNaj0f0g@mail.gmail.com>
References: <CA+j1x0=c_uF2fD3SAVHeLuBOGcweya0pVhrZhyS7fNhNaj0f0g@mail.gmail.com>
Message-ID: <4F98EA9B.5060906@hotpy.org>

stefan brunthaler wrote:
> Hi,
> 
> I only had little time to spend for my open sourcing efforts, which is
> why I could not get back to python-dev any time earlier...
> 
> Yesterday I forward-ported my patches to revision 76549
> (13c30fe3f427), which only took 25mins or so (primarly due to the
> small changes necessary to Python itself and the stability of that
> parts.) Thanks to a colleague of mine (Per Larsen) I reimplemented
> some of the more ugly parts of the code generator, too.
> Guido's answer from the last thread was that I should duly assign the
> copyright to the PSF. Unfortunatly, I don't really see any other part
> than the LICENSE and README files in the Python distribution. Since my
> patch basically just adds another subdirectory ("cgen") to the Python
> top-level directory, I am not sure if I need to supply other
> information to make my code officially PSF compatible.
> 
> Am I missing something obvious?

A URL for the code repository (with an open-source license),
so code can be reviewed.
It is hard to review and update a giant patch.

Cheers,
Mark.

From taschini at ieee.org  Thu Apr 26 11:39:33 2012
From: taschini at ieee.org (Stefano Taschini)
Date: Thu, 26 Apr 2012 11:39:33 +0200
Subject: [Python-Dev] Is it safe to assume that Python 2.7 is always built
	with unicode support?
Message-ID: <CAPdNJuAVRXttm9-n0ENd2Zmkub8nvso+U3OUovVx=Sffh7RFXA@mail.gmail.com>

Hello every one,

I'm looking into issue 1065986 [1], and in order to submit a patch I need
to know whether I have to take into account the eventuality that CPython 2.7
might be built without unicode support.

As far as I can see, it is no longer possible to configure CPython 2.7 with
--disable-unicode, as a consequence of merge 59157:62babf456005 on 27
Feb 2010, which merged in commit 59153:8b2048bca33c from the same day.

Since I could not find a discussion on the topic leading explicitly to
this decision, I was wondering whether this is in fact an unintended
consequence of the check introduced in 59153:8b2048bca33c, which excludes
"no" from the acceptable values for configuring unicode support.

In conclusion, can you guys confirm that I don't have to worry that cpython
2.7 could be built with no unicode support? Or not?

If so, shouldn't it be properly documented, at least in Misc/NEWS?
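
(For concreteness, by "taking it into account" I mean guarding the code
with the usual idiom, something like the sketch below; the flag name is
arbitrary:)

try:
    unicode
except NameError:        # built with --disable-unicode
    HAVE_UNICODE = False
else:
    HAVE_UNICODE = True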

Bye,
Stefano

[1] http://bugs.python.org/issue1065986
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20120426/fe6ff5e7/attachment-0001.html>

From kristjan at ccpgames.com  Thu Apr 26 13:41:28 2012
From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=)
Date: Thu, 26 Apr 2012 11:41:28 +0000
Subject: [Python-Dev] cpython: Implement PEP 412: Key-sharing
 dictionaries (closes #13903)
In-Reply-To: <CADiSq7eBawDjjcN9x_k8_ggori4QCOn1N8-y50X-6OZshhpOEA@mail.gmail.com>
References: <E1SML8L-0002LG-NT@dinsdale.python.org>
	<20120423222218.4015b13e@pitrou.net>
	<20120423215558.092532509E3@webabinitio.net>
	<EFE3877620384242A686D52278B7CCD33BE13F@RKV-IT-EXCH104.ccp.ad.local>
	<CADiSq7eBawDjjcN9x_k8_ggori4QCOn1N8-y50X-6OZshhpOEA@mail.gmail.com>
Message-ID: <EFE3877620384242A686D52278B7CCD33C19ED@RKV-IT-EXCH104.ccp.ad.local>

Thanks.
Meanwhile, I blogged about tuning the dict implementation.
Preliminary testing seems to indicate that tuning it to conserve memory saves us 2Mb of wasted slots on the login screen.  No small thing on a PS3 system.
http://blog.ccpgames.com/kristjan/2012/04/25/optimizing-the-dict/
I wonder if we shouldn't make those factors into #defines as I did in my 2.7 modifications, and even provide a "memory saving" predefine for embedders.
(Believe it or not, sometimes python performance is not an issue at all, but memory usage is.)

K

> -----Original Message-----
> From: Nick Coghlan [mailto:ncoghlan at gmail.com]
> Sent: 24. apríl 2012 11:42
> To: Kristján Valur Jónsson
> Cc: R. David Murray; Antoine Pitrou; python-dev at python.org
> Subject: Re: [Python-Dev] cpython: Implement PEP 412: Key-sharing
> dictionaries (closes #13903)
> 
> On Tue, Apr 24, 2012 at 8:24 PM, Kristján Valur Jónsson
> <kristjan at ccpgames.com> wrote:
> > Perhaps I should write about this on my blog.  Updating the memory
> > allocation macro layer in cPython for embedding is something I'd be
> > inclined to contribute, but it will involve a large amount of
> > bikeshedding, I'm sure :)
> 
> Trawl the tracker before you do - I'm pretty sure there's a patch (from the
> Nokia S60 port, IIRC) that adds a couple of macro definitions so that platform
> ports and embedding applications can intercept malloc() and free() calls.
> 
> It would be way out of date by now, but I seem to recall thinking it looked
> reasonable at a quick glance.
> 
> Cheers,
> Nick.
> 
> --
> Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia



From kristjan at ccpgames.com  Thu Apr 26 15:26:47 2012
From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=)
Date: Thu, 26 Apr 2012 13:26:47 +0000
Subject: [Python-Dev] cpython: Implement PEP 412: Key-sharing
 dictionaries (closes #13903)
In-Reply-To: <4F984909.8020006@v.loewis.de>
References: <E1SML8L-0002LG-NT@dinsdale.python.org>
	<20120423222218.4015b13e@pitrou.net>
	<20120423215558.092532509E3@webabinitio.net>
	<EFE3877620384242A686D52278B7CCD33BE13F@RKV-IT-EXCH104.ccp.ad.local>
	<20120424194330.Horde.bSQePsL8999PluZC5dJCcCA@webmail.df.eu>
	<EFE3877620384242A686D52278B7CCD33C02BB@RKV-IT-EXCH104.ccp.ad.local>
	<4F984909.8020006@v.loewis.de>
Message-ID: <EFE3877620384242A686D52278B7CCD33C1BAA@RKV-IT-EXCH104.ccp.ad.local>



> -----Original Message-----
> From: "Martin v. L?wis" [mailto:martin at v.loewis.de]
> 
> This is easy in a debug build, using sys.getobjects(). In a release build, you can
> use pympler:
> 
> start = pympler.muppy.get_size(pympler.muppy.get_objects())
> run_complicated_tests()
> end = pympler.muppy.get_size(pympler.muppy.get_objects())
> print "delta mem: %d" % (end-start)

Thanks for pointing out pympler to me.  Sounds like fun, I'll try it out.  
I should point out that gc.get_objects() also works, if you don't care about stuff like ints and floats.
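
For example, a crude delta along those lines (a rough sketch in Python 2
syntax; run_complicated_tests and the retained list are just stand-ins,
and only gc-tracked objects are counted):

import gc, sys

retained = []

def run_complicated_tests():
    # stand-in workload that retains some objects so the delta is visible
    retained.extend({"x": i} for i in range(10000))

def tracked_size():
    # only counts objects the cycle collector tracks (no ints, floats, ...)
    return sum(sys.getsizeof(o) for o in gc.get_objects())

start = tracked_size()
run_complicated_tests()
end = tracked_size()
print "delta mem (gc-tracked objects only): %d bytes" % (end - start)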

Another reason why I like the runtime stats we have built in, however, is that they provide no query overhead.
You can query the current resource usage as often as you like and this is important in a running app.  We log python memory usage every second or so.

Cheers,

K


From martin at v.loewis.de  Thu Apr 26 16:01:42 2012
From: martin at v.loewis.de (martin at v.loewis.de)
Date: Thu, 26 Apr 2012 16:01:42 +0200
Subject: [Python-Dev] Is it safe to assume that Python 2.7 is always
 built with unicode support?
In-Reply-To: <CAPdNJuAVRXttm9-n0ENd2Zmkub8nvso+U3OUovVx=Sffh7RFXA@mail.gmail.com>
References: <CAPdNJuAVRXttm9-n0ENd2Zmkub8nvso+U3OUovVx=Sffh7RFXA@mail.gmail.com>
Message-ID: <20120426160142.Horde.upMNK9jz9kRPmVVGDU3FVvA@webmail.df.eu>

> I'm looking into issue 1065986 [1], and in order to submit a patch I need
> to know whether I have to take into account the eventuality that cpyhon 2.7
> be built without unicode support.

It's intended (at least, it is *my* intention) that Python 2.7 can be built
without Unicode support, and it's a bug if that is not possible anymore.
Certain embedded configurations might want that.

That doesn't mean that the bug needs to be fixed; this can be deferred until
somebody actually requests that the bug be fixed, or better, until somebody
contributes a patch to do so.

However, it *does* mean that we shouldn't further break the feature, at least
not knowingly.

OTOH, it's clear that certain functionality cannot work if Unicode is  
disabled,
so it may be acceptable if pydoc breaks in such a configuration.

Regards,
Martin



From barry at python.org  Thu Apr 26 16:31:50 2012
From: barry at python.org (Barry Warsaw)
Date: Thu, 26 Apr 2012 10:31:50 -0400
Subject: [Python-Dev] sys.implementation
In-Reply-To: <CALFfu7BRQcKurhxzkmO-NLDoGesvFuW4kozYBOv2ovEyQm5JVA@mail.gmail.com>
References: <CALFfu7DYyZMUp40MDR9-vhpOkPvr=cwt5EmMHEGTrmix_kZbYg@mail.gmail.com>
	<CALFfu7B4GpCNgyxLUmtji1LLKLgFPPvR2Zd3e2V6UW4_0HoR7g@mail.gmail.com>
	<CALFfu7BRQcKurhxzkmO-NLDoGesvFuW4kozYBOv2ovEyQm5JVA@mail.gmail.com>
Message-ID: <20120426103150.4898a678@limelight.wooz.org>

On Apr 25, 2012, at 11:31 PM, Eric Snow wrote:

>The proposal of adding sys.implementation has come up a couple times
>over the last few years. [1][2]  While the reaction has been
>overwhelmingly positive, nothing has come of it.  I've created a
>tracker issue and a patch:
>
>    http://bugs.python.org/issue14673
>
>The patch adds a struct sequence that holds ("name" => "CPython",
>"version" => sys.version_info).  If later needs dictate more fields,
>we can cross that bridge then.
>
>Are there any objections?  Considering the positive reaction and the
>scope of the addition, does this need a PEP?

It's somewhat of a corner case, but I think a PEP couldn't hurt.  The
rationale section would be useful, at least.

-Barry

From benjamin at python.org  Thu Apr 26 17:00:15 2012
From: benjamin at python.org (Benjamin Peterson)
Date: Thu, 26 Apr 2012 11:00:15 -0400
Subject: [Python-Dev] [Python-checkins] cpython: Close #10142: Support
	for SEEK_HOLE/SEEK_DATA
In-Reply-To: <E1SNPrG-0006nm-GQ@dinsdale.python.org>
References: <E1SNPrG-0006nm-GQ@dinsdale.python.org>
Message-ID: <CAPZV6o86aTcVQJw9LatGO+moZHaHv_9ABrEJj6rar_c-GXinNA@mail.gmail.com>

2012/4/26 jesus.cea <python-checkins at python.org>:
> http://hg.python.org/cpython/rev/86dc014cdd74
> changeset:   76570:86dc014cdd74
> user:        Jesus Cea <jcea at jcea.es>
> date:        Thu Apr 26 16:39:35 2012 +0200
> summary:
>  Close #10142: Support for SEEK_HOLE/SEEK_DATA
>
> files:
>  Doc/library/io.rst       |   5 +++++
>  Doc/library/os.rst       |   4 ++++
>  Lib/_pyio.py             |  12 +++---------
>  Lib/os.py                |   1 +
>  Lib/test/test_posix.py   |  20 ++++++++++++++++++++
>  Misc/NEWS                |   2 ++
>  Modules/_io/bufferedio.c |  21 ++++++++++++++++++---
>  Modules/posixmodule.c    |   7 +++++++
>  8 files changed, 60 insertions(+), 12 deletions(-)
>
>
> diff --git a/Doc/library/io.rst b/Doc/library/io.rst
> --- a/Doc/library/io.rst
> +++ b/Doc/library/io.rst
> @@ -291,6 +291,11 @@
>       .. versionadded:: 3.1
>          The ``SEEK_*`` constants.
>
> +      .. versionadded:: 3.3
> +         Some operating systems could support additional values, like
> +         :data:`os.SEEK_HOLE` or :data:`os.SEEK_DATA`. The valid values
> +         for a file could depend on it being open in text or binary mode.
> +

Why are they only listed in "os" and not "io"?

>    .. method:: seekable()
>
>       Return ``True`` if the stream supports random access.  If ``False``,
> diff --git a/Doc/library/os.rst b/Doc/library/os.rst
> --- a/Doc/library/os.rst
> +++ b/Doc/library/os.rst
> @@ -992,6 +992,10 @@
>    Parameters to the :func:`lseek` function. Their values are 0, 1, and 2,
>    respectively. Availability: Windows, Unix.
>
> +   .. versionadded:: 3.3
> +      Some operating systems could support additional values, like

"Some operating systems may support" is better. (They applies to other
parts in the docs, too.)

> +      :data:`os.SEEK_HOLE` or :data:`os.SEEK_DATA`.
> +

Since we're explicitly listing which ones we support, it would be nice
to explain what they do.
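
For instance, something along the lines of this sketch could go with it
(it assumes a Unix platform and filesystem that actually implement them,
and a pre-existing file "sparse.bin", which is made up):

import os

fd = os.open("sparse.bin", os.O_RDONLY)
try:
    data = os.lseek(fd, 0, os.SEEK_DATA)   # offset of the next data region
    hole = os.lseek(fd, 0, os.SEEK_HOLE)   # offset of the next hole
    print("first data at", data, "- first hole at", hole)
finally:
    os.close(fd)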



-- 
Regards,
Benjamin

From taschini at ieee.org  Thu Apr 26 17:07:46 2012
From: taschini at ieee.org (Stefano Taschini)
Date: Thu, 26 Apr 2012 17:07:46 +0200
Subject: [Python-Dev] Is it safe to assume that Python 2.7 is always
 built with unicode support?
In-Reply-To: <20120426160142.Horde.upMNK9jz9kRPmVVGDU3FVvA@webmail.df.eu>
References: <CAPdNJuAVRXttm9-n0ENd2Zmkub8nvso+U3OUovVx=Sffh7RFXA@mail.gmail.com>
	<20120426160142.Horde.upMNK9jz9kRPmVVGDU3FVvA@webmail.df.eu>
Message-ID: <CAPdNJuBU9gCU5tuMJT8yP9gjkgD2QhJDgptOZyx_L_fcEykR_w@mail.gmail.com>

Understood.

May I suggest that http://bugs.python.org/issue8767 be reopened, to make
things clear?

    Stefano


On 26 April 2012 16:01, <martin at v.loewis.de> wrote:

> I'm looking into issue 1065986 [1], and in order to submit a patch I need
>> to know whether I have to take into account the eventuality that cpyhon
>> 2.7
>> be built without unicode support.
>>
>
> It's intended (at least, it is *my* intention) that Python 2.7 can be built
> without Unicode support, and it's a bug if that is not possible anymore.
> Certain embedded configurations might want that.
>
> That doesn't mean that the bug needs to be fixed; this can be deferred
> until
> somebody actually requests that bug being fixed, or better, until somebody
> contributes a patch to do so.
>
> However, it *does* mean that we shouldn't further break the feature, at
> least
> not knowingly.
>
> OTOH, it's clear that certain functionality cannot work if Unicode is
> disabled,
> so it may be acceptable if pydoc breaks in such a configuration.
>
> Regards,
> Martin
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20120426/e0dc660c/attachment.html>

From rdmurray at bitdance.com  Thu Apr 26 18:05:28 2012
From: rdmurray at bitdance.com (R. David Murray)
Date: Thu, 26 Apr 2012 12:05:28 -0400
Subject: [Python-Dev] Is it safe to assume that Python 2.7 is always
	built with unicode support?
In-Reply-To: <CAPdNJuBU9gCU5tuMJT8yP9gjkgD2QhJDgptOZyx_L_fcEykR_w@mail.gmail.com>
References: <CAPdNJuAVRXttm9-n0ENd2Zmkub8nvso+U3OUovVx=Sffh7RFXA@mail.gmail.com>
	<20120426160142.Horde.upMNK9jz9kRPmVVGDU3FVvA@webmail.df.eu>
	<CAPdNJuBU9gCU5tuMJT8yP9gjkgD2QhJDgptOZyx_L_fcEykR_w@mail.gmail.com>
Message-ID: <20120426160529.24C9C250631@webabinitio.net>

On Thu, 26 Apr 2012 17:07:46 +0200, Stefano Taschini <taschini at ieee.org> wrote:
> May I suggest that http://bugs.python.org/issue8767 be reopened, to make
> things clear?

Done.

--David

PS: we prefer no top-posting on this list.  It makes it far easier
to retain just enough context to make a message stand on its own
when properly edited.

From ericsnowcurrently at gmail.com  Thu Apr 26 18:21:34 2012
From: ericsnowcurrently at gmail.com (Eric Snow)
Date: Thu, 26 Apr 2012 10:21:34 -0600
Subject: [Python-Dev] sys.implementation
In-Reply-To: <20120426103150.4898a678@limelight.wooz.org>
References: <CALFfu7DYyZMUp40MDR9-vhpOkPvr=cwt5EmMHEGTrmix_kZbYg@mail.gmail.com>
	<CALFfu7B4GpCNgyxLUmtji1LLKLgFPPvR2Zd3e2V6UW4_0HoR7g@mail.gmail.com>
	<CALFfu7BRQcKurhxzkmO-NLDoGesvFuW4kozYBOv2ovEyQm5JVA@mail.gmail.com>
	<20120426103150.4898a678@limelight.wooz.org>
Message-ID: <CALFfu7B+E+o2r=GbN14A=i8_Qrav_T5aOALdsOpSD88yBwA2GQ@mail.gmail.com>

On Thu, Apr 26, 2012 at 8:31 AM, Barry Warsaw <barry at python.org> wrote:
> On Apr 25, 2012, at 11:31 PM, Eric Snow wrote:
>>Are there any objections?  Considering the positive reaction and the
>>scope of the addition, does this need a PEP?
>
> It's somewhat of a corner case, but I think a PEP couldn't hurt.  The
> rationale section would be useful, at least.
>
> -Barry

Yeah, I'm finding little bits and pieces that would be nice to have
recorded in one place.  I'll get something up in the next couple days.

-eric

From s.brunthaler at uci.edu  Thu Apr 26 20:33:16 2012
From: s.brunthaler at uci.edu (stefan brunthaler)
Date: Thu, 26 Apr 2012 11:33:16 -0700
Subject: [Python-Dev] Assigning copyright...
In-Reply-To: <4F98EA9B.5060906@hotpy.org>
References: <CA+j1x0=c_uF2fD3SAVHeLuBOGcweya0pVhrZhyS7fNhNaj0f0g@mail.gmail.com>
	<4F98EA9B.5060906@hotpy.org>
Message-ID: <CA+j1x0n0wc6r6QgVgEPH10FZfBGaBYi3ezbBeDQnZ4FO+2Q=7g@mail.gmail.com>

Hello Mark,

> A URL for the code repository (with an open-source license),
> so code can be reviewed.
> It is hard to review and update a giant patch.

OK, I took Nick's advice to heart and created a fork from the official
cpython mirror on bitbucket. You can view the patched code (branch:
inca-only) at the following URL:
https://bitbucket.org/sbrunthaler/cpython-inline-caching

Since it is a fork, it contains the usual LICENSE from Python.

Regarding Eric's hint: It seems that this agreement needs to be signed
and mailed. Can I sign/scan and email it to somebody? (Or should I
wait until there is a decision regarding a potential integration?) The
way I understood Guido's last message, it is best to use the Apache 2
license without retaining my own copyright. I am perfectly fine with
that but am not sure if using the fork with sub-directories including
the official LICENSE takes care of that. Obviously, I don't have too
much experience in this area, so if I am missing something blatantly
obvious, I apologize beforehand...

Best,
--stefan

From martin at v.loewis.de  Thu Apr 26 21:07:35 2012
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Thu, 26 Apr 2012 21:07:35 +0200
Subject: [Python-Dev] Assigning copyright...
In-Reply-To: <CA+j1x0n0wc6r6QgVgEPH10FZfBGaBYi3ezbBeDQnZ4FO+2Q=7g@mail.gmail.com>
References: <CA+j1x0=c_uF2fD3SAVHeLuBOGcweya0pVhrZhyS7fNhNaj0f0g@mail.gmail.com>
	<4F98EA9B.5060906@hotpy.org>
	<CA+j1x0n0wc6r6QgVgEPH10FZfBGaBYi3ezbBeDQnZ4FO+2Q=7g@mail.gmail.com>
Message-ID: <4F999CF7.6020304@v.loewis.de>

> Regarding Eric's hint: It seems that this agreement needs to be signed
> and mailed. Can I sign/scan and email it to somebody?

Yes, see

http://www.python.org/psf/contrib/

Regards,
Martin

From vinay_sajip at yahoo.co.uk  Thu Apr 26 21:10:48 2012
From: vinay_sajip at yahoo.co.uk (Vinay Sajip)
Date: Thu, 26 Apr 2012 19:10:48 +0000 (UTC)
Subject: [Python-Dev] Changes in html.parser may cause breakage in client
	code
Message-ID: <loom.20120426T205136-611@post.gmane.org>

Following recent changes in html.parser, the Python 3 port of Django I'm working
on has started failing while parsing HTML.

The reason appears to be that Django uses some module-level data in html.parser,
for example tagfind, which is a regular expression pattern. This has changed
recently (Ezio changed it in ba4baaddac8d).

Now tagfind (and other such patterns) are not marked as private (though not
documented), but should they be? The following script (tagfind.py):

    import html.parser as Parser

    data = '<select name="stuff">'

    m = Parser.tagfind.match(data, 1)
    print('%r -> %r' % (Parser.tagfind.pattern, data[1:m.end()]))

gives different results on 3.2 and 3.3:

    $ python3.2 tagfind.py
    '[a-zA-Z][-.a-zA-Z0-9:_]*' -> 'select'
    $ python3.3 tagfind.py
    '([a-zA-Z][-.a-zA-Z0-9:_]*)(?:\\s|/(?!>))*' -> 'select '

The trailing space later causes a mismatch with the end tag, and leads to the
errors. Django's use of the tagfind pattern is in a subclass of HTMLParser, in
an overridden parse_starttag method.

Do we need to indicate more strongly that data like tagfind are private? Or has
the change introduced inadvertent breakage, requiring a fix in Python?

Regards,

Vinay Sajip


From guido at python.org  Thu Apr 26 21:21:49 2012
From: guido at python.org (Guido van Rossum)
Date: Thu, 26 Apr 2012 12:21:49 -0700
Subject: [Python-Dev] Changes in html.parser may cause breakage in
	client code
In-Reply-To: <loom.20120426T205136-611@post.gmane.org>
References: <loom.20120426T205136-611@post.gmane.org>
Message-ID: <CAP7+vJKk=8N7MmtrGFSeWMDfnJfx08+cwnE50zvTJwJ-hYSiBA@mail.gmail.com>

On Thu, Apr 26, 2012 at 12:10 PM, Vinay Sajip <vinay_sajip at yahoo.co.uk> wrote:
> Following recent changes in html.parser, the Python 3 port of Django I'm working
> on has started failing while parsing HTML.
>
> The reason appears to be that Django uses some module-level data in html.parser,
> for example tagfind, which is a regular expression pattern. This has changed
> recently (Ezio changed it in ba4baaddac8d).
>
> Now tagfind (and other such patterns) are not marked as private (though not
> documented), but should they be? The following script (tagfind.py):
>
>     import html.parser as Parser
>
>     data = '<select name="stuff">'
>
>     m = Parser.tagfind.match(data, 1)
>     print('%r -> %r' % (Parser.tagfind.pattern, data[1:m.end()]))
>
> gives different results on 3.2 and 3.3:
>
>     $ python3.2 tagfind.py
>     '[a-zA-Z][-.a-zA-Z0-9:_]*' -> 'select'
>     $ python3.3 tagfind.py
>     '([a-zA-Z][-.a-zA-Z0-9:_]*)(?:\\s|/(?!>))*' -> 'select '
>
> The trailing space later causes a mismatch with the end tag, and leads to the
> errors. Django's use of the tagfind pattern is in a subclass of HTMLParser, in
> an overridden parse_startag method.
>
> Do we need to indicate more strongly that data like tagfind are private? Or has
> the change introduced inadvertent breakage, requiring a fix in Python?

I think both. Looks like it wasn't meant to be exported. But it should
have been marked as such. And I think it would behoove us to reduce
random failures in important 3rd party libraries by keeping the old
version around (but mark it as deprecated with an explaining comment,
and submit a Django fix to stop using it).

Also the module should be updated to use _tagfind internally (and
likewise for other accidental exports).
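
A minimal sketch of that approach (not the actual html.parser source, just
the shape it could take):

    import re

    # New, stricter pattern used internally by the parser from now on.
    _tagfind = re.compile(r'([a-zA-Z][-.a-zA-Z0-9:_]*)(?:\s|/(?!>))*')

    # Deprecated: kept only because third-party code (e.g. Django) turned
    # out to rely on it; not used by the module itself any more.
    tagfind = re.compile(r'[a-zA-Z][-.a-zA-Z0-9:_]*')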

Traditionally we've been really lax about this stuff. We should strive
to improve and clarify the exact boundaries of our APIs better.

-- 
--Guido van Rossum (python.org/~guido)

From g.brandl at gmx.net  Thu Apr 26 21:26:08 2012
From: g.brandl at gmx.net (Georg Brandl)
Date: Thu, 26 Apr 2012 21:26:08 +0200
Subject: [Python-Dev] Changes in html.parser may cause breakage in
	client code
In-Reply-To: <loom.20120426T205136-611@post.gmane.org>
References: <loom.20120426T205136-611@post.gmane.org>
Message-ID: <jnc7fh$hvo$1@dough.gmane.org>

On 26.04.2012 21:10, Vinay Sajip wrote:
> Following recent changes in html.parser, the Python 3 port of Django I'm working
> on has started failing while parsing HTML.
> 
> The reason appears to be that Django uses some module-level data in html.parser,
> for example tagfind, which is a regular expression pattern. This has changed
> recently (Ezio changed it in ba4baaddac8d).
> 
> Now tagfind (and other such patterns) are not marked as private (though not
> documented), but should they be? The following script (tagfind.py):
> 
>     import html.parser as Parser
> 
>     data = '<select name="stuff">'
> 
>     m = Parser.tagfind.match(data, 1)
>     print('%r -> %r' % (Parser.tagfind.pattern, data[1:m.end()]))
> 
> gives different results on 3.2 and 3.3:
> 
>     $ python3.2 tagfind.py
>     '[a-zA-Z][-.a-zA-Z0-9:_]*' -> 'select'
>     $ python3.3 tagfind.py
>     '([a-zA-Z][-.a-zA-Z0-9:_]*)(?:\\s|/(?!>))*' -> 'select '
> 
> The trailing space later causes a mismatch with the end tag, and leads to the
> errors. Django's use of the tagfind pattern is in a subclass of HTMLParser, in
> an overridden parse_startag method.
> 
> Do we need to indicate more strongly that data like tagfind are private? Or has
> the change introduced inadvertent breakage, requiring a fix in Python?

Since it's a module level constant without a leading underscore, IMO it was
okay for Django to use it, even if not documented.

In this case, especially since we actually have evidence of someone using the
constant, I would keep it as-is and use a new (underscored, this time) name for
the new pattern.

And yes, I think that we do need to indicate private-ness of module-level data.

Georg


From tismer at stackless.com  Thu Apr 26 23:30:40 2012
From: tismer at stackless.com (Christian Tismer)
Date: Thu, 26 Apr 2012 23:30:40 +0200
Subject: [Python-Dev] package imports, sys.path and os.chdir()
Message-ID: <4F99BE80.2090509@stackless.com>

Howdy,

I have a small problem/observation with imports.

I have several packages to import, which works all fine, as long
as the packages are imported from directories found on the installed
site-packages, via .pth etc.

The only problem is the automatically prepended empty string in sys.path.
Depending on where I start my application, the values stored
in package.__file__ and package.__path__ are absolute or relative
paths.

So, if my pwd is the directory that contains my top-level modules,
even though sys.path contains correct absolute entries for that, in this
case the '' entry wins.

Assume this:

<- cwd is here
    moda
        modb

 >>> import moda

Some code happens to chdir away, and later some code does

 >>> from moda import modb

Since the __path__ entry is now a relative path, this second import fails.

Although it is not recommended practice to leave the working directory
changed, I don't see why this has to fail. When a module is imported,
would it not be better to always make __file__ and __path__ absolute?

I see the module path being hidden by the '' entry not as a feature but
as an undesired side-effect.

No big deal and easy to work around, I just would like to understand why.
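
For completeness, a minimal sketch of the failure (assuming the layout
above, ./moda/__init__.py and ./moda/modb.py, on an interpreter that still
records relative paths):

    import os
    import moda                      # found via the '' entry in sys.path

    print(moda.__path__)             # e.g. ['moda'] -- a relative path

    os.chdir('/tmp')                 # some library changes the cwd...

    from moda import modb            # ...and now this import fails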

cheers -- chris

-- 
Christian Tismer             :^)<mailto:tismer at stackless.com>
tismerysoft GmbH             :     Have a break! Take a ride on Python's
Karl-Liebknecht-Str. 121     :    *Starship* http://starship.python.net/
14482 Potsdam                :     PGP key ->  http://pgp.uni-mainz.de
work +49 173 24 18 776  mobile +49 173 24 18 776  fax n.a.
PGP 0x57F3BF04       9064 F4E1 D754 C2FF 1619  305B C09C 5A3B 57F3 BF04
       whom do you want to sponsor today?   http://www.stackless.com/


From ncoghlan at gmail.com  Fri Apr 27 02:33:14 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Fri, 27 Apr 2012 10:33:14 +1000
Subject: [Python-Dev] Changes in html.parser may cause breakage in
	client code
In-Reply-To: <CAP7+vJKk=8N7MmtrGFSeWMDfnJfx08+cwnE50zvTJwJ-hYSiBA@mail.gmail.com>
References: <loom.20120426T205136-611@post.gmane.org>
	<CAP7+vJKk=8N7MmtrGFSeWMDfnJfx08+cwnE50zvTJwJ-hYSiBA@mail.gmail.com>
Message-ID: <CADiSq7dzJ2v=x1MGOLe8nkkDRxqfUiunHwfFYdwBXzY6TX0Ayg@mail.gmail.com>

On Fri, Apr 27, 2012 at 5:21 AM, Guido van Rossum <guido at python.org> wrote:
> Traditionally we've been really lax about this stuff. We should strive
> to improve and clarify the exact boundaries of our APIs better.

Yeah, I must admit in my own projects these days I habitually mark all
module level and class level names with a leading underscore until I
make a conscious decision to make them part of the relevant public
API. I also do this for any new helper attributes and
functions/methods I add to the stdlib.

One key catalyst for this was when PJE pointed out a bug years ago in
the behaviour of the -m switch that meant I had to introduce a *new*
helper function to runpy, because runpy.run_module was public, and I
needed to change the signature in a backwards incompatible way to fix
the bug (and thus the current runpy._run_module_as_main hook was
born).

When I use dir() and help() as much as I do to explore unfamiliar
APIs, I feel obliged to make sure that introspecting my own code
accurately reflects which names are part of the public API and which
are just implementation details.
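
As a minimal sketch of that convention (a hypothetical module, not actual
stdlib code):

    __all__ = ['process']

    _CHUNK_SIZE = 8192                # implementation detail, free to change

    def _read_chunks(stream):
        # Private helper: its signature can change without breaking anyone.
        while True:
            chunk = stream.read(_CHUNK_SIZE)
            if not chunk:
                return
            yield chunk

    def process(stream):
        """The one deliberately public entry point."""
        return sum(len(chunk) for chunk in _read_chunks(stream))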

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From ncoghlan at gmail.com  Fri Apr 27 02:39:00 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Fri, 27 Apr 2012 10:39:00 +1000
Subject: [Python-Dev] package imports, sys.path and os.chdir()
In-Reply-To: <4F99BE80.2090509@stackless.com>
References: <4F99BE80.2090509@stackless.com>
Message-ID: <CADiSq7fPU8bC6u3Wbt0gd4=NzQAYUtoW8uHMgrKG3v2vaTB76g@mail.gmail.com>

On Fri, Apr 27, 2012 at 7:30 AM, Christian Tismer <tismer at stackless.com> wrote:
> No big deal and easy to work around, I just would like to understand why.

I don't like it either and want to change it, but I'm also not going
to mess with it until the importlib bootstrapping is fully integrated
and stable.

For the moment, there's a workaround in runpy to ensure at least
__main__.__file__ is always absolute (even when using the -m switch).
Longer term, I'd like to see __file__ and __path__ entries guaranteed
to *always* be absolute, even when modules are imported relative to the
current working directory.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From larry at hastings.org  Fri Apr 27 05:29:07 2012
From: larry at hastings.org (Larry Hastings)
Date: Thu, 26 Apr 2012 20:29:07 -0700
Subject: [Python-Dev] sys.implementation
In-Reply-To: <CALFfu7BRQcKurhxzkmO-NLDoGesvFuW4kozYBOv2ovEyQm5JVA@mail.gmail.com>
References: <CALFfu7DYyZMUp40MDR9-vhpOkPvr=cwt5EmMHEGTrmix_kZbYg@mail.gmail.com>
	<CALFfu7B4GpCNgyxLUmtji1LLKLgFPPvR2Zd3e2V6UW4_0HoR7g@mail.gmail.com>
	<CALFfu7BRQcKurhxzkmO-NLDoGesvFuW4kozYBOv2ovEyQm5JVA@mail.gmail.com>
Message-ID: <4F9A1283.2010206@hastings.org>

On 04/25/2012 10:31 PM, Eric Snow wrote:
> The patch adds a struct sequence that holds ("name" =>  "CPython",
> "version" =>  sys.version_info).  If later needs dictate more fields,
> we can cross that bridge then.

My one bit of bike-shedding: I don't think it's desirable that this 
object be iterable.  Therefore I suggest you don't use struct sequence.
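
For instance, a minimal sketch of a non-iterable alternative (illustrative
only, using types.SimpleNamespace from 3.3; not necessarily the final shape
of sys.implementation):

    import sys
    import types

    implementation = types.SimpleNamespace(
        name='cpython',
        version=sys.version_info,
    )

    print(implementation.name)        # attribute access works...
    # ...but there is no iteration or indexing to accidentally rely on:
    # iter(implementation) raises TypeError.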


//arry/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20120426/136d1f51/attachment.html>

From ezio.melotti at gmail.com  Fri Apr 27 07:23:13 2012
From: ezio.melotti at gmail.com (Ezio Melotti)
Date: Fri, 27 Apr 2012 08:23:13 +0300
Subject: [Python-Dev] Changes in html.parser may cause breakage in
 client code
In-Reply-To: <loom.20120426T205136-611@post.gmane.org>
References: <loom.20120426T205136-611@post.gmane.org>
Message-ID: <4F9A2D41.1050409@gmail.com>

Hi,

On 26/04/2012 22.10, Vinay Sajip wrote:
> Following recent changes in html.parser, the Python 3 port of Django I'm working
> on has started failing while parsing HTML.
>
> The reason appears to be that Django uses some module-level data in html.parser,
> for example tagfind, which is a regular expression pattern. This has changed
> recently (Ezio changed it in ba4baaddac8d).

html.parser doesn't use any private _names, so I was considering only 
the documented names to be part of the public API.  Several methods are 
marked with an "# internal" comment, but that's not visible unless you go 
read the source code.

> Now tagfind (and other such patterns) are not marked as private (though not
> documented), but should they be? The following script (tagfind.py):
>
>      import html.parser as Parser
>
>      data = '<select name="stuff">'
>
>      m = Parser.tagfind.match(data, 1)
>      print('%r ->  %r' % (Parser.tagfind.pattern, data[1:m.end()]))
>
> gives different results on 3.2 and 3.3:
>
>      $ python3.2 tagfind.py
>      '[a-zA-Z][-.a-zA-Z0-9:_]*' ->  'select'
>      $ python3.3 tagfind.py
>      '([a-zA-Z][-.a-zA-Z0-9:_]*)(?:\\s|/(?!>))*' ->  'select'
>
> The trailing space later causes a mismatch with the end tag, and leads to the
> errors. Django's use of the tagfind pattern is in a subclass of HTMLParser, in
> an overridden parse_startag method.

Django shouldn't override parse_starttag (internal and undocumented), 
but just use handle_starttag (public and documented); see the sketch after 
the list below.
I see two possible reasons why it's overriding parse_starttag:
  1) Django is working around an HTMLParser bug.  In this case the bug 
could have been fixed (leading to the breakage of the now-useless 
workaround), and now you could be able to use the original 
parse_starttag and have the correct result.  If it is indeed working 
around a bug and the bug is still present, you should report it upstream.
  2) Django is implementing an additional feature.  Depending on what 
exactly the code is doing you might want to open a new feature request 
on the bug tracker. For example the original parse_starttag sets a 
self.lasttag attribute with the correct name of the last tag parsed.  
Note however that both parse_starttag and self.lasttag are internal and 
shouldn't be used directly (but lasttag could be exposed and documented 
if people really think that it's useful).
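
A minimal sketch of the documented route (using only the public API, so it
should keep working across releases):

    from html.parser import HTMLParser

    class TagCollector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.tags = []

        def handle_starttag(self, tag, attrs):
            # The tag name arrives already parsed; no regex needed here.
            self.tags.append((tag, dict(attrs)))

    p = TagCollector()
    p.feed('<select name="stuff">')
    print(p.tags)                    # [('select', {'name': 'stuff'})]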

> Do we need to indicate more strongly that data like tagfind are private? Or has
> the change introduced inadvertent breakage, requiring a fix in Python?

I'm not sure that reverting the regex, deprecating all the exposed 
internal names, and adding/using internal _names instead is a good idea at 
this point.  This will cause more breakage, and it would require an 
extensive renaming.  I can add notes to the documentation/docstrings and 
specify what's private and what's not though.
OTOH, if this specific fix is not released yet I can still do something 
to limit/avoid the breakage.

Best Regards,
Ezio Melotti

> Regards,
>
> Vinay Sajip
>


From ericsnowcurrently at gmail.com  Fri Apr 27 08:05:08 2012
From: ericsnowcurrently at gmail.com (Eric Snow)
Date: Fri, 27 Apr 2012 00:05:08 -0600
Subject: [Python-Dev] sys.implementation
In-Reply-To: <4F9A1283.2010206@hastings.org>
References: <CALFfu7DYyZMUp40MDR9-vhpOkPvr=cwt5EmMHEGTrmix_kZbYg@mail.gmail.com>
	<CALFfu7B4GpCNgyxLUmtji1LLKLgFPPvR2Zd3e2V6UW4_0HoR7g@mail.gmail.com>
	<CALFfu7BRQcKurhxzkmO-NLDoGesvFuW4kozYBOv2ovEyQm5JVA@mail.gmail.com>
	<4F9A1283.2010206@hastings.org>
Message-ID: <CALFfu7DKHS+S7wCQNYXkEj_R+1oStBTJLpEGoa8iW9oQ-WJUiA@mail.gmail.com>

On Thu, Apr 26, 2012 at 9:29 PM, Larry Hastings <larry at hastings.org> wrote:
> My one bit of bike-shedding: I don't think it's desirable that this object
> be iterable.  Therefore I suggest you don't use struct sequence.

Good point.  Noted.

-eric

From ericsnowcurrently at gmail.com  Fri Apr 27 09:34:22 2012
From: ericsnowcurrently at gmail.com (Eric Snow)
Date: Fri, 27 Apr 2012 01:34:22 -0600
Subject: [Python-Dev] sys.implementation
In-Reply-To: <20120426103150.4898a678@limelight.wooz.org>
References: <CALFfu7DYyZMUp40MDR9-vhpOkPvr=cwt5EmMHEGTrmix_kZbYg@mail.gmail.com>
	<CALFfu7B4GpCNgyxLUmtji1LLKLgFPPvR2Zd3e2V6UW4_0HoR7g@mail.gmail.com>
	<CALFfu7BRQcKurhxzkmO-NLDoGesvFuW4kozYBOv2ovEyQm5JVA@mail.gmail.com>
	<20120426103150.4898a678@limelight.wooz.org>
Message-ID: <CALFfu7DbaC0Dhvzzif+kBT42J13JFfgb8tdyR58yh2=h_t-kZQ@mail.gmail.com>

On Thu, Apr 26, 2012 at 8:31 AM, Barry Warsaw <barry at python.org> wrote:
> It's somewhat of a corner case, but I think a PEP couldn't hurt.  The
> rationale section would be useful, at least.

  http://mail.python.org/pipermail/python-ideas/2012-April/014954.html

-eric

From guido at python.org  Fri Apr 27 16:36:06 2012
From: guido at python.org (Guido van Rossum)
Date: Fri, 27 Apr 2012 07:36:06 -0700
Subject: [Python-Dev] Changes in html.parser may cause breakage in
	client code
In-Reply-To: <4F9A2D41.1050409@gmail.com>
References: <loom.20120426T205136-611@post.gmane.org>
	<4F9A2D41.1050409@gmail.com>
Message-ID: <CAP7+vJ+U8oduPUKRFBCNdUBa0cJkN9JU7Z0vGq=72geUuD3+RA@mail.gmail.com>

Someone should contact the Django folks. Alex Gaynor?

On Thursday, April 26, 2012, Ezio Melotti wrote:

> Hi,
>
> On 26/04/2012 22.10, Vinay Sajip wrote:
>
>> Following recent changes in html.parser, the Python 3 port of Django I'm
>> working
>> on has started failing while parsing HTML.
>>
>> The reason appears to be that Django uses some module-level data in
>> html.parser,
>> for example tagfind, which is a regular expression pattern. This has
>> changed
>> recently (Ezio changed it in ba4baaddac8d).
>>
>
> html.parser doesn't use any private _name, so I was considering part of
> the public API only the documented names.  Several methods are marked with
> an "# internal" comment, but that's not visible unless you go read the
> source code.
>
>  Now tagfind (and other such patterns) are not marked as private (though
>> not
>> documented), but should they be? The following script (tagfind.py):
>>
>>     import html.parser as Parser
>>
>>     data = '<select name="stuff">'
>>
>>     m = Parser.tagfind.match(data, 1)
>>     print('%r ->  %r' % (Parser.tagfind.pattern, data[1:m.end()]))
>>
>> gives different results on 3.2 and 3.3:
>>
>>     $ python3.2 tagfind.py
>>     '[a-zA-Z][-.a-zA-Z0-9:_]*' ->  'select'
>>     $ python3.3 tagfind.py
>>     '([a-zA-Z][-.a-zA-Z0-9:_]*)(?:\\s|/(?!>))*' ->  'select'
>>
>> The trailing space later causes a mismatch with the end tag, and leads to
>> the
>> errors. Django's use of the tagfind pattern is in a subclass of
>> HTMLParser, in
>> an overridden parse_startag method.
>>
>
> Django shouldn't override parse_starttag (internal and undocumented), but
> just use handle_starttag (public and documented).
> I see two possible reasons why it's overriding parse_starttag:
>  1) Django is working around an HTMLParser bug.  In this case the bug
> could have been fixed (leading to the breakage of the now-useless
> workaround), and now you could be able to use the original parse_starttag
> and have the correct result.  If it is indeed working around a bug and the
> bug is still present, you should report it upstream.
>  2) Django is implementing an additional feature.  Depending on what
> exactly the code is doing you might want to open a new feature request on
> the bug tracker. For example the original parse_starttag sets a
> self.lasttag attribute with the correct name of the last tag parsed.  Note
> however that both parse_starttag and self.lasttag are internal and
> shouldn't be used directly (but lasttag could be exposed and documented if
> people really think that it's useful).
>
>  Do we need to indicate more strongly that data like tagfind are private?
>> Or has
>> the change introduced inadvertent breakage, requiring a fix in Python?
>>
>
> I'm not sure that reverting the regex, deprecate all the exposed internal
> names, and add/use internal _names instead is a good idea at this point.
>  This will cause more breakage, and it would require an extensive renaming.
>  I can add notes to the documentation/docstrings and specify what's private
> and what's not though.
> OTOH, if this specific fix is not released yet I can still do something to
> limit/avoid the breakage.
>
> Best Regards,
> Ezio Melotti
>
>  Regards,
>>
>> Vinay Sajip
>>
>>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org
>


-- 
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20120427/fafe844d/attachment.html>

From tismer at stackless.com  Fri Apr 27 16:39:20 2012
From: tismer at stackless.com (Christian Tismer)
Date: Fri, 27 Apr 2012 16:39:20 +0200
Subject: [Python-Dev] package imports, sys.path and os.chdir()
In-Reply-To: <CADiSq7fPU8bC6u3Wbt0gd4=NzQAYUtoW8uHMgrKG3v2vaTB76g@mail.gmail.com>
References: <4F99BE80.2090509@stackless.com>
	<CADiSq7fPU8bC6u3Wbt0gd4=NzQAYUtoW8uHMgrKG3v2vaTB76g@mail.gmail.com>
Message-ID: <4F9AAF98.7050303@stackless.com>

On 27.04.12 02:39, Nick Coghlan wrote:
> On Fri, Apr 27, 2012 at 7:30 AM, Christian Tismer<tismer at stackless.com>  wrote:
>> No big deal and easy to work around, I just would like to understand why.
> I don't like it either and want to change it, but I'm also not going
> to mess with it until the importlib bootstrapping is fully integrated
> and stable.
>
> For the moment, there's a workaround in runpy to ensure at least
> __main__.__file__ is always absolute (even when using the -m switch).
> Longer term, I'd like to see __file__ and __path__ entries to be
> guaranteed to be *always* absolutely, even when they're imported
> relative to the current working directory.
>

Is there a recommendable way to fix this? I would like to tell people
what to do to make imports reliable. Either I put something into
the toplevel __init__ code, or I hack something into .pth or sitecustomize,
and then forget about this.

But I fear hacking __init__ is the only safe way that works without
a special python setup, which makes the whole reasoning rather
useless, because I can _not_ forget about this.... waah ;-)
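
For what it's worth, the __init__ hack is tiny (a sketch of the workaround,
placed inside the package's __init__.py):

    import os

    # Freeze the paths to absolute values at import time, so a later
    # os.chdir() cannot invalidate them.
    __file__ = os.path.abspath(__file__)
    if '__path__' in globals():     # only packages have __path__
        __path__ = [os.path.abspath(p) for p in __path__]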

cheers - chris

-- 
Christian Tismer             :^)<mailto:tismer at stackless.com>
tismerysoft GmbH             :     Have a break! Take a ride on Python's
Karl-Liebknecht-Str. 121     :    *Starship* http://starship.python.net/
14482 Potsdam                :     PGP key ->  http://pgp.uni-mainz.de
work +49 173 24 18 776  mobile +49 173 24 18 776  fax n.a.
PGP 0x57F3BF04       9064 F4E1 D754 C2FF 1619  305B C09C 5A3B 57F3 BF04
       whom do you want to sponsor today?   http://www.stackless.com/


From status at bugs.python.org  Fri Apr 27 18:07:14 2012
From: status at bugs.python.org (Python tracker)
Date: Fri, 27 Apr 2012 18:07:14 +0200 (CEST)
Subject: [Python-Dev] Summary of Python tracker Issues
Message-ID: <20120427160714.C11BB1CAC7@psf.upfronthosting.co.za>


ACTIVITY SUMMARY (2012-04-20 - 2012-04-27)
Python tracker at http://bugs.python.org/

To view or respond to any of the issues listed below, click on the issue.
Do NOT respond to this message.

Issues counts and deltas:
  open    3405 ( +9)
  closed 23056 (+41)
  total  26461 (+50)

Open issues with patches: 1446 


Issues opened (37)
==================

#5057: Unicode-width dependent optimization leads to non-portable pyc
http://bugs.python.org/issue5057  reopened by arigo

#8767: Configure: Cannot disable unicode
http://bugs.python.org/issue8767  reopened by r.david.murray

#10142: Support for SEEK_HOLE/SEEK_DATA
http://bugs.python.org/issue10142  reopened by jcea

#13903: New shared-keys dictionary implementation
http://bugs.python.org/issue13903  reopened by Mark.Shannon

#14339: Optimizing bin, oct and hex
http://bugs.python.org/issue14339  reopened by loewis

#14635: telnetlib uses select instead of poll - limited to FD_SETSIZE 
http://bugs.python.org/issue14635  opened by gregory.p.smith

#14639: Different behavior for urllib2 in Python 2.7
http://bugs.python.org/issue14639  opened by Diego.Manenti.Martins

#14642: Fix importlib.h build rule to not depend on hg
http://bugs.python.org/issue14642  opened by brett.cannon

#14643: Security page out of date
http://bugs.python.org/issue14643  opened by pitrou

#14645: Generator does not translate linesep characters in certain cir
http://bugs.python.org/issue14645  opened by r.david.murray

#14646: Require loaders set __loader__ and __package__
http://bugs.python.org/issue14646  opened by brett.cannon

#14647: imp.reload() on a package leads to a segfault or a GC assertio
http://bugs.python.org/issue14647  opened by brett.cannon

#14649: doctest.DocTestSuite error misleading when module has no docst
http://bugs.python.org/issue14649  opened by cjerdonek

#14651: pysetup run cmd can't handle option values in the setup.cfg
http://bugs.python.org/issue14651  opened by shimizukawa

#14652: Better error messages for wsgiref validator failures
http://bugs.python.org/issue14652  opened by ssm

#14653: Improve mktime_tz to use calendar.timegm instead of time.mktim
http://bugs.python.org/issue14653  opened by mitar

#14654: More fast utf-8 decoding
http://bugs.python.org/issue14654  opened by storchaka

#14655: traceback module docs should show how to print/fomat an except
http://bugs.python.org/issue14655  opened by r.david.murray

#14656: Add a macro for unreachable code
http://bugs.python.org/issue14656  opened by benjamin.peterson

#14657: Avoid two importlib copies
http://bugs.python.org/issue14657  opened by pitrou

#14660: Implement PEP 420: Implicit Namespace Packages
http://bugs.python.org/issue14660  opened by eric.smith

#14662: shutil.move broken in 2.7.3 on OSX (chflags fails)
http://bugs.python.org/issue14662  opened by grobian

#14665: faulthandler prints tracebacks in reverse order
http://bugs.python.org/issue14665  opened by pitrou

#14666: test_sendall_interrupted hangs on FreeBSD with a zombi multipr
http://bugs.python.org/issue14666  opened by haypo

#14667: No IDLE
http://bugs.python.org/issue14667  opened by James.Lu

#14668: Document the path option in the Windows installer
http://bugs.python.org/issue14668  opened by brian.curtin

#14669: test_multiprocessing failure on OS X Tiger
http://bugs.python.org/issue14669  opened by pitrou

#14672: Windows installer: add desktop shortcut(s)
http://bugs.python.org/issue14672  opened by jdigital

#14673: add sys.implementation
http://bugs.python.org/issue14673  opened by eric.snow

#14674: Add link to RFC 4627 from json documentation
http://bugs.python.org/issue14674  opened by storchaka

#14675: make distutils.ccompiler.CCompiler an abstract class
http://bugs.python.org/issue14675  opened by ramchandra.apte

#14676: DeprecationWarning missing in default warning filters document
http://bugs.python.org/issue14676  opened by petere

#14678: Update zipimport to support importlib.invalidate_caches()
http://bugs.python.org/issue14678  opened by brett.cannon

#14679: Changes to html.parser break third-party code
http://bugs.python.org/issue14679  opened by vinay.sajip

#14680: pydoc with -w option does not work for a lot of help topics
http://bugs.python.org/issue14680  opened by gregor.hoch

#14682: Backport missing errnos to 2.7
http://bugs.python.org/issue14682  opened by hynek

#1065986: Fix pydoc crashing on unicode strings
http://bugs.python.org/issue1065986  reopened by r.david.murray



Most recent 15 issues with no replies (15)
==========================================

#14680: pydoc with -w option does not work for a lot of help topics
http://bugs.python.org/issue14680

#14679: Changes to html.parser break third-party code
http://bugs.python.org/issue14679

#14676: DeprecationWarning missing in default warning filters document
http://bugs.python.org/issue14676

#14674: Add link to RFC 4627 from json documentation
http://bugs.python.org/issue14674

#14652: Better error messages for wsgiref validator failures
http://bugs.python.org/issue14652

#14649: doctest.DocTestSuite error misleading when module has no docst
http://bugs.python.org/issue14649

#14645: Generator does not translate linesep characters in certain cir
http://bugs.python.org/issue14645

#14616: subprocess docs should mention pipes.quote/shlex.quote
http://bugs.python.org/issue14616

#14584: Add gzip support to xmlrpc.server
http://bugs.python.org/issue14584

#14570: Document json "sort_keys" parameter properly
http://bugs.python.org/issue14570

#14566: run_cgi reverts to using unnormalized path
http://bugs.python.org/issue14566

#14561: python-2.7.2-r3 suffers test failure at test_mhlib
http://bugs.python.org/issue14561

#14558: Documentation for unittest.main does not describe some keyword
http://bugs.python.org/issue14558

#14530: distutils's build_wininst command fails to correctly interpret
http://bugs.python.org/issue14530

#14529: distutils's build_msi command ignores the data_files argument
http://bugs.python.org/issue14529



Most recent 15 issues waiting for review (15)
=============================================

#14676: DeprecationWarning missing in default warning filters document
http://bugs.python.org/issue14676

#14673: add sys.implementation
http://bugs.python.org/issue14673

#14669: test_multiprocessing failure on OS X Tiger
http://bugs.python.org/issue14669

#14666: test_sendall_interrupted hangs on FreeBSD with a zombi multipr
http://bugs.python.org/issue14666

#14665: faulthandler prints tracebacks in reverse order
http://bugs.python.org/issue14665

#14657: Avoid two importlib copies
http://bugs.python.org/issue14657

#14656: Add a macro for unreachable code
http://bugs.python.org/issue14656

#14654: More fast utf-8 decoding
http://bugs.python.org/issue14654

#14652: Better error messages for wsgiref validator failures
http://bugs.python.org/issue14652

#14651: pysetup run cmd can't handle option values in the setup.cfg
http://bugs.python.org/issue14651

#14631: Instance methods and WeakRefs don't mix.
http://bugs.python.org/issue14631

#14625: Faster utf-32 decoder
http://bugs.python.org/issue14625

#14624: Faster utf-16 decoder
http://bugs.python.org/issue14624

#14617: confusing docs with regard to __hash__
http://bugs.python.org/issue14617

#14611: inspect.getargs fails on some anonymous tuples
http://bugs.python.org/issue14611



Top 10 most discussed issues (10)
=================================

#14657: Avoid two importlib copies
http://bugs.python.org/issue14657  47 msgs

#3177: Add shutil.open
http://bugs.python.org/issue3177  30 msgs

#14605: Make import machinery explicit
http://bugs.python.org/issue14605  28 msgs

#13210: Support Visual Studio 2010
http://bugs.python.org/issue13210  20 msgs

#13959: Re-implement parts of imp in pure Python
http://bugs.python.org/issue13959  17 msgs

#14642: Fix importlib.h build rule to not depend on hg
http://bugs.python.org/issue14642  14 msgs

#14579: CVE-2012-2135: Vulnerability in the utf-16 decoder after error
http://bugs.python.org/issue14579  13 msgs

#14666: test_sendall_interrupted hangs on FreeBSD with a zombi multipr
http://bugs.python.org/issue14666  13 msgs

#11618: Locks broken wrt timeouts on Windows
http://bugs.python.org/issue11618  12 msgs

#10142: Support for SEEK_HOLE/SEEK_DATA
http://bugs.python.org/issue10142  11 msgs



Issues closed (41)
==================

#2193: Cookie Colon Name Bug
http://bugs.python.org/issue2193  closed by orsenthil

#2857: Add "java modified utf-8" codec
http://bugs.python.org/issue2857  closed by loewis

#4892: Sending Connection-objects over multiprocessing connections fa
http://bugs.python.org/issue4892  closed by pitrou

#8427: toplevel jumps to another location on the screen
http://bugs.python.org/issue8427  closed by ned.deily

#11574: TextIOWrapper: Unicode Fallback Encoding on Python 3.3
http://bugs.python.org/issue11574  closed by haypo

#12632: Python 3 doesn't support cp65001 as the OEM code page
http://bugs.python.org/issue12632  closed by haypo

#13478: No documentation for timeit.default_timer
http://bugs.python.org/issue13478  closed by sandro.tosi

#13587: Correcting the typos error in Doc/howto/urllib2.rst
http://bugs.python.org/issue13587  closed by sandro.tosi

#13621: Unicode performance regression in python3.3 vs python3.2
http://bugs.python.org/issue13621  closed by loewis

#14026: test_cmd_line_script should include more sys.argv checks
http://bugs.python.org/issue14026  closed by ncoghlan

#14160: TarFile.extractfile fails to extract targets of top-level rela
http://bugs.python.org/issue14160  closed by lars.gustaebel

#14448: Mention pytz in datetime's docs
http://bugs.python.org/issue14448  closed by sandro.tosi

#14554: test module: correction
http://bugs.python.org/issue14554  closed by sandro.tosi

#14581: Support case-insensitive file extensions on Windows in importl
http://bugs.python.org/issue14581  closed by brett.cannon

#14585: Have test_import run more importlib tests
http://bugs.python.org/issue14585  closed by brett.cannon

#14599: Windows test_import failure thanks to ImportError.path
http://bugs.python.org/issue14599  closed by brett.cannon

#14606: Memory leak subprocess on Windows
http://bugs.python.org/issue14606  closed by neologix

#14628: Clarify import statement documentation regarding what gets bou
http://bugs.python.org/issue14628  closed by brett.cannon

#14630: non-deterministic behavior of int subclass
http://bugs.python.org/issue14630  closed by mark.dickinson

#14632: Race condition in WatchedFileHandler leads to unhandled except
http://bugs.python.org/issue14632  closed by vinay.sajip

#14633: test_find_module_encoding should test for a less specific mess
http://bugs.python.org/issue14633  closed by brett.cannon

#14634: Mock cannot autospec functions with keyword-only arguments.
http://bugs.python.org/issue14634  closed by python-dev

#14636: Mock could check for exceptions in side effect list
http://bugs.python.org/issue14636  closed by python-dev

#14637: test.test_import.PathsTests.test_UNC_path is failing
http://bugs.python.org/issue14637  closed by brett.cannon

#14638: pydoc error on instance of a custom class
http://bugs.python.org/issue14638  closed by r.david.murray

#14640: Typos in pyporting.rst
http://bugs.python.org/issue14640  closed by r.david.murray

#14641: Minor fixes in sockets.rst
http://bugs.python.org/issue14641  closed by sandro.tosi

#14644: test_logging failure on OS X Tiger
http://bugs.python.org/issue14644  closed by vinay.sajip

#14648: Attempt to format ascii and non-ascii strings together fails w
http://bugs.python.org/issue14648  closed by python-dev

#14650: 1-character typo in shutil docstring
http://bugs.python.org/issue14650  closed by sandro.tosi

#14658: Overwriting dict.__getattr__ is inconsistent
http://bugs.python.org/issue14658  closed by python-dev

#14659: HP multi-thread environment python core in PyObject_GC_UnTrack
http://bugs.python.org/issue14659  closed by amaury.forgeotdarc

#14661: posix module: add O_EXEC, O_SEARCH, O_TTY_INIT
http://bugs.python.org/issue14661  closed by jcea

#14663: Cannot comment out comments
http://bugs.python.org/issue14663  closed by r.david.murray

#14664: Skipping a test mixin gives metaclass error
http://bugs.python.org/issue14664  closed by pitrou

#14670: subprocess.call with  pipe character in argument
http://bugs.python.org/issue14670  closed by r.david.murray

#14671: isinstance(obj, object) returns True for _old style_ class ins
http://bugs.python.org/issue14671  closed by benjamin.peterson

#14677: Python 2.6 Printing Error
http://bugs.python.org/issue14677  closed by eric.smith

#14681: Problem in installation of version 2.3.5 on mac OS X 10.5.8
http://bugs.python.org/issue14681  closed by pitrou

#14683: os.path.isdir.__name__ is "_isdir" on Windows (2.7.3)
http://bugs.python.org/issue14683  closed by loewis

#1346572: Remove inconsistent behavior between import and zipimport
http://bugs.python.org/issue1346572  closed by eric.araujo

From tjreedy at udel.edu  Fri Apr 27 19:23:56 2012
From: tjreedy at udel.edu (Terry Reedy)
Date: Fri, 27 Apr 2012 13:23:56 -0400
Subject: [Python-Dev] Changes in html.parser may cause breakage in
	client code
In-Reply-To: <4F9A2D41.1050409@gmail.com>
References: <loom.20120426T205136-611@post.gmane.org>
	<4F9A2D41.1050409@gmail.com>
Message-ID: <jneknk$ujs$1@dough.gmane.org>

On 4/27/2012 1:23 AM, Ezio Melotti wrote:

> html.parser doesn't use any private _name, so I was considering part of
> the public API only the documented names. Several methods are marked
> with an "# internal" comment, but that's not visible unless you go read
> the source code.

I could not find __all__ defined. Perhaps defining that would help.
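
Something as small as this near the top of Lib/html/parser.py would do it
(a sketch; HTMLParser is the only class the docs cover):

    __all__ = ['HTMLParser']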

-- 
Terry Jan Reedy


From v+python at g.nevcal.com  Fri Apr 27 19:40:43 2012
From: v+python at g.nevcal.com (Glenn Linderman)
Date: Fri, 27 Apr 2012 10:40:43 -0700
Subject: [Python-Dev] sys.implementation
In-Reply-To: <CALFfu7DbaC0Dhvzzif+kBT42J13JFfgb8tdyR58yh2=h_t-kZQ@mail.gmail.com>
References: <CALFfu7DYyZMUp40MDR9-vhpOkPvr=cwt5EmMHEGTrmix_kZbYg@mail.gmail.com>
	<CALFfu7B4GpCNgyxLUmtji1LLKLgFPPvR2Zd3e2V6UW4_0HoR7g@mail.gmail.com>
	<CALFfu7BRQcKurhxzkmO-NLDoGesvFuW4kozYBOv2ovEyQm5JVA@mail.gmail.com>
	<20120426103150.4898a678@limelight.wooz.org>
	<CALFfu7DbaC0Dhvzzif+kBT42J13JFfgb8tdyR58yh2=h_t-kZQ@mail.gmail.com>
Message-ID: <4F9ADA1B.6030300@g.nevcal.com>

On 4/27/2012 12:34 AM, Eric Snow wrote:
> On Thu, Apr 26, 2012 at 8:31 AM, Barry Warsaw<barry at python.org>  wrote:
>> It's somewhat of a corner case, but I think a PEP couldn't hurt.  The
>> rationale section would be useful, at least.
>    http://mail.python.org/pipermail/python-ideas/2012-April/014954.html

The idea of having separate versions for CPython and stdlib has been 
raised recently, although I believe it has mostly been deferred or 
discarded.  Should that be resurrected, sys.implementation may be a good 
repository for the additional version info defining the stdlib version.

However, this PEP raises the following question in my mind: is the sys 
module part of the stdlib? Before reaching a hasty conclusion, consider 
the following points:

1) with this proposal, the contents of sys.implementation will vary 
between implementations.  If stdlib is to be shared among 
implementations, then it seems sys.implementation should not be part of 
the stdlib, but rather part of the implementation. Is sys considered 
part of the implementation or part of the stdlib? I've always perceived 
it as part of the stdlib, because of the way it is documented.

2) importlib wants to be part of the stdlib, and thus available to other 
implementations, but it must be built-in or frozen. The goal with 
importlib is a common implementation in Python, that can be used by all 
implementations.  I am not clear on whether the accelerated C code is 
part of the stdlib, or part of an implementation optimization, nor how 
the structuring of such things is arranged to separate stdlib from 
implementation (if it is; if it isn't, should it be?)

3) can anything that must be built-in or frozen be part of the stdlib? I 
don't see why not, if it is common to all implementations, even if it 
depends on data it obtains from the implementation via some mechanism 
such as the proposed sys.implementation. However, if it is not common, I 
don't know how it can be standard/stdlib... which raises issues in my 
understanding of the various modules available as Python with C 
accelerators, and I know there are pure C modules that are part of the 
stdlib. So I think this idea of making the stdlib more sharable between 
implementations is still a work-in-progress, even a design-in-progress, 
but maybe part of the solution is to separate, or at least delineate, 
things that can be common, from things that cannot be common to all 
implementations.

My conclusion is that sys.implementation clearly should not be part of 
the stdlib, but rather be part of the language implementation.  Whether 
it then fits with the rest of what is in sys, or not, I am not qualified 
to say.  If not, perhaps a new module name is warranted... perhaps 
"implementation" at the top level of the namespace.

So my thoughts are:

Things that are part of the stdlib should be available in Python source 
to be shared across implementations.  Things that are not available in 
Python source cannot be shared across implementations, and therefore 
should not be part of the stdlib, but rather part of an 
implementation-specific library, or part of the language specification.

Or maybe stdlib should be an umbrella term, with the following subsets: 
common (Python implementation available, and not dependent on 
implementation-specific details except in very standardized ways), 
implementation-specific (provided by each implementation, either in 
implementation-specific Python, or some other implementation language), 
accelerators (a faster version of a common module, provided by an 
implementation when necessary for performance).

In this situation, sys.implementation, as proposed, should be 
implementation-specific.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20120427/d3b40c2b/attachment.html>

From rdmurray at bitdance.com  Fri Apr 27 20:49:55 2012
From: rdmurray at bitdance.com (R. David Murray)
Date: Fri, 27 Apr 2012 14:49:55 -0400
Subject: [Python-Dev] sys.implementation
In-Reply-To: <4F9ADA1B.6030300@g.nevcal.com>
References: <CALFfu7DYyZMUp40MDR9-vhpOkPvr=cwt5EmMHEGTrmix_kZbYg@mail.gmail.com>
	<CALFfu7B4GpCNgyxLUmtji1LLKLgFPPvR2Zd3e2V6UW4_0HoR7g@mail.gmail.com>
	<CALFfu7BRQcKurhxzkmO-NLDoGesvFuW4kozYBOv2ovEyQm5JVA@mail.gmail.com>
	<20120426103150.4898a678@limelight.wooz.org>
	<CALFfu7DbaC0Dhvzzif+kBT42J13JFfgb8tdyR58yh2=h_t-kZQ@mail.gmail.com>
	<4F9ADA1B.6030300@g.nevcal.com>
Message-ID: <20120427184956.4AA4D2500D8@webabinitio.net>

On Fri, 27 Apr 2012 10:40:43 -0700, Glenn Linderman <v+python at g.nevcal.com> wrote:
> On 4/27/2012 12:34 AM, Eric Snow wrote:
> > On Thu, Apr 26, 2012 at 8:31 AM, Barry Warsaw<barry at python.org>  wrote:
> >> It's somewhat of a corner case, but I think a PEP couldn't hurt.  The
> >> rationale section would be useful, at least.
> >    http://mail.python.org/pipermail/python-ideas/2012-April/014954.html
> 
> My conclusion is that sys.implementation clearly should not be part of 
> the stdlib, but rather be part of the language implementation.  Whether 
> it then fits with the rest of what is in sys, or not, I am not qualified 
> to say.  If not, perhaps a new module name is warranted... perhaps 
> "implementation" at the top level of the namespace.

IMO, there are two different things here that you are conflating(*): the
*implementation* of the stdlib, and the stdlib *API*.  sys.implementation
would be a part of the API that any conforming implementation of
python+stdlib would be required to implement.

We also have a goal of making as much of the *implementation* of the
stdlib usable by any python implementation as possible, but as you say
that is a work in progress.

There are, by the way, many things documented in the "library"
documentation that are in fact provided by the language implementation
itself.  All of the fundamental types, for example.

--David

(*) the Oracle lawyers sometimes seem to be trying to get
the judge and jury to make the same mistake.

From brett at python.org  Fri Apr 27 22:00:48 2012
From: brett at python.org (Brett Cannon)
Date: Fri, 27 Apr 2012 16:00:48 -0400
Subject: [Python-Dev] package imports, sys.path and os.chdir()
In-Reply-To: <4F9AAF98.7050303@stackless.com>
References: <4F99BE80.2090509@stackless.com>
	<CADiSq7fPU8bC6u3Wbt0gd4=NzQAYUtoW8uHMgrKG3v2vaTB76g@mail.gmail.com>
	<4F9AAF98.7050303@stackless.com>
Message-ID: <CAP1=2W7vVxgc1UQrKdc-5h9YxaKiPQobxhsg4opJbDxokA4z1g@mail.gmail.com>

On Fri, Apr 27, 2012 at 10:39, Christian Tismer <tismer at stackless.com>wrote:

> On 27.04.12 02:39, Nick Coghlan wrote:
>
>> On Fri, Apr 27, 2012 at 7:30 AM, Christian Tismer<tismer at stackless.com>
>>  wrote:
>>
>>> No big deal and easy to work around, I just would like to understand why.
>>>
>> I don't like it either and want to change it, but I'm also not going
>> to mess with it until the importlib bootstrapping is fully integrated
>> and stable.
>>
>> For the moment, there's a workaround in runpy to ensure at least
>> __main__.__file__ is always absolute (even when using the -m switch).
>> Longer term, I'd like to see __file__ and __path__ entries to be
>> guaranteed to be *always* absolutely, even when they're imported
>> relative to the current working directory.
>>
>>
> Is there a recommendable way to fix this? I would like to tell people
> what to do to make imports reliable. Either I put something into
> the toplevel __init__ code, or I hack something into .pth or sitecustomize,
> and then forget about this.
>
>
No, there isn't.


> But I fear hacking __init__ is the only safe way that works without
> a special python setup, which makes the whole reasoning rather
> useless, because I can _not_ forget about this.... waah ;-)
>

Yeah, to guarantee the semantics you are after you have to grab that ''
entry in sys.path as early as possible and substitute it with the cwd so
that its initial value propagates through the interpreter. Importlib is
already having to jump through some hoops to treat it as '.' and even that
doesn't get you what you want since that will change when the cwd is moved.

I'm personally in favour of changing the insertion of '' to sys.path to
inserting the cwd when the interpreter is launched.
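
Until then, a minimal sketch of doing the substitution yourself, e.g. from a
sitecustomize.py (a deployment choice, not something the interpreter does
today):

    import os
    import sys

    # Pin the '' entry to the directory the interpreter was launched from,
    # before any later os.chdir() can change what it means.
    sys.path[:] = [os.getcwd() if p == '' else p for p in sys.path]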
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20120427/3c557b51/attachment-0001.html>

From v+python at g.nevcal.com  Fri Apr 27 22:11:03 2012
From: v+python at g.nevcal.com (Glenn Linderman)
Date: Fri, 27 Apr 2012 13:11:03 -0700
Subject: [Python-Dev] sys.implementation
In-Reply-To: <20120427184956.4AA4D2500D8@webabinitio.net>
References: <CALFfu7DYyZMUp40MDR9-vhpOkPvr=cwt5EmMHEGTrmix_kZbYg@mail.gmail.com>
	<CALFfu7B4GpCNgyxLUmtji1LLKLgFPPvR2Zd3e2V6UW4_0HoR7g@mail.gmail.com>
	<CALFfu7BRQcKurhxzkmO-NLDoGesvFuW4kozYBOv2ovEyQm5JVA@mail.gmail.com>
	<20120426103150.4898a678@limelight.wooz.org>
	<CALFfu7DbaC0Dhvzzif+kBT42J13JFfgb8tdyR58yh2=h_t-kZQ@mail.gmail.com>
	<4F9ADA1B.6030300@g.nevcal.com>
	<20120427184956.4AA4D2500D8@webabinitio.net>
Message-ID: <4F9AFD57.5010002@g.nevcal.com>

On 4/27/2012 11:49 AM, R. David Murray wrote:
> On Fri, 27 Apr 2012 10:40:43 -0700, Glenn Linderman<v+python at g.nevcal.com>  wrote:
>> On 4/27/2012 12:34 AM, Eric Snow wrote:
>>> On Thu, Apr 26, 2012 at 8:31 AM, Barry Warsaw<barry at python.org>   wrote:
>>>> It's somewhat of a corner case, but I think a PEP couldn't hurt.  The
>>>> rationale section would be useful, at least.
>>>     http://mail.python.org/pipermail/python-ideas/2012-April/014954.html
>> My conclusion is that sys.implementation clearly should not be part of
>> the stdlib, but rather be part of the language implementation.  Whether
>> it then fits with the rest of what is in sys, or not, I am not qualified
>> to say.  If not, perhaps a new module name is warranted... perhaps
>> "implementation" at the top level of the namespace.
> IMO, there are two different things here that you are conflating(*): the
> *implementation* of the stdlib, and the stdlib *API*.  sys.implementation
> would be a part of the API that any conforming implementation of
> python+stdlib would be required to implement.

Hmm.  OK.

> We also have a goal of making as much of the *implementation* of the
> stdlib usable by any python implementation as possible, but as you say
> that is a work in progress.

OK.
> There are, by the way, many things documented in the "library"
> documentation that are in fact provided by the language implementation
> itself.  All of the fundamental types, for example.

I was aware of this last, but wasn't thinking about it during these 
musings... although the thoughts of documentation also crossed my mind, 
I didn't mention them, figuring it could come up later.

So "library" documentation already covers all three categories of stuff 
that I mentioned, plus one more (restated here for clarity, with better 
wording):

* language implementation
* implementation dependent modules
* implementation independent modules
* implementation dependent optimizations of implementation independent 
modules

From the perspective of a user of a single implementation of the 
language + library, it really doesn't matter how the documentation is 
organized, or whether the documentation notes which of the above 4 
categories an item falls in.

From the perspective of a user of multiple implementations, or from the 
perspective of a developer of an implementation other than CPython, 
knowledge of the category could be useful for both portability and 
performance planning. Organizing the documentation in some manner to be 
aware of such categories may help other implementations provide 
appropriate addenda.  The closer any of them get to tracking the Py3 
trunk in real time, the more so.

Here's a ponderable: In the long term, should the documentation be 
unified for multiple implementations?  Or should it be split into 4 
pieces, so that alternate implementations could swap in their own 
sections for implementation dependent items?

>
> --David
>
> (*) the Oracle lawyers sometimes seem to be trying to get
> the judge and jury to make the same mistake.
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/v%2Bpython%40g.nevcal.com
>
>

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20120427/458615ac/attachment.html>

From v+python at g.nevcal.com  Fri Apr 27 22:37:27 2012
From: v+python at g.nevcal.com (Glenn Linderman)
Date: Fri, 27 Apr 2012 13:37:27 -0700
Subject: [Python-Dev] package imports, sys.path and os.chdir()
In-Reply-To: <CAP1=2W7vVxgc1UQrKdc-5h9YxaKiPQobxhsg4opJbDxokA4z1g@mail.gmail.com>
References: <4F99BE80.2090509@stackless.com>
	<CADiSq7fPU8bC6u3Wbt0gd4=NzQAYUtoW8uHMgrKG3v2vaTB76g@mail.gmail.com>
	<4F9AAF98.7050303@stackless.com>
	<CAP1=2W7vVxgc1UQrKdc-5h9YxaKiPQobxhsg4opJbDxokA4z1g@mail.gmail.com>
Message-ID: <4F9B0387.30303@g.nevcal.com>

On 4/27/2012 1:00 PM, Brett Cannon wrote:
> I'm personally in favour of changing the insertion of '' to sys.path 
> to inserting the cwd when the interpreter is launched.
+1
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20120427/119e45b1/attachment.html>

From tismer at stackless.com  Fri Apr 27 23:21:25 2012
From: tismer at stackless.com (Christian Tismer)
Date: Fri, 27 Apr 2012 23:21:25 +0200
Subject: [Python-Dev] package imports, sys.path and os.chdir()
In-Reply-To: <CAP1=2W7vVxgc1UQrKdc-5h9YxaKiPQobxhsg4opJbDxokA4z1g@mail.gmail.com>
References: <4F99BE80.2090509@stackless.com>
	<CADiSq7fPU8bC6u3Wbt0gd4=NzQAYUtoW8uHMgrKG3v2vaTB76g@mail.gmail.com>
	<4F9AAF98.7050303@stackless.com>
	<CAP1=2W7vVxgc1UQrKdc-5h9YxaKiPQobxhsg4opJbDxokA4z1g@mail.gmail.com>
Message-ID: <4F9B0DD5.3010703@stackless.com>

On 27.04.12 22:00, Brett Cannon wrote:
>
>
> On Fri, Apr 27, 2012 at 10:39, Christian Tismer <tismer at stackless.com 
> <mailto:tismer at stackless.com>> wrote:
>
>     On 27.04.12 02:39, Nick Coghlan wrote:
>
>         On Fri, Apr 27, 2012 at 7:30 AM, Christian
>         Tismer<tismer at stackless.com <mailto:tismer at stackless.com>>  wrote:
>
>             No big deal and easy to work around, I just would like to
>             understand why.
>
>         I don't like it either and want to change it, but I'm also not
>         going
>         to mess with it until the importlib bootstrapping is fully
>         integrated
>         and stable.
>
>         For the moment, there's a workaround in runpy to ensure at least
>         __main__.__file__ is always absolute (even when using the -m
>         switch).
>         Longer term, I'd like to see __file__ and __path__ entries to be
>         guaranteed to *always* be absolute, even when they're imported
>         relative to the current working directory.
>
>
>     Is there a recommendable way to fix this? I would like to tell people
>     what to do to make imports reliable. Either I put something into
>     the toplevel __init__ code, or I hack something into .pth or
>     sitecustomize,
>     and then forget about this.
>
>
> No, there isn't.
>
>     But I fear hacking __init__ is the only safe way that works without
>     a special python setup, which makes the whole reasoning rather
>     useless, because I can _not_ forget about this.... waah ;-)
>
>
> Yeah, to guarantee the semantics you are after you have to grab that 
> '' entry in sys.path as early as possible and substitute it with the 
> cwd so that its initial value propagates through the interpreter. 
> Importlib is already having to jump through some hoops to treat it as 
> '.' and even that doesn't get you what you want since that will change 
> when the cwd is moved.
>
> I'm personally in favour of changing the insertion of '' to sys.path 
> to inserting the cwd when the interpreter is launched.

Thanks Brett, that sounds pretty reasonable. '' always was too implicit 
for me.

cheers - chris

-- 
Christian Tismer             :^)<mailto:tismer at stackless.com>
tismerysoft GmbH             :     Have a break! Take a ride on Python's
Karl-Liebknecht-Str. 121     :    *Starship* http://starship.python.net/
14482 Potsdam                :     PGP key ->  http://pgp.uni-mainz.de
work +49 173 24 18 776  mobile +49 173 24 18 776  fax n.a.
PGP 0x57F3BF04       9064 F4E1 D754 C2FF 1619  305B C09C 5A3B 57F3 BF04
       whom do you want to sponsor today?   http://www.stackless.com/


From guido at python.org  Sat Apr 28 00:38:17 2012
From: guido at python.org (Guido van Rossum)
Date: Fri, 27 Apr 2012 15:38:17 -0700
Subject: [Python-Dev] [RFC] PEP 418: Add monotonic time,
 performance counter and process time functions
In-Reply-To: <CAMpsgwa=2fJY8nSnLzeX8VgRFpFbWjJAX5LUK0bzd3amGOQ+AA@mail.gmail.com>
References: <CAMpsgwZxZiNcqTznROd6MiPnkhmy1XtKzVPCozWp+sSiZ8dUTg@mail.gmail.com>
	<CAMpsgwa=2fJY8nSnLzeX8VgRFpFbWjJAX5LUK0bzd3amGOQ+AA@mail.gmail.com>
Message-ID: <CAP7+vJ+fQsH48MO88+_g_=UdwUBjhP8Z=yCkg+i2CVSdg5Kdsw@mail.gmail.com>

Hi Victor,

I read most of the PEP and I think it is ready for acceptance! Thanks
for your patience in shepherding this through such a difficult and
long discussion. Also thanks to the many other contributors,
especially those who ended up as co-authors. We will have an awesome
new set of time APIs! Now let the implementation roll...

--Guido

On Mon, Apr 23, 2012 at 4:30 PM, Victor Stinner
<victor.stinner at gmail.com> wrote:
>> Here is a simplified version of the first draft of the PEP 418. The
>> full version can be read online.
>> http://www.python.org/dev/peps/pep-0418/
>
> Thanks to everyone who helped me to work on this PEP!
>
> I integrated the last comments. There are no more open questions. (Or did I
> miss something?)
>
> I didn't know that it would be so hard to add such a simple function
> as time.monotonic()!?
>
> Victor



-- 
--Guido van Rossum (python.org/~guido)

From carl at meyerloewen.net  Sat Apr 28 00:07:47 2012
From: carl at meyerloewen.net (Carl Meyer)
Date: Fri, 27 Apr 2012 16:07:47 -0600
Subject: [Python-Dev] Changes in html.parser may cause breakage in
 client code
In-Reply-To: <CAP7+vJ+U8oduPUKRFBCNdUBa0cJkN9JU7Z0vGq=72geUuD3+RA@mail.gmail.com>
References: <loom.20120426T205136-611@post.gmane.org>
	<4F9A2D41.1050409@gmail.com>
	<CAP7+vJ+U8oduPUKRFBCNdUBa0cJkN9JU7Z0vGq=72geUuD3+RA@mail.gmail.com>
Message-ID: <4F9B18B3.2070909@meyerloewen.net>

On 04/27/2012 08:36 AM, Guido van Rossum wrote:
> Someone should contact the Django folks. Alex Gaynor?

I committed the relevant code to Django (though I didn't write the 
patch), and I've been following this thread. I have it on my todo list 
to review this code again with Ezio's suggestions in mind. So you can 
consider "the Django folks" contacted.

Carl

From guido at python.org  Sat Apr 28 01:29:37 2012
From: guido at python.org (Guido van Rossum)
Date: Fri, 27 Apr 2012 16:29:37 -0700
Subject: [Python-Dev] Changes in html.parser may cause breakage in
	client code
In-Reply-To: <4F9B18B3.2070909@meyerloewen.net>
References: <loom.20120426T205136-611@post.gmane.org>
	<4F9A2D41.1050409@gmail.com>
	<CAP7+vJ+U8oduPUKRFBCNdUBa0cJkN9JU7Z0vGq=72geUuD3+RA@mail.gmail.com>
	<4F9B18B3.2070909@meyerloewen.net>
Message-ID: <CAP7+vJLtWJRWyO8-ii5C2axzL0snDN2bAhLKzCtu_ho5faPSOg@mail.gmail.com>

Awesome!

On Fri, Apr 27, 2012 at 3:07 PM, Carl Meyer <carl at meyerloewen.net> wrote:
> On 04/27/2012 08:36 AM, Guido van Rossum wrote:
>>
>> Someone should contact the Django folks. Alex Gaynor?
>
>
> I committed the relevant code to Django (though I didn't write the patch),
> and I've been following this thread. I have it on my todo list to review
> this code again with Ezio's suggestions in mind. So you can consider "the
> Django folks" contacted.
>
> Carl
>



-- 
--Guido van Rossum (python.org/~guido)

From steve at pearwood.info  Sat Apr 28 02:50:37 2012
From: steve at pearwood.info (Steven D'Aprano)
Date: Sat, 28 Apr 2012 10:50:37 +1000
Subject: [Python-Dev] [RFC] PEP 418: Add monotonic time,
 performance counter and process time functions
In-Reply-To: <CAP7+vJ+fQsH48MO88+_g_=UdwUBjhP8Z=yCkg+i2CVSdg5Kdsw@mail.gmail.com>
References: <CAMpsgwZxZiNcqTznROd6MiPnkhmy1XtKzVPCozWp+sSiZ8dUTg@mail.gmail.com>	<CAMpsgwa=2fJY8nSnLzeX8VgRFpFbWjJAX5LUK0bzd3amGOQ+AA@mail.gmail.com>
	<CAP7+vJ+fQsH48MO88+_g_=UdwUBjhP8Z=yCkg+i2CVSdg5Kdsw@mail.gmail.com>
Message-ID: <4F9B3EDD.1060002@pearwood.info>

Some issues with the PEP 418:


1) time.clock is deprecated, but also supported by get_clock_info. Why bother 
supporting it if you don't want people to use it?


2) get_clock_info returns a dict. Why not a namedtuple?


3) The dict returned by get_clock_info includes an optional key, 
"is_adjusted". Why is it optional?


4) The section on mach_absolute_time states:

    According to the documentation (Technical Q&A QA1398), mach_timebase_info()
    is always equal to one and never fails, even if the function may fail
    according to its prototype.

I've read the linked technical note and I can't see anything about it always 
being equal to one. I don't think your description is accurate.


5) In the glossary, you mark some terms in angle brackets <> but there is no 
definition for them:

   <nanosecond>
   <strictly monotonic>
   <clock monotonic> (which I think should be <monotonic clock> instead)


6) A stylistic suggestion: the glossary entries for Accuracy and Precision 
should each say "Contrast <the other>" and link to the Wikipedia article.


7) There is a mismatch in tenses between "Adjusted" and "Resetting" in the 
glossary. Suggest something like this instead:

     Adjusted: Reset to the correct time. This may be done either
     with a <Step> or by <Slewing>.


8) The glossary defines steady as high stability and "relatively high accuracy
    and precision". But surely that is not correct -- a clock that ticks every
    once per second (low precision) is still steady.


9) The perf_counter pseudocode seems a bit unusual (unPythonic?) to me. Rather 
than checking flags at call-time, could you not use different function 
definitions at compile time?


10) The "Alternatives" section should list arguments made for and against the 
alternative APIs, not just list them.


Thanks for your excellent work Victor!



-- 
Steven

From victor.stinner at gmail.com  Sat Apr 28 03:23:28 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Sat, 28 Apr 2012 03:23:28 +0200
Subject: [Python-Dev] [RFC] PEP 418: Add monotonic time,
 performance counter and process time functions
In-Reply-To: <4F9B3EDD.1060002@pearwood.info>
References: <CAMpsgwZxZiNcqTznROd6MiPnkhmy1XtKzVPCozWp+sSiZ8dUTg@mail.gmail.com>
	<CAMpsgwa=2fJY8nSnLzeX8VgRFpFbWjJAX5LUK0bzd3amGOQ+AA@mail.gmail.com>
	<CAP7+vJ+fQsH48MO88+_g_=UdwUBjhP8Z=yCkg+i2CVSdg5Kdsw@mail.gmail.com>
	<4F9B3EDD.1060002@pearwood.info>
Message-ID: <CAMpsgwbYJBkoS9+t8ZnSOTzOb-8es-vN99VWyr3iPzZTK_ZCTg@mail.gmail.com>

> 1) time.clock is deprecated, but also supported by get_clock_info. Why
> bother supporting it if you don't want people to use it?

It will not be removed before Python 4; the function is still used by
code written for Python < 3.3.

> 2) get_clock_info returns a dict. Why not a namedtuple?

I don't like the tuple API. I prefer a dict over a (named)tuple
because there is an optional key, and we might add other optional keys
later.

> 3) The dict returned by get_clock_info includes an optional key,
> "is_adjusted". Why is it optional?

The value is not known for some clocks on some platforms. I don't know
if process/thread time can be set, for example. Sometimes the value is
hardcoded, sometimes the flag comes from the OS (e.g. on Windows).

> 4) The section on mach_absolute_time states:
>
>    According to the documentation (Technical Q&A QA1398), mach_timebase_info()
>    is always equal to one and never fails, even if the function may fail
>    according to its prototype.
>
> I've read the linked technical note and I can't see anything about it always
> being equal to one. I don't think your description is accurate.

I don't remember where it comes from. I removed the sentence.

> 9) The perf_counter pseudocode seems a bit unusual (unPythonic?) to me.
> Rather than checking flags at call-time, could you not use different
> function definitions at compile time?

It's just pseudo-code; I prefer to avoid duplicating code. The
pseudo-code is based on the C implementation, which uses #ifdef.

Victor

From guido at python.org  Sat Apr 28 05:40:28 2012
From: guido at python.org (Guido van Rossum)
Date: Fri, 27 Apr 2012 20:40:28 -0700
Subject: [Python-Dev] [RFC] PEP 418: Add monotonic time,
 performance counter and process time functions
In-Reply-To: <4F9B3EDD.1060002@pearwood.info>
References: <CAMpsgwZxZiNcqTznROd6MiPnkhmy1XtKzVPCozWp+sSiZ8dUTg@mail.gmail.com>
	<CAMpsgwa=2fJY8nSnLzeX8VgRFpFbWjJAX5LUK0bzd3amGOQ+AA@mail.gmail.com>
	<CAP7+vJ+fQsH48MO88+_g_=UdwUBjhP8Z=yCkg+i2CVSdg5Kdsw@mail.gmail.com>
	<4F9B3EDD.1060002@pearwood.info>
Message-ID: <CAP7+vJLUNpQEFth6WKJwJa-WUorqUeBED-J85mLw+m2bognefQ@mail.gmail.com>

On Fri, Apr 27, 2012 at 5:50 PM, Steven D'Aprano <steve at pearwood.info> wrote:
> Some issues with the PEP 418:
>
>
> 1) time.clock is deprecated, but also supported by get_clock_info. Why
> bother supporting it if you don't want people to use it?

I see the deprecation of clock() as mostly symbolic -- it's used way
too much to do anything about it (as the PEP acknowledges). So I think
it's reasonable we should return info about it.

> 2) get_clock_info returns a dict. Why not a namedtuple?

Future flexibility. And there's no need for it to be a *tuple*.

> 3) The dict returned by get_clock_info includes an optional key,
> "is_adjusted". Why is it optional?

I wondered that myself, but I suspect it means "we don't know".

> 4) The section on mach_absolute_time states:
>
>    According to the documentation (Technical Q&A QA1398), mach_timebase_info()
>    is always equal to one and never fails, even if the function may fail
>    according to its prototype.
>
> I've read the linked technical note and I can't see anything about it always
> being equal to one. I don't think your description is accurate.

Ok, you & Victor will have to figure that one out.

> 5) In the glossary, you mark some terms in angle brackets <> but there is no
> definition for them:
>
>   <nanosecond>
>   <strictly monotonic>
>   <clock monotonic> (which I think should be <monotonic clock> instead)
>
>
> 6) A stylistic suggestion: the glossary entries for Accuracy and Precision
> should each say "Contrast <the other>" and link to the Wikipedia article.
>
>
> 7) There is a mismatch in tenses between "Adjusted" and "Resetting" in the
> glossary. Suggest something like this instead:
>
>     Adjusted: Reset to the correct time. This may be done either
>     with a <Step> or by <Slewing>.
>
>
> 8) The glossary defines steady as high stability and "relatively high accuracy
>    and precision". But surely that is not correct -- a clock that ticks only
>    once per second (low precision) is still steady.
>
>
> 9) The perf_counter pseudocode seems a bit unusual (unPythonic?) to me.
> Rather than checking flags at call-time, could you not use different
> function definitions at compile time?
>
>
> 10) The "Alternatives" section should list arguments made for and against
> the alternative APIs, not just list them.
>
>
> Thanks for your excellent work Victor!

Surely those are all very minor quibbles. I have one myself: at some
point it says:

    On Linux, it is possible to use time.clock_gettime(CLOCK_THREAD_CPUTIME_ID).

But the PEP doesn't define a function by that name. Is it an editing
glitch? (Some of the pseudo code also uses this.)

-- 
--Guido van Rossum (python.org/~guido)

From victor.stinner at gmail.com  Sat Apr 28 09:40:30 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Sat, 28 Apr 2012 09:40:30 +0200
Subject: [Python-Dev] [RFC] PEP 418: Add monotonic time,
 performance counter and process time functions
In-Reply-To: <CAP7+vJLUNpQEFth6WKJwJa-WUorqUeBED-J85mLw+m2bognefQ@mail.gmail.com>
References: <CAMpsgwZxZiNcqTznROd6MiPnkhmy1XtKzVPCozWp+sSiZ8dUTg@mail.gmail.com>
	<CAMpsgwa=2fJY8nSnLzeX8VgRFpFbWjJAX5LUK0bzd3amGOQ+AA@mail.gmail.com>
	<CAP7+vJ+fQsH48MO88+_g_=UdwUBjhP8Z=yCkg+i2CVSdg5Kdsw@mail.gmail.com>
	<4F9B3EDD.1060002@pearwood.info>
	<CAP7+vJLUNpQEFth6WKJwJa-WUorqUeBED-J85mLw+m2bognefQ@mail.gmail.com>
Message-ID: <CAMpsgwYODjairfQ-1juOt2GaVVzzQffLXbK_GVtSX9ZXC--Tdg@mail.gmail.com>

> Surely those are all very minor quibbles. I have one myself: at some
> point it says:
>
>     On Linux, it is possible to use time.clock_gettime(CLOCK_THREAD_CPUTIME_ID).
>
> But the PEP doesn't define a function by that name. Is it an editing
> glitch? (Some of the pseudo code also uses this.)

It is this function:
http://docs.python.org/dev/library/time.html#time.clock_gettime

It's just a binding of the C function clock_gettime(). Should the PEP
describe all functions used by the PEP?

Victor

From ncoghlan at gmail.com  Sat Apr 28 09:58:37 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sat, 28 Apr 2012 17:58:37 +1000
Subject: [Python-Dev] sys.implementation
In-Reply-To: <4F9AFD57.5010002@g.nevcal.com>
References: <CALFfu7DYyZMUp40MDR9-vhpOkPvr=cwt5EmMHEGTrmix_kZbYg@mail.gmail.com>
	<CALFfu7B4GpCNgyxLUmtji1LLKLgFPPvR2Zd3e2V6UW4_0HoR7g@mail.gmail.com>
	<CALFfu7BRQcKurhxzkmO-NLDoGesvFuW4kozYBOv2ovEyQm5JVA@mail.gmail.com>
	<20120426103150.4898a678@limelight.wooz.org>
	<CALFfu7DbaC0Dhvzzif+kBT42J13JFfgb8tdyR58yh2=h_t-kZQ@mail.gmail.com>
	<4F9ADA1B.6030300@g.nevcal.com>
	<20120427184956.4AA4D2500D8@webabinitio.net>
	<4F9AFD57.5010002@g.nevcal.com>
Message-ID: <CADiSq7ensh8id4oVrdjWZD_7Vfak=zJcnKBz4qD+R6rJatjNrg@mail.gmail.com>

On Sat, Apr 28, 2012 at 6:11 AM, Glenn Linderman <v+python at g.nevcal.com> wrote:
> Here's a ponderable: In the long term, should the documentation be unified
> for multiple implementations?  Or should it be split into 4 pieces, so that
> alternate implementations could swap in their own sections for
> implementation dependent items?

Probably not, because the boundary between language, standard library
and implementation *is* blurry. The blurriness in the descriptions
reflects the blurriness in reality.

Anything that doesn't have dedicated syntax is, in a formal sense,
part of the standard library rather than the core language definition.
That includes the GC API, the weakref API, the sys module, the
operator module, the builtins module, the types module, etc. The
language specification itself just states that there *is* a builtin
namespace and you *can* do imports. It is then up to the standard
library specification to describe the *contents* of the builtin
namespace, as well as state what other modules and packages can be
imported by default.

However, the various parts of the standard library can differ wildly
in how *closely coupled* they are to a particular implementation.

Some, such as builtins, gc, operator, weakref, types and sys are
*very* tightly coupled with a specific implementation and always will
be. If someone is writing a new implementation of Python, they're
almost certainly going to have to write new versions of these modules
from scratch that interoperate correctly with their code generator and
evaluation loop.

Historically, the import machinery was similarly coupled to a specific
implementation. The goal of bootstrapping importlib as the main import
implementation is to change that so that most of the import machinery
is decoupled from the implementation, with the bare minimum remaining
as implementation specific code (specifically, the code needed to
carry out the bootstrapping process, such as supporting frozen and
builtin modules).

Other modules may differ in performance characteristics between
implementations, particularly for older C modules in CPython which don't
have a pure Python counterpart.

So, yes, I agree the four categories you list are helpful in
*thinking* about the questions involved, but, no, I don't think it's a
good principle for organising the documentation (precisely because the
categories are related to implementation details that shouldn't matter
all that much to end users).

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From ncoghlan at gmail.com  Sat Apr 28 10:08:08 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sat, 28 Apr 2012 18:08:08 +1000
Subject: [Python-Dev] package imports, sys.path and os.chdir()
In-Reply-To: <CAP1=2W7vVxgc1UQrKdc-5h9YxaKiPQobxhsg4opJbDxokA4z1g@mail.gmail.com>
References: <4F99BE80.2090509@stackless.com>
	<CADiSq7fPU8bC6u3Wbt0gd4=NzQAYUtoW8uHMgrKG3v2vaTB76g@mail.gmail.com>
	<4F9AAF98.7050303@stackless.com>
	<CAP1=2W7vVxgc1UQrKdc-5h9YxaKiPQobxhsg4opJbDxokA4z1g@mail.gmail.com>
Message-ID: <CADiSq7d=1-wWZTY8BgJ8y3Z39WRJ+K+RQXYi188pbjCBH5p=rg@mail.gmail.com>

On Sat, Apr 28, 2012 at 6:00 AM, Brett Cannon <brett at python.org> wrote:
> I'm personally in favour of changing the insertion of '' to sys.path to
> inserting the cwd when the interpreter is launched.

I'm not, because it breaks importing from the interactive prompt if
you change directory after starting the session.

The existing workaround for applications is pretty trivial:

  # Somewhere in your initialisation code
  for i, entry in enumerate(sys.path):
      sys.path[i] = os.path.abspath(entry)

The fix for the import system is similarly trivial: call
os.path.abspath when calculating __file__ (just as runpy now does and
the import emulation in pkgutil always has).

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From victor.stinner at gmail.com  Sat Apr 28 10:48:35 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Sat, 28 Apr 2012 10:48:35 +0200
Subject: [Python-Dev] [RFC] PEP 418: Add monotonic time,
 performance counter and process time functions
In-Reply-To: <4F9B3EDD.1060002@pearwood.info>
References: <CAMpsgwZxZiNcqTznROd6MiPnkhmy1XtKzVPCozWp+sSiZ8dUTg@mail.gmail.com>
	<CAMpsgwa=2fJY8nSnLzeX8VgRFpFbWjJAX5LUK0bzd3amGOQ+AA@mail.gmail.com>
	<CAP7+vJ+fQsH48MO88+_g_=UdwUBjhP8Z=yCkg+i2CVSdg5Kdsw@mail.gmail.com>
	<4F9B3EDD.1060002@pearwood.info>
Message-ID: <CAMpsgwYwHMXGx93V1zycjetUOnhbmVuzvDZqR8e86Mi8-YeMhw@mail.gmail.com>

> 3) The dict returned by get_clock_info includes an optional key,
> "is_adjusted". Why is it optional?

More complete answer.

Rules used to fill the is_adjusted flag:

 - System clock: is_adjusted=1 because the clock can be set manually
by the system administrator, except on Windows: is_adjusted is 0 if
GetSystemTimeAdjustment() returns isTimeAdjustmentDisabled=1
 - Process time: is_adjusted=0 because I don't know an OS where the
process time can be modified
 - Monotonic clocks: is_adjusted=0 on Windows, Mac OS X and Solaris,
is_adjusted=1 on Linux; it is not set otherwise. We may also set
is_adjusted to 0 on other OSes and so the key would not be optional
anymore.

Said differently, is_adjusted is only 1 for system clocks and for the
monotonic clock on Linux.
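
Restated as a simple lookup table (purely illustrative; this is not code
from the PEP or its implementation):

    # Summary of the is_adjusted rules described above; None means the
    # value is unknown / not set for that platform.
    IS_ADJUSTED = {
        "system clock": True,        # settable by the administrator
                                     # (Windows may report 0, see above)
        "process time": False,       # no known OS lets you set it
        "monotonic (Windows, Mac OS X, Solaris)": False,
        "monotonic (Linux)": True,
        "monotonic (other platforms)": None,
    }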

Victor

From sandro.tosi at gmail.com  Sat Apr 28 11:18:50 2012
From: sandro.tosi at gmail.com (Sandro Tosi)
Date: Sat, 28 Apr 2012 11:18:50 +0200
Subject: [Python-Dev] cpython (2.7): Issue #14448: mention pytz;
 patch by Andrew Svetlov
In-Reply-To: <CADiSq7cgLmJWUU4jqVOouKVLNBDV3ejJHN0mju2muxBOau-hWg@mail.gmail.com>
References: <E1SMjqT-0001o8-TV@dinsdale.python.org>
	<jn89i4$qm3$1@dough.gmane.org>
	<CAB4XWXzFLp3KU--gAUSixuHgQH6YKxqDMob3M3jrO5M89mq+Sg@mail.gmail.com>
	<CADiSq7dfZPCNFGqiVnV+KCXNKjmMOmnws1b-XkcX1e8UkxG4UA@mail.gmail.com>
	<CAB4XWXyFauWQsh-=_OazjMzhgZssSr5BfuNYEBtHo96pjZUg9A@mail.gmail.com>
	<jn9ged$ljm$1@dough.gmane.org>
	<CADiSq7cgLmJWUU4jqVOouKVLNBDV3ejJHN0mju2muxBOau-hWg@mail.gmail.com>
Message-ID: <CAB4XWXwa-0x6bGYo-cG6RxdQMKwnf7m5Ob1xMDcdGVnrqvbzgQ@mail.gmail.com>

On Wed, Apr 25, 2012 at 20:40, Georg Brandl <g.brandl at gmx.net> wrote:
> BTW, the single backticks don't do anything usable; use *pytz* to make something
> emphasized.

yep, done.

On Thu, Apr 26, 2012 at 03:06, Nick Coghlan <ncoghlan at gmail.com> wrote:
> On Thu, Apr 26, 2012 at 4:40 AM, Georg Brandl <g.brandl at gmx.net> wrote:
>> Maybe it's useful to mention that that database is the one used on Linux (is
>> it on other Unices?) and Windows has its own?
>
> pytz always uses the Olson/IANA database. I don't think we need to
> confuse matters further by mentioning the fact that Microsoft invented
> their own system without worrying about what anyone else was doing.

I agree with that, so I'm about to commit a very similar diff to the
one posted here.

Thanks for your suggestions!

-- 
Sandro Tosi (aka morph, morpheus, matrixhasu)
My website: http://matrixhasu.altervista.org/
Me at Debian: http://wiki.debian.org/SandroTosi

From benjamin at python.org  Sat Apr 28 15:35:41 2012
From: benjamin at python.org (Benjamin Peterson)
Date: Sat, 28 Apr 2012 09:35:41 -0400
Subject: [Python-Dev] package imports, sys.path and os.chdir()
In-Reply-To: <CADiSq7d=1-wWZTY8BgJ8y3Z39WRJ+K+RQXYi188pbjCBH5p=rg@mail.gmail.com>
References: <4F99BE80.2090509@stackless.com>
	<CADiSq7fPU8bC6u3Wbt0gd4=NzQAYUtoW8uHMgrKG3v2vaTB76g@mail.gmail.com>
	<4F9AAF98.7050303@stackless.com>
	<CAP1=2W7vVxgc1UQrKdc-5h9YxaKiPQobxhsg4opJbDxokA4z1g@mail.gmail.com>
	<CADiSq7d=1-wWZTY8BgJ8y3Z39WRJ+K+RQXYi188pbjCBH5p=rg@mail.gmail.com>
Message-ID: <CAPZV6o_FeneBy0f-QW3YCZNRh3oVP6uewwQJ=RzgkrT8S+=e+Q@mail.gmail.com>

2012/4/28 Nick Coghlan <ncoghlan at gmail.com>:
> On Sat, Apr 28, 2012 at 6:00 AM, Brett Cannon <brett at python.org> wrote:
>> I'm personally in favour of changing the insertion of '' to sys.path to
>> inserting the cwd when the interpreter is launched.
>
> I'm not, because it breaks importing from the interactive prompt if
> you change directory after starting the session.
>
> The existing workaround for applications is pretty trivial:
>
>  # Somewhere in your initialisation code
>  for i, entry in enumerate(sys.path):
>      sys.path[i] = os.path.abspath(entry)
>
> The fix for the import system is similarly trivial: call
> os.path.abspath when calculating __file__ (just as runpy now does and
> the import emulation in pkgutil always has).

I thought __file__ was required to be absolute in Python 3.



-- 
Regards,
Benjamin

From guido at python.org  Sat Apr 28 16:02:13 2012
From: guido at python.org (Guido van Rossum)
Date: Sat, 28 Apr 2012 07:02:13 -0700
Subject: [Python-Dev] [RFC] PEP 418: Add monotonic time,
 performance counter and process time functions
In-Reply-To: <CAMpsgwYODjairfQ-1juOt2GaVVzzQffLXbK_GVtSX9ZXC--Tdg@mail.gmail.com>
References: <CAMpsgwZxZiNcqTznROd6MiPnkhmy1XtKzVPCozWp+sSiZ8dUTg@mail.gmail.com>
	<CAMpsgwa=2fJY8nSnLzeX8VgRFpFbWjJAX5LUK0bzd3amGOQ+AA@mail.gmail.com>
	<CAP7+vJ+fQsH48MO88+_g_=UdwUBjhP8Z=yCkg+i2CVSdg5Kdsw@mail.gmail.com>
	<4F9B3EDD.1060002@pearwood.info>
	<CAP7+vJLUNpQEFth6WKJwJa-WUorqUeBED-J85mLw+m2bognefQ@mail.gmail.com>
	<CAMpsgwYODjairfQ-1juOt2GaVVzzQffLXbK_GVtSX9ZXC--Tdg@mail.gmail.com>
Message-ID: <CAP7+vJLoPXayZi37rJDeZjXjf6OEm_TL__3uqL9z-wjxPVe0rA@mail.gmail.com>

On Sat, Apr 28, 2012 at 12:40 AM, Victor Stinner
<victor.stinner at gmail.com> wrote:
>> Surely those are all very minor quibbles. I have one myself: at some
>> point it says:
>>
>>     On Linux, it is possible to use time.clock_gettime(CLOCK_THREAD_CPUTIME_ID).
>>
>> But the PEP doesn't define a function by that name. Is it an editing
>> glitch? (Some of the pseudo code also uses this.)
>
> It is this function:
> http://docs.python.org/dev/library/time.html#time.clock_gettime
>
> It's just a binding of the C function clock_gettime(). Should the PEP
> describe all functions used by the PEP?

Oh, now I'm confused. So in 3.3 we're adding a bunch of other new
functions to the time module that aren't described by the PEP? Aren't
those functions redundant? Or did I miss some part of the conversation
where this was discussed? What's *their* history?

-- 
--Guido van Rossum (python.org/~guido)

From solipsis at pitrou.net  Sat Apr 28 16:51:01 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sat, 28 Apr 2012 16:51:01 +0200
Subject: [Python-Dev] [RFC] PEP 418: Add monotonic time,
 performance counter and process time functions
References: <CAMpsgwZxZiNcqTznROd6MiPnkhmy1XtKzVPCozWp+sSiZ8dUTg@mail.gmail.com>
	<CAMpsgwa=2fJY8nSnLzeX8VgRFpFbWjJAX5LUK0bzd3amGOQ+AA@mail.gmail.com>
	<CAP7+vJ+fQsH48MO88+_g_=UdwUBjhP8Z=yCkg+i2CVSdg5Kdsw@mail.gmail.com>
	<4F9B3EDD.1060002@pearwood.info>
	<CAP7+vJLUNpQEFth6WKJwJa-WUorqUeBED-J85mLw+m2bognefQ@mail.gmail.com>
	<CAMpsgwYODjairfQ-1juOt2GaVVzzQffLXbK_GVtSX9ZXC--Tdg@mail.gmail.com>
	<CAP7+vJLoPXayZi37rJDeZjXjf6OEm_TL__3uqL9z-wjxPVe0rA@mail.gmail.com>
Message-ID: <20120428165101.2c27d044@pitrou.net>

On Sat, 28 Apr 2012 07:02:13 -0700
Guido van Rossum <guido at python.org> wrote:
> >
> > It is this function:
> > http://docs.python.org/dev/library/time.html#time.clock_gettime
> >
> > It's just a binding of the C function clock_gettime(). Should the PEP
> > describe all functions used by the PEP?
> 
> Oh, now I'm confused. So in 3.3 we're adding a bunch of other new
> functions to the time module that aren't described by the PEP? Aren't
> those functions redundant? Or did I miss some part of the conversation
> where this was discussed? What's *their* history?

time.clock_gettime() (and the related constants
CLOCK_{REALTIME,MONOTONIC, etc.}) is a thin wrapper around the
corresponding POSIX function, it's there for people who want low-level
control over their choice of APIs:
http://pubs.opengroup.org/onlinepubs/9699919799/functions/clock_gettime.html

As a thin wrapper, adding it to the time module was pretty much
uncontroversial, I think. The PEP proposes cross-platform
functions with consistent semantics, which is where a discussion was
needed.
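
For illustration, a minimal sketch of the low-level usage described above,
assuming a POSIX build where the new time.clock_gettime() binding and the
CLOCK_MONOTONIC constant are both available (just an example, not part of
the PEP):

    import time

    # Thin wrapper around the POSIX call; only defined where the OS provides it.
    if hasattr(time, "clock_gettime"):
        t0 = time.clock_gettime(time.CLOCK_MONOTONIC)
        # ... do some work ...
        t1 = time.clock_gettime(time.CLOCK_MONOTONIC)
        print("elapsed:", t1 - t0)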

Regards

Antoine.



From guido at python.org  Sat Apr 28 17:57:05 2012
From: guido at python.org (Guido van Rossum)
Date: Sat, 28 Apr 2012 08:57:05 -0700
Subject: [Python-Dev] [RFC] PEP 418: Add monotonic time,
 performance counter and process time functions
In-Reply-To: <20120428165101.2c27d044@pitrou.net>
References: <CAMpsgwZxZiNcqTznROd6MiPnkhmy1XtKzVPCozWp+sSiZ8dUTg@mail.gmail.com>
	<CAMpsgwa=2fJY8nSnLzeX8VgRFpFbWjJAX5LUK0bzd3amGOQ+AA@mail.gmail.com>
	<CAP7+vJ+fQsH48MO88+_g_=UdwUBjhP8Z=yCkg+i2CVSdg5Kdsw@mail.gmail.com>
	<4F9B3EDD.1060002@pearwood.info>
	<CAP7+vJLUNpQEFth6WKJwJa-WUorqUeBED-J85mLw+m2bognefQ@mail.gmail.com>
	<CAMpsgwYODjairfQ-1juOt2GaVVzzQffLXbK_GVtSX9ZXC--Tdg@mail.gmail.com>
	<CAP7+vJLoPXayZi37rJDeZjXjf6OEm_TL__3uqL9z-wjxPVe0rA@mail.gmail.com>
	<20120428165101.2c27d044@pitrou.net>
Message-ID: <CAP7+vJKh-3CpyPCf8GrUYNMqdGkoSrczaZF80u3Szc8ZQi7==A@mail.gmail.com>

On Sat, Apr 28, 2012 at 7:51 AM, Antoine Pitrou <solipsis at pitrou.net> wrote:
> On Sat, 28 Apr 2012 07:02:13 -0700
> Guido van Rossum <guido at python.org> wrote:
>> >
>> > It is this function:
>> > http://docs.python.org/dev/library/time.html#time.clock_gettime
>> >
>> > It's just a binding of the C function clock_gettime(). Should the PEP
>> > describe all functions used by the PEP?
>>
>> Oh, now I'm confused. So in 3.3 we're adding a bunch of other new
>> functions to the time module that aren't described by the PEP? Aren't
>> those functions redundant? Or did I miss some part of the conversation
>> where this was discussed? What's *their* history?
>
> time.clock_gettime() (and the related constants
> CLOCK_{REALTIME,MONOTONIC, etc.}) is a thin wrapper around the
> corresponding POSIX function, it's there for people who want low-level
> control over their choice of APIs:
> http://pubs.opengroup.org/onlinepubs/9699919799/functions/clock_gettime.html
>
> As a thin wrapper, adding it to the time module was pretty much
> uncontroversial, I think. The PEP proposes cross-platform
> functions with consistent semantics, which is where a discussion was
> needed.

True, but does this mean clock_gettime and friends only exist on
POSIX? Shouldn't they be in the os or posix module then? I guess I'm
fine with either place but I don't know if enough thought was put into
the decision. Up until now the time module had only cross-platform
functions (even if clock()'s semantics vary widely).

-- 
--Guido van Rossum (python.org/~guido)

From rdmurray at bitdance.com  Sat Apr 28 18:16:54 2012
From: rdmurray at bitdance.com (R. David Murray)
Date: Sat, 28 Apr 2012 12:16:54 -0400
Subject: [Python-Dev] package imports, sys.path and os.chdir()
In-Reply-To: <CADiSq7d=1-wWZTY8BgJ8y3Z39WRJ+K+RQXYi188pbjCBH5p=rg@mail.gmail.com>
References: <4F99BE80.2090509@stackless.com>
	<CADiSq7fPU8bC6u3Wbt0gd4=NzQAYUtoW8uHMgrKG3v2vaTB76g@mail.gmail.com>
	<4F9AAF98.7050303@stackless.com>
	<CAP1=2W7vVxgc1UQrKdc-5h9YxaKiPQobxhsg4opJbDxokA4z1g@mail.gmail.com>
	<CADiSq7d=1-wWZTY8BgJ8y3Z39WRJ+K+RQXYi188pbjCBH5p=rg@mail.gmail.com>
Message-ID: <20120428161654.E0B572500D2@webabinitio.net>

On Sat, 28 Apr 2012 18:08:08 +1000, Nick Coghlan <ncoghlan at gmail.com> wrote:
> On Sat, Apr 28, 2012 at 6:00 AM, Brett Cannon <brett at python.org> wrote:
> > I'm personally in favour of changing the insertion of '' to sys.path to
> > inserting the cwd when the interpreter is launched.
> 
> I'm not, because it breaks importing from the interactive prompt if
> you change directory after starting the session.

Heh.  I've never thought of doing that.  I would not have expected it
to work (change directory from the interactive prompt and be able to
import something located in the new cwd).  I don't know why I wouldn't
have expected it to work, I just didn't.

That said, could this insertion of '' only happen when the interactive
prompt is actually posted, and otherwise use cwd?

--David

From julia.lawall at lip6.fr  Sat Apr 28 10:06:52 2012
From: julia.lawall at lip6.fr (Julia Lawall)
Date: Sat, 28 Apr 2012 10:06:52 +0200 (CEST)
Subject: [Python-Dev] questions about memory management
Message-ID: <alpine.DEB.2.02.1204281003100.1917@hadrien>

In Python-3.2.3/Python/import.c, in the function 
_PyImport_FixupExtensionUnicode, is any call to PyDict_DelItemString 
needed before the final failure returns?

     modules = PyImport_GetModuleDict();
     if (PyDict_SetItemString(modules, name, mod) < 0)
         return -1;
     if (_PyState_AddModule(mod, def) < 0) {
         PyDict_DelItemString(modules, name);
         return -1;
     }
     if (def->m_size == -1) {
         if (def->m_base.m_copy) {
             /* Somebody already imported the module,
                likely under a different name.
                XXX this should really not happen. */
             Py_DECREF(def->m_base.m_copy);
             def->m_base.m_copy = NULL;
         }
         dict = PyModule_GetDict(mod);
         if (dict == NULL)
             return -1;
         def->m_base.m_copy = PyDict_Copy(dict);
         if (def->m_base.m_copy == NULL)
             return -1;
     }

In Python-3.2.3/Modules/ossaudiodev.c, in the function build_namelists, is 
it intentional that labels is not freed in the last failure case:

     if (PyModule_AddObject(module, "control_labels", labels) == -1)
         goto error2;
     if (PyModule_AddObject(module, "control_names", names) == -1)
         goto error1;

     return 0;

error2:
     Py_XDECREF(labels);
error1:
     Py_XDECREF(names);
     return -1;

thanks,
julia

From solipsis at pitrou.net  Sat Apr 28 20:13:03 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sat, 28 Apr 2012 20:13:03 +0200
Subject: [Python-Dev] questions about memory management
References: <alpine.DEB.2.02.1204281003100.1917@hadrien>
Message-ID: <20120428201303.1c58a5bb@pitrou.net>


Hello Julia,

On Sat, 28 Apr 2012 10:06:52 +0200 (CEST)
Julia Lawall <julia.lawall at lip6.fr> wrote:
> In Python-3.2.3/Python/import.c, in the function 
> _PyImport_FixupExtensionUnicode, is any call to PyDict_DelItemString 
> needed before the final failure returns?

I would say it probably is, but it would need further examination.
Some error-checking code paths in our C code base may lack proper
cleanup, especially when an error is unlikely.
Could you open an issue at http://bugs.python.org with this?

> In Python-3.2.3/Modules/ossaudiodev.c, in the function build_namelists, is 
> it intentional that labels is not freed in the last failure case:

The successful call to PyModule_AddObject() steals a reference to
`labels`, so it doesn't need to be decrefed again (the reference is
not owned by the init function anymore).

Regards

Antoine.

>      if (PyModule_AddObject(module, "control_labels", labels) == -1)
>          goto error2;
>      if (PyModule_AddObject(module, "control_names", names) == -1)
>          goto error1;
> 
>      return 0;
> 
> error2:
>      Py_XDECREF(labels);
> error1:
>      Py_XDECREF(names);
>      return -1;
> 
> thanks,
> julia



From brett at python.org  Sat Apr 28 21:16:00 2012
From: brett at python.org (Brett Cannon)
Date: Sat, 28 Apr 2012 15:16:00 -0400
Subject: [Python-Dev] package imports, sys.path and os.chdir()
In-Reply-To: <CADiSq7d=1-wWZTY8BgJ8y3Z39WRJ+K+RQXYi188pbjCBH5p=rg@mail.gmail.com>
References: <4F99BE80.2090509@stackless.com>
	<CADiSq7fPU8bC6u3Wbt0gd4=NzQAYUtoW8uHMgrKG3v2vaTB76g@mail.gmail.com>
	<4F9AAF98.7050303@stackless.com>
	<CAP1=2W7vVxgc1UQrKdc-5h9YxaKiPQobxhsg4opJbDxokA4z1g@mail.gmail.com>
	<CADiSq7d=1-wWZTY8BgJ8y3Z39WRJ+K+RQXYi188pbjCBH5p=rg@mail.gmail.com>
Message-ID: <CAP1=2W5eFdUAuZfT0gZ+91gWvzVs7XVs-_hKtgYqbyVSdZZ6_A@mail.gmail.com>

On Sat, Apr 28, 2012 at 04:08, Nick Coghlan <ncoghlan at gmail.com> wrote:

> On Sat, Apr 28, 2012 at 6:00 AM, Brett Cannon <brett at python.org> wrote:
> > I'm personally in favour of changing the insertion of '' to sys.path to
> > inserting the cwd when the interpreter is launched.
>
> I'm not, because it breaks importing from the interactive prompt if
> you change directory after starting the session.
>
>
Who does that? I mean what possible need do you have to start the
interpreter in one directory, but then need to chdir somewhere else where
you are doing your actual importing from, and in a way where you can't
simply attach the directory you want to use into sys.path?



> The existing workaround for applications is pretty trivial:
>
>  # Somewhere in your initialisation code
>  for i, entry in enumerate(sys.path):
>      sys.path[i] = os.path.abspath(entry)
>
> The fix for the import system is similarly trivial: call
> os.path.abspath when calculating __file__ (just as runpy now does and
> the import emulation in pkgutil always has).
>

You say trivial, I say a pain, as that means porting os.path.abspath()
into importlib._bootstrap in a way that works for all platforms.

-Brett


>
> Cheers,
> Nick.
>
> --
> Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
>

From brett at python.org  Sat Apr 28 21:17:12 2012
From: brett at python.org (Brett Cannon)
Date: Sat, 28 Apr 2012 15:17:12 -0400
Subject: [Python-Dev] package imports, sys.path and os.chdir()
In-Reply-To: <CAPZV6o_FeneBy0f-QW3YCZNRh3oVP6uewwQJ=RzgkrT8S+=e+Q@mail.gmail.com>
References: <4F99BE80.2090509@stackless.com>
	<CADiSq7fPU8bC6u3Wbt0gd4=NzQAYUtoW8uHMgrKG3v2vaTB76g@mail.gmail.com>
	<4F9AAF98.7050303@stackless.com>
	<CAP1=2W7vVxgc1UQrKdc-5h9YxaKiPQobxhsg4opJbDxokA4z1g@mail.gmail.com>
	<CADiSq7d=1-wWZTY8BgJ8y3Z39WRJ+K+RQXYi188pbjCBH5p=rg@mail.gmail.com>
	<CAPZV6o_FeneBy0f-QW3YCZNRh3oVP6uewwQJ=RzgkrT8S+=e+Q@mail.gmail.com>
Message-ID: <CAP1=2W7iBzBCQ4_TN5tGzv=XTwKHyaKVCOEvKKpun-js8azopw@mail.gmail.com>

On Sat, Apr 28, 2012 at 09:35, Benjamin Peterson <benjamin at python.org>wrote:

> 2012/4/28 Nick Coghlan <ncoghlan at gmail.com>:
> > On Sat, Apr 28, 2012 at 6:00 AM, Brett Cannon <brett at python.org> wrote:
> >> I'm personally in favour of changing the insertion of '' to sys.path to
> >> inserting the cwd when the interpreter is launched.
> >
> > I'm not, because it breaks importing from the interactive prompt if
> > you change directory after starting the session.
> >
> > The existing workaround for applications is pretty trivial:
> >
> >  # Somewhere in your initialisation code
> >  for i, entry in enumerate(sys.path):
> >      sys.path[i] = os.path.abspath(entry)
> >
> > The fix for the import system is similarly trivial: call
> > os.path.abspath when calculating __file__ (just as runpy now does and
> > the import emulation in pkgutil always has).
>
> I thought __file__ was required to be absolute in Python 3.
>

Not that I'm specifically aware of. Since site makes all entries in
sys.path absolute, it is really only an issue if you launch without site or
the '' entry in sys.path.

-Brett


>
>
>
> --
> Regards,
> Benjamin
>

From brett at python.org  Sat Apr 28 21:20:58 2012
From: brett at python.org (Brett Cannon)
Date: Sat, 28 Apr 2012 15:20:58 -0400
Subject: [Python-Dev] package imports, sys.path and os.chdir()
In-Reply-To: <20120428161654.E0B572500D2@webabinitio.net>
References: <4F99BE80.2090509@stackless.com>
	<CADiSq7fPU8bC6u3Wbt0gd4=NzQAYUtoW8uHMgrKG3v2vaTB76g@mail.gmail.com>
	<4F9AAF98.7050303@stackless.com>
	<CAP1=2W7vVxgc1UQrKdc-5h9YxaKiPQobxhsg4opJbDxokA4z1g@mail.gmail.com>
	<CADiSq7d=1-wWZTY8BgJ8y3Z39WRJ+K+RQXYi188pbjCBH5p=rg@mail.gmail.com>
	<20120428161654.E0B572500D2@webabinitio.net>
Message-ID: <CAP1=2W4zy6T6L9R=XECaAFEkxgFsLh+61KqEirTstHvnNoGSpQ@mail.gmail.com>

On Sat, Apr 28, 2012 at 12:16, R. David Murray <rdmurray at bitdance.com>wrote:

> On Sat, 28 Apr 2012 18:08:08 +1000, Nick Coghlan <ncoghlan at gmail.com>
> wrote:
> > On Sat, Apr 28, 2012 at 6:00 AM, Brett Cannon <brett at python.org> wrote:
> > > I'm personally in favour of changing the insertion of '' to sys.path to
> > > inserting the cwd when the interpreter is launched.
> >
> > I'm not, because it breaks importing from the interactive prompt if
> > you change directory after starting the session.
>
> Heh.  I've never thought of doing that.  I would not have expected it
> to work (change directory from the interactive prompt and be able to
> import something located in the new cwd).  I don't know why I wouldn't
> have expected it to work, I just didn't.
>
> That said, could this insertion of '' only happen when the interactive
> prompt is actually posted, and otherwise use cwd?


If the decision to keep this entry around stands, can we consider changing
it to '.' instead of the empty string? It mucks up stuff if you are not
careful (e.g. ``os.listdir('')`` or ``"/".join(['', 'filename.py'])``).
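
A small illustration of the kind of surprise meant here (just a sketch; the
exact behaviour of os.listdir('') may vary by platform):

    import os

    # str.join treats '' as an empty leading component, so the result
    # suddenly looks like an absolute path:
    print("/".join(["", "filename.py"]))    # -> '/filename.py'

    # '' is typically rejected rather than treated as the current directory:
    try:
        os.listdir("")
    except OSError as exc:
        print("os.listdir('') failed:", exc)
    print(len(os.listdir(".")), "entries in the current directory")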

From victor.stinner at gmail.com  Sat Apr 28 22:32:54 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Sat, 28 Apr 2012 22:32:54 +0200
Subject: [Python-Dev] [RFC] PEP 418: Add monotonic time,
 performance counter and process time functions
In-Reply-To: <CAP7+vJKh-3CpyPCf8GrUYNMqdGkoSrczaZF80u3Szc8ZQi7==A@mail.gmail.com>
References: <CAMpsgwZxZiNcqTznROd6MiPnkhmy1XtKzVPCozWp+sSiZ8dUTg@mail.gmail.com>
	<CAMpsgwa=2fJY8nSnLzeX8VgRFpFbWjJAX5LUK0bzd3amGOQ+AA@mail.gmail.com>
	<CAP7+vJ+fQsH48MO88+_g_=UdwUBjhP8Z=yCkg+i2CVSdg5Kdsw@mail.gmail.com>
	<4F9B3EDD.1060002@pearwood.info>
	<CAP7+vJLUNpQEFth6WKJwJa-WUorqUeBED-J85mLw+m2bognefQ@mail.gmail.com>
	<CAMpsgwYODjairfQ-1juOt2GaVVzzQffLXbK_GVtSX9ZXC--Tdg@mail.gmail.com>
	<CAP7+vJLoPXayZi37rJDeZjXjf6OEm_TL__3uqL9z-wjxPVe0rA@mail.gmail.com>
	<20120428165101.2c27d044@pitrou.net>
	<CAP7+vJKh-3CpyPCf8GrUYNMqdGkoSrczaZF80u3Szc8ZQi7==A@mail.gmail.com>
Message-ID: <CAMpsgwagDBVBxnY5Gcx92bn_j5JgGPpvo5qrs6rZcTafxsuPYw@mail.gmail.com>

>> As a thin wrapper, adding it to the time module was pretty much
>> uncontroversial, I think. The PEP proposes cross-platform
>> functions with consistent semantics, which is where a discussion was
>> needed.
>
> True, but does this mean clock_gettime and friends only exist on
> POSIX? Shouldn't they be in the os or posix module then? I guess I'm
> fine with either place but I don't know if enough thought was put into
> the decision. Up until now the time module had only cross-platform
> functions (even if clock()'s semantics vary widely).

The os module is big enough. Low-level network functions are not in
the os module, but in the socket module.

Not all functions of the time module are always available. For
example, time.tzset() is not available on all platforms. Another
example: the new time.monotonic() is not available on all platforms
;-)

Oh, I forgot to mention in the docs that the time.clock_*() functions
are not always available.
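
To make that concrete, here is a sketch of the kind of guard portable code
may end up needing (hedged: which clocks exist depends on the platform and
the build):

    import time

    # Not every clock is available everywhere, so fall back gracefully.
    try:
        _monotonic = time.monotonic      # new with the PEP 418 work
    except AttributeError:
        _monotonic = time.time           # wall clock as a last resort

    start = _monotonic()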

Victor

From eric at trueblade.com  Sun Apr 29 01:20:51 2012
From: eric at trueblade.com (Eric V. Smith)
Date: Sat, 28 Apr 2012 19:20:51 -0400
Subject: [Python-Dev] [RFC] PEP 418: Add monotonic time,
 performance counter and process time functions
In-Reply-To: <CAP7+vJLUNpQEFth6WKJwJa-WUorqUeBED-J85mLw+m2bognefQ@mail.gmail.com>
References: <CAMpsgwZxZiNcqTznROd6MiPnkhmy1XtKzVPCozWp+sSiZ8dUTg@mail.gmail.com>
	<CAMpsgwa=2fJY8nSnLzeX8VgRFpFbWjJAX5LUK0bzd3amGOQ+AA@mail.gmail.com>
	<CAP7+vJ+fQsH48MO88+_g_=UdwUBjhP8Z=yCkg+i2CVSdg5Kdsw@mail.gmail.com>
	<4F9B3EDD.1060002@pearwood.info>
	<CAP7+vJLUNpQEFth6WKJwJa-WUorqUeBED-J85mLw+m2bognefQ@mail.gmail.com>
Message-ID: <4F9C7B53.30602@trueblade.com>

On 4/27/2012 11:40 PM, Guido van Rossum wrote:
> On Fri, Apr 27, 2012 at 5:50 PM, Steven D'Aprano <steve at pearwood.info> wrote:
>> 2) get_clock_info returns a dict. Why not a namedtuple?
> 
> Future flexibility. And there's no need for it to be a *tuple*.

I haven't been paying attention to this discussion, so this isn't a
comment on any time functions specifically.

But we generally use a namedtuple (or structseq) for things like
get_clock_info. For example, for sys.float_info there's no need for it
to be a tuple, and it can be extended in the future, yet it's a structseq.

Same for sys.flags, although it's its own type, not a structseq. It is
also indexable, and we've added fields to it (hash_randomization was
added in 2.7.3).

So I think a structseq would work for get_clock_info as well. It's
unfortunate we don't have a similar type which isn't a tuple, but the
types we do have work well enough in practice.
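
For a quick illustration of the structseq behaviour being discussed, using
sys.float_info (which already exists today):

    import sys

    # A structseq supports named attribute access...
    print(sys.float_info.max)
    # ...but it is also a real tuple subtype, indexable by position.
    print(sys.float_info[0] == sys.float_info.max)   # True
    print(isinstance(sys.float_info, tuple))         # True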

Eric.

From victor.stinner at gmail.com  Sun Apr 29 03:21:52 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Sun, 29 Apr 2012 03:21:52 +0200
Subject: [Python-Dev] [RFC] PEP 418: Add monotonic time,
 performance counter and process time functions
In-Reply-To: <4F9C7B53.30602@trueblade.com>
References: <CAMpsgwZxZiNcqTznROd6MiPnkhmy1XtKzVPCozWp+sSiZ8dUTg@mail.gmail.com>
	<CAMpsgwa=2fJY8nSnLzeX8VgRFpFbWjJAX5LUK0bzd3amGOQ+AA@mail.gmail.com>
	<CAP7+vJ+fQsH48MO88+_g_=UdwUBjhP8Z=yCkg+i2CVSdg5Kdsw@mail.gmail.com>
	<4F9B3EDD.1060002@pearwood.info>
	<CAP7+vJLUNpQEFth6WKJwJa-WUorqUeBED-J85mLw+m2bognefQ@mail.gmail.com>
	<4F9C7B53.30602@trueblade.com>
Message-ID: <CAMpsgwZdm=417RiRc76OKL6h_jOUW61QVwWhCQhRNhLjAthkNg@mail.gmail.com>

>>> 2) get_clock_info returns a dict. Why not a namedtuple?
>>
>> Future flexibility. And there's no need for it to be a *tuple*.
>
> I haven't been paying attention to this discussion, so this isn't a
> comment on any time functions specifically.
>
> But we generally use a namedtuple (or structseq) for things like
> get_clock_info. For example, for sys.float_info there's no need for it
> to be a tuple, and it can be extended in the future, yet it's a structseq.

Ok ok, I changed the is_adjusted flag to make it mandatory and I
changed get_clock_info() to return a clock_info object. clock_info is
a structseq.

I didn't mention that clock_info can be read using an index because I
really don't like the tuple-like API.

Victor

From victor.stinner at gmail.com  Sun Apr 29 03:26:26 2012
From: victor.stinner at gmail.com (Victor Stinner)
Date: Sun, 29 Apr 2012 03:26:26 +0200
Subject: [Python-Dev] [RFC] PEP 418: Add monotonic time,
 performance counter and process time functions
In-Reply-To: <CAP7+vJ+fQsH48MO88+_g_=UdwUBjhP8Z=yCkg+i2CVSdg5Kdsw@mail.gmail.com>
References: <CAMpsgwZxZiNcqTznROd6MiPnkhmy1XtKzVPCozWp+sSiZ8dUTg@mail.gmail.com>
	<CAMpsgwa=2fJY8nSnLzeX8VgRFpFbWjJAX5LUK0bzd3amGOQ+AA@mail.gmail.com>
	<CAP7+vJ+fQsH48MO88+_g_=UdwUBjhP8Z=yCkg+i2CVSdg5Kdsw@mail.gmail.com>
Message-ID: <CAMpsgwYF2XudqjN=HLKtCiCHcZzzDMFp0MN97FLVwPU9M0GNFQ@mail.gmail.com>

Hi Guido,

2012/4/28 Guido van Rossum <guido at python.org>:
> I read most of the PEP and I think it is ready for acceptance! Thanks
> for your patience in shepherding this through such a difficult and
> long discussion.

You're welcome, but many developers helped me!

> Also thanks to the many other contributors,
> especially those who ended up as co-authors. We will have an awesome
> new set of time APIs! Now let the implementation roll...

The PEP is not accepted yet at:
http://www.python.org/dev/peps/pep-0418/

Did you forget to update its status, or are you waiting for something?

Anyway, I committed the implementation of PEP 418 (after the last
change to the API of time.get_clock_info()). Let's see how the buildbots
feel about monotonic time.

Victor

From tjreedy at udel.edu  Sun Apr 29 04:02:06 2012
From: tjreedy at udel.edu (Terry Reedy)
Date: Sat, 28 Apr 2012 22:02:06 -0400
Subject: [Python-Dev] package imports, sys.path and os.chdir()
In-Reply-To: <CAP1=2W5eFdUAuZfT0gZ+91gWvzVs7XVs-_hKtgYqbyVSdZZ6_A@mail.gmail.com>
References: <4F99BE80.2090509@stackless.com>
	<CADiSq7fPU8bC6u3Wbt0gd4=NzQAYUtoW8uHMgrKG3v2vaTB76g@mail.gmail.com>
	<4F9AAF98.7050303@stackless.com>
	<CAP1=2W7vVxgc1UQrKdc-5h9YxaKiPQobxhsg4opJbDxokA4z1g@mail.gmail.com>
	<CADiSq7d=1-wWZTY8BgJ8y3Z39WRJ+K+RQXYi188pbjCBH5p=rg@mail.gmail.com>
	<CAP1=2W5eFdUAuZfT0gZ+91gWvzVs7XVs-_hKtgYqbyVSdZZ6_A@mail.gmail.com>
Message-ID: <jni7f8$bkv$1@dough.gmane.org>

On 4/28/2012 3:16 PM, Brett Cannon wrote:

> Who does that? I mean what possible need do you have to start the
> interpreter in one directory, but then need to chdir somewhere else
> where you are doing your actual importing from, and in a way where you
> can't simply attach the directory you want to use into sys.path?

Idle, at least on Windows, when started from the installed icon, starts 
in the directory of the associated pythonw.exe. There is no choice. And 
that is a bad place to put user files for import. So anyone using Idle 
and importing user files does just what you think is strange. Windows 
ain't *nix. If one opens a file in another directory*, that becomes the 
new current directory and imports from that directory work. I would not 
want that to change. I presume that changing '' to '.' would not change 
that.

*and the easiest way to do *that* is from the 'recent files' list. I 
almost never type a path on Windows.

-- 
Terry Jan Reedy


From steve at pearwood.info  Sun Apr 29 05:20:10 2012
From: steve at pearwood.info (Steven D'Aprano)
Date: Sun, 29 Apr 2012 13:20:10 +1000
Subject: [Python-Dev] package imports, sys.path and os.chdir()
In-Reply-To: <CAP1=2W5eFdUAuZfT0gZ+91gWvzVs7XVs-_hKtgYqbyVSdZZ6_A@mail.gmail.com>
References: <4F99BE80.2090509@stackless.com>	<CADiSq7fPU8bC6u3Wbt0gd4=NzQAYUtoW8uHMgrKG3v2vaTB76g@mail.gmail.com>	<4F9AAF98.7050303@stackless.com>	<CAP1=2W7vVxgc1UQrKdc-5h9YxaKiPQobxhsg4opJbDxokA4z1g@mail.gmail.com>	<CADiSq7d=1-wWZTY8BgJ8y3Z39WRJ+K+RQXYi188pbjCBH5p=rg@mail.gmail.com>
	<CAP1=2W5eFdUAuZfT0gZ+91gWvzVs7XVs-_hKtgYqbyVSdZZ6_A@mail.gmail.com>
Message-ID: <4F9CB36A.1030009@pearwood.info>

Brett Cannon wrote:
> On Sat, Apr 28, 2012 at 04:08, Nick Coghlan <ncoghlan at gmail.com> wrote:
> 
>> On Sat, Apr 28, 2012 at 6:00 AM, Brett Cannon <brett at python.org> wrote:
>>> I'm personally in favour of changing the insertion of '' to sys.path to
>>> inserting the cwd when the interpreter is launched.
>> I'm not, because it breaks importing from the interactive prompt if
>> you change directory after starting the session.
>>
>>
> Who does that?

Me.

You're asking this as if it were a bizarre and disturbing thing to do. It's 
not as if changing directory is an unsupported hack.

When I use the Python interactive interpreter for interactive exploration or 
testing, sometimes I discover I'm in the wrong directory. If I've just started 
a fresh session, I'll probably just exit back to the shell, cd, then start 
Python again. But if there's significant history in the current session, I'll 
just change directories and continue on.


> I mean what possible need do you have to start the
> interpreter in one directory, but then need to chdir somewhere else where
> you are doing your actual importing from, and in a way where you can't
> simply attach the directory you want to use into sys.path?

Of course I could manipulate sys.path. But chances are that I still have to 
change directory anyway, so that reading and writing data files go where I 
want without having to specify absolute paths.



-- 
Steven


From pje at telecommunity.com  Sun Apr 29 05:41:09 2012
From: pje at telecommunity.com (PJ Eby)
Date: Sat, 28 Apr 2012 23:41:09 -0400
Subject: [Python-Dev] package imports, sys.path and os.chdir()
In-Reply-To: <20120428161654.E0B572500D2@webabinitio.net>
References: <4F99BE80.2090509@stackless.com>
	<CADiSq7fPU8bC6u3Wbt0gd4=NzQAYUtoW8uHMgrKG3v2vaTB76g@mail.gmail.com>
	<4F9AAF98.7050303@stackless.com>
	<CAP1=2W7vVxgc1UQrKdc-5h9YxaKiPQobxhsg4opJbDxokA4z1g@mail.gmail.com>
	<CADiSq7d=1-wWZTY8BgJ8y3Z39WRJ+K+RQXYi188pbjCBH5p=rg@mail.gmail.com>
	<20120428161654.E0B572500D2@webabinitio.net>
Message-ID: <CALeMXf5TYWThe6HntzwPOP4=Kzj+-Rn9fnSmDL9kk+_JApk8NA@mail.gmail.com>

On Sat, Apr 28, 2012 at 12:16 PM, R. David Murray <rdmurray at bitdance.com>wrote:

> That said, could this insertion of '' only happen when the interactive
> prompt is actually posted, and otherwise use cwd?
>

That's already the case.  Actually, sys.path[0] is *always* the absolute
path of the script directory -- regardless of whether you invoked the
script by a relative path or an absolute one, and regardless of whether
you're importing 'site' -- at least on Linux and Cygwin and Windows, for
all Python versions I've used regularly, and 3.2 besides.

It isn't the value of cwd unless you happen to run a script from the same
directory as the script itself.  But even then, it's absolute, and not an
empty string: the empty string is only present for interactive sessions.
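
A quick way to check this on any given platform (output will of course vary
with how and where Python is invoked):

    # save as show_path0.py and run it as:  python show_path0.py
    import sys
    # When running a script this prints the script's directory (an absolute
    # path, per the behaviour described above); an interactive session has
    # '' as sys.path[0] instead.
    print(repr(sys.path[0]))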

From ncoghlan at gmail.com  Sun Apr 29 07:05:17 2012
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 29 Apr 2012 15:05:17 +1000
Subject: [Python-Dev] package imports, sys.path and os.chdir()
In-Reply-To: <CALeMXf5TYWThe6HntzwPOP4=Kzj+-Rn9fnSmDL9kk+_JApk8NA@mail.gmail.com>
References: <4F99BE80.2090509@stackless.com>
	<CADiSq7fPU8bC6u3Wbt0gd4=NzQAYUtoW8uHMgrKG3v2vaTB76g@mail.gmail.com>
	<4F9AAF98.7050303@stackless.com>
	<CAP1=2W7vVxgc1UQrKdc-5h9YxaKiPQobxhsg4opJbDxokA4z1g@mail.gmail.com>
	<CADiSq7d=1-wWZTY8BgJ8y3Z39WRJ+K+RQXYi188pbjCBH5p=rg@mail.gmail.com>
	<20120428161654.E0B572500D2@webabinitio.net>
	<CALeMXf5TYWThe6HntzwPOP4=Kzj+-Rn9fnSmDL9kk+_JApk8NA@mail.gmail.com>
Message-ID: <CADiSq7e_NTgH5hKOQWjPeU3SfqqFD2WZSKYFcMCFiigdNYZ9RA@mail.gmail.com>

On Sun, Apr 29, 2012 at 1:41 PM, PJ Eby <pje at telecommunity.com> wrote:
> That's already the case.  Actually, sys.path[0] is *always* the absolute
> path of the script directory -- regardless of whether you invoked the script
> by a relative path or an absolute one, and regardless of whether you're
> importing 'site' -- at least on Linux and Cygwin and Windows, for all Python
> versions I've used regularly, and 3.2 besides.

"-c" and "-m" also insert the empty string as sys.path[0] in order to
find local files. They could just as easily insert the full cwd
explicitly though, and, in fact, they arguably should. (I say
arguably, because changing this *would* be a backwards incompatible
change - there's no such issue with requiring __file__ to be
absolute).

If we fixed that, then you could only get relative filenames from the
interactive prompt.

There's another way we can go with this, though: something I'm working
on at the moment is having usage of the frozen importlib be
*temporary*, switching to the full Python source version as soon as
possible (i.e. as soon as the frozen version is able to retrieve the
full version from disk).

There's a trick that becomes possible if we go down that path: we can
have some elements of importlib._bootstrap that *don't run* during the
initial bootstrapping phase.

Specifically, we can have module level code that looks like this:

    if __name__.startswith("importlib."):
        # Import system has been bootstrapped with the frozen version;
        # we now have full stdlib access and other parts of the
        # interpreter have also been fully initialised
        from os.path import abspath as _abspath
        _debug_msg = print
    else:
        # Running from the frozen copy, there's things we can't do yet
        # because the interpreter is not fully configured
        def _abspath(entry):
            # During the bootstrap process, we let relative paths slide.
            # It will only happen if someone shadows the stdlib in their
            # current directory.
            return entry
        def _debug_msg(*args, **kwds):
            # Standard streams are not initialised yet
            pass

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From larry at hastings.org  Sun Apr 29 10:41:58 2012
From: larry at hastings.org (Larry Hastings)
Date: Sun, 29 Apr 2012 01:41:58 -0700
Subject: [Python-Dev] [RFC] PEP 418: Add monotonic time,
 performance counter and process time functions
In-Reply-To: <4F9C7B53.30602@trueblade.com>
References: <CAMpsgwZxZiNcqTznROd6MiPnkhmy1XtKzVPCozWp+sSiZ8dUTg@mail.gmail.com>
	<CAMpsgwa=2fJY8nSnLzeX8VgRFpFbWjJAX5LUK0bzd3amGOQ+AA@mail.gmail.com>
	<CAP7+vJ+fQsH48MO88+_g_=UdwUBjhP8Z=yCkg+i2CVSdg5Kdsw@mail.gmail.com>
	<4F9B3EDD.1060002@pearwood.info>
	<CAP7+vJLUNpQEFth6WKJwJa-WUorqUeBED-J85mLw+m2bognefQ@mail.gmail.com>
	<4F9C7B53.30602@trueblade.com>
Message-ID: <4F9CFED6.3090703@hastings.org>

On 04/28/2012 04:20 PM, Eric V. Smith wrote:
> But we generally use a namedtuple (or structseq) for things like
> get_clock_info. For example, for sys.float_info there's no need for it
> to be a tuple, and it can be extended in the future, yet it's a structseq.

I'd prefer an object to a dict, but not a tuple / structseq.  There's no 
need for the members to be iterable.


//arry/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20120429/6192771b/attachment.html>

From eric at trueblade.com  Sun Apr 29 11:01:58 2012
From: eric at trueblade.com (Eric V. Smith)
Date: Sun, 29 Apr 2012 05:01:58 -0400
Subject: [Python-Dev] [RFC] PEP 418: Add monotonic time,
 performance counter and process time functions
In-Reply-To: <4F9CFED6.3090703@hastings.org>
References: <CAMpsgwZxZiNcqTznROd6MiPnkhmy1XtKzVPCozWp+sSiZ8dUTg@mail.gmail.com>
	<CAMpsgwa=2fJY8nSnLzeX8VgRFpFbWjJAX5LUK0bzd3amGOQ+AA@mail.gmail.com>
	<CAP7+vJ+fQsH48MO88+_g_=UdwUBjhP8Z=yCkg+i2CVSdg5Kdsw@mail.gmail.com>
	<4F9B3EDD.1060002@pearwood.info>
	<CAP7+vJLUNpQEFth6WKJwJa-WUorqUeBED-J85mLw+m2bognefQ@mail.gmail.com>
	<4F9C7B53.30602@trueblade.com> <4F9CFED6.3090703@hastings.org>
Message-ID: <4F9D0386.1030402@trueblade.com>

On 4/29/2012 4:41 AM, Larry Hastings wrote:
> On 04/28/2012 04:20 PM, Eric V. Smith wrote:
>> But we generally use a namedtuple (or structseq) for things like
>> get_clock_info. For example, for sys.float_info there's no need for it
>> to be a tuple, and it can be extended in the future, yet it's a structseq.
> 
> I'd prefer an object to a dict, but not a tuple / structseq.  There's no
> need for the members to be iterable.

I agree with you, but there's already plenty of precedent for this. A
quick check shows sys.flags, sys.float_info, and os.stat(); I'm sure
there's more.

Iteration for these isn't very useful, but structseq is the handiest
type we have:

>>> for v in sys.float_info:
...   print(v)
...
1.79769313486e+308
1024
308
2.22507385851e-308
-1021
-307
15
53
2.22044604925e-16
2
1

For Python code I use namedtuple (or my own recordtype); both are
iterable, but almost no one iterates over them.
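
(For illustration, the Python-level equivalent with a namedtuple -- the
field names and values here are made up for the sketch, not the PEP's
final spelling:)

    from collections import namedtuple

    ClockInfo = namedtuple('ClockInfo',
                           'implementation monotonic adjusted resolution')
    info = ClockInfo('clock_gettime(CLOCK_MONOTONIC)', True, False, 1e-9)

    info.resolution                # attribute access: the common case
    impl, mono, adj, res = info    # unpacking works too, though rarely used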

Eric.

From larry at hastings.org  Sun Apr 29 11:12:41 2012
From: larry at hastings.org (Larry Hastings)
Date: Sun, 29 Apr 2012 02:12:41 -0700
Subject: [Python-Dev] [RFC] PEP 418: Add monotonic time,
 performance counter and process time functions
In-Reply-To: <4F9D0386.1030402@trueblade.com>
References: <CAMpsgwZxZiNcqTznROd6MiPnkhmy1XtKzVPCozWp+sSiZ8dUTg@mail.gmail.com>
	<CAMpsgwa=2fJY8nSnLzeX8VgRFpFbWjJAX5LUK0bzd3amGOQ+AA@mail.gmail.com>
	<CAP7+vJ+fQsH48MO88+_g_=UdwUBjhP8Z=yCkg+i2CVSdg5Kdsw@mail.gmail.com>
	<4F9B3EDD.1060002@pearwood.info>
	<CAP7+vJLUNpQEFth6WKJwJa-WUorqUeBED-J85mLw+m2bognefQ@mail.gmail.com>
	<4F9C7B53.30602@trueblade.com> <4F9CFED6.3090703@hastings.org>
	<4F9D0386.1030402@trueblade.com>
Message-ID: <4F9D0609.3060801@hastings.org>


On 04/29/2012 02:01 AM, Eric V. Smith wrote:
> On 4/29/2012 4:41 AM, Larry Hastings wrote:
>> I'd prefer an object to a dict, but not a tuple / structseq.  There's no
>> need for the members to be iterable.
> I agree with you, but there's already plenty of precedent for this.
> [...] Iteration for these isn't very useful, but structseq is the handiest
> type we have:

The times, they are a-changin'.  I've been meaning to start whacking the 
things which are iterable which really shouldn't be.  Like, who uses 
destructuring assignment with the os.stat result anymore?  Puh-leez, 
that's so 1996.  That really oughta be deprecated.

Anyway, it'd be swell if we could stop adding new ones.  Maybe we need a 
clone of structseq that removes iterability?  (I was thinking, we could 
hack structseq so it didn't behave iterably if n_in_sequence was 0.  
But, no, it inherits from tuple; such shenanigans are a bad idea.)
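
(One possible shape for such a non-iterable info object, sketched in pure
Python rather than as a structseq clone, just to show the idea -- the
ClockInfo name is made up:)

    class ClockInfo:
        # Attribute-only container: no __iter__, no indexing, and new
        # fields can be added later without breaking anyone.
        def __init__(self, **kwds):
            self.__dict__.update(kwds)

        def __repr__(self):
            items = ', '.join('%s=%r' % kv
                              for kv in sorted(self.__dict__.items()))
            return 'ClockInfo(%s)' % items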


//arry/

p.s. MvL gets credit for the original observation, and the suggestion of 
deprecating iterability.  As usual I'm standing on somebody else's 
shoulders.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20120429/ef711e3b/attachment.html>

From tismer at stackless.com  Sun Apr 29 13:14:42 2012
From: tismer at stackless.com (Christian Tismer)
Date: Sun, 29 Apr 2012 13:14:42 +0200
Subject: [Python-Dev] package imports, sys.path and os.chdir()
In-Reply-To: <CAP1=2W5eFdUAuZfT0gZ+91gWvzVs7XVs-_hKtgYqbyVSdZZ6_A@mail.gmail.com>
References: <4F99BE80.2090509@stackless.com>
	<CADiSq7fPU8bC6u3Wbt0gd4=NzQAYUtoW8uHMgrKG3v2vaTB76g@mail.gmail.com>
	<4F9AAF98.7050303@stackless.com>
	<CAP1=2W7vVxgc1UQrKdc-5h9YxaKiPQobxhsg4opJbDxokA4z1g@mail.gmail.com>
	<CADiSq7d=1-wWZTY8BgJ8y3Z39WRJ+K+RQXYi188pbjCBH5p=rg@mail.gmail.com>
	<CAP1=2W5eFdUAuZfT0gZ+91gWvzVs7XVs-_hKtgYqbyVSdZZ6_A@mail.gmail.com>
Message-ID: <4F9D22A2.8090704@stackless.com>

On 28.04.12 21:16, Brett Cannon wrote:
>
>
> On Sat, Apr 28, 2012 at 04:08, Nick Coghlan <ncoghlan at gmail.com 
> <mailto:ncoghlan at gmail.com>> wrote:
>
>     On Sat, Apr 28, 2012 at 6:00 AM, Brett Cannon <brett at python.org
>     <mailto:brett at python.org>> wrote:
>     > I'm personally in favour of changing the insertion of '' to
>     sys.path to
>     > inserting the cwd when the interpreter is launched.
>
>     I'm not, because it breaks importing from the interactive prompt if
>     you change directory after starting the session.
>
>
> Who does that? I mean what possible need do you have to start the 
> interpreter in one directory, but then need to chdir somewhere else 
> where you are doing your actual importing from, and in a way where you 
> can't simply attach the directory you want to use into sys.path?
>

Well, it depends on which hat I'm wearing.

Scenario 1:
I am designing a big application. This application shall run without
problems, with unambiguous imports, and by no means should it hit
anything that is not meant to be imported.
In this case, I need to remove '' from sys.path and replace it with an
absolute entry.

Update: I see this works already unless "-c" or "-m" is used (hum).
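
(Scenario 1 expressed as code -- a minimal sketch, assuming the main
script can use __file__ to find its own location:)

    import os
    import sys

    # Drop the ambiguous '' entry and pin an absolute one instead.
    if '' in sys.path:
        sys.path.remove('')
    sys.path.insert(0, os.path.abspath(os.path.dirname(__file__)))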

Scenario 2:
I am playing with the application and want to try several modules, or
even several versions of modules. I use os.chdir() to get into a certain
context, try imports, remove them again, chdir() to a different directory
with a slightly changed module, et cetera.
In this case, I need '' (or, as has been mentioned, '.') to have
flexibility for testing, debugging and exploration.

These scenarios are both perfectly valid for their use cases, but they
have pretty different implications for imports, and especially for
sys.path.

So the real question I was after was "can os.chdir() be freely used?"

It would be great to get "yes" or "no", but right now the answer is "it
depends".

cheers - chris

-- 
Christian Tismer             :^)<mailto:tismer at stackless.com>
tismerysoft GmbH             :     Have a break! Take a ride on Python's
Karl-Liebknecht-Str. 121     :    *Starship* http://starship.python.net/
14482 Potsdam                :     PGP key ->  http://pgp.uni-mainz.de
work +49 173 24 18 776  mobile +49 173 24 18 776  fax n.a.
PGP 0x57F3BF04       9064 F4E1 D754 C2FF 1619  305B C09C 5A3B 57F3 BF04
       whom do you want to sponsor today?   http://www.stackless.com/

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20120429/cc6b87d3/attachment.html>

From steve at pearwood.info  Sun Apr 29 14:29:59 2012
From: steve at pearwood.info (Steven D'Aprano)
Date: Sun, 29 Apr 2012 22:29:59 +1000
Subject: [Python-Dev] [RFC] PEP 418: Add monotonic time,
 performance counter and process time functions
In-Reply-To: <4F9D0609.3060801@hastings.org>
References: <CAMpsgwZxZiNcqTznROd6MiPnkhmy1XtKzVPCozWp+sSiZ8dUTg@mail.gmail.com>	<CAMpsgwa=2fJY8nSnLzeX8VgRFpFbWjJAX5LUK0bzd3amGOQ+AA@mail.gmail.com>	<CAP7+vJ+fQsH48MO88+_g_=UdwUBjhP8Z=yCkg+i2CVSdg5Kdsw@mail.gmail.com>	<4F9B3EDD.1060002@pearwood.info>	<CAP7+vJLUNpQEFth6WKJwJa-WUorqUeBED-J85mLw+m2bognefQ@mail.gmail.com>	<4F9C7B53.30602@trueblade.com>
	<4F9CFED6.3090703@hastings.org>	<4F9D0386.1030402@trueblade.com>
	<4F9D0609.3060801@hastings.org>
Message-ID: <4F9D3447.9030205@pearwood.info>

Larry Hastings wrote:
> 
> On 04/29/2012 02:01 AM, Eric V. Smith wrote:
>> On 4/29/2012 4:41 AM, Larry Hastings wrote:
>>> I'd prefer an object to a dict, but not a tuple / structseq.  There's no
>>> need for the members to be iterable.
>> I agree with you, but there's already plenty of precedent for this.
>> [...] Iteration for these isn't very useful, but structseq is the 
>> handiest
>> type we have:
> 
> The times, they are a-changin'.  I've been meaning to start whacking the 
> things which are iterable which really shouldn't be.  Like, who uses 
> destructuring assignment with the os.stat result anymore?  Puh-leez, 
> that's so 1996.  That really oughta be deprecated.

Why? What problems does it cause?

If it isn't broken, don't break it.



-- 
Steven


From solipsis at pitrou.net  Sun Apr 29 14:38:01 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sun, 29 Apr 2012 14:38:01 +0200
Subject: [Python-Dev] [RFC] PEP 418: Add monotonic time,
 performance counter and process time functions
References: <CAMpsgwZxZiNcqTznROd6MiPnkhmy1XtKzVPCozWp+sSiZ8dUTg@mail.gmail.com>
	<CAMpsgwa=2fJY8nSnLzeX8VgRFpFbWjJAX5LUK0bzd3amGOQ+AA@mail.gmail.com>
	<CAP7+vJ+fQsH48MO88+_g_=UdwUBjhP8Z=yCkg+i2CVSdg5Kdsw@mail.gmail.com>
	<4F9B3EDD.1060002@pearwood.info>
	<CAP7+vJLUNpQEFth6WKJwJa-WUorqUeBED-J85mLw+m2bognefQ@mail.gmail.com>
	<4F9C7B53.30602@trueblade.com> <4F9CFED6.3090703@hastings.org>
	<4F9D0386.1030402@trueblade.com> <4F9D0609.3060801@hastings.org>
Message-ID: <20120429143801.7c3af135@pitrou.net>

On Sun, 29 Apr 2012 02:12:41 -0700
Larry Hastings <larry at hastings.org> wrote:
> 
> On 04/29/2012 02:01 AM, Eric V. Smith wrote:
> > On 4/29/2012 4:41 AM, Larry Hastings wrote:
> >> I'd prefer an object to a dict, but not a tuple / structseq.  There's no
> >> need for the members to be iterable.
> > I agree with you, but there's already plenty of precedent for this.
> > [...] Iteration for these isn't very useful, but structseq is the handiest
> > type we have:
> 
> The times, they are a-changin'.  I've been meaning to start whacking the 
> things which are iterable which really shouldn't be.  Like, who uses 
> destructuring assignment with the os.stat result anymore?  Puh-leez, 
> that's so 1996.  That really oughta be deprecated.

Some types can benefit from being hashable and having a minimal
footprint (hence tuple-like). However, it's not the case for
get_clock_info(), since you're unlikely to have more than one instance
alive in a given invocation.

Regards

Antoine.



From solipsis at pitrou.net  Sun Apr 29 14:39:01 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sun, 29 Apr 2012 14:39:01 +0200
Subject: [Python-Dev] [RFC] PEP 418: Add monotonic time,
 performance counter and process time functions
References: <CAMpsgwZxZiNcqTznROd6MiPnkhmy1XtKzVPCozWp+sSiZ8dUTg@mail.gmail.com>
	<CAMpsgwa=2fJY8nSnLzeX8VgRFpFbWjJAX5LUK0bzd3amGOQ+AA@mail.gmail.com>
	<CAP7+vJ+fQsH48MO88+_g_=UdwUBjhP8Z=yCkg+i2CVSdg5Kdsw@mail.gmail.com>
	<CAMpsgwYF2XudqjN=HLKtCiCHcZzzDMFp0MN97FLVwPU9M0GNFQ@mail.gmail.com>
Message-ID: <20120429143901.1d9aa7ce@pitrou.net>

On Sun, 29 Apr 2012 03:26:26 +0200
Victor Stinner <victor.stinner at gmail.com> wrote:

> Hi Guido,
> 
> 2012/4/28 Guido van Rossum <guido at python.org>:
> > I read most of the PEP and I think it is ready for acceptance! Thanks
> > for your patience in shepherding this through such a difficult and
> > long discussion.
> 
> You're welcome, but many developers helped me!
> 
> > Also thanks to the many other contributors,
> > especially those who ended up as co-authors. We will have an awesome
> > new set of time APIs! Now let the implementation roll...
> 
> The PEP is not accepted yet at:
> http://www.python.org/dev/peps/pep-0418/
> 
> Did you forget to update its status, or are you waiting for something?
> 
> Anyway I committed the implementation of PEP 418 (after the last
> change on the API of time.get_clock_info()). Let's see how buildbots
> feel with monotonic time.

Hopefully they'll be monotonously green!

cheers

Antoine.



From tismer at stackless.com  Sun Apr 29 15:37:40 2012
From: tismer at stackless.com (Christian Tismer)
Date: Sun, 29 Apr 2012 15:37:40 +0200
Subject: [Python-Dev] package imports, sys.path and os.chdir()
In-Reply-To: <CADiSq7e_NTgH5hKOQWjPeU3SfqqFD2WZSKYFcMCFiigdNYZ9RA@mail.gmail.com>
References: <4F99BE80.2090509@stackless.com>
	<CADiSq7fPU8bC6u3Wbt0gd4=NzQAYUtoW8uHMgrKG3v2vaTB76g@mail.gmail.com>
	<4F9AAF98.7050303@stackless.com>
	<CAP1=2W7vVxgc1UQrKdc-5h9YxaKiPQobxhsg4opJbDxokA4z1g@mail.gmail.com>
	<CADiSq7d=1-wWZTY8BgJ8y3Z39WRJ+K+RQXYi188pbjCBH5p=rg@mail.gmail.com>
	<20120428161654.E0B572500D2@webabinitio.net>
	<CALeMXf5TYWThe6HntzwPOP4=Kzj+-Rn9fnSmDL9kk+_JApk8NA@mail.gmail.com>
	<CADiSq7e_NTgH5hKOQWjPeU3SfqqFD2WZSKYFcMCFiigdNYZ9RA@mail.gmail.com>
Message-ID: <4F9D4424.5080508@stackless.com>

On 29.04.12 07:05, Nick Coghlan wrote:
> On Sun, Apr 29, 2012 at 1:41 PM, PJ Eby<pje at telecommunity.com>  wrote:
>> That's already the case.  Actually, sys.path[0] is *always* the absolute
>> path of the script directory -- regardless of whether you invoked the script
>> by a relative path or an absolute one, and regardless of whether you're
>> importing 'site' -- at least on Linux and Cygwin and Windows, for all Python
>> versions I've used regularly, and 3.2 besides.
> "-c" and "-m" also insert the empty string as sys.path[0] in order to
> find local files. They could just as easily insert the full cwd
> explicitly though, and, in fact, they arguably should. (I say
> arguably, because changing this *would* be a backwards incompatible
> change - there's no such issue with requiring __file__ to be
> absolute).

As a note: I tried to find out where and when the empty string actually
got inserted into sys.path. It was not very easy; I had to run the C
debugger to understand it:

It happens in sysmodule.c

Py_Main
     PySys_SetArgv(argc-_PyOS_optind, argv+_PyOS_optind);

that calls

PySys_SetArgvEx(int argc, char **argv, int updatepath)

and the logic for whether to use the empty string or a full path etc.
is deeply hidden in a C function as a side effect. Brrrrrr!
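
(For reference, a rough Python rendition of that hidden logic -- heavily
simplified, the real decision lives in C and handles more cases:)

    import os

    def _sys_path_zero(argv):
        # Approximates what PySys_SetArgvEx computes for sys.path[0]
        # (symlink resolution and platform quirks omitted).
        if not argv or argv[0] in ('', '-c', '-m'):
            return ''                       # the empty-string case
        return os.path.dirname(argv[0])     # '' if argv[0] has no dir part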

It would be much cleaner and easier if that stuff were moved out of C
and into a Python implementation instead.

Are there plans to get rid of C for such stuff? I hope so :-)

cheers -- Chris

-- 
Christian Tismer             :^)<mailto:tismer at stackless.com>
tismerysoft GmbH             :     Have a break! Take a ride on Python's
Karl-Liebknecht-Str. 121     :    *Starship* http://starship.python.net/
14482 Potsdam                :     PGP key ->  http://pgp.uni-mainz.de
work +49 173 24 18 776  mobile +49 173 24 18 776  fax n.a.
PGP 0x57F3BF04       9064 F4E1 D754 C2FF 1619  305B C09C 5A3B 57F3 BF04
       whom do you want to sponsor today?   http://www.stackless.com/


From guido at python.org  Sun Apr 29 16:37:42 2012
From: guido at python.org (Guido van Rossum)
Date: Sun, 29 Apr 2012 07:37:42 -0700
Subject: [Python-Dev] [RFC] PEP 418: Add monotonic time,
 performance counter and process time functions
In-Reply-To: <4F9D3447.9030205@pearwood.info>
References: <CAMpsgwZxZiNcqTznROd6MiPnkhmy1XtKzVPCozWp+sSiZ8dUTg@mail.gmail.com>
	<CAMpsgwa=2fJY8nSnLzeX8VgRFpFbWjJAX5LUK0bzd3amGOQ+AA@mail.gmail.com>
	<CAP7+vJ+fQsH48MO88+_g_=UdwUBjhP8Z=yCkg+i2CVSdg5Kdsw@mail.gmail.com>
	<4F9B3EDD.1060002@pearwood.info>
	<CAP7+vJLUNpQEFth6WKJwJa-WUorqUeBED-J85mLw+m2bognefQ@mail.gmail.com>
	<4F9C7B53.30602@trueblade.com> <4F9CFED6.3090703@hastings.org>
	<4F9D0386.1030402@trueblade.com> <4F9D0609.3060801@hastings.org>
	<4F9D3447.9030205@pearwood.info>
Message-ID: <CAP7+vJLVmT7TUmqCDy_btQ=jgaQgMvkDo6DmEMS-81OHqcEWVQ@mail.gmail.com>

On Sun, Apr 29, 2012 at 5:29 AM, Steven D'Aprano <steve at pearwood.info> wrote:
> Larry Hastings wrote:
>>
>>
>> On 04/29/2012 02:01 AM, Eric V. Smith wrote:
>>>
>>> On 4/29/2012 4:41 AM, Larry Hastings wrote:
>>>>
>>>> I'd prefer an object to a dict, but not a tuple / structseq.  There's no
>>>> need for the members to be iterable.
>>>
>>> I agree with you, but there's already plenty of precedent for this.
>>> [...] Iteration for these isn't very useful, but structseq is the
>>> handiest
>>> type we have:
>>
>>
>> The times, they are a-changin'.  I've been meaning to start whacking the
>> things which are iterable which really shouldn't be.  Like, who uses
>> destructuring assignment with the os.stat result anymore?  Puh-leez, that's
>> so 1996.  That really oughta be deprecated.
>
>
> Why? What problems does it cause?
>
> If it isn't broken, don't break it.

It's an anti-pattern. You basically have to look up or copy/paste the
order of the fields to get it right. And there are many fields in the
stats structure that can't be added to the sequence because of the
requirement not to break backwards compatibility with code that
expects a fixed number of fields (in 1996 we also didn't have *
unpacking :-). So you're getting a legacy-determined subset of the
values anyway.

Ditto for times; while the first 6 fields are easy (y/m/d h/m/s) the
three after that are just fluff (weekday and some tz related things
that I can never remember) and again there is important stuff missing
like finer precision and useful tz info.
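
(Concretely, the two styles side by side -- the filename is just for
illustration:)

    import os

    st = os.stat('setup.py')

    # Unpacking: you must remember the exact order and the fixed count
    # of the legacy 10-field subset.
    mode, ino, dev, nlink, uid, gid, size, atime, mtime, ctime = st

    # Attribute access: order-independent, and fields added later are
    # reachable even though they are not part of the 10-item sequence.
    st.st_size, st.st_mtime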

-- 
--Guido van Rossum (python.org/~guido)

From guido at python.org  Sun Apr 29 16:40:46 2012
From: guido at python.org (Guido van Rossum)
Date: Sun, 29 Apr 2012 07:40:46 -0700
Subject: [Python-Dev] [RFC] PEP 418: Add monotonic time,
 performance counter and process time functions
In-Reply-To: <CAMpsgwYF2XudqjN=HLKtCiCHcZzzDMFp0MN97FLVwPU9M0GNFQ@mail.gmail.com>
References: <CAMpsgwZxZiNcqTznROd6MiPnkhmy1XtKzVPCozWp+sSiZ8dUTg@mail.gmail.com>
	<CAMpsgwa=2fJY8nSnLzeX8VgRFpFbWjJAX5LUK0bzd3amGOQ+AA@mail.gmail.com>
	<CAP7+vJ+fQsH48MO88+_g_=UdwUBjhP8Z=yCkg+i2CVSdg5Kdsw@mail.gmail.com>
	<CAMpsgwYF2XudqjN=HLKtCiCHcZzzDMFp0MN97FLVwPU9M0GNFQ@mail.gmail.com>
Message-ID: <CAP7+vJLGsM07f1QbPKFkrOL4hJ2CTQgSH1jiR7U+NwYWDgScaw@mail.gmail.com>

On Sat, Apr 28, 2012 at 6:26 PM, Victor Stinner
<victor.stinner at gmail.com> wrote:
> Hi Guido,
>
> 2012/4/28 Guido van Rossum <guido at python.org>:
>> I read most of the PEP and I think it is ready for acceptance! Thanks
>> for your patience in shepherding this through such a difficult and
>> long discussion.
>
> You're welcome, but many developers helped me!

I tried to imply that in the next sentence. :-) Still, without your
push it would not have happened.

>> Also thanks to the many other contributors,
>> especially those who ended up as co-authors. We will have an awesome
>> new set of time APIs! Now let the implementation roll...
>
> The PEP is not accepted yet at:
> http://www.python.org/dev/peps/pep-0418/
>
> Did you forget to update its status, or are you waiting for something?

To get to a machine with a checkout. Done.

> Anyway I committed the implementation of PEP 418 (after the last
> change on the API of time.get_clock_info()). Let's see how buildbots
> feel with monotonic time.

Awesome!

-- 
--Guido van Rossum (python.org/~guido)

From guido at python.org  Sun Apr 29 16:49:25 2012
From: guido at python.org (Guido van Rossum)
Date: Sun, 29 Apr 2012 07:49:25 -0700
Subject: [Python-Dev] [RFC] PEP 418: Add monotonic time,
 performance counter and process time functions
In-Reply-To: <CAMpsgwagDBVBxnY5Gcx92bn_j5JgGPpvo5qrs6rZcTafxsuPYw@mail.gmail.com>
References: <CAMpsgwZxZiNcqTznROd6MiPnkhmy1XtKzVPCozWp+sSiZ8dUTg@mail.gmail.com>
	<CAMpsgwa=2fJY8nSnLzeX8VgRFpFbWjJAX5LUK0bzd3amGOQ+AA@mail.gmail.com>
	<CAP7+vJ+fQsH48MO88+_g_=UdwUBjhP8Z=yCkg+i2CVSdg5Kdsw@mail.gmail.com>
	<4F9B3EDD.1060002@pearwood.info>
	<CAP7+vJLUNpQEFth6WKJwJa-WUorqUeBED-J85mLw+m2bognefQ@mail.gmail.com>
	<CAMpsgwYODjairfQ-1juOt2GaVVzzQffLXbK_GVtSX9ZXC--Tdg@mail.gmail.com>
	<CAP7+vJLoPXayZi37rJDeZjXjf6OEm_TL__3uqL9z-wjxPVe0rA@mail.gmail.com>
	<20120428165101.2c27d044@pitrou.net>
	<CAP7+vJKh-3CpyPCf8GrUYNMqdGkoSrczaZF80u3Szc8ZQi7==A@mail.gmail.com>
	<CAMpsgwagDBVBxnY5Gcx92bn_j5JgGPpvo5qrs6rZcTafxsuPYw@mail.gmail.com>
Message-ID: <CAP7+vJJZBkioZ9sBRN8z4WxB3NaMKR0O78=MzMEXOht7voTqdg@mail.gmail.com>

On Sat, Apr 28, 2012 at 1:32 PM, Victor Stinner
<victor.stinner at gmail.com> wrote:
>>> As a thin wrapper, adding it to the time module was pretty much
>>> uncontroversial, I think. The PEP proposes cross-platform
>>> functions with consistent semantics, which is where a discussion was
>>> needed.
>>
>> True, but does this mean clock_gettime and friends only exist on
>> POSIX? Shouldn't they be in the os or posix module then? I guess I'm
>> fine with either place but I don't know if enough thought was put into
>> the decision. Up until now the time module had only cross-platform
>> functions (even if clock()'s semantics vary widely).
>
> The os module is big enough. Low level networks functions are not in
> the os module, but in the socket module.

There are subtle other reasons for that (such as that on Windows,
socket file descriptors and os file descriptors are different things).

But I'm fine with leaving these in the time module.

> Not all functions of the time module are always available. For
> example, time.tzset() is not available on all platforms. Another
> example, the new time.monotonic() is not available on all platforms
> ;-)
>
> Oh, I forgot to mention that time.clock_*() functions are not always
> available in the doc.

Yeah, I think the docs can use some work. Maybe from someone interested
in contributing to docs specifically? I don't want to make Victor
responsible for everything. But the new docs for the time module are a
bit confusing due to the explosion of new functions and constants.
E.g. several descriptions use 'clk_id' without explaining it. Maybe a
separate subsection can be created for low-level and/or
platform-specific items, leaving the main time module to explain the
traditional functions and the new portable functions from PEP 418?

-- 
--Guido van Rossum (python.org/~guido)

From benjamin at python.org  Mon Apr 30 01:25:16 2012
From: benjamin at python.org (Benjamin Peterson)
Date: Sun, 29 Apr 2012 19:25:16 -0400
Subject: [Python-Dev] time.clock_info() field names
Message-ID: <CAPZV6o_hja_Shfp7=08A2+Ufi1LE7NaLS=RVHkak-2QpWEO+wA@mail.gmail.com>

Hi,
I see PEP 418 gives time.clock_info() two boolean fields named
"is_monotonic" and "is_adjusted". I think the "is_" is unnecessary and
a bit ugly, and they could just be renamed "monotonic" and "adjusted".

Thoughts?

-- 
Regards,
Benjamin

From solipsis at pitrou.net  Mon Apr 30 01:33:57 2012
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Mon, 30 Apr 2012 01:33:57 +0200
Subject: [Python-Dev] time.clock_info() field names
References: <CAPZV6o_hja_Shfp7=08A2+Ufi1LE7NaLS=RVHkak-2QpWEO+wA@mail.gmail.com>
Message-ID: <20120430013357.347ee8a1@pitrou.net>

On Sun, 29 Apr 2012 19:25:16 -0400
Benjamin Peterson <benjamin at python.org> wrote:
> Hi,
> I see PEP 418 gives time.clock_info() two boolean fields named
> "is_monotonic" and "is_adjusted". I think the "is_" is unnecessary and
> a bit ugly, and they could just be renamed "monotonic" and "adjusted".
> 
> Thoughts?

Agreed.

cheers

Antoine.



From jimjjewett at gmail.com  Mon Apr 30 03:06:48 2012
From: jimjjewett at gmail.com (Jim J. Jewett)
Date: Sun, 29 Apr 2012 18:06:48 -0700 (PDT)
Subject: [Python-Dev]  time.clock_info() field names
In-Reply-To: <CAPZV6o_hja_Shfp7=08A2+Ufi1LE7NaLS=RVHkak-2QpWEO+wA@mail.gmail.com>
References: <CAL_0O19nmi0+zB+tV8poZDAffNdTnohxo9y5dbw+E2q=9rX9YA@mail.gmail.com>
Message-ID: <4f9de5a8.e89c320a.4321.2854@mx.google.com>



In http://mail.python.org/pipermail/python-dev/2012-April/119134.html
Benjamin Peterson wrote:

> I see PEP 418 gives time.clock_info() two boolean fields named
> "is_monotonic" and "is_adjusted". I think the "is_" is unnecessary and
> a bit ugly, and they could just be renamed "monotonic" and "adjusted".

I agree with monotonic, but I think it should be "adjustable".

To me, "adjusted" and "is_adjusted" both imply that an adjustment
has already been made; "adjustable" only implies that it is possible.

I do remember concerns (including Stephen J. Turnbull's
<CAL_0O19nmi0+zB+tV8poZDAffNdTnohxo9y5dbw+E2q=9rX9YA at mail.gmail.com> )
that "adjustable" should imply at least a list of past adjustments,
and preferably a way to make an adjustment.

I just think that stating it is adjustable (without saying how, or
whether and when it already happened) is less wrong than claiming it
is already adjusted just in case it might have been.

-jJ

-- 

If there are still threading problems with my replies, please 
email me with details, so that I can try to resolve them.  -jJ


From benjamin at python.org  Mon Apr 30 03:31:34 2012
From: benjamin at python.org (Benjamin Peterson)
Date: Sun, 29 Apr 2012 21:31:34 -0400
Subject: [Python-Dev] time.clock_info() field names
In-Reply-To: <4f9de5a8.e89c320a.4321.2854@mx.google.com>
References: <CAL_0O19nmi0+zB+tV8poZDAffNdTnohxo9y5dbw+E2q=9rX9YA@mail.gmail.com>
	<CAPZV6o_hja_Shfp7=08A2+Ufi1LE7NaLS=RVHkak-2QpWEO+wA@mail.gmail.com>
	<4f9de5a8.e89c320a.4321.2854@mx.google.com>
Message-ID: <CAPZV6o_527sUSpM8-Pf4c9mF_GtzDsnZsAiGgUV-ZMo=k1879Q@mail.gmail.com>

2012/4/29 Jim J. Jewett <jimjjewett at gmail.com>:
>
>
> In http://mail.python.org/pipermail/python-dev/2012-April/119134.html
> Benjamin Peterson wrote:
>
>> I see PEP 418 gives time.clock_info() two boolean fields named
>> "is_monotonic" and "is_adjusted". I think the "is_" is unnecessary and
>> a bit ugly, and they could just be renamed "monotonic" and "adjusted".
>
> I agree with monotonic, but I think it should be "adjustable".

I don't really care, but I think "adjusted" is fine. As in "this clock
is adjusted (occasionally)".


-- 
Regards,
Benjamin

From eric at trueblade.com  Mon Apr 30 09:36:00 2012
From: eric at trueblade.com (Eric V. Smith)
Date: Mon, 30 Apr 2012 03:36:00 -0400
Subject: [Python-Dev] [Python-checkins] devguide: Record Richard Oudkerk.
In-Reply-To: <E1SOVnq-00007u-Mj@dinsdale.python.org>
References: <E1SOVnq-00007u-Mj@dinsdale.python.org>
Message-ID: <4F9E40E0.2040502@trueblade.com>

> +- Richard Oudkerk was given push privileges on Apr 29 2012 by Antoine Pitrou
> +  on recommendation by Charles-François Natali and Jesse Noller, for various
> +  contributions to multiprocessing (and original authorship of
> +  multiprocessing's predecessor, the processing package).

Could one of you (Antoine, Charles-Francois, or Jesse) ask Richard to
subscribe to python-committers? Or if you're reading this, Richard,
could you subscribe? It's at
http://mail.python.org/mailman/listinfo/python-committers

I think there may have been some other recent committers for whom I
didn't see subscribe requests, but I don't track it all that closely.

Eric.

From mark at hotpy.org  Mon Apr 30 10:26:14 2012
From: mark at hotpy.org (Mark Shannon)
Date: Mon, 30 Apr 2012 09:26:14 +0100
Subject: [Python-Dev] time.clock_info() field names
In-Reply-To: <CAPZV6o_527sUSpM8-Pf4c9mF_GtzDsnZsAiGgUV-ZMo=k1879Q@mail.gmail.com>
References: <CAL_0O19nmi0+zB+tV8poZDAffNdTnohxo9y5dbw+E2q=9rX9YA@mail.gmail.com>	<CAPZV6o_hja_Shfp7=08A2+Ufi1LE7NaLS=RVHkak-2QpWEO+wA@mail.gmail.com>	<4f9de5a8.e89c320a.4321.2854@mx.google.com>
	<CAPZV6o_527sUSpM8-Pf4c9mF_GtzDsnZsAiGgUV-ZMo=k1879Q@mail.gmail.com>
Message-ID: <4F9E4CA6.9070700@hotpy.org>

Benjamin Peterson wrote:
> 2012/4/29 Jim J. Jewett <jimjjewett at gmail.com>:
>>
>> In http://mail.python.org/pipermail/python-dev/2012-April/119134.html
>> Benjamin Peterson wrote:
>>
>>> I see PEP 418 gives time.clock_info() two boolean fields named
>>> "is_monotonic" and "is_adjusted". I think the "is_" is unnecessary and
>>> a bit ugly, and they could just be renamed "monotonic" and "adjusted".
>> I agree with monotonic, but I think it should be "adjustable".
> 
> I don't really care, but I think "adjusted" is fine. As in "this clock
> is adjusted (occasionally)".

monotonic is an adjective, whereas adjusted is (part of) a verb. I think
both should be adjectives. Does "adjusted" mean that it has been
adjusted, that it can be adjusted, or that it will be adjusted?

Cheers,
Mark.

From xdegaye at gmail.com  Mon Apr 30 12:31:40 2012
From: xdegaye at gmail.com (Xavier de Gaye)
Date: Mon, 30 Apr 2012 12:31:40 +0200
Subject: [Python-Dev] The step command of pdb is broken
Message-ID: <CAN4cRFwTa-V5nLPnJ1XXho3BXi_UiXWgX1dGFrJ_Qt-eSi6xYQ@mail.gmail.com>

Issue http://bugs.python.org/issue13183 raises the point that the step
command of pdb is broken. This issue is 6 months old. A patch and test
case have been proposed. The 'Lifecycle of a Patch' at
http://docs.python.org/devguide/patch.html says
<quote>
If your patch has not received any notice from reviewers (i.e., no
comment made) after a substantial amount of time then you may email
python-dev at python.org asking for someone to take a look at your patch.
</quote>
I am the author of pyclewn, a Vim front end to pdb and gdb, and I
would be grateful for any progress on this issue.

The following pdb session shows the problem when running the three
modules main.py, foo.py and bar.py. After the second step command, pdb
does not stop (as it should) at lines foo.py:5 and foo.py:6, nor does
it stop to print the return value of increment().
=================================================
main.py
     1  import foo
     2
     3  result = foo.increment(100)
     4  print('result', result)
foo.py
     1  import bar
     2
     3  def increment(arg):
     4      v =  bar.value()
     5      result = arg + v
     6      return result
bar.py
     1  def value():
     2      return 5
=================================================
$ python -m pdb main.py
> /path_to/main.py(1)<module>()
-> import foo
(Pdb) import sys; sys.version
'3.3.0a2+ (default:2c27093fd11f, Apr 30 2012, 10:51:35) \n[GCC 4.3.2]'
(Pdb) break bar.py:2
Breakpoint 1 at /path_to/bar.py:2
(Pdb) continue
> /path_to/bar.py(2)value()
-> return 5
(Pdb) step
--Return--
> /path_to/bar.py(2)value()->5
-> return 5
(Pdb) step
> /path_to/main.py(4)<module>()
-> print('result', result)
(Pdb)
=================================================


Xavier

From g.brandl at gmx.net  Mon Apr 30 12:52:57 2012
From: g.brandl at gmx.net (Georg Brandl)
Date: Mon, 30 Apr 2012 12:52:57 +0200
Subject: [Python-Dev] cpython: Issue #14428: Use the new
 time.perf_counter() and time.process_time() functions
In-Reply-To: <E1SOIYi-0002QU-Tl@dinsdale.python.org>
References: <E1SOIYi-0002QU-Tl@dinsdale.python.org>
Message-ID: <jnlqt8$8tc$1@dough.gmane.org>

On 29.04.2012 03:04, victor.stinner wrote:
> http://hg.python.org/cpython/rev/bd195749c0a2
> changeset:   76599:bd195749c0a2
> user:        Victor Stinner <victor.stinner at gmail.com>
> date:        Sun Apr 29 03:01:20 2012 +0200
> summary:
>   Issue #14428: Use the new time.perf_counter() and time.process_time() functions

[...]

> diff --git a/Lib/timeit.py b/Lib/timeit.py
> --- a/Lib/timeit.py
> +++ b/Lib/timeit.py
> @@ -15,8 +15,8 @@
>    -n/--number N: how many times to execute 'statement' (default: see below)
>    -r/--repeat N: how many times to repeat the timer (default 3)
>    -s/--setup S: statement to be executed once initially (default 'pass')
> -  -t/--time: use time.time() (default on Unix)
> -  -c/--clock: use time.clock() (default on Windows)
> +  -t/--time: use time.time()
> +  -c/--clock: use time.clock()

Does it make sense to keep the options this way?  IMO the distinction should be
to use either perf_counter() or process_time(), and the options could implement
this (-t -> perf_counter, -c -> process_time).
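
(With the new clocks that mapping is already easy to express through
timeit's timer argument -- a sketch:)

    import time
    import timeit

    stmt = 'sum(range(1000))'
    wall = timeit.Timer(stmt, timer=time.perf_counter)   # what -t could mean
    cpu = timeit.Timer(stmt, timer=time.process_time)    # what -c could mean
    wall.timeit(number=10000)
    cpu.timeit(number=10000)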

Georg



From cs at zip.com.au  Mon Apr 30 13:06:16 2012
From: cs at zip.com.au (Cameron Simpson)
Date: Mon, 30 Apr 2012 21:06:16 +1000
Subject: [Python-Dev] time.clock_info() field names
In-Reply-To: <CAPZV6o_527sUSpM8-Pf4c9mF_GtzDsnZsAiGgUV-ZMo=k1879Q@mail.gmail.com>
References: <CAPZV6o_527sUSpM8-Pf4c9mF_GtzDsnZsAiGgUV-ZMo=k1879Q@mail.gmail.com>
Message-ID: <20120430110616.GA21860@cskk.homeip.net>

On 29Apr2012 21:31, Benjamin Peterson <benjamin at python.org> wrote:
| 2012/4/29 Jim J. Jewett <jimjjewett at gmail.com>:
| > In http://mail.python.org/pipermail/python-dev/2012-April/119134.html
| > Benjamin Peterson wrote:
| >
| >> I see PEP 418 gives time.clock_info() two boolean fields named
| >> "is_monotonic" and "is_adjusted". I think the "is_" is unnecessary and
| >> a bit ugly, and they could just be renamed "monotonic" and "adjusted".
| >
| > I agree with monotonic, but I think it should be "adjustable".
| 
| I don't really care, but I think "adjusted" is fine. As in "this clock
| is adjusted (occasionally)".

-1 on "adjustable". That suggests the user can adjust it, not that the
OS may adjust it.

+1 on "adjusted" over "is_adjusted".
-- 
Cameron Simpson <cs at zip.com.au> DoD#743
http://www.cskk.ezoshosting.com/cs/

Winter is gods' way of telling us to polish.
        - Peter Harper <bo165 at freenet.carleton.ca> <harperp at algonquinc.on.ca>

From cs at zip.com.au  Mon Apr 30 13:09:21 2012
From: cs at zip.com.au (Cameron Simpson)
Date: Mon, 30 Apr 2012 21:09:21 +1000
Subject: [Python-Dev] time.clock_info() field names
In-Reply-To: <4F9E4CA6.9070700@hotpy.org>
References: <4F9E4CA6.9070700@hotpy.org>
Message-ID: <20120430110921.GA21947@cskk.homeip.net>

On 30Apr2012 09:26, Mark Shannon <mark at hotpy.org> wrote:
| monotonic is an adjective,

Yes.

| whereas adjusted is (part of) a verb.

No. It is an adjective.

| I think 
| both should be adjectives. Does "adjusted" mean that it has been 
| adjusted, that it can be adjusted or it will be adjusted?

That depends on context. Reach for the doco.

Of course, in the context of the PEP it means "may be adjusted by exterior
clock maintenance like NTP, and in fact this may have already happened". I am
unhappy with that phrase filled with underscores and used as the name :-(

Cheers,
-- 
Cameron Simpson <cs at zip.com.au> DoD#743
http://www.cskk.ezoshosting.com/cs/

Experience is what you get when you don't get what you want.

From tshepang at gmail.com  Mon Apr 30 14:04:04 2012
From: tshepang at gmail.com (Tshepang Lekhonkhobe)
Date: Mon, 30 Apr 2012 14:04:04 +0200
Subject: [Python-Dev] suggestion regarding the contributor agreement form
Message-ID: <CAA77j2BQ_cvtVNnnFBL=pJueREFNsnSgJ6mSAaPOPCytaWYLzg@mail.gmail.com>

Hi,

It's not very obvious that printing this page
http://www.python.org/psf/contrib/contrib-form/ actually prints only
the form. Could you offer a downloadable image/PDF instead?

As an aside, on Chromium it prints on 2 separate pages, even though
there's enough space on the first.

From senthil at uthcode.com  Mon Apr 30 14:12:58 2012
From: senthil at uthcode.com (Senthil Kumaran)
Date: Mon, 30 Apr 2012 20:12:58 +0800
Subject: [Python-Dev] [Python-checkins] cpython (3.2): #14236: fix docs
	for \S.
In-Reply-To: <E1SORVN-0005ki-EP@dinsdale.python.org>
References: <E1SORVN-0005ki-EP@dinsdale.python.org>
Message-ID: <20120430121258.GB3102@mathmagic>

On Sun, Apr 29, 2012 at 12:37:25PM +0200, ezio.melotti wrote:
>               range of Unicode whitespace characters.
> -    \S       Matches any non-whitespace character; equiv. to [^ \t\n\r\f\v].
> +    \S       Matches any non-whitespace character; equivalent to [^\s].

Is this correct? While I understand what is meant (or implied), \s is
not an ASCII character, and in the documentation we denoted the sets
using ASCII characters only.
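
(A quick check of the three spellings in str mode, where the Unicode
width of \s is exactly the point -- U+00A0 is just an example of
non-ASCII whitespace:)

    import re

    sample = 'a\u00a0b'                       # U+00A0 NO-BREAK SPACE
    re.findall(r'\S', sample)                 # ['a', 'b']
    re.findall(r'[^\s]', sample)              # ['a', 'b'] -- same as \S
    re.findall(r'[^ \t\n\r\f\v]', sample)     # ['a', '\xa0', 'b'] -- old ASCII set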

-- 
Senthil

From guido at python.org  Mon Apr 30 17:42:53 2012
From: guido at python.org (Guido van Rossum)
Date: Mon, 30 Apr 2012 08:42:53 -0700
Subject: [Python-Dev] The step command of pdb is broken
In-Reply-To: <CAN4cRFwTa-V5nLPnJ1XXho3BXi_UiXWgX1dGFrJ_Qt-eSi6xYQ@mail.gmail.com>
References: <CAN4cRFwTa-V5nLPnJ1XXho3BXi_UiXWgX1dGFrJ_Qt-eSi6xYQ@mail.gmail.com>
Message-ID: <CAP7+vJLjO45kyvX3Ru+r=yFV4qP28CCvt=2LUyGFj3Diz_-2pg@mail.gmail.com>

It would be good if the author of one of the pdb add-ons such as (I
believe) pdb2 could comment on this bug.

On Mon, Apr 30, 2012 at 3:31 AM, Xavier de Gaye <xdegaye at gmail.com> wrote:
> Issue http://bugs.python.org/issue13183 raises the point that the step
> command of pdb is broken. This issue is 6 months old. A patch and test
> case have been proposed. The 'Lifecycle of a Patch' at
> http://docs.python.org/devguide/patch.html says
> <quote>
> If your patch has not received any notice from reviewers (i.e., no
> comment made) after a substantial amount of time then you may email
> python-dev at python.org asking for someone to take a look at your patch.
> </quote>
> I am the author of pyclewn, a Vim front end to pdb and gdb, and I
> would be grateful for any progress on this issue.
>
> The following pdb session shows the problem when running the three
> modules main.py, foo.py and bar.py. After the second step command, pdb
> does not stop (as it should) at lines foo.py:5 and foo.py:6, nor does
> it stop to print the return value of increment().
> =================================================
> main.py
>      1  import foo
>      2
>      3  result = foo.increment(100)
>      4  print('result', result)
> foo.py
>      1  import bar
>      2
>      3  def increment(arg):
>      4      v =  bar.value()
>      5      result = arg + v
>      6      return result
> bar.py
>      1  def value():
>      2      return 5
> =================================================
> $ python -m pdb main.py
>> /path_to/main.py(1)<module>()
> -> import foo
> (Pdb) import sys; sys.version
> '3.3.0a2+ (default:2c27093fd11f, Apr 30 2012, 10:51:35) \n[GCC 4.3.2]'
> (Pdb) break bar.py:2
> Breakpoint 1 at /path_to/bar.py:2
> (Pdb) continue
>> /path_to/bar.py(2)value()
> -> return 5
> (Pdb) step
> --Return--
>> /path_to/bar.py(2)value()->5
> -> return 5
> (Pdb) step
>> /path_to/main.py(4)<module>()
> -> print('result', result)
> (Pdb)
> =================================================
>
>
> Xavier
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org



-- 
--Guido van Rossum (python.org/~guido)

From barry at python.org  Mon Apr 30 18:09:02 2012
From: barry at python.org (Barry Warsaw)
Date: Mon, 30 Apr 2012 12:09:02 -0400
Subject: [Python-Dev] The step command of pdb is broken
In-Reply-To: <CAP7+vJLjO45kyvX3Ru+r=yFV4qP28CCvt=2LUyGFj3Diz_-2pg@mail.gmail.com>
References: <CAN4cRFwTa-V5nLPnJ1XXho3BXi_UiXWgX1dGFrJ_Qt-eSi6xYQ@mail.gmail.com>
	<CAP7+vJLjO45kyvX3Ru+r=yFV4qP28CCvt=2LUyGFj3Diz_-2pg@mail.gmail.com>
Message-ID: <20120430120902.0f15f53f@resist.wooz.org>

On Apr 30, 2012, at 08:42 AM, Guido van Rossum wrote:

>It would be good if the author of one of the pdb add-ons such as (I
>believe) pdb2 could comment on this bug.

Maybe we should take this opportunity (Python 3.3) to consider adopting one of
the pdb add-ons or borging the best of their bits into the stdlib?

-Barry

From guido at python.org  Mon Apr 30 18:12:49 2012
From: guido at python.org (Guido van Rossum)
Date: Mon, 30 Apr 2012 09:12:49 -0700
Subject: [Python-Dev] The step command of pdb is broken
In-Reply-To: <20120430120902.0f15f53f@resist.wooz.org>
References: <CAN4cRFwTa-V5nLPnJ1XXho3BXi_UiXWgX1dGFrJ_Qt-eSi6xYQ@mail.gmail.com>
	<CAP7+vJLjO45kyvX3Ru+r=yFV4qP28CCvt=2LUyGFj3Diz_-2pg@mail.gmail.com>
	<20120430120902.0f15f53f@resist.wooz.org>
Message-ID: <CAP7+vJKATr9S=-7SXBR2_x5zm4N9Rwf-P4L7hXuZH3Gak32GUg@mail.gmail.com>

On Mon, Apr 30, 2012 at 9:09 AM, Barry Warsaw <barry at python.org> wrote:
> On Apr 30, 2012, at 08:42 AM, Guido van Rossum wrote:
>
>>It would be good if the author of one of the pdb add-ons such as (I
>>believe) pdb2 could comment on this bug.
>
> Maybe we should take this opportunity (Python 3.3) to consider adopting one of
> the pdb add-ons or borging the best of their bits into the stdlib?

I thought we already took most of the useful bits of one of these...
(Admittedly, I'm vague on the details and haven't had the time to research.)

-- 
--Guido van Rossum (python.org/~guido)

From senthil at uthcode.com  Mon Apr 30 18:57:51 2012
From: senthil at uthcode.com (Senthil Kumaran)
Date: Tue, 1 May 2012 00:57:51 +0800
Subject: [Python-Dev] The step command of pdb is broken
In-Reply-To: <20120430120902.0f15f53f@resist.wooz.org>
References: <CAN4cRFwTa-V5nLPnJ1XXho3BXi_UiXWgX1dGFrJ_Qt-eSi6xYQ@mail.gmail.com>
	<CAP7+vJLjO45kyvX3Ru+r=yFV4qP28CCvt=2LUyGFj3Diz_-2pg@mail.gmail.com>
	<20120430120902.0f15f53f@resist.wooz.org>
Message-ID: <20120430165750.GA11689@mathmagic>

On Mon, Apr 30, 2012 at 12:09:02PM -0400, Barry Warsaw wrote:
> Maybe we should take this opportunity (Python 3.3) to consider adopting one of
> the pdb add-ons or borging the best of their bits into the stdlib?

Irrespective of this - Issue13183 seems to be an easy to verify bug in
3.2 and 3.3. I think it would be most visible if you were to use a full
screen debugger: you will notice that the return call indicator has
jumped to the next statement (skipping return) when returning. I
guess that's why Xavier (pyclewn author) noted it.  The fix seems
fine too.

I have just requested some additional info, and this particular one could
then be fixed.

Thanks,
Senthil


From guido at python.org  Mon Apr 30 19:05:34 2012
From: guido at python.org (Guido van Rossum)
Date: Mon, 30 Apr 2012 10:05:34 -0700
Subject: [Python-Dev] The step command of pdb is broken
In-Reply-To: <20120430165750.GA11689@mathmagic>
References: <CAN4cRFwTa-V5nLPnJ1XXho3BXi_UiXWgX1dGFrJ_Qt-eSi6xYQ@mail.gmail.com>
	<CAP7+vJLjO45kyvX3Ru+r=yFV4qP28CCvt=2LUyGFj3Diz_-2pg@mail.gmail.com>
	<20120430120902.0f15f53f@resist.wooz.org>
	<20120430165750.GA11689@mathmagic>
Message-ID: <CAP7+vJLsg=vCzACYCnYbkd5tsxi4Nbje5vV5a-uK7of5jFbcXQ@mail.gmail.com>

Senthil, if you can shepherd this patch to completion that would be great!

On Mon, Apr 30, 2012 at 9:57 AM, Senthil Kumaran <senthil at uthcode.com> wrote:
> On Mon, Apr 30, 2012 at 12:09:02PM -0400, Barry Warsaw wrote:
>> Maybe we should take this opportunity (Python 3.3) to consider adopting one of
>> the pdb add-ons or borging the best of their bits into the stdlib?
>
> Irrespective of this - Issue13183 seems to be an easy to verify bug in
> 3.2 and 3.3. I think it would be most visible if you were to use a full
> screen debugger: you will notice that the return call indicator has
> jumped to the next statement (skipping return) when returning. I
> guess that's why Xavier (pyclewn author) noted it.  The fix seems
> fine too.
>
> I have just requested some additional info, and this particular one could
> then be fixed.
>
> Thanks,
> Senthil
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org



-- 
--Guido van Rossum (python.org/~guido)

From xdegaye at gmail.com  Mon Apr 30 22:08:52 2012
From: xdegaye at gmail.com (Xavier de Gaye)
Date: Mon, 30 Apr 2012 22:08:52 +0200
Subject: [Python-Dev] The step command of pdb is broken
In-Reply-To: <20120430165750.GA11689@mathmagic>
References: <CAN4cRFwTa-V5nLPnJ1XXho3BXi_UiXWgX1dGFrJ_Qt-eSi6xYQ@mail.gmail.com>
	<CAP7+vJLjO45kyvX3Ru+r=yFV4qP28CCvt=2LUyGFj3Diz_-2pg@mail.gmail.com>
	<20120430120902.0f15f53f@resist.wooz.org>
	<20120430165750.GA11689@mathmagic>
Message-ID: <CAN4cRFzZxE5U_Z3cj6EqdDf6-RFLji42Mr8od6HuMFSMerdTRQ@mail.gmail.com>

On Mon, Apr 30, 2012 at 6:57 PM, Senthil Kumaran wrote:
> Irrespective of this - Issue13183 seems to be an easy to verify bug in
> 3.2 and 3.3. I think it would be most visible if you were to use a full
> screen debugger: you will notice that the return call indicator has
> jumped to the next statement (skipping return) when returning. I
> guess that's why Xavier (pyclewn author) noted it.  The fix seems
> fine too.
>
> I have just requested some additional info, and this particular one could
> then be fixed.


Thanks for your help on this issue Senthil.

Xavier

From martin at v.loewis.de  Mon Apr 30 22:46:20 2012
From: martin at v.loewis.de ("Martin v. Löwis")
Date: Mon, 30 Apr 2012 22:46:20 +0200
Subject: [Python-Dev] The step command of pdb is broken
In-Reply-To: <20120430120902.0f15f53f@resist.wooz.org>
References: <CAN4cRFwTa-V5nLPnJ1XXho3BXi_UiXWgX1dGFrJ_Qt-eSi6xYQ@mail.gmail.com>
	<CAP7+vJLjO45kyvX3Ru+r=yFV4qP28CCvt=2LUyGFj3Diz_-2pg@mail.gmail.com>
	<20120430120902.0f15f53f@resist.wooz.org>
Message-ID: <4F9EFA1C.4040403@v.loewis.de>

On 30.04.2012 18:09, Barry Warsaw wrote:
> On Apr 30, 2012, at 08:42 AM, Guido van Rossum wrote:
>
>> It would be good if the author of one of the pdb add-ons such as (I
>> believe) pdb2 could comment on this bug.
>
> Maybe we should take this opportunity (Python 3.3) to consider adopting one of
> the pdb add-ons or borging the best of their bits into the stdlib?

I think the same policies should apply here that I want to see followed for
any other inclusion into the stdlib: we shouldn't "adopt" any code that
is not explicitly contributed by its author.

That's not only about the legal issues, but also about responsibility for
the code. Otherwise, we end up with code that still nobody owns, and the
out-of-core version still gets better support.

Regards,
Martin