From larry at hastings.org  Mon Jun  1 05:52:10 2015
From: larry at hastings.org (Larry Hastings)
Date: Sun, 31 May 2015 20:52:10 -0700
Subject: [Python-Dev] Can someone configure the buildbots to build the
 3.5 branch?
In-Reply-To: <CAKJDb-OicQJQ27kN6iGjGOSb4fN6NWN_iaO493Beiz3q9mowtg@mail.gmail.com>
References: <5567ABD3.6090108@hastings.org>
 <CAKJDb-OicQJQ27kN6iGjGOSb4fN6NWN_iaO493Beiz3q9mowtg@mail.gmail.com>
Message-ID: <556BD6EA.1000507@hastings.org>

On 05/30/2015 08:25 PM, Zachary Ware wrote:
> On Thu, May 28, 2015 at 6:59 PM, Larry Hastings <larry at hastings.org> wrote:
>> The buildbots currently live in a state of denial about the 3.5 branch.
>> Could someone whisper tenderly in their collective shell-like ears so that
>> they start building 3.5, in addition to 3.4 and trunk?
> The 3.5 branch seems to be set up on the buildbots, we'll see how it
> goes when somebody commits something to 3.5.

It looks like they're working--thanks!


//arry/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20150531/5b27f7e4/attachment.html>

From larry at hastings.org  Mon Jun  1 06:37:59 2015
From: larry at hastings.org (Larry Hastings)
Date: Sun, 31 May 2015 21:37:59 -0700
Subject: [Python-Dev] [RELEASED] Python 3.5.0b2 is now available
Message-ID: <556BE1A7.1090101@hastings.org>



On behalf of the Python development community and the Python 3.5 release 
team, I'm relieved to announce the availability of Python 3.5.0b2.  
Python 3.5.0b1 had a major regression (see 
http://bugs.python.org/issue24285 for more information) and as such was 
not suitable for testing Python 3.5. Therefore we've made this extra 
beta release, only a week later.  Anyone trying Python 3.5.0b1 should 
switch immediately to testing with Python 3.5.0b2.

Python 3.5 has now entered "feature freeze".  By default new features 
may no longer be added to Python 3.5.  (However, there are a handful of 
features that weren't quite ready for Python 3.5.0 beta 2; these were 
granted exceptions to the freeze, and are scheduled to be added before 
beta 3.)

This is a preview release, and its use is not recommended for production 
settings.

Three important notes for Windows users about Python 3.5.0b2:

  * If installing Python 3.5.0b2 as a non-privileged user, you may need
    to escalate to administrator privileges to install an update to your
    C runtime libraries.
  * There is now a third type of Windows build for Python 3.5.  In
    addition to the conventional installer and the web-based installer,
    Python 3.5 now has an embeddable release designed to be deployed as
    part of a larger application's installer for apps using or extending
    Python.  During the 3.5 alpha releases, this was an executable
    installer; as of 3.5.0 beta 1 the embeddable build of Python is now
    shipped in a zip file.


You can find Python 3.5.0b2 here:

    https://www.python.org/downloads/release/python-350b2/

Happy hacking,


//arry/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20150531/74487c9e/attachment-0001.html>

From ncoghlan at gmail.com  Mon Jun  1 09:14:01 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 1 Jun 2015 17:14:01 +1000
Subject: [Python-Dev] [RELEASED] Python 3.5.0b2 is now available
In-Reply-To: <556BE1A7.1090101@hastings.org>
References: <556BE1A7.1090101@hastings.org>
Message-ID: <CADiSq7e9vgZYTg+EO1+XdJoquyDbEHBV4+MBHMHTkLdCndj-FQ@mail.gmail.com>

Thanks Larry, and my apologies to the release team for the extra work!

Regards,
Nick.

From arigo at tunes.org  Mon Jun  1 12:44:12 2015
From: arigo at tunes.org (Armin Rigo)
Date: Mon, 1 Jun 2015 12:44:12 +0200
Subject: [Python-Dev] 2.7 is here until 2020,
	please don't call it a waste.
In-Reply-To: <556A45D0.7070606@hastings.org>
References: <34384DEB0F607E42BD61D446586AD4E86B51172C@ORSMSX103.amr.corp.intel.com>
 <CAF4280+-RDftrwUtM4Ro6E7NcMAaQsOvg0MUcNMTXaUo8v9LZg@mail.gmail.com>
 <7A4DF5A2-5D04-4F5A-9883-B5E815A14909@gmail.com>
 <CAP1=2W5cZZzwQj0kqHgKozoKhCL_2fvx2cydNqOZBVW1BNGEdw@mail.gmail.com>
 <CANc-5Uwx5rVJfYfQK6_-SxnJSSLv62t7p4NddC=mNXda=WWJDQ@mail.gmail.com>
 <mk7asm$37i$1@ger.gmane.org>
 <CAP7+vJKDqU8TakYoQ6+z+zt_vs823MGZ9YzcFOBOwusYRtyqKg@mail.gmail.com>
 <CAMpsgwY+6CPWWEr2yyNgiTYCWJ9p67UNy8_p5hR2WQ9MtXXnbQ@mail.gmail.com>
 <CADiSq7eGd-R8PjNYKnn8XmMF7BD-s9cd65E+HNnXzRi-gSa4GA@mail.gmail.com>
 <CAGE7PNJqO36PXi3P8a6Ov1vkN2KigDnaMYQGyFcophYiSe+GNw@mail.gmail.com>
 <5568FAF6.6040802@python.org> <20150530015621.518b537f@fsol>
 <CADiSq7c35TP5pfH6AuEg7-hESf7-gkbLeh40i6rHREhvfqziRA@mail.gmail.com>
 <556906F0.4090504@sdamon.com>
 <CADiSq7daKMxSoFswRaYF2_PKtRGK2nMoJ69N4weKc7E=KKreYA@mail.gmail.com>
 <CABVPEKqW1qoHLSPppqtEELbLR=15cqUKMAk55CTVv+F1HqqUtw@mail.gmail.com>
 <556A45D0.7070606@hastings.org>
Message-ID: <CAMSv6X3awjhAOdCW5oN2Kyz3GiwrRMO_mPCOqQKVaFk6r96OTw@mail.gmail.com>

Hi Larry,

On 31 May 2015 at 01:20, Larry Hastings <larry at hastings.org> wrote:
> p.s. Supporting this patch also helps cut into PyPy's reported performance
> lead--that is, if they ever upgrade speed.pypy.org from comparing against
> Python *2.7.2*.

Right, we should do this upgrade when 2.7.11 is out.

There is some irony in your comment which seems to imply "PyPy is
cheating by comparing with an old Python 2.7.2": it is inside a thread
which started because "we didn't backport performance improvements to
2.7.x so far".

Just to convince myself, I just ran a performance comparison.  I ran
the same benchmark suite as speed.pypy.org, with 2.7.2 against 2.7.10,
both freshly compiled with no "configure" options at all.  The
differences are usually in the noise, but range from +5% to... -60%.
If anything, this seems to show that CPython should take more care
about performance regressions.  If someone is interested:

* "raytrace-simple" is 1.19 times slower
* "bm_mako" is 1.29 times slower
* "spitfire_cstringio" is 1.60 times slower
* a number of other benchmarks are around 1.08.

The "7.0x faster" number on speed.pypy.org would be significantly
*higher* if we upgraded the baseline to 2.7.10 now.
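For illustration, a comparison like this can be driven by a small script that times the same snippet under two interpreters and reports the ratio -- a simplified stand-in, not the actual benchmark runner, and the micro-benchmark here is made up:

```python
import subprocess
import sys

# A stand-in micro-benchmark; the real comparison ran the full
# speed.pypy.org suite, not this one-liner.
BENCH = ("import time; t = time.perf_counter(); "
         "sum(i * i for i in range(200_000)); "
         "print(time.perf_counter() - t)")

def bench(python, repeats=3):
    """Run the snippet under the given interpreter, return the best time."""
    times = []
    for _ in range(repeats):
        out = subprocess.run([python, "-c", BENCH],
                             capture_output=True, text=True, check=True)
        times.append(float(out.stdout))
    return min(times)

if __name__ == "__main__":
    # Hypothetical: point these at the two builds being compared,
    # e.g. a freshly compiled 2.7.2 and 2.7.10.
    old, new = bench(sys.executable), bench(sys.executable)
    print("second build takes %.2fx the time of the first" % (new / old))
```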


A bientôt,

Armin.

From mal at egenix.com  Mon Jun  1 13:14:27 2015
From: mal at egenix.com (M.-A. Lemburg)
Date: Mon, 01 Jun 2015 13:14:27 +0200
Subject: [Python-Dev] 2.7 is here until 2020,
	please don't call it a waste.
In-Reply-To: <CAMSv6X3awjhAOdCW5oN2Kyz3GiwrRMO_mPCOqQKVaFk6r96OTw@mail.gmail.com>
References: <34384DEB0F607E42BD61D446586AD4E86B51172C@ORSMSX103.amr.corp.intel.com>	<7A4DF5A2-5D04-4F5A-9883-B5E815A14909@gmail.com>	<CAP1=2W5cZZzwQj0kqHgKozoKhCL_2fvx2cydNqOZBVW1BNGEdw@mail.gmail.com>	<CANc-5Uwx5rVJfYfQK6_-SxnJSSLv62t7p4NddC=mNXda=WWJDQ@mail.gmail.com>	<mk7asm$37i$1@ger.gmane.org>	<CAP7+vJKDqU8TakYoQ6+z+zt_vs823MGZ9YzcFOBOwusYRtyqKg@mail.gmail.com>	<CAMpsgwY+6CPWWEr2yyNgiTYCWJ9p67UNy8_p5hR2WQ9MtXXnbQ@mail.gmail.com>	<CADiSq7eGd-R8PjNYKnn8XmMF7BD-s9cd65E+HNnXzRi-gSa4GA@mail.gmail.com>	<CAGE7PNJqO36PXi3P8a6Ov1vkN2KigDnaMYQGyFcophYiSe+GNw@mail.gmail.com>	<5568FAF6.6040802@python.org>
 <20150530015621.518b537f@fsol>	<CADiSq7c35TP5pfH6AuEg7-hESf7-gkbLeh40i6rHREhvfqziRA@mail.gmail.com>	<556906F0.4090504@sdamon.com>	<CADiSq7daKMxSoFswRaYF2_PKtRGK2nMoJ69N4weKc7E=KKreYA@mail.gmail.com>	<CABVPEKqW1qoHLSPppqtEELbLR=15cqUKMAk55CTVv+F1HqqUtw@mail.gmail.com>	<556A45D0.7070606@hastings.org>
 <CAMSv6X3awjhAOdCW5oN2Kyz3GiwrRMO_mPCOqQKVaFk6r96OTw@mail.gmail.com>
Message-ID: <556C3E93.50108@egenix.com>

On 01.06.2015 12:44, Armin Rigo wrote:
> Hi Larry,
> 
> On 31 May 2015 at 01:20, Larry Hastings <larry at hastings.org> wrote:
>> p.s. Supporting this patch also helps cut into PyPy's reported performance
>> lead--that is, if they ever upgrade speed.pypy.org from comparing against
>> Python *2.7.2*.
> 
> Right, we should do this upgrade when 2.7.11 is out.
> 
> There is some irony in your comment which seems to imply "PyPy is
> cheating by comparing with an old Python 2.7.2": it is inside a thread
> which started because "we didn't backport performance improvements to
> 2.7.x so far".
> 
> Just to convince myself, I just ran a performance comparison.  I ran
> the same benchmark suite as speed.pypy.org, with 2.7.2 against 2.7.10,
> both freshly compiled with no "configure" options at all.  The
> differences are usually in the noise, but range from +5% to... -60%.
> If anything, this seems to show that CPython should take more care
> about performance regressions.  If someone is interested:
> 
> * "raytrace-simple" is 1.19 times slower
> * "bm_mako" is 1.29 times slower
> * "spitfire_cstringio" is 1.60 times slower
> * a number of other benchmarks are around 1.08.
> 
> The "7.0x faster" number on speed.pypy.org would be significantly
> *higher* if we upgraded the baseline to 2.7.10 now.

If someone were to volunteer to set up and run speed.python.org,
I think we could add some additional focus on performance
regressions. Right now, we don't have any way of reliably
and reproducibly testing Python performance.

Hint: The PSF would most likely fund such adventures :-)

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 01 2015)
>>> Python Projects, Coaching and Consulting ...  http://www.egenix.com/
>>> mxODBC Plone/Zope Database Adapter ...       http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________

::::: Try our mxODBC.Connect Python Database Interface for free ! ::::::

   eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
    D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
           Registered at Amtsgericht Duesseldorf: HRB 46611
               http://www.egenix.com/company/contact/

From pmiscml at gmail.com  Mon Jun  1 13:38:48 2015
From: pmiscml at gmail.com (Paul Sokolovsky)
Date: Mon, 1 Jun 2015 14:38:48 +0300
Subject: [Python-Dev] 2.7 is here until 2020,
 please don't call it a waste.
In-Reply-To: <556C3E93.50108@egenix.com>
References: <34384DEB0F607E42BD61D446586AD4E86B51172C@ORSMSX103.amr.corp.intel.com>
 <CAP1=2W5cZZzwQj0kqHgKozoKhCL_2fvx2cydNqOZBVW1BNGEdw@mail.gmail.com>
 <CANc-5Uwx5rVJfYfQK6_-SxnJSSLv62t7p4NddC=mNXda=WWJDQ@mail.gmail.com>
 <mk7asm$37i$1@ger.gmane.org>
 <CAP7+vJKDqU8TakYoQ6+z+zt_vs823MGZ9YzcFOBOwusYRtyqKg@mail.gmail.com>
 <CAMpsgwY+6CPWWEr2yyNgiTYCWJ9p67UNy8_p5hR2WQ9MtXXnbQ@mail.gmail.com>
 <CADiSq7eGd-R8PjNYKnn8XmMF7BD-s9cd65E+HNnXzRi-gSa4GA@mail.gmail.com>
 <CAGE7PNJqO36PXi3P8a6Ov1vkN2KigDnaMYQGyFcophYiSe+GNw@mail.gmail.com>
 <5568FAF6.6040802@python.org> <20150530015621.518b537f@fsol>
 <CADiSq7c35TP5pfH6AuEg7-hESf7-gkbLeh40i6rHREhvfqziRA@mail.gmail.com>
 <556906F0.4090504@sdamon.com>
 <CADiSq7daKMxSoFswRaYF2_PKtRGK2nMoJ69N4weKc7E=KKreYA@mail.gmail.com>
 <CABVPEKqW1qoHLSPppqtEELbLR=15cqUKMAk55CTVv+F1HqqUtw@mail.gmail.com>
 <556A45D0.7070606@hastings.org>
 <CAMSv6X3awjhAOdCW5oN2Kyz3GiwrRMO_mPCOqQKVaFk6r96OTw@mail.gmail.com>
 <556C3E93.50108@egenix.com>
Message-ID: <20150601143848.477d4ce4@x230>

Hello,

On Mon, 01 Jun 2015 13:14:27 +0200
"M.-A. Lemburg" <mal at egenix.com> wrote:

[]

> > The "7.0x faster" number on speed.pypy.org would be significantly
> > *higher* if we upgraded the baseline to 2.7.10 now.
> 
> If someone were to volunteer to set up and run speed.python.org,
> I think we could add some additional focus on performance
> regressions. Right now, we don't have any way of reliably
> and reproducibly testing Python performance.

Just as a note, we had similar concerns with performance and other
regressions in MicroPython, and in the end,
http://micropython.org/resources/code-dashboard/ was set up. Performance
tracking is simplistic so far and consists only of running pystones;
mostly the executable size for different configurations is tracked, as
that's the most distinctive trait of MicroPython.


-- 
Best regards,
 Paul                          mailto:pmiscml at gmail.com

From p.f.moore at gmail.com  Mon Jun  1 15:52:39 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Mon, 1 Jun 2015 14:52:39 +0100
Subject: [Python-Dev] Windows new 3.5 installer: Command line uninstall
Message-ID: <CACac1F94Yv+-eTZN3OxV8K09ubH5rh+zcv0H2rTd=+LEMj6YAA@mail.gmail.com>

I'm trying to script a command line uninstall of Python 3.5. The new
installer has a nice, well-documented command line interface, but
there's nothing much about how to uninstall.

Looking at the installed products, I see

>get-wmiobject Win32_Product -filter 'name like "Python 3.5.0b2%"' | select Name

Name
----
Python 3.5.0b2 pip Bootstrap (64-bit)
Python 3.5.0b2 Development Libraries (64-bit)
Python 3.5.0b2 C Runtime (64-bit)
Python 3.5.0b2 Tcl/Tk Support (64-bit)
Python 3.5.0b2 Standard Library (64-bit)
Python 3.5.0b2 Executables (64-bit)
Python 3.5.0b2 Core Interpreter (64-bit)
Python 3.5.0b2 Launcher (32-bit)
Python 3.5.0b2 Documentation (64-bit)
Python 3.5.0b2 Utility Scripts (64-bit)
Python 3.5.0b2 Test Suite (64-bit)

Normally I'd be able to call the Uninstall method on the installed
product, but here, I'm not sure which of the above I'd call
"Uninstall" on (there's only one "Programs and Features" entry for
3.5, and I can't tell which of these is equivalent to it - or if I
need to uninstall each in turn, I don't know what order they need to
be done in).

Specifically

(get-wmiobject -query 'select * from Win32_Product  where name =
"Python 2.7.8 (64-bit)"').Uninstall()

is an automated, command line silent uninstall of Python 2.7.8. What
(if anything) would be the equivalent for 3.5.0b2?

Paul

From encukou at gmail.com  Mon Jun  1 16:38:35 2015
From: encukou at gmail.com (Petr Viktorin)
Date: Mon, 1 Jun 2015 16:38:35 +0200
Subject: [Python-Dev] Avoiding reference leaks in heap types with custom
	tp_dealloc
Message-ID: <CA+=+wqCrP3hHqCFPUo0_-6MZ6pbB-LJ7+=e0iPJvUXUcWY6nRw@mail.gmail.com>

Hello,
The new test_importlib.extension.test_loader is currently leaking
references (issue24268). There is a simple hack to stop this, but I'm
inclined to not apply quick hacks and rather dig into the root cause.
(It's a test module, the refleaks are relatively harmless.)
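(For context, the kind of leak hunting regrtest's -R option does can be sketched roughly like this; a simplified illustration, not the actual regrtest code, and sys.gettotalrefcount only exists on --with-pydebug builds:)

```python
import sys

def check_refleak(workload, warmup=3, runs=5):
    """Run the workload repeatedly and watch the interpreter-wide refcount.

    Returns True if the total refcount grows after every run (a likely
    leak), False if it stays flat at some point, or None on release
    builds where the counter is unavailable.
    """
    get_total = getattr(sys, "gettotalrefcount", None)
    if get_total is None:
        return None  # only meaningful on a --with-pydebug build
    for _ in range(warmup):
        workload()          # let caches and interned objects settle
    counts = []
    for _ in range(runs):
        workload()
        counts.append(get_total())
    return all(b > a for a, b in zip(counts, counts[1:]))
```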

The tests are based directly on the "xxlimited" example,
xxlimited.Xxo, which exhibits the same bug -- it's just not tested.
It's caused by a combination of a few factors, but I'm not sure
what's a bug and what's just undocumented behavior, so I'm asking for
input to put me on the right track.

As reported in issue16690, heap types with a naïve custom tp_dealloc
leak references to the type when instantiated. According to [0], it
seems that tp_dealloc should check if it has been overridden, and if
so, decref the type. This needs to be documented (regardless of the
solution to the other problems), and I intend to document it.
We can change xxlimited to do the check and decref, but isn't it too
ugly for a module that showcases the extension module API?
(xxlimited.Xxo can technically skip the check, since it doesn't allow
subclasses, but that would be setting a nasty trap for anyone learning
from that example.)

The nice way out would be taking advantage of PEP 442: xxlimited.Xxo
can ditch tp_dealloc in favor of tp_traverse and tp_finalize (the
former of which it needs anyway to behave correctly). Unfortunately,
tp_finalize is not available in the stable ABI (issue24345). I think
it should be added; is it too late for 3.5?

Another problem is that xxlimited is untested. It's only built for
non-debug builds, because Py_LIMITED_API and Py_DEBUG are
incompatible. Would it make sense to build and test it without
Py_LIMITED_API in debug mode, instead of not building it at all?


[0] http://bugs.python.org/issue15653#msg168449

From solipsis at pitrou.net  Mon Jun  1 17:33:19 2015
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Mon, 1 Jun 2015 17:33:19 +0200
Subject: [Python-Dev] Avoiding reference leaks in heap types with custom
	tp_dealloc
References: <CA+=+wqCrP3hHqCFPUo0_-6MZ6pbB-LJ7+=e0iPJvUXUcWY6nRw@mail.gmail.com>
Message-ID: <20150601173319.6e321656@fsol>

On Mon, 1 Jun 2015 16:38:35 +0200
Petr Viktorin <encukou at gmail.com> wrote:
> Hello,
> The new test_importlib.extension.test_loader is currently leaking
> references (issue24268). There is a simple hack to stop this, but I'm
> inclined to not apply quick hacks and rather dig into the root cause.
> (It's a test module, the refleaks are relatively harmless.)
> 
> The tests are based directly on the "xxlimited" example,
> xxlimited.Xxo, which exhibits the same bug -- it's just not tested.
> It's caused by a combination of a few factors, but I'm not sure
> what's a bug and what's just undocumented behavior, so I'm asking for
> input to put me on the right track.

Yes, the issue is really nasty. There are several situations to take
into account:
- derived heap type inherits from base heap type, both have custom
  deallocs
- derived heap type inherits from base heap type, only one has
  a custom dealloc
- derived heap type inherits from Python-defined class (the latter
  having subtype_dealloc)
- derived heap type inherits from static type
- ...

It is unreasonable to expect developers of C extensions to come up with
the correct incantation (and ideally, they shouldn't have to think
about it at all).

> The nice way out would be taking advantage of PEP 442: xxlimited.Xxo
> can ditch tp_dealloc in favor of tp_traverse and tp_finalize (the
> former of which it needs anyway to behave correctly). Unfortunately,
> tp_finalize is not available in the stable ABI (issue24345). I think
> it should be added; is it too late for 3.5?

Well, but.... the stable ABI is supposed to be a subset of the API
that's safe to program against, regardless of the Python version (at
least from the point where the stable ABI was introduced). What happens
if you define a Py_tp_finalize and run your C extension type on a
pre-3.5 version? Do you get an error at definition time? A resource
leak? A crash?

I don't get why Benjamin committed the change so quickly.

Regards

Antoine.



From encukou at gmail.com  Mon Jun  1 17:52:13 2015
From: encukou at gmail.com (Petr Viktorin)
Date: Mon, 1 Jun 2015 17:52:13 +0200
Subject: [Python-Dev] Avoiding reference leaks in heap types with custom
	tp_dealloc
In-Reply-To: <20150601173319.6e321656@fsol>
References: <CA+=+wqCrP3hHqCFPUo0_-6MZ6pbB-LJ7+=e0iPJvUXUcWY6nRw@mail.gmail.com>
 <20150601173319.6e321656@fsol>
Message-ID: <CA+=+wqAvrBnvNH11xE6iEkoWYGtKZkLnbUZyysBY8Nosag4wow@mail.gmail.com>

On Mon, Jun 1, 2015 at 5:33 PM, Antoine Pitrou <solipsis at pitrou.net> wrote:
> On Mon, 1 Jun 2015 16:38:35 +0200
> Petr Viktorin <encukou at gmail.com> wrote:
[...]
>> The nice way out would be taking advantage of PEP 442: xxlimited.Xxo
>> can ditch tp_dealloc in favor of tp_traverse and tp_finalize (the
>> former of which it needs anyway to behave correctly). Unfortunately,
>> tp_finalize is not available in the stable ABI (issue24345). I think
>> it should be added; is it too late for 3.5?
>
> Well, but.... the stable ABI is supposed to be a subset of the API
> that's safe to program against, regardless of the Python version (at
> least from the point where the stable ABI was introduced).

That part's not true. From the PEP:
During evolution of Python, new ABI functions will be added.
Applications using them will then have a requirement on a minimum
version of Python; this PEP provides no mechanism for such
applications to fall back when the Python library is too old.

> What happens
> if you define a Py_tp_finalize and run your C extension type on a
> pre-3.5 version? Do you get an error at definition time? A resource
> leak? A crash?

Looking at the PyType_FromSpecWithBases code, you should get
RuntimeError("invalid slot offset") in this case.

From solipsis at pitrou.net  Mon Jun  1 18:00:13 2015
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Mon, 1 Jun 2015 18:00:13 +0200
Subject: [Python-Dev] Avoiding reference leaks in heap types with custom
	tp_dealloc
In-Reply-To: <CA+=+wqAvrBnvNH11xE6iEkoWYGtKZkLnbUZyysBY8Nosag4wow@mail.gmail.com>
References: <CA+=+wqCrP3hHqCFPUo0_-6MZ6pbB-LJ7+=e0iPJvUXUcWY6nRw@mail.gmail.com>
 <20150601173319.6e321656@fsol>
 <CA+=+wqAvrBnvNH11xE6iEkoWYGtKZkLnbUZyysBY8Nosag4wow@mail.gmail.com>
Message-ID: <20150601180013.6815f6ab@fsol>

On Mon, 1 Jun 2015 17:52:13 +0200
Petr Viktorin <encukou at gmail.com> wrote:
> On Mon, Jun 1, 2015 at 5:33 PM, Antoine Pitrou <solipsis at pitrou.net> wrote:
> > On Mon, 1 Jun 2015 16:38:35 +0200
> > Petr Viktorin <encukou at gmail.com> wrote:
> [...]
> >> The nice way out would be taking advantage of PEP 442: xxlimited.Xxo
> >> can ditch tp_dealloc in favor of tp_traverse and tp_finalize (the
> >> former of which it needs anyway to behave correctly). Unfortunately,
> >> tp_finalize is not available in the stable ABI (issue24345). I think
> >> it should be added; is it too late for 3.5?
> >
> > Well, but.... the stable ABI is supposed to be a subset of the API
> > that's safe to program against, regardless of the Python version (at
> > least from the point where the stable ABI was introduced).
> 
> That part's not true. From the PEP:
> During evolution of Python, new ABI functions will be added.
> Applications using them will then have a requirement on a minimum
> version of Python; this PEP provides no mechanism for such
> applications to fall back when the Python library is too old.

I think we have been lax with additions to the stable ABI:
apparently, they should be conditioned on the API version requested by
the user.  For example, in pystate.h:

#if !defined(Py_LIMITED_API) || Py_LIMITED_API+0 >= 0x03030000
/* New in 3.3 */
PyAPI_FUNC(int) PyState_AddModule(PyObject*, struct PyModuleDef*);
PyAPI_FUNC(int) PyState_RemoveModule(struct PyModuleDef*);
#endif

(those were added by Martin, so I assume he knew what he was doing :-))

This way, straying outside the API version you declared fails at
compile time, not at runtime. However, it's also more work to do
when adding stuff, which is why we tend to skimp on it.

Regards

Antoine.

From benjamin at python.org  Mon Jun  1 18:09:37 2015
From: benjamin at python.org (Benjamin Peterson)
Date: Mon, 01 Jun 2015 12:09:37 -0400
Subject: [Python-Dev] Avoiding reference leaks in heap types with custom
 tp_dealloc
In-Reply-To: <20150601173319.6e321656@fsol>
References: <CA+=+wqCrP3hHqCFPUo0_-6MZ6pbB-LJ7+=e0iPJvUXUcWY6nRw@mail.gmail.com>
 <20150601173319.6e321656@fsol>
Message-ID: <1433174977.3042714.283674617.46779355@webmail.messagingengine.com>



On Mon, Jun 1, 2015, at 11:33, Antoine Pitrou wrote:
> On Mon, 1 Jun 2015 16:38:35 +0200
> Petr Viktorin <encukou at gmail.com> wrote:
> > Hello,
> > The new test_importlib.extension.test_loader is currently leaking
> > references (issue24268). There is a simple hack to stop this, but I'm
> > inclined to not apply quick hacks and rather dig into the root cause.
> > (It's a test module, the refleaks are relatively harmless.)
> > 
> > The tests are based directly on the "xxlimited" example,
> > xxlimited.Xxo, which exhibits the same bug -- it's just not tested.
> > It's caused by a combination of a few factors, but I'm not sure
> > what's a bug and what's just undocumented behavior, so I'm asking for
> > input to put me on the right track.
> 
> Yes, the issue is really nasty. There are several situations to take
> into account:
> - derived heap type inherits from base heap type, both have custom
>   deallocs
> - derived heap type inherits from base heap type, only one has
>   a custom dealloc
> - derived heap type inherits from Python-defined class (the latter
>   having subtype_dealloc)
> - derived heap type inherits from static type
> - ...
> 
> It is unreasonable to expect developers of C extensions to come up with
> the correct incantation (and ideally, they shouldn't have to think
> about it at all).
> 
> > The nice way out would be taking advantage of PEP 442: xxlimited.Xxo
> > can ditch tp_dealloc in favor of tp_traverse and tp_finalize (the
> > former of which it needs anyway to behave correctly). Unfortunately,
> > tp_finalize is not available in the stable ABI (issue24345). I think
> > it should be added; is it too late for 3.5?
> 
> Well, but.... the stable ABI is supposed to be a subset of the API
> that's safe to program against, regardless of the Python version (at
> least from the point where the stable ABI was introduced). What happens
> if you define a Py_tp_finalize and run your C extension type on a
> pre-3.5 version? Do you get an error at definition time? A resource
> leak? A crash?
> 
> I don't get why Benjamin committed the change so quickly.

I thought all the slots were supposed to be available through the stable
ABI, since you need them to properly define C types.

From solipsis at pitrou.net  Mon Jun  1 18:52:03 2015
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Mon, 1 Jun 2015 18:52:03 +0200
Subject: [Python-Dev] Avoiding reference leaks in heap types with custom
	tp_dealloc
References: <CA+=+wqCrP3hHqCFPUo0_-6MZ6pbB-LJ7+=e0iPJvUXUcWY6nRw@mail.gmail.com>
 <20150601173319.6e321656@fsol>
 <1433174977.3042714.283674617.46779355@webmail.messagingengine.com>
Message-ID: <20150601185203.7bc96ca2@fsol>

On Mon, 01 Jun 2015 12:09:37 -0400
Benjamin Peterson <benjamin at python.org> wrote:
> 
> I thought all the slots were supposed to be available through the stable
> ABI, since you need them to properly define c types.

You're right, I was thinking specifically about the ABI versioning
issues (which I exposed in my reply to Petr).

Regards

Antoine.



From yselivanov.ml at gmail.com  Mon Jun  1 20:04:33 2015
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Mon, 01 Jun 2015 14:04:33 -0400
Subject: [Python-Dev] [Python-checkins] peps: We have a 3.6 release PEP.
In-Reply-To: <20150601175955.83475.33562@psf.io>
References: <20150601175955.83475.33562@psf.io>
Message-ID: <556C9EB1.1040508@gmail.com>

It looks like we now have two 3.6 release PEPs: 494 & 495

Yury

On 2015-06-01 1:59 PM, barry.warsaw wrote:
> https://hg.python.org/peps/rev/a50d51e8c748
> changeset:   5888:a50d51e8c748
> user:        Barry Warsaw <barry at python.org>
> date:        Mon Jun 01 13:59:32 2015 -0400
> summary:
>    We have a 3.6 release PEP.
>
> files:
>    pep-0495.txt |  66 ++++++++++++++++++++++++++++++++++++++++
>    1 files changed, 66 insertions(+), 0 deletions(-)
>
>
> diff --git a/pep-0495.txt b/pep-0495.txt
> new file mode 100644
> --- /dev/null
> +++ b/pep-0495.txt
> @@ -0,0 +1,66 @@
> +PEP: 495
> +Title: Python 3.6 Release Schedule
> +Version: $Revision$
> +Last-Modified: $Date$
> +Author: Ned Deily <nad at acm.org>
> +Status: Active
> +Type: Informational
> +Content-Type: text/x-rst
> +Created: 30-May-2015
> +Python-Version: 3.6
> +
> +
> +Abstract
> +========
> +
> +This document describes the development and release schedule for Python 3.6.
> +The schedule primarily concerns itself with PEP-sized items.
> +
> +.. Small features may be added up to the first beta release.  Bugs may be
> +   fixed until the final release, which is tentatively planned for late 2016.
> +
> +
> +Release Manager and Crew
> +========================
> +
> +- 3.6 Release Manager: Ned Deily
> +- Windows installers: Steve Dower
> +- Mac installers: Ned Deily
> +- Documentation: Georg Brandl
> +
> +
> +Release Schedule
> +================
> +
> +The releases:
> +
> +- 3.6.0 alpha 1: TBD
> +- 3.6.0 beta 1: TBD
> +- 3.6.0 candidate 1: TBD
> +- 3.6.0 final: TBD (late 2016?)
> +
> +(Beta 1 is also "feature freeze"--no new features beyond this point.)
> +
> +
> +Features for 3.6
> +================
> +
> +Proposed changes for 3.6:
> +
> +* TBD
> +
> +
> +Copyright
> +=========
> +
> +This document has been placed in the public domain.
> +
> +
> +..
> +  Local Variables:
> +  mode: indented-text
> +  indent-tabs-mode: nil
> +  sentence-end-double-space: t
> +  fill-column: 70
> +  coding: utf-8
> +  End:
>
>
>
> _______________________________________________
> Python-checkins mailing list
> Python-checkins at python.org
> https://mail.python.org/mailman/listinfo/python-checkins


From barry at python.org  Mon Jun  1 20:07:45 2015
From: barry at python.org (Barry Warsaw)
Date: Mon, 1 Jun 2015 14:07:45 -0400
Subject: [Python-Dev] RM for 3.6?
References: <mkbnml$fj$1@ger.gmane.org>
Message-ID: <20150601140745.6664919e@anarchist.wooz.org>

On May 30, 2015, at 10:09 AM, Serhiy Storchaka wrote:

>Isn't it time to assign a release manager for 3.6-3.7?

Indeed!  Please welcome Ned Deily as RM for 3.6:

https://www.python.org/dev/peps/pep-0494/

Cheers,
-Barry
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: OpenPGP digital signature
URL: <http://mail.python.org/pipermail/python-dev/attachments/20150601/f8a0e9ab/attachment.sig>

From stefan_ml at behnel.de  Mon Jun  1 20:24:54 2015
From: stefan_ml at behnel.de (Stefan Behnel)
Date: Mon, 01 Jun 2015 20:24:54 +0200
Subject: [Python-Dev] RM for 3.6?
In-Reply-To: <20150601140745.6664919e@anarchist.wooz.org>
References: <mkbnml$fj$1@ger.gmane.org>
 <20150601140745.6664919e@anarchist.wooz.org>
Message-ID: <mki81n$tm$1@ger.gmane.org>

Barry Warsaw schrieb am 01.06.2015 um 20:07:
> On May 30, 2015, at 10:09 AM, Serhiy Storchaka wrote:
>> Isn't it time to assign a release manager for 3.6-3.7?
> 
> Indeed!  Please welcome Ned Deily as RM for 3.6:
> 
> https://www.python.org/dev/peps/pep-0494/

Does he know already?

Stefan



From nad at acm.org  Mon Jun  1 20:29:49 2015
From: nad at acm.org (Ned Deily)
Date: Mon, 1 Jun 2015 11:29:49 -0700
Subject: [Python-Dev] [Python-checkins] peps: We have a 3.6 release PEP.
In-Reply-To: <556C9EB1.1040508@gmail.com>
References: <20150601175955.83475.33562@psf.io> <556C9EB1.1040508@gmail.com>
Message-ID: <2E08DB4C-7AAC-403A-9F04-EC162F1DF36D@acm.org>

> On Jun 1, 2015, at 11:04, Yury Selivanov <yselivanov.ml at gmail.com> wrote:
> 
> It looks like we now have two 3.6 release PEPs: 494 & 495

Sorry about the confusion.  The second checkin (for 495) has been reverted; PEP 494 will be the ongoing PEP.



From benjamin at python.org  Mon Jun  1 20:33:54 2015
From: benjamin at python.org (Benjamin Peterson)
Date: Mon, 01 Jun 2015 14:33:54 -0400
Subject: [Python-Dev] RM for 3.6?
In-Reply-To: <mki81n$tm$1@ger.gmane.org>
References: <mkbnml$fj$1@ger.gmane.org>
 <20150601140745.6664919e@anarchist.wooz.org>
 <mki81n$tm$1@ger.gmane.org>
Message-ID: <1433183634.12976.283864353.3E1122A9@webmail.messagingengine.com>



On Mon, Jun 1, 2015, at 14:24, Stefan Behnel wrote:
> Barry Warsaw schrieb am 01.06.2015 um 20:07:
> > On May 30, 2015, at 10:09 AM, Serhiy Storchaka wrote:
> >> Isn't it time to assign a release manager for 3.6-3.7?
> > 
> > Indeed!  Please welcome Ned Deily as RM for 3.6:
> > 
> > https://www.python.org/dev/peps/pep-0494/
> 
> Does he know already?

The suck^H^H^H^H man even volunteered!

From nad at acm.org  Mon Jun  1 20:35:35 2015
From: nad at acm.org (Ned Deily)
Date: Mon, 1 Jun 2015 11:35:35 -0700
Subject: [Python-Dev] RM for 3.6?
In-Reply-To: <1433183634.12976.283864353.3E1122A9@webmail.messagingengine.com>
References: <mkbnml$fj$1@ger.gmane.org>
 <20150601140745.6664919e@anarchist.wooz.org> <mki81n$tm$1@ger.gmane.org>
 <1433183634.12976.283864353.3E1122A9@webmail.messagingengine.com>
Message-ID: <F1EC8666-5DC0-433F-BEF6-C659E97B8991@acm.org>

> On Jun 1, 2015, at 11:33, Benjamin Peterson <benjamin at python.org> wrote:
> 
> 
> 
> On Mon, Jun 1, 2015, at 14:24, Stefan Behnel wrote:
>> Barry Warsaw schrieb am 01.06.2015 um 20:07:
>>> On May 30, 2015, at 10:09 AM, Serhiy Storchaka wrote:
>>>> Isn't it time to assign a release manager for 3.6-3.7?
>>> 
>>> Indeed!  Please welcome Ned Deily as RM for 3.6:
>>> 
>>> https://www.python.org/dev/peps/pep-0494/
>> 
>> Does he know already?
> 
> The suck^H^H^H^H man even volunteered!

I thought I was volunteering to get a pony.  I was misinformed.


--
  Ned Deily
  nad at acm.org -- []



From skip.montanaro at gmail.com  Mon Jun  1 21:05:16 2015
From: skip.montanaro at gmail.com (Skip Montanaro)
Date: Mon, 1 Jun 2015 14:05:16 -0500
Subject: [Python-Dev] RM for 3.6?
In-Reply-To: <F1EC8666-5DC0-433F-BEF6-C659E97B8991@acm.org>
References: <mkbnml$fj$1@ger.gmane.org>
 <20150601140745.6664919e@anarchist.wooz.org>
 <mki81n$tm$1@ger.gmane.org>
 <1433183634.12976.283864353.3E1122A9@webmail.messagingengine.com>
 <F1EC8666-5DC0-433F-BEF6-C659E97B8991@acm.org>
Message-ID: <CANc-5UxY-jX-vV4p4PcRyS=EaCMA8txenJjhH62cyR1CD4BjGw@mail.gmail.com>

On Mon, Jun 1, 2015 at 1:35 PM, Ned Deily <nad at acm.org> wrote:

> I thought I was volunteering to get a pony.  I was misinformed.


Ned,

Not to worry. I'm sure that by the time 3.6a0 is due, the PSF will be able
to scrape together the funds for a pony, perhaps even one with super powers:



Skip
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20150601/a6811211/attachment.html>

From Steve.Dower at microsoft.com  Mon Jun  1 21:21:32 2015
From: Steve.Dower at microsoft.com (Steve Dower)
Date: Mon, 1 Jun 2015 19:21:32 +0000
Subject: [Python-Dev] Windows new 3.5 installer: Command line uninstall
In-Reply-To: <CACac1F94Yv+-eTZN3OxV8K09ubH5rh+zcv0H2rTd=+LEMj6YAA@mail.gmail.com>
References: <CACac1F94Yv+-eTZN3OxV8K09ubH5rh+zcv0H2rTd=+LEMj6YAA@mail.gmail.com>
Message-ID: <BY1PR03MB14664BFF15D6CA095D3953B1F5B60@BY1PR03MB1466.namprd03.prod.outlook.com>

You need the original exe and can pass /uninstall to that.

There is an uninstall entry in the registry that uses the cached version of the exe (via Programs and Features), but I don't know why WMI doesn't see it. I'll check that out a bit later.

You can uninstall with the web installer exe, and it won't download anything, but the version does have to match exactly.
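
For scripting this, the "original exe plus /uninstall" route could be wrapped along these lines (a minimal sketch: the installer path is made up, and the /quiet flag for a silent run is an assumption about the new installer, not something confirmed in this thread):

```python
def build_uninstall_command(installer_exe, quiet=True):
    """Build the command line for uninstalling via the original
    (or cached) installer exe, as described above."""
    cmd = [installer_exe, "/uninstall"]
    if quiet:
        # Assumption: /quiet suppresses the UI for uninstalls just
        # as it does for installs; drop it for the interactive UI.
        cmd.append("/quiet")
    return cmd

if __name__ == "__main__":
    # Hypothetical path to the downloaded installer.
    cmd = build_uninstall_command(r"C:\Downloads\python-3.5.0b2-amd64.exe")
    print(" ".join(cmd))
    # On Windows, subprocess.check_call(cmd) would actually run it.
```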

Cheers,
Steve

Top-posted from my Windows Phone
________________________________
From: Paul Moore<mailto:p.f.moore at gmail.com>
Sent: ?6/?1/?2015 6:54
To: Python Dev<mailto:python-dev at python.org>
Subject: [Python-Dev] Windows new 3.5 installer: Command line uninstall

I'm trying to script a command line uninstall of Python 3.5. The new
installer has a nice well-documented command line interface, but
there's nothing much about how to uninstall.

Looking at the installed products, I see

>get-wmiobject Win32_Product -filter 'name like "Python 3.5.0b2%"' | select Name

Name
----
Python 3.5.0b2 pip Bootstrap (64-bit)
Python 3.5.0b2 Development Libraries (64-bit)
Python 3.5.0b2 C Runtime (64-bit)
Python 3.5.0b2 Tcl/Tk Support (64-bit)
Python 3.5.0b2 Standard Library (64-bit)
Python 3.5.0b2 Executables (64-bit)
Python 3.5.0b2 Core Interpreter (64-bit)
Python 3.5.0b2 Launcher (32-bit)
Python 3.5.0b2 Documentation (64-bit)
Python 3.5.0b2 Utility Scripts (64-bit)
Python 3.5.0b2 Test Suite (64-bit)

Normally I'd be able to call the Uninstall method on the installed
product, but here I'm not sure which of the above I'd call
"Uninstall" on (there's only one "Programs and Features" entry for
3.5, and I can't tell which of these is equivalent to it - or, if I
need to uninstall each in turn, what order they need to be done in).

Specifically

(get-wmiobject -query 'select * from Win32_Product  where name =
"Python 2.7.8 (64-bit)"').Uninstall()

is an automated, command line silent uninstall of Python 2.7.8. What
(if anything) would be the equivalent for 3.5.0b2?

Paul
_______________________________________________
Python-Dev mailing list
Python-Dev at python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: https://mail.python.org/mailman/options/python-dev/steve.dower%40microsoft.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20150601/c79de553/attachment.html>

From p.f.moore at gmail.com  Mon Jun  1 21:49:12 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Mon, 1 Jun 2015 20:49:12 +0100
Subject: [Python-Dev] Windows new 3.5 installer: Command line uninstall
In-Reply-To: <BY1PR03MB14664BFF15D6CA095D3953B1F5B60@BY1PR03MB1466.namprd03.prod.outlook.com>
References: <CACac1F94Yv+-eTZN3OxV8K09ubH5rh+zcv0H2rTd=+LEMj6YAA@mail.gmail.com>
 <BY1PR03MB14664BFF15D6CA095D3953B1F5B60@BY1PR03MB1466.namprd03.prod.outlook.com>
Message-ID: <CACac1F-nOTycH=zdVGz4_Pb-ixf94xtJGNpG0PXPTvpqZq2cMw@mail.gmail.com>

On 1 June 2015 at 20:21, Steve Dower <Steve.Dower at microsoft.com> wrote:
> You need the original exe and can pass /uninstall to that.

OK, that's cool. Looks like I missed that in the docs. (Checks) Yes, I did :-(

Paul

From rosuav at gmail.com  Mon Jun  1 21:55:32 2015
From: rosuav at gmail.com (Chris Angelico)
Date: Tue, 2 Jun 2015 05:55:32 +1000
Subject: [Python-Dev] RM for 3.6?
In-Reply-To: <CANc-5UxY-jX-vV4p4PcRyS=EaCMA8txenJjhH62cyR1CD4BjGw@mail.gmail.com>
References: <mkbnml$fj$1@ger.gmane.org>
 <20150601140745.6664919e@anarchist.wooz.org>
 <mki81n$tm$1@ger.gmane.org>
 <1433183634.12976.283864353.3E1122A9@webmail.messagingengine.com>
 <F1EC8666-5DC0-433F-BEF6-C659E97B8991@acm.org>
 <CANc-5UxY-jX-vV4p4PcRyS=EaCMA8txenJjhH62cyR1CD4BjGw@mail.gmail.com>
Message-ID: <CAPTjJmpyspqRj939FmpjjRweZ0zShgdwX1bA=CsZKCeymtAuvQ@mail.gmail.com>

On Tue, Jun 2, 2015 at 5:05 AM, Skip Montanaro <skip.montanaro at gmail.com> wrote:
>
>
> On Mon, Jun 1, 2015 at 1:35 PM, Ned Deily <nad at acm.org> wrote:
>>
>> I thought I was volunteering to get a pony.  I was misinformed.
>
>
> Ned,
>
> Not to worry. I'm sure that by the time 3.6a0 is due, the PSF will be able to scrape together the funds for a pony, perhaps even one with super powers:

Nice :) I think if someone could create a pic using the Python logo as
a cutie mark (sp?), my MLP-obsessed sister would finally have a reason
to learn Python.

ChrisA

From gmludo at gmail.com  Mon Jun  1 22:20:56 2015
From: gmludo at gmail.com (Ludovic Gasc)
Date: Mon, 1 Jun 2015 22:20:56 +0200
Subject: [Python-Dev] Python 3 migration status update across some key
 subcommunities (was Re: 2.7 is here until 2020,
 please don't call it a waste.)
In-Reply-To: <CADiSq7fjMQhMzwpp=VS4+0mhM=s25BaKBJ-bRBSBuGOkjAug8g@mail.gmail.com>
References: <CADiSq7fjMQhMzwpp=VS4+0mhM=s25BaKBJ-bRBSBuGOkjAug8g@mail.gmail.com>
Message-ID: <CAON-fpFQdBwcK-f-+qcZAyPyPa_U4KYaTk6ZMNf6Qv0ETvs64Q@mail.gmail.com>

2015-05-31 16:15 GMT+02:00 Nick Coghlan <ncoghlan at gmail.com>:

> On 31 May 2015 at 19:07, Ludovic Gasc <gmludo at gmail.com> wrote:
> > About Python 3 migration, I think that one of our best control stick is
> > newcomers, and by extension, Python trainers/teachers.
> > If newcomers learn first Python 3, when they will start to work
> > professionally, they should help to rationalize the Python 3 migration
> > inside existing dev teams, especially because they don't have an interest
> > conflict based on the fact that they haven't written plenty of code with
> > Python 2.
> > 2020 is around the corner, 5 years shouldn't be enough to change the
> > community mind, I don't know.
>
> The education community started switching a while back - if you watch
> Carrie-Anne Philbin's PyCon UK 2014 keynote, one of her requests for
> the broader Python community was for everyone else to just catch up
already in order to reduce students' confusion (she phrased it more
> politely than that, though). Educators need to tweak examples and
> exercises to account for a version switch, but that's substantially
> easier than migrating hundreds of thousands or even millions of lines
> of production code.
>

The French article about Python 3, written by a teacher who also works in
production, is available again:
https://translate.google.com/translate?hl=fr&sl=fr&tl=en&u=http%3A%2F%2Fsametmax.com%2Fpython-3-est-fait-pour-les-nouveaux-venus%2F



>
> And yes, if you learn Python 3 first, subsequently encountering Python
> 2's quirks and cruft is likely to encourage folks that know both
> versions of the language to start advocating for a version upgrade :)
>

Exactly ;-)


> After accounting for the "Wow, the existing Python 2 install base is
> even larger than we realised" factor, the migration is actually in a
> pretty good place overall these days. The "enterprise" crowd really
> are likely to be the only ones that might need the full remaining 5
> years of migration time (and they may potentially have even more time,
> if they're relying on a commercial redistributor).
>

Beyond the fact that a full toolbox for migrating to Python 3 is now
available, I have good hopes when I see numbers like these:
http://blog.frite-camembert.net/python-survey-2014.html

Even if there is a statistical bias, because only Pythonistas who keep an
eye on Python news answered this survey (by definition, people more aware
of the Python 3 migration than the "average" Python user), I don't remember
the exact theorem, but I know that to diffuse a piece of information inside
a community and have it accepted, most of the work is with the first 10%.
Beyond 10%, you have enough people to start inverting the network/group
effects that preserve the status quo.

BTW, during PyCon, Guido gave a keynote where he noted that a lot of
libraries don't support Python 3 yet, although a lot of famous Python
packages are already ported.
While I agreed with him on the absolute values (roughly 5000 Python 3
packages out of the 55000 on PyPI, if I remember correctly), we should
maybe do some "data mining" on PyPI data to get a more precise picture of
the situation, for example:

1. Requests, with 4195790 downloads, supports Python 3; hikvision, with
9 downloads, doesn't. It isn't fair to count both packages with the same
weight.
In terms of probability, your project is far more likely to use Requests
than hikvision. If we weight each package by its number of downloads and
calculate the percentage, we should get more or less the probability that
an average project has all its dependencies available for Python 3.

2. The acceleration of Python 3 adoption on PyPI: this is more complicated
to calculate, because we need to know when the Python 3 trove classifier
appeared in each library's metadata. However, I'm pretty sure we would see
an acceleration, and we should be able to predict a date range for when a
majority of Python packages will be available for Python 3.

3. Maybe there is other hidden data; the Python scientific community may
have better ideas.
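
To make point 1 concrete, here is a toy sketch of the download-weighted calculation (illustrative only: the two download counts are the ones quoted above, and a real estimate would need full PyPI data):

```python
def py3_weighted_share(packages):
    """Share of downloads going to packages that support Python 3:
    roughly, the probability that a dependency picked at random
    (weighted by popularity) is already ported."""
    total = sum(downloads for downloads, _ in packages.values())
    ported = sum(downloads for downloads, py3 in packages.values() if py3)
    return ported / total

packages = {
    "requests": (4195790, True),   # downloads, supports Python 3
    "hikvision": (9, False),
}
# Unweighted, only 50% of these two packages support Python 3;
# weighted by downloads the share is nearly 100%.
print(py3_weighted_share(packages))
```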

Web frameworks have allowed Python 3 development for a while now, and
> with Django switching their tutorial to Python 3 by default, Django
> downloads via pip show one of the highest proportions of Python 3
> adoption on PyPI. www.python.org itself is now a production Python 3
> Django web service, and the next generation of pypi.python.org will be
> a Pyramid application that's also running on Python 3.
>

Pretty cool :-)


> The dedicated async/await syntax in 3.5 represents a decent carrot to
> encourage migration for anyone currently using yield (or yield from)
> based coroutines, since the distinct syntax not only allows for easier
> local reasoning about whether something is an iterator or a coroutine,
> it also provides a much improved user experience for asynchronous
> iterators and context managers (including finally handling the
> "asynchronous database transaction as a context manager" case, which
> previous versions of Python couldn't really do at all).
>

Clearly, it would be cool for Python 3 to become the trend in the async
community.


> The matrix multiplication operator is similarly a major improvement
> for the science and data analysis part of the Python community.
>
> In terms of reducing *barriers* to adoption, after inviting them to
> speak at the 2014 language summit, we spent a fair bit of time with
> the Twisted and Mercurial folks over the past year or so working
> through "What's still missing from Python 3 for your use cases?", as
> Python 3.4 was still missing some features for binary data
> manipulation where we'd been a bit too ruthless in pruning back the
> binary side of things when deciding what counted as text-only
> features, and what was applicable to binary data as well. So 3.5
> brings back binary interpolation, adds a hex() method to bytes, and
> adds binary data support directly to a couple of standard library
> modules (tempfile, difflib).
>

+10.
I've written a daemon that provisions a lot of stuff, and it contains a lot
of code to switch between bytes and str for exactly that reason.

If I understand the situation correctly, the work Guido et al have
> been doing on PEP 484 and type hinting standardisation is also aimed
> at reducing barriers to Python 3 adoption, by making it possible to
> develop better migration tools that are more semantically aware than
> the existing syntax focused tools. The type hinting actually acts as a
> carrot as well, since it's a feature that mainly shows its value when
> attempting to scale a *team* to larger sizes (as it lets you delegate
> more of the code review process to an automated tool, letting the
> human reviewers spend more time focusing on higher level semantic
> concerns).
>

Yes, indeed, it's another carrot.

Finally, both Debian/Ubuntu and Fedora are well advanced in their
> efforts to replace Python 2 with Python 3 in their respective default
> images (but keeping Py2 available in their package repos). That work
> is close to finished now (myself, Slavek Kabrda, Barry Warsaw, and
> Matthias Klose had some good opportunities to discuss that at PyCon),
> although there are still some significant rough edges to figure out
> (such as coming up with a coherent cross-platform story for what we're
> going to do with the Python symlink), as well as a few more key
> projects to either migrate entirely, or at least finish porting to the
> source compatible subset of Python 2 & 3 (e.g. Samba).
>

Yes, at least in my professional experience, the Python-Debian team has
done an awesome job there: it works like a charm.


>
> Cheers,
> Nick.
>
> --
> Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20150601/7ffb37e5/attachment-0001.html>

From larry at hastings.org  Tue Jun  2 01:50:23 2015
From: larry at hastings.org (Larry Hastings)
Date: Mon, 01 Jun 2015 16:50:23 -0700
Subject: [Python-Dev] [python-committers] Ned Deily is release manager
	for Python 3.6
In-Reply-To: <20150601140903.5e0579b9@anarchist.wooz.org>
References: <20150601140903.5e0579b9@anarchist.wooz.org>
Message-ID: <556CEFBF.3090605@hastings.org>

On 06/01/2015 11:09 AM, Barry Warsaw wrote:
> Please welcome Ned Deily as RM for Python 3.6.
>
> https://www.python.org/dev/peps/pep-0494/

We're in good hands.  For two versions now Benjamin and Ned have been 
quietly cleaning up my mistakes after (nearly) every release. My guess 
is, Ned won't bother making the mistakes in the first place!


//arry/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20150601/47ab9b5d/attachment.html>

From breamoreboy at yahoo.co.uk  Tue Jun  2 05:21:55 2015
From: breamoreboy at yahoo.co.uk (Mark Lawrence)
Date: Tue, 02 Jun 2015 04:21:55 +0100
Subject: [Python-Dev] RM for 3.6?
In-Reply-To: <1433183634.12976.283864353.3E1122A9@webmail.messagingengine.com>
References: <mkbnml$fj$1@ger.gmane.org>
 <20150601140745.6664919e@anarchist.wooz.org> <mki81n$tm$1@ger.gmane.org>
 <1433183634.12976.283864353.3E1122A9@webmail.messagingengine.com>
Message-ID: <mkj7gm$fvv$1@ger.gmane.org>

On 01/06/2015 19:33, Benjamin Peterson wrote:
>
>
> On Mon, Jun 1, 2015, at 14:24, Stefan Behnel wrote:
>> Barry Warsaw schrieb am 01.06.2015 um 20:07:
>>> On May 30, 2015, at 10:09 AM, Serhiy Storchaka wrote:
>>>> Isn't it a time to assign release manager for 3.6-3.7?
>>>
>>> Indeed!  Please welcome Ned Deily as RM for 3.6:
>>>
>>> https://www.python.org/dev/peps/pep-0494/
>>
>> Does he know already?
>
> The suck^H^H^H^H man even volunteered!
>

Was that "volunteered" as in RM or the Comfy Chair?

-- 
My fellow Pythonistas, ask not what our language can do for you, ask
what you can do for our language.

Mark Lawrence


From storchaka at gmail.com  Tue Jun  2 05:46:14 2015
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Tue, 02 Jun 2015 06:46:14 +0300
Subject: [Python-Dev] cpython (2.7): Issue #24357: Change host in
 socket.getaddrinfo example to one that
In-Reply-To: <20150602015800.111622.81707@psf.io>
References: <20150602015800.111622.81707@psf.io>
Message-ID: <mkj8u7$an3$1@ger.gmane.org>

On 02.06.15 04:58, ned.deily wrote:
> https://hg.python.org/cpython/rev/30da21d2fa4f
> changeset:   96458:30da21d2fa4f
> branch:      2.7
> parent:      96454:5e8fa1b13516
> user:        Ned Deily <nad at acm.org>
> date:        Mon Jun 01 18:45:49 2015 -0700
> summary:
>    Issue #24357: Change host in socket.getaddrinfo example to one that
> does support IPv6 and IPv4; www.python.org currently does not.
>
> files:
>    Doc/library/socket.rst |  8 ++++----
>    1 files changed, 4 insertions(+), 4 deletions(-)
>
>
> diff --git a/Doc/library/socket.rst b/Doc/library/socket.rst
> --- a/Doc/library/socket.rst
> +++ b/Doc/library/socket.rst
> @@ -262,12 +262,12 @@
>      method.
>
>      The following example fetches address information for a hypothetical TCP
> -   connection to ``www.python.org`` on port 80 (results may differ on your
> +   connection to ``google.com`` on port 80 (results may differ on your
>      system if IPv6 isn't enabled)::

example.org supports IPv6 and IPv4 too, and is not a commercial name.



From storchaka at gmail.com  Tue Jun  2 05:51:53 2015
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Tue, 02 Jun 2015 06:51:53 +0300
Subject: [Python-Dev] RM for 3.6?
In-Reply-To: <20150601140745.6664919e@anarchist.wooz.org>
References: <mkbnml$fj$1@ger.gmane.org>
 <20150601140745.6664919e@anarchist.wooz.org>
Message-ID: <mkj98p$fcg$1@ger.gmane.org>

On 01.06.15 21:07, Barry Warsaw wrote:
> On May 30, 2015, at 10:09 AM, Serhiy Storchaka wrote:
>
>> Isn't it a time to assign release manager for 3.6-3.7?
>
> Indeed!  Please welcome Ned Deily as RM for 3.6:
>
> https://www.python.org/dev/peps/pep-0494/

Good news for all of us! Ned has saved me from mistakes many times.


From nad at acm.org  Tue Jun  2 06:25:19 2015
From: nad at acm.org (Ned Deily)
Date: Mon, 1 Jun 2015 21:25:19 -0700
Subject: [Python-Dev] cpython (2.7): Issue #24357: Change host in
	socket.getaddrinfo example to one that
In-Reply-To: <mkj8u7$an3$1@ger.gmane.org>
References: <20150602015800.111622.81707@psf.io> <mkj8u7$an3$1@ger.gmane.org>
Message-ID: <713B1C6E-938B-4B83-8981-5911C742E15C@acm.org>

On Jun 1, 2015, at 20:46, Serhiy Storchaka <storchaka at gmail.com> wrote:
> On 02.06.15 04:58, ned.deily wrote:
>> https://hg.python.org/cpython/rev/30da21d2fa4f
>> changeset:   96458:30da21d2fa4f
>> branch:      2.7
>> parent:      96454:5e8fa1b13516
>> user:        Ned Deily <nad at acm.org>
>> date:        Mon Jun 01 18:45:49 2015 -0700
>> summary:
>>   Issue #24357: Change host in socket.getaddrinfo example to one that
>> does support IPv6 and IPv4; www.python.org currently does not.
> example.org does support IPv6 and IPv4 too, and is not commercial name.

Good suggestion, thanks!

--
  Ned Deily
  nad at acm.org -- []



From yselivanov.ml at gmail.com  Tue Jun  2 06:39:33 2015
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Tue, 02 Jun 2015 00:39:33 -0400
Subject: [Python-Dev] Obtaining stack-frames from co-routine objects
In-Reply-To: <CABZ0LtDX8J-Uc6kBBQ93cCF1NzAcebA6oWsf03WX5VuViL0niA@mail.gmail.com>
References: <CABZ0LtAPMi9nQb9ZSp4jW4-kvTte_WVjz=wtKvUWmGeSwuyqVA@mail.gmail.com>
 <55687044.1090700@gmail.com>
 <CABZ0LtDX8J-Uc6kBBQ93cCF1NzAcebA6oWsf03WX5VuViL0niA@mail.gmail.com>
Message-ID: <556D3385.5000009@gmail.com>

Hi Ben,

On 2015-05-31 8:35 AM, Ben Leslie wrote:
> Hi Yury,
>
> I'm just starting my exploration into using async/await; all my
> 'real-world' scenarios are currently hypothetical.
>
> One such hypothetical scenario however is that if I have a server
> process running, with some set of concurrent connections, each managed
> by a co-routine. Each co-routine is of some arbitrary complexity e.g:
> some combination of reading files, reading from database, reading from
> peripherals. If I notice one of those co-routines appears stuck and
> not making progress, I'd very much like to debug that, and preferably
> in a way that doesn't necessarily stop the rest of the server (or even
> the co-routine that appears stuck).
>
> The problem with the "if debug: log(...)" approach is that you need
> foreknowledge of the fault state occurring; on a busy server you don't
> want to just be logging every 'switch()'. I guess you could do
> something like "switch_state[outer_coro] = get_current_stack_frames()"
> on each switch. To me double book-keeping something that the
> interpreter already knows seems somewhat wasteful but maybe it isn't
> really too bad.

I guess it all depends on how "switching" is organized in your
framework of choice.  In asyncio, for instance, all the code that
knows about coroutines is in tasks.py.  The `Task` class is responsible
for running coroutines, and it's the single place where you would
need to put the "if debug: ..." line for debugging "slow" Futures--
the only thing that coroutines can get "stuck" on (the other case
is accidentally calling blocking code, but your proposal wouldn't
help with that).
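
As a rough illustration of that Task-centric approach (a sketch, not asyncio's actual debug machinery; `stuck_worker` and `dump_task_stack` are made-up names), `Task.get_stack()` already exposes the frames of a suspended coroutine, so a runner can log them on demand without double bookkeeping:

```python
import asyncio

def dump_task_stack(task):
    # Task.get_stack() returns the frames where the coroutine is
    # currently suspended -- the interpreter already knows them.
    return [frame.f_code.co_name for frame in task.get_stack()]

async def stuck_worker():
    await asyncio.sleep(3600)  # simulate a coroutine stuck on a Future

async def main():
    task = asyncio.ensure_future(stuck_worker())
    await asyncio.sleep(0)  # let the worker run to its await point
    names = dump_task_stack(task)
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        pass
    return names

loop = asyncio.new_event_loop()
names = loop.run_until_complete(main())
loop.close()
print(names)  # frame name(s) of the suspended coroutine
```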

Yury

From fijall at gmail.com  Tue Jun  2 21:07:40 2015
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Tue, 2 Jun 2015 21:07:40 +0200
Subject: [Python-Dev] 2.7 is here until 2020,
	please don't call it a waste.
In-Reply-To: <556C3E93.50108@egenix.com>
References: <34384DEB0F607E42BD61D446586AD4E86B51172C@ORSMSX103.amr.corp.intel.com>
 <7A4DF5A2-5D04-4F5A-9883-B5E815A14909@gmail.com>
 <CAP1=2W5cZZzwQj0kqHgKozoKhCL_2fvx2cydNqOZBVW1BNGEdw@mail.gmail.com>
 <CANc-5Uwx5rVJfYfQK6_-SxnJSSLv62t7p4NddC=mNXda=WWJDQ@mail.gmail.com>
 <mk7asm$37i$1@ger.gmane.org>
 <CAP7+vJKDqU8TakYoQ6+z+zt_vs823MGZ9YzcFOBOwusYRtyqKg@mail.gmail.com>
 <CAMpsgwY+6CPWWEr2yyNgiTYCWJ9p67UNy8_p5hR2WQ9MtXXnbQ@mail.gmail.com>
 <CADiSq7eGd-R8PjNYKnn8XmMF7BD-s9cd65E+HNnXzRi-gSa4GA@mail.gmail.com>
 <CAGE7PNJqO36PXi3P8a6Ov1vkN2KigDnaMYQGyFcophYiSe+GNw@mail.gmail.com>
 <5568FAF6.6040802@python.org> <20150530015621.518b537f@fsol>
 <CADiSq7c35TP5pfH6AuEg7-hESf7-gkbLeh40i6rHREhvfqziRA@mail.gmail.com>
 <556906F0.4090504@sdamon.com>
 <CADiSq7daKMxSoFswRaYF2_PKtRGK2nMoJ69N4weKc7E=KKreYA@mail.gmail.com>
 <CABVPEKqW1qoHLSPppqtEELbLR=15cqUKMAk55CTVv+F1HqqUtw@mail.gmail.com>
 <556A45D0.7070606@hastings.org>
 <CAMSv6X3awjhAOdCW5oN2Kyz3GiwrRMO_mPCOqQKVaFk6r96OTw@mail.gmail.com>
 <556C3E93.50108@egenix.com>
Message-ID: <CAK5idxSv6V266Anib+TAMY21J=Zc2wjFfL2M9ZgrHbtJHFXgSg@mail.gmail.com>

Hi

There was a PSF-sponsored effort to improve the situation, with
https://bitbucket.org/pypy/codespeed2/src being written (thank you,
PSF). It's not as much of an improvement over codespeed as I would
like, but it opens up some opportunities.

That said, we have a benchmark machine for benchmarking CPython, but I
never deployed nightly benchmarks of CPython, for a variety of reasons:

* would be cool to get a small VM to set up the web front

* people told me that only py3k is interesting, but I did not set it up
for py3k because most of the benchmarks are missing

I'm willing to set up nightly runs on speed.python.org using nightly
builds of Python 2, and possibly Python 3, if there is interest. I need
support from someone maintaining the Python buildbots to set up the
builds, and a VM to set things up on; otherwise I'm good to go.

DISCLAIMER: I did facilitate the codespeed rewrite, which was not as
successful as I would have hoped. I did not receive any money from the
PSF for that, though.

Cheers,
fijal


On Mon, Jun 1, 2015 at 1:14 PM, M.-A. Lemburg <mal at egenix.com> wrote:
> On 01.06.2015 12:44, Armin Rigo wrote:
>> Hi Larry,
>>
>> On 31 May 2015 at 01:20, Larry Hastings <larry at hastings.org> wrote:
>>> p.s. Supporting this patch also helps cut into PyPy's reported performance
>>> lead--that is, if they ever upgrade speed.pypy.org from comparing against
>>> Python *2.7.2*.
>>
>> Right, we should do this upgrade when 2.7.11 is out.
>>
>> There is some irony in your comment which seems to imply "PyPy is
>> cheating by comparing with an old Python 2.7.2": it is inside a thread
>> which started because "we didn't backport performance improvements to
>> 2.7.x so far".
>>
>> Just to convince myself, I just ran a performance comparison.  I ran
>> the same benchmark suite as speed.pypy.org, with 2.7.2 against 2.7.10,
>> both freshly compiled with no "configure" options at all.  The
>> differences are usually in the noise, but range from +5% to... -60%.
>> If anything, this seems to show that CPython should take more care
>> about performance regressions.  If someone is interested:
>>
>> * "raytrace-simple" is 1.19 times slower
>> * "bm_mako" is 1.29 times slower
>> * "spitfire_cstringio" is 1.60 times slower
>> * a number of other benchmarks are around 1.08.
>>
>> The "7.0x faster" number on speed.pypy.org would be significantly
>> *higher* if we upgraded the baseline to 2.7.10 now.
>
> If someone were to volunteer to set up and run speed.python.org,
> I think we could add some additional focus on performance
> regressions. Right now, we don't have any way of reliably
> and reproducibly testing Python performance.
>
> Hint: The PSF would most likely fund such adventures :-)
>
> --
> Marc-Andre Lemburg
> eGenix.com
>
> Professional Python Services directly from the Source  (#1, Jun 01 2015)
>>>> Python Projects, Coaching and Consulting ...  http://www.egenix.com/
>>>> mxODBC Plone/Zope Database Adapter ...       http://zope.egenix.com/
>>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
> ________________________________________________________________________
>
> ::::: Try our mxODBC.Connect Python Database Interface for free ! ::::::
>
>    eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
>     D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
>            Registered at Amtsgericht Duesseldorf: HRB 46611
>                http://www.egenix.com/company/contact/

From encukou at gmail.com  Tue Jun  2 21:09:52 2015
From: encukou at gmail.com (Petr Viktorin)
Date: Tue, 2 Jun 2015 21:09:52 +0200
Subject: [Python-Dev] Avoiding reference leaks in heap types with custom
	tp_dealloc
In-Reply-To: <20150601180013.6815f6ab@fsol>
References: <CA+=+wqCrP3hHqCFPUo0_-6MZ6pbB-LJ7+=e0iPJvUXUcWY6nRw@mail.gmail.com>
 <20150601173319.6e321656@fsol>
 <CA+=+wqAvrBnvNH11xE6iEkoWYGtKZkLnbUZyysBY8Nosag4wow@mail.gmail.com>
 <20150601180013.6815f6ab@fsol>
Message-ID: <CA+=+wqDsO1WaZ0y9v-JcoLnzy9LQArEeys_No614-geZ5PKUGQ@mail.gmail.com>

On Mon, Jun 1, 2015 at 6:00 PM, Antoine Pitrou <solipsis at pitrou.net> wrote:
[...]
> I think we have been laxist with additions to the stable ABI:
> apparently, they should be conditioned on the API version requested by
> the user.  For example, in pystate.h:
>
> #if !defined(Py_LIMITED_API) || Py_LIMITED_API+0 >= 0x03030000
> /* New in 3.3 */
> PyAPI_FUNC(int) PyState_AddModule(PyObject*, struct PyModuleDef*);
> PyAPI_FUNC(int) PyState_RemoveModule(struct PyModuleDef*);
> #endif
>
> (those were added by Martin, so I assume he knew what he was doing :-))
>
> This way, failing to restrict yourself to a given API version fails at
> compile time, not at runtime. However, it's also more work to do so
> when adding stuff, which is why we tend to skimp on it.

I see! I completely missed that memo.
I filed a patch that wraps my 3.5 additions as issue 24365.

I think this should be in the PEP, so people like me can find it. Does
the attached wording look good?
-------------- next part --------------
A non-text attachment was scrubbed...
Name: pep384.patch
Type: text/x-patch
Size: 860 bytes
Desc: not available
URL: <http://mail.python.org/pipermail/python-dev/attachments/20150602/a6360209/attachment.bin>

From brett at python.org  Tue Jun  2 23:02:25 2015
From: brett at python.org (Brett Cannon)
Date: Tue, 02 Jun 2015 21:02:25 +0000
Subject: [Python-Dev] 2.7 is here until 2020,
	please don't call it a waste.
In-Reply-To: <CAK5idxSv6V266Anib+TAMY21J=Zc2wjFfL2M9ZgrHbtJHFXgSg@mail.gmail.com>
References: <34384DEB0F607E42BD61D446586AD4E86B51172C@ORSMSX103.amr.corp.intel.com>
 <7A4DF5A2-5D04-4F5A-9883-B5E815A14909@gmail.com>
 <CAP1=2W5cZZzwQj0kqHgKozoKhCL_2fvx2cydNqOZBVW1BNGEdw@mail.gmail.com>
 <CANc-5Uwx5rVJfYfQK6_-SxnJSSLv62t7p4NddC=mNXda=WWJDQ@mail.gmail.com>
 <mk7asm$37i$1@ger.gmane.org>
 <CAP7+vJKDqU8TakYoQ6+z+zt_vs823MGZ9YzcFOBOwusYRtyqKg@mail.gmail.com>
 <CAMpsgwY+6CPWWEr2yyNgiTYCWJ9p67UNy8_p5hR2WQ9MtXXnbQ@mail.gmail.com>
 <CADiSq7eGd-R8PjNYKnn8XmMF7BD-s9cd65E+HNnXzRi-gSa4GA@mail.gmail.com>
 <CAGE7PNJqO36PXi3P8a6Ov1vkN2KigDnaMYQGyFcophYiSe+GNw@mail.gmail.com>
 <5568FAF6.6040802@python.org> <20150530015621.518b537f@fsol>
 <CADiSq7c35TP5pfH6AuEg7-hESf7-gkbLeh40i6rHREhvfqziRA@mail.gmail.com>
 <556906F0.4090504@sdamon.com>
 <CADiSq7daKMxSoFswRaYF2_PKtRGK2nMoJ69N4weKc7E=KKreYA@mail.gmail.com>
 <CABVPEKqW1qoHLSPppqtEELbLR=15cqUKMAk55CTVv+F1HqqUtw@mail.gmail.com>
 <556A45D0.7070606@hastings.org>
 <CAMSv6X3awjhAOdCW5oN2Kyz3GiwrRMO_mPCOqQKVaFk6r96OTw@mail.gmail.com>
 <556C3E93.50108@egenix.com>
 <CAK5idxSv6V266Anib+TAMY21J=Zc2wjFfL2M9ZgrHbtJHFXgSg@mail.gmail.com>
Message-ID: <CAP1=2W5OPB0ELwg2Jaom7nPTraRRuK=g7X=PaV33hbrKyS02PA@mail.gmail.com>

On Tue, Jun 2, 2015 at 3:08 PM Maciej Fijalkowski <fijall at gmail.com> wrote:

> Hi
>
> There was a PSF-sponsored effort to improve the situation with the
> https://bitbucket.org/pypy/codespeed2/src being written (thank you
> PSF). It's not better enough than codespeed that I would like, but
> gives some opportunities.
>
> That said, we have a benchmark machine for benchmarking cpython and I
> never deployed nightly benchmarks of cpython for a variety of reasons.
>
> * would be cool to get a small VM to set up the web front
>
> * people told me that py3k is only interesting, so I did not set it up
> for py3k because benchmarks are mostly missing
>

Missing how? hg.python.org/benchmarks has plenty of Python 3-compatible
benchmarks to produce interesting results.


>
> I'm willing to set up a nightly speed.python.org using nightly build
> on python 2 and possible python 3 if there is an interest. I need
> support from someone maintaining python buildbot to setup builds and a
> VM to set up stuff, otherwise I'm good to go
>

I suspect there is interest.

-Brett


>
> DISCLAIMER: I did facilitate in codespeed rewrite that was not as
> successful as I would have hoped. I did not receive any money from the
> PSF on that though.
>
> Cheers,
> fijal
>
>
> On Mon, Jun 1, 2015 at 1:14 PM, M.-A. Lemburg <mal at egenix.com> wrote:
> > On 01.06.2015 12:44, Armin Rigo wrote:
> >> Hi Larry,
> >>
> >> On 31 May 2015 at 01:20, Larry Hastings <larry at hastings.org> wrote:
> >>> p.s. Supporting this patch also helps cut into PyPy's reported
> performance
> >>> lead--that is, if they ever upgrade speed.pypy.org from comparing
> against
> >>> Python *2.7.2*.
> >>
> >> Right, we should do this upgrade when 2.7.11 is out.
> >>
> >> There is some irony in your comment which seems to imply "PyPy is
> >> cheating by comparing with an old Python 2.7.2": it is inside a thread
> >> which started because "we didn't backport performance improvements to
> >> 2.7.x so far".
> >>
> >> Just to convince myself, I just ran a performance comparison.  I ran
> >> the same benchmark suite as speed.pypy.org, with 2.7.2 against 2.7.10,
> >> both freshly compiled with no "configure" options at all.  The
> >> differences are usually in the noise, but range from +5% to... -60%.
> >> If anything, this seems to show that CPython should take more care
> >> about performance regressions.  If someone is interested:
> >>
> >> * "raytrace-simple" is 1.19 times slower
> >> * "bm_mako" is 1.29 times slower
> >> * "spitfire_cstringio" is 1.60 times slower
> >> * a number of other benchmarks are around 1.08.
> >>
> >> The "7.0x faster" number on speed.pypy.org would be significantly
> >> *higher* if we upgraded the baseline to 2.7.10 now.
> >
> > If someone were to volunteer to set up and run speed.python.org,
> > I think we could add some additional focus on performance
> > regressions. Right now, we don't have any way of reliably
> > and reproducibly testing Python performance.
> >
> > Hint: The PSF would most likely fund such adventures :-)
> >
> > --
> > Marc-Andre Lemburg
> > eGenix.com
> >
> > Professional Python Services directly from the Source  (#1, Jun 01 2015)
> >>>> Python Projects, Coaching and Consulting ...  http://www.egenix.com/
> >>>> mxODBC Plone/Zope Database Adapter ...       http://zope.egenix.com/
> >>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
> > ________________________________________________________________________
> >
> > ::::: Try our mxODBC.Connect Python Database Interface for free ! ::::::
> >
> >    eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
> >     D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
> >            Registered at Amtsgericht Duesseldorf: HRB 46611
> >                http://www.egenix.com/company/contact/
> > _______________________________________________
> > Python-Dev mailing list
> > Python-Dev at python.org
> > https://mail.python.org/mailman/listinfo/python-dev
> > Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/fijall%40gmail.com
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/brett%40python.org
>

From rose at happyspork.com  Tue Jun  2 23:01:54 2015
From: rose at happyspork.com (Rose Ames)
Date: Tue, 02 Jun 2015 17:01:54 -0400
Subject: [Python-Dev] Fwd: zipimport
In-Reply-To: <556E187B.3050508@happyspork.com>
References: <556E187B.3050508@happyspork.com>
Message-ID: <556E19C2.8060205@happyspork.com>

At pycon I talked with a few people about bugs.python.org/issue19699.
The consensus seemed to be that zipimport wants a lot of work, possibly
a rewrite.

I'll have some time to work on this over the next couple of months, but
I want to be working on the right thing.  Could the people who were
involved in that conversation remind me of their concerns with
zipimport?  What exactly would the goal of a rewrite be?

Thanks,

Rose

From solipsis at pitrou.net  Tue Jun  2 23:18:45 2015
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Tue, 2 Jun 2015 23:18:45 +0200
Subject: [Python-Dev] Avoiding reference leaks in heap types with custom
	tp_dealloc
In-Reply-To: <CA+=+wqDsO1WaZ0y9v-JcoLnzy9LQArEeys_No614-geZ5PKUGQ@mail.gmail.com>
References: <CA+=+wqCrP3hHqCFPUo0_-6MZ6pbB-LJ7+=e0iPJvUXUcWY6nRw@mail.gmail.com>
 <20150601173319.6e321656@fsol>
 <CA+=+wqAvrBnvNH11xE6iEkoWYGtKZkLnbUZyysBY8Nosag4wow@mail.gmail.com>
 <20150601180013.6815f6ab@fsol>
 <CA+=+wqDsO1WaZ0y9v-JcoLnzy9LQArEeys_No614-geZ5PKUGQ@mail.gmail.com>
Message-ID: <20150602231845.091a93a2@fsol>

On Tue, 2 Jun 2015 21:09:52 +0200
Petr Viktorin <encukou at gmail.com> wrote:
> On Mon, Jun 1, 2015 at 6:00 PM, Antoine Pitrou <solipsis at pitrou.net> wrote:
> [...]
> > I think we have been laxist with additions to the stable ABI:
> > apparently, they should be conditioned on the API version requested by
> > the user.  For example, in pystate.h:
> >
> > #if !defined(Py_LIMITED_API) || Py_LIMITED_API+0 >= 0x03030000
> > /* New in 3.3 */
> > PyAPI_FUNC(int) PyState_AddModule(PyObject*, struct PyModuleDef*);
> > PyAPI_FUNC(int) PyState_RemoveModule(struct PyModuleDef*);
> > #endif
> >
> > (those were added by Martin, so I assume he knew what he was doing :-))
> >
> > This way, failing to restrict yourself to a given API version fails at
> > compile time, not at runtime. However, it's also more work to do so
> > when adding stuff, which is why we tend to skimp on it.
> 
> I see! I completely missed that memo.
> I filed a patch that wraps my 3.5 additions as issue 24365.
> 
> I think this should be in the PEP, so people like me can find it. Does
> the attached wording look good?

It looks good to me, but Martin should probably validate it. If he
hasn't answered within a month, perhaps you can add it anyway :)

Regards

Antoine.

From brett at python.org  Tue Jun  2 23:20:10 2015
From: brett at python.org (Brett Cannon)
Date: Tue, 02 Jun 2015 21:20:10 +0000
Subject: [Python-Dev] Fwd: zipimport
In-Reply-To: <556E19C2.8060205@happyspork.com>
References: <556E187B.3050508@happyspork.com> <556E19C2.8060205@happyspork.com>
Message-ID: <CAP1=2W7V_pxt85UcAMkXUK8fFNGLQuGYRjCVObR_j7dHx65amQ@mail.gmail.com>

On Tue, Jun 2, 2015 at 5:04 PM Rose Ames <rose at happyspork.com> wrote:

> At pycon I talked with a few people about bugs.python.org/issue19699.
> The consensus seemed to be that zipimport wants a lot of work, possibly
> a rewrite.
>
> I'll have some time to work on this over the next couple of months, but
> I want to be working on the right thing.  Could the people who were
> involved in that conversation remind me of their concerns with
> zipimport?  What exactly would the goal of a rewrite be?
>

I believe the participants consisted of Thomas Wouters, Greg Smith, Eric
Snow, Nick Coghlan, and myself. In the end I think the general consensus
for long-term maintenance was to write the minimal amount of code required
that could read zip files and then write all of the import-related code on
top of that. Basically the idea of freezing zipfile seemed messy since it
has a ton of dependencies and baggage that simply isn't needed to replace
zipimport.

I vaguely remember people suggesting writing the minimal zip-reading code
in C, but I can't remember why: we have I/O access in importlib through
_io, so what remains is really just pulling apart the zip file to get at
the files to import, which should be doable in pure Python.
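For a sense of what "pulling apart the zip file" involves, here is a minimal, hedged sketch of the pure-Python approach (not zipimport's actual code): locate the End of Central Directory record and walk the central directory to list the archive's members, using nothing but `int.from_bytes`. It ignores ZIP64 and unusual archive comments, and the function name is purely illustrative.

```python
# Illustrative only: list the member names of a small, ordinary zip
# archive without using zipfile or struct.
EOCD_SIG = b"PK\x05\x06"   # End of Central Directory record signature
CDFH_SIG = b"PK\x01\x02"   # Central Directory File Header signature

def list_zip_names(data):
    """Return member names; assumes no ZIP64 and ASCII/UTF-8 names."""
    eocd = data.rfind(EOCD_SIG)
    if eocd < 0:
        raise ValueError("not a zip file")
    # EOCD layout: total entry count at offset 10, central dir offset at 16.
    entries = int.from_bytes(data[eocd + 10:eocd + 12], "little")
    pos = int.from_bytes(data[eocd + 16:eocd + 20], "little")
    names = []
    for _ in range(entries):
        if data[pos:pos + 4] != CDFH_SIG:
            raise ValueError("corrupt central directory")
        # Variable-length field sizes sit at offsets 28, 30 and 32;
        # the file name starts right after the 46-byte fixed header.
        name_len = int.from_bytes(data[pos + 28:pos + 30], "little")
        extra_len = int.from_bytes(data[pos + 30:pos + 32], "little")
        comment_len = int.from_bytes(data[pos + 32:pos + 34], "little")
        names.append(data[pos + 46:pos + 46 + name_len].decode("utf-8"))
        pos += 46 + name_len + extra_len + comment_len
    return names
```

Actually decompressing the members would additionally need zlib, which matches the dependency question raised elsewhere in this thread.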

From eric at trueblade.com  Wed Jun  3 00:11:13 2015
From: eric at trueblade.com (Eric V. Smith)
Date: Tue, 02 Jun 2015 18:11:13 -0400
Subject: [Python-Dev] Fwd: zipimport
In-Reply-To: <CAP1=2W7V_pxt85UcAMkXUK8fFNGLQuGYRjCVObR_j7dHx65amQ@mail.gmail.com>
References: <556E187B.3050508@happyspork.com> <556E19C2.8060205@happyspork.com>
 <CAP1=2W7V_pxt85UcAMkXUK8fFNGLQuGYRjCVObR_j7dHx65amQ@mail.gmail.com>
Message-ID: <556E2A01.8060602@trueblade.com>

On 6/2/2015 5:20 PM, Brett Cannon wrote:
> 
> 
> On Tue, Jun 2, 2015 at 5:04 PM Rose Ames <rose at happyspork.com
> <mailto:rose at happyspork.com>> wrote:
> 
>     At pycon I talked with a few people about bugs.python.org/issue19699
>     <http://bugs.python.org/issue19699>.
>     The consensus seemed to be that zipimport wants a lot of work, possibly
>     a rewrite.
> 
>     I'll have some time to work on this over the next couple of months, but
>     I want to be working on the right thing.  Could the people who were
>     involved in that conversation remind me of their concerns with
>     zipimport?  What exactly would the goal of a rewrite be?
> 
> 
> I believe the participants consisted of Thomas Wouters, Greg Smith, Eric
> Snow, Nick Coghlan, and myself. In the end I think the general consensus
> for long-term maintenance was to write the minimal amount of code
> required that could read zip files and then write all of the
> import-related code on top of that. Basically the idea of freezing
> zipfile seemed messy since it has a ton of dependencies and baggage that
> simply isn't needed to replace zipimport.

Hey, I was there! This is what I recall as well.

> I vaguely remember people suggesting writing the minimal zip-reading code
> in C, but I can't remember why: we have I/O access in importlib through
> _io, so what remains is really just pulling apart the zip file to get at
> the files to import, which should be doable in pure Python.

I don't think writing it in Python ever came up. I can't think of a
reason why that wouldn't work. Rose: this seems like a good approach to try.

Eric.

From rose at happyspork.com  Wed Jun  3 00:41:04 2015
From: rose at happyspork.com (Rose Ames)
Date: Tue, 02 Jun 2015 18:41:04 -0400
Subject: [Python-Dev] Fwd: zipimport
In-Reply-To: <556E2A01.8060602@trueblade.com>
References: <556E187B.3050508@happyspork.com> <556E19C2.8060205@happyspork.com>
 <CAP1=2W7V_pxt85UcAMkXUK8fFNGLQuGYRjCVObR_j7dHx65amQ@mail.gmail.com>
 <556E2A01.8060602@trueblade.com>
Message-ID: <556E3100.3030301@happyspork.com>

Writing it in Python did come up and was decided against, but I don't 
recall the reasoning.  Could it have been a performance thing?


On 06/02/2015 06:11 PM, Eric V. Smith wrote:
> On 6/2/2015 5:20 PM, Brett Cannon wrote:
>>
>>
>> On Tue, Jun 2, 2015 at 5:04 PM Rose Ames <rose at happyspork.com
>> <mailto:rose at happyspork.com>> wrote:
>>
>>      At pycon I talked with a few people about bugs.python.org/issue19699
>>      <http://bugs.python.org/issue19699>.
>>      The consensus seemed to be that zipimport wants a lot of work, possibly
>>      a rewrite.
>>
>>      I'll have some time to work on this over the next couple of months, but
>>      I want to be working on the right thing.  Could the people who were
>>      involved in that conversation remind me of their concerns with
>>      zipimport?  What exactly would the goal of a rewrite be?
>>
>>
>> I believe the participants consisted of Thomas Wouters, Greg Smith, Eric
>> Snow, Nick Coghlan, and myself. In the end I think the general consensus
>> for long-term maintenance was to write the minimal amount of code
>> required that could read zip files and then write all of the
>> import-related code on top of that. Basically the idea of freezing
>> zipfile seemed messy since it has a ton of dependencies and baggage that
>> simply isn't needed to replace zipimport.
>
> Hey, I was there! This is what I recall as well.
>
>> I vaguely remember people suggesting writing the minimal zip-reading code
>> in C, but I can't remember why: we have I/O access in importlib through
>> _io, so what remains is really just pulling apart the zip file to get at
>> the files to import, which should be doable in pure Python.
>
> I don't think writing it in Python ever came up. I can't think of a
> reason why that wouldn't work. Rose: this seems like a good approach to try.
>
> Eric.

From solipsis at pitrou.net  Wed Jun  3 10:59:48 2015
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Wed, 3 Jun 2015 10:59:48 +0200
Subject: [Python-Dev] zipimport
References: <556E187B.3050508@happyspork.com> <556E19C2.8060205@happyspork.com>
 <CAP1=2W7V_pxt85UcAMkXUK8fFNGLQuGYRjCVObR_j7dHx65amQ@mail.gmail.com>
Message-ID: <20150603105948.77fd4a67@fsol>

On Tue, 02 Jun 2015 21:20:10 +0000
Brett Cannon <brett at python.org> wrote:
> 
> I vaguely remember people suggesting writing the minimal zip-reading code
> in C, but I can't remember why: we have I/O access in importlib through
> _io, so what remains is really just pulling apart the zip file to get at
> the files to import, which should be doable in pure Python.

You would need other modules, such as struct and zlib, right?

Regards

Antoine.



From mal at egenix.com  Wed Jun  3 11:38:13 2015
From: mal at egenix.com (M.-A. Lemburg)
Date: Wed, 03 Jun 2015 11:38:13 +0200
Subject: [Python-Dev] 2.7 is here until 2020,
	please don't call it a waste.
In-Reply-To: <CAK5idxSv6V266Anib+TAMY21J=Zc2wjFfL2M9ZgrHbtJHFXgSg@mail.gmail.com>
References: <34384DEB0F607E42BD61D446586AD4E86B51172C@ORSMSX103.amr.corp.intel.com>	<CANc-5Uwx5rVJfYfQK6_-SxnJSSLv62t7p4NddC=mNXda=WWJDQ@mail.gmail.com>	<mk7asm$37i$1@ger.gmane.org>	<CAP7+vJKDqU8TakYoQ6+z+zt_vs823MGZ9YzcFOBOwusYRtyqKg@mail.gmail.com>	<CAMpsgwY+6CPWWEr2yyNgiTYCWJ9p67UNy8_p5hR2WQ9MtXXnbQ@mail.gmail.com>	<CADiSq7eGd-R8PjNYKnn8XmMF7BD-s9cd65E+HNnXzRi-gSa4GA@mail.gmail.com>	<CAGE7PNJqO36PXi3P8a6Ov1vkN2KigDnaMYQGyFcophYiSe+GNw@mail.gmail.com>	<5568FAF6.6040802@python.org>
 <20150530015621.518b537f@fsol>	<CADiSq7c35TP5pfH6AuEg7-hESf7-gkbLeh40i6rHREhvfqziRA@mail.gmail.com>	<556906F0.4090504@sdamon.com>	<CADiSq7daKMxSoFswRaYF2_PKtRGK2nMoJ69N4weKc7E=KKreYA@mail.gmail.com>	<CABVPEKqW1qoHLSPppqtEELbLR=15cqUKMAk55CTVv+F1HqqUtw@mail.gmail.com>	<556A45D0.7070606@hastings.org>	<CAMSv6X3awjhAOdCW5oN2Kyz3GiwrRMO_mPCOqQKVaFk6r96OTw@mail.gmail.com>	<556C3E93.50108@egenix.com>
 <CAK5idxSv6V266Anib+TAMY21J=Zc2wjFfL2M9ZgrHbtJHFXgSg@mail.gmail.com>
Message-ID: <556ECB05.8060200@egenix.com>

On 02.06.2015 21:07, Maciej Fijalkowski wrote:
> Hi
> 
> There was a PSF-sponsored effort to improve the situation, with
> https://bitbucket.org/pypy/codespeed2/src being written (thank you
> PSF). It's not as big an improvement over codespeed as I would like, but
> it opens up some opportunities.
> 
> That said, we have a benchmark machine for benchmarking cpython, and I
> never deployed nightly benchmarks of cpython, for a variety of reasons:
> 
> * it would be cool to get a small VM to set up the web front end
> 
> * people told me that only py3k is interesting, but I did not set it up
> for py3k because the benchmarks are mostly missing
> 
> I'm willing to set up a nightly speed.python.org using nightly builds
> of python 2 and possibly python 3 if there is interest. I need
> support from someone maintaining the python buildbot to set up builds, and a
> VM to set things up on; otherwise I'm good to go.
> 
> DISCLAIMER: I did participate in the codespeed rewrite that was not as
> successful as I would have hoped. I did not receive any money from the
> PSF for that, though.

I think we should look into getting speed.python.org up and
running for both Python 2 and 3 branches:

     https://speed.python.org/

What would it take to make that happen?

> Cheers,
> fijal
> 
> 
> On Mon, Jun 1, 2015 at 1:14 PM, M.-A. Lemburg <mal at egenix.com> wrote:
>> On 01.06.2015 12:44, Armin Rigo wrote:
>>> Hi Larry,
>>>
>>> On 31 May 2015 at 01:20, Larry Hastings <larry at hastings.org> wrote:
>>>> p.s. Supporting this patch also helps cut into PyPy's reported performance
>>>> lead--that is, if they ever upgrade speed.pypy.org from comparing against
>>>> Python *2.7.2*.
>>>
>>> Right, we should do this upgrade when 2.7.11 is out.
>>>
>>> There is some irony in your comment which seems to imply "PyPy is
>>> cheating by comparing with an old Python 2.7.2": it is inside a thread
>>> which started because "we didn't backport performance improvements to
>>> 2.7.x so far".
>>>
>>> Just to convince myself, I just ran a performance comparison.  I ran
>>> the same benchmark suite as speed.pypy.org, with 2.7.2 against 2.7.10,
>>> both freshly compiled with no "configure" options at all.  The
>>> differences are usually in the noise, but range from +5% to... -60%.
>>> If anything, this seems to show that CPython should take more care
>>> about performance regressions.  If someone is interested:
>>>
>>> * "raytrace-simple" is 1.19 times slower
>>> * "bm_mako" is 1.29 times slower
>>> * "spitfire_cstringio" is 1.60 times slower
>>> * a number of other benchmarks are around 1.08.
>>>
>>> The "7.0x faster" number on speed.pypy.org would be significantly
>>> *higher* if we upgraded the baseline to 2.7.10 now.
>>
>> If someone were to volunteer to set up and run speed.python.org,
>> I think we could add some additional focus on performance
>> regressions. Right now, we don't have any way of reliably
>> and reproducibly testing Python performance.
>>
>> Hint: The PSF would most likely fund such adventures :-)
>>

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 03 2015)
>>> Python Projects, Coaching and Consulting ...  http://www.egenix.com/
>>> mxODBC Plone/Zope Database Adapter ...       http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________

::::: Try our mxODBC.Connect Python Database Interface for free ! ::::::

   eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
    D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
           Registered at Amtsgericht Duesseldorf: HRB 46611
               http://www.egenix.com/company/contact/

From fijall at gmail.com  Wed Jun  3 12:04:10 2015
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Wed, 3 Jun 2015 12:04:10 +0200
Subject: [Python-Dev] 2.7 is here until 2020,
	please don't call it a waste.
In-Reply-To: <556ECB05.8060200@egenix.com>
References: <34384DEB0F607E42BD61D446586AD4E86B51172C@ORSMSX103.amr.corp.intel.com>
 <CANc-5Uwx5rVJfYfQK6_-SxnJSSLv62t7p4NddC=mNXda=WWJDQ@mail.gmail.com>
 <mk7asm$37i$1@ger.gmane.org>
 <CAP7+vJKDqU8TakYoQ6+z+zt_vs823MGZ9YzcFOBOwusYRtyqKg@mail.gmail.com>
 <CAMpsgwY+6CPWWEr2yyNgiTYCWJ9p67UNy8_p5hR2WQ9MtXXnbQ@mail.gmail.com>
 <CADiSq7eGd-R8PjNYKnn8XmMF7BD-s9cd65E+HNnXzRi-gSa4GA@mail.gmail.com>
 <CAGE7PNJqO36PXi3P8a6Ov1vkN2KigDnaMYQGyFcophYiSe+GNw@mail.gmail.com>
 <5568FAF6.6040802@python.org> <20150530015621.518b537f@fsol>
 <CADiSq7c35TP5pfH6AuEg7-hESf7-gkbLeh40i6rHREhvfqziRA@mail.gmail.com>
 <556906F0.4090504@sdamon.com>
 <CADiSq7daKMxSoFswRaYF2_PKtRGK2nMoJ69N4weKc7E=KKreYA@mail.gmail.com>
 <CABVPEKqW1qoHLSPppqtEELbLR=15cqUKMAk55CTVv+F1HqqUtw@mail.gmail.com>
 <556A45D0.7070606@hastings.org>
 <CAMSv6X3awjhAOdCW5oN2Kyz3GiwrRMO_mPCOqQKVaFk6r96OTw@mail.gmail.com>
 <556C3E93.50108@egenix.com>
 <CAK5idxSv6V266Anib+TAMY21J=Zc2wjFfL2M9ZgrHbtJHFXgSg@mail.gmail.com>
 <556ECB05.8060200@egenix.com>
Message-ID: <CAK5idxRh3d5mpT0hkagtAZQXG+GkbmTA2fjYCM+PEJ6_C0WaYw@mail.gmail.com>

On Wed, Jun 3, 2015 at 11:38 AM, M.-A. Lemburg <mal at egenix.com> wrote:
> On 02.06.2015 21:07, Maciej Fijalkowski wrote:
>> Hi
>>
>> There was a PSF-sponsored effort to improve the situation, with
>> https://bitbucket.org/pypy/codespeed2/src being written (thank you
>> PSF). It's not as big an improvement over codespeed as I would like, but
>> it opens up some opportunities.
>>
>> That said, we have a benchmark machine for benchmarking cpython, and I
>> never deployed nightly benchmarks of cpython, for a variety of reasons:
>>
>> * it would be cool to get a small VM to set up the web front end
>>
>> * people told me that only py3k is interesting, but I did not set it up
>> for py3k because the benchmarks are mostly missing
>>
>> I'm willing to set up a nightly speed.python.org using nightly builds
>> of python 2 and possibly python 3 if there is interest. I need
>> support from someone maintaining the python buildbot to set up builds, and a
>> VM to set things up on; otherwise I'm good to go.
>>
>> DISCLAIMER: I did participate in the codespeed rewrite that was not as
>> successful as I would have hoped. I did not receive any money from the
>> PSF for that, though.
>
> I think we should look into getting speed.python.org up and
> running for both Python 2 and 3 branches:
>
>      https://speed.python.org/
>
> What would it take to make that happen?

I guess the ideal would be some cooperation from some of the cpython devs,
so that, say, someone can set up a cpython buildbot.

From rdmurray at bitdance.com  Wed Jun  3 15:49:58 2015
From: rdmurray at bitdance.com (R. David Murray)
Date: Wed, 03 Jun 2015 09:49:58 -0400
Subject: [Python-Dev] 2.7 is here until 2020,
	please don't call it a waste.
In-Reply-To: <CAK5idxRh3d5mpT0hkagtAZQXG+GkbmTA2fjYCM+PEJ6_C0WaYw@mail.gmail.com>
References: <34384DEB0F607E42BD61D446586AD4E86B51172C@ORSMSX103.amr.corp.intel.com>
 <CANc-5Uwx5rVJfYfQK6_-SxnJSSLv62t7p4NddC=mNXda=WWJDQ@mail.gmail.com>
 <mk7asm$37i$1@ger.gmane.org>
 <CAP7+vJKDqU8TakYoQ6+z+zt_vs823MGZ9YzcFOBOwusYRtyqKg@mail.gmail.com>
 <CAMpsgwY+6CPWWEr2yyNgiTYCWJ9p67UNy8_p5hR2WQ9MtXXnbQ@mail.gmail.com>
 <CADiSq7eGd-R8PjNYKnn8XmMF7BD-s9cd65E+HNnXzRi-gSa4GA@mail.gmail.com>
 <CAGE7PNJqO36PXi3P8a6Ov1vkN2KigDnaMYQGyFcophYiSe+GNw@mail.gmail.com>
 <5568FAF6.6040802@python.org> <20150530015621.518b537f@fsol>
 <CADiSq7c35TP5pfH6AuEg7-hESf7-gkbLeh40i6rHREhvfqziRA@mail.gmail.com>
 <556906F0.4090504@sdamon.com>
 <CADiSq7daKMxSoFswRaYF2_PKtRGK2nMoJ69N4weKc7E=KKreYA@mail.gmail.com>
 <CABVPEKqW1qoHLSPppqtEELbLR=15cqUKMAk55CTVv+F1HqqUtw@mail.gmail.com>
 <556A45D0.7070606@hastings.org>
 <CAMSv6X3awjhAOdCW5oN2Kyz3GiwrRMO_mPCOqQKVaFk6r96OTw@mail.gmail.com>
 <556C3E93.50108@egenix.com>
 <CAK5idxSv6V266Anib+TAMY21J=Zc2wjFfL2M9ZgrHbtJHFXgSg@mail.gmail.com>
 <556ECB05.8060200@egenix.com>
 <CAK5idxRh3d5mpT0hkagtAZQXG+GkbmTA2fjYCM+PEJ6_C0WaYw@mail.gmail.com>
Message-ID: <20150603134959.3C831B3050B@webabinitio.net>

On Wed, 03 Jun 2015 12:04:10 +0200, Maciej Fijalkowski <fijall at gmail.com> wrote:
> On Wed, Jun 3, 2015 at 11:38 AM, M.-A. Lemburg <mal at egenix.com> wrote:
> > On 02.06.2015 21:07, Maciej Fijalkowski wrote:
> >> Hi
> >>
> >> There was a PSF-sponsored effort to improve the situation, with
> >> https://bitbucket.org/pypy/codespeed2/src being written (thank you
> >> PSF). It's not as big an improvement over codespeed as I would like, but
> >> it opens up some opportunities.
> >>
> >> That said, we have a benchmark machine for benchmarking cpython, and I
> >> never deployed nightly benchmarks of cpython, for a variety of reasons:
> >>
> >> * it would be cool to get a small VM to set up the web front end
> >>
> >> * people told me that only py3k is interesting, but I did not set it up
> >> for py3k because the benchmarks are mostly missing
> >>
> >> I'm willing to set up a nightly speed.python.org using nightly builds
> >> of python 2 and possibly python 3 if there is interest. I need
> >> support from someone maintaining the python buildbot to set up builds, and a
> >> VM to set things up on; otherwise I'm good to go.
> >>
> >> DISCLAIMER: I did participate in the codespeed rewrite that was not as
> >> successful as I would have hoped. I did not receive any money from the
> >> PSF for that, though.
> >
> > I think we should look into getting speed.python.org up and
> > running for both Python 2 and 3 branches:
> >
> >      https://speed.python.org/
> >
> > What would it take to make that happen?
> 
> I guess the ideal would be some cooperation from some of the cpython devs,
> so that, say, someone can set up a cpython buildbot.

What does "set up cpython buildbot" mean in this context?

--David

From brett at python.org  Wed Jun  3 15:52:12 2015
From: brett at python.org (Brett Cannon)
Date: Wed, 03 Jun 2015 13:52:12 +0000
Subject: [Python-Dev] zipimport
In-Reply-To: <20150603105948.77fd4a67@fsol>
References: <556E187B.3050508@happyspork.com> <556E19C2.8060205@happyspork.com>
 <CAP1=2W7V_pxt85UcAMkXUK8fFNGLQuGYRjCVObR_j7dHx65amQ@mail.gmail.com>
 <20150603105948.77fd4a67@fsol>
Message-ID: <CAP1=2W6zQTdkVSASG9myG06NFW7_BxT=KEqku65vHuRmxNCTww@mail.gmail.com>

On Wed, Jun 3, 2015 at 5:00 AM Antoine Pitrou <solipsis at pitrou.net> wrote:

> On Tue, 02 Jun 2015 21:20:10 +0000
> Brett Cannon <brett at python.org> wrote:
> >
> > I vaguely remember people suggesting writing the minimal zip-reading code
> > in C, but I can't remember why: we have I/O access in importlib through
> > _io, so what remains is really just pulling apart the zip file to get at
> > the files to import, which should be doable in pure Python.
>
> You would need other modules, such as struct and zlib, right?
>

zlib yes, but that's already a C extension so that could get compiled in as
a built-in module if necessary. As for struct, it might be nice but it
doesn't provide any functionality that couldn't be reproduced manually.
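As a small illustration of that point, the few fixed-layout reads a zip parser needs can be reproduced with `int.from_bytes` instead of `struct`; the sample bytes here (a zip EOCD-style signature plus two 16-bit counts) are just made up for the comparison.

```python
import struct

# A 4-byte little-endian signature followed by two 2-byte counts,
# shaped like the start of a zip End of Central Directory record.
blob = (0x06054B50).to_bytes(4, "little") \
     + (0).to_bytes(2, "little") \
     + (3).to_bytes(2, "little")

# With struct:
sig_s, disk_s, n_s = struct.unpack("<IHH", blob)

# Without struct, reproduced manually:
sig_m = int.from_bytes(blob[0:4], "little")
disk_m = int.from_bytes(blob[4:6], "little")
n_m = int.from_bytes(blob[6:8], "little")

assert (sig_s, disk_s, n_s) == (sig_m, disk_m, n_m) == (0x06054B50, 0, 3)
```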

From fijall at gmail.com  Wed Jun  3 18:59:28 2015
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Wed, 3 Jun 2015 18:59:28 +0200
Subject: [Python-Dev] 2.7 is here until 2020,
	please don't call it a waste.
In-Reply-To: <20150603134959.3C831B3050B@webabinitio.net>
References: <34384DEB0F607E42BD61D446586AD4E86B51172C@ORSMSX103.amr.corp.intel.com>
 <CANc-5Uwx5rVJfYfQK6_-SxnJSSLv62t7p4NddC=mNXda=WWJDQ@mail.gmail.com>
 <mk7asm$37i$1@ger.gmane.org>
 <CAP7+vJKDqU8TakYoQ6+z+zt_vs823MGZ9YzcFOBOwusYRtyqKg@mail.gmail.com>
 <CAMpsgwY+6CPWWEr2yyNgiTYCWJ9p67UNy8_p5hR2WQ9MtXXnbQ@mail.gmail.com>
 <CADiSq7eGd-R8PjNYKnn8XmMF7BD-s9cd65E+HNnXzRi-gSa4GA@mail.gmail.com>
 <CAGE7PNJqO36PXi3P8a6Ov1vkN2KigDnaMYQGyFcophYiSe+GNw@mail.gmail.com>
 <5568FAF6.6040802@python.org> <20150530015621.518b537f@fsol>
 <CADiSq7c35TP5pfH6AuEg7-hESf7-gkbLeh40i6rHREhvfqziRA@mail.gmail.com>
 <556906F0.4090504@sdamon.com>
 <CADiSq7daKMxSoFswRaYF2_PKtRGK2nMoJ69N4weKc7E=KKreYA@mail.gmail.com>
 <CABVPEKqW1qoHLSPppqtEELbLR=15cqUKMAk55CTVv+F1HqqUtw@mail.gmail.com>
 <556A45D0.7070606@hastings.org>
 <CAMSv6X3awjhAOdCW5oN2Kyz3GiwrRMO_mPCOqQKVaFk6r96OTw@mail.gmail.com>
 <556C3E93.50108@egenix.com>
 <CAK5idxSv6V266Anib+TAMY21J=Zc2wjFfL2M9ZgrHbtJHFXgSg@mail.gmail.com>
 <556ECB05.8060200@egenix.com>
 <CAK5idxRh3d5mpT0hkagtAZQXG+GkbmTA2fjYCM+PEJ6_C0WaYw@mail.gmail.com>
 <20150603134959.3C831B3050B@webabinitio.net>
Message-ID: <CAK5idxREAqCd4ruwaWVU1W6M6N3T35Mq8kUgN-eEfcrVmjPx-g@mail.gmail.com>

On Wed, Jun 3, 2015 at 3:49 PM, R. David Murray <rdmurray at bitdance.com> wrote:
> On Wed, 03 Jun 2015 12:04:10 +0200, Maciej Fijalkowski <fijall at gmail.com> wrote:
>> On Wed, Jun 3, 2015 at 11:38 AM, M.-A. Lemburg <mal at egenix.com> wrote:
>> > On 02.06.2015 21:07, Maciej Fijalkowski wrote:
>> >> Hi
>> >>
>> >> There was a PSF-sponsored effort to improve the situation, with
>> >> https://bitbucket.org/pypy/codespeed2/src being written (thank you
>> >> PSF). It's not as big an improvement over codespeed as I would like, but
>> >> it opens up some opportunities.
>> >>
>> >> That said, we have a benchmark machine for benchmarking cpython, and I
>> >> never deployed nightly benchmarks of cpython, for a variety of reasons:
>> >>
>> >> * it would be cool to get a small VM to set up the web front end
>> >>
>> >> * people told me that only py3k is interesting, but I did not set it up
>> >> for py3k because the benchmarks are mostly missing
>> >>
>> >> I'm willing to set up a nightly speed.python.org using nightly builds
>> >> of python 2 and possibly python 3 if there is interest. I need
>> >> support from someone maintaining the python buildbot to set up builds, and a
>> >> VM to set things up on; otherwise I'm good to go.
>> >>
>> >> DISCLAIMER: I did participate in the codespeed rewrite that was not as
>> >> successful as I would have hoped. I did not receive any money from the
>> >> PSF for that, though.
>> >
>> > I think we should look into getting speed.python.org up and
>> > running for both Python 2 and 3 branches:
>> >
>> >      https://speed.python.org/
>> >
>> > What would it take to make that happen?
>>
>> I guess the ideal would be some cooperation from some of the cpython devs,
>> so that, say, someone can set up a cpython buildbot.
>
> What does "set up cpython buildbot" mean in this context?

The setup has two parts: a program that runs the benchmarks (the
runner), which in the pypy case is driven by the pypy buildbot, and
the web side that reports the results. So someone who has access to
the cpython buildbot would be useful.
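To make the two-part shape concrete, here is a hypothetical sketch of the runner side: it times a benchmark and POSTs one result to a codespeed-style web front end. The server URL, revision id, and the exact field names expected by any particular deployment are assumptions for illustration, not a documented speed.python.org API.

```python
import time
import urllib.parse
import urllib.request

def time_benchmark(fn, repeat=5):
    """Best-of-N wall-clock timing of fn()."""
    best = float("inf")
    for _ in range(repeat):
        start = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - start)
    return best

def report_result(seconds, server="http://speed.example.org"):
    # Field names modeled on codespeed's /result/add/ endpoint; treat
    # both the path and the fields as assumptions for a real deployment.
    payload = urllib.parse.urlencode({
        "project": "cpython",
        "executable": "cpython-2.7",
        "benchmark": "richards",
        "commitid": "unknown",           # hypothetical revision id
        "environment": "bench-machine-1",
        "result_value": seconds,
    }).encode("ascii")
    req = urllib.request.Request(server + "/result/add/", data=payload)
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# The buildbot side would invoke time_benchmark() for each benchmark in
# the suite and then call report_result() per measurement.
```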

From tetsuya.morimoto at gmail.com  Thu Jun  4 04:08:02 2015
From: tetsuya.morimoto at gmail.com (Tetsuya Morimoto)
Date: Thu, 4 Jun 2015 11:08:02 +0900
Subject: [Python-Dev] 2.7 is here until 2020,
	please don't call it a waste.
In-Reply-To: <556C3E93.50108@egenix.com>
References: <34384DEB0F607E42BD61D446586AD4E86B51172C@ORSMSX103.amr.corp.intel.com>
 <7A4DF5A2-5D04-4F5A-9883-B5E815A14909@gmail.com>
 <CAP1=2W5cZZzwQj0kqHgKozoKhCL_2fvx2cydNqOZBVW1BNGEdw@mail.gmail.com>
 <CANc-5Uwx5rVJfYfQK6_-SxnJSSLv62t7p4NddC=mNXda=WWJDQ@mail.gmail.com>
 <mk7asm$37i$1@ger.gmane.org>
 <CAP7+vJKDqU8TakYoQ6+z+zt_vs823MGZ9YzcFOBOwusYRtyqKg@mail.gmail.com>
 <CAMpsgwY+6CPWWEr2yyNgiTYCWJ9p67UNy8_p5hR2WQ9MtXXnbQ@mail.gmail.com>
 <CADiSq7eGd-R8PjNYKnn8XmMF7BD-s9cd65E+HNnXzRi-gSa4GA@mail.gmail.com>
 <CAGE7PNJqO36PXi3P8a6Ov1vkN2KigDnaMYQGyFcophYiSe+GNw@mail.gmail.com>
 <5568FAF6.6040802@python.org> <20150530015621.518b537f@fsol>
 <CADiSq7c35TP5pfH6AuEg7-hESf7-gkbLeh40i6rHREhvfqziRA@mail.gmail.com>
 <556906F0.4090504@sdamon.com>
 <CADiSq7daKMxSoFswRaYF2_PKtRGK2nMoJ69N4weKc7E=KKreYA@mail.gmail.com>
 <CABVPEKqW1qoHLSPppqtEELbLR=15cqUKMAk55CTVv+F1HqqUtw@mail.gmail.com>
 <556A45D0.7070606@hastings.org>
 <CAMSv6X3awjhAOdCW5oN2Kyz3GiwrRMO_mPCOqQKVaFk6r96OTw@mail.gmail.com>
 <556C3E93.50108@egenix.com>
Message-ID: <CAGBzy2o5h=+pR8iLYfPjpQs8+-gwVDUjL1kXMPtMWtn26L7MyA@mail.gmail.com>

> If someone were to volunteer to set up and run speed.python.org, I think
> we could add some additional focus on performance regressions. Right now,
> we don't have any way of reliably and reproducibly testing Python
> performance.

I'm very interested in speed.python.org, and it's a pity that the project is
standing still. I'd like to contribute something ...

thanks,
Tetsuya


On Mon, Jun 1, 2015 at 8:14 PM, M.-A. Lemburg <mal at egenix.com> wrote:

> On 01.06.2015 12:44, Armin Rigo wrote:
> > Hi Larry,
> >
> > On 31 May 2015 at 01:20, Larry Hastings <larry at hastings.org> wrote:
> >> p.s. Supporting this patch also helps cut into PyPy's reported performance
> >> lead--that is, if they ever upgrade speed.pypy.org from comparing against
> >> Python *2.7.2*.
> >
> > Right, we should do this upgrade when 2.7.11 is out.
> >
> > There is some irony in your comment which seems to imply "PyPy is
> > cheating by comparing with an old Python 2.7.2": it is inside a thread
> > which started because "we didn't backport performance improvements to
> > 2.7.x so far".
> >
> > Just to convince myself, I just ran a performance comparison.  I ran
> > the same benchmark suite as speed.pypy.org, with 2.7.2 against 2.7.10,
> > both freshly compiled with no "configure" options at all.  The
> > differences are usually in the noise, but range from +5% to... -60%.
> > If anything, this seems to show that CPython should take more care
> > about performance regressions.  If someone is interested:
> >
> > * "raytrace-simple" is 1.19 times slower
> > * "bm_mako" is 1.29 times slower
> > * "spitfire_cstringio" is 1.60 times slower
> > * a number of other benchmarks are around 1.08.
> >
> > The "7.0x faster" number on speed.pypy.org would be significantly
> > *higher* if we upgraded the baseline to 2.7.10 now.
>
> If someone were to volunteer to set up and run speed.python.org,
> I think we could add some additional focus on performance
> regressions. Right now, we don't have any way of reliably
> and reproducibly testing Python performance.
>
> Hint: The PSF would most likely fund such adventures :-)
>
> --
> Marc-Andre Lemburg
> eGenix.com
>
> Professional Python Services directly from the Source  (#1, Jun 01 2015)
> >>> Python Projects, Coaching and Consulting ...  http://www.egenix.com/
> >>> mxODBC Plone/Zope Database Adapter ...       http://zope.egenix.com/
> >>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
> ________________________________________________________________________
>
> ::::: Try our mxODBC.Connect Python Database Interface for free ! ::::::
>
>    eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
>     D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
>            Registered at Amtsgericht Duesseldorf: HRB 46611
>                http://www.egenix.com/company/contact/
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/tetsuya.morimoto%40gmail.com
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20150604/469b4371/attachment.html>

From rose at happyspork.com  Thu Jun  4 04:10:04 2015
From: rose at happyspork.com (Rose Ames)
Date: Wed, 03 Jun 2015 22:10:04 -0400
Subject: [Python-Dev] zipimport
In-Reply-To: <556EF964.1040607@python.org>
References: <556E187B.3050508@happyspork.com> <556E19C2.8060205@happyspork.com>
 <CAP1=2W7V_pxt85UcAMkXUK8fFNGLQuGYRjCVObR_j7dHx65amQ@mail.gmail.com>
 <20150603105948.77fd4a67@fsol> <556EF471.1090604@happyspork.com>
 <556EF964.1040607@python.org>
Message-ID: <556FB37C.3090307@happyspork.com>

On 06/03/2015 08:56 AM, Antoine Pitrou wrote:
> Le 03/06/2015 14:34, Rose Ames a écrit :
>>
>>
>> On 06/03/2015 04:59 AM, Antoine Pitrou wrote:
>>> On Tue, 02 Jun 2015 21:20:10 +0000
>>> Brett Cannon <brett at python.org> wrote:
>>>>
>>>> I vaguely remember people suggesting writing the minimal zip reading code
>>>> in C but I can't remember why since we have I/O access in importlib through
>>>> _io and thus it's really just the pulling apart of the zip file to get at
>>>> the files to import and thus should be doable in pure Python.
>>>
>>> You would need other modules, such as struct and zlib, right?
>>>
>> zlib is already required for compressed zips:
>>
>> https://github.com/python/cpython/blob/master/Modules/zipimport.c#L1030
>
> Ah, the C code is already importing the zlib module? Then I guess it's fine :-)
>
>> Do some companies
>> zip the standard library or something?
>
> I don't know. If they do so, they probably exclude the zlib module in
> some way.
>
> Regards
>
> Antoine.

Sounds like I can just add a zipimporter to Lib/importlib/_bootstrap.py 
and remove zipimport.c entirely, is that right?
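For context, the "minimal zip reading code" Brett mentions really is feasible with just struct (plus zlib for compressed entries). A rough sketch of that kind of pure-Python reading -- a hypothetical helper, not the proposed implementation, and it assumes no zip64 records and no archive comment:

```python
import io
import struct
import zipfile

def list_zip_names(data):
    """List member names by parsing the zip central directory directly
    with struct. Assumes no zip64 and no trailing archive comment, so
    the end-of-central-directory (EOCD) record is the last 22 bytes."""
    eocd = data[-22:]
    sig, _, _, _, count, cd_size, cd_off, _ = struct.unpack("<IHHHHIIH", eocd)
    assert sig == 0x06054b50, "EOCD signature not found"
    names = []
    pos = cd_off
    for _ in range(count):
        # Central directory file header: 46 fixed bytes, then name/extra/comment.
        hdr = struct.unpack("<IHHHHHHIIIHHHHHII", data[pos:pos + 46])
        assert hdr[0] == 0x02014b50, "central directory signature not found"
        name_len, extra_len, comment_len = hdr[10], hdr[11], hdr[12]
        names.append(data[pos + 46:pos + 46 + name_len].decode("utf-8"))
        pos += 46 + name_len + extra_len + comment_len
    return names

# Build a small archive in memory, then read it back without zipfile's reader.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("pkg/__init__.py", "")
    zf.writestr("pkg/mod.py", "X = 1\n")
print(list_zip_names(buf.getvalue()))  # ['pkg/__init__.py', 'pkg/mod.py']
```

A real zipimporter would of course also need the local file headers and (for deflated entries) zlib, but the parsing itself is this kind of straightforward struct work.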

From ncoghlan at gmail.com  Thu Jun  4 11:30:11 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 4 Jun 2015 19:30:11 +1000
Subject: [Python-Dev] zipimport
In-Reply-To: <556FB37C.3090307@happyspork.com>
References: <556E187B.3050508@happyspork.com> <556E19C2.8060205@happyspork.com>
 <CAP1=2W7V_pxt85UcAMkXUK8fFNGLQuGYRjCVObR_j7dHx65amQ@mail.gmail.com>
 <20150603105948.77fd4a67@fsol> <556EF471.1090604@happyspork.com>
 <556EF964.1040607@python.org> <556FB37C.3090307@happyspork.com>
Message-ID: <CADiSq7ciwp-2OuuomrV-=DZiX_=UqF9QZ-c8h4qvGfTEnDRhdA@mail.gmail.com>

On 4 June 2015 at 12:10, Rose Ames <rose at happyspork.com> wrote:
> Sounds like I can just add a zipimporter to Lib/importlib/_bootstrap.py and
> remove zipimport.c entirely, is that right?

We moved things around a bit for 3.5, so this would be in
Lib/importlib/_bootstrap_external.py now.

My vague recollection is that the key complication we came up with was
how to make the "standard library in a zipfile" scenario keep working
(together with the completely frozen application case, where the app
and its modules are bundled together with the interpreter in one
binary). In that case, the zip format is just being used as an
archive, rather than for compression, so the current zlib dependency
shouldn't enter into it.

However, it likely makes sense to look at what would be involved in
making zipimport PEP 451 compatible *without* accounting for that
constraint, and then see what would need to be refactored/frozen/moved
to C to handle it after the simpler case is working.
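For reference, "PEP 451 compatible" means exposing a find_spec()/exec_module() pair instead of the legacy find_module()/load_module(). A minimal sketch of the shape involved, using an in-memory dict of sources as a stand-in for the archive (all names here are hypothetical, not the eventual zipimport API):

```python
import sys
import importlib.abc
import importlib.util

class DictLoader(importlib.abc.Loader):
    """Loader over {module name: source} -- standing in for a zip archive."""

    def __init__(self, sources):
        self.sources = sources

    def exec_module(self, module):
        # PEP 451: the loader executes code in an already-created module.
        code = compile(self.sources[module.__name__],
                       "<dict:%s>" % module.__name__, "exec")
        exec(code, module.__dict__)

class DictFinder(importlib.abc.MetaPathFinder):
    def __init__(self, sources):
        self.loader = DictLoader(sources)

    def find_spec(self, name, path=None, target=None):
        if name not in self.loader.sources:
            return None  # let the next finder on sys.meta_path try
        return importlib.util.spec_from_loader(name, self.loader)

sys.meta_path.append(DictFinder({"demo_mod": "ANSWER = 42\n"}))
import demo_mod
print(demo_mod.ANSWER)  # 42
```

The zipfile-specific work is then concentrated in the loader's data access, which is what the frozen/C-level bootstrap constraint complicates.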

Regards,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From mal at egenix.com  Thu Jun  4 12:55:55 2015
From: mal at egenix.com (M.-A. Lemburg)
Date: Thu, 04 Jun 2015 12:55:55 +0200
Subject: [Python-Dev] speed.python.org (was: 2.7 is here until 2020,
 please don't call it a waste.)
In-Reply-To: <CAGBzy2o5h=+pR8iLYfPjpQs8+-gwVDUjL1kXMPtMWtn26L7MyA@mail.gmail.com>
References: <34384DEB0F607E42BD61D446586AD4E86B51172C@ORSMSX103.amr.corp.intel.com>	<CANc-5Uwx5rVJfYfQK6_-SxnJSSLv62t7p4NddC=mNXda=WWJDQ@mail.gmail.com>	<mk7asm$37i$1@ger.gmane.org>	<CAP7+vJKDqU8TakYoQ6+z+zt_vs823MGZ9YzcFOBOwusYRtyqKg@mail.gmail.com>	<CAMpsgwY+6CPWWEr2yyNgiTYCWJ9p67UNy8_p5hR2WQ9MtXXnbQ@mail.gmail.com>	<CADiSq7eGd-R8PjNYKnn8XmMF7BD-s9cd65E+HNnXzRi-gSa4GA@mail.gmail.com>	<CAGE7PNJqO36PXi3P8a6Ov1vkN2KigDnaMYQGyFcophYiSe+GNw@mail.gmail.com>	<5568FAF6.6040802@python.org>
 <20150530015621.518b537f@fsol>	<CADiSq7c35TP5pfH6AuEg7-hESf7-gkbLeh40i6rHREhvfqziRA@mail.gmail.com>	<556906F0.4090504@sdamon.com>	<CADiSq7daKMxSoFswRaYF2_PKtRGK2nMoJ69N4weKc7E=KKreYA@mail.gmail.com>	<CABVPEKqW1qoHLSPppqtEELbLR=15cqUKMAk55CTVv+F1HqqUtw@mail.gmail.com>	<556A45D0.7070606@hastings.org>	<CAMSv6X3awjhAOdCW5oN2Kyz3GiwrRMO_mPCOqQKVaFk6r96OTw@mail.gmail.com>	<556C3E93.50108@egenix.com>
 <CAGBzy2o5h=+pR8iLYfPjpQs8+-gwVDUjL1kXMPtMWtn26L7MyA@mail.gmail.com>
Message-ID: <55702EBB.7070606@egenix.com>

On 04.06.2015 04:08, Tetsuya Morimoto wrote:
>> If someone were to volunteer to set up and run speed.python.org, I think
>> we could add some additional focus on performance regressions. Right now,
>> we don't have any way of reliably and reproducibly testing Python
>> performance.
> 
> I'm very interested in speed.python.org and feel regret that the project is
> standing still. I have a mind to contribute something ...

On 03.06.2015 18:59, Maciej Fijalkowski wrote:
> On Wed, Jun 3, 2015 at 3:49 PM, R. David Murray
>>>> I think we should look into getting speed.python.org up and
>>>> running for both Python 2 and 3 branches:
>>>>
>>>>      https://speed.python.org/
>>>>
>>>> What would it take to make that happen ?
>>>
>>> I guess ideal would be some cooperation from some of the cpython devs,
>>> so say someone can setup cpython buildbot
>>
>> What does "set up cpython buildbot" mean in this context?
>
> The way it works is dual - there is a program running the benchmarks
> (the runner) which is in the pypy case run by the pypy buildbot and
> the web side that reports stuff. So someone who has access to cpython
> buildbot would be useful.

Ok, so there's interest and we have at least a few people who are
willing to help.

Now we need someone to take the lead on this and form a small
project group to get everything implemented. Who would be up
to such a task ?

The speed project already has a mailing list, so you could use
that for organizing the details.

We could also create a PSF work group and assign a budget to it,
if that helps.

If you need help with all this, let me know.

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 04 2015)
>>> Python Projects, Coaching and Consulting ...  http://www.egenix.com/
>>> mxODBC Plone/Zope Database Adapter ...       http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________

::::: Try our mxODBC.Connect Python Database Interface for free ! ::::::

   eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
    D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
           Registered at Amtsgericht Duesseldorf: HRB 46611
               http://www.egenix.com/company/contact/

From rdmurray at bitdance.com  Thu Jun  4 16:32:00 2015
From: rdmurray at bitdance.com (R. David Murray)
Date: Thu, 04 Jun 2015 10:32:00 -0400
Subject: [Python-Dev] speed.python.org (was: 2.7 is here until 2020,
	please don't call it a waste.)
In-Reply-To: <55702EBB.7070606@egenix.com>
References: <34384DEB0F607E42BD61D446586AD4E86B51172C@ORSMSX103.amr.corp.intel.com>
 <CANc-5Uwx5rVJfYfQK6_-SxnJSSLv62t7p4NddC=mNXda=WWJDQ@mail.gmail.com>
 <mk7asm$37i$1@ger.gmane.org>
 <CAP7+vJKDqU8TakYoQ6+z+zt_vs823MGZ9YzcFOBOwusYRtyqKg@mail.gmail.com>
 <CAMpsgwY+6CPWWEr2yyNgiTYCWJ9p67UNy8_p5hR2WQ9MtXXnbQ@mail.gmail.com>
 <CADiSq7eGd-R8PjNYKnn8XmMF7BD-s9cd65E+HNnXzRi-gSa4GA@mail.gmail.com>
 <CAGE7PNJqO36PXi3P8a6Ov1vkN2KigDnaMYQGyFcophYiSe+GNw@mail.gmail.com>
 <5568FAF6.6040802@python.org> <20150530015621.518b537f@fsol>
 <CADiSq7c35TP5pfH6AuEg7-hESf7-gkbLeh40i6rHREhvfqziRA@mail.gmail.com>
 <556906F0.4090504@sdamon.com>
 <CADiSq7daKMxSoFswRaYF2_PKtRGK2nMoJ69N4weKc7E=KKreYA@mail.gmail.com>
 <CABVPEKqW1qoHLSPppqtEELbLR=15cqUKMAk55CTVv+F1HqqUtw@mail.gmail.com>
 <556A45D0.7070606@hastings.org>
 <CAMSv6X3awjhAOdCW5oN2Kyz3GiwrRMO_mPCOqQKVaFk6r96OTw@mail.gmail.com>
 <556C3E93.50108@egenix.com>
 <CAGBzy2o5h=+pR8iLYfPjpQs8+-gwVDUjL1kXMPtMWtn26L7MyA@mail.gmail.com>
 <55702EBB.7070606@egenix.com>
Message-ID: <20150604143200.E2BCFB14089@webabinitio.net>

On Thu, 04 Jun 2015 12:55:55 +0200, "M.-A. Lemburg" <mal at egenix.com> wrote:
> On 04.06.2015 04:08, Tetsuya Morimoto wrote:
> >> If someone were to volunteer to set up and run speed.python.org, I think
> >> we could add some additional focus on performance regressions. Right now,
> >> we don't have any way of reliably and reproducibly testing Python
> >> performance.
> > 
> > I'm very interested in speed.python.org and feel regret that the project is
> > standing still. I have a mind to contribute something ...
> 
> On 03.06.2015 18:59, Maciej Fijalkowski wrote:
> > On Wed, Jun 3, 2015 at 3:49 PM, R. David Murray
> >>>> I think we should look into getting speed.python.org up and
> >>>> running for both Python 2 and 3 branches:
> >>>>
> >>>>      https://speed.python.org/
> >>>>
> >>>> What would it take to make that happen ?
> >>>
> >>> I guess ideal would be some cooperation from some of the cpython devs,
> >>> so say someone can setup cpython buildbot
> >>
> >> What does "set up cpython buildbot" mean in this context?
> >
> > The way it works is dual - there is a program running the benchmarks
> > (the runner) which is in the pypy case run by the pypy buildbot and
> > the web side that reports stuff. So someone who has access to cpython
> > buildbot would be useful.

(I don't seem to have gotten a copy of Maciej's message, at least not
yet.)

OK, so what you are saying is that speed.python.org will run a buildbot
slave so that when a change is committed to CPython, a speed run will be
triggered?  Is "the runner" a normal buildbot slave, or something
custom?  In the normal case the master controls what the slave
runs...but regardless, you'll need to let us know how the slave
invocation needs to be configured on the master.

> Ok, so there's interest and we have at least a few people who are
> willing to help.
> 
> Now we need someone to take the lead on this and form a small
> project group to get everything implemented. Who would be up
> to such a task ?
> 
> The speed project already has a mailing list, so you could use
> that for organizing the details.

If it's a low volume list I'm willing to sign up, but regardless I'm
willing to help with the buildbot setup on the CPython side.  (As soon
as my credential-update request gets through infrastructure, at least :)

--David

From fijall at gmail.com  Thu Jun  4 17:51:26 2015
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Thu, 4 Jun 2015 17:51:26 +0200
Subject: [Python-Dev] speed.python.org (was: 2.7 is here until 2020,
 please don't call it a waste.)
In-Reply-To: <20150604143200.E2BCFB14089@webabinitio.net>
References: <34384DEB0F607E42BD61D446586AD4E86B51172C@ORSMSX103.amr.corp.intel.com>
 <CANc-5Uwx5rVJfYfQK6_-SxnJSSLv62t7p4NddC=mNXda=WWJDQ@mail.gmail.com>
 <mk7asm$37i$1@ger.gmane.org>
 <CAP7+vJKDqU8TakYoQ6+z+zt_vs823MGZ9YzcFOBOwusYRtyqKg@mail.gmail.com>
 <CAMpsgwY+6CPWWEr2yyNgiTYCWJ9p67UNy8_p5hR2WQ9MtXXnbQ@mail.gmail.com>
 <CADiSq7eGd-R8PjNYKnn8XmMF7BD-s9cd65E+HNnXzRi-gSa4GA@mail.gmail.com>
 <CAGE7PNJqO36PXi3P8a6Ov1vkN2KigDnaMYQGyFcophYiSe+GNw@mail.gmail.com>
 <5568FAF6.6040802@python.org> <20150530015621.518b537f@fsol>
 <CADiSq7c35TP5pfH6AuEg7-hESf7-gkbLeh40i6rHREhvfqziRA@mail.gmail.com>
 <556906F0.4090504@sdamon.com>
 <CADiSq7daKMxSoFswRaYF2_PKtRGK2nMoJ69N4weKc7E=KKreYA@mail.gmail.com>
 <CABVPEKqW1qoHLSPppqtEELbLR=15cqUKMAk55CTVv+F1HqqUtw@mail.gmail.com>
 <556A45D0.7070606@hastings.org>
 <CAMSv6X3awjhAOdCW5oN2Kyz3GiwrRMO_mPCOqQKVaFk6r96OTw@mail.gmail.com>
 <556C3E93.50108@egenix.com>
 <CAGBzy2o5h=+pR8iLYfPjpQs8+-gwVDUjL1kXMPtMWtn26L7MyA@mail.gmail.com>
 <55702EBB.7070606@egenix.com> <20150604143200.E2BCFB14089@webabinitio.net>
Message-ID: <CAK5idxTW7BHL=Y=6bqpXekmoDb9vptG71wCe8rjzJoNupWnpEA@mail.gmail.com>

On Thu, Jun 4, 2015 at 4:32 PM, R. David Murray <rdmurray at bitdance.com> wrote:
> On Thu, 04 Jun 2015 12:55:55 +0200, "M.-A. Lemburg" <mal at egenix.com> wrote:
>> On 04.06.2015 04:08, Tetsuya Morimoto wrote:
>> >> If someone were to volunteer to set up and run speed.python.org, I think
>> >> we could add some additional focus on performance regressions. Right now,
>> >> we don't have any way of reliably and reproducibly testing Python
>> >> performance.
>> >
>> > I'm very interested in speed.python.org and feel regret that the project is
>> > standing still. I have a mind to contribute something ...
>>
>> On 03.06.2015 18:59, Maciej Fijalkowski wrote:
>> > On Wed, Jun 3, 2015 at 3:49 PM, R. David Murray
>> >>>> I think we should look into getting speed.python.org up and
>> >>>> running for both Python 2 and 3 branches:
>> >>>>
>> >>>>      https://speed.python.org/
>> >>>>
>> >>>> What would it take to make that happen ?
>> >>>
>> >>> I guess ideal would be some cooperation from some of the cpython devs,
>> >>> so say someone can setup cpython buildbot
>> >>
>> >> What does "set up cpython buildbot" mean in this context?
>> >
>> > The way it works is dual - there is a program running the benchmarks
>> > (the runner) which is in the pypy case run by the pypy buildbot and
>> > the web side that reports stuff. So someone who has access to cpython
>> > buildbot would be useful.
>
> (I don't seem to have gotten a copy of Maciej's message, at least not
> yet.)
>
> OK, so what you are saying is that speed.python.org will run a buildbot
> slave so that when a change is committed to CPython, a speed run will be
> triggered?  Is "the runner" a normal buildbot slave, or something
> custom?  In the normal case the master controls what the slave
> runs...but regardless, you'll need to let us know how the slave
> invocation needs to be configured on the master.

Ideally nightly (benchmarks take a while). The setup for pypy looks like this:


https://bitbucket.org/pypy/buildbot/src/5fa1f1a4990f842dfbee416c4c2e2f6f75d451c4/bot2/pypybuildbot/builds.py?at=default#cl-734

so fairly easy. This already generates a JSON file that you can plot.
We can set up automatic uploads too.
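Once two such JSON files exist (baseline and candidate revision), flagging regressions is a simple ratio comparison. A rough sketch of the idea -- the field layout here is made up for illustration, not the runner's actual output format:

```python
import json

def slowdowns(baseline_json, candidate_json, threshold=1.05):
    """Report benchmarks where the candidate is more than `threshold`
    times slower than the baseline. Assumed input: {"bench": seconds}."""
    base = json.loads(baseline_json)
    cand = json.loads(candidate_json)
    return {name: round(cand[name] / base[name], 2)
            for name in base
            if name in cand and cand[name] / base[name] > threshold}

# Numbers chosen to mirror the ratios quoted earlier in this thread.
base = json.dumps({"raytrace-simple": 1.00, "bm_mako": 0.50, "nbody": 2.00})
cand = json.dumps({"raytrace-simple": 1.19, "bm_mako": 0.645, "nbody": 2.02})
print(slowdowns(base, cand))  # {'raytrace-simple': 1.19, 'bm_mako': 1.29}
```

The web side (codespeed, as on speed.pypy.org) then just needs the per-revision numbers uploaded.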


>
>> Ok, so there's interest and we have at least a few people who are
>> willing to help.
>>
>> Now we need someone to take the lead on this and form a small
>> project group to get everything implemented. Who would be up
>> to such a task ?
>>
>> The speed project already has a mailing list, so you could use
>> that for organizing the details.
>
> If it's a low volume list I'm willing to sign up, but regardless I'm
> willing to help with the buildbot setup on the CPython side.  (As soon
> as my credential-update request gets through infrastructure, at least :)
>
> --David
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/fijall%40gmail.com

From Steve.Dower at microsoft.com  Fri Jun  5 01:24:41 2015
From: Steve.Dower at microsoft.com (Steve Dower)
Date: Thu, 4 Jun 2015 23:24:41 +0000
Subject: [Python-Dev] Mingw help
Message-ID: <BY2PR0301MB1653C6865620C89F4B215E3DF5B30@BY2PR0301MB1653.namprd03.prod.outlook.com>

If you have an interest in linking to the Windows builds of Python 2.7 and 3.5+ using mingw, please visit http://bugs.python.org/issue24385

Unless someone can provide me with the One True Way to generate a lib that will work for everyone, I'm going to replace the lib file with instructions on how to generate it with your own tools. I've done enough guessing and need someone who actually uses mingw to step up and help out.

Thanks,
Steve
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20150604/d2edfb59/attachment.html>

From python at mrabarnett.plus.com  Fri Jun  5 02:37:28 2015
From: python at mrabarnett.plus.com (MRAB)
Date: Fri, 05 Jun 2015 01:37:28 +0100
Subject: [Python-Dev] Unable to build regex module against Python 3.5
 32-bit
In-Reply-To: <CACac1F-n8zN+5cxQu8DsBGQWBkUkAVTVQUngsHrS1OZMNmUVAQ@mail.gmail.com>
References: <556380A9.4000206@mrabarnett.plus.com>
 <CACac1F9EXVXH_8FEeyVoDkZBbGuDPz-+O4LzCLZ8PMjjOsr9_w@mail.gmail.com>
 <5563B18B.4040305@mrabarnett.plus.com>
 <CACac1F9aQ1Jj_HYQisjnDP3SJtVievP=oLgjEBJvjxj8btO9pg@mail.gmail.com>
 <CACac1F_j2-riRTvLzNe=dhrwbRVROqM+gT=rgyy4FzCAe-pibA@mail.gmail.com>
 <BY1PR03MB14661DDB918F7C6D961EE8B5F5CC0@BY1PR03MB1466.namprd03.prod.outlook.com>
 <CACac1F9hrDLMsiXs+YpP0fzdMG+9PHcT1q=HgpuhOe9xT1ZHtQ@mail.gmail.com>
 <CADiSq7f7odgBhTGVDkXo33qQkHeRy34HjE31aMJaLgfjVBp70w@mail.gmail.com>
 <CACac1F-n8zN+5cxQu8DsBGQWBkUkAVTVQUngsHrS1OZMNmUVAQ@mail.gmail.com>
Message-ID: <5570EF48.9060301@mrabarnett.plus.com>

On 2015-05-27 09:25, Paul Moore wrote:
> On 27 May 2015 at 09:10, Nick Coghlan <ncoghlan at gmail.com> wrote:
>> The old distutils docs aren't gone, the top level links just moved to the
>> distutils package docs: https://docs.python.org/3/library/distutils.html
>>
>> I kept them (with the same deep link URLs) because I know there's stuff in
>> there that isn't currently documented anywhere else. I moved them to a more
>> obscure location because there's also stuff in there that's thoroughly
>> outdated, and it's a non-trivial task to figure out which is which and move
>> the still useful stuff to a more appropriate home :)
>
> Thanks.
>
> Your plan worked perfectly, because I never knew they were there :-)
>
> https://docs.python.org/3/install/index.html#older-versions-of-python-and-mingw
> implies that the libpythonXY.a files are only needed in older versions
> of Python/mingw. I don't know how true that is, although I do know
> that mingw should be able to link directly to a DLL without needing a
> lib file.
>
> It would be interesting to know if MRAB's build process can use the
> DLL, rather than requiring a lib file (or for that matter if distutils
> works without the lib file!)
>
Steve Dower's post has prompted me to look again at building the regex
module for Python 3.5, 32-bit and 64-bit, using just Mingw64 and
linking against python35.dll. It works!

Earlier versions of Python, however, including Python 2.7, still seem
to want libpython??.a.


From python at mrabarnett.plus.com  Fri Jun  5 03:28:51 2015
From: python at mrabarnett.plus.com (MRAB)
Date: Fri, 05 Jun 2015 02:28:51 +0100
Subject: [Python-Dev] Unable to build regex module against Python 3.5
 32-bit
In-Reply-To: <5570EF48.9060301@mrabarnett.plus.com>
References: <556380A9.4000206@mrabarnett.plus.com>
 <CACac1F9EXVXH_8FEeyVoDkZBbGuDPz-+O4LzCLZ8PMjjOsr9_w@mail.gmail.com>
 <5563B18B.4040305@mrabarnett.plus.com>
 <CACac1F9aQ1Jj_HYQisjnDP3SJtVievP=oLgjEBJvjxj8btO9pg@mail.gmail.com>
 <CACac1F_j2-riRTvLzNe=dhrwbRVROqM+gT=rgyy4FzCAe-pibA@mail.gmail.com>
 <BY1PR03MB14661DDB918F7C6D961EE8B5F5CC0@BY1PR03MB1466.namprd03.prod.outlook.com>
 <CACac1F9hrDLMsiXs+YpP0fzdMG+9PHcT1q=HgpuhOe9xT1ZHtQ@mail.gmail.com>
 <CADiSq7f7odgBhTGVDkXo33qQkHeRy34HjE31aMJaLgfjVBp70w@mail.gmail.com>
 <CACac1F-n8zN+5cxQu8DsBGQWBkUkAVTVQUngsHrS1OZMNmUVAQ@mail.gmail.com>
 <5570EF48.9060301@mrabarnett.plus.com>
Message-ID: <5570FB53.5070906@mrabarnett.plus.com>

On 2015-06-05 01:37, MRAB wrote:
> On 2015-05-27 09:25, Paul Moore wrote:
>> On 27 May 2015 at 09:10, Nick Coghlan <ncoghlan at gmail.com> wrote:
>>> The old distutils docs aren't gone, the top level links just moved to the
>>> distutils package docs: https://docs.python.org/3/library/distutils.html
>>>
>>> I kept them (with the same deep link URLs) because I know there's stuff in
>>> there that isn't currently documented anywhere else. I moved them to a more
>>> obscure location because there's also stuff in there that's thoroughly
>>> outdated, and it's a non-trivial task to figure out which is which and move
>>> the still useful stuff to a more appropriate home :)
>>
>> Thanks.
>>
>> Your plan worked perfectly, because I never knew they were there :-)
>>
>> https://docs.python.org/3/install/index.html#older-versions-of-python-and-mingw
>> implies that the libpythonXY.a files are only needed in older versions
>> of Python/mingw. I don't know how true that is, although I do know
>> that mingw should be able to link directly to a DLL without needing a
>> lib file.
>>
>> It would be interesting to know if MRAB's build process can use the
>> DLL, rather than requiring a lib file (or for that matter if distutils
>> works without the lib file!)
>>
> Steve Dower's post has prompted me to look again at building the regex
> module for Python 3.5, 32-bit and 64-bit, using just Mingw64 and
> linking against python35.dll. It works!
>
> Earlier versions of Python, however, including Python 2.7, still seem
> to want libpython??.a.
>
For reference, here's how I can build the regex module on Windows 8.1,
64-bit, using only MinGW64.

For Python 3.5, I can link against "python35.dll", but for earlier
versions, including Python 2.7, I need "libpython??.a".

I have built the regex module for all 16 supported versions of
Python (2.5-2.7, 3.1-3.5, 64-bit and 32-bit), and they have all passed
the tests.


rem For Python 3.5, 64-bit.
rem Can link against the Python DLL.

rem Compile
"C:\MinGW64\bin\gcc.exe" -mdll -m64 -DMS_WIN64 -O2 -Wall -Wsign-compare -Wconversion -I"C:\Python35-64\include" -c "D:\mrab-regex\source\_regex_unicode.c" -o "D:\mrab-regex\release\3.5-64\_regex_unicode.o"

"C:\MinGW64\bin\gcc.exe" -mdll -m64 -DMS_WIN64 -O2 -Wall -Wsign-compare -Wconversion -I"C:\Python35-64\include" -c "D:\mrab-regex\source\_regex.c" -o "D:\mrab-regex\release\3.5-64\_regex.o"

rem Link
"C:\MinGW64\bin\gcc.exe" -m64 -shared -s "D:\mrab-regex\release\3.5-64\_regex_unicode.o" "D:\mrab-regex\release\3.5-64\_regex.o" -L"C:\Python35" -lpython35 -o "D:\mrab-regex\release\3.5-64\_regex.pyd"


rem For Python 3.5, 32-bit.
rem Can link against the Python DLL.

rem Compile
"C:\MinGW64\bin\gcc.exe" -mdll -m32 -O2 -Wall -Wsign-compare -Wconversion -I"C:\Python35-32\include" -c "D:\mrab-regex\source\_regex_unicode.c" -o "D:\mrab-regex\release\3.5-32\_regex_unicode.o"

"C:\MinGW64\bin\gcc.exe" -mdll -m32 -O2 -Wall -Wsign-compare -Wconversion -I"C:\Python35-32\include" -c "D:\mrab-regex\source\_regex.c" -o "D:\mrab-regex\release\3.5-32\_regex.o"

rem Link
"C:\MinGW64\bin\gcc.exe" -m32 -shared -s "D:\mrab-regex\release\3.5-32\_regex_unicode.o" "D:\mrab-regex\release\3.5-32\_regex.o" -L"C:\Python35-32" -lpython35 -o "D:\mrab-regex\release\3.5-32\_regex.pyd"


rem For Python 3.4, 64-bit.
rem Need to link against the Python .a file.

rem Make libpython34.a
"C:\MinGW64\x86_64-w64-mingw32\bin\gendef.exe" - "C:\Windows\System32\python34.dll" >"C:\Python34-64\libs\libpython34.def"

"C:\MinGW64\bin\dlltool.exe" --dllname python34.dll --def "C:\Python34-64\libs\libpython34.def" --output-lib "C:\Python34-64\libs\libpython34.a"

rem Compile
"C:\MinGW64\bin\gcc.exe" -mdll -m64 -DMS_WIN64 -O2 -Wall -Wsign-compare -Wconversion -I"C:\Python34-64\include" -c "D:\mrab-regex\source\_regex_unicode.c" -o "D:\mrab-regex\release\3.4-64\_regex_unicode.o"

"C:\MinGW64\bin\gcc.exe" -mdll -m64 -DMS_WIN64 -O2 -Wall -Wsign-compare -Wconversion -I"C:\Python34-64\include" -c "D:\mrab-regex\source\_regex.c" -o "D:\mrab-regex\release\3.4-64\_regex.o"

rem Link
"C:\MinGW64\bin\gcc.exe" -m64 -shared -s "D:\mrab-regex\release\3.4-64\_regex_unicode.o" "D:\mrab-regex\release\3.4-64\_regex.o" -L"C:\Python34-64\libs" -lpython34 -o "D:\mrab-regex\release\3.4-64\_regex.pyd"


rem For Python 3.4, 32-bit.
rem Need to link against the Python .a file.

rem Make libpython34.a
"C:\MinGW64\x86_64-w64-mingw32\bin\gendef.exe" - "C:\Windows\SysWOW64\python34.dll" >"C:\Python34-32\libs\libpython34.def"

"C:\MinGW64\x86_64-w64-mingw32\bin\dlltool.exe" --as-flags=--32 -m i386 --dllname python34.dll --def "C:\Python34-32\libs\libpython34.def" --output-lib "C:\Python34-32\libs\libpython34.a"

rem Compile
"C:\MinGW64\bin\gcc.exe" -mdll -m32 -O2 -Wall -Wsign-compare -Wconversion -I"C:\Python34-32\include" -c "D:\mrab-regex\source\_regex_unicode.c" -o "D:\mrab-regex\release\3.4-32\_regex_unicode.o"

"C:\MinGW64\bin\gcc.exe" -mdll -m32 -O2 -Wall -Wsign-compare -Wconversion -I"C:\Python34-32\include" -c "D:\mrab-regex\source\_regex.c" -o "D:\mrab-regex\release\3.4-32\_regex.o"

rem Link
"C:\MinGW64\bin\gcc.exe" -m32 -shared -s "D:\mrab-regex\release\3.4-32\_regex_unicode.o" "D:\mrab-regex\release\3.4-32\_regex.o" -L"C:\Python34-32\libs" -lpython34 -o "D:\mrab-regex\release\3.4-32\_regex.pyd"


rem For earlier versions of Python, follow the pattern of Python 3.4.



From breamoreboy at yahoo.co.uk  Fri Jun  5 03:33:13 2015
From: breamoreboy at yahoo.co.uk (Mark Lawrence)
Date: Fri, 05 Jun 2015 02:33:13 +0100
Subject: [Python-Dev] Unable to build regex module against Python 3.5
	32-bit
In-Reply-To: <5570EF48.9060301@mrabarnett.plus.com>
References: <556380A9.4000206@mrabarnett.plus.com>
 <CACac1F9EXVXH_8FEeyVoDkZBbGuDPz-+O4LzCLZ8PMjjOsr9_w@mail.gmail.com>
 <5563B18B.4040305@mrabarnett.plus.com>
 <CACac1F9aQ1Jj_HYQisjnDP3SJtVievP=oLgjEBJvjxj8btO9pg@mail.gmail.com>
 <CACac1F_j2-riRTvLzNe=dhrwbRVROqM+gT=rgyy4FzCAe-pibA@mail.gmail.com>
 <BY1PR03MB14661DDB918F7C6D961EE8B5F5CC0@BY1PR03MB1466.namprd03.prod.outlook.com>
 <CACac1F9hrDLMsiXs+YpP0fzdMG+9PHcT1q=HgpuhOe9xT1ZHtQ@mail.gmail.com>
 <CADiSq7f7odgBhTGVDkXo33qQkHeRy34HjE31aMJaLgfjVBp70w@mail.gmail.com>
 <CACac1F-n8zN+5cxQu8DsBGQWBkUkAVTVQUngsHrS1OZMNmUVAQ@mail.gmail.com>
 <5570EF48.9060301@mrabarnett.plus.com>
Message-ID: <mkqu8t$8c0$1@ger.gmane.org>

On 05/06/2015 01:37, MRAB wrote:
> Steve Dower's post has prompted me to look again at building the regex
> module for Python 3.5, 32-bit and 64-bit, using just Mingw64 and
> linking against python35.dll. It works!
>
> Earlier versions of Python, however, including Python 2.7, still seem
> to want libpython??.a.
>

This http://bugs.python.org/issue24385 should interest you.

-- 
My fellow Pythonistas, ask not what our language can do for you, ask
what you can do for our language.

Mark Lawrence


From p.f.moore at gmail.com  Fri Jun  5 12:24:32 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Fri, 5 Jun 2015 11:24:32 +0100
Subject: [Python-Dev] Unable to build regex module against Python 3.5
	32-bit
In-Reply-To: <5570FB53.5070906@mrabarnett.plus.com>
References: <556380A9.4000206@mrabarnett.plus.com>
 <CACac1F9EXVXH_8FEeyVoDkZBbGuDPz-+O4LzCLZ8PMjjOsr9_w@mail.gmail.com>
 <5563B18B.4040305@mrabarnett.plus.com>
 <CACac1F9aQ1Jj_HYQisjnDP3SJtVievP=oLgjEBJvjxj8btO9pg@mail.gmail.com>
 <CACac1F_j2-riRTvLzNe=dhrwbRVROqM+gT=rgyy4FzCAe-pibA@mail.gmail.com>
 <BY1PR03MB14661DDB918F7C6D961EE8B5F5CC0@BY1PR03MB1466.namprd03.prod.outlook.com>
 <CACac1F9hrDLMsiXs+YpP0fzdMG+9PHcT1q=HgpuhOe9xT1ZHtQ@mail.gmail.com>
 <CADiSq7f7odgBhTGVDkXo33qQkHeRy34HjE31aMJaLgfjVBp70w@mail.gmail.com>
 <CACac1F-n8zN+5cxQu8DsBGQWBkUkAVTVQUngsHrS1OZMNmUVAQ@mail.gmail.com>
 <5570EF48.9060301@mrabarnett.plus.com>
 <5570FB53.5070906@mrabarnett.plus.com>
Message-ID: <CACac1F8aWejHkqTgyHfn4nUPKTBZZEULjmw4C3ou_KMp8iNtRg@mail.gmail.com>

On 5 June 2015 at 02:28, MRAB <python at mrabarnett.plus.com> wrote:
> For reference, here's how I can build the regex module on Windows 8.1,
> 64-bit, using only MinGW64.
>
> For Python 3.5, I can link against "python35.dll", but for earlier
> versions, including Python 2.7, I need "libpython??.a".
>
> I have built regex module for all of the 16 supported versions of
> Python (2.5-2.7, 3.1-3.5, 64-bit and 32-bit) and they have all passed
> the tests.
>
>
> rem For Python 3.5, 64-bit.
> rem Can link against the Python DLL.
>
> rem Compile
> "C:\MinGW64\bin\gcc.exe" -mdll -m64 -DMS_WIN64 -O2 -Wall -Wsign-compare
> -Wconversion -I"C:\Python35-64\include" -c
> "D:\mrab-regex\source\_regex_unicode.c" -o
> "D:\mrab-regex\release\3.5-64\_regex_unicode.o"
>
> "C:\MinGW64\bin\gcc.exe" -mdll -m64 -DMS_WIN64 -O2 -Wall -Wsign-compare
> -Wconversion -I"C:\Python35-64\include" -c "D:\mrab-regex\source\_regex.c"
> -o "D:\mrab-regex\release\3.5-64\_regex.o"
>
> rem Link
> "C:\MinGW64\bin\gcc.exe" -m64 -shared -s
> "D:\mrab-regex\release\3.5-64\_regex_unicode.o"
> "D:\mrab-regex\release\3.5-64\_regex.o" -L"C:\Python35" -lpython35 -o
> "D:\mrab-regex\release\3.5-64\_regex.pyd"

[...]

Note that even if we were to ship appropriate libpython.a libraries,
this would always be a totally unsupported way of building Python
extensions using mingw.

If you want to build extensions, whether using mingw or otherwise, you
should use distutils/setuptools. To use mingw, there's the
--compiler=mingw32 switch that lets you choose your compiler. There
are a number of open issues around distutils with mingw which may mean
that currently this approach doesn't work, but getting the mingw-using
community to help with fixing those issues is the *only* way we'll
ever get to a point where mingw is a supported toolchain for Python.
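Concretely, the compiler can be selected per-build on the command line ("python setup.py build_ext --compiler=mingw32") or pinned in a setup.cfg next to setup.py -- a sketch; note "mingw32" is the distutils compiler name even when using a 64-bit MinGW-w64 toolchain:

```ini
; setup.cfg -- makes every build_ext invocation use MinGW by default
[build_ext]
compiler = mingw32
```

Either form routes the build through distutils, which is the path that needs community testing and fixes.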

I appreciate that "just getting it to work" is a powerful motivator
here, but the plethora of different ways that people use for building
extensions with mingw is *precisely* why mingw has failed to remain a
supported toolchain.

I'd be interested to know how well using setup.py --compiler=mingw
works for your project, and why you chose to go with the manual
process. (Just as an aside, setup.py works perfectly for me when
building the regex package with MSVC).

Paul

From techtonik at gmail.com  Fri Jun  5 08:50:40 2015
From: techtonik at gmail.com (anatoly techtonik)
Date: Fri, 5 Jun 2015 09:50:40 +0300
Subject: [Python-Dev] Mingw help
In-Reply-To: <BY2PR0301MB1653C6865620C89F4B215E3DF5B30@BY2PR0301MB1653.namprd03.prod.outlook.com>
References: <BY2PR0301MB1653C6865620C89F4B215E3DF5B30@BY2PR0301MB1653.namprd03.prod.outlook.com>
Message-ID: <CAPkN8xJ5tQ27DCU-gBuMFR9LBdqjE0DmWqEXOynuUsjwDEXc6A@mail.gmail.com>

On Fri, Jun 5, 2015 at 2:24 AM, Steve Dower <Steve.Dower at microsoft.com> wrote:
> If you have an interest in linking to the Windows builds of Python 2.7 and
> 3.5+ using mingw, please visit http://bugs.python.org/issue24385
>
> Unless someone can provide me with the One True Way to generate a lib that
> will work for everyone, I'm going to replace the lib file with instructions
> on how to generate it with your own tools. I've done enough guessing and
> need someone who actually uses mingw to step up and help out.

I use MinGW with SCons to build RHVoice and Wesnoth. I haven't seen a
problem using the resulting .dll from ctypes, so as far as I know, if
you're not using SEH, the resulting libraries should be compatible
between MS tools and MinGW.

Is that what you are asking?
-- 
anatoly t.

From status at bugs.python.org  Fri Jun  5 18:08:24 2015
From: status at bugs.python.org (Python tracker)
Date: Fri,  5 Jun 2015 18:08:24 +0200 (CEST)
Subject: [Python-Dev] Summary of Python tracker Issues
Message-ID: <20150605160824.D81CD567EE@psf.upfronthosting.co.za>


ACTIVITY SUMMARY (2015-05-29 - 2015-06-05)
Python tracker at http://bugs.python.org/

To view or respond to any of the issues listed below, click on the issue.
Do NOT respond to this message.

Issues counts and deltas:
  open    4853 ( +9)
  closed 31295 (+54)
  total  36148 (+63)

Open issues with patches: 2225 


Issues opened (30)
==================

#23239: SSL match_hostname does not accept IP Address
http://bugs.python.org/issue23239  reopened by christian.heimes

#24325: Speedup types.coroutine()
http://bugs.python.org/issue24325  opened by yselivanov

#24329: __qualname__ and __slots__
http://bugs.python.org/issue24329  opened by yselivanov

#24330: "Configure Idle" not in "Options"
http://bugs.python.org/issue24330  opened by Yellow

#24331: *** Error in `/usr/bin/python': double free or corruption (!pr
http://bugs.python.org/issue24331  opened by krichter

#24334: SSLSocket extra level of indirection
http://bugs.python.org/issue24334  opened by christian.heimes

#24336: Allow arbitrary keywords to @contextmanager functions
http://bugs.python.org/issue24336  opened by vadmium

#24337: Implement `http.client.HTTPMessage.__repr__` to make debugging
http://bugs.python.org/issue24337  opened by cool-RR

#24338: In argparse adding wrong arguments makes malformed namespace
http://bugs.python.org/issue24338  opened by py.user

#24339: iso6937 encoding missing
http://bugs.python.org/issue24339  opened by John Helour

#24340: co_stacksize estimate can be highly off
http://bugs.python.org/issue24340  opened by arigo

#24341: Test suite emits many DeprecationWarnings about sys.exc_clear(
http://bugs.python.org/issue24341  opened by serhiy.storchaka

#24343: Installing 3.4.3 gives  error codes 2503 and 2502  on windows 
http://bugs.python.org/issue24343  opened by lac

#24351: string.Template documentation incorrectly references "identifi
http://bugs.python.org/issue24351  opened by july

#24352: Provide a way for assertLogs to optionally not hide the loggin
http://bugs.python.org/issue24352  opened by r.david.murray

#24355: Provide a unittest api for controlling verbosity in tests
http://bugs.python.org/issue24355  opened by r.david.murray

#24356: venv documentation incorrect / misleading
http://bugs.python.org/issue24356  opened by Graham.Oliver

#24358: Should compression file-like objects provide .fileno(), mislea
http://bugs.python.org/issue24358  opened by josh.r

#24360: improve argparse.Namespace __repr__ for invalid identifiers.
http://bugs.python.org/issue24360  opened by mbussonn

#24362: Simplify the fast nodes resize logic in C OrderedDict.
http://bugs.python.org/issue24362  opened by eric.snow

#24363: httplib fails to handle semivalid HTTP headers
http://bugs.python.org/issue24363  opened by mgdelmonte

#24364: Not all defects pass through email policy
http://bugs.python.org/issue24364  opened by r.david.murray

#24370: OrderedDict behavior is unclear with misbehaving keys.
http://bugs.python.org/issue24370  opened by eric.snow

#24372: Documentation for ssl.wrap_socket's ssl_version parameter is o
http://bugs.python.org/issue24372  opened by eric.smith

#24379: slice.literal notation
http://bugs.python.org/issue24379  opened by llllllllll

#24380: Got warning when compiling _scproxy.c on Mac
http://bugs.python.org/issue24380  opened by vajrasky

#24381: Got warning when compiling ffi.c on Mac
http://bugs.python.org/issue24381  opened by vajrasky

#24383: consider implementing __await__ on concurrent.futures.Future
http://bugs.python.org/issue24383  opened by yselivanov

#24384: difflib.SequenceMatcher faster quick_ratio with lower bound sp
http://bugs.python.org/issue24384  opened by floyd

#24385: libpython27.a in python-2.7.10 i386 (windows msi release) cont
http://bugs.python.org/issue24385  opened by Jan Harkes



Most recent 15 issues with no replies (15)
==========================================

#24381: Got warning when compiling ffi.c on Mac
http://bugs.python.org/issue24381

#24380: Got warning when compiling _scproxy.c on Mac
http://bugs.python.org/issue24380

#24370: OrderedDict behavior is unclear with misbehaving keys.
http://bugs.python.org/issue24370

#24364: Not all defects pass through email policy
http://bugs.python.org/issue24364

#24352: Provide a way for assertLogs to optionally not hide the loggin
http://bugs.python.org/issue24352

#24351: string.Template documentation incorrectly references "identifi
http://bugs.python.org/issue24351

#24341: Test suite emits many DeprecationWarnings about sys.exc_clear(
http://bugs.python.org/issue24341

#24340: co_stacksize estimate can be highly off
http://bugs.python.org/issue24340

#24329: __qualname__ and __slots__
http://bugs.python.org/issue24329

#24324: Remove -Wunreachable-code flag
http://bugs.python.org/issue24324

#24322: Hundreds of linker warnings on Windows
http://bugs.python.org/issue24322

#24319: Crash during "make coverage-report"
http://bugs.python.org/issue24319

#24307: pip error on windows whose current user name contains non-asci
http://bugs.python.org/issue24307

#24302: Dead Code of Handler check in function faulthandler_fatal_erro
http://bugs.python.org/issue24302

#24300: Code Refactoring  in function nis_mapname()
http://bugs.python.org/issue24300



Most recent 15 issues waiting for review (15)
=============================================

#24383: consider implementing __await__ on concurrent.futures.Future
http://bugs.python.org/issue24383

#24381: Got warning when compiling ffi.c on Mac
http://bugs.python.org/issue24381

#24380: Got warning when compiling _scproxy.c on Mac
http://bugs.python.org/issue24380

#24379: slice.literal notation
http://bugs.python.org/issue24379

#24362: Simplify the fast nodes resize logic in C OrderedDict.
http://bugs.python.org/issue24362

#24360: improve argparse.Namespace __repr__ for invalid identifiers.
http://bugs.python.org/issue24360

#24336: Allow arbitrary keywords to @contextmanager functions
http://bugs.python.org/issue24336

#24334: SSLSocket extra level of indirection
http://bugs.python.org/issue24334

#24325: Speedup types.coroutine()
http://bugs.python.org/issue24325

#24320: Remove a now-unnecessary workaround from importlib._bootstrap.
http://bugs.python.org/issue24320

#24318: Better documentaiton of profile-opt (and release builds in gen
http://bugs.python.org/issue24318

#24314: irrelevant cross-link in documentation of user-defined functio
http://bugs.python.org/issue24314

#24303: OSError 17 due to _multiprocessing/semaphore.c assuming a one-
http://bugs.python.org/issue24303

#24302: Dead Code of Handler check in function faulthandler_fatal_erro
http://bugs.python.org/issue24302

#24300: Code Refactoring  in function nis_mapname()
http://bugs.python.org/issue24300



Top 10 most discussed issues (10)
=================================

#24363: httplib fails to handle semivalid HTTP headers
http://bugs.python.org/issue24363  15 msgs

#18003: lzma module very slow with line-oriented reading.
http://bugs.python.org/issue18003  10 msgs

#23237: Interrupts are lost during readline PyOS_InputHook processing 
http://bugs.python.org/issue23237   8 msgs

#24338: In argparse adding wrong arguments makes malformed namespace
http://bugs.python.org/issue24338   8 msgs

#19543: Add -3 warnings for codec convenience method changes
http://bugs.python.org/issue19543   7 msgs

#24360: improve argparse.Namespace __repr__ for invalid identifiers.
http://bugs.python.org/issue24360   7 msgs

#24379: slice.literal notation
http://bugs.python.org/issue24379   7 msgs

#24303: OSError 17 due to _multiprocessing/semaphore.c assuming a one-
http://bugs.python.org/issue24303   6 msgs

#24320: Remove a now-unnecessary workaround from importlib._bootstrap.
http://bugs.python.org/issue24320   6 msgs

#24336: Allow arbitrary keywords to @contextmanager functions
http://bugs.python.org/issue24336   6 msgs



Issues closed (53)
==================

#5497: openssl compileerror with original source
http://bugs.python.org/issue5497  closed by zach.ware

#5633: fix for timeit when the statement is a string and the setup is
http://bugs.python.org/issue5633  closed by serhiy.storchaka

#5843: Normalization error in urlunparse
http://bugs.python.org/issue5843  closed by vadmium

#15009: urlsplit can't round-trip relative-host urls.
http://bugs.python.org/issue15009  closed by vadmium

#16991: Add OrderedDict written in C
http://bugs.python.org/issue16991  closed by eric.snow

#21853: Fix inspect in unicodeless build
http://bugs.python.org/issue21853  closed by serhiy.storchaka

#22203: inspect.getargspec() returns wrong spec for builtins
http://bugs.python.org/issue22203  closed by yselivanov

#22357: inspect module documentation makes no reference to __qualname_
http://bugs.python.org/issue22357  closed by yselivanov

#23659: csv.register_dialect  doc string
http://bugs.python.org/issue23659  closed by berker.peksag

#23934: inspect.signature reporting "()" for all builtin & extension t
http://bugs.python.org/issue23934  closed by yselivanov

#24115: Unhandled error results of C API functions
http://bugs.python.org/issue24115  closed by serhiy.storchaka

#24148: 'cum' not a valid sort key for pstats.Stats.sort_stats
http://bugs.python.org/issue24148  closed by berker.peksag

#24264: imageop Unsafe Arithmetic
http://bugs.python.org/issue24264  closed by serhiy.storchaka

#24267: test_venv.EnsurePipTest.test_with_pip triggers version check o
http://bugs.python.org/issue24267  closed by python-dev

#24270: PEP 485 (math.isclose) implementation
http://bugs.python.org/issue24270  closed by berker.peksag

#24284: Inconsistency in startswith/endswith
http://bugs.python.org/issue24284  closed by serhiy.storchaka

#24308: Test failure: test_with_pip (test.test_venv.EnsurePipTest in 3
http://bugs.python.org/issue24308  closed by ned.deily

#24317: Change installer Customize default to match quick settings
http://bugs.python.org/issue24317  closed by steve.dower

#24323: Typo in Mutable Sequence Types documentation.
http://bugs.python.org/issue24323  closed by rhettinger

#24326: Audioop: weightB not divided by GCD, weightA divided twice
http://bugs.python.org/issue24326  closed by serhiy.storchaka

#24327: yield unpacking
http://bugs.python.org/issue24327  closed by serhiy.storchaka

#24328: Extension modules with single-letter names can't be loaded
http://bugs.python.org/issue24328  closed by python-dev

#24332: urllib.parse should not discard delimiters when associated com
http://bugs.python.org/issue24332  closed by vadmium

#24333: urllib2 HTTPS connection over a digest auth enabled proxy give
http://bugs.python.org/issue24333  closed by vadmium

#24335: Provide __list__(self) along the lines of __str__(self)
http://bugs.python.org/issue24335  closed by ethan.furman

#24342: coroutine wrapper reentrancy
http://bugs.python.org/issue24342  closed by yselivanov

#24344: Overlap display issue in windows installer
http://bugs.python.org/issue24344  closed by steve.dower

#24345: Py_tp_finalize is missing
http://bugs.python.org/issue24345  closed by python-dev

#24346: TypeError at List in Tuple concatenation
http://bugs.python.org/issue24346  closed by r.david.murray

#24347: unchecked return value in C OrderedDict
http://bugs.python.org/issue24347  closed by eric.snow

#24348: incorrect decref in C OrderedDict
http://bugs.python.org/issue24348  closed by eric.snow

#24349: Null pointer dereferences in C OrderedDict
http://bugs.python.org/issue24349  closed by eric.snow

#24350: dict({'a':'aa'},a='bb') raises segmentation fault on Mac
http://bugs.python.org/issue24350  closed by Graham Klyne

#24353: NameError: name 'path_separators' is not defined
http://bugs.python.org/issue24353  closed by ionelmc

#24354: Requests Library get issue
http://bugs.python.org/issue24354  closed by r.david.murray

#24357: Change socket.getaddrinfo example to show IPv6 connectivity
http://bugs.python.org/issue24357  closed by ned.deily

#24359: C OrderedDict needs to check for changes during iteration
http://bugs.python.org/issue24359  closed by eric.snow

#24361: OrderedDict: crash with threads
http://bugs.python.org/issue24361  closed by eric.snow

#24365: Conditionalize 3.5 additions to the stable ABI
http://bugs.python.org/issue24365  closed by yselivanov

#24366: Simple indentation
http://bugs.python.org/issue24366  closed by yselivanov

#24367: Idle hangs when you close the debugger while debugging
http://bugs.python.org/issue24367  closed by terry.reedy

#24368: Some C OrderedDict methods need to support keyword arguments.
http://bugs.python.org/issue24368  closed by eric.snow

#24369: Using OrderedDict.move_to_end during iteration is problematic.
http://bugs.python.org/issue24369  closed by eric.snow

#24371: configparser hate dot in option like eth2.6
http://bugs.python.org/issue24371  closed by zach.ware

#24373: Use traverse & finalize in xxlimited and in PEP 489 tests
http://bugs.python.org/issue24373  closed by ncoghlan

#24374: Plug refleak in set_coroutine_wrapper
http://bugs.python.org/issue24374  closed by yselivanov

#24375: Performance regression relative to 2.7
http://bugs.python.org/issue24375  closed by yselivanov

#24376: xxlimited.c errors when building 32 and 64 bit on Windows
http://bugs.python.org/issue24376  closed by zach.ware

#24377: Refleak in OrderedDict.__repr__ when an item is not found.
http://bugs.python.org/issue24377  closed by eric.snow

#24378: dir(dictobject) returns empty list when __getattribute__ is ov
http://bugs.python.org/issue24378  closed by r.david.murray

#24382: Fail to build time module on Mac
http://bugs.python.org/issue24382  closed by zach.ware

#24386: Bug Tracker emails going to gmail spam
http://bugs.python.org/issue24386  closed by r.david.murray

#24387: json.loads should be idempotent when the argument is a diction
http://bugs.python.org/issue24387  closed by r.david.murray

From rdmurray at bitdance.com  Sun Jun  7 20:41:58 2015
From: rdmurray at bitdance.com (R. David Murray)
Date: Sun, 07 Jun 2015 14:41:58 -0400
Subject: [Python-Dev] Tracker reviews look like spam
In-Reply-To: <CA+HSgbL3YXmvv3qsbKCDp_8OYnYM1wuGyjXxM+J3EzMxKtRgHA@mail.gmail.com>
References: <mittl6$bj3$1@ger.gmane.org> <20150512221524.GC1768@k3>
 <CAPTjJmoz+nDE4dXnN7CaVyt7D+L8XY8C27Y-F2Qoj+JGBdojBw@mail.gmail.com>
 <CA+HSgbL3YXmvv3qsbKCDp_8OYnYM1wuGyjXxM+J3EzMxKtRgHA@mail.gmail.com>
Message-ID: <20150607184159.3ADF3B900CE@webabinitio.net>

On Fri, 22 May 2015 08:05:49 +0300, Antti Haapala <antti at haapala.name> wrote:
> There's an issue about this at
> http://psf.upfronthosting.co.za/roundup/meta/issue562
> 
> I believe the problem is not that of the SPF, but the fact that mail gets
> sent using IPv6 from an address that has neither a name mapping to it nor a
> reverse pointer from IP address to name in DNS. See the second-first
> comment where R. David Murray states that "Mail is consistently sent from

The IPv6 reverse DNS issue is being worked on.

From storchaka at gmail.com  Mon Jun  8 11:02:55 2015
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Mon, 08 Jun 2015 12:02:55 +0300
Subject: [Python-Dev] Fix or remove XMLParser.doctype()
Message-ID: <ml3loj$5ib$1@ger.gmane.org>

There are issues with the doctype() method of XMLParser.

1. Using a subclass of XMLParser emits a deprecation warning [1].

2. There is a behavior difference between the Python and C implementations 
if doctype() is implemented [2].

This method has been deprecated for a long time, and the simplest solution 
is to remove it completely. The questions are:

1. Can doctype() be removed, or is there a significant risk of breaking 
someone's code?

2. If we fix doctype() instead of removing it (which in any case should be 
done in the maintained releases), what behavior should be preserved? Should 
the target's doctype() be called instead of XMLParser.doctype() (as the 
Python implementation does) or in addition to XMLParser.doctype() (as the C 
implementation does)?
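
For context, a minimal sketch (names illustrative) of the replacement the
deprecation warning recommends: define doctype() on a custom target object
rather than subclassing XMLParser:

```python
from xml.etree.ElementTree import XMLParser, TreeBuilder

class DoctypeTarget(TreeBuilder):
    def doctype(self, name, pubid, system):
        # Called when the parser encounters a <!DOCTYPE ...> declaration.
        self.seen = (name, pubid, system)

target = DoctypeTarget()
parser = XMLParser(target=target)
parser.feed('<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN" '
            '"http://www.w3.org/TR/html4/strict.dtd"><html/>')
root = parser.close()
print(target.seen[0], root.tag)
```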

Eli suggested asking on Python-Dev.

[1] http://bugs.python.org/issue19176
[2] http://bugs.python.org/issue19176#msg238783


From solipsis at pitrou.net  Tue Jun  9 01:06:57 2015
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Tue, 9 Jun 2015 01:06:57 +0200
Subject: [Python-Dev] cpython (3.4): #23891: add a section to the
 Tutorial describing virtual environments and pip
References: <20150608213917.60942.73793@psf.io>
Message-ID: <20150609010657.23b3c7b4@fsol>

On Mon, 08 Jun 2015 21:39:17 +0000
andrew.kuchling <python-checkins at python.org> wrote:
> https://hg.python.org/cpython/rev/15ee0e7078e3
> changeset:   96555:15ee0e7078e3
> branch:      3.4
> parent:      96552:2b78227974fa
> user:        Andrew Kuchling <amk at amk.ca>
> date:        Mon Jun 08 17:35:45 2015 -0400
> summary:
>   #23891: add a section to the Tutorial describing virtual environments and pip

This is very cool, thank you!

Regards

Antoine.



From amk at amk.ca  Tue Jun  9 16:35:54 2015
From: amk at amk.ca (A.M. Kuchling)
Date: Tue, 9 Jun 2015 10:35:54 -0400
Subject: [Python-Dev] cpython (3.4): #23891: add a section to the
 Tutorial describing virtual environments and pip
In-Reply-To: <20150609010657.23b3c7b4@fsol>
References: <20150608213917.60942.73793@psf.io> <20150609010657.23b3c7b4@fsol>
Message-ID: <20150609143554.GA84392@DATLANDREWK.local>

> >   #23891: add a section to the Tutorial describing virtual environments and pip
>
> This is very cool, thank you!

Thanks!  This suggestion came up at April's Language Summit as an
improvement to make the tutorial better reflect current practice.
Another suggestion was to suggest the 'requests' module in a few
places in the docs (issue #23989); Van L. made that change soon after
the Summit.

--amk

From greg at krypto.org  Tue Jun  9 23:41:23 2015
From: greg at krypto.org (Gregory P. Smith)
Date: Tue, 09 Jun 2015 21:41:23 +0000
Subject: [Python-Dev] Tracker reviews look like spam
In-Reply-To: <mittl6$bj3$1@ger.gmane.org>
References: <mittl6$bj3$1@ger.gmane.org>
Message-ID: <CAGE7PN+SBgak0U=OJGnWNJehf3Ur_yayJRZgaBWZizU2OE_S1w@mail.gmail.com>

On Tue, May 12, 2015 at 3:09 PM Terry Reedy <tjreedy at udel.edu> wrote:

> Gmail dumps patch review email in my junk box. The problem seems to be
> the spoofed From: header.
>
> Received: from psf.upfronthosting.co.za ([2a01:4f8:131:2480::3])
>          by mx.google.com with ESMTP id
> m1si26039166wjy.52.2015.05.12.00.20.38
>          for <tjreedy at udel.edu>;
>          Tue, 12 May 2015 00:20:38 -0700 (PDT)
> Received-SPF: softfail (google.com: domain of transitioning
> storchaka at gmail.com does not designate 2a01:4f8:131:2480::3 as permitted
> sender) client-ip=2a01:4f8:131:2480::3;
>
> Tracker reviews are the only false positives in my junk list. Otherwise,
> I might stop reviewing. Verizon does not even deliver mail that fails
> its junk test, so I would not be surprised if there are people who
> simply do not get emailed reviews.
>
> Tracker posts are sent from Person Name <report at bugs.python.org>
> Perhaps reviews could come 'from' Person Name <review at bugs.python.org>
>
> Even direct tracker posts just get a neutral score.
> Received-SPF: neutral (google.com: 2a01:4f8:131:2480::3 is neither
> permitted nor denied by best guess record for domain of
> roundup-admin at psf.upfronthosting.co.za) client-ip=2a01:4f8:131:2480::3;
>
> SPF is Sender Policy Framework
> https://en.wikipedia.org/wiki/Sender_Policy_Framework
>
> Checkins mail, for instance, gets an SPF 'pass' because python.org
> designates mail.python.org as a permitted sender.
>
> --
> Terry Jan Reedy
>
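
For reference, an illustrative SPF TXT record (made-up zone data, not
python.org's actual DNS): a receiving server checks the connecting IP
against the listed mechanisms, and the trailing "~all" marks any other
sender as softfail.

```
example.org.  IN TXT  "v=spf1 a:mail.example.org ~all"
```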

FWIW while the email signing issue might be interesting to fix...  I really
wish code reviews weren't emailed separately; I'd like to see them simply
appended to the related issue as a block comment on the issue. Then all
email notification would be handled by the bug tracker itself (which
happens to already work).  It keeps related conversation in a single place
which will continue to persist even if we switch review tools in the future.

I *believe* you can get this to happen today in a review if you add the
report at bugs.python.org email address to the code review, as the issueXXXXX
in the subject line will make the tracker turn it into a bug comment.  If
so, having that be the default cc for all reviews created would be a great
feature (and modify it not to send mail to anyone else).

My 2 cents.  (literally; as I'm only suggesting, not implementing)

-gps
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20150609/842946ae/attachment.html>

From rdmurray at bitdance.com  Wed Jun 10 02:45:43 2015
From: rdmurray at bitdance.com (R. David Murray)
Date: Tue, 09 Jun 2015 20:45:43 -0400
Subject: [Python-Dev] Tracker reviews look like spam
In-Reply-To: <CAGE7PN+SBgak0U=OJGnWNJehf3Ur_yayJRZgaBWZizU2OE_S1w@mail.gmail.com>
References: <mittl6$bj3$1@ger.gmane.org>
 <CAGE7PN+SBgak0U=OJGnWNJehf3Ur_yayJRZgaBWZizU2OE_S1w@mail.gmail.com>
Message-ID: <20150610004544.5AFFFB900D9@webabinitio.net>


On Tue, 09 Jun 2015 21:41:23 -0000, "Gregory P. Smith" <greg at krypto.org> wrote:
> I *believe* you can get this to happen today in a review if you add the
> report at bugs.python.org email address to the code review, as the issueXXXXX
> in the subject line will make the tracker turn it into a bug comment.  If
> so, having that be the default cc for all reviews created would be a great
> feature (and modify it not to send mail to anyone else).

I haven't double checked, but I think the issue number has to be in
square brackets to be recognized.  Presumably that's a change that
could be made.  What is lacking is someone willing to climb the
relatively steep learning curve needed to submit patches for that
part of the system, and someone with the keys to the tracker who has
time to apply the patch.  Given the former, I think we can
manage the latter.
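
As a sketch of the kind of subject-line matching involved (purely
illustrative; the real Roundup mail-gateway detector logic differs in
detail):

```python
import re

# Hypothetical matcher for a bracketed issue reference in a mail subject.
ISSUE_RE = re.compile(r"\[issue(\d+)\]", re.IGNORECASE)

def issue_number(subject):
    m = ISSUE_RE.search(subject)
    return int(m.group(1)) if m else None

print(issue_number("[issue24385] libpython27.a contains foo"))  # -> 24385
print(issue_number("issue24385 without brackets"))              # -> None
```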

I believe Ezio did try to make rietveld update the tracker, and ran
into a problem whose nature I don't know...but I don't think he spent
a whole lot of time trying to debug the problem, whatever it was.
I imagine he'll chime in :)

--David

From rosuav at gmail.com  Thu Jun 11 05:45:29 2015
From: rosuav at gmail.com (Chris Angelico)
Date: Thu, 11 Jun 2015 13:45:29 +1000
Subject: [Python-Dev] Checklist for changing Python's grammar
Message-ID: <CAPTjJmrW+ipKndvHpfkCiiZmsikm+47LkgZGoj7Mv7yXq+J0uQ@mail.gmail.com>

As part of a bit of hacking on CPython, I'm looking at playing with
the grammar. In the source tree, Grammar/Grammar, there's a note
advising me to follow the steps in PEP 306, but PEP 306 says it's been
moved to the dev guide. Should the source comment be changed to point
to https://docs.python.org/devguide/grammar.html ? Or is there a
better canonical location for the dev guide?

ChrisA

From brett at python.org  Thu Jun 11 15:42:39 2015
From: brett at python.org (Brett Cannon)
Date: Thu, 11 Jun 2015 13:42:39 +0000
Subject: [Python-Dev] Checklist for changing Python's grammar
In-Reply-To: <CAPTjJmrW+ipKndvHpfkCiiZmsikm+47LkgZGoj7Mv7yXq+J0uQ@mail.gmail.com>
References: <CAPTjJmrW+ipKndvHpfkCiiZmsikm+47LkgZGoj7Mv7yXq+J0uQ@mail.gmail.com>
Message-ID: <CAP1=2W7MJjqzjjQ5D4eknfpeVkFNGsyFS6pct84ONZ9sOMyvBg@mail.gmail.com>

On Wed, Jun 10, 2015 at 11:46 PM Chris Angelico <rosuav at gmail.com> wrote:

> As part of a bit of hacking on CPython, I'm looking at playing with
> the grammar. In the source tree, Grammar/Grammar, there's a note
> advising me to follow the steps in PEP 306, but PEP 306 says it's been
> moved to the dev guide. Should the source comment be changed to point
> to https://docs.python.org/devguide/grammar.html ? Or is there a
> better canonical location for the dev guide?
>

The link you found is the right one and the comment in the grammar file is
out of date.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20150611/9605d96b/attachment.html>

From rosuav at gmail.com  Thu Jun 11 21:07:55 2015
From: rosuav at gmail.com (Chris Angelico)
Date: Fri, 12 Jun 2015 05:07:55 +1000
Subject: [Python-Dev] Checklist for changing Python's grammar
In-Reply-To: <CAP1=2W7MJjqzjjQ5D4eknfpeVkFNGsyFS6pct84ONZ9sOMyvBg@mail.gmail.com>
References: <CAPTjJmrW+ipKndvHpfkCiiZmsikm+47LkgZGoj7Mv7yXq+J0uQ@mail.gmail.com>
 <CAP1=2W7MJjqzjjQ5D4eknfpeVkFNGsyFS6pct84ONZ9sOMyvBg@mail.gmail.com>
Message-ID: <CAPTjJmrHbeTHgvd7-YE97m9xzzFGguxRXqdihtWJL7VETAHx4w@mail.gmail.com>

On Thu, Jun 11, 2015 at 11:42 PM, Brett Cannon <brett at python.org> wrote:
> On Wed, Jun 10, 2015 at 11:46 PM Chris Angelico <rosuav at gmail.com> wrote:
>>
>> As part of a bit of hacking on CPython, I'm looking at playing with
>> the grammar. In the source tree, Grammar/Grammar, there's a note
>> advising me to follow the steps in PEP 306, but PEP 306 says it's been
>> moved to the dev guide. Should the source comment be changed to point
>> to https://docs.python.org/devguide/grammar.html ? Or is there a
>> better canonical location for the dev guide?
>
>
> The link you found is the right one and the comment in the grammar file is
> out of date.

Thanks. http://bugs.python.org/issue24435

ChrisA

From romapera15 at gmail.com  Thu Jun 11 22:40:09 2015
From: romapera15 at gmail.com (=?UTF-8?Q?fran=C3=A7ai_s?=)
Date: Thu, 11 Jun 2015 17:40:09 -0300
Subject: [Python-Dev] What were the complaints of binary code programmers
	that not accept Assembly?
Message-ID: <CAK_6Rwd0=N91e_sJAGJLrAchOWV=fzFA4Vo8MvEVdyWwSp_hZA@mail.gmail.com>

Quote from: http://worrydream.com/dbx/
<http://www.google.com/url?q=http%3A%2F%2Fworrydream.com%2Fdbx%2F&sa=D&sntz=1&usg=AFQjCNGtoFsF9_dXHFcevD5jUNMFGl59iQ>

    Reactions to SOAP and Fortran
    Richard Hamming -- The Art of Doing Science and Engineering, p25 (pdf
book)

    In the beginning we programmed in absolute binary... Finally, a
Symbolic Assembly Program was devised -- after more years than you are apt
to believe during which most programmers continued their heroic absolute
binary programming. At the time [the assembler] first appeared I would
guess about 1% of the older programmers were interested in it -- using
[assembly] was "sissy stuff", and a real programmer would not stoop to
wasting machine capacity to do the assembly.

    Yes! Programmers wanted no part of it, though when pressed they had to
admit their old methods used more machine time in locating and fixing up
errors than the [assembler] ever used. One of the main complaints was when
using a symbolic system you do not know where anything was in storage --
though in the early days we supplied a mapping of symbolic to actual
storage, and believe it or not they later lovingly pored over such sheets
rather than realize they did not need to know that information if they
stuck to operating within the system -- no! When correcting errors they
preferred to do it in absolute binary.

    FORTRAN was proposed by Backus and friends, and again was opposed by
almost all programmers. First, it was said it could not be done. Second, if
it could be done, it would be too wasteful of machine time and capacity.
Third, even if it did work, no respectable programmer would use it -- it
was only for sissies!


    John von Neumann's reaction to assembly language and Fortran
    John A.N. Lee, Virginia Polytechnical Institute

    John von Neumann, when he first heard about FORTRAN in 1954, was
unimpressed and asked "why would you want more than machine language?" One
of von Neumann's students at Princeton recalled that graduate students were
being used to hand assemble programs into binary for their early machine.
This student took time out to build an assembler, but when von Neumann
found out about it he was very angry, saying that it was a waste of a
valuable scientific computing instrument to use it to do clerical work.


Did the older programmers who did not accept assembly later change their
minds about it?

Why did the older programmers think that using assembly was "sissy
stuff", and that a real programmer would not stoop to wasting machine
capacity to do the assembly?

What were all the complaints of the binary-code programmers who did not
accept assembly?

Did John von Neumann change his mind about assembly and Fortran?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20150611/c1328407/attachment.html>

From steve at pearwood.info  Fri Jun 12 04:19:38 2015
From: steve at pearwood.info (Steven D'Aprano)
Date: Fri, 12 Jun 2015 12:19:38 +1000
Subject: [Python-Dev] What were the complaints of binary code
	programmers that not accept Assembly?
In-Reply-To: <CAK_6Rwd0=N91e_sJAGJLrAchOWV=fzFA4Vo8MvEVdyWwSp_hZA@mail.gmail.com>
References: <CAK_6Rwd0=N91e_sJAGJLrAchOWV=fzFA4Vo8MvEVdyWwSp_hZA@mail.gmail.com>
Message-ID: <20150612021938.GU20701@ando.pearwood.info>

Hi Françai,

This list is for the development of the Python programming language, not 
Fortran, assembly, or discussions about the early history of 
programming. Possibly Stackoverflow or one of their related sites might 
be a better place to ask. Or you could ask on the python-list at python.org 
mailing list, they have more tolerance of off-topic discussions there.


-- 
Steve

From benno at benno.id.au  Fri Jun 12 08:38:03 2015
From: benno at benno.id.au (Ben Leslie)
Date: Fri, 12 Jun 2015 16:38:03 +1000
Subject: [Python-Dev] Obtaining stack-frames from co-routine objects
In-Reply-To: <556D3385.5000009@gmail.com>
References: <CABZ0LtAPMi9nQb9ZSp4jW4-kvTte_WVjz=wtKvUWmGeSwuyqVA@mail.gmail.com>
 <55687044.1090700@gmail.com>
 <CABZ0LtDX8J-Uc6kBBQ93cCF1NzAcebA6oWsf03WX5VuViL0niA@mail.gmail.com>
 <556D3385.5000009@gmail.com>
Message-ID: <CABZ0LtAqnX0_3fpZK-FFFPzvGwkWhO5i1O92W42qv6i269cXXg@mail.gmail.com>

On 2 June 2015 at 14:39, Yury Selivanov <yselivanov.ml at gmail.com> wrote:
> Hi Ben,
>
> On 2015-05-31 8:35 AM, Ben Leslie wrote:
>>
>> Hi Yury,
>>
>> I'm just starting my exploration into using async/await; all my
>> 'real-world' scenarios are currently hypothetical.
>>
>> One such hypothetical scenario however is that if I have a server
>> process running, with some set of concurrent connections, each managed
>> by a co-routine. Each co-routine is of some arbitrary complexity e.g:
>> some combination of reading files, reading from database, reading from
>> peripherals. If I notice one of those co-routines appears stuck and
>> not making progress, I'd very much like to debug that, and preferably
>> in a way that doesn't necessarily stop the rest of the server (or even
>> the co-routine that appears stuck).
>>
>> The problem with the "if debug: log(...)" approach is that you need
>> foreknowledge of the fault state occurring; on a busy server you don't
>> want to just be logging every 'switch()'. I guess you could do
>> something like "switch_state[outer_coro] = get_current_stack_frames()"
>> on each switch. To me double book-keeping something that the
>> interpreter already knows seems somewhat wasteful but maybe it isn't
>> really too bad.
>
>
> I guess it all depends on how "switching" is organized in your
> framework of choice.  In asyncio, for instance, all the code that
> knows about coroutines is in tasks.py.  `Task` class is responsible
> for running coroutines, and it's the single place where you would
> need to put the "if debug: ..." line for debugging "slow" Futures--
> the only thing that coroutines can get "stuck" on (the other thing
> is accidentally calling blocking code, but your proposal wouldn't
> help with that).

I suspect that I haven't properly explained the motivating case.

My motivating case is being able to debug a relatively large, complex
system. If the system crashes (through an exception), or in some other
manner enters an unexpected state (co-routines blocked for too long)
it would be very nice to be able to debug an arbitrary co-routine, not
necessarily the one indicating a bad system state, to see exactly what
it was doing at the time the anomalous behavior occurred.

So this is a case of trying to analyse some system-wide behavior
rather than one particular task. Until you start
analysing the rest of the system you don't know which co-routines
you want to analyse.

My motivation for this is primarily avoiding double book-keeping. I
assume that the framework has organised things so that there is
some data structure to find all the co-routines (or some other object
wrapping the co-routines) and that all the "switching" occurs in
one place.

With that in mind I can have some code that works something like:

@coroutine
def switch():
    coro_stacks[current_coro] = inspect.getouterframes(inspect.currentframe())
    ....

I think something like this is probably the best approach to achieve
my desired goals with currently available APIs.
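Fleshed out, that sketch might look like the following (the `coro_stacks` registry and `record_switch` hook are illustrative names, not from any particular framework):

```python
import inspect

# Hypothetical registry maintained at the framework's single switch point:
# coroutine (or task) -> summary of the frames active at its last switch.
coro_stacks = {}

def record_switch(coro):
    # Copy (function, line) pairs for every frame above this call.
    # This is the "double book-keeping": the interpreter already holds
    # these frames, and we are duplicating references to them ourselves.
    coro_stacks[coro] = [
        (info.function, info.lineno) for info in inspect.stack()[1:]
    ]
```

Calling `record_switch(task)` from the switch point would let a debugger later inspect `coro_stacks[task]` for any task, at the cost of the extra storage objected to below.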

However, I really don't like it, as it requires this double book-keeping. I'm
manually retaining a traceback for each coro, which seems like a
waste of memory, considering the interpreter already has this information
stored, just unexposed.

I feel it is desirable, and in line with existing Python patterns, to expose
the interpreter's data structures rather than making the user do extra work
to access the same information.

Cheers,

Ben

From status at bugs.python.org  Fri Jun 12 18:08:26 2015
From: status at bugs.python.org (Python tracker)
Date: Fri, 12 Jun 2015 18:08:26 +0200 (CEST)
Subject: [Python-Dev] Summary of Python tracker Issues
Message-ID: <20150612160826.C7A6256906@psf.upfronthosting.co.za>


ACTIVITY SUMMARY (2015-06-05 - 2015-06-12)
Python tracker at http://bugs.python.org/

To view or respond to any of the issues listed below, click on the issue.
Do NOT respond to this message.

Issues counts and deltas:
  open    4884 (+31)
  closed 31317 (+22)
  total  36201 (+53)

Open issues with patches: 2239 


Issues opened (40)
==================

#24390: Python 3.4.3 64 bits is not "high dpi aware"
http://bugs.python.org/issue24390  opened by ivb

#24391: Better repr for threading objects
http://bugs.python.org/issue24391  opened by serhiy.storchaka

#24393: Test urllib2_localnet fails depending on host proxy configurat
http://bugs.python.org/issue24393  opened by eacb

#24395: webbrowser.py update to use argparse.py
http://bugs.python.org/issue24395  opened by Hasan Diwan

#24396: Provide convenience function for paths relative to the current
http://bugs.python.org/issue24396  opened by madison.may

#24397: Test asynchat makes wrong assumptions
http://bugs.python.org/issue24397  opened by eacb

#24398: Update test_capi to use test.support.script_helper
http://bugs.python.org/issue24398  opened by bobcatfish

#24400: Awaitable ABC incompatible with functools.singledispatch
http://bugs.python.org/issue24400  opened by Ben.Darnell

#24401: Windows 8.1 install gives DLL required to complete could not r
http://bugs.python.org/issue24401  opened by lac

#24402: input() uses sys.__stdout__ instead of sys.stdout for prompt
http://bugs.python.org/issue24402  opened by Keita Kita

#24403: Missing fixer for changed round() behavior
http://bugs.python.org/issue24403  opened by priska

#24405: Missing code markup in "Expressions" documentation
http://bugs.python.org/issue24405  opened by Gareth.Rees

#24406: "Built-in Types" documentation doesn't explain how dictionarie
http://bugs.python.org/issue24406  opened by Gareth.Rees

#24407: Use after free in PyDict_merge
http://bugs.python.org/issue24407  opened by pkt

#24408: tkinter.font.Font.measure() broken in 3.5
http://bugs.python.org/issue24408  opened by vadmium

#24412: setUpClass equivalent for addCleanup
http://bugs.python.org/issue24412  opened by r.david.murray

#24413: Inconsistent behavior between set and dict_keys/dict_items: fo
http://bugs.python.org/issue24413  opened by canjo

#24414: MACOSX_DEPLOYMENT_TARGET set incorrectly by configure
http://bugs.python.org/issue24414  opened by debohman

#24415: SIGINT always reset to SIG_DFL by Py_Finalize()
http://bugs.python.org/issue24415  opened by ABalmosan

#24416: Return a namedtuple from date.isocalendar()
http://bugs.python.org/issue24416  opened by bmispelon

#24417: Type-specific documentation for __format__ methods
http://bugs.python.org/issue24417  opened by skip.montanaro

#24418: "make install" will not install pip if already present in user
http://bugs.python.org/issue24418  opened by pitrou

#24419: In argparse action append_const doesn't work for positional ar
http://bugs.python.org/issue24419  opened by py.user

#24420: Documentation regressions from adding subprocess.run()
http://bugs.python.org/issue24420  opened by vadmium

#24421: Race condition compiling Modules/_math.c
http://bugs.python.org/issue24421  opened by vadmium

#24424: xml.dom.minidom: performance issue with Node.insertBefore()
http://bugs.python.org/issue24424  opened by Robert Haschke

#24425: Installer Vender Issue
http://bugs.python.org/issue24425  opened by Hayden Young

#24426: re.split performance degraded significantly by capturing group
http://bugs.python.org/issue24426  opened by Patrick Maupin

#24427: subclass of multiprocessing Connection segfault upon attribute
http://bugs.python.org/issue24427  opened by neologix

#24429: msvcrt error when embedded
http://bugs.python.org/issue24429  opened by erik flister

#24430: ZipFile.read() cannot decrypt multiple members from Windows 7z
http://bugs.python.org/issue24430  opened by era

#24431: StreamWriter.drain is not callable concurrently
http://bugs.python.org/issue24431  opened by Martin.Teichmann

#24432: Upgrade windows builds to use OpenSSL 1.0.2b
http://bugs.python.org/issue24432  opened by alex

#24434: ItemsView.__contains__ does not mimic dict_items
http://bugs.python.org/issue24434  opened by clevy

#24435: Grammar/Grammar points to outdated guide
http://bugs.python.org/issue24435  opened by Rosuav

#24436: _PyTraceback_Add has no const qualifier for its char * argumen
http://bugs.python.org/issue24436  opened by mic-e

#24437: Add information about the buildbot console view and irc notice
http://bugs.python.org/issue24437  opened by r.david.murray

#24438: Strange behaviour when multiplying imaginary inf by 1
http://bugs.python.org/issue24438  opened by Alex Monk

#24439: Feedback for awaitable coroutine documentation
http://bugs.python.org/issue24439  opened by vadmium

#24440: Move the buildslave setup information from the wiki to the dev
http://bugs.python.org/issue24440  opened by r.david.murray



Most recent 15 issues with no replies (15)
==========================================

#24440: Move the buildslave setup information from the wiki to the dev
http://bugs.python.org/issue24440

#24439: Feedback for awaitable coroutine documentation
http://bugs.python.org/issue24439

#24435: Grammar/Grammar points to outdated guide
http://bugs.python.org/issue24435

#24427: subclass of multiprocessing Connection segfault upon attribute
http://bugs.python.org/issue24427

#24424: xml.dom.minidom: performance issue with Node.insertBefore()
http://bugs.python.org/issue24424

#24421: Race condition compiling Modules/_math.c
http://bugs.python.org/issue24421

#24419: In argparse action append_const doesn't work for positional ar
http://bugs.python.org/issue24419

#24417: Type-specific documentation for __format__ methods
http://bugs.python.org/issue24417

#24415: SIGINT always reset to SIG_DFL by Py_Finalize()
http://bugs.python.org/issue24415

#24414: MACOSX_DEPLOYMENT_TARGET set incorrectly by configure
http://bugs.python.org/issue24414

#24412: setUpClass equivalent for addCleanup
http://bugs.python.org/issue24412

#24407: Use after free in PyDict_merge
http://bugs.python.org/issue24407

#24406: "Built-in Types" documentation doesn't explain how dictionarie
http://bugs.python.org/issue24406

#24405: Missing code markup in "Expressions" documentation
http://bugs.python.org/issue24405

#24381: Got warning when compiling ffi.c on Mac
http://bugs.python.org/issue24381



Most recent 15 issues waiting for review (15)
=============================================

#24440: Move the buildslave setup information from the wiki to the dev
http://bugs.python.org/issue24440

#24437: Add information about the buildbot console view and irc notice
http://bugs.python.org/issue24437

#24436: _PyTraceback_Add has no const qualifier for its char * argumen
http://bugs.python.org/issue24436

#24434: ItemsView.__contains__ does not mimic dict_items
http://bugs.python.org/issue24434

#24426: re.split performance degraded significantly by capturing group
http://bugs.python.org/issue24426

#24424: xml.dom.minidom: performance issue with Node.insertBefore()
http://bugs.python.org/issue24424

#24420: Documentation regressions from adding subprocess.run()
http://bugs.python.org/issue24420

#24416: Return a namedtuple from date.isocalendar()
http://bugs.python.org/issue24416

#24408: tkinter.font.Font.measure() broken in 3.5
http://bugs.python.org/issue24408

#24406: "Built-in Types" documentation doesn't explain how dictionarie
http://bugs.python.org/issue24406

#24405: Missing code markup in "Expressions" documentation
http://bugs.python.org/issue24405

#24400: Awaitable ABC incompatible with functools.singledispatch
http://bugs.python.org/issue24400

#24397: Test asynchat makes wrong assumptions
http://bugs.python.org/issue24397

#24393: Test urllib2_localnet fails depending on host proxy configurat
http://bugs.python.org/issue24393

#24391: Better repr for threading objects
http://bugs.python.org/issue24391



Top 10 most discussed issues (10)
=================================

#24400: Awaitable ABC incompatible with functools.singledispatch
http://bugs.python.org/issue24400  26 msgs

#24385: libpython27.a in python-2.7.10 i386 (windows msi release) cont
http://bugs.python.org/issue24385  15 msgs

#15745: Numerous utime ns tests fail on FreeBSD w/ ZFS (update: and Ne
http://bugs.python.org/issue15745  13 msgs

#24416: Return a namedtuple from date.isocalendar()
http://bugs.python.org/issue24416  11 msgs

#18003: lzma module very slow with line-oriented reading.
http://bugs.python.org/issue18003  10 msgs

#24391: Better repr for threading objects
http://bugs.python.org/issue24391  10 msgs

#20186: Derby #18: Convert 31 sites to Argument Clinic across 23 files
http://bugs.python.org/issue20186   9 msgs

#24429: msvcrt error when embedded
http://bugs.python.org/issue24429   9 msgs

#11245: Implementation of IMAP IDLE in imaplib?
http://bugs.python.org/issue11245   6 msgs

#13566: Increase pickle compatibility
http://bugs.python.org/issue13566   6 msgs



Issues closed (22)
==================

#10666: OS X installer variants have confusing readline differences
http://bugs.python.org/issue10666  closed by ned.deily

#14287: sys.stdin.readline and KeyboardInterrupt on windows
http://bugs.python.org/issue14287  closed by vadmium

#18885: handle EINTR in the stdlib
http://bugs.python.org/issue18885  closed by haypo

#20346: Argument Clinic: missing entry in table mapping legacy convert
http://bugs.python.org/issue20346  closed by taleinat

#23653: Make inspect._empty test to False
http://bugs.python.org/issue23653  closed by yselivanov

#23891: Tutorial doesn't mention either pip or virtualenv
http://bugs.python.org/issue23891  closed by akuchling

#24299: 2.7.10 test__locale.py change breaks on Solaris
http://bugs.python.org/issue24299  closed by serhiy.storchaka

#24351: string.Template documentation incorrectly references "identifi
http://bugs.python.org/issue24351  closed by barry

#24362: Simplify the fast nodes resize logic in C OrderedDict.
http://bugs.python.org/issue24362  closed by Arfrever

#24388: Python readline module crashes in history_get on FreeBSD with 
http://bugs.python.org/issue24388  closed by ned.deily

#24389: idle launches in cmd but not from any other method.
http://bugs.python.org/issue24389  closed by zach.ware

#24392: pop functioning
http://bugs.python.org/issue24392  closed by steven.daprano

#24394: TypeError: popitem() takes no keyword arguments
http://bugs.python.org/issue24394  closed by eric.snow

#24399: Backports Arch Linux support for platform.linux_distribution()
http://bugs.python.org/issue24399  closed by berker.peksag

#24404: Python 2.7.0's BZ2File does not support with-statements
http://bugs.python.org/issue24404  closed by ned.deily

#24409: PythonPROD abrt
http://bugs.python.org/issue24409  closed by eric.smith

#24410: set.__eq__ returns False when it should return NotImplemented
http://bugs.python.org/issue24410  closed by serhiy.storchaka

#24411: Drop redundant lock in queue.Queue methods qsize(), empty() an
http://bugs.python.org/issue24411  closed by rhettinger

#24422: base64 not encode correctly
http://bugs.python.org/issue24422  closed by ezio.melotti

#24423: Fix wrong indentation in Doc/whatsnew/3.5.rst
http://bugs.python.org/issue24423  closed by ned.deily

#24428: Import sys,getopt is having issue while taking inputs
http://bugs.python.org/issue24428  closed by eric.smith

#24433: There is no asyncio.ensure_future in Python 3.4.3
http://bugs.python.org/issue24433  closed by zach.ware

From guido at python.org  Fri Jun 12 20:13:23 2015
From: guido at python.org (Guido van Rossum)
Date: Fri, 12 Jun 2015 11:13:23 -0700
Subject: [Python-Dev] Obtaining stack-frames from co-routine objects
In-Reply-To: <CABZ0LtAqnX0_3fpZK-FFFPzvGwkWhO5i1O92W42qv6i269cXXg@mail.gmail.com>
References: <CABZ0LtAPMi9nQb9ZSp4jW4-kvTte_WVjz=wtKvUWmGeSwuyqVA@mail.gmail.com>
 <55687044.1090700@gmail.com>
 <CABZ0LtDX8J-Uc6kBBQ93cCF1NzAcebA6oWsf03WX5VuViL0niA@mail.gmail.com>
 <556D3385.5000009@gmail.com>
 <CABZ0LtAqnX0_3fpZK-FFFPzvGwkWhO5i1O92W42qv6i269cXXg@mail.gmail.com>
Message-ID: <CAP7+vJJOycvm6tWE626RcAJ8UEBusRJsq3V8oqTmrS3KPkoWFw@mail.gmail.com>

On Thu, Jun 11, 2015 at 11:38 PM, Ben Leslie <benno at benno.id.au> wrote:

> On 2 June 2015 at 14:39, Yury Selivanov <yselivanov.ml at gmail.com> wrote:
> > Hi Ben,
> >
> > On 2015-05-31 8:35 AM, Ben Leslie wrote:
> >>
> >> Hi Yury,
> >>
> >> I'm just starting my exploration into using async/await; all my
> >> 'real-world' scenarios are currently hypothetical.
> >>
> >> One such hypothetical scenario however is that if I have a server
> >> process running, with some set of concurrent connections, each managed
> >> by a co-routine. Each co-routine is of some arbitrary complexity e.g:
> >> some combination of reading files, reading from database, reading from
> >> peripherals. If I notice one of those co-routines appears stuck and
> >> not making progress, I'd very much like to debug that, and preferably
> >> in a way that doesn't necessarily stop the rest of the server (or even
> >> the co-routine that appears stuck).
> >>
> >> The problem with the "if debug: log(...)" approach is that you need
> >> foreknowledge of the fault state occurring; on a busy server you don't
> >> want to just be logging every 'switch()'. I guess you could do
> >> something like "switch_state[outer_coro] = get_current_stack_frames()"
> >> on each switch. To me double book-keeping something that the
> >> interpreter already knows seems somewhat wasteful but maybe it isn't
> >> really too bad.
> >
> > I guess it all depends on how "switching" is organized in your
> > framework of choice.  In asyncio, for instance, all the code that
> > knows about coroutines is in tasks.py.  `Task` class is responsible
> > for running coroutines, and it's the single place where you would
> > need to put the "if debug: ..." line for debugging "slow" Futures--
> > the only thing that coroutines can get "stuck" on (the other thing
> > is accidentally calling blocking code, but your proposal wouldn't
> > help with that).
>
> I suspect that I haven't properly explained the motivating case.
>
> My motivating case is being able to debug a relatively large, complex
> system. If the system crashes (through an exception), or in some other
> manner enters an unexpected state (co-routines blocked for too long)
> it would be very nice to be able to debug an arbitrary co-routine, not
> necessarily the one indicating a bad system state, to see exactly what
> it was doing at the time the anomalous behavior occurred.
>
> So this is a case of trying to analyse some system-wide behavior
> rather than one particular task. Until you start
> analysing the rest of the system you don't know which co-routines
> you want to analyse.
>
> My motivation for this is primarily avoiding double book-keeping. I
> assume that the framework has organised things so that there is
> some data structure to find all the co-routines (or some other object
> wrapping the co-routines) and that all the "switching" occurs in
> one place.
>
> With that in mind I can have some code that works something like:
>
> @coroutine
> def switch():
>     coro_stacks[current_coro] =
> inspect.getouterframes(inspect.currentframe())
>     ....
>
> I think something like this is probably the best approach to achieve
> my desired goals with currently available APIs.
>
> However, I really don't like it, as it requires this double book-keeping. I'm
> manually retaining a traceback for each coro, which seems like a
> waste of memory, considering the interpreter already has this information
> stored, just unexposed.
>
> I feel it is desirable, and in line with existing Python patterns, to expose
> the interpreter's data structures rather than making the user do extra work
> to access the same information.
>

Ben, I suspect that this final paragraph is actually the crux to your
request. You need to understand what the interpreter is doing before you
can propose an API to its data structures. The particular thing to
understand about coroutines is that a coroutine which is suspended at
"yield" or "yield from" has a frame but no stack -- the frame holds the
locals and the suspension point, but it is not connected to any other
frames. Its f_back pointer is literally NULL. (Perhaps you are more used to
threads, where a suspended thread still has a stack.) Moreover, the
interpreter has no bookkeeping that keeps track of suspended frames. So I'm
not sure exactly what information you think the interpreter has stored but
does not expose. Even asyncio/tasks.py does not have this bookkeeping -- it
keeps track of Tasks, which are an asyncio-specific class that wraps
certain coroutines, but not every coroutine is wrapped by a Task (and this
is intentional, as a coroutine is a much more lightweight data structure
than a Task instance).

IOW I don't think that the problem here is that you haven't sufficiently
motivated your use case -- you are asking for information that just isn't
available. (Which is actually where you started the thread -- you can get
to the frame of the coroutine but there's nowhere to go from that frame.)
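The disconnected-frame behaviour described above is easy to see with a plain generator, which uses the same suspension machinery as a coroutine:

```python
def inner():
    yield 42

def outer():
    yield from inner()

g = outer()
next(g)  # suspend outer() (and inner()) at the yield

# The suspended frame exists, but it is not connected to any caller:
# its f_back pointer is literally NULL at the C level.
assert g.gi_frame is not None
assert g.gi_frame.f_back is None
```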

-- 
--Guido van Rossum (python.org/~guido)

From ncoghlan at gmail.com  Sat Jun 13 09:22:01 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sat, 13 Jun 2015 17:22:01 +1000
Subject: [Python-Dev] Obtaining stack-frames from co-routine objects
In-Reply-To: <CAP7+vJJOycvm6tWE626RcAJ8UEBusRJsq3V8oqTmrS3KPkoWFw@mail.gmail.com>
References: <CABZ0LtAPMi9nQb9ZSp4jW4-kvTte_WVjz=wtKvUWmGeSwuyqVA@mail.gmail.com>
 <55687044.1090700@gmail.com>
 <CABZ0LtDX8J-Uc6kBBQ93cCF1NzAcebA6oWsf03WX5VuViL0niA@mail.gmail.com>
 <556D3385.5000009@gmail.com>
 <CABZ0LtAqnX0_3fpZK-FFFPzvGwkWhO5i1O92W42qv6i269cXXg@mail.gmail.com>
 <CAP7+vJJOycvm6tWE626RcAJ8UEBusRJsq3V8oqTmrS3KPkoWFw@mail.gmail.com>
Message-ID: <CADiSq7dfMPtJUJWDaGH-Ry1CMnVv3NkHS+unDB0_1hxcAmn0KA@mail.gmail.com>

On 13 June 2015 at 04:13, Guido van Rossum <guido at python.org> wrote:
> IOW I don't think that the problem here is that you haven't sufficiently
> motivated your use case -- you are asking for information that just isn't
> available. (Which is actually where you started the thread -- you can get to
> the frame of the coroutine but there's nowhere to go from that frame.)

If I'm understanding Ben's request correctly, it isn't really the
stack trace that he's interested in (as you say, that specific
phrasing doesn't match the way coroutine suspension works), but rather
having visibility into the chain of control flow delegation for
currently suspended frames: what operation is the outermost frame
ultimately blocked *on*, and how did it get to the point of waiting
for that operation?

At the moment, all of the coroutine and generator-iterator resumption
information is implicit in the frame state, so we can't externally
introspect the delegation of control flow in a case like Ben's
original example (for coroutines) or like this one for generators:

    def g1():
        yield 42

    def g2():
        yield from g1()

    g = g2()
    next(g)
    # We can tell here that g is suspended
    # We can't tell that it delegated flow to a g1() instance

I wonder if in 3.6 it might be possible to *add* some bookkeeping to
"await" and "yield from" expressions that provides external visibility
into the underlying iterable or coroutine that the generator-iterator
or coroutine has delegated flow control to. As an initial assessment,
the runtime cost would be:

* an additional pointer added to generator/coroutine objects to track
control flow delegation
* setting that when suspending in "await" and "yield from" expressions
* clearing it when resuming in "await" and "yield from" expressions

(This would be a read-only borrowed reference from a Python level
perspective, so it shouldn't be necessary to alter the reference count
- we'd just be aliasing the existing reference from the frame's
internal stack state)

Regards,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From benno at benno.id.au  Sat Jun 13 12:25:59 2015
From: benno at benno.id.au (Ben Leslie)
Date: Sat, 13 Jun 2015 20:25:59 +1000
Subject: [Python-Dev] Obtaining stack-frames from co-routine objects
In-Reply-To: <CADiSq7dfMPtJUJWDaGH-Ry1CMnVv3NkHS+unDB0_1hxcAmn0KA@mail.gmail.com>
References: <CABZ0LtAPMi9nQb9ZSp4jW4-kvTte_WVjz=wtKvUWmGeSwuyqVA@mail.gmail.com>
 <55687044.1090700@gmail.com>
 <CABZ0LtDX8J-Uc6kBBQ93cCF1NzAcebA6oWsf03WX5VuViL0niA@mail.gmail.com>
 <556D3385.5000009@gmail.com>
 <CABZ0LtAqnX0_3fpZK-FFFPzvGwkWhO5i1O92W42qv6i269cXXg@mail.gmail.com>
 <CAP7+vJJOycvm6tWE626RcAJ8UEBusRJsq3V8oqTmrS3KPkoWFw@mail.gmail.com>
 <CADiSq7dfMPtJUJWDaGH-Ry1CMnVv3NkHS+unDB0_1hxcAmn0KA@mail.gmail.com>
Message-ID: <CABZ0LtA=swXVFT+yvQPXS_3vT_dRM12bagVdgmcNLNnN0RGR=w@mail.gmail.com>

On 13 June 2015 at 17:22, Nick Coghlan <ncoghlan at gmail.com> wrote:
> On 13 June 2015 at 04:13, Guido van Rossum <guido at python.org> wrote:
>> IOW I don't think that the problem here is that you haven't sufficiently
>> motivated your use case -- you are asking for information that just isn't
>> available. (Which is actually where you started the thread -- you can get to
>> the frame of the coroutine but there's nowhere to go from that frame.)
>
> If I'm understanding Ben's request correctly, it isn't really the
> stack trace that he's interested in (as you say, that specific
> phrasing doesn't match the way coroutine suspension works), but rather
> having visibility into the chain of control flow delegation for
> currently suspended frames: what operation is the outermost frame
> ultimately blocked *on*, and how did it get to the point of waiting
> for that operation?
>
> At the moment, all of the coroutine and generator-iterator resumption
> information is implicit in the frame state, so we can't externally
> introspect the delegation of control flow in a case like Ben's
> original example (for coroutines) or like this one for generators:
>
>     def g1():
>         yield 42
>
>     def g2():
>         yield from g1()
>
>     g = g2()
>     next(g)
>     # We can tell here that g is suspended
>     # We can't tell that it delegated flow to a g1() instance
>
> I wonder if in 3.6 it might be possible to *add* some bookkeeping to
> "await" and "yield from" expressions that provides external visibility
> into the underlying iterable or coroutine that the generator-iterator
> or coroutine has delegated flow control to. As an initial assessment,
> the runtime cost would be:
>
> * an additional pointer added to generator/coroutine objects to track
> control flow delegation
> * setting that when suspending in "await" and "yield from" expressions
> * clearing it when resuming in "await" and "yield from" expressions

Thanks Nick for rephrasing with the appropriate terminology. I had tried
to get it right, but with a background in implementing OS kernels, I have
strong habits from existing terminology to break.

I agree with your suggestion that explicitly having the pointers is much
nicer than my opcode hack implementation.

Without side-tracking this discussion, I do just want to say that the
hypothetical code is something that actually works if frame objects
expose the stack, which could be as easy as:

+static PyObject *
+frame_getstack(PyFrameObject *f, void *closure)
+{
+    PyObject **p;
+    PyObject *list = PyList_New(0);
+
+    if (list == NULL)
+        return NULL;
+
+    if (f->f_stacktop != NULL) {
+        for (p = f->f_valuestack; p < f->f_stacktop; p++) {
+            /* PyList_Append returns nonzero and sets an exception on failure */
+            if (PyList_Append(list, *p)) {
+                Py_DECREF(list);
+                return NULL;
+            }
+        }
+    }
+
+    return list;
+}

I have implemented and tested this and it worked well (although I don't
know the CPython internals well enough to be sure the above code doesn't
have some serious issues).

Is there any reason an f_stack attribute is not exposed for frames? Many of the
other PyFrameObject values are exposed. I'm guessing that, in normal operation,
there aren't many places where you can get hold of a frame with a non-empty
stack, so it probably hasn't been necessary.

Anyway, I'm not suggesting that adding f_stack is better than explicitly
adding pointers, but it does seem a more general thing that could be exposed
to enable this use case without requiring extra book-keeping data structures.

Cheers,

Ben

From benno at benno.id.au  Sat Jun 13 12:36:55 2015
From: benno at benno.id.au (Ben Leslie)
Date: Sat, 13 Jun 2015 20:36:55 +1000
Subject: [Python-Dev] Obtaining stack-frames from co-routine objects
In-Reply-To: <CAP7+vJ+hcJBu60yxEGugZkjbF7u2OsLCsj7OjB4obbOOMMffvA@mail.gmail.com>
References: <CABZ0LtAPMi9nQb9ZSp4jW4-kvTte_WVjz=wtKvUWmGeSwuyqVA@mail.gmail.com>
 <55687044.1090700@gmail.com>
 <CABZ0LtDX8J-Uc6kBBQ93cCF1NzAcebA6oWsf03WX5VuViL0niA@mail.gmail.com>
 <556D3385.5000009@gmail.com>
 <CABZ0LtAqnX0_3fpZK-FFFPzvGwkWhO5i1O92W42qv6i269cXXg@mail.gmail.com>
 <CAP7+vJJOycvm6tWE626RcAJ8UEBusRJsq3V8oqTmrS3KPkoWFw@mail.gmail.com>
 <CADiSq7dfMPtJUJWDaGH-Ry1CMnVv3NkHS+unDB0_1hxcAmn0KA@mail.gmail.com>
 <CAP7+vJ+hcJBu60yxEGugZkjbF7u2OsLCsj7OjB4obbOOMMffvA@mail.gmail.com>
Message-ID: <CABZ0LtARcKCFJ9c2iGX8sAvNutzqAq-kHigCooTs1rK+UY1F_w@mail.gmail.com>

On 13 June 2015 at 19:03, Guido van Rossum <guido at python.org> wrote:
> On Sat, Jun 13, 2015 at 12:22 AM, Nick Coghlan <ncoghlan at gmail.com> wrote:
>>
>> On 13 June 2015 at 04:13, Guido van Rossum <guido at python.org> wrote:
>> > IOW I don't think that the problem here is that you haven't sufficiently
>> > motivated your use case -- you are asking for information that just
>> > isn't
>> > available. (Which is actually where you started the thread -- you can
>> > get to
>> > the frame of the coroutine but there's nowhere to go from that frame.)
>>
>> If I'm understanding Ben's request correctly, it isn't really the
>> stack trace that he's interested in (as you say, that specific
>> phrasing doesn't match the way coroutine suspension works), but rather
>> having visibility into the chain of control flow delegation for
>> currently suspended frames: what operation is the outermost frame
>> ultimately blocked *on*, and how did it get to the point of waiting
>> for that operation?
>>
>> At the moment, all of the coroutine and generator-iterator resumption
>> information is implicit in the frame state, so we can't externally
>> introspect the delegation of control flow in a case like Ben's
>> original example (for coroutines) or like this one for generators:
>>
>>     def g1():
>>         yield 42
>>
>>     def g2():
>>         yield from g1()
>>
>>     g = g2()
>>     next(g)
>>     # We can tell here that g is suspended
>>     # We can't tell that it delegated flow to a g1() instance
>>
>> I wonder if in 3.6 it might be possible to *add* some bookkeeping to
>> "await" and "yield from" expressions that provides external visibility
>> into the underlying iterable or coroutine that the generator-iterator
>> or coroutine has delegated flow control to. As an initial assessment,
>> the runtime cost would be:
>>
>> * an additional pointer added to generator/coroutine objects to track
>> control flow delegation
>> * setting that when suspending in "await" and "yield from" expressions
>> * clearing it when resuming in "await" and "yield from" expressions
>>
>> (This would be a read-only borrowed reference from a Python level
>> perspective, so it shouldn't be necessary to alter the reference count
>> - we'd just be aliasing the existing reference from the frame's
>> internal stack state)
>
>
> Ah, this makes sense. I think the object you're after is 'reciever' [sic] in
> the YIELD_FROM opcode implementation, right?

Right, this is exactly the book-keeping I was originally referring to in
my previous emails (sorry for not making that more explicit earlier).
The 'reciever' is actually on the value-stack part of the co-routine's frame
the whole time (i.e. reachable via f->f_stacktop).

So from my point of view the book-keeping is there (albeit somewhat
obscurely!), but the objects on the frame's stack aren't exposed via
the Python wrapping of PyFrameObject (although they could, I think, easily
be exposed). Once you get at the receiver object (which is another
co-routine) you can traverse down to the point where the co-routine 'switched'.
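Once the delegation pointer is exposed as an attribute (as CPython 3.5+ does with `gi_yieldfrom` on generators), that traversal needs no access to the value stack at all; a minimal sketch:

```python
def innermost(gen):
    # Follow the chain of "yield from" delegations down to the
    # generator that is actually suspended at a plain "yield".
    while getattr(gen, "gi_yieldfrom", None) is not None:
        gen = gen.gi_yieldfrom
    return gen

def leaf():
    yield "blocked here"

def middle():
    yield from leaf()

def top():
    yield from middle()

g = top()
next(g)
assert innermost(g).gi_code.co_name == "leaf"
```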

Cheers,

Ben

From guido at python.org  Sat Jun 13 11:03:22 2015
From: guido at python.org (Guido van Rossum)
Date: Sat, 13 Jun 2015 02:03:22 -0700
Subject: [Python-Dev] Obtaining stack-frames from co-routine objects
In-Reply-To: <CADiSq7dfMPtJUJWDaGH-Ry1CMnVv3NkHS+unDB0_1hxcAmn0KA@mail.gmail.com>
References: <CABZ0LtAPMi9nQb9ZSp4jW4-kvTte_WVjz=wtKvUWmGeSwuyqVA@mail.gmail.com>
 <55687044.1090700@gmail.com>
 <CABZ0LtDX8J-Uc6kBBQ93cCF1NzAcebA6oWsf03WX5VuViL0niA@mail.gmail.com>
 <556D3385.5000009@gmail.com>
 <CABZ0LtAqnX0_3fpZK-FFFPzvGwkWhO5i1O92W42qv6i269cXXg@mail.gmail.com>
 <CAP7+vJJOycvm6tWE626RcAJ8UEBusRJsq3V8oqTmrS3KPkoWFw@mail.gmail.com>
 <CADiSq7dfMPtJUJWDaGH-Ry1CMnVv3NkHS+unDB0_1hxcAmn0KA@mail.gmail.com>
Message-ID: <CAP7+vJ+hcJBu60yxEGugZkjbF7u2OsLCsj7OjB4obbOOMMffvA@mail.gmail.com>

On Sat, Jun 13, 2015 at 12:22 AM, Nick Coghlan <ncoghlan at gmail.com> wrote:

> On 13 June 2015 at 04:13, Guido van Rossum <guido at python.org> wrote:
> > IOW I don't think that the problem here is that you haven't sufficiently
> > motivated your use case -- you are asking for information that just isn't
> > available. (Which is actually where you started the thread -- you can
> > get to the frame of the coroutine but there's nowhere to go from that
> > frame.)
>
> If I'm understanding Ben's request correctly, it isn't really the
> stack trace that he's interested in (as you say, that specific
> phrasing doesn't match the way coroutine suspension works), but rather
> having visibility into the chain of control flow delegation for
> currently suspended frames: what operation is the outermost frame
> ultimately blocked *on*, and how did it get to the point of waiting
> for that operation?
>
> At the moment, all of the coroutine and generator-iterator resumption
> information is implicit in the frame state, so we can't externally
> introspect the delegation of control flow in a case like Ben's
> original example (for coroutines) or like this one for generators:
>
>     def g1():
>         yield 42
>
>     def g2():
>         yield from g1()
>
>     g = g2()
>     next(g)
>     # We can tell here that g is suspended
>     # We can't tell that it delegated flow to a g1() instance
>
> I wonder if in 3.6 it might be possible to *add* some bookkeeping to
> "await" and "yield from" expressions that provides external visibility
> into the underlying iterable or coroutine that the generator-iterator
> or coroutine has delegated flow control to. As an initial assessment,
> the runtime cost would be:
>
> * an additional pointer added to generator/coroutine objects to track
> control flow delegation
> * setting that when suspending in "await" and "yield from" expressions
> * clearing it when resuming in "await" and "yield from" expressions
>
> (This would be a read-only borrowed reference from a Python level
> perspective, so it shouldn't be necessary to alter the reference count
> - we'd just be aliasing the existing reference from the frame's
> internal stack state)
>

Ah, this makes sense. I think the object you're after is 'reciever' [sic]
in the YIELD_FROM opcode implementation, right?

-- 
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20150613/1923879e/attachment.html>

From jaivishkothari10104733 at gmail.com  Sat Jun 13 12:38:05 2015
From: jaivishkothari10104733 at gmail.com (jaivish kothari)
Date: Sat, 13 Jun 2015 16:08:05 +0530
Subject: [Python-Dev] Obtaining stack-frames from co-routine objects
In-Reply-To: <CABZ0LtAPMi9nQb9ZSp4jW4-kvTte_WVjz=wtKvUWmGeSwuyqVA@mail.gmail.com>
References: <CABZ0LtAPMi9nQb9ZSp4jW4-kvTte_WVjz=wtKvUWmGeSwuyqVA@mail.gmail.com>
Message-ID: <CAPyvaKkQWZMxdbquiv6J15JBqyX_7MiscbM_OAicRT+YUb5YzQ@mail.gmail.com>

Hi,

   I have a question; I hope I'll find the solution here.

Say I have a Queue:
>>> h = Queue.Queue(maxsize=0)
>>> h.put(1)
>>> h.put(2)
>>> h.empty()
False
>>> h.join()
>>> h.empty()
False
>>> h.get()
1
>>> h.get()
2
>>> h.get()
Blocked.......................

My first question is:
In a single-threaded environment, why does get() block instead of
raising an exception? At the interpreter prompt I have no way to resume
working.

And my second question is:
Why do we have to explicitly call task_done() after get()? Why doesn't
get() call task_done() implicitly? put() automatically adds an entry to
the unfinished-task count, so why isn't it removed in get()?

Thanks.

On Fri, May 29, 2015 at 10:16 AM, Ben Leslie <benno at benno.id.au> wrote:

> Hi all,
>
> Apologies in advance; I'm not a regular, and this may have been
> handled already (but I couldn't find it when searching).
>
> I've been using the new async/await functionality (congrats again to
> Yury on getting that through!), and I'd like to get a stack trace
> between the place at which blocking occurs and the outer co-routine.
>
> For example, consider this code:
>
> """
> async def a():
>     await b()
>
> async def b():
>     await switch()
>
> @types.coroutine
> def switch():
>     yield
>
> coro_a = a()
> coro_a.send(None)
> """
>
> At this point I'd really like to be able to somehow get a stack trace
> similar to:
>
> test.py:2
> test.py:4
> test.py:9
>
> Using the gi_frame attribute of coro_a, I can get the line number of
> the outer frame (e.g.: line 2), but from there there is no way to
> descend the stack to reach the actual yield point.
>
> I thought that perhaps the switch() co-routine could yield the frame
> object returned from inspect.currentframe(), however once that
> function yields that frame object has f_back changed to None.
>
> A hypothetical approach would be to work the way down from the
> outer-frame, but that requires getting access to the co-routine object
> that the outer-frame is currently await-ing. Some hypothetical code
> could be:
>
> """
> def show(coro):
>     print("{}:{}".format(coro.gi_frame.f_code.co_filename,
>                          coro.gi_frame.f_lineno))
>     if dis.opname[coro.gi_code.co_code[coro.gi_frame.f_lasti + 1]] == 'YIELD_FROM':
>         show(coro.gi_frame.f_stack[0])
> """
>
> This relies on the fact that an await-ing co-routine will be executing
> a YIELD_FROM instruction. The above code uses a completely
> hypothetical 'f_stack' property of frame objects to pull the
> co-routine object which a co-routine is currently await-ing from the
> stack. I've implemented a proof-of-concept f_stack property in the
> frameobject.c just to test out the above code, and it seems to work.
>
> With all that, some questions:
>
> 1) Does anyone else see value in trying to get the stack-trace down to
> the actual yield point?
> 2) Is there a different way of doing it that doesn't require changes
> to Python internals?
> 3) Assuming no to #2 is there a better way of getting the information
> compared to the pretty hacky byte-code/stack inspection?
>
> Thanks,
>
> Ben
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/jaivishkothari10104733%40gmail.com
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20150613/1588f17f/attachment.html>

From ncoghlan at gmail.com  Sat Jun 13 18:21:55 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 14 Jun 2015 02:21:55 +1000
Subject: [Python-Dev] Obtaining stack-frames from co-routine objects
In-Reply-To: <CABZ0LtA=swXVFT+yvQPXS_3vT_dRM12bagVdgmcNLNnN0RGR=w@mail.gmail.com>
References: <CABZ0LtAPMi9nQb9ZSp4jW4-kvTte_WVjz=wtKvUWmGeSwuyqVA@mail.gmail.com>
 <55687044.1090700@gmail.com>
 <CABZ0LtDX8J-Uc6kBBQ93cCF1NzAcebA6oWsf03WX5VuViL0niA@mail.gmail.com>
 <556D3385.5000009@gmail.com>
 <CABZ0LtAqnX0_3fpZK-FFFPzvGwkWhO5i1O92W42qv6i269cXXg@mail.gmail.com>
 <CAP7+vJJOycvm6tWE626RcAJ8UEBusRJsq3V8oqTmrS3KPkoWFw@mail.gmail.com>
 <CADiSq7dfMPtJUJWDaGH-Ry1CMnVv3NkHS+unDB0_1hxcAmn0KA@mail.gmail.com>
 <CABZ0LtA=swXVFT+yvQPXS_3vT_dRM12bagVdgmcNLNnN0RGR=w@mail.gmail.com>
Message-ID: <CADiSq7fAfAOGsyh_tfw5r+2R_-arMiamfif5Dn7G27jt42ubNw@mail.gmail.com>

On 13 June 2015 at 20:25, Ben Leslie <benno at benno.id.au> wrote:
> Is there any reason an f_stack attribute is not exposed for frames? Many of the
> other PyFrameObject values are exposed. I'm guessing that there probably
> aren't too many places where you can get hold of a frame that doesn't have an
> empty stack in normal operation, so it probably isn't necessary.

There's also the fact that anything we do in CPython that assumes a
value stack based interpreter implementation might not work on other
Python implementations, so we generally try to keep that level of
detail hidden.

> Anyway, I'm not suggesting that adding f_stack is better than explicitly
> adding pointers, but it does seem a more general thing that can be exposed
> and enable this use case without requiring extra book-keeping data structures.

Emulating CPython frames on other implementations is already hard
enough, without exposing the value stack directly.

Compared to "emulate CPython's value stack", "keep track of delegation
to subiterators and subcoroutines" is a far more reasonable request to
make of developers of other implementations.

From a learnability perspective, there's also nothing about an
"f_stack" attribute that says "you can use this to find out where a
generator or coroutine has delegated control", while attributes like
"gi_delegate" or "cr_delegate" would be more self-explanatory.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From guido at python.org  Sat Jun 13 19:34:59 2015
From: guido at python.org (Guido van Rossum)
Date: Sat, 13 Jun 2015 10:34:59 -0700
Subject: [Python-Dev] Obtaining stack-frames from co-routine objects
In-Reply-To: <CADiSq7fAfAOGsyh_tfw5r+2R_-arMiamfif5Dn7G27jt42ubNw@mail.gmail.com>
References: <CABZ0LtAPMi9nQb9ZSp4jW4-kvTte_WVjz=wtKvUWmGeSwuyqVA@mail.gmail.com>
 <55687044.1090700@gmail.com>
 <CABZ0LtDX8J-Uc6kBBQ93cCF1NzAcebA6oWsf03WX5VuViL0niA@mail.gmail.com>
 <556D3385.5000009@gmail.com>
 <CABZ0LtAqnX0_3fpZK-FFFPzvGwkWhO5i1O92W42qv6i269cXXg@mail.gmail.com>
 <CAP7+vJJOycvm6tWE626RcAJ8UEBusRJsq3V8oqTmrS3KPkoWFw@mail.gmail.com>
 <CADiSq7dfMPtJUJWDaGH-Ry1CMnVv3NkHS+unDB0_1hxcAmn0KA@mail.gmail.com>
 <CABZ0LtA=swXVFT+yvQPXS_3vT_dRM12bagVdgmcNLNnN0RGR=w@mail.gmail.com>
 <CADiSq7fAfAOGsyh_tfw5r+2R_-arMiamfif5Dn7G27jt42ubNw@mail.gmail.com>
Message-ID: <CAP7+vJJ93ys1bN5STfZPcNGbqrBsNvywmb-+Fu=J_+sp1YhoDg@mail.gmail.com>

On Sat, Jun 13, 2015 at 9:21 AM, Nick Coghlan <ncoghlan at gmail.com> wrote:

> On 13 June 2015 at 20:25, Ben Leslie <benno at benno.id.au> wrote:
> > Is there any reason an f_stack attribute is not exposed for frames? Many
> > of the other PyFrameObject values are exposed. I'm guessing that there
> > probably aren't too many places where you can get hold of a frame that
> > doesn't have an empty stack in normal operation, so it probably isn't
> > necessary.
>
> There's also the fact that anything we do in CPython that assumes a
> value stack based interpreter implementation might not work on other
> Python implementations, so we generally try to keep that level of
> detail hidden.
>
> > Anyway, I'm not suggesting that adding f_stack is better than explicitly
> > adding pointers, but it does seem a more general thing that can be
> > exposed and enable this use case without requiring extra book-keeping
> > data structures.
>
> Emulating CPython frames on other implementations is already hard
> enough, without exposing the value stack directly.
>

I'm not sure how strong this argument is. We also expose bytecode, which is
about as unportable as anything. (Though arguably we can't keep bytecode a
secret, because it's written to .pyc files.) I find Ben's implementation
pretty straightforward, and because it makes a copy, I don't think there's
anything one could do with the exposed stack that could violate any of the
interpreter's invariants (I might change it to return a tuple to emphasize
this point to the caller though).

But I agree it isn't a solution to the question about the suspension
"stack".


> Compared to "emulate CPython's value stack", "keep track of delegation
> to subiterators and subcoroutines" is a far more reasonable request to
> make of developers of other implementations.
>
> From a learnability perspective, there's also nothing about an
> "f_stack" attribute that says "you can use this to find out where a
> generator or coroutine has delegated control", while attributes like
> "gi_delegate" or "cr_delegate" would be more self-explanatory.
>

Stack frame objects are kind of expensive and I would hate to add an extra
pointer to every frame just to support this functionality. Perhaps we could
have a flag though that says whether the top of the stack is in fact the
generator object on which we're waiting in a yield-from? This flag could
perhaps sit next to f_executing (also note that this new flag is mutually
exclusive with f_executing). We could then easily provide a new method or
property on the frame object that returns the desired generator if the flag
is set or None if the flag is not set -- other Python implementations could
choose to implement this differently.

-- 
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20150613/2bd9943a/attachment.html>

From python at mrabarnett.plus.com  Sat Jun 13 21:01:57 2015
From: python at mrabarnett.plus.com (MRAB)
Date: Sat, 13 Jun 2015 20:01:57 +0100
Subject: [Python-Dev] Obtaining stack-frames from co-routine objects
In-Reply-To: <CAPyvaKkQWZMxdbquiv6J15JBqyX_7MiscbM_OAicRT+YUb5YzQ@mail.gmail.com>
References: <CABZ0LtAPMi9nQb9ZSp4jW4-kvTte_WVjz=wtKvUWmGeSwuyqVA@mail.gmail.com>
 <CAPyvaKkQWZMxdbquiv6J15JBqyX_7MiscbM_OAicRT+YUb5YzQ@mail.gmail.com>
Message-ID: <557C7E25.1050702@mrabarnett.plus.com>

On 2015-06-13 11:38, jaivish kothari wrote:
> Hi ,
>
>     I had a Question,i hope i'll find the solution here.
>
> Say i have a Queue.
>   >>> h = Queue.Queue(maxsize=0)
>  >>> h.put(1)
>  >>> h.put(2)
>  >>> h.empty()
> False
>  >>> h.join()
>  >>> h.empty()
> False
>  >>> h.get()
> 1
>  >>> h.get()
> 2
>  >>> h.get()
> Blocked.......................
>
> My Question is :
> In a single threaded environment why does the get() gets blocked ,
> instead of raising an exception.On interpreter i have no way to resume
> working.
>
> And my second question is :
> Why doe we have to explicitly call task_done after get(). why doesn't
> get() implicitly call task_done().
> as for put() entry for unfinished_task is automatically added .why not
> get deleted in get() then.
>
The way it's used is to get an item, process it, and then signal that
you have finished with it. There's going to be a period of time when an
item is no longer in the queue but is still being processed.
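
A minimal worker sketch (using the Python 3 queue module) showing why
get() and task_done() have to be separate steps:

```python
import queue
import threading

q = queue.Queue()
results = []

def process(item):
    results.append(item * 2)

def worker():
    while True:
        item = q.get()
        if item is None:   # sentinel: shut the worker down
            q.task_done()
            break
        process(item)      # the item has left the queue, but isn't "done" yet
        q.task_done()      # only now does join() count it as finished

t = threading.Thread(target=worker)
t.start()
for n in (1, 2, 3):
    q.put(n)               # each put() increments the unfinished-task count
q.put(None)
q.join()                   # blocks until task_done() has matched every put()
t.join()
print(results)             # [2, 4, 6]
```

If get() called task_done() implicitly, join() would return while items
were still being processed, defeating its purpose.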


From greg.ewing at canterbury.ac.nz  Sun Jun 14 01:20:57 2015
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Sun, 14 Jun 2015 11:20:57 +1200
Subject: [Python-Dev] Obtaining stack-frames from co-routine objects
In-Reply-To: <CADiSq7dfMPtJUJWDaGH-Ry1CMnVv3NkHS+unDB0_1hxcAmn0KA@mail.gmail.com>
References: <CABZ0LtAPMi9nQb9ZSp4jW4-kvTte_WVjz=wtKvUWmGeSwuyqVA@mail.gmail.com>
 <55687044.1090700@gmail.com>
 <CABZ0LtDX8J-Uc6kBBQ93cCF1NzAcebA6oWsf03WX5VuViL0niA@mail.gmail.com>
 <556D3385.5000009@gmail.com>
 <CABZ0LtAqnX0_3fpZK-FFFPzvGwkWhO5i1O92W42qv6i269cXXg@mail.gmail.com>
 <CAP7+vJJOycvm6tWE626RcAJ8UEBusRJsq3V8oqTmrS3KPkoWFw@mail.gmail.com>
 <CADiSq7dfMPtJUJWDaGH-Ry1CMnVv3NkHS+unDB0_1hxcAmn0KA@mail.gmail.com>
Message-ID: <557CBAD9.1000705@canterbury.ac.nz>

Nick Coghlan wrote:
> I wonder if in 3.6 it might be possible to *add* some bookkeeping to
> "await" and "yield from" expressions that provides external visibility
> into the underlying iterable or coroutine that the generator-iterator
> or coroutine has delegated flow control to.

In my original implementation of yield-from, stack frames
had an f_yieldfrom slot referring to the object being
yielded from.

I gather that slot no longer exists at the C level, but
it ought to be possible to provide a Python attribute or
method returning the same information.

-- 
Greg

From benno at benno.id.au  Sun Jun 14 02:00:05 2015
From: benno at benno.id.au (Ben Leslie)
Date: Sun, 14 Jun 2015 10:00:05 +1000
Subject: [Python-Dev] Obtaining stack-frames from co-routine objects
In-Reply-To: <557CBAD9.1000705@canterbury.ac.nz>
References: <CABZ0LtAPMi9nQb9ZSp4jW4-kvTte_WVjz=wtKvUWmGeSwuyqVA@mail.gmail.com>
 <55687044.1090700@gmail.com>
 <CABZ0LtDX8J-Uc6kBBQ93cCF1NzAcebA6oWsf03WX5VuViL0niA@mail.gmail.com>
 <556D3385.5000009@gmail.com>
 <CABZ0LtAqnX0_3fpZK-FFFPzvGwkWhO5i1O92W42qv6i269cXXg@mail.gmail.com>
 <CAP7+vJJOycvm6tWE626RcAJ8UEBusRJsq3V8oqTmrS3KPkoWFw@mail.gmail.com>
 <CADiSq7dfMPtJUJWDaGH-Ry1CMnVv3NkHS+unDB0_1hxcAmn0KA@mail.gmail.com>
 <557CBAD9.1000705@canterbury.ac.nz>
Message-ID: <CABZ0LtD3NEFr9vCT07c00QmtMQF5Y3NQp3moXE0D4KxoA8DA4w@mail.gmail.com>

On 14 June 2015 at 09:20, Greg Ewing <greg.ewing at canterbury.ac.nz> wrote:
> Nick Coghlan wrote:
>>
>> I wonder if in 3.6 it might be possible to *add* some bookkeeping to
>> "await" and "yield from" expressions that provides external visibility
>> into the underlying iterable or coroutine that the generator-iterator
>> or coroutine has delegated flow control to.
>
>
> In my original implementation of yield-from, stack frames
> had an f_yieldfrom slot referring to the object being
> yielded from.
>
> I gather that slot no longer exists at the C level, but
> it ought to be possible to provide a Python attribute or
> method returning the same information.

OK, this is really easy to implement, actually. I implemented it as
gi_yieldfrom on the generator object rather than f_yieldfrom on the frame
object, primarily because most of the related code is already in the
genobject.c module.

Something like this:

diff --git a/Objects/genobject.c b/Objects/genobject.c
index 3c32e7b..bc42fe5 100644
--- a/Objects/genobject.c
+++ b/Objects/genobject.c
@@ -553,11 +553,22 @@ gen_set_qualname(PyGenObject *op, PyObject *value)
     return 0;
 }

+static PyObject *
+gen_getyieldfrom(PyGenObject *gen)
+{
+    PyObject *yf = gen_yf(gen);
+    if (yf == NULL)
+        Py_RETURN_NONE;
+
+    return yf;
+}
+
 static PyGetSetDef gen_getsetlist[] = {
     {"__name__", (getter)gen_get_name, (setter)gen_set_name,
      PyDoc_STR("name of the generator")},
     {"__qualname__", (getter)gen_get_qualname, (setter)gen_set_qualname,
      PyDoc_STR("qualified name of the generator")},
+    {"gi_yieldfrom", (getter)gen_getyieldfrom, NULL, NULL},
     {NULL} /* Sentinel */
 };

(Note: the above probably doesn't exactly follow the coding conventions,
and probably needs an appropriate doc string added.)

This greatly simplifies my very original 'show' routine to:

def show(coro):
    print("{}:{} ({})".format(coro.gi_frame.f_code.co_filename,
                              coro.gi_frame.f_lineno, coro))
    if coro.gi_yieldfrom:
        show(coro.gi_yieldfrom)

I think this would give the widest flexibility for non-CPython
implementations to implement
the same property in the most appropriate manner.
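
For reference, Nick's earlier g1()/g2() generator example can be
inspected the same way. This is a sketch assuming the gi_yieldfrom
attribute lands as proposed above:

```python
def g1():
    yield 42

def g2():
    yield from g1()

g = g2()
value = next(g)         # g2 is now suspended inside "yield from g1()"
assert value == 42

# gi_yieldfrom exposes the g1() instance that g2 delegated control to.
inner = g.gi_yieldfrom
print(inner.gi_frame.f_code.co_name)   # prints g1
```

So the delegation that was previously only implicit in the frame state
becomes directly observable from the outside.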

If this seems like a good approach I'll try and work it into a
suitable patch for contribution.

Cheers,

Ben

From ncoghlan at gmail.com  Sun Jun 14 03:05:36 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 14 Jun 2015 11:05:36 +1000
Subject: [Python-Dev] Obtaining stack-frames from co-routine objects
In-Reply-To: <CAP7+vJJ93ys1bN5STfZPcNGbqrBsNvywmb-+Fu=J_+sp1YhoDg@mail.gmail.com>
References: <CABZ0LtAPMi9nQb9ZSp4jW4-kvTte_WVjz=wtKvUWmGeSwuyqVA@mail.gmail.com>
 <55687044.1090700@gmail.com>
 <CABZ0LtDX8J-Uc6kBBQ93cCF1NzAcebA6oWsf03WX5VuViL0niA@mail.gmail.com>
 <556D3385.5000009@gmail.com>
 <CABZ0LtAqnX0_3fpZK-FFFPzvGwkWhO5i1O92W42qv6i269cXXg@mail.gmail.com>
 <CAP7+vJJOycvm6tWE626RcAJ8UEBusRJsq3V8oqTmrS3KPkoWFw@mail.gmail.com>
 <CADiSq7dfMPtJUJWDaGH-Ry1CMnVv3NkHS+unDB0_1hxcAmn0KA@mail.gmail.com>
 <CABZ0LtA=swXVFT+yvQPXS_3vT_dRM12bagVdgmcNLNnN0RGR=w@mail.gmail.com>
 <CADiSq7fAfAOGsyh_tfw5r+2R_-arMiamfif5Dn7G27jt42ubNw@mail.gmail.com>
 <CAP7+vJJ93ys1bN5STfZPcNGbqrBsNvywmb-+Fu=J_+sp1YhoDg@mail.gmail.com>
Message-ID: <CADiSq7c80rgfTLinA4VHSD6jKT8woAUs2mx-vS86yqfQVA2hRg@mail.gmail.com>

On 14 Jun 2015 03:35, "Guido van Rossum" <guido at python.org> wrote:
>
> On Sat, Jun 13, 2015 at 9:21 AM, Nick Coghlan <ncoghlan at gmail.com> wrote:
>> From a learnability perspective, there's also nothing about an
>> "f_stack" attribute that says "you can use this to find out where a
>> generator or coroutine has delegated control", while attributes like
>> "gi_delegate" or "cr_delegate" would be more self-explanatory.
>
> Stack frame objects are kind of expensive and I would hate to add an
> extra pointer to every frame just to support this functionality. Perhaps
> we could have a flag though that says whether the top of the stack is in
> fact the generator object on which we're waiting in a yield-from? This
> flag could perhaps sit next to f_executing (also note that this new flag
> is mutually exclusive with f_executing). We could then easily provide a
> new method or property on the frame object that returns the desired
> generator if the flag is set or None if the flag is not set -- other
> Python implementations could choose to implement this differently.

Fortunately, we can expose this control flow delegation info through the
generator-iterator and coroutine object APIs, rather than needing to do it
directly on the frame.

I'd missed that it could be done without *any* new C level state though - I
now think Ben's right that we should be able to just expose the delegation
lookup from the resumption logic itself as a calculated property.

Cheers,
Nick.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20150614/1de07516/attachment.html>

From ncoghlan at gmail.com  Sun Jun 14 03:16:34 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 14 Jun 2015 11:16:34 +1000
Subject: [Python-Dev] Obtaining stack-frames from co-routine objects
In-Reply-To: <CABZ0LtD3NEFr9vCT07c00QmtMQF5Y3NQp3moXE0D4KxoA8DA4w@mail.gmail.com>
References: <CABZ0LtAPMi9nQb9ZSp4jW4-kvTte_WVjz=wtKvUWmGeSwuyqVA@mail.gmail.com>
 <55687044.1090700@gmail.com>
 <CABZ0LtDX8J-Uc6kBBQ93cCF1NzAcebA6oWsf03WX5VuViL0niA@mail.gmail.com>
 <556D3385.5000009@gmail.com>
 <CABZ0LtAqnX0_3fpZK-FFFPzvGwkWhO5i1O92W42qv6i269cXXg@mail.gmail.com>
 <CAP7+vJJOycvm6tWE626RcAJ8UEBusRJsq3V8oqTmrS3KPkoWFw@mail.gmail.com>
 <CADiSq7dfMPtJUJWDaGH-Ry1CMnVv3NkHS+unDB0_1hxcAmn0KA@mail.gmail.com>
 <557CBAD9.1000705@canterbury.ac.nz>
 <CABZ0LtD3NEFr9vCT07c00QmtMQF5Y3NQp3moXE0D4KxoA8DA4w@mail.gmail.com>
Message-ID: <CADiSq7dJT+b_6Jqg6T59cpFXQuUiugB7M-69-qH51tDN9h_sPQ@mail.gmail.com>

On 14 Jun 2015 10:01, "Ben Leslie" <benno at benno.id.au> wrote:
>
> If this seems like a good approach I'll try and work it in to a
> suitable patch for contribution.

I think it's a good approach, and worth opening an enhancement issue for.

I expect any patch would need some adjustments after Yury has finished
revising the async/await implementation to address some beta compatibility
issues with functools.singledispatch and Tornado.

Cheers,
Nick.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20150614/1c8db199/attachment.html>

From guido at python.org  Sun Jun 14 11:16:56 2015
From: guido at python.org (Guido van Rossum)
Date: Sun, 14 Jun 2015 02:16:56 -0700
Subject: [Python-Dev] Obtaining stack-frames from co-routine objects
In-Reply-To: <CADiSq7dJT+b_6Jqg6T59cpFXQuUiugB7M-69-qH51tDN9h_sPQ@mail.gmail.com>
References: <CABZ0LtAPMi9nQb9ZSp4jW4-kvTte_WVjz=wtKvUWmGeSwuyqVA@mail.gmail.com>
 <55687044.1090700@gmail.com>
 <CABZ0LtDX8J-Uc6kBBQ93cCF1NzAcebA6oWsf03WX5VuViL0niA@mail.gmail.com>
 <556D3385.5000009@gmail.com>
 <CABZ0LtAqnX0_3fpZK-FFFPzvGwkWhO5i1O92W42qv6i269cXXg@mail.gmail.com>
 <CAP7+vJJOycvm6tWE626RcAJ8UEBusRJsq3V8oqTmrS3KPkoWFw@mail.gmail.com>
 <CADiSq7dfMPtJUJWDaGH-Ry1CMnVv3NkHS+unDB0_1hxcAmn0KA@mail.gmail.com>
 <557CBAD9.1000705@canterbury.ac.nz>
 <CABZ0LtD3NEFr9vCT07c00QmtMQF5Y3NQp3moXE0D4KxoA8DA4w@mail.gmail.com>
 <CADiSq7dJT+b_6Jqg6T59cpFXQuUiugB7M-69-qH51tDN9h_sPQ@mail.gmail.com>
Message-ID: <CAP7+vJKOMU2psMyLaKOax=xuUksgKPJYJ_8-G2GRd02EJ0DicA@mail.gmail.com>

A good plan. I think this could be added to 3.5 still? It's a pretty minor
adjustment to the PEP 492 machinery, really.

On Sat, Jun 13, 2015 at 6:16 PM, Nick Coghlan <ncoghlan at gmail.com> wrote:

>
> On 14 Jun 2015 10:01, "Ben Leslie" <benno at benno.id.au> wrote:
> >
> > If this seems like a good approach I'll try and work it in to a
> > suitable patch for contribution.
>
> I think it's a good approach, and worth opening an enhancement issue for.
>
> I expect any patch would need some adjustments after Yury has finished
> revising the async/await implementation to address some beta compatibility
> issues with functools.singledispatch and Tornado.
>
> Cheers,
> Nick.
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/guido%40python.org
>
>


-- 
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20150614/feced550/attachment.html>

From benno at benno.id.au  Sun Jun 14 12:50:49 2015
From: benno at benno.id.au (Ben Leslie)
Date: Sun, 14 Jun 2015 20:50:49 +1000
Subject: [Python-Dev] Obtaining stack-frames from co-routine objects
In-Reply-To: <CAP7+vJKOMU2psMyLaKOax=xuUksgKPJYJ_8-G2GRd02EJ0DicA@mail.gmail.com>
References: <CABZ0LtAPMi9nQb9ZSp4jW4-kvTte_WVjz=wtKvUWmGeSwuyqVA@mail.gmail.com>
 <55687044.1090700@gmail.com>
 <CABZ0LtDX8J-Uc6kBBQ93cCF1NzAcebA6oWsf03WX5VuViL0niA@mail.gmail.com>
 <556D3385.5000009@gmail.com>
 <CABZ0LtAqnX0_3fpZK-FFFPzvGwkWhO5i1O92W42qv6i269cXXg@mail.gmail.com>
 <CAP7+vJJOycvm6tWE626RcAJ8UEBusRJsq3V8oqTmrS3KPkoWFw@mail.gmail.com>
 <CADiSq7dfMPtJUJWDaGH-Ry1CMnVv3NkHS+unDB0_1hxcAmn0KA@mail.gmail.com>
 <557CBAD9.1000705@canterbury.ac.nz>
 <CABZ0LtD3NEFr9vCT07c00QmtMQF5Y3NQp3moXE0D4KxoA8DA4w@mail.gmail.com>
 <CADiSq7dJT+b_6Jqg6T59cpFXQuUiugB7M-69-qH51tDN9h_sPQ@mail.gmail.com>
 <CAP7+vJKOMU2psMyLaKOax=xuUksgKPJYJ_8-G2GRd02EJ0DicA@mail.gmail.com>
Message-ID: <CABZ0LtCMLNWMvJBAdvUMkZ3jJq=jeEmXE=9eJdXhgbGQFXBOqA@mail.gmail.com>

Per Nick's advice I've created enhancement proposal 245340 with an
attached patch.

On 14 June 2015 at 19:16, Guido van Rossum <guido at python.org> wrote:
> A good plan. I think this could be added to 3.5 still? It's a pretty minor
> adjustment to the PEP 492 machinery, really.
>
> On Sat, Jun 13, 2015 at 6:16 PM, Nick Coghlan <ncoghlan at gmail.com> wrote:
>>
>>
>> On 14 Jun 2015 10:01, "Ben Leslie" <benno at benno.id.au> wrote:
>> >
>> > If this seems like a good approach I'll try and work it in to a
>> > suitable patch for contribution.
>>
>> I think it's a good approach, and worth opening an enhancement issue for.
>>
>> I expect any patch would need some adjustments after Yury has finished
>> revising the async/await implementation to address some beta compatibility
>> issues with functools.singledispatch and Tornado.
>>
>> Cheers,
>> Nick.
>>
>>
>> _______________________________________________
>> Python-Dev mailing list
>> Python-Dev at python.org
>> https://mail.python.org/mailman/listinfo/python-dev
>> Unsubscribe:
>> https://mail.python.org/mailman/options/python-dev/guido%40python.org
>>
>
>
>
> --
> --Guido van Rossum (python.org/~guido)

From ncoghlan at gmail.com  Sun Jun 14 13:20:04 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 14 Jun 2015 21:20:04 +1000
Subject: [Python-Dev] Obtaining stack-frames from co-routine objects
In-Reply-To: <CAP7+vJKOMU2psMyLaKOax=xuUksgKPJYJ_8-G2GRd02EJ0DicA@mail.gmail.com>
References: <CABZ0LtAPMi9nQb9ZSp4jW4-kvTte_WVjz=wtKvUWmGeSwuyqVA@mail.gmail.com>
 <55687044.1090700@gmail.com>
 <CABZ0LtDX8J-Uc6kBBQ93cCF1NzAcebA6oWsf03WX5VuViL0niA@mail.gmail.com>
 <556D3385.5000009@gmail.com>
 <CABZ0LtAqnX0_3fpZK-FFFPzvGwkWhO5i1O92W42qv6i269cXXg@mail.gmail.com>
 <CAP7+vJJOycvm6tWE626RcAJ8UEBusRJsq3V8oqTmrS3KPkoWFw@mail.gmail.com>
 <CADiSq7dfMPtJUJWDaGH-Ry1CMnVv3NkHS+unDB0_1hxcAmn0KA@mail.gmail.com>
 <557CBAD9.1000705@canterbury.ac.nz>
 <CABZ0LtD3NEFr9vCT07c00QmtMQF5Y3NQp3moXE0D4KxoA8DA4w@mail.gmail.com>
 <CADiSq7dJT+b_6Jqg6T59cpFXQuUiugB7M-69-qH51tDN9h_sPQ@mail.gmail.com>
 <CAP7+vJKOMU2psMyLaKOax=xuUksgKPJYJ_8-G2GRd02EJ0DicA@mail.gmail.com>
Message-ID: <CADiSq7fd3xwz=NspSpKgWpLrhg=NuL+46XxV4-fDrv95MP6ijQ@mail.gmail.com>

On 14 Jun 2015 19:17, "Guido van Rossum" <guido at python.org> wrote:
>
> A good plan. I think this could be added to 3.5 still? It's a pretty
> minor adjustment to the PEP 492 machinery, really.

Good point - as per Ben's original post, the lack of it makes it quite hard
to get a clear picture of the system state when using coroutines in the
current beta.

Cheers,
Nick.

>
> On Sat, Jun 13, 2015 at 6:16 PM, Nick Coghlan <ncoghlan at gmail.com> wrote:
>>
>>
>> On 14 Jun 2015 10:01, "Ben Leslie" <benno at benno.id.au> wrote:
>> >
>> > If this seems like a good approach I'll try and work it in to a
>> > suitable patch for contribution.
>>
>> I think it's a good approach, and worth opening an enhancement issue for.
>>
>> I expect any patch would need some adjustments after Yury has finished
>> revising the async/await implementation to address some beta compatibility
>> issues with functools.singledispatch and Tornado.
>>
>> Cheers,
>> Nick.
>>
>>
>> _______________________________________________
>> Python-Dev mailing list
>> Python-Dev at python.org
>> https://mail.python.org/mailman/listinfo/python-dev
>> Unsubscribe:
https://mail.python.org/mailman/options/python-dev/guido%40python.org
>>
>
>
>
> --
> --Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20150614/2c6527bc/attachment.html>

From breamoreboy at yahoo.co.uk  Sun Jun 14 13:26:15 2015
From: breamoreboy at yahoo.co.uk (Mark Lawrence)
Date: Sun, 14 Jun 2015 12:26:15 +0100
Subject: [Python-Dev] Obtaining stack-frames from co-routine objects
In-Reply-To: <CABZ0LtCMLNWMvJBAdvUMkZ3jJq=jeEmXE=9eJdXhgbGQFXBOqA@mail.gmail.com>
References: <CABZ0LtAPMi9nQb9ZSp4jW4-kvTte_WVjz=wtKvUWmGeSwuyqVA@mail.gmail.com>
 <55687044.1090700@gmail.com>
 <CABZ0LtDX8J-Uc6kBBQ93cCF1NzAcebA6oWsf03WX5VuViL0niA@mail.gmail.com>
 <556D3385.5000009@gmail.com>
 <CABZ0LtAqnX0_3fpZK-FFFPzvGwkWhO5i1O92W42qv6i269cXXg@mail.gmail.com>
 <CAP7+vJJOycvm6tWE626RcAJ8UEBusRJsq3V8oqTmrS3KPkoWFw@mail.gmail.com>
 <CADiSq7dfMPtJUJWDaGH-Ry1CMnVv3NkHS+unDB0_1hxcAmn0KA@mail.gmail.com>
 <557CBAD9.1000705@canterbury.ac.nz>
 <CABZ0LtD3NEFr9vCT07c00QmtMQF5Y3NQp3moXE0D4KxoA8DA4w@mail.gmail.com>
 <CADiSq7dJT+b_6Jqg6T59cpFXQuUiugB7M-69-qH51tDN9h_sPQ@mail.gmail.com>
 <CAP7+vJKOMU2psMyLaKOax=xuUksgKPJYJ_8-G2GRd02EJ0DicA@mail.gmail.com>
 <CABZ0LtCMLNWMvJBAdvUMkZ3jJq=jeEmXE=9eJdXhgbGQFXBOqA@mail.gmail.com>
Message-ID: <mljod3$47d$1@ger.gmane.org>

On 14/06/2015 11:50, Ben Leslie wrote:
> Per Nick's advice I've created enhancement proposal 245340 with an
> attached patch.
>

http://bugs.python.org/issue24450 as opposed to 
http://bugs.python.org/issue24450#msg245340 :)

-- 
My fellow Pythonistas, ask not what our language can do for you, ask
what you can do for our language.

Mark Lawrence


From nad at acm.org  Mon Jun 15 03:43:34 2015
From: nad at acm.org (Ned Deily)
Date: Sun, 14 Jun 2015 18:43:34 -0700
Subject: [Python-Dev] cpython (merge 3.4 -> default): null merge with 3.4
References: <20150615003717.2599.82804@psf.io>
Message-ID: <nad-FE92CA.18433414062015@news.gmane.org>

In article <20150615003717.2599.82804 at psf.io>,
 senthil.kumaran <python-checkins at python.org> wrote:
> https://hg.python.org/cpython/rev/9a0c5ffe7420
> changeset:   96605:9a0c5ffe7420
> parent:      96603:47a566d6ee2a
> parent:      96604:3ded282f9615
> user:        Senthil Kumaran <senthil at uthcode.com>
> date:        Sun Jun 14 17:37:09 2015 -0700
> summary:
>   null merge with 3.4
> 
> Back porting changeset db302b88fdb6 to 3.4 branch, which fixed multiple 
> documentation typos.
> 
> Related Issues:
> #issue21528
> #issue24453
> 
> files:

Senthil,

There is now an active 3.5 branch, so the correct current order of 
merging is:

3.4 -> 3.5
3.5 -> default

I've checked in a couple of null merges to try to fix things.

-- 
 Ned Deily,
 nad at acm.org


From senthil at uthcode.com  Tue Jun 16 04:30:41 2015
From: senthil at uthcode.com (Senthil Kumaran)
Date: Mon, 15 Jun 2015 19:30:41 -0700
Subject: [Python-Dev] cpython (merge 3.4 -> default): null merge with 3.4
In-Reply-To: <nad-FE92CA.18433414062015@news.gmane.org>
References: <20150615003717.2599.82804@psf.io>
 <nad-FE92CA.18433414062015@news.gmane.org>
Message-ID: <11F58A5421434146A1E07F9C669FC828@uthcode.com>

On Sunday, June 14, 2015 at 6:43 PM, Ned Deily wrote:
> Senthil,
> 
> There is now an active 3.5 branch, so the correct current order of 
> merging is:
> 
> 3.4 -> 3.5
> 3.5 -> default
> 
> I've checked in a couple of null merges to try to fix things.
Oh! I missed that. Sorry for the trouble. 
I will update my local branches and follow the process.

3.4 -> 3.5
3.5 -> default

Thank you!
Senthil

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20150615/e104fb01/attachment.html>

From larry at hastings.org  Wed Jun 17 01:26:34 2015
From: larry at hastings.org (Larry Hastings)
Date: Tue, 16 Jun 2015 16:26:34 -0700
Subject: [Python-Dev] How do people like the early 3.5 branch?
Message-ID: <5580B0AA.3060901@hastings.org>



A quick look through the checkin logs suggests that there's literally 
nothing happening in 3.6 right now.  All the checkins are merges.

Is anyone expecting to do work in 3.6 soon?  Or did the early branch 
just create a bunch of make-work for everybody?

Monitoring the progress of our experiment,


//arry/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20150616/e7e0e1a4/attachment.html>

From robertc at robertcollins.net  Wed Jun 17 01:43:29 2015
From: robertc at robertcollins.net (Robert Collins)
Date: Wed, 17 Jun 2015 11:43:29 +1200
Subject: [Python-Dev] How do people like the early 3.5 branch?
In-Reply-To: <5580B0AA.3060901@hastings.org>
References: <5580B0AA.3060901@hastings.org>
Message-ID: <CAJ3HoZ2hxQM3M3QB=OO2M9dO+bbNpJDxFfunBsGFoqY-kDWxnQ@mail.gmail.com>

On 17 June 2015 at 11:26, Larry Hastings <larry at hastings.org> wrote:
>
>
> A quick look through the checkin logs suggests that there's literally
> nothing happening in 3.6 right now.  All the checkins are merges.
>
> Is anyone expecting to do work in 3.6 soon?  Or did the early branch just
> create a bunch of make-work for everybody?
>
> Monitoring the progress of our experiment,

When I next get tuits, it will be on 3.6; I like the branch early even
though I haven't used it.

-Rob


-- 
Robert Collins <rbtcollins at hp.com>
Distinguished Technologist
HP Converged Cloud

From taleinat at gmail.com  Wed Jun 17 07:18:25 2015
From: taleinat at gmail.com (Tal Einat)
Date: Wed, 17 Jun 2015 08:18:25 +0300
Subject: [Python-Dev] How do people like the early 3.5 branch?
In-Reply-To: <5580B0AA.3060901@hastings.org>
References: <5580B0AA.3060901@hastings.org>
Message-ID: <CALWZvp4TFnoehT7sCi9o9gLTwhOL5ps=EyOyTTi84-ubE8KM5A@mail.gmail.com>

On Wed, Jun 17, 2015 at 2:26 AM, Larry Hastings <larry at hastings.org> wrote:
>
>
> A quick look through the checkin logs suggests that there's literally
> nothing happening in 3.6 right now.  All the checkins are merges.
>
> Is anyone expecting to do work in 3.6 soon?  Or did the early branch just
> create a bunch of make-work for everybody?

Work on Argument Clinic conversion is continuing (slowly) on the
default branch, without interfering with the 3.5 release process.

From storchaka at gmail.com  Wed Jun 17 19:10:03 2015
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Wed, 17 Jun 2015 20:10:03 +0300
Subject: [Python-Dev] How do people like the early 3.5 branch?
In-Reply-To: <5580B0AA.3060901@hastings.org>
References: <5580B0AA.3060901@hastings.org>
Message-ID: <mls9lb$toq$1@ger.gmane.org>

On 17.06.15 02:26, Larry Hastings wrote:
> A quick look through the checkin logs suggests that there's literally
> nothing happening in 3.6 right now.  All the checkins are merges.
>
> Is anyone expecting to do work in 3.6 soon?  Or did the early branch
> just create a bunch of make-work for everybody?
>
> Monitoring the progress of our experiment,

Right now the main effort is focused on fixing bugs found in the first 
betas and on finishing incomplete features. I think this will change 
after the next betas are released, and especially after the release 
candidates.


From tjreedy at udel.edu  Thu Jun 18 20:27:51 2015
From: tjreedy at udel.edu (Terry Reedy)
Date: Thu, 18 Jun 2015 14:27:51 -0400
Subject: [Python-Dev] Unicode 8.0 and 3.5
Message-ID: <mlv2kk$fc6$1@ger.gmane.org>

Unicode 8.0 was just released.  Can we have unicodedata updated to match 
in 3.5?

-- 
Terry Jan Reedy


From larry at hastings.org  Thu Jun 18 20:33:13 2015
From: larry at hastings.org (Larry Hastings)
Date: Thu, 18 Jun 2015 11:33:13 -0700
Subject: [Python-Dev] Unicode 8.0 and 3.5
In-Reply-To: <mlv2kk$fc6$1@ger.gmane.org>
References: <mlv2kk$fc6$1@ger.gmane.org>
Message-ID: <55830EE9.30509@hastings.org>

On 06/18/2015 11:27 AM, Terry Reedy wrote:
> Unicode 8.0 was just released.  Can we have unicodedata updated to 
> match in 3.5?
>

What does this entail?  Data changes, code changes, both?


//arry/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20150618/bfcb047d/attachment.html>

From python at mrabarnett.plus.com  Thu Jun 18 21:34:14 2015
From: python at mrabarnett.plus.com (MRAB)
Date: Thu, 18 Jun 2015 20:34:14 +0100
Subject: [Python-Dev] Unicode 8.0 and 3.5
In-Reply-To: <55830EE9.30509@hastings.org>
References: <mlv2kk$fc6$1@ger.gmane.org> <55830EE9.30509@hastings.org>
Message-ID: <55831D36.1070108@mrabarnett.plus.com>

On 2015-06-18 19:33, Larry Hastings wrote:
> On 06/18/2015 11:27 AM, Terry Reedy wrote:
>> Unicode 8.0 was just released.  Can we have unicodedata updated to
>> match in 3.5?
>>
>
> What does this entail?  Data changes, code changes, both?
>
It looks like just data changes.

There are additional codepoints and a renamed property (which the
standard library doesn't support anyway).


From ajaveed at sabanciuniv.edu  Thu Jun 18 23:34:18 2015
From: ajaveed at sabanciuniv.edu (Arsalan Javeed (Student))
Date: Fri, 19 Jun 2015 00:34:18 +0300
Subject: [Python-Dev] About Python Developer Testsuites
Message-ID: <CAEJYch_=qzbLTne8wz1N8FtiTJZ0djyOqeXD9jrDNAwyp0QrGw@mail.gmail.com>

Dear Concerned,

I am using Python v2.7.1 for my research work in software testing. I have
already downloaded the source code and found the built-in test suites for
running against the build. However, I was wondering whether there are any
test suites primarily meant for *developers* of the source code, to be used
before releasing a build (for instance, smoke or regression test suites),
besides the ones already distributed with the source code.

If such developer test suites exist, I would request that you share them.

Thank you in advance for the information.

-- 
Regards,

Arsalan Javeed
MS Computer Science and Engineering
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20150619/6aa98272/attachment.html>

From steve at pearwood.info  Fri Jun 19 01:56:44 2015
From: steve at pearwood.info (Steven D'Aprano)
Date: Fri, 19 Jun 2015 09:56:44 +1000
Subject: [Python-Dev] Unicode 8.0 and 3.5
In-Reply-To: <55831D36.1070108@mrabarnett.plus.com>
References: <mlv2kk$fc6$1@ger.gmane.org> <55830EE9.30509@hastings.org>
 <55831D36.1070108@mrabarnett.plus.com>
Message-ID: <20150618235644.GG20701@ando.pearwood.info>

On Thu, Jun 18, 2015 at 08:34:14PM +0100, MRAB wrote:
> On 2015-06-18 19:33, Larry Hastings wrote:
> >On 06/18/2015 11:27 AM, Terry Reedy wrote:
> >>Unicode 8.0 was just released.  Can we have unicodedata updated to
> >>match in 3.5?
> >>
> >
> >What does this entail?  Data changes, code changes, both?
> >
> It looks like just data changes.

At the very least, there is a change to the casefolding algorithm. 
Cherokee was classified as unicameral but is now considered bicameral 
(two cases, like English). Unusually, case-folding Cherokee maps to 
uppercase rather than lowercase.

The full set of changes is listed here:

http://unicode.org/versions/Unicode8.0.0/

Apart from the addition of 7716 characters and changes to 
str.casefold(), I don't think any of the changes will make a big 
difference to Python's implementation. But it would be good to support 
Unicode 8 (to the degree that Python actually does support Unicode, 
rather than just the character set part of it).

 
> There are additional codepoints and a renamed property (which the
> standard library doesn't support anyway).

Which one are you referring to, Indic_Matra_Category renamed to 
Indic_Positional_Category?


-- 
Steve

From python at mrabarnett.plus.com  Fri Jun 19 02:55:07 2015
From: python at mrabarnett.plus.com (MRAB)
Date: Fri, 19 Jun 2015 01:55:07 +0100
Subject: [Python-Dev] Unicode 8.0 and 3.5
In-Reply-To: <20150618235644.GG20701@ando.pearwood.info>
References: <mlv2kk$fc6$1@ger.gmane.org> <55830EE9.30509@hastings.org>
 <55831D36.1070108@mrabarnett.plus.com>
 <20150618235644.GG20701@ando.pearwood.info>
Message-ID: <5583686B.9070008@mrabarnett.plus.com>

On 2015-06-19 00:56, Steven D'Aprano wrote:
> On Thu, Jun 18, 2015 at 08:34:14PM +0100, MRAB wrote:
>> On 2015-06-18 19:33, Larry Hastings wrote:
>> >On 06/18/2015 11:27 AM, Terry Reedy wrote:
>> >>Unicode 8.0 was just released.  Can we have unicodedata updated to
>> >>match in 3.5?
>> >>
>> >
>> >What does this entail?  Data changes, code changes, both?
>> >
>> It looks like just data changes.
>
> At the very least, there is a change to the casefolding algorithm.
> Cherokee was classified as unicameral but is now considered bicameral
> (two cases, like English). Unusually, case-folding Cherokee maps to
> uppercase rather than lowercase.
>
Doesn't the case-folding just depend on the data, while the algorithm
remains the same?

> The full set of changes is listed here:
>
> http://unicode.org/versions/Unicode8.0.0/
>
> Apart from the addition of 7716 characters and changes to
> str.casefold(), I don't think any of the changes will make a big
> difference to Python's implementation. But it would be good to support
> Unicode 8 (to the degree that Python actually does support Unicode,
> rather than just that character set part of it).
>
>
>> There are additional codepoints and a renamed property (which the
>> standard library doesn't support anyway).
>
> Which one are you referring to, Indic_Matra_Category renamed to
> Indic_Positional_Category?
>
Yes.


From larry at hastings.org  Fri Jun 19 05:30:41 2015
From: larry at hastings.org (Larry Hastings)
Date: Thu, 18 Jun 2015 20:30:41 -0700
Subject: [Python-Dev] Unicode 8.0 and 3.5
In-Reply-To: <mlv2kk$fc6$1@ger.gmane.org>
References: <mlv2kk$fc6$1@ger.gmane.org>
Message-ID: <55838CE1.8050304@hastings.org>

On 06/18/2015 11:27 AM, Terry Reedy wrote:
> Unicode 8.0 was just released.  Can we have unicodedata updated to 
> match in 3.5?


What do the Python unicodedata experts say?  According to the Dev Guide, 
that'd be "loewis", "lemburg", and "ezio.melotti".

//arry/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20150618/97ddd3b6/attachment.html>

From steve at pearwood.info  Fri Jun 19 05:33:12 2015
From: steve at pearwood.info (Steven D'Aprano)
Date: Fri, 19 Jun 2015 13:33:12 +1000
Subject: [Python-Dev] Unicode 8.0 and 3.5
In-Reply-To: <5583686B.9070008@mrabarnett.plus.com>
References: <mlv2kk$fc6$1@ger.gmane.org> <55830EE9.30509@hastings.org>
 <55831D36.1070108@mrabarnett.plus.com>
 <20150618235644.GG20701@ando.pearwood.info>
 <5583686B.9070008@mrabarnett.plus.com>
Message-ID: <20150619033312.GH20701@ando.pearwood.info>

On Fri, Jun 19, 2015 at 01:55:07AM +0100, MRAB wrote:
> On 2015-06-19 00:56, Steven D'Aprano wrote:

> >At the very least, there is a change to the casefolding algorithm.
> >Cherokee was classified as unicameral but is now considered bicameral
> >(two cases, like English). Unusually, case-folding Cherokee maps to
> >uppercase rather than lowercase.
> >
> Doesn't the case-folding just depend on the data and the algorithm
> remains the same?

That depends on what algorithm str.casefold uses :-)

Case folding is specifically mentioned as something that people 
migrating to Unicode 8 will need to take care with, and the migration 
notes also say:

"This mapping also has consequences on identifiers, as described in the 
changes to UAX #31, Unicode Identifier and Pattern Syntax."

http://unicode.org/versions/Unicode8.0.0/#Migration


-- 
Steve
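
For illustration, a minimal sketch of what str.casefold() does, run on a
modern Python whose Unicode data already includes the 8.0 Cherokee
changes (the code points below are taken from CaseFolding.txt):

```python
# str.casefold() applies Unicode full case folding, meant for caseless
# matching. German sharp s expands to "ss", which lower() does not do.
print("Straße".casefold())                          # strasse
print("Straße".casefold() == "STRASSE".casefold())  # True

# Cherokee (bicameral since Unicode 8.0) unusually folds to UPPERCASE:
# U+AB70 CHEROKEE SMALL LETTER A folds to U+13A0 CHEROKEE LETTER A.
print("\uab70".casefold() == "\u13a0")              # True
```

So the algorithm is fixed by the standard, but which pairs it maps is
entirely data-driven, which is why a data update changes behaviour.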

From storchaka at gmail.com  Fri Jun 19 06:56:57 2015
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Fri, 19 Jun 2015 07:56:57 +0300
Subject: [Python-Dev] Unicode 8.0 and 3.5
In-Reply-To: <55831D36.1070108@mrabarnett.plus.com>
References: <mlv2kk$fc6$1@ger.gmane.org> <55830EE9.30509@hastings.org>
 <55831D36.1070108@mrabarnett.plus.com>
Message-ID: <mm07ep$sgu$1@ger.gmane.org>

On 18.06.15 22:34, MRAB wrote:
> On 2015-06-18 19:33, Larry Hastings wrote:
>> On 06/18/2015 11:27 AM, Terry Reedy wrote:
>>> Unicode 8.0 was just released.  Can we have unicodedata updated to
>>> match in 3.5?
>>>
>>
>> What does this entail?  Data changes, code changes, both?
>>
> It looks like just data changes.
>
> There are additional codepoints and a renamed property (which the
> standard library doesn't support anyway).

Maybe the private table for case-insensitive matching in the re module 
should be updated too.



From wes.turner at gmail.com  Fri Jun 19 08:32:52 2015
From: wes.turner at gmail.com (Wes Turner)
Date: Fri, 19 Jun 2015 01:32:52 -0500
Subject: [Python-Dev] About Python Developer Testsuites
In-Reply-To: <CAEJYch_=qzbLTne8wz1N8FtiTJZ0djyOqeXD9jrDNAwyp0QrGw@mail.gmail.com>
References: <CAEJYch_=qzbLTne8wz1N8FtiTJZ0djyOqeXD9jrDNAwyp0QrGw@mail.gmail.com>
Message-ID: <CACfEFw8rVLd+CtbkvvsTHc9sPXmz9uwGpOgCFwGuHQuUKMwN_Q@mail.gmail.com>

On Thu, Jun 18, 2015 at 4:34 PM, Arsalan Javeed (Student) <
ajaveed at sabanciuniv.edu> wrote:

> Dear Concerned,
>
> I am using python v.2.7.1 for my research work in software testing. I have
> already downloaded the source codes and found the built-in testsuites for
> running on the build. However, i was wondering if by any chance are there
> any testsuites which exist that are primarily meant for* developers* of
> the source code which can be used before releasing a build, for instance
> smoke testsuite/regression testsuites, besides the one already distributed
> with source code.
>
> If such developer testsuite exist i would request to share them.
>
> Thanking in advance for the information.
>

| Source: https://hg.python.org/cpython/file/tip/Lib/test
| Docs: https://docs.python.org/devguide/coverage.html
| Docs: https://docs.python.org/devguide/runtests.html
| Docs https://docs.python.org/devguide/buildbots.html
| Source:  https://hg.python.org/hooks/file/tip/hgbuildbot.py

| Source: https://hg.python.org/benchmarks/file/tip/performance

| Docs: https://hg.python.org/devinabox/file/tip/README

speed.python.org
-------------------------
| Homepage: https://speed.python.org/

* https://westurner.org/wiki/awesome-python-testing#benchmarks


>
> Regards,
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20150619/e39aa36d/attachment.html>

From wes.turner at gmail.com  Fri Jun 19 08:36:23 2015
From: wes.turner at gmail.com (Wes Turner)
Date: Fri, 19 Jun 2015 01:36:23 -0500
Subject: [Python-Dev] About Python Developer Testsuites
In-Reply-To: <CACfEFw8rVLd+CtbkvvsTHc9sPXmz9uwGpOgCFwGuHQuUKMwN_Q@mail.gmail.com>
References: <CAEJYch_=qzbLTne8wz1N8FtiTJZ0djyOqeXD9jrDNAwyp0QrGw@mail.gmail.com>
 <CACfEFw8rVLd+CtbkvvsTHc9sPXmz9uwGpOgCFwGuHQuUKMwN_Q@mail.gmail.com>
Message-ID: <CACfEFw9zxXP06wTybJ4jR3944zCVDC2qa=O=eNLE49BOaX+qhw@mail.gmail.com>

...
[Python-Dev] Automated testing of patches from bugs.python.org
https://groups.google.com/forum/#!topic/dev-python/CQA4YAItD74

On Fri, Jun 19, 2015 at 1:32 AM, Wes Turner <wes.turner at gmail.com> wrote:

>
>
> On Thu, Jun 18, 2015 at 4:34 PM, Arsalan Javeed (Student) <
> ajaveed at sabanciuniv.edu> wrote:
>
>> Dear Concerned,
>>
>> I am using python v.2.7.1 for my research work in software testing. I
>> have already downloaded the source codes and found the built-in testsuites
>> for running on the build. However, i was wondering if by any chance are
>> there any testsuites which exist that are primarily meant for*
>> developers* of the source code which can be used before releasing a
>> build, for instance smoke testsuite/regression testsuites, besides the one
>> already distributed with source code.
>>
>> If such developer testsuite exist i would request to share them.
>>
>> Thanking in advance for the information.
>>
>
> | Source: https://hg.python.org/cpython/file/tip/Lib/test
> | Docs: https://docs.python.org/devguide/coverage.html
> | Docs: https://docs.python.org/devguide/runtests.html
> | Docs https://docs.python.org/devguide/buildbots.html
> | Source:  https://hg.python.org/hooks/file/tip/hgbuildbot.py
>
> | Source: https://hg.python.org/benchmarks/file/tip/performance
>
> | Docs: https://hg.python.org/devinabox/file/tip/README
>
> speed.python.org
> -------------------------
> | Homepage: https://speed.python.org/
>
> * https://westurner.org/wiki/awesome-python-testing#benchmarks
>
>
>>
>> Regards,
>>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20150619/ffffa2c5/attachment.html>

From lkb.teichmann at gmail.com  Fri Jun 19 14:56:51 2015
From: lkb.teichmann at gmail.com (Martin Teichmann)
Date: Fri, 19 Jun 2015 14:56:51 +0200
Subject: [Python-Dev] async __setitem__
Message-ID: <CAK9R32SZJw7CK16e9gP1-VP8SsfDKfFru+g5w5BFSc6CP32gsQ@mail.gmail.com>

Hi everyone,

I was really happy to see PEP 492 - that is really a big step forward.
It makes many programming constructs possible in combination with
asyncio, which have been cumbersome or impossible until now.

One thing that still is impossible is to have __setitem__ be used
asynchronously. As an example, I am working with a database that
I access over the network, and it is easy to write

    data = await table[key]

to get something out of the database. But getting something in, I have
to write something like

    await table.set(key, value)

It would be cool if I could just write

    await table[key] = value

which would, I think, fit in nicely with all of PEP 492.

That said, I was hesitating which keyword to use, await or async. PEP 492
says something like async is an adjective, await a command. Then for
setitem, technically, I would need an adverb, so "asyncly table[key] = value".
But I didn't want to introduce yet another keyword (two different ones for the
same thing are already confusing enough), so I arbitrarily chose await.

If we are really going for it (I don't see the use case yet, I just
mention it for
completeness), we could also be extend this to other points where variables
are set, e.g. "for await i in something():" which is a bit similar to
"async for i in something()" yet different.

Greetings

Martin
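
The workaround the post describes can be sketched as follows; AsyncTable
is a hypothetical stand-in for the networked database table, and note
that reads already work through __getitem__ while writes cannot:

```python
import asyncio

class AsyncTable:
    """Hypothetical async key-value store (stand-in for the networked DB)."""

    def __init__(self):
        self._data = {}

    def __getitem__(self, key):
        # __getitem__ may return an awaitable, so `await table[key]` works.
        return self._get(key)

    async def _get(self, key):
        await asyncio.sleep(0)  # simulate a network round-trip
        return self._data[key]

    async def set(self, key, value):
        # There is no async __setitem__ protocol, so storing needs an
        # explicit method: `await table[key] = value` is a SyntaxError.
        await asyncio.sleep(0)
        self._data[key] = value

async def demo():
    table = AsyncTable()
    await table.set("key", "value")   # the workaround from the post
    return await table["key"]         # reads already look natural

print(asyncio.run(demo()))            # value
```

The asymmetry is visible here: the getter side needs no new syntax
because __getitem__ can return a coroutine, but assignment is a
statement and has nowhere to put an await.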

From triccare at gmail.com  Fri Jun 19 17:21:53 2015
From: triccare at gmail.com (triccare triccare)
Date: Fri, 19 Jun 2015 11:21:53 -0400
Subject: [Python-Dev] PEP 0420: Extent of implementation? Specifically
	Python 2?
Message-ID: <CADtmbDAwrBqYGwuVciNFoxsLapsQf9EfuZmLC4rQUOMneN=2aA@mail.gmail.com>

Greetings,

Will this PEP be implemented in Python 2?

And, more generally, is there a way to know the extent of implementation of
any particular PEP?

Thank you for your time.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20150619/b422c2f1/attachment.html>

From eric at trueblade.com  Fri Jun 19 17:54:03 2015
From: eric at trueblade.com (Eric V. Smith)
Date: Fri, 19 Jun 2015 11:54:03 -0400
Subject: [Python-Dev] PEP 0420: Extent of implementation? Specifically
 Python 2?
In-Reply-To: <CADtmbDAwrBqYGwuVciNFoxsLapsQf9EfuZmLC4rQUOMneN=2aA@mail.gmail.com>
References: <CADtmbDAwrBqYGwuVciNFoxsLapsQf9EfuZmLC4rQUOMneN=2aA@mail.gmail.com>
Message-ID: <55843B1B.2050305@trueblade.com>

On 06/19/2015 11:21 AM, triccare triccare wrote:
> Greetings,
> 
> Will this PEP be implemented in Python 2?
> 
> And, more generally, is there a way to know the extent of implementation
> of any particular PEP?

PEP 420 (namespace packages) will not be implemented in Python 2.

I don't think there's any general way of knowing, short of reading the
documentation of the specific features. In this case, namespace packages
are documented in 3.x, but not 2.x.

Eric.
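
As a quick illustration of what PEP 420 provides on 3.3+ (the directory
and module names here are made up), a package can be split across two
sys.path entries with no __init__.py anywhere:

```python
import os
import sys
import tempfile

# Build two sys.path roots that each contribute a portion of package
# "ns"; neither directory contains an __init__.py.
root = tempfile.mkdtemp()
for part, module, body in [("a", "one.py", "VALUE = 1\n"),
                           ("b", "two.py", "VALUE = 2\n")]:
    pkg = os.path.join(root, part, "ns")
    os.makedirs(pkg)
    with open(os.path.join(pkg, module), "w") as f:
        f.write(body)

sys.path[:0] = [os.path.join(root, "a"), os.path.join(root, "b")]

# On Python 3.3+ both portions are merged into one namespace package.
import ns.one
import ns.two
print(ns.one.VALUE, ns.two.VALUE)   # 1 2
```

On Python 2 the same layout raises ImportError, since without
__init__.py neither directory is recognised as a package at all.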


From status at bugs.python.org  Fri Jun 19 18:08:20 2015
From: status at bugs.python.org (Python tracker)
Date: Fri, 19 Jun 2015 18:08:20 +0200 (CEST)
Subject: [Python-Dev] Summary of Python tracker Issues
Message-ID: <20150619160820.889545692A@psf.upfronthosting.co.za>


ACTIVITY SUMMARY (2015-06-12 - 2015-06-19)
Python tracker at http://bugs.python.org/

To view or respond to any of the issues listed below, click on the issue.
Do NOT respond to this message.

Issues counts and deltas:
  open    4892 ( +8)
  closed 31342 (+25)
  total  36234 (+33)

Open issues with patches: 2244 


Issues opened (21)
==================

#24444: In argparse empty choices cannot be printed in the help
http://bugs.python.org/issue24444  opened by py.user

#24447: tab indentation breaks in tokenize.untokenize
http://bugs.python.org/issue24447  opened by gumblex

#24448: Syntax highlighting marks multiline comments as strings
http://bugs.python.org/issue24448  opened by irdb

#24450: Add gi_yieldfrom calculated property to generator object
http://bugs.python.org/issue24450  opened by bennoleslie

#24451: Add metrics to future objects (concurrent or asyncio?)
http://bugs.python.org/issue24451  opened by Joshua.Harlow

#24452: Make webbrowser support Chrome on Mac OS X
http://bugs.python.org/issue24452  opened by nedbat

#24454: Improve the usability of the match object named group API
http://bugs.python.org/issue24454  opened by rhettinger

#24455: IDLE debugger causes crash if not quitted properly before next
http://bugs.python.org/issue24455  opened by irdb

#24456: audioop.adpcm2lin Buffer Over-read
http://bugs.python.org/issue24456  opened by JohnLeitch

#24457: audioop.lin2adpcm Buffer Over-read
http://bugs.python.org/issue24457  opened by JohnLeitch

#24458: Documentation for PEP 489
http://bugs.python.org/issue24458  opened by encukou

#24459: Mention PYTHONFAULTHANDLER in the man page
http://bugs.python.org/issue24459  opened by Antony.Lee

#24460: urlencode() of dictionary not as expected
http://bugs.python.org/issue24460  opened by drueter at assyst.com

#24462: bytearray.find Buffer Over-read
http://bugs.python.org/issue24462  opened by JohnLeitch

#24464: Got warning when compiling sqlite3 module on Mac OSX
http://bugs.python.org/issue24464  opened by vajrasky

#24465: Make tarfile have deterministic sorting
http://bugs.python.org/issue24465  opened by samthursfield

#24466: extend_path explanation in documentation is ambiguous
http://bugs.python.org/issue24466  opened by alanf

#24467: bytearray pop and remove Buffer Over-read
http://bugs.python.org/issue24467  opened by JohnLeitch

#24468: Expose compiler flag constants as code object attributes
http://bugs.python.org/issue24468  opened by ncoghlan

#24469: Py2.x int free list can grow without bounds
http://bugs.python.org/issue24469  opened by scoder

#24470: ctypes incorrect handling of long int on 64bits Linux
http://bugs.python.org/issue24470  opened by Marco Clemencic



Most recent 15 issues with no replies (15)
==========================================

#24467: bytearray pop and remove Buffer Over-read
http://bugs.python.org/issue24467

#24466: extend_path explanation in documentation is ambiguous
http://bugs.python.org/issue24466

#24464: Got warning when compiling sqlite3 module on Mac OSX
http://bugs.python.org/issue24464

#24458: Documentation for PEP 489
http://bugs.python.org/issue24458

#24457: audioop.lin2adpcm Buffer Over-read
http://bugs.python.org/issue24457

#24456: audioop.adpcm2lin Buffer Over-read
http://bugs.python.org/issue24456

#24455: IDLE debugger causes crash if not quitted properly before next
http://bugs.python.org/issue24455

#24447: tab indentation breaks in tokenize.untokenize
http://bugs.python.org/issue24447

#24424: xml.dom.minidom: performance issue with Node.insertBefore()
http://bugs.python.org/issue24424

#24417: Type-specific documentation for __format__ methods
http://bugs.python.org/issue24417

#24415: SIGINT always reset to SIG_DFL by Py_Finalize()
http://bugs.python.org/issue24415

#24414: MACOSX_DEPLOYMENT_TARGET set incorrectly by configure
http://bugs.python.org/issue24414

#24407: Use after free in PyDict_merge
http://bugs.python.org/issue24407

#24381: Got warning when compiling ffi.c on Mac
http://bugs.python.org/issue24381

#24370: OrderedDict behavior is unclear with misbehaving keys.
http://bugs.python.org/issue24370



Most recent 15 issues waiting for review (15)
=============================================

#24465: Make tarfile have deterministic sorting
http://bugs.python.org/issue24465

#24464: Got warning when compiling sqlite3 module on Mac OSX
http://bugs.python.org/issue24464

#24458: Documentation for PEP 489
http://bugs.python.org/issue24458

#24452: Make webbrowser support Chrome on Mac OS X
http://bugs.python.org/issue24452

#24451: Add metrics to future objects (concurrent or asyncio?)
http://bugs.python.org/issue24451

#24450: Add gi_yieldfrom calculated property to generator object
http://bugs.python.org/issue24450

#24440: Move the buildslave setup information from the wiki to the dev
http://bugs.python.org/issue24440

#24439: Feedback for awaitable coroutine documentation
http://bugs.python.org/issue24439

#24436: _PyTraceback_Add has no const qualifier for its char * argumen
http://bugs.python.org/issue24436

#24434: ItemsView.__contains__ does not mimic dict_items
http://bugs.python.org/issue24434

#24426: re.split performance degraded significantly by capturing group
http://bugs.python.org/issue24426

#24424: xml.dom.minidom: performance issue with Node.insertBefore()
http://bugs.python.org/issue24424

#24421: Race condition compiling Modules/_math.c
http://bugs.python.org/issue24421

#24420: Documentation regressions from adding subprocess.run()
http://bugs.python.org/issue24420

#24416: Return a namedtuple from date.isocalendar()
http://bugs.python.org/issue24416



Top 10 most discussed issues (10)
=================================

#24429: msvcrt error when embedded
http://bugs.python.org/issue24429  11 msgs

#24400: Awaitable ABC incompatible with functools.singledispatch
http://bugs.python.org/issue24400   7 msgs

#24419: In argparse action append_const doesn't work for positional ar
http://bugs.python.org/issue24419   7 msgs

#24450: Add gi_yieldfrom calculated property to generator object
http://bugs.python.org/issue24450   7 msgs

#24465: Make tarfile have deterministic sorting
http://bugs.python.org/issue24465   7 msgs

#24412: setUpClass equivalent for addCleanup
http://bugs.python.org/issue24412   6 msgs

#24454: Improve the usability of the match object named group API
http://bugs.python.org/issue24454   6 msgs

#13501: Make libedit support more generic; port readline / libedit to 
http://bugs.python.org/issue13501   4 msgs

#19235: Add a dedicated subclass for recursion errors
http://bugs.python.org/issue19235   4 msgs

#23255: SimpleHTTPRequestHandler refactor for more extensible usage.
http://bugs.python.org/issue23255   4 msgs



Issues closed (24)
==================

#5209: nntplib needs updating to RFC 3977
http://bugs.python.org/issue5209  closed by vadmium

#8232: webbrowser.open incomplete on Windows
http://bugs.python.org/issue8232  closed by steve.dower

#14780: urllib.request could use the default CA store
http://bugs.python.org/issue14780  closed by vadmium

#15745: Numerous utime ns tests fail on FreeBSD w/ ZFS (update: and Ne
http://bugs.python.org/issue15745  closed by haypo

#22086: Tab indent no longer works in interpreter
http://bugs.python.org/issue22086  closed by r.david.murray

#24175: Consistent test_utime() failures on FreeBSD
http://bugs.python.org/issue24175  closed by zach.ware

#24405: Missing code markup in "Expressions" documentation
http://bugs.python.org/issue24405  closed by berker.peksag

#24406: "Built-in Types" doc doesn't explain dict comparison.
http://bugs.python.org/issue24406  closed by terry.reedy

#24431: StreamWriter.drain is not callable concurrently
http://bugs.python.org/issue24431  closed by gvanrossum

#24435: Grammar/Grammar points to outdated guide
http://bugs.python.org/issue24435  closed by berker.peksag

#24437: Add information about the buildbot console view and irc notice
http://bugs.python.org/issue24437  closed by r.david.murray

#24438: Strange behaviour when multiplying imaginary inf by 1
http://bugs.python.org/issue24438  closed by mark.dickinson

#24441: In argparse add_argument() allows the empty choices argument
http://bugs.python.org/issue24441  closed by r.david.murray

#24442: Readline Bindings Don't Report Any Error On Completer Function
http://bugs.python.org/issue24442  closed by Perry Randall

#24443: Link for clear and wait missing in EventObjects
http://bugs.python.org/issue24443  closed by berker.peksag

#24445: rstrip strips what it doesn't have to
http://bugs.python.org/issue24445  closed by o1da

#24446: imap and map inconsistent behaviour
http://bugs.python.org/issue24446  closed by ned.deily

#24449: Please add async write method to asyncio.StreamWriter
http://bugs.python.org/issue24449  closed by gvanrossum

#24453: A typo of doubled "the" words in the doc (easy fix)
http://bugs.python.org/issue24453  closed by rhettinger

#24461: os.environ.get treats Environt variant with double quotation m
http://bugs.python.org/issue24461  closed by r.david.murray

#24463: Python 3.4 bugs
http://bugs.python.org/issue24463  closed by skip.montanaro

#24471: Spam
http://bugs.python.org/issue24471  closed by haypo

#24472: Spam
http://bugs.python.org/issue24472  closed by skrah

#24473: Spam
http://bugs.python.org/issue24473  closed by skrah

From tjreedy at udel.edu  Fri Jun 19 21:30:06 2015
From: tjreedy at udel.edu (Terry Reedy)
Date: Fri, 19 Jun 2015 15:30:06 -0400
Subject: [Python-Dev] PEP 0420: Extent of implementation? Specifically
	Python 2?
In-Reply-To: <CADtmbDAwrBqYGwuVciNFoxsLapsQf9EfuZmLC4rQUOMneN=2aA@mail.gmail.com>
References: <CADtmbDAwrBqYGwuVciNFoxsLapsQf9EfuZmLC4rQUOMneN=2aA@mail.gmail.com>
Message-ID: <mm1qle$1kg$1@ger.gmane.org>

On 6/19/2015 11:21 AM, triccare triccare wrote:

> Will this PEP be implemented in Python 2?

"Version: 3.3"

Enhancements are not backported.

> And, more generally, is there a way to know the extent of implementation
> of any particular PEP?

When a PEP is accepted, the version field should be updated to reflect 
the first version the enhancement appears in.

-- 
Terry Jan Reedy


From victor.stinner at gmail.com  Sat Jun 20 09:30:41 2015
From: victor.stinner at gmail.com (Victor Stinner)
Date: Sat, 20 Jun 2015 09:30:41 +0200
Subject: [Python-Dev] PEP 490: Chain exceptions at C level
Message-ID: <CAMpsgwZNx154bPCHZ2VWePOdT93FZwhqYJsfH6-5NSXaqYuaVQ@mail.gmail.com>

Hi,

I didn't get much feedback on this PEP. Since the Python 3.6 branch is
open (default), it's probably better to push such change in the
beginning of the 3.6 cycle, to catch issues earlier.

Are you ok to chain exceptions at C level by default?

Related issue: https://bugs.python.org/issue23763

Victor
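
For reference, the Python-level chaining that the PEP wants mirrored at
the C level can be observed directly through the exception attributes:

```python
# Implicit chaining (PEP 3134): raising inside an except block records
# the in-flight exception on the new exception's __context__ attribute.
def chained():
    try:
        try:
            raise TypeError("err1")
        except TypeError:
            raise ValueError("err2")
    except ValueError as exc:
        return exc

exc = chained()
print(type(exc.__context__).__name__)   # TypeError
print(exc.__cause__)                    # None (no explicit `raise ... from`)
```

An exception raised from C code today leaves __context__ unset, which is
exactly the inconsistency the PEP proposes to remove.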

PEP: 490
Title: Chain exceptions at C level
Version: $Revision$
Last-Modified: $Date$
Author: Victor Stinner <victor.stinner at gmail.com>
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 25-March-2015
Python-Version: 3.6


Abstract
========

Chain exceptions at C level, as already done at Python level.


Rationale
=========

Python 3 introduced a new killer feature: exceptions are chained by
default (PEP 3134).

Example::

    try:
        raise TypeError("err1")
    except TypeError:
        raise ValueError("err2")

Output::

    Traceback (most recent call last):
      File "test.py", line 2, in <module>
        raise TypeError("err1")
    TypeError: err1

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "test.py", line 4, in <module>
        raise ValueError("err2")
    ValueError: err2

Exceptions are chained by default in Python code, but not in
extensions written in C.

A new private ``_PyErr_ChainExceptions()`` function was introduced in
Python 3.4.3 and 3.5 to chain exceptions. Currently, it must be called
explicitly to chain exceptions and its usage is not trivial.

Example of ``_PyErr_ChainExceptions()`` usage from the ``zipimport``
module to chain the previous ``OSError`` to a new ``ZipImportError``
exception::

    PyObject *exc, *val, *tb;
    PyErr_Fetch(&exc, &val, &tb);
    PyErr_Format(ZipImportError, "can't open Zip file: %R", archive);
    _PyErr_ChainExceptions(exc, val, tb);

This PEP proposes to also chain exceptions automatically at C level to
stay consistent and give more information on failures to help
debugging. The previous example becomes simply::

    PyErr_Format(ZipImportError, "can't open Zip file: %R", archive);


Proposal
========

Modify PyErr_*() functions to chain exceptions
----------------------------------------------

Modify C functions raising exceptions of the Python C API to
automatically chain exceptions: modify ``PyErr_SetString()``,
``PyErr_Format()``, ``PyErr_SetNone()``, etc.


Modify functions to not chain exceptions
----------------------------------------

Keeping the previous exception is not always useful when the new
exception contains the information of the previous exception, or even
more information, especially when the two exceptions have the same type.

Example of a useless exception chain with ``int(str)``::

    TypeError: a bytes-like object is required, not 'type'

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: int() argument must be a string, a bytes-like object or
a number, not 'type'

The new ``TypeError`` exception contains more information than the
previous exception. The previous exception should be hidden.

The ``PyErr_Clear()`` function can be called to clear the current
exception before raising a new exception, to not chain the current
exception with a new exception.
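
At the Python level, the analogue of calling ``PyErr_Clear()`` before
raising a new exception is ``raise exc from None`` (PEP 415), which
suppresses the chained context when the traceback is displayed. A
minimal sketch::

```python
def parse(value):
    try:
        return int(value)
    except ValueError:
        # "from None" plays the role of PyErr_Clear() at C level:
        # the handled ValueError is not displayed in the traceback.
        raise TypeError("expected a number, got %r" % (value,)) from None

try:
    parse("abc")
except TypeError as exc:
    assert exc.__cause__ is None          # set by "from None"
    assert exc.__suppress_context__       # traceback printing skips __context__
```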


Modify functions to chain exceptions
------------------------------------

Some functions save and then restore the current exception. If a new
exception is raised, the exception is currently displayed on
``sys.stderr`` or ignored, depending on the function.  Some of these
functions should be modified to chain exceptions instead.

Examples of functions ignoring the new exception(s):

* ``ptrace_enter_call()``: ignore exception
* ``subprocess_fork_exec()``: ignore exception raised by enable_gc()
* ``t_bootstrap()`` of the ``_thread`` module: ignore exception raised
  by trying to display the bootstrap function to ``sys.stderr``
* ``PyDict_GetItem()``, ``_PyDict_GetItem_KnownHash()``: ignore
  exception raised by looking for a key in the dictionary
* ``_PyErr_TrySetFromCause()``: ignore exception
* ``PyFrame_LocalsToFast()``: ignore exception raised by
  ``dict_to_map()``
* ``_PyObject_Dump()``: ignore exception. ``_PyObject_Dump()`` is used
  to debug, to inspect a running process, it should not modify the
  Python state.
* ``Py_ReprLeave()``: ignore exception "because there is no way to
  report them"
* ``type_dealloc()``: ignore exception raised by
  ``remove_all_subclasses()``
* ``PyObject_ClearWeakRefs()``: ignore exception?
* ``call_exc_trace()``, ``call_trace_protected()``: ignore exception
* ``remove_importlib_frames()``: ignore exception
* ``do_mktuple()``, helper used by ``Py_BuildValue()`` for example:
  ignore exception?
* ``flush_io()``: ignore exception
* ``sys_write()``, ``sys_format()``: ignore exception
* ``_PyTraceback_Add()``: ignore exception
* ``PyTraceBack_Print()``: ignore exception

Examples of functions displaying the new exception to ``sys.stderr``:

* ``atexit_callfuncs()``: display exceptions with
  ``PyErr_Display()`` and return the latest exception, the function
  calls multiple callbacks and only returns the latest exception
* ``sock_dealloc()``: log the ``ResourceWarning`` exception with
  ``PyErr_WriteUnraisable()``
* ``slot_tp_del()``: display exception with
  ``PyErr_WriteUnraisable()``
* ``_PyGen_Finalize()``: display ``gen_close()`` exception with
  ``PyErr_WriteUnraisable()``
* ``slot_tp_finalize()``: display exception raised by the
  ``__del__()`` method with ``PyErr_WriteUnraisable()``
* ``PyErr_GivenExceptionMatches()``: display exception raised by
  ``PyType_IsSubtype()`` with ``PyErr_WriteUnraisable()``


Backward compatibility
======================

A side effect of chaining exceptions is that exceptions store
traceback objects which store frame objects which store local
variables.  Local variables are kept alive by exceptions. A common
issue is a reference cycle between local variables and exceptions: an
exception is stored in a local variable and the frame is indirectly
stored in the exception. The cycle only impacts applications that store
exceptions.

The reference cycle can now be fixed with the new
``traceback.TracebackException`` object introduced in Python 3.5. It
stores the information required to format a full textual traceback
without storing local variables.
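
A minimal sketch of that fix: ``traceback.TracebackException`` captures
a formatted snapshot of the exception, so the frames and their local
variables can be released::

```python
import traceback

def capture():
    try:
        1 / 0
    except ZeroDivisionError as exc:
        # The snapshot keeps only strings (file names, line numbers,
        # source lines), not frame objects, so no locals are kept alive.
        return traceback.TracebackException.from_exception(exc)

snapshot = capture()
text = "".join(snapshot.format())
assert "ZeroDivisionError" in text
assert "Traceback (most recent call last):" in text
```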

The ``asyncio`` module is impacted by the reference cycle issue. This
module is also maintained outside the Python standard library to release
a version for Python 3.3.  ``traceback.TracebackException`` may be
backported into a private ``asyncio`` module to fix reference cycle
issues.


Alternatives
============

No change
---------

A new private ``_PyErr_ChainExceptions()`` function is enough to chain
exceptions manually.

Exceptions will only be chained explicitly where it makes sense.


New helpers to chain exceptions
-------------------------------

Functions like ``PyErr_SetString()`` don't chain exceptions
automatically. To make the usage of ``_PyErr_ChainExceptions()``
easier, new private functions are added:

* ``_PyErr_SetStringChain(exc_type, message)``
* ``_PyErr_FormatChain(exc_type, format, ...)``
* ``_PyErr_SetNoneChain(exc_type)``
* ``_PyErr_SetObjectChain(exc_type, exc_value)``

Helper functions to raise specific exceptions like
``_PyErr_SetKeyError(key)`` or ``PyErr_SetImportError(message, name,
path)`` don't chain exceptions.  The generic
``_PyErr_ChainExceptions(exc_type, exc_value, exc_tb)`` should be used
to chain exceptions with these helper functions.


Appendix
========

PEPs
----

* `PEP 3134 -- Exception Chaining and Embedded Tracebacks
  <https://www.python.org/dev/peps/pep-3134/>`_ (Python 3.0):
  new ``__context__`` and ``__cause__`` attributes for exceptions
* `PEP 415 - Implement context suppression with exception attributes
  <https://www.python.org/dev/peps/pep-0415/>`_ (Python 3.3):
  ``raise exc from None``
* `PEP 409 - Suppressing exception context
  <https://www.python.org/dev/peps/pep-0409/>`_
  (superseded by PEP 415)
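
The attributes these PEPs introduce can be observed directly; for
example, an explicit ``raise ... from err`` sets ``__cause__``, while
the implicit ``__context__`` is recorded as well::

```python
def failing_copy():
    try:
        raise OSError("disk error")
    except OSError as err:
        # Explicit chaining (PEP 3134): "from err" sets __cause__
        raise RuntimeError("copy failed") from err

try:
    failing_copy()
except RuntimeError as exc:
    caught = exc

assert isinstance(caught.__cause__, OSError)    # explicit cause
assert caught.__context__ is caught.__cause__   # implicit context, same object
```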


Python C API
------------

The header file ``Include/pyerror.h`` declares functions related to
exceptions.

Functions raising exceptions:

* ``PyErr_SetNone(exc_type)``
* ``PyErr_SetObject(exc_type, exc_value)``
* ``PyErr_SetString(exc_type, message)``
* ``PyErr_Format(exc, format, ...)``

Helpers to raise specific exceptions:

* ``PyErr_BadArgument()``
* ``PyErr_BadInternalCall()``
* ``PyErr_NoMemory()``
* ``PyErr_SetFromErrno(exc)``
* ``PyErr_SetFromWindowsErr(err)``
* ``PyErr_SetImportError(message, name, path)``
* ``_PyErr_SetKeyError(key)``
* ``_PyErr_TrySetFromCause(prefix_format, ...)``

Manage the current exception:

* ``PyErr_Clear()``: clear the current exception,
  like ``except: pass``
* ``PyErr_Fetch(exc_type, exc_value, exc_tb)``
* ``PyErr_Restore(exc_type, exc_value, exc_tb)``
* ``PyErr_GetExcInfo(exc_type, exc_value, exc_tb)``
* ``PyErr_SetExcInfo(exc_type, exc_value, exc_tb)``

Other functions to handle exceptions:

* ``PyErr_ExceptionMatches(exc)``: check to implement
  ``except exc:  ...``
* ``PyErr_GivenExceptionMatches(exc1, exc2)``
* ``PyErr_NormalizeException(exc_type, exc_value, exc_tb)``
* ``_PyErr_ChainExceptions(exc_type, exc_value, exc_tb)``


Python Issues
-------------

Chain exceptions:

* `Issue #23763: Chain exceptions in C
  <http://bugs.python.org/issue23763>`_
* `Issue #23696: zipimport: chain ImportError to OSError
  <http://bugs.python.org/issue23696>`_
* `Issue #21715: Chaining exceptions at C level
  <http://bugs.python.org/issue21715>`_: added
  ``_PyErr_ChainExceptions()``
* `Issue #18488: sqlite: finalize() method of user function may be
  called with an exception set if a call to step() method failed
  <http://bugs.python.org/issue18488>`_
* `Issue #23781: Add private _PyErr_ReplaceException() in 2.7
  <http://bugs.python.org/issue23781>`_
* `Issue #23782: Leak in _PyTraceback_Add
  <http://bugs.python.org/issue23782>`_

Changes preventing exceptions from being lost:

* `Issue #23571: Raise SystemError if a function returns a result with an
  exception set <http://bugs.python.org/issue23571>`_
* `Issue #18408: Fixes crashes found by pyfailmalloc
  <http://bugs.python.org/issue18408>`_


Copyright
=========

This document has been placed in the public domain.



..
   Local Variables:
   mode: indented-text
   indent-tabs-mode: nil
   sentence-end-double-space: t
   fill-column: 70
   coding: utf-8
   End:

From levkivskyi at gmail.com  Sat Jun 20 08:51:47 2015
From: levkivskyi at gmail.com (Ivan Levkivskyi)
Date: Sat, 20 Jun 2015 08:51:47 +0200
Subject: [Python-Dev] Unbound locals in class scopes
Message-ID: <CAOMjWk=rYEAnsv+aKnApRHieqXRSCPWgMFy4Z8ozue6L6AU_VQ@mail.gmail.com>

Hello,

There appeared a question in the discussion on
http://bugs.python.org/issue24129 about documenting the behavior that
unbound local variables in a class definition do not follow the normal
rules.

Guido said 13 years ago that this behavior should not be changed:
https://mail.python.org/pipermail/python-dev/2002-April/023428.html,
however, things changed a bit in Python 3.4 with the introduction of the
LOAD_CLASSDEREF opcode. I just wanted to double-check whether it is still a
desired/expected behavior.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20150620/713992a6/attachment.html>

From stefan_ml at behnel.de  Sat Jun 20 11:00:47 2015
From: stefan_ml at behnel.de (Stefan Behnel)
Date: Sat, 20 Jun 2015 11:00:47 +0200
Subject: [Python-Dev] PEP 490: Chain exceptions at C level
In-Reply-To: <CAMpsgwZNx154bPCHZ2VWePOdT93FZwhqYJsfH6-5NSXaqYuaVQ@mail.gmail.com>
References: <CAMpsgwZNx154bPCHZ2VWePOdT93FZwhqYJsfH6-5NSXaqYuaVQ@mail.gmail.com>
Message-ID: <mm3a3v$r26$1@ger.gmane.org>

Victor Stinner schrieb am 20.06.2015 um 09:30:
> Are you ok to chain exceptions at C level by default?

I agree that it can be a bit non-obvious where exceptions are chained
and where they are not, and my guess is that most C code simply doesn't
take care of chaining exceptions at all, if only because it was ported
from Python 2 just to the point that it worked in Python 3. Almost no
test suite will include dedicated tests for chained exception handling
in C code, so flaws in this area would rarely be noticed.

I'm rather for this proposal (say, +0.3) for consistency reasons, but it
will break code that was written with the assumption that exception
chaining doesn't happen for the current exception automatically, and also
code that incidentally gets it right. I don't think I have an idea of
what's more common in C code - that exception chaining is wanted, or that
the current exception is intentionally meant to be replaced by another.

And that is the crux in your proposal: you're changing the default
behaviour into its opposite. In order to do that, it should be reasonably
likely that the current standard behaviour is not intended in more than
half of the cases. I find that difficult to estimate.

Stefan



From ncoghlan at gmail.com  Sat Jun 20 15:55:41 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sat, 20 Jun 2015 23:55:41 +1000
Subject: [Python-Dev] async __setitem__
In-Reply-To: <CAK9R32SZJw7CK16e9gP1-VP8SsfDKfFru+g5w5BFSc6CP32gsQ@mail.gmail.com>
References: <CAK9R32SZJw7CK16e9gP1-VP8SsfDKfFru+g5w5BFSc6CP32gsQ@mail.gmail.com>
Message-ID: <CADiSq7eDcnSPJ0mgJ1fsRHPxsRn6ik-hOjAV5RbRTUHg74BMgQ@mail.gmail.com>

On 19 June 2015 at 22:56, Martin Teichmann <lkb.teichmann at gmail.com> wrote:
> to get something out of the database. But getting something in, I have
> to write something like
>
>     await table.set(key, value)
>
> It would be cool if I could just write
>
>     await table[key] = value

You've introduced an ambiguity here, though, as you're applying
async/await to a statement without a leading keyword.

In "await expr" it's clear "expr" is expected to produce an Awaitable,
and the await expression waits for it.

In "async for" and "async with" it's clearly a modifier on the
respective keyword, and hence serves as a good mnemonic for switching
to the asynchronous variants of the relevant protocols.

But what does an "asynchronous assignment" do? Do we need __asetattr__
and __asetitem__ protocols, and only allow it when the target is a
subscript operation or an attribute? What if we're assigning to
multiple targets, do they run in parallel? How is tuple unpacking
handled? How is augmented assignment handled?

If we allow asynchronous assignment, do we allow asynchronous deletion as well?

As you start working through some of those possible implications of
offering an asynchronous assignment syntax, the explicit method based
"await table.set(key, value)" construct may not look so bad after all
:)

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From ncoghlan at gmail.com  Sat Jun 20 16:19:40 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 21 Jun 2015 00:19:40 +1000
Subject: [Python-Dev] PEP 490: Chain exceptions at C level
In-Reply-To: <mm3a3v$r26$1@ger.gmane.org>
References: <CAMpsgwZNx154bPCHZ2VWePOdT93FZwhqYJsfH6-5NSXaqYuaVQ@mail.gmail.com>
 <mm3a3v$r26$1@ger.gmane.org>
Message-ID: <CADiSq7ctSPU6fGJ0q-+Ke23yMWeONAxBBGsXRw8+s6dQENtv1Q@mail.gmail.com>

On 20 June 2015 at 19:00, Stefan Behnel <stefan_ml at behnel.de> wrote:
> And that is the crux in your proposal: you're changing the default
> behaviour into its opposite. In order to do that, it should be reasonably
> likely that the current standard behaviour is not intended in more than
> half of the cases. I find that difficult to estimate.

I think in this case the design decision can be made more on the basis
of how difficult it is to flip the behaviour to the non-default case:

Status quo: "Dear CPython core developers, why do you hate us so?
Signed, everyone that has ever needed to manually chain exceptions at
the C level using the current C API" (OK, it's not *that* bad as C
level coding goes, but it's still pretty arcane, requiring a lot of
knowledge of exception handling implementation details)

With Victor's PEP: "Call PyErr_Clear() before setting your exception
to disable the implicit exception chaining in 3.6+ in a way that
remains entirely compatible with older Python versions"

Given that significant difference in relative usability, +1 from me
for changing the default behaviour.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From ron3200 at gmail.com  Sat Jun 20 18:12:57 2015
From: ron3200 at gmail.com (Ron Adam)
Date: Sat, 20 Jun 2015 12:12:57 -0400
Subject: [Python-Dev] Unbound locals in class scopes
In-Reply-To: <CAOMjWk=rYEAnsv+aKnApRHieqXRSCPWgMFy4Z8ozue6L6AU_VQ@mail.gmail.com>
References: <CAOMjWk=rYEAnsv+aKnApRHieqXRSCPWgMFy4Z8ozue6L6AU_VQ@mail.gmail.com>
Message-ID: <mm43ea$jom$1@ger.gmane.org>



On 06/20/2015 02:51 AM, Ivan Levkivskyi wrote:
> Hello,
>
> There appeared a question in the discussion on
> http://bugs.python.org/issue24129 about documenting the behavior that
> unbound local variables in a class definition do not follow the normal rules.
>
> Guido said 13 years ago that this behavior should not be changed:
> https://mail.python.org/pipermail/python-dev/2002-April/023428.html,
> however, things changed a bit in Python 3.4 with the introduction of the
> LOAD_CLASSDEREF opcode. I just wanted to double-check whether it is still a
> desired/expected behavior.


Guido's comment still stands as far as how references inside methods 
work with regard to the class body. (They must use a self name to access 
the class namespace.) But the execution of the class body does use lexical 
scope, otherwise it would print xtop instead of xlocal here.

x = "xtop"
y = "ytop"
def func():
     x = "xlocal"
     y = "ylocal"
     class C:
         print(x)
         print(y)
         y = 1
func()

prints

xlocal
ytop


Maybe a better way to put this is, should the above be the same as this?

 >>> x = "xtop"
 >>> y = "ytop"
 >>> def func():
...     x = "xlocal"
...     y = "ylocal"
...     def _C():
...         print(x)
...         print(y)
...         y = 1
...         return locals()
...     C = type("C", (), _C())
...
 >>> func()
xlocal
Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
   File "<stdin>", line 9, in func
   File "<stdin>", line 6, in _C
UnboundLocalError: local variable 'y' referenced before assignment

I think yes, but I'm not sure how it may be different in other ways.


Cheers,
    Ron



From ron3200 at gmail.com  Sat Jun 20 18:39:16 2015
From: ron3200 at gmail.com (Ron Adam)
Date: Sat, 20 Jun 2015 12:39:16 -0400
Subject: [Python-Dev] Unbound locals in class scopes
In-Reply-To: <mm43ea$jom$1@ger.gmane.org>
References: <CAOMjWk=rYEAnsv+aKnApRHieqXRSCPWgMFy4Z8ozue6L6AU_VQ@mail.gmail.com>
 <mm43ea$jom$1@ger.gmane.org>
Message-ID: <mm44vl$b5v$1@ger.gmane.org>



On 06/20/2015 12:12 PM, Ron Adam wrote:
>
>
> On 06/20/2015 02:51 AM, Ivan Levkivskyi wrote:

>> Guido said 13 years ago that this behavior should not be changed:
>> https://mail.python.org/pipermail/python-dev/2002-April/023428.html,
>> however, things changed a bit in Python 3.4 with the introduction of the
>> LOAD_CLASSDEREF opcode. I just wanted to double-check whether it is still a
>> desired/expected behavior.

> Guido's comment still stands as far as references inside methods work in
> regards to the class body. (they must use a self name to access the class
> name space.) But the execution of the class body does use lexical scope,
> otherwise it would print xtop instead of xlocal here.

Minor corrections:

Methods can access but not write to the class scope without using self.  So 
that is also equivalent to the function version using type().  The methods 
capture the closure they were defined in, which is interesting.

And the self name refers to the object's namespace, not the class namespace.


From yselivanov.ml at gmail.com  Sat Jun 20 19:45:52 2015
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Sat, 20 Jun 2015 13:45:52 -0400
Subject: [Python-Dev] async __setitem__
In-Reply-To: <CAK9R32SZJw7CK16e9gP1-VP8SsfDKfFru+g5w5BFSc6CP32gsQ@mail.gmail.com>
References: <CAK9R32SZJw7CK16e9gP1-VP8SsfDKfFru+g5w5BFSc6CP32gsQ@mail.gmail.com>
Message-ID: <5585A6D0.3080005@gmail.com>

Hi Martin,

Actually, I think that it's better to adopt a bit different design:

    async with db.transaction():
        table[key1] = 'a'
        table[key2] = 'b'

And then, in __aexit__ you should flush the updates to the database
in bulk.  Usually, when working with an ORM, you need to update more
than one field anyways.  And if it's just one,
'await table.set(key, val)' looks more obvious than
'await table[key] = val'.

I'd also recommend you to prohibit __setitem__ and __setattr__ outside
of a transaction.
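
A rough sketch of that design follows; ``Table`` and ``Transaction``
here are illustrative stand-ins I made up for this example, not a real
ORM API:

```python
import asyncio

class Transaction:
    """Buffers writes and flushes them in bulk in __aexit__ (illustrative)."""
    def __init__(self, table):
        self.table = table
        self.pending = {}

    async def __aenter__(self):
        self.table._txn = self      # enable __setitem__ inside the block
        return self

    async def __aexit__(self, exc_type, exc, tb):
        self.table._txn = None
        if exc_type is None:
            await self.table._flush(self.pending)   # single bulk write

class Table:
    def __init__(self):
        self._data = {}
        self._txn = None

    def transaction(self):
        return Transaction(self)

    def __setitem__(self, key, value):
        if self._txn is None:
            # As recommended: prohibit writes outside a transaction
            raise RuntimeError("writes are only allowed inside a transaction")
        self._txn.pending[key] = value

    async def _flush(self, pending):
        self._data.update(pending)  # stand-in for one network round-trip

async def main():
    table = Table()
    async with table.transaction():
        table["key1"] = "a"
        table["key2"] = "b"
    return table

table = asyncio.run(main())
assert table._data == {"key1": "a", "key2": "b"}
```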

Yury

On 2015-06-19 8:56 AM, Martin Teichmann wrote:
> Hi everyone,
>
> I was really happy to see PEP 492 - that is really a big step forward.
> It makes many programming constructs possible in combination with
> asyncio, which have been cumbersome or impossible until now.
>
> One thing that still is impossible is to have __setitem__ be used
> asynchronously. As an example, I am working with a database that
> I access over the network, and it is easy to write
>
>      data = await table[key]
>
> to get something out of the database. But getting something in, I have
> to write something like
>
>      await table.set(key, value)
>
> It would be cool if I could just write
>
>      await table[key] = value
>
> which would, I think, fit in nicely with all of PEP 492.
>
> That said, I was hesitating which keyword to use, await or async. PEP 492
> says something like async is an adjective, await a command. Then for
> setitem, technically, I would need an adverb, so "asyncly table[key] = value".
> But I didn't want to introduce yet another keyword (two different ones for the
> same thing are already confusing enough), so I arbitrarily chose await.
>
> If we are really going for it (I don't see the use case yet, I just
> mention it for
> completeness), we could also be extend this to other points where variables
> are set, e.g. "for await i in something():" which is a bit similar to
> "async for i in something()" yet different.
>
> Greetings
>
> Martin
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/yselivanov.ml%40gmail.com


From mal at egenix.com  Sat Jun 20 21:16:48 2015
From: mal at egenix.com (M.-A. Lemburg)
Date: Sat, 20 Jun 2015 21:16:48 +0200
Subject: [Python-Dev] PEP 490: Chain exceptions at C level
In-Reply-To: <CAMpsgwZNx154bPCHZ2VWePOdT93FZwhqYJsfH6-5NSXaqYuaVQ@mail.gmail.com>
References: <CAMpsgwZNx154bPCHZ2VWePOdT93FZwhqYJsfH6-5NSXaqYuaVQ@mail.gmail.com>
Message-ID: <5585BC20.6090107@egenix.com>

On 20.06.2015 09:30, Victor Stinner wrote:
> Hi,
> 
> I didn't get much feedback on this PEP. Since the Python 3.6 branch is
> open (default), it's probably better to push such change in the
> beginning of the 3.6 cycle, to catch issues earlier.
> 
> Are you ok to chain exceptions at C level by default?

I think it's a good idea to make C APIs available to
simplify chaining exceptions at the C level, but don't
believe that always doing this by default is a good idea.
It should really be a case-by-case decision, IMO.

Note that Python exceptions are cheap to raise in C
(very much unlike in Python), so making this more
expensive by default would introduce a significant
overhead - without much proven benefit.

More below...

> PEP: 490
> Title: Chain exceptions at C level
> Version: $Revision$
> Last-Modified: $Date$
> Author: Victor Stinner <victor.stinner at gmail.com>
> Status: Draft
> Type: Standards Track
> Content-Type: text/x-rst
> Created: 25-March-2015
> Python-Version: 3.6
> 
> 
> Abstract
> ========
> 
> Chain exceptions at C level, as already done at Python level.
> 
> 
> Rationale
> =========
> 
> Python 3 introduced a new killer feature: exceptions are chained by
> default, PEP 3134.
> 
> Example::
> 
>     try:
>         raise TypeError("err1")
>     except TypeError:
>         raise ValueError("err2")
> 
> Output::
> 
>     Traceback (most recent call last):
>       File "test.py", line 2, in <module>
>         raise TypeError("err1")
>     TypeError: err1
> 
>     During handling of the above exception, another exception occurred:
> 
>     Traceback (most recent call last):
>       File "test.py", line 4, in <module>
>         raise ValueError("err2")
>     ValueError: err2
> 
> Exceptions are chained by default in Python code, but not in
> extensions written in C.
> 
> A new private ``_PyErr_ChainExceptions()`` function was introduced in
> Python 3.4.3 and 3.5 to chain exceptions. Currently, it must be called
> explicitly to chain exceptions and its usage is not trivial.
> 
> Example of ``_PyErr_ChainExceptions()`` usage from the ``zipimport``
> module to chain the previous ``OSError`` to a new ``ZipImportError``
> exception::
> 
>     PyObject *exc, *val, *tb;
>     PyErr_Fetch(&exc, &val, &tb);
>     PyErr_Format(ZipImportError, "can't open Zip file: %R", archive);
>     _PyErr_ChainExceptions(exc, val, tb);
> 
> This PEP proposes to also chain exceptions automatically at C level to
> stay consistent and give more information on failures to help
> debugging. The previous example becomes simply::
> 
>     PyErr_Format(ZipImportError, "can't open Zip file: %R", archive);
> 
> 
> Proposal
> ========
> 
> Modify PyErr_*() functions to chain exceptions
> ----------------------------------------------
> 
> Modify C functions raising exceptions of the Python C API to
> automatically chain exceptions: modify ``PyErr_SetString()``,
> ``PyErr_Format()``, ``PyErr_SetNone()``, etc.
> 
> 
> Modify functions to not chain exceptions
> ----------------------------------------
> 
> Keeping the previous exception is not always interesting when the new
> exception contains information of the previous exception or even more
> information, especially when the two exceptions have the same type.
> 
> Example of an useless exception chain with ``int(str)``::
> 
>     TypeError: a bytes-like object is required, not 'type'
> 
>     During handling of the above exception, another exception occurred:
> 
>     Traceback (most recent call last):
>       File "<stdin>", line 1, in <module>
>     TypeError: int() argument must be a string, a bytes-like object or
> a number, not 'type'
> 
> The new ``TypeError`` exception contains more information than the
> previous exception. The previous exception should be hidden.
> 
> The ``PyErr_Clear()`` function can be called to clear the current
> exception before raising a new exception, to not chain the current
> exception with a new exception.
> 
> 
> Modify functions to chain exceptions
> ------------------------------------
> 
> Some functions save and then restore the current exception. If a new
> exception is raised, the exception is currently displayed into
> sys.stderr or ignored depending on the function.  Some of these
> functions should be modified to chain exceptions instead.
> 
> Examples of function ignoring the new exception(s):
> 
> * ``ptrace_enter_call()``: ignore exception
> * ``subprocess_fork_exec()``: ignore exception raised by enable_gc()
> * ``t_bootstrap()`` of the ``_thread`` module: ignore exception raised
>   by trying to display the bootstrap function to ``sys.stderr``
> * ``PyDict_GetItem()``, ``_PyDict_GetItem_KnownHash()``: ignore
>   exception raised by looking for a key in the dictionary
> * ``_PyErr_TrySetFromCause()``: ignore exception
> * ``PyFrame_LocalsToFast()``: ignore exception raised by
>   ``dict_to_map()``
> * ``_PyObject_Dump()``: ignore exception. ``_PyObject_Dump()`` is used
>   to debug, to inspect a running process, it should not modify the
>   Python state.
> * ``Py_ReprLeave()``: ignore exception "because there is no way to
>   report them"
> * ``type_dealloc()``: ignore exception raised by
>   ``remove_all_subclasses()``
> * ``PyObject_ClearWeakRefs()``: ignore exception?
> * ``call_exc_trace()``, ``call_trace_protected()``: ignore exception
> * ``remove_importlib_frames()``: ignore exception
> * ``do_mktuple()``, helper used by ``Py_BuildValue()`` for example:
>   ignore exception?
> * ``flush_io()``: ignore exception
> * ``sys_write()``, ``sys_format()``: ignore exception
> * ``_PyTraceback_Add()``: ignore exception
> * ``PyTraceBack_Print()``: ignore exception

Which of these do you think would benefit from chaining exceptions?

> Examples of function displaying the new exception to ``sys.stderr``:
> 
> * ``atexit_callfuncs()``: display exceptions with
>   ``PyErr_Display()`` and return the latest exception, the function
>   calls multiple callbacks and only returns the latest exception
> * ``sock_dealloc()``: log the ``ResourceWarning`` exception with
>   ``PyErr_WriteUnraisable()``
> * ``slot_tp_del()``: display exception with
>   ``PyErr_WriteUnraisable()``
> * ``_PyGen_Finalize()``: display ``gen_close()`` exception with
>   ``PyErr_WriteUnraisable()``
> * ``slot_tp_finalize()``: display exception raised by the
>   ``__del__()`` method with ``PyErr_WriteUnraisable()``
> * ``PyErr_GivenExceptionMatches()``: display exception raised by
>   ``PyType_IsSubtype()`` with ``PyErr_WriteUnraisable()``

Same here.

In many cases, there's no way to report exceptions at all,
so chaining them makes things better :-)

> Backward compatibility
> ======================
> 
> A side effect of chaining exceptions is that exceptions store
> traceback objects which store frame objects which store local
> variables.  Local variables are kept alive by exceptions. A common
> issue is a reference cycle between local variables and exceptions: an
> exception is stored in a local variable and the frame indirectly
> stored in the exception. The cycle only impacts applications storing
> exceptions.

It's not only about reference cycles. Tracebacks can also keep
files, sockets and other external resources open as well as
prevent large memory areas from being freed.

> The reference cycle can now be fixed with the new
> ``traceback.TracebackException`` object introduced in Python 3.5. It
> stores informations required to format a full textual traceback without
> storing local variables.
> 
> The ``asyncio`` is impacted by the reference cycle issue. This module
> is also maintained outside Python standard library to release a
> version for Python 3.3.  ``traceback.TracebackException`` will maybe
> be backported in a private ``asyncio`` module to fix reference cycle
> issues.
> 
> 
> Alternatives
> ============
> 
> No change
> ---------
> 
> A new private ``_PyErr_ChainExceptions()`` function is enough to chain
> manually exceptions.
> 
> Exceptions will only be chained explicitly where it makes sense.
> 
> 
> New helpers to chain exceptions
> -------------------------------
> 
> Functions like ``PyErr_SetString()`` don't chain automatically
> exceptions. To make the usage of ``_PyErr_ChainExceptions()`` easier,
> new private functions are added:
> 
> * ``_PyErr_SetStringChain(exc_type, message)``
> * ``_PyErr_FormatChain(exc_type, format, ...)``
> * ``_PyErr_SetNoneChain(exc_type)``
> * ``_PyErr_SetObjectChain(exc_type, exc_value)``
> 
> Helper functions to raise specific exceptions like
> ``_PyErr_SetKeyError(key)`` or ``PyErr_SetImportError(message, name,
> path)`` don't chain exceptions.  The generic
> ``_PyErr_ChainExceptions(exc_type, exc_value, exc_tb)`` should be used
> to chain exceptions with these helper functions.
> 
> 
> Appendix
> ========
> 
> PEPs
> ----
> 
> * `PEP 3134 -- Exception Chaining and Embedded Tracebacks
>   <https://www.python.org/dev/peps/pep-3134/>`_ (Python 3.0):
>   new ``__context__`` and ``__cause__`` attributes for exceptions
> * `PEP 415 - Implement context suppression with exception attributes
>   <https://www.python.org/dev/peps/pep-0415/>`_ (Python 3.3):
>   ``raise exc from None``
> * `PEP 409 - Suppressing exception context
>   <https://www.python.org/dev/peps/pep-0409/>`_
>   (superseded by the PEP 415)
> 
> 
> Python C API
> ------------
> 
> The header file ``Include/pyerror.h`` declares functions related to
> exceptions.
> 
> Functions raising exceptions:
> 
> * ``PyErr_SetNone(exc_type)``
> * ``PyErr_SetObject(exc_type, exc_value)``
> * ``PyErr_SetString(exc_type, message)``
> * ``PyErr_Format(exc, format, ...)``
> 
> Helpers to raise specific exceptions:
> 
> * ``PyErr_BadArgument()``
> * ``PyErr_BadInternalCall()``
> * ``PyErr_NoMemory()``
> * ``PyErr_SetFromErrno(exc)``
> * ``PyErr_SetFromWindowsErr(err)``
> * ``PyErr_SetImportError(message, name, path)``
> * ``_PyErr_SetKeyError(key)``
> * ``_PyErr_TrySetFromCause(prefix_format, ...)``
> 
> Manage the current exception:
> 
> * ``PyErr_Clear()``: clear the current exception,
>   like ``except: pass``
> * ``PyErr_Fetch(exc_type, exc_value, exc_tb)``
> * ``PyErr_Restore(exc_type, exc_value, exc_tb)``
> * ``PyErr_GetExcInfo(exc_type, exc_value, exc_tb)``
> * ``PyErr_SetExcInfo(exc_type, exc_value, exc_tb)``
> 
> Other functions to handle exceptions:
> 
> * ``PyErr_ExceptionMatches(exc)``: the check used to implement
>   ``except exc: ...``
> * ``PyErr_GivenExceptionMatches(exc1, exc2)``
> * ``PyErr_NormalizeException(exc_type, exc_value, exc_tb)``
> * ``_PyErr_ChainExceptions(exc_type, exc_value, exc_tb)``
> 
> 
> Python Issues
> -------------
> 
> Chain exceptions:
> 
> * `Issue #23763: Chain exceptions in C
>   <http://bugs.python.org/issue23763>`_
> * `Issue #23696: zipimport: chain ImportError to OSError
>   <http://bugs.python.org/issue23696>`_
> * `Issue #21715: Chaining exceptions at C level
>   <http://bugs.python.org/issue21715>`_: added
>   ``_PyErr_ChainExceptions()``
> * `Issue #18488: sqlite: finalize() method of user function may be
>   called with an exception set if a call to step() method failed
>   <http://bugs.python.org/issue18488>`_
> * `Issue #23781: Add private _PyErr_ReplaceException() in 2.7
>   <http://bugs.python.org/issue23781>`_
> * `Issue #23782: Leak in _PyTraceback_Add
>   <http://bugs.python.org/issue23782>`_
> 
> Changes that prevent losing exceptions:
> 
> * `Issue #23571: Raise SystemError if a function returns a result with an
>   exception set <http://bugs.python.org/issue23571>`_
> * `Issue #18408: Fixes crashes found by pyfailmalloc
>   <http://bugs.python.org/issue18408>`_
> 
> 
> Copyright
> =========
> 
> This document has been placed in the public domain.
> 
> 
> 
> ..
>    Local Variables:
>    mode: indented-text
>    indent-tabs-mode: nil
>    sentence-end-double-space: t
>    fill-column: 70
>    coding: utf-8
>    End:
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/mal%40egenix.com
> 
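
[The chaining behaviour discussed in the quoted PEP mirrors Python-level
implicit exception chaining (PEP 3134). As a rough sketch -- not part of
the PEP, and ``reraise_as_import_error`` is an invented name -- this is
the ``__context__`` link that ``_PyErr_ChainExceptions()`` and the
proposed ``*Chain`` helpers would establish at the C level:]

```python
# Python-level sketch of what _PyErr_ChainExceptions() does at the C
# level: raising a new exception while another is being handled links
# the old one via the new exception's __context__ attribute (PEP 3134).
def reraise_as_import_error(path):
    try:
        open(path)
    except OSError:
        # Implicit chaining: the active OSError becomes __context__
        # of the new ImportError.
        raise ImportError("cannot load %r" % path)

try:
    reraise_as_import_error("/nonexistent/file")
except ImportError as exc:
    caught = exc

print(type(caught.__context__).__name__)  # FileNotFoundError
```

[In C, such a helper would fetch the currently raised exception, set the
new one, and store the old exception as the new one's context -- the
same thing an implicit ``raise`` inside ``except`` does in Python.]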

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 20 2015)
>>> Python Projects, Coaching and Consulting ...  http://www.egenix.com/
>>> mxODBC Plone/Zope Database Adapter ...       http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________
2015-06-16: Released eGenix pyOpenSSL 0.13.10 ... http://egenix.com/go78
2015-06-10: Released mxODBC Plone/Zope DA 2.2.2   http://egenix.com/go76
2015-07-20: EuroPython 2015, Bilbao, Spain ...             30 days to go

   eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
    D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
           Registered at Amtsgericht Duesseldorf: HRB 46611
               http://www.egenix.com/company/contact/

From yselivanov.ml at gmail.com  Sat Jun 20 21:29:47 2015
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Sat, 20 Jun 2015 15:29:47 -0400
Subject: [Python-Dev] PEP 490: Chain exceptions at C level
In-Reply-To: <5585BC20.6090107@egenix.com>
References: <CAMpsgwZNx154bPCHZ2VWePOdT93FZwhqYJsfH6-5NSXaqYuaVQ@mail.gmail.com>
 <5585BC20.6090107@egenix.com>
Message-ID: <5585BF2B.2040000@gmail.com>

On 2015-06-20 3:16 PM, M.-A. Lemburg wrote:
> On 20.06.2015 09:30, Victor Stinner wrote:
> > > Hi,
> > >
> > > I didn't get much feedback on this PEP. Since the Python 3.6 branch is
> > > open (default), it's probably better to push such a change at the
> > > beginning of the 3.6 cycle, to catch issues earlier.
> > >
> > > Are you ok to chain exceptions at C level by default?
> I think it's a good idea to make C APIs available to
> simplify chaining exceptions at the C level, but don't
> believe that always doing this by default is a good idea.
> It should really be a case-by-case decision, IMO.

Exactly my thoughts.

Maybe PyErr_SetStringChained, PyErr_FormatChained, etc
is better?

Yury

From greg.ewing at canterbury.ac.nz  Sun Jun 21 00:48:19 2015
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Sun, 21 Jun 2015 10:48:19 +1200
Subject: [Python-Dev] async __setitem__
In-Reply-To: <CADiSq7eDcnSPJ0mgJ1fsRHPxsRn6ik-hOjAV5RbRTUHg74BMgQ@mail.gmail.com>
References: <CAK9R32SZJw7CK16e9gP1-VP8SsfDKfFru+g5w5BFSc6CP32gsQ@mail.gmail.com>
 <CADiSq7eDcnSPJ0mgJ1fsRHPxsRn6ik-hOjAV5RbRTUHg74BMgQ@mail.gmail.com>
Message-ID: <5585EDB3.5000307@canterbury.ac.nz>

Nick Coghlan wrote:
> What if we're assigning to
> multiple targets, do they run in parallel? How is tuple unpacking
> handled? How is augmented assignment handled?
> 
> If we allow asynchronous assignment, do we allow asynchronous deletion as well?

Yeah, we'd kind of be letting the camel's nose in
here. There's no obvious place to stop until *every*
dunder method has an async counterpart.

-- 
Greg

From guido at python.org  Sun Jun 21 11:08:00 2015
From: guido at python.org (Guido van Rossum)
Date: Sun, 21 Jun 2015 11:08:00 +0200
Subject: [Python-Dev] async __setitem__
In-Reply-To: <5585EDB3.5000307@canterbury.ac.nz>
References: <CAK9R32SZJw7CK16e9gP1-VP8SsfDKfFru+g5w5BFSc6CP32gsQ@mail.gmail.com>
 <CADiSq7eDcnSPJ0mgJ1fsRHPxsRn6ik-hOjAV5RbRTUHg74BMgQ@mail.gmail.com>
 <5585EDB3.5000307@canterbury.ac.nz>
Message-ID: <CAP7+vJLTuqvJNeVCR109wrmFne0gSF1e95P8CLm_H2Yc3qTUGw@mail.gmail.com>

On Sun, Jun 21, 2015 at 12:48 AM, Greg Ewing <greg.ewing at canterbury.ac.nz>
wrote:

> Nick Coghlan wrote:
>
>> What if we're assigning to
>> multiple targets, do they run in parallel? How is tuple unpacking
>> handled? How is augmented assignment handled?
>>
>> If we allow asynchronous assignment, do we allow asynchronous deletion as
>> well?
>>
>
> Yeah, we'd kind of be letting the camel's nose in
> here. There's no obvious place to stop until *every*
> dunder method has an async counterpart.


Exactly. We should not go here at all.

-- 
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20150621/32259fbb/attachment.html>

From solipsis at pitrou.net  Sun Jun 21 11:40:00 2015
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sun, 21 Jun 2015 11:40:00 +0200
Subject: [Python-Dev] PEP 490: Chain exceptions at C level
References: <CAMpsgwZNx154bPCHZ2VWePOdT93FZwhqYJsfH6-5NSXaqYuaVQ@mail.gmail.com>
 <5585BC20.6090107@egenix.com>
Message-ID: <20150621114000.1f41dec5@fsol>

On Sat, 20 Jun 2015 21:16:48 +0200
"M.-A. Lemburg" <mal at egenix.com> wrote:
> On 20.06.2015 09:30, Victor Stinner wrote:
> > Hi,
> > 
> > I didn't get much feedback on this PEP. Since the Python 3.6 branch is
> > open (default), it's probably better to push such a change at the
> > beginning of the 3.6 cycle, to catch issues earlier.
> > 
> > Are you ok to chain exceptions at C level by default?
> 
> I think it's a good idea to make C APIs available to
> simplify chaining exceptions at the C level, but don't
> believe that always doing this by default is a good idea.
> It should really be a case-by-case decision, IMO.
> 
> Note that Python exceptions are cheap to raise in C
> (very much unlike in Python), so making this more
> expensive by default would introduce a significant
> overhead - without much proven benefit.

I agree with Marc-André.

Regards

Antoine.



From levkivskyi at gmail.com  Sun Jun 21 18:22:59 2015
From: levkivskyi at gmail.com (Ivan Levkivskyi)
Date: Sun, 21 Jun 2015 18:22:59 +0200
Subject: [Python-Dev] Unbound locals in class scopes
In-Reply-To: <mm44vl$b5v$1@ger.gmane.org>
References: <CAOMjWk=rYEAnsv+aKnApRHieqXRSCPWgMFy4Z8ozue6L6AU_VQ@mail.gmail.com>
 <mm43ea$jom$1@ger.gmane.org> <mm44vl$b5v$1@ger.gmane.org>
Message-ID: <CAOMjWk=-0mo8ZT3AXWnAPgCqwWp6jA6jwqQ1KF83nc8AaD9U6w@mail.gmail.com>

On 20 June 2015 at 18:39, Ron Adam <ron3200 at gmail.com> wrote:

>
>
> On 06/20/2015 12:12 PM, Ron Adam wrote:
>
>>
>>
>> On 06/20/2015 02:51 AM, Ivan Levkivskyi wrote:
>>
>
>  Guido said 13 years ago that this behavior should not be changed:
>>> https://mail.python.org/pipermail/python-dev/2002-April/023428.html,
>>> however, things changed a bit in Python 3.4 with the introduction of the
>>> LOAD_CLASSDEREF opcode. I just wanted to double-check whether it is
>>> still a
>>> desired/expected behavior.
>>>
>>
>  Guido's comment still stands as far as references inside methods work in
>> regards to the class body. (they must use a self name to access the class
>> name space.) But the execution of the class body does use lexical scope,
>> otherwise it would print xtop instead of xlocal here.
>>
>
> Minor corrections:
>
> Methods can access but not write to the class scope without using self.
> So that is also equivalent to the function version using type().  The
> methods capture the closure they were defined in, which is interesting.
>
> And the self name refers to the object's namespace, not the class
> namespace.


It is still not clear whether Guido's comment still stands for not raising
an UnboundLocalError in class definitions but using globals instead.
This is the only questionable point for me currently.



From guido at python.org  Sun Jun 21 22:05:58 2015
From: guido at python.org (Guido van Rossum)
Date: Sun, 21 Jun 2015 22:05:58 +0200
Subject: [Python-Dev] Unbound locals in class scopes
In-Reply-To: <CAOMjWk=-0mo8ZT3AXWnAPgCqwWp6jA6jwqQ1KF83nc8AaD9U6w@mail.gmail.com>
References: <CAOMjWk=rYEAnsv+aKnApRHieqXRSCPWgMFy4Z8ozue6L6AU_VQ@mail.gmail.com>
 <mm43ea$jom$1@ger.gmane.org> <mm44vl$b5v$1@ger.gmane.org>
 <CAOMjWk=-0mo8ZT3AXWnAPgCqwWp6jA6jwqQ1KF83nc8AaD9U6w@mail.gmail.com>
Message-ID: <CAP7+vJLsD8fd1f1o4xqz-GYRajVx28rxKvCXUK4Bj8=kZNyyyA@mail.gmail.com>

On Sun, Jun 21, 2015 at 6:22 PM, Ivan Levkivskyi <levkivskyi at gmail.com>
wrote:

> It is still not clear whether Guido's comment still stands for not raising
> an UnboundLocalError in class definitions but using globals instead.
>

Can you phrase this in the form of an example, showing what it currently
does and what you think it should do, instead?


-- 
--Guido van Rossum (python.org/~guido)

From levkivskyi at gmail.com  Mon Jun 22 00:46:07 2015
From: levkivskyi at gmail.com (Ivan Levkivskyi)
Date: Mon, 22 Jun 2015 00:46:07 +0200
Subject: [Python-Dev] Unbound locals in class scopes
In-Reply-To: <CAP7+vJLsD8fd1f1o4xqz-GYRajVx28rxKvCXUK4Bj8=kZNyyyA@mail.gmail.com>
References: <CAOMjWk=rYEAnsv+aKnApRHieqXRSCPWgMFy4Z8ozue6L6AU_VQ@mail.gmail.com>
 <mm43ea$jom$1@ger.gmane.org> <mm44vl$b5v$1@ger.gmane.org>
 <CAOMjWk=-0mo8ZT3AXWnAPgCqwWp6jA6jwqQ1KF83nc8AaD9U6w@mail.gmail.com>
 <CAP7+vJLsD8fd1f1o4xqz-GYRajVx28rxKvCXUK4Bj8=kZNyyyA@mail.gmail.com>
Message-ID: <CAOMjWkm8yURSatm_bEyqi=NRYDhDy7EtcqMNm-znDuuAT2PcBw@mail.gmail.com>

On 21 June 2015 at 22:05, Guido van Rossum <guido at python.org> wrote:
>
> On Sun, Jun 21, 2015 at 6:22 PM, Ivan Levkivskyi <levkivskyi at gmail.com>
wrote:
>>
>> It is still not clear whether Guido's comment still stands for not
raising an UnboundLocalError in class definitions but using globals instead.
>
>
> Can you phrase this in the form of an example, showing what it currently
does and what you think it should do, instead?
>

Here is an example:

x = "xtop"
y = "ytop"
def func():
    x = "xlocal"
    y = "ylocal"
    class C:
        print(x)
        print(y)
        y = 1
func()

prints

xlocal
ytop

Intuitively, one might think that it should raise UnboundLocalError or
print ylocal instead of ytop.
This question was discussed 13 years ago and then you said that this lookup
in globals
is an intentional behavior.

This behavior is not documented, and I started an issue on bug tracker
about documenting it.
Then, Eric proposed to ask you again, whether this is still an intentional
behavior.

From ncoghlan at gmail.com  Mon Jun 22 03:03:34 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 22 Jun 2015 11:03:34 +1000
Subject: [Python-Dev] Unbound locals in class scopes
In-Reply-To: <CAOMjWkm8yURSatm_bEyqi=NRYDhDy7EtcqMNm-znDuuAT2PcBw@mail.gmail.com>
References: <CAOMjWk=rYEAnsv+aKnApRHieqXRSCPWgMFy4Z8ozue6L6AU_VQ@mail.gmail.com>
 <mm43ea$jom$1@ger.gmane.org> <mm44vl$b5v$1@ger.gmane.org>
 <CAOMjWk=-0mo8ZT3AXWnAPgCqwWp6jA6jwqQ1KF83nc8AaD9U6w@mail.gmail.com>
 <CAP7+vJLsD8fd1f1o4xqz-GYRajVx28rxKvCXUK4Bj8=kZNyyyA@mail.gmail.com>
 <CAOMjWkm8yURSatm_bEyqi=NRYDhDy7EtcqMNm-znDuuAT2PcBw@mail.gmail.com>
Message-ID: <CADiSq7e6308Esr--5xPD3h3S9+zGSj+sSpsX09e8-Y1Pr8=Deg@mail.gmail.com>

On 22 June 2015 at 08:46, Ivan Levkivskyi <levkivskyi at gmail.com> wrote:
>
>
> On 21 June 2015 at 22:05, Guido van Rossum <guido at python.org> wrote:
>>
>> On Sun, Jun 21, 2015 at 6:22 PM, Ivan Levkivskyi <levkivskyi at gmail.com>
>> wrote:
>>>
>>> It is still not clear whether Guido's comment still stands for not
>>> raising an UnboundLocalError in class definitions but using globals instead.
>>
>>
>> Can you phrase this in the form of an example, showing what it currently
>> does and what you think it should do, instead?
>>
>
> Here is an example:
>
> x = "xtop"
> y = "ytop"
> def func():
>     x = "xlocal"
>     y = "ylocal"
>     class C:
>         print(x)
>         print(y)
>         y = 1
> func()
>
> prints
>
> xlocal
> ytop
>
> Intuitively, one might think that it should raise UnboundLocalError or print
> ylocal instead of ytop.
> This question was discussed 13 years ago and then you said that this lookup
> in globals
> is an intentional behavior.
>
> This behavior is not documented, and I started an issue on bug tracker
> about documenting it.
> Then, Eric proposed to ask you again, whether this is still an intentional
> behavior.

Diving into some bytecode details:

We added LOAD_CLASSDEREF
(https://docs.python.org/3/library/dis.html#opcode-LOAD_CLASSDEREF) a
while back to account for the fact that __prepare__ may inject locals
into a class body that the compiler doesn't know about. Unlike
LOAD_DEREF, LOAD_CLASSDEREF checks the locals before it checks the
closure cell.

However, neither LOAD_CLASSDEREF *nor* LOAD_DEREF is used for names
that are *assigned* in the class body - those get flagged as local
variables, so we end up emitting LOAD_NAME for them, and hence ignore
any nonlocal variables with that name. If a module global or builtin
exists, we'll find that, otherwise we'll throw NameError.

With nested_scopes having been the default since 2.2, and accounting
for the fact that folks *do* write code like "name = name" at class
scope and expect it to "do the right thing", it seems reasonable to me
to just make this work properly, rather than having to explain why it
doesn't work as one might expect based on the behaviour of module
level class definitions and other references to nonlocal variables
from a class body.
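
[The "name = name" case mentioned above can be shown directly; this is a
small illustrative sketch with invented names, not code from the
thread:]

```python
# Because `name` is assigned in the class body, the compiler treats it
# as a class-local and compiles the read as LOAD_NAME, which falls back
# to the module globals -- the enclosing function's local is skipped.
name = "module global"

def make_class():
    name = "function local"  # never seen by the class body below
    class C:
        name = name  # reads the *global*, not "function local"
    return C

print(make_class().name)  # module global
```

[Under the change suggested here, the read would presumably resolve to
the enclosing function's cell instead, printing "function local".]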

Regards,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From guido at python.org  Mon Jun 22 09:13:58 2015
From: guido at python.org (Guido van Rossum)
Date: Mon, 22 Jun 2015 09:13:58 +0200
Subject: [Python-Dev] Unbound locals in class scopes
In-Reply-To: <CADiSq7e6308Esr--5xPD3h3S9+zGSj+sSpsX09e8-Y1Pr8=Deg@mail.gmail.com>
References: <CAOMjWk=rYEAnsv+aKnApRHieqXRSCPWgMFy4Z8ozue6L6AU_VQ@mail.gmail.com>
 <mm43ea$jom$1@ger.gmane.org> <mm44vl$b5v$1@ger.gmane.org>
 <CAOMjWk=-0mo8ZT3AXWnAPgCqwWp6jA6jwqQ1KF83nc8AaD9U6w@mail.gmail.com>
 <CAP7+vJLsD8fd1f1o4xqz-GYRajVx28rxKvCXUK4Bj8=kZNyyyA@mail.gmail.com>
 <CAOMjWkm8yURSatm_bEyqi=NRYDhDy7EtcqMNm-znDuuAT2PcBw@mail.gmail.com>
 <CADiSq7e6308Esr--5xPD3h3S9+zGSj+sSpsX09e8-Y1Pr8=Deg@mail.gmail.com>
Message-ID: <CAP7+vJLzH345E4az+Wpra8jXExKbmOa_e5oOy_7KB76P3ELLrA@mail.gmail.com>

On Mon, Jun 22, 2015 at 3:03 AM, Nick Coghlan <ncoghlan at gmail.com> wrote:

> On 22 June 2015 at 08:46, Ivan Levkivskyi <levkivskyi at gmail.com> wrote:
> >
> >
> > On 21 June 2015 at 22:05, Guido van Rossum <guido at python.org> wrote:
> >>
> >> On Sun, Jun 21, 2015 at 6:22 PM, Ivan Levkivskyi <levkivskyi at gmail.com>
> >> wrote:
> >>>
> >>> It is still not clear whether Guido's comment still stands for not
> >>> raising an UnboundLocalError in class definitions but using globals
> instead.
> >>
> >>
> >> Can you phrase this in the form of an example, showing what it currently
> >> does and what you think it should do, instead?
> >>
> >
> > Here is an example:
> >
> > x = "xtop"
> > y = "ytop"
> > def func():
> >     x = "xlocal"
> >     y = "ylocal"
> >     class C:
> >         print(x)
> >         print(y)
> >         y = 1
> > func()
> >
> > prints
> >
> > xlocal
> > ytop
> >
> > Intuitively, one might think that it should raise UnboundLocalError or
> print
> > ylocal instead of ytop.
> > This question was discussed 13 years ago and then you said that this
> lookup
> > in globals
> > is an intentional behavior.
> >
> > This behavior is not documented, and I started an issue on bug tracker
> > about documenting it.
> > Then, Eric proposed to ask you again, whether this is still an
> intentional
> > behavior.
>
> Diving into some bytecode details:
>

In particular, the bytecode for C is:

>>> dis.dis(func.__code__.co_consts[3])
  6           0 LOAD_NAME                0 (__name__)
              3 STORE_NAME               1 (__module__)
              6 LOAD_CONST               0 ('func.<locals>.C')
              9 STORE_NAME               2 (__qualname__)

  7          12 LOAD_NAME                3 (print)
             15 LOAD_CLASSDEREF          0 (x)
             18 CALL_FUNCTION            1 (1 positional, 0 keyword pair)
             21 POP_TOP

  8          22 LOAD_NAME                3 (print)
             25 LOAD_NAME                4 (y)
             28 CALL_FUNCTION            1 (1 positional, 0 keyword pair)
             31 POP_TOP

  9          32 LOAD_CONST               1 (1)
             35 STORE_NAME               4 (y)
             38 LOAD_CONST               2 (None)
             41 RETURN_VALUE
>>>


> We added LOAD_CLASSDEREF
> (https://docs.python.org/3/library/dis.html#opcode-LOAD_CLASSDEREF) a
> while back to account for the fact that __prepare__ may inject locals
> into a class body that the compiler doesn't know about. Unlike
> LOAD_DEREF, LOAD_CLASSDEREF checks the locals before it checks the
> closure cell.
>
> However, neither LOAD_CLASSDEREF *nor* LOAD_DEREF is used for names
> that are *assigned* in the class body - those get flagged as local
> variables, so we end up emitting LOAD_NAME for them, and hence ignore
> any nonlocal variables with that name. If a module global or builtin
> exists, we'll find that, otherwise we'll throw NameError.
>
> With nested_scopes having been the default since 2.2, and accounting
> for the fact that folks *do* write code like "name = name" at class
> scope and expect it to "do the right thing", it seems reasonable to me
> to just make this work properly, rather than having to explain why it
> doesn't work as one might expect based on the behaviour of module
> level class definitions and other references to nonlocal variables
> from a class body.
>

But what *is* the correct behavior? I suspect people's intuitions differ.
If you think of this as similar to function scopes you're likely to be
wrong.

Also, since classes inside functions are commonly used in unit tests (at
least mine :-), I worry that *any* changes in this behavior might break
working code (no matter how much that "working" could be considered an
accident, it's still going to be a bear to debug if this happens to you).

-- 
--Guido van Rossum (python.org/~guido)

From jimjjewett at gmail.com  Mon Jun 22 16:17:48 2015
From: jimjjewett at gmail.com (Jim J. Jewett)
Date: Mon, 22 Jun 2015 07:17:48 -0700 (PDT)
Subject: [Python-Dev] Unicode 8.0 and 3.5
In-Reply-To: <55830EE9.30509@hastings.org>
Message-ID: <5588190c.4814370a.fdbd.ffffa844@mx.google.com>



On Thu Jun 18 20:33:13 CEST 2015, Larry Hastings asked:

> On 06/18/2015 11:27 AM, Terry Reedy wrote:
>> Unicode 8.0 was just released.  Can we have unicodedata updated to 
>> match in 3.5?

> What does this entail?  Data changes, code changes, both?

Note that the Unicode 7 changes also need to be considered, because
Python 3.4 used Unicode 6.3.

There are some changes to the recommendations on what to use in
identifiers.  Python doesn't follow the previous rules precisely,
but it would be good to ensure that any newly allowed characters
are intentional -- particularly for the newly defined characters.
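
[For reference, the bundled Unicode database version and the identifier
rules derived from it can be inspected at runtime -- a quick
illustration, not from the original mail:]

```python
import unicodedata

# Version of the Unicode Character Database this interpreter ships.
print(unicodedata.unidata_version)  # e.g. "8.0.0" on Python 3.5

# str.isidentifier() reflects the identifier rules derived from the UCD.
print("\u2115".isidentifier())  # True  (DOUBLE-STRUCK CAPITAL N, a letter)
print("\u20ac".isidentifier())  # False (EURO SIGN, a currency symbol)
```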

My gut feel is that it would have been fine during beta, but for 
the 3rd RC I am not so sure.

-jJ

--

If there are still threading problems with my replies, please
email me with details, so that I can try to resolve them.  -jJ

From ethan at stoneleaf.us  Mon Jun 22 19:00:18 2015
From: ethan at stoneleaf.us (Ethan Furman)
Date: Mon, 22 Jun 2015 10:00:18 -0700
Subject: [Python-Dev] PEP 0420: Extent of implementation? Specifically
 Python 2?
In-Reply-To: <CADtmbDAwrBqYGwuVciNFoxsLapsQf9EfuZmLC4rQUOMneN=2aA@mail.gmail.com>
References: <CADtmbDAwrBqYGwuVciNFoxsLapsQf9EfuZmLC4rQUOMneN=2aA@mail.gmail.com>
Message-ID: <55883F22.9080006@stoneleaf.us>

On 06/19/2015 08:21 AM, triccare triccare wrote:

> And, more generally, is there a way to know the extent of implementation of any particular PEP?

By default, no new features are going into the 2.7 line.  If something does, it will be mentioned explicitly in the PEP.  See PEP 466 [1] and 476 [2] as examples.

--
~Ethan~


[1]  https://www.python.org/dev/peps/pep-0466/
[2]  https://www.python.org/dev/peps/pep-0476/

From zachary.ware+pydev at gmail.com  Mon Jun 22 19:03:04 2015
From: zachary.ware+pydev at gmail.com (Zachary Ware)
Date: Mon, 22 Jun 2015 12:03:04 -0500
Subject: [Python-Dev] Backporting the 3.5+ Windows build project files to 2.7
Message-ID: <CAKJDb-PtG=MLH=kq=sJ38fs+BiF7jeSbN3PqHLUVjcKAnQpn8Q@mail.gmail.com>

Hi,

As you may know, Steve Dower put significant effort into rewriting the
project files used by the Windows build as part of moving to VC14 as
the official compiler for Python 3.5.  Compared to the project files
for 3.4 (and older), the new project files are smaller, cleaner,
simpler, more easily extensible, and in some cases quite a bit more
correct.

I'd like to backport those new project files to 2.7, and Intel is
willing to fund that work as part of making Python ICC compilable on
all platforms they support since it makes building Python 2.7 with ICC
much easier.  I have no intention of changing the version of MSVC used
for official 2.7 builds, it *will* remain at MSVC 9.0 (VS 2008) as
determined the last time we had a thread about it.  VS 2010 and newer
can access older compilers (back to 2008) as a 'PlatformToolset' if
they're installed, so all we have to do is set the PlatformToolset in
the projects to 'v90'.  Backporting the projects would make it easier
to build 2.7 with a newer compiler, but that's already possible if
somebody wants to put enough work into it, the default will be the old
compiler, and we can emit big scary warnings if somebody does use a
compiler other than v90.

With the stipulation that the officially supported compiler won't
change, I want to make sure there's no major opposition to replacing
the old project files in PCbuild.  The old files would move to
PC\VS9.0, so they'll still be available and usable if necessary.

Using the backported project files to build 2.7 would require two
versions of Visual Studio to be installed; VS2010 (or newer) would be
required in addition to VS2008.  All Windows core developers should
already have VS2010 for Python 3.4 (and/or VS2015 for 3.5) and I
expect that anyone else who cares enough to still have VS2008 probably
has (or can easily get) one of the free editions of VS 2010 or newer,
so I don't consider this to be a major issue.

The backported files could be added alongside the old files in
PCbuild, in a better-named 'NewPCbuild' directory, or in a
subdirectory of PC. I would rather replace the old project files in
PCbuild, though; I'd like for the backported files to be the
recommended way to build, complete with support from PCbuild/build.bat
which would make the new project files the default for the buildbots.

I have a work-in-progress branch with the backported files in PCbuild,
which you can find at
https://hg.python.org/sandbox/zware/log/project_files_backport.  There
are still a couple bugs to work out (and a couple unrelated changes to
PC/pyconfig.h), but most everything works as expected.

Looking forward to hearing others' opinions,
-- 
Zach

From ethan at stoneleaf.us  Mon Jun 22 20:11:04 2015
From: ethan at stoneleaf.us (Ethan Furman)
Date: Mon, 22 Jun 2015 11:11:04 -0700
Subject: [Python-Dev] PEP 490: Chain exceptions at C level
In-Reply-To: <CAMpsgwZNx154bPCHZ2VWePOdT93FZwhqYJsfH6-5NSXaqYuaVQ@mail.gmail.com>
References: <CAMpsgwZNx154bPCHZ2VWePOdT93FZwhqYJsfH6-5NSXaqYuaVQ@mail.gmail.com>
Message-ID: <55884FB8.7090201@stoneleaf.us>

-1 on auto-chaining.

+1 on chaining helper functions so it's dirt-simple.

--
~Ethan~

From ncoghlan at gmail.com  Tue Jun 23 00:03:56 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 23 Jun 2015 08:03:56 +1000
Subject: [Python-Dev] Backporting the 3.5+ Windows build project files
	to 2.7
In-Reply-To: <CAKJDb-PtG=MLH=kq=sJ38fs+BiF7jeSbN3PqHLUVjcKAnQpn8Q@mail.gmail.com>
References: <CAKJDb-PtG=MLH=kq=sJ38fs+BiF7jeSbN3PqHLUVjcKAnQpn8Q@mail.gmail.com>
Message-ID: <CADiSq7ft21Nh068yoBRNex0pABi7djHb3OqzOaLE0iwq=Ts05A@mail.gmail.com>

Updating the build system to better handle changes in underlying platforms
is one of the "standard exemptions" arising from Python 2.7's long term
support status, so if this change makes things easier for contributors on
Windows, +1 from me.

Cheers,
Nick.

From ncoghlan at gmail.com  Tue Jun 23 00:10:04 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 23 Jun 2015 08:10:04 +1000
Subject: [Python-Dev] PEP 490: Chain exceptions at C level
In-Reply-To: <55884FB8.7090201@stoneleaf.us>
References: <CAMpsgwZNx154bPCHZ2VWePOdT93FZwhqYJsfH6-5NSXaqYuaVQ@mail.gmail.com>
 <55884FB8.7090201@stoneleaf.us>
Message-ID: <CADiSq7cAo-95Uo9-=7BwLoDhKBtG7ULEx9i4e56CUoxsozZD-w@mail.gmail.com>

On 23 Jun 2015 04:12, "Ethan Furman" <ethan at stoneleaf.us> wrote:
>
> -1 on auto-chaining.
>
> +1 on chaining helper functions so it's dirt-simple.

Chiming in again since I wasn't clear on this aspect last time: I'd also be
+1 on parallel APIs that handle the chaining.

Since the auto-chaining idea seems largely unpopular, that suggests to me
that a parallel set of APIs would be the most readily accepted way forward.

I nominate a "*Chained" suffix as my suggested colour for the bikeshed :)

Cheers,
Nick.

From Steve.Dower at microsoft.com  Tue Jun 23 00:20:51 2015
From: Steve.Dower at microsoft.com (Steve Dower)
Date: Mon, 22 Jun 2015 22:20:51 +0000
Subject: [Python-Dev] Backporting the 3.5+ Windows build project files
 to 2.7
In-Reply-To: <CAKJDb-PtG=MLH=kq=sJ38fs+BiF7jeSbN3PqHLUVjcKAnQpn8Q@mail.gmail.com>
References: <CAKJDb-PtG=MLH=kq=sJ38fs+BiF7jeSbN3PqHLUVjcKAnQpn8Q@mail.gmail.com>
Message-ID: <BLUPR0301MB16525F2C3ABD1BB1D66965B1F5A10@BLUPR0301MB1652.namprd03.prod.outlook.com>

Zachary Ware wrote:
> With the stipulation that the officially supported compiler won't change, I want
> to make sure there's no major opposition to replacing the old project files in
> PCbuild. The old files would move to PC\VS9.0, so they'll still be available and
> usable if necessary.

I'm selfishly -0, since this won't benefit me and will give me more work (I don't develop 2.7 other than to build the releases, so this will just cause me to mess with my build machine). But since other people are doing most of the work I'm not going to try and block it.

I don't see any real need to emit scary warnings about compiler compatibility - a simple warning like in 3.5 for old compilers is fine. 

> Using the backported project files to build 2.7 would require two versions of
> Visual Studio to be installed; VS2010 (or newer) would be required in addition
> to VS2008. All Windows core developers should already have VS2010 for Python 3.4
> (and/or VS2015 for 3.5) and I expect that anyone else who cares enough to still
> have VS2008 probably has (or can easily get) one of the free editions of VS 2010
> or newer, so I don't consider this to be a major issue.

It may have a workaround, but it also could be a serious blocking issue for those who can't install another version. Especially since there's no benefit to the resulting builds.

> The backported files could be added alongside the old files in PCbuild, in a
> better-named 'NewPCbuild' directory, or in a subdirectory of PC. I would rather
> replace the old project files in PCbuild, though; I'd like for the backported
> files to be the recommended way to build, complete with support from
> PCbuild/build.bat which would make the new project files the default for the
> buildbots.

Agreed, just replace them. PCBuild is messy enough with the output files in there (I'd avoid moving them - plenty of the stdlib and test suite expects them to be there), we don't need duplicate project files.

Cheers,
Steve

> I have a work-in-progress branch with the backported files in PCbuild, which you
> can find at https://hg.python.org/sandbox/zware/log/project_files_backport.
> There are still a couple bugs to work out (and a couple unrelated changes to
> PC/pyconfig.h), but most everything works as expected.
> 
> Looking forward to hearing others' opinions,
> --
> Zach

From chris.barker at noaa.gov  Tue Jun 23 01:29:49 2015
From: chris.barker at noaa.gov (Chris Barker - NOAA Federal)
Date: Mon, 22 Jun 2015 16:29:49 -0700
Subject: [Python-Dev] Backporting the 3.5+ Windows build project files
	to 2.7
In-Reply-To: <CAKJDb-PtG=MLH=kq=sJ38fs+BiF7jeSbN3PqHLUVjcKAnQpn8Q@mail.gmail.com>
References: <CAKJDb-PtG=MLH=kq=sJ38fs+BiF7jeSbN3PqHLUVjcKAnQpn8Q@mail.gmail.com>
Message-ID: <-6939408081783016743@unknownmsgid>

> I'd like to backport those new project files to 2.7,

Would this change anything about how extensions are built?

There is now the "ms compiler for 2.7" -- would that work? Or only in
concert with VS2010 Express?

-CHB

> and Intel is
> willing to fund that work as part of making Python ICC compilable on
> all platforms they support since it makes building Python 2.7 with ICC
> much easier.  I have no intention of changing the version of MSVC used
> for official 2.7 builds, it *will* remain at MSVC 9.0 (VS 2008) as
> determined the last time we had a thread about it.  VS 2010 and newer
> can access older compilers (back to 2008) as a 'PlatformToolset' if
> they're installed, so all we have to do is set the PlatformToolset in
> the projects at 'v90'.  Backporting the projects would make it easier
> to build 2.7 with a newer compiler, but that's already possible if
> somebody wants to put enough work into it, the default will be the old
> compiler, and we can emit big scary warnings if somebody does use a
> compiler other than v90.
>
> With the stipulation that the officially supported compiler won't
> change, I want to make sure there's no major opposition to replacing
> the old project files in PCbuild.  The old files would move to
> PC\VS9.0, so they'll still be available and usable if necessary.
>
> Using the backported project files to build 2.7 would require two
> versions of Visual Studio to be installed; VS2010 (or newer) would be
> required in addition to VS2008.  All Windows core developers should
> already have VS2010 for Python 3.4 (and/or VS2015 for 3.5) and I
> expect that anyone else who cares enough to still have VS2008 probably
> has (or can easily get) one of the free editions of VS 2010 or newer,
> so I don't consider this to be a major issue.
>
> The backported files could be added alongside the old files in
> PCbuild, in a better-named 'NewPCbuild' directory, or in a
> subdirectory of PC. I would rather replace the old project files in
> PCbuild, though; I'd like for the backported files to be the
> recommended way to build, complete with support from PCbuild/build.bat
> which would make the new project files the default for the buildbots.
>
> I have a work-in-progress branch with the backported files in PCbuild,
> which you can find at
> https://hg.python.org/sandbox/zware/log/project_files_backport.  There
> are still a couple bugs to work out (and a couple unrelated changes to
> PC/pyconfig.h), but most everything works as expected.
>
> Looking forward to hearing others' opinions,
> --
> Zach
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/chris.barker%40noaa.gov

From zachary.ware+pydev at gmail.com  Tue Jun 23 03:43:39 2015
From: zachary.ware+pydev at gmail.com (Zachary Ware)
Date: Mon, 22 Jun 2015 20:43:39 -0500
Subject: [Python-Dev] Backporting the 3.5+ Windows build project files
	to 2.7
In-Reply-To: <BLUPR0301MB16525F2C3ABD1BB1D66965B1F5A10@BLUPR0301MB1652.namprd03.prod.outlook.com>
References: <CAKJDb-PtG=MLH=kq=sJ38fs+BiF7jeSbN3PqHLUVjcKAnQpn8Q@mail.gmail.com>
 <BLUPR0301MB16525F2C3ABD1BB1D66965B1F5A10@BLUPR0301MB1652.namprd03.prod.outlook.com>
Message-ID: <CAKJDb-Px-+On61FtsMHtnwgWGNPjf7iJe24mUCJ0QSEoAnQ3vg@mail.gmail.com>

On Mon, Jun 22, 2015 at 5:20 PM, Steve Dower <Steve.Dower at microsoft.com> wrote:
> Zachary Ware wrote:
>> With the stipulation that the officially supported compiler won't change, I want
>> to make sure there's no major opposition to replacing the old project files in
>> PCbuild. The old files would move to PC\VS9.0, so they'll still be available and
>> usable if necessary.
>
> I'm selfishly -0, since this won't benefit me and will give me more work (I don't develop 2.7 other than to build the releases, so this will just cause me to mess with my build machine). But since other people are doing most of the work I'm not going to try and block it.

I will be making sure that the files in PC\VS9.0 do still work
properly; I could also try to adjust paths in Tools\msi such that you
would only need to use PC\VS9.0 instead of PCbuild.  I have yet to
successfully build an MSI for any Python release, though, so I can't
make any promises about that working.

> I don't see any real need to emit scary warnings about compiler compatibility - a simple warning like in 3.5 for old compilers is fine.

That's the one I was talking about, actually :).  Describing it as a
"big scary" warning was probably a bit much.

>> Using the backported project files to build 2.7 would require two versions of
>> Visual Studio to be installed; VS2010 (or newer) would be required in addition
>> to VS2008. All Windows core developers should already have VS2010 for Python 3.4
>> (and/or VS2015 for 3.5) and I expect that anyone else who cares enough to still
>> have VS2008 probably has (or can easily get) one of the free editions of VS 2010
>> or newer, so I don't consider this to be a major issue.
>
> It may have a workaround, but it also could be a serious blocking issue for those who can't install another version. Especially since there's no benefit to the resulting builds.

The easiest workaround would be to use the project files in PC\VS9.0.
It might work to install only whatever gets you a new-enough MSBuild,
but I haven't tested that at all, so I don't know.  I can tell you,
though, that building the backported pcbuild.proj with
\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe does work,
invoked from a clean environment on a machine with VS 2008, 2010, 2013,
and 2015 RC installed.
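
[In project-file terms, pinning the toolset comes down to a single
MSBuild property. The fragment below is a sketch of the idea only; the
real backported project files may structure this differently. -ed.]

```xml
<!-- Hypothetical fragment: pin the compiler to MSVC 9.0 (VS 2008),
     even when driving the build with MSBuild from VS 2010 or newer. -->
<PropertyGroup>
  <PlatformToolset>v90</PlatformToolset>
</PropertyGroup>
```

The same property can also be overridden at build time with
/p:PlatformToolset=v90 on the MSBuild command line.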

>> The backported files could be added alongside the old files in PCbuild, in a
>> better-named 'NewPCbuild' directory, or in a subdirectory of PC. I would rather
>> replace the old project files in PCbuild, though; I'd like for the backported
>> files to be the recommended way to build, complete with support from
>> PCbuild/build.bat which would make the new project files the default for the
>> buildbots.
>
> Agreed, just replace them. PCBuild is messy enough with the output files in there (I'd avoid moving them - plenty of the stdlib and test suite expect them to be there); we don't need duplicate project files.

That probably explains a few of the failures I'm still seeing that I
hadn't dug into yet.

-- 
Zach

From zachary.ware+pydev at gmail.com  Tue Jun 23 03:45:10 2015
From: zachary.ware+pydev at gmail.com (Zachary Ware)
Date: Mon, 22 Jun 2015 20:45:10 -0500
Subject: [Python-Dev] Backporting the 3.5+ Windows build project files
	to 2.7
In-Reply-To: <-6939408081783016743@unknownmsgid>
References: <CAKJDb-PtG=MLH=kq=sJ38fs+BiF7jeSbN3PqHLUVjcKAnQpn8Q@mail.gmail.com>
 <-6939408081783016743@unknownmsgid>
Message-ID: <CAKJDb-PuLFqDX_KOLkb9uxOZgPANALqKcgZx1fn_xRFLpTsi5g@mail.gmail.com>

On Mon, Jun 22, 2015 at 6:29 PM, Chris Barker - NOAA Federal
<chris.barker at noaa.gov> wrote:
>> I'd like to backport those new project files to 2.7,
>
> Would this change anything about how extensions are built?
>
> There is now the "ms compiler for 2.7" would that work? Or only in
> concert with VS2010 express?

It should have absolutely no impact on building extensions.  If it
does, that's a bug to be fixed.  Regular users of Python on Windows
(those who don't build their own) should have no idea that anything
has changed at all.

--
Zach

From zachary.ware+pydev at gmail.com  Tue Jun 23 03:58:09 2015
From: zachary.ware+pydev at gmail.com (Zachary Ware)
Date: Mon, 22 Jun 2015 20:58:09 -0500
Subject: [Python-Dev] speed.python.org (was: 2.7 is here until 2020,
 please don't call it a waste.)
In-Reply-To: <CAK5idxTW7BHL=Y=6bqpXekmoDb9vptG71wCe8rjzJoNupWnpEA@mail.gmail.com>
References: <34384DEB0F607E42BD61D446586AD4E86B51172C@ORSMSX103.amr.corp.intel.com>
 <CANc-5Uwx5rVJfYfQK6_-SxnJSSLv62t7p4NddC=mNXda=WWJDQ@mail.gmail.com>
 <mk7asm$37i$1@ger.gmane.org>
 <CAP7+vJKDqU8TakYoQ6+z+zt_vs823MGZ9YzcFOBOwusYRtyqKg@mail.gmail.com>
 <CAMpsgwY+6CPWWEr2yyNgiTYCWJ9p67UNy8_p5hR2WQ9MtXXnbQ@mail.gmail.com>
 <CADiSq7eGd-R8PjNYKnn8XmMF7BD-s9cd65E+HNnXzRi-gSa4GA@mail.gmail.com>
 <CAGE7PNJqO36PXi3P8a6Ov1vkN2KigDnaMYQGyFcophYiSe+GNw@mail.gmail.com>
 <5568FAF6.6040802@python.org> <20150530015621.518b537f@fsol>
 <CADiSq7c35TP5pfH6AuEg7-hESf7-gkbLeh40i6rHREhvfqziRA@mail.gmail.com>
 <556906F0.4090504@sdamon.com>
 <CADiSq7daKMxSoFswRaYF2_PKtRGK2nMoJ69N4weKc7E=KKreYA@mail.gmail.com>
 <CABVPEKqW1qoHLSPppqtEELbLR=15cqUKMAk55CTVv+F1HqqUtw@mail.gmail.com>
 <556A45D0.7070606@hastings.org>
 <CAMSv6X3awjhAOdCW5oN2Kyz3GiwrRMO_mPCOqQKVaFk6r96OTw@mail.gmail.com>
 <556C3E93.50108@egenix.com>
 <CAGBzy2o5h=+pR8iLYfPjpQs8+-gwVDUjL1kXMPtMWtn26L7MyA@mail.gmail.com>
 <55702EBB.7070606@egenix.com> <20150604143200.E2BCFB14089@webabinitio.net>
 <CAK5idxTW7BHL=Y=6bqpXekmoDb9vptG71wCe8rjzJoNupWnpEA@mail.gmail.com>
Message-ID: <CAKJDb-OfGFW8PS-m7Yp40==Vxw7fB45WcpQeqUs=V1UG0P3sCA@mail.gmail.com>

On Thu, Jun 4, 2015 at 10:51 AM, Maciej Fijalkowski <fijall at gmail.com> wrote:
> On Thu, Jun 4, 2015 at 4:32 PM, R. David Murray <rdmurray at bitdance.com> wrote:
>> OK, so what you are saying is that speed.python.org will run a buildbot
>> slave so that when a change is committed to cPython, a speed run will be
>> triggered?  Is "the runner" a normal buildbot slave, or something
>> custom?  In the normal case the master controls what the slave
>> runs...but regardless, you'll need to let us know how the slave
>> invocation needs to be configured on the master.
>
> Ideally nightly (benchmarks take a while). The setup for pypy looks like this:
>
>
> https://bitbucket.org/pypy/buildbot/src/5fa1f1a4990f842dfbee416c4c2e2f6f75d451c4/bot2/pypybuildbot/builds.py?at=default#cl-734
>
> so fairly easy. This already generates a json file that you can plot.
> We can set up automatic uploads too.

I've been looking at what it will take to set up the buildmaster for
this, and it looks like it's just a matter of checking out the
benchmarks, building Python, testing it, and running the benchmarks.
There is the question of which benchmark repo to use:
https://bitbucket.org/pypy/benchmarks or
https://hg.python.org/benchmarks; ideally, we should use
hg.python.org/benchmarks for CPython benchmarks, but it looks like
pypy/benchmarks has the necessary runner, so I suppose we'll be using
it for now.  Is there interest from both sides to merge those
repositories?

The big question for the buildmaster is what options to pass to the
benchmark runner.  I suppose most of them should match the
CPythonBenchmark BuildFactory from the PyPy buildbot master
configuration, but otherwise I'm not sure.

The other big question is where the benchmarks will be run.  The
speed.python.org page makes it sound like there's a box intended for
that purpose (the one running the speed.python.org page?); can anyone
with access to it contact me to get the build slave set up?

-- 
Zach

From ncoghlan at gmail.com  Tue Jun 23 04:32:44 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 23 Jun 2015 12:32:44 +1000
Subject: [Python-Dev] How do people like the early 3.5 branch?
In-Reply-To: <5580B0AA.3060901@hastings.org>
References: <5580B0AA.3060901@hastings.org>
Message-ID: <CADiSq7eesg3cshS+gWDCjVYRkMBF_bF45hcNQ2K_o55QE9nKbQ@mail.gmail.com>

On 17 June 2015 at 09:26, Larry Hastings <larry at hastings.org> wrote:
>
>
> A quick look through the checkin logs suggests that there's literally
> nothing happening in 3.6 right now.  All the checkins are merges.
>
> Is anyone expecting to do work in 3.6 soon?  Or did the early branch just
> create a bunch of make-work for everybody?

A bit of a belated reply here, but my main observations are:

1. I still like the general idea
2. In the absence of ongoing integration testing against major third
party projects, I think the second beta might be a better branch point
than the first one

PEPs 489 (extension module loading) and 492 (async/await) are my main
reason for thinking that:

* For PEP 489, which represented a major refactoring of an existing
system, the first beta revealed a critical gap in the regression test
suite (hence the brown-paper-bag early beta 2 release)
* For PEP 492, which represented the introduction of a major new
system, the first beta revealed some non-trivial interoperability
issues that hadn't been considered in the original design

On the other hand, that qualifier on the second observation is a
fairly important one :)

I've been thinking more about what a gated "third party continuous
integration" repo might look like, and I'm beginning to think that, if
we structured it appropriately, the idea might be simpler than I
previously thought. The essential components seem to be:

1. Listening to and/or regularly polling the BuildBot master to check
for versions that pass the regression test suite on the stable
Buildbot fleet (I'm not sure what BuildBot's notification options are,
but cron is always available as a fallback option)
2. Pull those revisions into a local repo that's accessible read-only
over the web
3. Tag the specific revision that passed (this keeps track of which
revisions are known to pass the regression test suite, vs those that
only came in as dependencies of passing revisions)
4. Ideally, publish a notification via an asynchronous notification
system (e.g. fedmsg) that other CI systems can consume as a trigger
for their own testing
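
[As a very rough illustration (the data shapes, repository paths, and
hg commands below are all assumptions, not an actual design), steps 1
through 3 might be sketched as follows. -ed.]

```python
# Hypothetical sketch of steps 1-3; nothing here reflects the real
# python.org infrastructure.
import subprocess

def passing_revisions(build_results):
    """Step 1: given {revision: [builder outcomes]} polled from the
    BuildBot master, return revisions where every stable builder passed."""
    return sorted(rev for rev, outcomes in build_results.items()
                  if outcomes and all(o == "success" for o in outcomes))

def mirror_and_tag(revision, upstream, mirror):
    """Steps 2 and 3: pull a passing revision into the web-accessible
    read-only mirror, then tag it so that passing revisions stay
    distinguishable from ones pulled in only as dependencies."""
    subprocess.check_call(["hg", "pull", "-r", revision, upstream], cwd=mirror)
    subprocess.check_call(["hg", "tag", "--local", "-r", revision,
                           "ci-passed-" + revision], cwd=mirror)

if __name__ == "__main__":
    polled = {"abc123": ["success", "success"],
              "def456": ["success", "failed"]}
    print(passing_revisions(polled))
```

Step 4 (publishing a fedmsg-style notification) would hang off
mirror_and_tag, but the messaging API is deployment-specific, so it is
left out of the sketch.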

It's not at the top of my todo list right now, but if nobody beats me
to it I'll try to see what I can set up in Kallithea some time in the
next few months :)

Regards,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From ncoghlan at gmail.com  Tue Jun 23 05:33:32 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 23 Jun 2015 13:33:32 +1000
Subject: [Python-Dev] Unbound locals in class scopes
In-Reply-To: <CAP7+vJLzH345E4az+Wpra8jXExKbmOa_e5oOy_7KB76P3ELLrA@mail.gmail.com>
References: <CAOMjWk=rYEAnsv+aKnApRHieqXRSCPWgMFy4Z8ozue6L6AU_VQ@mail.gmail.com>
 <mm43ea$jom$1@ger.gmane.org> <mm44vl$b5v$1@ger.gmane.org>
 <CAOMjWk=-0mo8ZT3AXWnAPgCqwWp6jA6jwqQ1KF83nc8AaD9U6w@mail.gmail.com>
 <CAP7+vJLsD8fd1f1o4xqz-GYRajVx28rxKvCXUK4Bj8=kZNyyyA@mail.gmail.com>
 <CAOMjWkm8yURSatm_bEyqi=NRYDhDy7EtcqMNm-znDuuAT2PcBw@mail.gmail.com>
 <CADiSq7e6308Esr--5xPD3h3S9+zGSj+sSpsX09e8-Y1Pr8=Deg@mail.gmail.com>
 <CAP7+vJLzH345E4az+Wpra8jXExKbmOa_e5oOy_7KB76P3ELLrA@mail.gmail.com>
Message-ID: <CADiSq7d6aj_2OkvuCvFGHUTDqedqsMvEp5P9VmB-K6YkR6tSZw@mail.gmail.com>

On 22 June 2015 at 17:13, Guido van Rossum <guido at python.org> wrote:
> But what *is* the correct behavior? I suspect people's intuitions differ. If
> you think of this as similar to function scopes you're likely to be wrong.

For me, there's a correct answer for *new* users based on the combination of:

1. For module-level class definitions, if you look up a name before
binding it locally, it's looked up the same way it would be if you
never bound it locally at all
2. For function-level class definitions, if the name isn't bound
locally in the class body, nonlocals are visible as if the class body
were running in the scope of the function defining the class
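
[For reference, the quirk under discussion is easy to demonstrate in
current CPython: a name that is read and then bound in a function-level
class body is compiled with LOAD_NAME, so the read bypasses the
enclosing function entirely and falls through to the globals. -ed.]

```python
x = "global"

def f():
    x = "nonlocal"
    class OnlyRead:
        y = x          # LOAD_CLASSDEREF: sees the function's x
    class ReadThenBind:
        y = x          # LOAD_NAME: skips the function, finds the global
        x = "class"    # this binding is what changes the lookup above
    return OnlyRead.y, ReadThenBind.y

print(f())  # ('nonlocal', 'global')
```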

If folks never experienced Python as it existed prior to 2.2, the
current behaviour isn't likely to be at all intuitive to them (I
barely make it into that category myself - prior to adopting 2.2.2 for
DSP testing in 2002 or so, my sole experience with Python 1.5.2 was a
single university level networking class*)

> Also, since classes inside functions are commonly used in unit tests (at
> least mine :-), I worry that *any* changes in this behavior might break
> working code (no matter how much that "working" could be considered an
> accident, it's still going to be a bear to debug if this happens to you).

Agreed - I think any change here would need to go through a
deprecation period where we warned about cases where LOAD_NAME would
be changed to LOAD_CLASSDEREF and there was a nonlocal defined with
that name.

Since it's trivially easy to work around by renaming a local variable
to use a name that differs from the nested class attribute, I'd also be
entirely happy with leaving the current behaviour as an obscure quirk
- I'm only +0 on changing it, and also +0 on declaring it "not worth
the hassle of changing".

Cheers,
Nick.

P.S. *Two fun facts about that class (this was in the late 90's): a)
the lecturer *required* us to use Python, as he assumed there'd be very
few people who already knew it, so we'd be starting off on a more
level playing field; and b) I asked if I could use Java instead, since
I already knew that at the time (he said no, and Python really *was*
very easy to pick up for the assignments we needed to do. That made
picking it up again an easy choice several years later)

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From ncoghlan at gmail.com  Tue Jun 23 05:37:43 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 23 Jun 2015 13:37:43 +1000
Subject: [Python-Dev] Backporting the 3.5+ Windows build project files
	to 2.7
In-Reply-To: <CAKJDb-PuLFqDX_KOLkb9uxOZgPANALqKcgZx1fn_xRFLpTsi5g@mail.gmail.com>
References: <CAKJDb-PtG=MLH=kq=sJ38fs+BiF7jeSbN3PqHLUVjcKAnQpn8Q@mail.gmail.com>
 <-6939408081783016743@unknownmsgid>
 <CAKJDb-PuLFqDX_KOLkb9uxOZgPANALqKcgZx1fn_xRFLpTsi5g@mail.gmail.com>
Message-ID: <CADiSq7dBw9FcmFMrDfjk9CyB3UzvUbEuaoXLYgHYTm4KAsLtCg@mail.gmail.com>

On 23 June 2015 at 11:45, Zachary Ware <zachary.ware+pydev at gmail.com> wrote:
> On Mon, Jun 22, 2015 at 6:29 PM, Chris Barker - NOAA Federal
> <chris.barker at noaa.gov> wrote:
>>> I'd like to backport those new project files to 2.7,
>>
>> Would this change anything about how extensions are built?
>>
>> There is now the "ms compiler for 2.7" would that work? Or only in
>> concert with VS2010 express?
>
> It should have absolutely no impact on building extensions.  If it
> does, that's a bug to be fixed.  Regular users of Python on Windows
> (those who don't build their own) should have no idea that anything
> has changed at all.

I'd suggest explicitly reaching out to the Stackless folks to get
their feedback. As I believe they switched to a newer compiler and VC
runtime for Windows a while back, I suspect it will make their lives
easier rather than harder, but it's still worth asking for their
input.

Regards,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From zachary.ware+pydev at gmail.com  Tue Jun 23 06:42:48 2015
From: zachary.ware+pydev at gmail.com (Zachary Ware)
Date: Mon, 22 Jun 2015 23:42:48 -0500
Subject: [Python-Dev] Backporting the 3.5+ Windows build project files
	to 2.7
In-Reply-To: <CADiSq7dBw9FcmFMrDfjk9CyB3UzvUbEuaoXLYgHYTm4KAsLtCg@mail.gmail.com>
References: <CAKJDb-PtG=MLH=kq=sJ38fs+BiF7jeSbN3PqHLUVjcKAnQpn8Q@mail.gmail.com>
 <-6939408081783016743@unknownmsgid>
 <CAKJDb-PuLFqDX_KOLkb9uxOZgPANALqKcgZx1fn_xRFLpTsi5g@mail.gmail.com>
 <CADiSq7dBw9FcmFMrDfjk9CyB3UzvUbEuaoXLYgHYTm4KAsLtCg@mail.gmail.com>
Message-ID: <CAKJDb-NnH9cd4kgyzkXJQGTZAoLthiwUmLadoL-dmeHR+Htz9A@mail.gmail.com>

On Mon, Jun 22, 2015 at 10:37 PM, Nick Coghlan <ncoghlan at gmail.com> wrote:
> I'd suggest explicitly reaching out to the Stackless folks to get
> their feedback. As I believe they switched to a newer compiler and VC
> runtime for Windows a while back, I suspect it will make their lives
> easier rather than harder, but it's still worth asking for their
> input.

From a glance at the stackless repo, it looks like it still uses
VS2008's project files (which pretty much means using MSVC 9), but I
could have easily missed something.

Christian, what say you?  Would having the project files from 3.5
backported to 2.7 (but still using MSVC 9) be positive, negative, or
indifferent for Stackless?

-- 
Zach

From victor.stinner at gmail.com  Tue Jun 23 09:29:38 2015
From: victor.stinner at gmail.com (Victor Stinner)
Date: Tue, 23 Jun 2015 09:29:38 +0200
Subject: [Python-Dev] PEP 490: Chain exceptions at C level
In-Reply-To: <CADiSq7cAo-95Uo9-=7BwLoDhKBtG7ULEx9i4e56CUoxsozZD-w@mail.gmail.com>
References: <CAMpsgwZNx154bPCHZ2VWePOdT93FZwhqYJsfH6-5NSXaqYuaVQ@mail.gmail.com>
 <55884FB8.7090201@stoneleaf.us>
 <CADiSq7cAo-95Uo9-=7BwLoDhKBtG7ULEx9i4e56CUoxsozZD-w@mail.gmail.com>
Message-ID: <CAMpsgwYo_B_hrd4hAZhbkbPHxGKzbCAAyojP38GGRv2sWTU9+A@mail.gmail.com>

Hi,

2015-06-23 0:10 GMT+02:00 Nick Coghlan <ncoghlan at gmail.com>:
> Chiming in again since I wasn't clear on this aspect last time: I'd also be
> +1 on parallel APIs that handle the chaining.
>
> Since the auto-chaining idea seems largely unpopular, that suggests to me
> that a parallel set of APIs would be the most readily accepted way forward.
>
> I nominate a "*Chained" suffix as my suggested colour for the bikeshed :)

In the PEP's Alternatives section, I proposed adding helpers, but as
*private* APIs:
https://www.python.org/dev/peps/pep-0490/#new-helpers-to-chain-exceptions

I would prefer to experiment with these APIs during one cycle before
making them public.

Extending the public API has a maintenance cost: it's common to
add 1 or 2 versions of the same function, which makes the public API
uglier and heavier to maintain (especially since we have the public
ABI).

Victor

From mal at python.org  Tue Jun 23 10:07:46 2015
From: mal at python.org (M.-A. Lemburg)
Date: Tue, 23 Jun 2015 10:07:46 +0200
Subject: [Python-Dev] speed.python.org
In-Reply-To: <CAKJDb-OfGFW8PS-m7Yp40==Vxw7fB45WcpQeqUs=V1UG0P3sCA@mail.gmail.com>
References: <34384DEB0F607E42BD61D446586AD4E86B51172C@ORSMSX103.amr.corp.intel.com>	<CADiSq7eGd-R8PjNYKnn8XmMF7BD-s9cd65E+HNnXzRi-gSa4GA@mail.gmail.com>	<CAGE7PNJqO36PXi3P8a6Ov1vkN2KigDnaMYQGyFcophYiSe+GNw@mail.gmail.com>	<5568FAF6.6040802@python.org>
 <20150530015621.518b537f@fsol>	<CADiSq7c35TP5pfH6AuEg7-hESf7-gkbLeh40i6rHREhvfqziRA@mail.gmail.com>	<556906F0.4090504@sdamon.com>	<CADiSq7daKMxSoFswRaYF2_PKtRGK2nMoJ69N4weKc7E=KKreYA@mail.gmail.com>	<CABVPEKqW1qoHLSPppqtEELbLR=15cqUKMAk55CTVv+F1HqqUtw@mail.gmail.com>	<556A45D0.7070606@hastings.org>	<CAMSv6X3awjhAOdCW5oN2Kyz3GiwrRMO_mPCOqQKVaFk6r96OTw@mail.gmail.com>	<556C3E93.50108@egenix.com>	<CAGBzy2o5h=+pR8iLYfPjpQs8+-gwVDUjL1kXMPtMWtn26L7MyA@mail.gmail.com>	<55702EBB.7070606@egenix.com>	<20150604143200.E2BCFB14089@webabinitio.net>	<CAK5idxTW7BHL=Y=6bqpXekmoDb9vptG71wCe8rjzJoNupWnpEA@mail.gmail.com>
 <CAKJDb-OfGFW8PS-m7Yp40==Vxw7fB45WcpQeqUs=V1UG0P3sCA@mail.gmail.com>
Message-ID: <558913D2.1090101@python.org>

On 23.06.2015 03:58, Zachary Ware wrote:
> On Thu, Jun 4, 2015 at 10:51 AM, Maciej Fijalkowski <fijall at gmail.com> wrote:
>> On Thu, Jun 4, 2015 at 4:32 PM, R. David Murray <rdmurray at bitdance.com> wrote:
>>> OK, so what you are saying is that speed.python.org will run a buildbot
>>> slave so that when a change is committed to cPython, a speed run will be
>>> triggered?  Is "the runner" a normal buildbot slave, or something
>>> custom?  In the normal case the master controls what the slave
>>> runs...but regardless, you'll need to let us know how the slave
>>> invocation needs to be configured on the master.
>>
>> Ideally nightly (benchmarks take a while). The setup for pypy looks like this:
>>
>>
>> https://bitbucket.org/pypy/buildbot/src/5fa1f1a4990f842dfbee416c4c2e2f6f75d451c4/bot2/pypybuildbot/builds.py?at=default#cl-734
>>
>> so fairly easy. This already generates a json file that you can plot.
>> We can set up automatic uploads too.
> 
> I've been looking at what it will take to set up the buildmaster for
> this, and it looks like it's just a matter of checking out the
> benchmarks, building Python, testing it, and running the benchmarks.
> There is the question of which benchmark repo to use:
> https://bitbucket.org/pypy/benchmarks or
> https://hg.python.org/benchmarks; ideally, we should use
> hg.python.org/benchmarks for CPython benchmarks, but it looks like
> pypy/benchmarks has the necessary runner, so I suppose we'll be using
> it for now.  Is there interest from both sides to merge those
> repositories?
> 
> The big question for the buildmaster is what options to pass to the
> benchmark runner.  I suppose most of them should match the
> CPythonBenchmark BuildFactory from the PyPy buildbot master
> configuration, but otherwise I'm not sure.
> 
> The other big question is where the benchmarks will be run.  The
> speed.python.org page makes it sound like there's a box intended for
> that purpose (the one running the speed.python.org page?); can anyone
> with access to it contact me to get the build slave set up?

Yes, I believe the machine is currently only running that page.

I've pinged the PSF Infra Team to get you access to it.

Thank you for looking into this!

Cheers,
-- 
Marc-Andre Lemburg
Director
Python Software Foundation
http://www.python.org/psf/

From mail at timgolden.me.uk  Tue Jun 23 10:14:29 2015
From: mail at timgolden.me.uk (Tim Golden)
Date: Tue, 23 Jun 2015 09:14:29 +0100
Subject: [Python-Dev] Backporting the 3.5+ Windows build project files
 to 2.7
In-Reply-To: <CAKJDb-PtG=MLH=kq=sJ38fs+BiF7jeSbN3PqHLUVjcKAnQpn8Q@mail.gmail.com>
References: <CAKJDb-PtG=MLH=kq=sJ38fs+BiF7jeSbN3PqHLUVjcKAnQpn8Q@mail.gmail.com>
Message-ID: <55891565.4070401@timgolden.me.uk>

On 22/06/2015 18:03, Zachary Ware wrote:
> As you may know, Steve Dower put significant effort into rewriting the
> project files used by the Windows build as part of moving to VC14 as
> the official compiler for Python 3.5.  Compared to the project files
> for 3.4 (and older), the new project files are smaller, cleaner,
> simpler, more easily extensible, and in some cases quite a bit more
> correct.
> 
> I'd like to backport those new project files to 2.7, and Intel is
> willing to fund that work as part of making Python ICC compilable on
> all platforms they support since it makes building Python 2.7 with ICC
> much easier.  


Very much +1. Anything which eases building on Windows is good. I'd
meant a while ago to put in a vote of thanks to you & Steve for the
incremental improvements and tidy-ups to the build area, and I'd
certainly like to see this proposed change go in.

I'll try to find time to check out your branch and build from there, but
unless you've done anything bizarre I'm sure I'll be happy with it.

TJG

From tismer at stackless.com  Tue Jun 23 13:27:22 2015
From: tismer at stackless.com (Christian Tismer)
Date: Tue, 23 Jun 2015 13:27:22 +0200
Subject: [Python-Dev] Backporting the 3.5+ Windows build project files
 to 2.7
In-Reply-To: <CAKJDb-NnH9cd4kgyzkXJQGTZAoLthiwUmLadoL-dmeHR+Htz9A@mail.gmail.com>
References: <CAKJDb-PtG=MLH=kq=sJ38fs+BiF7jeSbN3PqHLUVjcKAnQpn8Q@mail.gmail.com>
 <-6939408081783016743@unknownmsgid>
 <CAKJDb-PuLFqDX_KOLkb9uxOZgPANALqKcgZx1fn_xRFLpTsi5g@mail.gmail.com>
 <CADiSq7dBw9FcmFMrDfjk9CyB3UzvUbEuaoXLYgHYTm4KAsLtCg@mail.gmail.com>
 <CAKJDb-NnH9cd4kgyzkXJQGTZAoLthiwUmLadoL-dmeHR+Htz9A@mail.gmail.com>
Message-ID: <5589429A.7050801@stackless.com>

Hi Zack,

On 23.06.15 06:42, Zachary Ware wrote:
> On Mon, Jun 22, 2015 at 10:37 PM, Nick Coghlan <ncoghlan at gmail.com> wrote:
>> I'd suggest explicitly reaching out to the Stackless folks to get
>> their feedback. As I believe they switched to a newer compiler and VC
>> runtime for Windows a while back, I suspect it will make their lives
>> easier rather than harder, but it's still worth asking for their
>> input.
> From a glance at the stackless repo, it looks like it still uses
> VS2008's project files (which pretty much means using MSVC 9), but I
> could have easily missed something.
>
> Christian, what say you?  Would having the project files from 3.5
> backported to 2.7 (but still using MSVC 9) be positive, negative, or
> indifferent for Stackless?
>

I am very positive about your efforts.

We wanted to create a Stackless 2.8 version at the end of 2013, with
support for newer compilers to begin with. After a while we stopped,
and I left the branch in a private, unmaintained repository:
https://bitbucket.org/stackless-dev/stackless-private/branch/2.8-slp

Maybe you can make use of it; use it however you like. You will
get an invitation.

cheers - Chris

-- 
Christian Tismer             :^)   tismer at stackless.com
Software Consulting          :     http://www.stackless.com/
Karl-Liebknecht-Str. 121     :     http://www.pydica.net/
14482 Potsdam                :     GPG key -> 0xFB7BEE0E
phone +49 173 24 18 776  fax +49 (30) 700143-0023


-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 522 bytes
Desc: OpenPGP digital signature
URL: <http://mail.python.org/pipermail/python-dev/attachments/20150623/b9809701/attachment.sig>

From levkivskyi at gmail.com  Tue Jun 23 14:52:52 2015
From: levkivskyi at gmail.com (Ivan Levkivskyi)
Date: Tue, 23 Jun 2015 14:52:52 +0200
Subject: [Python-Dev] Unbound locals in class scopes
In-Reply-To: <CADiSq7d6aj_2OkvuCvFGHUTDqedqsMvEp5P9VmB-K6YkR6tSZw@mail.gmail.com>
References: <CAOMjWk=rYEAnsv+aKnApRHieqXRSCPWgMFy4Z8ozue6L6AU_VQ@mail.gmail.com>
 <mm43ea$jom$1@ger.gmane.org> <mm44vl$b5v$1@ger.gmane.org>
 <CAOMjWk=-0mo8ZT3AXWnAPgCqwWp6jA6jwqQ1KF83nc8AaD9U6w@mail.gmail.com>
 <CAP7+vJLsD8fd1f1o4xqz-GYRajVx28rxKvCXUK4Bj8=kZNyyyA@mail.gmail.com>
 <CAOMjWkm8yURSatm_bEyqi=NRYDhDy7EtcqMNm-znDuuAT2PcBw@mail.gmail.com>
 <CADiSq7e6308Esr--5xPD3h3S9+zGSj+sSpsX09e8-Y1Pr8=Deg@mail.gmail.com>
 <CAP7+vJLzH345E4az+Wpra8jXExKbmOa_e5oOy_7KB76P3ELLrA@mail.gmail.com>
 <CADiSq7d6aj_2OkvuCvFGHUTDqedqsMvEp5P9VmB-K6YkR6tSZw@mail.gmail.com>
Message-ID: <CAOMjWkn4y+Sf-KYRKPAsRTgMvBUFOvyi_xsr-yCZ9-6vTphnzQ@mail.gmail.com>

On 23 June 2015 at 05:33, Nick Coghlan <ncoghlan at gmail.com> wrote:

>
> > Also, since classes inside functions are commonly used in unit tests (at
> > least mine :-), I worry that *any* changes in this behavior might break
> > working code (no matter how much that "working" could be considered an
> > accident, it's still going to be a bear to debug if this happens to you).
>
> Agreed - I think any change here would need to go through a
> deprecation period where we warned about cases where LOAD_NAME would
> be changed to LOAD_CLASSDEREF and there was a nonlocal defined with
> that name.
>
> Since it's trivially easy to work around by renaming a local variable to
> use a name that differs from the nested class attribute, I'd also be
> entirely happy with leaving the current behaviour as an obscure quirk
> - I'm only +0 on changing it, and also +0 on declaring it "not worth
> the hassle of changing".
>
>
This is also my attitude on this issue. It looks like everyone agrees
that this is strange behavior, but not strange enough to be changed.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20150623/789878e7/attachment.html>

From barry at python.org  Tue Jun 23 17:38:13 2015
From: barry at python.org (Barry Warsaw)
Date: Tue, 23 Jun 2015 11:38:13 -0400
Subject: [Python-Dev] Getting ready for  Python 3.5
Message-ID: <20150623113813.055960d3@limelight.wooz.org>

[Apologies for the cross-posting! -BAW]

For Ubuntu 15.10 (Wily Werewolf), we want to make Python 3.5 the default
Python 3 version.  It's currently undecided whether we will keep Python 3.4 as
a supported version, but a lot of that depends on how easily an archive port
to Python 3.5 goes.  Ideally, we'd be able to make the switch to 3.5 now ahead
of the planned 16.04 LTS release.

As part of this work, we've done a partial test rebuild of packages in the
archive that depend in some way on Python 3.  For now, this is an x86 only
partial rebuild in a PPA.  In this PPA, we have Python 3.5 as the default
Python 3 version, with 3.4 still enabled.

You can see the results so far here:

https://launchpad.net/~pythoneers/+archive/ubuntu/py35asdefault/+packages

TL;DR: of 1065 uploads, we have a ~64% success rate.  Some caveats:

 * These are i386 and amd64 only, so we're still in the dark about other
   architectures.

 * These are build tests only.  While many builds do run the package's test
   suite, not all packages have test suites, or not all are enabled.  Also, we
   are not running what are called DEP-8 tests, which are various additional
   smoketests that are run after the package is built, on the built package
   (e.g. install the package and all its dependencies in a chroot and see if
   it can be imported in both Python 2 and 3).

 * Some failures are due to dependency build order, so a simple rebuild may
   succeed.

 * Some failures may not be new.  Because a lot of packages don't get new
   uploads for every new Ubuntu version, they may be failing to build for
   reasons unrelated to Python 3.5.

 * We may have missed some packages which declare their build dependencies in
   a way that got past our rather simplistic filter.

Our plan is to get the success rate up on the PPA, filing and fixing bugs
upstream where possible, then to set up a full archive test rebuild, again
with 3.5 as default and 3.4 as enabled, to see what other failures may occur.
This full archive rebuild will include all the other architectures, and it's
possible packages will build on x86 but fail on some other platform.  We're
also planning on setting up a QA stack to run the DEP-8 tests for packages
that have them.

In the meantime, you can help!

I've started a wiki page listing the backward compatibility breaks I've found
so far.  Feel free to add to the wiki, or look into the linked issues.

https://wiki.python.org/moin/PortingToPy3k/34to35

Build and install Python 3.5 from source and run your package's test suite in
a virtual environment or in tox.  (tox 2.1 supports Python 3.5 but it isn't
yet available in Debian/Ubuntu - no worries, install tox in a virtualenv and
use *that* to run your test suite.)

Create a chroot with the py35asdefault PPA enabled, and do some porting there.
Here's a good guideline on how to use a PPA.

https://help.launchpad.net/Packaging/PPA
https://launchpad.net/~pythoneers/+archive/ubuntu/py35asdefault

Examine the build failures and see if you can identify or fix the problem.
Start with the package details page

https://launchpad.net/~pythoneers/+archive/ubuntu/py35asdefault/+packages

and drill down into the console logs for specific failures by clicking on the
'amd64' or 'i386' links next to any big red X you see, then clicking on the
'buildlog' link.  E.g.

http://tinyurl.com/pnpjtv6

Scroll down near the bottom, which is where the failure will most likely be
evident.

Release new versions of your packages with Python 3.5 support to PyPI so the
Debian maintainers and Ubuntu developers can begin to upload compatible
versions.  If you're a developer for other Linux distros or platforms, let's
work together!  (As is often the case, we'll trailblaze on Ubuntu and push
the results upstream to Debian as much as possible.)

Follow up here on any of the CC'd mailing lists, email me directly, or ping me
on IRC (nick 'barry' on python-dev, ubuntu-release @ freenode, debian-python @
OFTC).

Python 3.5 is in beta, with a final release scheduled for September 2015 (see
PEP 478).  There's still plenty of time to fix or adjust to issues we find but
there are a ton of packages, so all help is greatly appreciated.

Let's make Python 3.5 the easiest upgrade ever.

Cheers,
-Barry
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: OpenPGP digital signature
URL: <http://mail.python.org/pipermail/python-dev/attachments/20150623/48c9e66d/attachment.sig>

From rustompmody at gmail.com  Wed Jun 24 12:22:27 2015
From: rustompmody at gmail.com (Rustom Mody)
Date: Wed, 24 Jun 2015 15:52:27 +0530
Subject: [Python-Dev] CRLF problems in repo
Message-ID: <CAJ+Teoec=rSj0CsykXAp0oq7tqs_bJ1gP-rOgQ1_onuUmx4f-w@mail.gmail.com>

Hello folks

<context>
Along with a few students we were preparing to poke around inside the
CPython sources.
Of course it would be neat if we submitted useful patches... but since I
don't expect to get there so fast, I thought I'd start by setting up with git,
which I am more familiar with than mercurial.
That resulted in some adventures... including CRLF issues.
</context>

Some responses on both the python mentors list and the general user list did
not seem to consider these completely irrelevant, so I'm reporting them here.

Can submit a bug-report if it looks ok

-------------------------

Mixed file -- both LF and CRLF (line 29 LF rest CRLF)
Lib/venv/scripts/nt/Activate.ps1

Lib/test/decimaltestdata is a directory with mixed up files -- ie some CRLF
some LF files

PCbuild/readme.txt: CRLF (explicit)
This is fine. Many of the following should (ideally/IMHO) be likewise CRLF
(not BIN)

*.sln: CRLF but marked as BIN in hgeol

PC/example_nt/example.vcproj
Emacs shows it as a DOS file, but it's not picked up by file(1); there may be others like it

Missing BIN pattern from .hgeol
*.pdf
Note that Doc/library/turtle-star.pdf exists
This is most likely a bug

Existent file-types with non-existent files in hgeol; Probably harmless
*.vsprops
*.mk
*.dsp
*.dsw

These seem straightforward CRLF files to me; why BIN in hgeol?
Lib/test/test_email/data/msg_26.txt
Lib/test/coding20731.py
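A file-by-file report like the one above can be produced with a short script.
This is my own sketch of the kind of check involved (not the tool actually
used to generate the list):

```python
def classify(data: bytes) -> str:
    """Classify a file body as 'CRLF', 'LF', 'mixed', or 'none'."""
    crlf = data.count(b"\r\n")
    bare_lf = data.count(b"\n") - crlf  # LFs not preceded by CR
    if crlf and bare_lf:
        return "mixed"
    if crlf:
        return "CRLF"
    if bare_lf:
        return "LF"
    return "none"

print(classify(b"line1\r\nline2\r\n"))  # CRLF
print(classify(b"line1\nline2\r\n"))    # mixed
```

Running it over e.g. Lib/venv/scripts/nt/Activate.ps1 (read in binary mode)
would flag the mixed-endings case reported above.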

Regards,
Rusi
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20150624/bba6b642/attachment.html>

From srkunze at mail.de  Wed Jun 24 20:14:14 2015
From: srkunze at mail.de (Sven R. Kunze)
Date: Wed, 24 Jun 2015 20:14:14 +0200
Subject: [Python-Dev] Importance of "async" keyword
Message-ID: <558AF376.30607@mail.de>

Hi everybody,

recently, I stumbled over the new 3.5 release and in doing so over PEP 0492.

After careful consideration and after reading many blog posts of various 
coders, I first would like to thank Yury Selivanov and everybody else 
who brought PEP 0492 to its final state. I therefore considered usage 
within our projects, however, still find hazy items in PEP 0492. So, I 
would like to contribute my thoughts on this in order to either increase 
my understanding or even improve Python's async capability.

In order to do this, I need a clarification regarding the rationale 
behind the async keyword. The PEP rationalizes its introduction with:

"If useful() [...] would become a regular python function, [...] 
important() would be broken."


What bothers me is, why should important() be broken in that case?


Regards,
Sven R. Kunze

From yselivanov.ml at gmail.com  Wed Jun 24 22:16:52 2015
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Wed, 24 Jun 2015 16:16:52 -0400
Subject: [Python-Dev] Importance of "async" keyword
In-Reply-To: <558AF376.30607@mail.de>
References: <558AF376.30607@mail.de>
Message-ID: <558B1034.6080504@gmail.com>

Hi Sven,

On 2015-06-24 2:14 PM, Sven R. Kunze wrote:
> Hi everybody,
>
> recently, I stumbled over the new 3.5 release and in doing so over PEP 
> 0492.
>
> After careful consideration and after reading many blog posts of 
> various coders, I first would like to thank Yury Selivanov and 
> everybody else who brought PEP 0492 to its final state. I therefore 
> considered usage within our projects, however, still find hazy items 
> in PEP 0492. So, I would like to contribute my thoughts on this in 
> order to either increase my understanding or even improve Python's 
> async capability.
>
> In order to do this, I need a clarification regarding the rationale 
> behind the async keyword. The PEP rationalizes its introduction with:
>
> "If useful() [...] would become a regular python function, [...] 
> important() would be broken."
>
>
> What bothers me is, why should important() be broken in that case?


Let me first quote the PEP to make the question more obvious.

The section at question is: [1]; it explains why "async" keyword is 
needed for function declarations.  Here's a code example from the PEP:

def useful():
     ...
     await log(...)
     ...

def important():
     await useful()

Sven, if we don't have 'async def', and instead say that "a function is 
a *coroutine function* when it has at least one 'await' expression", 
then when you refactor "useful()" by removing the "await" from it, it 
stops being a *coroutine function*, which means that it won't return an 
*awaitable* anymore.  Hence the "await useful()" call in the 
"important()" function will be broken.

'async def' guarantees that a function always returns a "coroutine"; it 
eliminates the need for the @asyncio.coroutine decorator (or similar), 
which, besides making code easier to read, also improves 
performance.  Not to mention the new 'async for' and 'async with' statements.
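Concretely, the guarantee means the refactoring hazard disappears: a minimal
sketch against the PEP 492 semantics (function names are mine; using
asyncio.run() from later Pythons for brevity):

```python
import asyncio

async def useful():
    # Every 'await' has been refactored out of the body, yet
    # 'async def' still guarantees useful() returns a coroutine object.
    return 42

async def important():
    # This call keeps working precisely because of that guarantee.
    return await useful()

print(asyncio.run(important()))  # 42
```

Without 'async def', removing the last 'await' from useful() would silently
turn it into a plain function and break the caller.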

This is also covered in the rationale section of the PEP [2]

[1] https://www.python.org/dev/peps/pep-0492/#importance-of-async-keyword
[2] https://www.python.org/dev/peps/pep-0492/#rationale-and-goals

Thanks,
Yury

P.S. This and many other things were discussed at length on the mailing 
lists; I suggest you browse through the archives.

From pjenvey at underboss.org  Wed Jun 24 22:34:38 2015
From: pjenvey at underboss.org (Philip Jenvey)
Date: Wed, 24 Jun 2015 13:34:38 -0700
Subject: [Python-Dev] speed.python.org (was: 2.7 is here until 2020,
	please don't call it a waste.)
In-Reply-To: <CAKJDb-OfGFW8PS-m7Yp40==Vxw7fB45WcpQeqUs=V1UG0P3sCA@mail.gmail.com>
References: <34384DEB0F607E42BD61D446586AD4E86B51172C@ORSMSX103.amr.corp.intel.com>
 <CANc-5Uwx5rVJfYfQK6_-SxnJSSLv62t7p4NddC=mNXda=WWJDQ@mail.gmail.com>
 <mk7asm$37i$1@ger.gmane.org>
 <CAP7+vJKDqU8TakYoQ6+z+zt_vs823MGZ9YzcFOBOwusYRtyqKg@mail.gmail.com>
 <CAMpsgwY+6CPWWEr2yyNgiTYCWJ9p67UNy8_p5hR2WQ9MtXXnbQ@mail.gmail.com>
 <CADiSq7eGd-R8PjNYKnn8XmMF7BD-s9cd65E+HNnXzRi-gSa4GA@mail.gmail.com>
 <CAGE7PNJqO36PXi3P8a6Ov1vkN2KigDnaMYQGyFcophYiSe+GNw@mail.gmail.com>
 <5568FAF6.6040802@python.org> <20150530015621.518b537f@fsol>
 <CADiSq7c35TP5pfH6AuEg7-hESf7-gkbLeh40i6rHREhvfqziRA@mail.gmail.com>
 <556906F0.4090504@sdamon.com>
 <CADiSq7daKMxSoFswRaYF2_PKtRGK2nMoJ69N4weKc7E=KKreYA@mail.gmail.com>
 <CABVPEKqW1qoHLSPppqtEELbLR=15cqUKMAk55CTVv+F1HqqUtw@mail.gmail.com>
 <556A45D0.7070606@hastings.org>
 <CAMSv6X3awjhAOdCW5oN2Kyz3GiwrRMO_mPCOqQKVaFk6r96OTw@mail.gmail.com>
 <556C3E93.50108@egenix.com>
 <CAGBzy2o5h=+pR8iLYfPjpQs8+-gwVDUjL1kXMPtMWtn26L7MyA@mail.gmail.com>
 <55702EBB.7070606@egenix.com> <20150604143200.E2BCFB14089@webabinitio.net>
 <CAK5idxTW7BHL=Y=6bqpXekmoDb9vptG71wCe8rjzJoNupWnpEA@mail.gmail.com>
 <CAKJDb-OfGFW8PS-m7Yp40==Vxw7fB45WcpQeqUs=V1UG0P3sCA@mail.gmail.com>
Message-ID: <0C87750C-3DAC-46FB-BBAD-EE312877D709@underboss.org>

> On Jun 22, 2015, at 6:58 PM, Zachary Ware <zachary.ware+pydev at gmail.com> wrote:
> 
> On Thu, Jun 4, 2015 at 10:51 AM, Maciej Fijalkowski <fijall at gmail.com> wrote:
>> On Thu, Jun 4, 2015 at 4:32 PM, R. David Murray <rdmurray at bitdance.com> wrote:
>>> OK, so what you are saying is that speed.python.org will run a buildbot
>>> slave so that when a change is committed to cPython, a speed run will be
>>> triggered?  Is "the runner" a normal buildbot slave, or something
>>> custom?  In the normal case the master controls what the slave
>>> runs...but regardless, you'll need to let us know how the slave
>>> invocation needs to be configured on the master.
>> 
>> Ideally nightly (benchmarks take a while). The setup for pypy looks like this:
>> 
>> 
>> https://bitbucket.org/pypy/buildbot/src/5fa1f1a4990f842dfbee416c4c2e2f6f75d451c4/bot2/pypybuildbot/builds.py?at=default#cl-734
>> 
>> so fairly easy. This already generates a json file that you can plot.
>> We can setup an upload automatically too.
> 
> I've been looking at what it will take to set up the buildmaster for
> this, and it looks like it's just a matter of checking out the
> benchmarks, building Python, testing it, and running the benchmarks.
> There is the question of which benchmark repo to use:
> https://bitbucket.org/pypy/benchmarks or
> https://hg.python.org/benchmarks; ideally, we should use
> hg.python.org/benchmarks for CPython benchmarks, but it looks like
> pypy/benchmarks has the necessary runner, so I suppose we'll be using
> it for now.  Is there interest from both sides to merge those
> repositories?

PyPy's is a more extensive suite but it does not fully support Python 3 yet. It makes sense to merge them.

--
Philip Jenvey


From srkunze at mail.de  Wed Jun 24 23:21:54 2015
From: srkunze at mail.de (Sven R. Kunze)
Date: Wed, 24 Jun 2015 23:21:54 +0200
Subject: [Python-Dev] Importance of "async" keyword
In-Reply-To: <558B1034.6080504@gmail.com>
References: <558AF376.30607@mail.de> <558B1034.6080504@gmail.com>
Message-ID: <558B1F72.9000409@mail.de>

Thanks, Yury, for your quick response.

On 24.06.2015 22:16, Yury Selivanov wrote:
> Sven, if we don't have 'async def', and instead say that "a function 
> is a *coroutine function* when it has at least one 'await' 
> expression", then when you refactor "useful()" by removing the "await" 
> from it, it stops being a *coroutine function*, which means that it 
> won't return an *awaitable* anymore.  Hence the "await useful()" call 
> in the "important()" function will be broken.

I feared you would say that. Your reasoning assumes that *await* needs 
an *explicitly declared awaitable*.

Let us assume for a moment, we had no async keyword. So, any awaitable 
needs to have at least 1 await in it. Why can we not have awaitables 
with 0 awaits in them?

>
> 'async def' guarantees that a function always returns a "coroutine"; it 
> eliminates the need for the @asyncio.coroutine decorator (or 
> similar), which besides making code easier to read, also improves the 
> performance.  Not to mention new 'async for' and 'async with' statements.

Recently, I read Guido's blog about the history of Python and how he 
eliminated special cases from Python step by step. As I see it, the same 
could be done here.

What is the difference between a function (no awaits) and an awaitable (>= 1 
awaits) from an end-user's perspective (i.e. the programmer)?

My answer would be: none. When used the same way, they should behave in 
the same manner. As long as we get our nice tracebacks when something 
goes wrong, everything seems fine to me.


Please, correct me if I am wrong.

>
> This is also covered in the rationale section of the PEP [2]
>
> [1] https://www.python.org/dev/peps/pep-0492/#importance-of-async-keyword
> [2] https://www.python.org/dev/peps/pep-0492/#rationale-and-goals
>
> Thanks,
> Yury
>
> P.S. This and many other things were discussed at length on the 
> mailing lists; I suggest you browse through the archives.

I can imagine that. As said, I went through some of them, but it could 
be that I missed some as well.

Is there a way to search them through (by not using Google)?


Regards,
Sven
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20150624/b2a6addf/attachment.html>

From stephen at xemacs.org  Thu Jun 25 03:15:35 2015
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Thu, 25 Jun 2015 10:15:35 +0900
Subject: [Python-Dev]  CRLF problems in repo
In-Reply-To: <CAJ+Teoec=rSj0CsykXAp0oq7tqs_bJ1gP-rOgQ1_onuUmx4f-w@mail.gmail.com>
References: <CAJ+Teoec=rSj0CsykXAp0oq7tqs_bJ1gP-rOgQ1_onuUmx4f-w@mail.gmail.com>
Message-ID: <87r3p0wpu0.fsf@uwakimon.sk.tsukuba.ac.jp>

Rustom Mody writes:

 > Can submit a bug-report if it looks ok

Thanks for the post.  IMO this should have gone directly to the
tracker because (1) you have some support for the idea that at least
some of these are unintentional, and therefore candidates for
alignment with the rest of the code, (2) the nature of the issue is
that it's a mixed bag that is going to need to be addressed line by
line, and the tracker is much better at doing that (you can piece each
file out to a separate issue, and have a "master issue" that blocks on
each individual issue), and (3) these are "easy bugs", so that new
contributors (eg, your students) could do the actual patches while
discussion of the need is ongoing.

YMMV, of course, but (2) and (3) are generic criteria for deciding
whether to post or to use the tracker, so I decided to mention them.

From steve at pearwood.info  Thu Jun 25 04:16:59 2015
From: steve at pearwood.info (Steven D'Aprano)
Date: Thu, 25 Jun 2015 12:16:59 +1000
Subject: [Python-Dev] Importance of "async" keyword
In-Reply-To: <558B1F72.9000409@mail.de>
References: <558AF376.30607@mail.de> <558B1034.6080504@gmail.com>
 <558B1F72.9000409@mail.de>
Message-ID: <20150625021659.GS20701@ando.pearwood.info>

On Wed, Jun 24, 2015 at 11:21:54PM +0200, Sven R. Kunze wrote:
> Thanks, Yury, for your quick response.
> 
> On 24.06.2015 22:16, Yury Selivanov wrote:
> >Sven, if we don't have 'async def', and instead say that "a function 
> >is a *coroutine function* when it has at least one 'await' 
> >expression", then when you refactor "useful()" by removing the "await" 
> >from it, it stops being a *coroutine function*, which means that it 
> >won't return an *awaitable* anymore.  Hence the "await useful()" call 
> >in the "important()" function will be broken.
> 
> I feared you would say that. Your reasoning assumes that *await* needs 
> an *explicitly declared awaitable*.
> 
> Let us assume for a moment, we had no async keyword. So, any awaitable 
> needs to have at least 1 await in it. Why can we not have awaitables 
> with 0 awaits in them?

I haven't been following the async discussion in detail, but I would 
expect that the answer is for the same reason that you cannot have a 
generator function with 0 yields in it.
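The generator analogy is easy to check: whether a 'def' produces a generator
is decided syntactically by the presence of 'yield', not by anything at run
time. A quick sketch (names are mine):

```python
import types

def with_yield(n):
    # The presence of 'yield' makes this a generator function.
    for i in range(n):
        yield i

def without_yield(n):
    # Same observable values, but a plain function.
    return list(range(n))

# The mere syntactic presence of 'yield' changes what a call returns:
print(isinstance(with_yield(3), types.GeneratorType))     # True
print(isinstance(without_yield(3), types.GeneratorType))  # False
```

Refactoring the 'yield' out of with_yield() would change its callers'
behavior in exactly the way Yury describes for 'await'.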


> >'async def' guarantees that a function always returns a "coroutine"; it 
> >eliminates the need for the @asyncio.coroutine decorator (or 
> >similar), which besides making code easier to read, also improves the 
> >performance.  Not to mention new 'async for' and 'async with' statements.
> 
> Recently, I read Guido's blog about the history of Python and how he 
> eliminated special cases from Python step by step. As I see it, the same 
> could be done here.
> 
> What is the difference between a function (no awaits) and an awaitable (>= 1 
> awaits) from an end-user's perspective (i.e. the programmer)?

The first is synchronous, the second is asynchronous. 

> My answer would be: none. When used the same way, they should behave in 
> the same manner. As long as we get our nice tracebacks when something 
> goes wrong, everything seems fine to me.

How is that possible?

import time

def func():
    # Simulate a time-consuming calculation.
    time.sleep(10000)
    return 42

# Call func synchronously, blocking until the calculation is done:
x = func()
# Call func asynchronously, without blocking:
y = func()


I think that one of us is missing something here. As I said, I haven't 
followed the whole discussion, so it might be me. But on face value, I 
don't think what you say is reasonable.

> >P.S. This and many other things were discussed at length on the 
> >mailing lists; I suggest you browse through the archives.
> 
> I can imagine that. As said, I went through some of them, but it could 
> be that I missed some as well.
> 
> Is there a way to search them through (by not using Google)?

You can download the mailing list archive for the relevant months, and 
use your mail client to search them.


-- 
Steve

From arcriley at gmail.com  Thu Jun 25 04:36:43 2015
From: arcriley at gmail.com (Arc Riley)
Date: Wed, 24 Jun 2015 19:36:43 -0700
Subject: [Python-Dev] Adding c-api async protocol support
Message-ID: <CAHhipD85E86KJjyCpdnb=sESUm6EgaBjY8RN_HxSojOO8N=UMQ@mail.gmail.com>

A type slot for tp_as_async has already been added (which is good!) but we
do not currently seem to have protocol functions for awaitable types.

I would expect to find an Awaitable Protocol listed under Abstract Objects
Layer, with functions like PyAwait_Check, PyAwaitIter_Check, and
PyAwaitIter_Next, etc.

Specifically, it's currently difficult to test whether an object is awaitable
or an awaitable iterable, or to use such objects from the C API, without
relying on method testing/calling mechanisms.
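For comparison, the Python level does grow such checks in 3.5 --
inspect.isawaitable() and collections.abc.Awaitable -- which suggest what
C-level protocol functions would mirror. A quick sketch:

```python
import inspect
from collections.abc import Awaitable

async def coro():
    return 1

c = coro()
print(inspect.isawaitable(c))    # True
print(isinstance(c, Awaitable))  # True
print(inspect.isawaitable(42))   # False
c.close()  # suppress the 'coroutine was never awaited' warning
```

A hypothetical PyAwait_Check() would presumably behave like the
inspect.isawaitable() call above, but implemented via the tp_as_async slot
rather than attribute lookup.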
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20150624/f7b548e1/attachment.html>

From rustompmody at gmail.com  Thu Jun 25 05:36:36 2015
From: rustompmody at gmail.com (Rustom Mody)
Date: Thu, 25 Jun 2015 09:06:36 +0530
Subject: [Python-Dev] CRLF problems in repo
In-Reply-To: <87r3p0wpu0.fsf@uwakimon.sk.tsukuba.ac.jp>
References: <CAJ+Teoec=rSj0CsykXAp0oq7tqs_bJ1gP-rOgQ1_onuUmx4f-w@mail.gmail.com>
 <87r3p0wpu0.fsf@uwakimon.sk.tsukuba.ac.jp>
Message-ID: <CAJ+TeodiaPJEzn1=MY9u461xunGhqwEwriBoLiTF6EDbX2bLHQ@mail.gmail.com>

On Thu, Jun 25, 2015 at 6:45 AM, Stephen J. Turnbull <stephen at xemacs.org> wrote:
>
> Rustom Mody writes:
>
>  > Can submit a bug-report if it looks ok
>
> Thanks for the post.  IMO this should have gone directly to the
> tracker

Done http://bugs.python.org/issue24507#msg245793

> because (1) you have some support for the idea that at least
> some of these are unintentional, and therefore candidates for
> alignment with the rest of the code, (2) the nature of the issue is
> that it's a mixed bag that is going to need to be addressed line by
> line, and the tracker is much better at doing that (you can piece each
> file out to a separate issue, and have a "master issue" that blocks on
> each individual issue),

I guess you meant generic-you here?

> and (3) these are "easy bugs", so that new
> contributors (eg, your students) could do the actual patches while
> discussion of the need is ongoing.

I thought of that but if this meant a rather obese patch with really
minor real-contents I assumed this would be premature [for me :-) ]

>
> YMMV, of course, but (2) and (3) are generic criteria for deciding
> whether to post or to use the tracker, so I decided to mention them.

Thanks for explaining


Regards
Rusi

From stefan_ml at behnel.de  Thu Jun 25 06:48:36 2015
From: stefan_ml at behnel.de (Stefan Behnel)
Date: Thu, 25 Jun 2015 06:48:36 +0200
Subject: [Python-Dev] Adding c-api async protocol support
In-Reply-To: <CAHhipD85E86KJjyCpdnb=sESUm6EgaBjY8RN_HxSojOO8N=UMQ@mail.gmail.com>
References: <CAHhipD85E86KJjyCpdnb=sESUm6EgaBjY8RN_HxSojOO8N=UMQ@mail.gmail.com>
Message-ID: <mmg175$50q$1@ger.gmane.org>

Arc Riley schrieb am 25.06.2015 um 04:36:
> A type slot for tp_as_async has already been added (which is good!) but we
> do not currently seem to have protocol functions for awaitable types.
> 
> I would expect to find an Awaitable Protocol listed under Abstract Objects
> Layer, with functions like PyAwait_Check, PyAwaitIter_Check, and
> PyAwaitIter_Next, etc.
> 
> Specifically, it's currently difficult to test whether an object is awaitable
> or an awaitable iterable, or to use such objects from the C API, without
> relying on method testing/calling mechanisms.

FYI, Cython's latest master branch already implements PEP 492 (in Python
2.6+, using slots in Py3.2+). So you can just write "await x" and "async
for ..." and get the expected slot calling code for it, or method calling
fallback code in Python 2.x.

That being said, I guess you're right that CPython would eventually want to
extend the C-API here in order to cover the new protocols also for manually
written C code. I'd wait with that a bit, though, until after Py3.5 is
finally released and the actual needs of C code that wants to use the new
features become clearer.

Stefan



From zachary.ware+pydev at gmail.com  Thu Jun 25 07:38:16 2015
From: zachary.ware+pydev at gmail.com (Zachary Ware)
Date: Thu, 25 Jun 2015 00:38:16 -0500
Subject: [Python-Dev] Backporting the 3.5+ Windows build project files
	to 2.7
In-Reply-To: <5589429A.7050801@stackless.com>
References: <CAKJDb-PtG=MLH=kq=sJ38fs+BiF7jeSbN3PqHLUVjcKAnQpn8Q@mail.gmail.com>
 <-6939408081783016743@unknownmsgid>
 <CAKJDb-PuLFqDX_KOLkb9uxOZgPANALqKcgZx1fn_xRFLpTsi5g@mail.gmail.com>
 <CADiSq7dBw9FcmFMrDfjk9CyB3UzvUbEuaoXLYgHYTm4KAsLtCg@mail.gmail.com>
 <CAKJDb-NnH9cd4kgyzkXJQGTZAoLthiwUmLadoL-dmeHR+Htz9A@mail.gmail.com>
 <5589429A.7050801@stackless.com>
Message-ID: <CAKJDb-P4MoJ9NT+1Ej38KnX0TTQatsvi8VpxuV_RE-VLcqn8YQ@mail.gmail.com>

On Jun 23, 2015, at 06:27, Christian Tismer <tismer at stackless.com> wrote:
> On 23.06.15 06:42, Zachary Ware wrote:
>> Christian, what say you?  Would having the project files from 3.5
>> backported to 2.7 (but still using MSVC 9) be positive, negative, or
>> indifferent for Stackless?
>
> I am very positive about your efforts.
>
We wanted to create a Stackless 2.8 version at the end of 2013
> with support for newer compilers, to begin with. After a while,
> we stopped this, and I left the branch in a private, unmaintained
> repository.
> https://bitbucket.org/stackless-dev/stackless-private/branch/2.8-slp

Thanks, Chris!

Since there has been no violent opposition (and some positive
support), I've cleaned up the patch and posted it:

https://bugs.python.org/issue24508

Reviews (and review-by-testing) welcome!

-- 
Zach

From stephen at xemacs.org  Thu Jun 25 10:54:52 2015
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Thu, 25 Jun 2015 17:54:52 +0900
Subject: [Python-Dev] CRLF problems in repo
In-Reply-To: <CAJ+TeodiaPJEzn1=MY9u461xunGhqwEwriBoLiTF6EDbX2bLHQ@mail.gmail.com>
References: <CAJ+Teoec=rSj0CsykXAp0oq7tqs_bJ1gP-rOgQ1_onuUmx4f-w@mail.gmail.com>
 <87r3p0wpu0.fsf@uwakimon.sk.tsukuba.ac.jp>
 <CAJ+TeodiaPJEzn1=MY9u461xunGhqwEwriBoLiTF6EDbX2bLHQ@mail.gmail.com>
Message-ID: <87oak4w4kj.fsf@uwakimon.sk.tsukuba.ac.jp>

Private, since it doesn't really have anything to do with evaluating
actual content.  FYI, this thread probably should have stayed on
core-mentorship for a bit and then jumped directly to the tracker.

Rustom Mody writes:

 > > because (1) you have some support for the idea that at least
 > > some of these are unintentional, and therefore candidates for
 > > alignment with the rest of the code,

This you means you,

 > > (2) the nature of the issue is
 > > that it's a mixed bag that is going to need to be addressed line by
 > > line, and the tracker is much better at doing that (you can piece each
 > > file out to a separate issue, and have a "master issue" that blocks on
 > > each individual issue),
 > 
 > I guess you meant generic-you here?

and this you is indeed generic-you, although it applies to your case.

BTW, my terminology "blocks" is inaccurate (though in common use on
Python channels).  The proper term is "the master issue *depends* on
each individual issue, and the individual issues are *dependencies* of
the master issue."

 > > and (3) these are "easy bugs", so that new contributors (eg, your
 > > students) could do the actual patches while discussion of the
 > > need is ongoing.
 > 
 > I thought of that but if this meant a rather obese patch with really
 > minor real-contents I assumed this would be premature [for me :-) ]

The point of splitting up the task according to different files is so
that each one would get a patch, which could be done by several
students (or 3rd parties) individually without coordinating
explicitly.  Of course you might personally want to take
responsibility to screen your students' work, but that's not necessary
with this scheme unless your students are very inexperienced and
immature, like junior high school.  (I don't know what level you're
talking about, I'd guess college so this warning doesn't apply.  And
of course even junior high school students can be reliable -- I think
the record youngest committer got the privilege at age 15.  I think
that's very cool. :-)  And of course this is all up to your judgment.

Then the submitted patch would be reviewed "this is better, apply"
(which needs to be done by a developer with commit privilege, so at
that point they have to review the patch) or "this is
unnecessary/inappropriate, mark invalid/wontfix and close."  After
resolving the particular issue, the "master" issue will show fewer
blockers.

I don't think this particular technique is discussed in the
Developer's Guide, but that's the best source for a concise, complete
description of the submit-review-commit process in Python.  I think
you mentioned you already read the Developers' Guide but it's easier
to repost the URL than search for that post.

https://docs.python.org/devguide/

Regards,
Steve


From arcriley at gmail.com  Thu Jun 25 12:08:59 2015
From: arcriley at gmail.com (Arc Riley)
Date: Thu, 25 Jun 2015 03:08:59 -0700
Subject: [Python-Dev] Adding c-api async protocol support
In-Reply-To: <mmg175$50q$1@ger.gmane.org>
References: <CAHhipD85E86KJjyCpdnb=sESUm6EgaBjY8RN_HxSojOO8N=UMQ@mail.gmail.com>
 <mmg175$50q$1@ger.gmane.org>
Message-ID: <CAHhipD-irgn5kVZRSgpc4hN=nxNmoTZAcntq2Op92b9f_PyFww@mail.gmail.com>

On Wed, Jun 24, 2015 at 9:48 PM, Stefan Behnel <stefan_ml at behnel.de> wrote:
>
>
> I'd wait with that a bit, though, until after Py3.5 is finally released
> and the actual needs of C code that wants to use the new
> features become clearer.
>

I strongly disagree.

What we would end up with is 3rd-party extension modules, developed over the
next 2-3 years, including their own awaitable-testing functions/macros; and
in order to continue providing 3.5 support into the future, that code will
persist for the next 5-10 years regardless of whether it's later added in
Python 3.6.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-dev/attachments/20150625/c061fdfa/attachment.html>

From mal at egenix.com  Thu Jun 25 15:54:30 2015
From: mal at egenix.com (M.-A. Lemburg)
Date: Thu, 25 Jun 2015 15:54:30 +0200
Subject: [Python-Dev] Backporting the 3.5+ Windows build project files
 to 2.7
In-Reply-To: <CAKJDb-PtG=MLH=kq=sJ38fs+BiF7jeSbN3PqHLUVjcKAnQpn8Q@mail.gmail.com>
References: <CAKJDb-PtG=MLH=kq=sJ38fs+BiF7jeSbN3PqHLUVjcKAnQpn8Q@mail.gmail.com>
Message-ID: <558C0816.9090703@egenix.com>

On 22.06.2015 19:03, Zachary Ware wrote:
> Hi,
> 
> As you may know, Steve Dower put significant effort into rewriting the
> project files used by the Windows build as part of moving to VC14 as
> the official compiler for Python 3.5.  Compared to the project files
> for 3.4 (and older), the new project files are smaller, cleaner,
> simpler, more easily extensible, and in some cases quite a bit more
> correct.
> 
> I'd like to backport those new project files to 2.7, and Intel is
> willing to fund that work as part of making Python ICC compilable on
> all platforms they support since it makes building Python 2.7 with ICC
> much easier.  I have no intention of changing the version of MSVC used
> for official 2.7 builds, it *will* remain at MSVC 9.0 (VS 2008) as
> determined the last time we had a thread about it.  VS 2010 and newer
> can access older compilers (back to 2008) as a 'PlatformToolset' if
> they're installed, so all we have to do is set the PlatformToolset in
> the projects at 'v90'.  Backporting the projects would make it easier
> to build 2.7 with a newer compiler, but that's already possible if
> somebody wants to put enough work into it, the default will be the old
> compiler, and we can emit big scary warnings if somebody does use a
> compiler other than v90.
> 
> With the stipulation that the officially supported compiler won't
> change, I want to make sure there's no major opposition to replacing
> the old project files in PCbuild.  The old files would move to
> PC\VS9.0, so they'll still be available and usable if necessary.
> 
> Using the backported project files to build 2.7 would require two
> versions of Visual Studio to be installed; VS2010 (or newer) would be
> required in addition to VS2008.  All Windows core developers should
> already have VS2010 for Python 3.4 (and/or VS2015 for 3.5) and I
> expect that anyone else who cares enough to still have VS2008 probably
> has (or can easily get) one of the free editions of VS 2010 or newer,
> so I don't consider this to be a major issue.

To understand this correctly:

In order to build Python 2.7.11 (or a later version), we'd
now need VS 2008 *and* VS 2010 installed?

This doesn't sound like it would make things easier for
people who need to build their own Python on Windows, e.g.
because they are freezing their apps for distribution, or
using their own internal special builds.

For VS 2008 we now have a long-term solution thanks to MS.
The same is not true for VS 2010 (download links are not official
anymore), so we'd complicate things again by requiring the
mixed compiler setup.

> The backported files could be added alongside the old files in
> PCbuild, in a better-named 'NewPCbuild' directory, or in a
> subdirectory of PC. I would rather replace the old project files in
> PCbuild, though; I'd like for the backported files to be the
> recommended way to build, complete with support from PCbuild/build.bat
> which would make the new project files the default for the buildbots.

If you keep the old VS 2008 build files around, I'd be
fine with having an optional upgrade path to newer
compilers :-)

It's rather unlikely that the project files will change much
in coming years, so there shouldn't be much maintenance
effort.

> I have a work-in-progress branch with the backported files in PCbuild,
> which you can find at
> https://hg.python.org/sandbox/zware/log/project_files_backport.  There
> are still a couple bugs to work out (and a couple unrelated changes to
> PC/pyconfig.h), but most everything works as expected.
> 
> Looking forward to hearing others' opinions,

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 25 2015)
>>> Python Projects, Coaching and Consulting ...  http://www.egenix.com/
>>> mxODBC Plone/Zope Database Adapter ...       http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________
2015-06-25: Released mxODBC 3.3.3 ...             http://egenix.com/go79
2015-06-16: Released eGenix pyOpenSSL 0.13.10 ... http://egenix.com/go78
2015-07-20: EuroPython 2015, Bilbao, Spain ...             25 days to go

   eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
    D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
           Registered at Amtsgericht Duesseldorf: HRB 46611
               http://www.egenix.com/company/contact/

From zachary.ware+pydev at gmail.com  Thu Jun 25 17:12:39 2015
From: zachary.ware+pydev at gmail.com (Zachary Ware)
Date: Thu, 25 Jun 2015 10:12:39 -0500
Subject: [Python-Dev] Backporting the 3.5+ Windows build project files
	to 2.7
In-Reply-To: <558C0816.9090703@egenix.com>
References: <CAKJDb-PtG=MLH=kq=sJ38fs+BiF7jeSbN3PqHLUVjcKAnQpn8Q@mail.gmail.com>
 <558C0816.9090703@egenix.com>
Message-ID: <CAKJDb-N3wp2nKXCVSytvOgpGkWAvmiCcntjMXp+NFA_V6a4q5w@mail.gmail.com>

On Thu, Jun 25, 2015 at 8:54 AM, M.-A. Lemburg <mal at egenix.com> wrote:
> On 22.06.2015 19:03, Zachary Ware wrote:
>> Using the backported project files to build 2.7 would require two
>> versions of Visual Studio to be installed; VS2010 (or newer) would be
>> required in addition to VS2008.  All Windows core developers should
>> already have VS2010 for Python 3.4 (and/or VS2015 for 3.5) and I
>> expect that anyone else who cares enough to still have VS2008 probably
>> has (or can easily get) one of the free editions of VS 2010 or newer,
>> so I don't consider this to be a major issue.
>
> To understand this correctly:
>
> In order to build Python 2.7.11 (or a later version), we'd
> now need VS 2008 *and* VS 2010 installed ?

Using the backported project files, yes, two versions of VS would be
required.  However, the second version doesn't have to be VS 2010, it
could be 2010, 2012, 2013, 2015, or any from the future (until
Microsoft changes the format of the project files again).

> This doesn't sound like it would make things easier for
> people who need to build their own Python on Windows, e.g.
> because they are freezing their apps for distribution, or
> using their own internal special builds.
>
> For VS 2008 we now have a long-term solution thanks to MS.
> The same is not true for VS 2010 (download links are not official
> anymore), so we'd complicate things again by requiring the
> mixed compiler setup.

For anyone building 2.7 and any other (current) version of Python on
the same machine, all the necessary pieces are already in place.

The main motivation for backporting is to make it easier to build
Python with ICC.  Pre-backport, Intel gave me a list of 9 steps they
were doing to build 2.7.10 with ICC.  Post-backport, it's just
`PCbuild\build.bat -e [-p x64] [-d] "/p:PlatformToolset=Intel C++
Compiler XE 15.0" "/p:BasePlatformToolset=v90"` (the bracketed options
being optional, for 64-bit and debug builds, respectively).

There are some benefits for MSVC builds too, though: it's easier to do
a parallel build, and OpenSSL and Tcl/Tk/Tix are built by the project
files, so the whole build can be done much quicker.  The new project
files for Tcl and Tk also take care of copying their DLLs into PCbuild
so that _tkinter can find them without having to mess around with
PATH.  My patch also fixes a couple of long-standing bugs with finding
the right libraries for 64-bit Tcl/Tk and the test suite undoing the
work of FixTk.py after any test that imports Tkinter, both of which
were exacerbated by the rest of the patch.

> If you keep the old VS 2008 build files around, I'd be
> fine with having an optional upgrade path to newer
> compilers :-)

The old files are moved to PC/VS9.0, and they work as expected as far
as I've tested them.  Btw, the upgrade path to newer compilers is just
a side-effect which I'm kind of torn about.  On one hand, making it
easier to build 2.7 with the "wrong" compiler is bound to lead to
issues somewhere somehow.  On the other hand, people are already doing
so anyway, with their own upgraded project files.

> It's rather unlikely that the project files will change much
> in coming years, so there shouldn't be much maintenance
> effort.

Hopefully not :)

-- 
Zach

From yselivanov.ml at gmail.com  Thu Jun 25 17:43:54 2015
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Thu, 25 Jun 2015 11:43:54 -0400
Subject: [Python-Dev] Adding c-api async protocol support
In-Reply-To: <CAHhipD85E86KJjyCpdnb=sESUm6EgaBjY8RN_HxSojOO8N=UMQ@mail.gmail.com>
References: <CAHhipD85E86KJjyCpdnb=sESUm6EgaBjY8RN_HxSojOO8N=UMQ@mail.gmail.com>
Message-ID: <558C21BA.8040604@gmail.com>

Hi Arc,

On 2015-06-24 10:36 PM, Arc Riley wrote:
> A type slot for tp_as_async has already been added (which is good!) but we
> do not currently seem to have protocol functions for awaitable types.
>
> I would expect to find an Awaitable Protocol listed under Abstract Objects
> Layer, with functions like PyAwait_Check, PyAwaitIter_Check, and
> PyAwaitIter_Next, etc.
>
> Specifically, it's currently difficult to test whether an object is awaitable
> or an awaitable iterable, or to use said objects from the C API without
> relying on method testing/calling mechanisms.
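None of those C-level names exist yet, of course; purely for illustration, a rough pure-Python sketch of the kind of check being requested (awaitable = coroutine, or a type implementing __await__) might look like:

```python
import types


def is_awaitable(obj):
    """Rough pure-Python approximation of a hypothetical PyAwait_Check:
    an object is awaitable if it is a coroutine object, or if its type
    implements __await__."""
    return (isinstance(obj, types.CoroutineType)
            or hasattr(type(obj), '__await__'))
```

The C-level equivalent would presumably consult the new tp_as_async slot instead of doing attribute lookups.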

The request is reasonable, I created a couple of bug tracker
issues:

http://bugs.python.org/issue24511
http://bugs.python.org/issue24510

Let's continue the discussion there.

Yury

From srkunze at mail.de  Thu Jun 25 17:55:53 2015
From: srkunze at mail.de (Sven R. Kunze)
Date: Thu, 25 Jun 2015 17:55:53 +0200
Subject: [Python-Dev] Importance of "async" keyword
In-Reply-To: <20150625021659.GS20701@ando.pearwood.info>
References: <558AF376.30607@mail.de> <558B1034.6080504@gmail.com>
 <558B1F72.9000409@mail.de> <20150625021659.GS20701@ando.pearwood.info>
Message-ID: <558C2489.2080305@mail.de>


On 25.06.2015 04:16, Steven D'Aprano wrote:
> On Wed, Jun 24, 2015 at 11:21:54PM +0200, Sven R. Kunze wrote:
>> Thanks, Yury, for you quick response.
>>
>> On 24.06.2015 22:16, Yury Selivanov wrote:
>>> Sven, if we don't have 'async def', and instead say that "a function
>>> is a *coroutine function* when it has at least one 'await'
>>> expression", then when you refactor "useful()" by removing the "await"
>>> from it, it stops being a *coroutine function*, which means that it
>>> won't return an *awaitable* anymore.  Hence the "await useful()" call
>>> in the "important()" function will be broken.
>> I feared you would say that. Your reasoning assumes that *await* needs
>> an *explicitly declared awaitable*.
>>
>> Let us assume for a moment, we had no async keyword. So, any awaitable
>> needs to have at least 1 await in it. Why can we not have awaitables
>> with 0 awaits in them?
> I haven't been following the async discussion in detail, but I would
> expect that the answer is for the same reason that you cannot have a
> generator function with 0 yields in it.
Exactly. That is why I do not stop here.
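For illustration, the generator analogue can be checked directly: whether a function is a generator function is decided by the mere syntactic presence of yield, just as (under the proposal being discussed) coroutine-ness would be decided by the presence of await:

```python
import inspect


def with_yield():
    # The mere presence of 'yield' makes this a generator function,
    # even though the 'yield' may never be reached.
    yield 1


def without_yield():
    # No 'yield': an ordinary function returning a value.
    return 1


assert inspect.isgeneratorfunction(with_yield)
assert not inspect.isgeneratorfunction(without_yield)
```

Removing the single yield from with_yield() would silently turn it into an ordinary function and break every caller that iterates over its result, which is exactly the refactoring hazard Yury describes for await.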
>
>>> 'async def' guarantees that function always return a "coroutine"; it
>>> eliminates the need of using @asyncio.coroutine decorator (or
>>> similar), which besides making code easier to read, also improves the
>>> performance.  Not to mention new 'async for' and 'async with' statements.
>> Recently, I read Guido's blog about the history of Python and how he
>> eliminated special cases from Python step by step. As I see it, the same
>> could be done here.
>>
>> What is the difference of a function (no awaits) or an awaitable (> 1
>> awaits) from an end-user's perspective (i.e. the programmer)?
> The first is synchronous, the second is asynchronous.
Correct. The main question to me here is: do I, as a caller, need to care?
>
>> My answer would be: none. When used the same way, they should behave in
>> the same manner. As long as we get our nice tracebacks when something
>> goes wrong, everything seems fine to me.
> How is that possible?
>
> def func():
>      # Simulate a time consuming calculation.
>      time.sleep(10000)
>      return 42
>
> # Call func synchronously, blocking until the calculation is done:
> x = func()
> # Call func asynchronously, without blocking:
> y = func()
>
>
> I think that one of us is missing something here. As I said, I haven't
> followed the whole discussion, so it might be me. But on face value, I
> don't think what you say is reasonable.

Ah, wait. That is not what I intended and I completely agree with Guido 
when he says:
"I want to be able to *syntactically* tell where the suspension points 
are in coroutines." http://code.activestate.com/lists/python-dev/135906/

So, it is either:

# Call func synchronously, blocking until the calculation is done:
x = func()

# Call func asynchronously, without blocking:
y = await func()

So, from my perspective, no matter how many (even zero) suspension 
points there are in func(), the first variant would still be blocking 
and the second one would not.

Many programmers (when adhering to classical programming) see the world 
like this: they call a function and simply do not care about how it 
works or what it does internally. The main thing is that it does what it 
is supposed to do.

Whether this function achieves its goal by working asynchronously or by 
blocking, and whether I, as a programmer, allow the function to work 
asynchronously, is a completely different matter. That, in turn, is 
extremely well reflected by the 'await' keyword (IMHO at least).



Another issue that bothers me is code reuse. Independently of whether 
'async def' makes sense or not, it would not allow us to reuse asyncio 
functions as if they were normal functions, and vice versa (if I 
understood that correctly). So, we would have to implement things twice: 
once for the asyncio world and once for the classic world. To me, it 
would be important to use one function in whichever world suits me 
better. I am uncertain whether that makes sense in general, but right 
now it does to me.


One last thing regarding 'async def' that came to my mind recently is 
that it is comparable to Java's requirement to declare, on each function, 
all checked exceptions that could be raised inside it. Thankfully, we do 
not need to do that in Python. IIRC, it is considered bad practice as it 
ties modules together more tightly than they need to be. If I add a 
single exception deep down in the call chain, I need to re-declare it 
everywhere that function is used. That is a mess with regard to clean 
code. Somehow, 'async def' reminds me of that, too, but maybe I am 
missing something here.
>>> P.S. This and many other things were discussed at length on the
>>> mailing lists, I suggest you to browse through the archives.
>> I can imagine that. As I said, I went through some of them, but it could
>> be that I missed some of them as well.
>>
>> Is there a way to search them through (by not using Google)?
> You can download the mailing list archive for the relevant months, and
> use your mail client to search them.

Thanks Steven. I already found one on the Web: 
http://code.activestate.com/lists/python-dev/

And as it seems, I already read through all relevant discussions. :-/

From victor.stinner at gmail.com  Thu Jun 25 17:56:39 2015
From: victor.stinner at gmail.com (Victor Stinner)
Date: Thu, 25 Jun 2015 17:56:39 +0200
Subject: [Python-Dev] Adding c-api async protocol support
In-Reply-To: <558C21BA.8040604@gmail.com>
References: <CAHhipD85E86KJjyCpdnb=sESUm6EgaBjY8RN_HxSojOO8N=UMQ@mail.gmail.com>
 <558C21BA.8040604@gmail.com>
Message-ID: <CAMpsgwYP19cOp9jvKxBWdKNdQ4QpQ9KVizTLcbf452w2L5LPTQ@mail.gmail.com>

It looks like the code is currently moving fast. I suggest to wait
until Python 3.6 to stabilize the Python C API for async/await. It's a
pain to maintain a public API. I hate having to add 2 or 3 versions of
a single function :-(

Victor

2015-06-25 17:43 GMT+02:00 Yury Selivanov <yselivanov.ml at gmail.com>:
> Hi Arc,
>
>
> On 2015-06-24 10:36 PM, Arc Riley wrote:
>>
>> A type slot for tp_as_async has already been added (which is good!) but we
>> do not currently seem to have protocol functions for awaitable types.
>>
>> I would expect to find an Awaitable Protocol listed under Abstract Objects
>> Layer, with functions like PyAwait_Check, PyAwaitIter_Check, and
>> PyAwaitIter_Next, etc.
>>
>> Specifically, it's currently difficult to test whether an object is
>> awaitable or an awaitable iterable, or to use said objects from the
>> C API without relying on method testing/calling mechanisms.
>
>
> The request is reasonable, I created a couple of bug tracker
> issues:
>
> http://bugs.python.org/issue24511
> http://bugs.python.org/issue24510
>
> Let's continue the discussion there.
>
> Yury
>

From Steve.Dower at microsoft.com  Thu Jun 25 17:42:26 2015
From: Steve.Dower at microsoft.com (Steve Dower)
Date: Thu, 25 Jun 2015 15:42:26 +0000
Subject: [Python-Dev] Backporting the 3.5+ Windows build project files
 to 2.7
In-Reply-To: <558C0816.9090703@egenix.com>
References: <CAKJDb-PtG=MLH=kq=sJ38fs+BiF7jeSbN3PqHLUVjcKAnQpn8Q@mail.gmail.com>
 <558C0816.9090703@egenix.com>
Message-ID: <BLUPR0301MB165239E03E427AE513CDA5F8F5AE0@BLUPR0301MB1652.namprd03.prod.outlook.com>

M.-A. Lemburg wrote:
> For VS 2008 we now have a long-term solution thanks to MS.

Without the change to the project files, the compiler at http://aka.ms/vcpython27 isn't sufficient to build Python itself. In theory, with even more patching to the projects (or otherwise making up for the fact that the linked compiler doesn't include the platform toolset files so that MSBuild can find/use it), we may be able to make Python 2.7 buildable with that compiler and any version of MSBuild. Currently we have to build via pcbuild.sln, which requires a full Visual Studio 2008 installation (or Express for C++).

This also makes it more viable to use the Windows SDK compilers. If you install the Windows SDK 7.0 (which includes MSVC9) and Windows SDK 7.1 (which includes the platform toolset files for MSVC9 - toolsets were invented later than the 7.0 release) then you should be able to build from the command line or any version of Visual Studio from 2010 onwards.

> The same is not true for VS 2010 (download links are not official anymore), so we'd
> complicate things again by requiring the mixed compiler setup.

VS 2010 Express editions are still linked at the bottom of https://www.visualstudio.com/en-us/products/visual-studio-express-vs.aspx, though they appear to be protected behind a Microsoft account login (apparently so we can offer you the latest/greatest tools you have access to and not just the link you followed). Visual Studio 2013 Express for Desktop should also work (I'll be testing that as soon as I get a chance).

Cheers,
Steve

From andrew.svetlov at gmail.com  Thu Jun 25 19:25:00 2015
From: andrew.svetlov at gmail.com (Andrew Svetlov)
Date: Thu, 25 Jun 2015 20:25:00 +0300
Subject: [Python-Dev] Adding c-api async protocol support
In-Reply-To: <CAMpsgwYP19cOp9jvKxBWdKNdQ4QpQ9KVizTLcbf452w2L5LPTQ@mail.gmail.com>
References: <CAHhipD85E86KJjyCpdnb=sESUm6EgaBjY8RN_HxSojOO8N=UMQ@mail.gmail.com>
 <558C21BA.8040604@gmail.com>
 <CAMpsgwYP19cOp9jvKxBWdKNdQ4QpQ9KVizTLcbf452w2L5LPTQ@mail.gmail.com>
Message-ID: <CAL3CFcXWSJ7hhbpUR6AwJA=cc9unVE-4C=7ubVp9X0+fRTs-HQ@mail.gmail.com>

I'm with Victor: we are in beta now.

Making a C API is useful and important, but it can wait for a new Python
release.  The same goes for asyncio acceleration: we definitely need it,
but it requires inventing a C API as well, I believe.

Personally I've concentrated on making third-party libraries on top of
asyncio -- aiohttp etc.

P.S.
Thank you Victor so much for your work on asyncio.
Your changes on keeping source tracebacks and raising warnings for
unclosed resources are very helpful.

On Thu, Jun 25, 2015 at 6:56 PM, Victor Stinner
<victor.stinner at gmail.com> wrote:
> It looks like the code is currently moving fast. I suggest to wait
> until Python 3.6 to stabilize the Python C API for async/await. It's a
> pain to maintain a public API. I hate having to add 2 or 3 versions of
> a single function :-(
>
> Victor
>
> 2015-06-25 17:43 GMT+02:00 Yury Selivanov <yselivanov.ml at gmail.com>:
>> Hi Arc,
>>
>>
>> On 2015-06-24 10:36 PM, Arc Riley wrote:
>>>
>>> A type slot for tp_as_async has already been added (which is good!) but we
>>> do not currently seem to have protocol functions for awaitable types.
>>>
>>> I would expect to find an Awaitable Protocol listed under Abstract Objects
>>> Layer, with functions like PyAwait_Check, PyAwaitIter_Check, and
>>> PyAwaitIter_Next, etc.
>>>
>>> Specifically, it's currently difficult to test whether an object is
>>> awaitable or an awaitable iterable, or to use said objects from the
>>> C API without relying on method testing/calling mechanisms.
>>
>>
>> The request is reasonable, I created a couple of bug tracker
>> issues:
>>
>> http://bugs.python.org/issue24511
>> http://bugs.python.org/issue24510
>>
>> Let's continue the discussion there.
>>
>> Yury
>>



-- 
Thanks,
Andrew Svetlov

From elizabeth.shashkova at gmail.com  Thu Jun 25 19:23:44 2015
From: elizabeth.shashkova at gmail.com (Elizabeth Shashkova)
Date: Thu, 25 Jun 2015 20:23:44 +0300
Subject: [Python-Dev] Is it a Python bug that the main thread of a process
 created in a daemon thread is a daemon itself?
Message-ID: <CAB38ZR2k8Hm+BsOwuwwy+tQ9EiqHAN-EPPt0wkKOOCH4NkM-YA@mail.gmail.com>

Hello everybody!

When I call fork() inside a daemon thread, the main thread in the child
process has the "daemon" property set to True. This is very confusing,
since the program keeps running while the only thread is a daemon.
According to the docs, if all the threads are daemons the program should
exit. Here is an example:



import os
import threading


def child():
    assert not threading.current_thread().daemon  # This shouldn't fail


def parent():
    new_pid = os.fork()
    if new_pid == 0:
        child()
    else:
        os.waitpid(new_pid, 0)


t = threading.Thread(target=parent)
t.setDaemon(True)
t.start()
t.join()

Is it a bug in the CPython implementation?
Also, let's consider a second example: suppose I have another non-daemon
thread in the child process and want to detect this situation. Does
anybody know a way to find such fake daemon threads that are really main
threads?


Best regards,
Elizaveta Shashkova.

From mal at egenix.com  Thu Jun 25 20:48:41 2015
From: mal at egenix.com (M.-A. Lemburg)
Date: Thu, 25 Jun 2015 20:48:41 +0200
Subject: [Python-Dev] Backporting the 3.5+ Windows build project files
 to 2.7
In-Reply-To: <CAKJDb-N3wp2nKXCVSytvOgpGkWAvmiCcntjMXp+NFA_V6a4q5w@mail.gmail.com>
References: <CAKJDb-PtG=MLH=kq=sJ38fs+BiF7jeSbN3PqHLUVjcKAnQpn8Q@mail.gmail.com>	<558C0816.9090703@egenix.com>
 <CAKJDb-N3wp2nKXCVSytvOgpGkWAvmiCcntjMXp+NFA_V6a4q5w@mail.gmail.com>
Message-ID: <558C4D09.5040201@egenix.com>

On 25.06.2015 17:12, Zachary Ware wrote:
> On Thu, Jun 25, 2015 at 8:54 AM, M.-A. Lemburg <mal at egenix.com> wrote:
>> On 22.06.2015 19:03, Zachary Ware wrote:
>>> Using the backported project files to build 2.7 would require two
>>> versions of Visual Studio to be installed; VS2010 (or newer) would be
>>> required in addition to VS2008.  All Windows core developers should
>>> already have VS2010 for Python 3.4 (and/or VS2015 for 3.5) and I
>>> expect that anyone else who cares enough to still have VS2008 probably
>>> has (or can easily get) one of the free editions of VS 2010 or newer,
>>> so I don't consider this to be a major issue.
>>
>> To understand this correctly:
>>
>> In order to build Python 2.7.11 (or a later version), we'd
>> now need VS 2008 *and* VS 2010 installed ?
> 
> Using the backported project files, yes, two versions of VS would be
> required.  However, the second version doesn't have to be VS 2010, it
> could be 2010, 2012, 2013, 2015, or any from the future (until
> Microsoft changes the format of the project files again).
> 
>> This doesn't sound like it would make things easier for
>> people who need to build their own Python on Windows, e.g.
>> because they are freezing their apps for distribution, or
>> using their own internal special builds.
>>
>> For VS 2008 we now have a long-term solution thanks to MS.
>> The same is not true for VS 2010 (download links are not official
>> anymore), so we'd complicate things again by requiring the
>> mixed compiler setup.
> 
> For anyone building 2.7 and any other (current) version of Python on
> the same machine, all the necessary pieces are already in place.
> 
> The main motivation for backporting is to make it easier to build
> Python with ICC.  Pre-backport, Intel gave me a list of 9 steps they
> were doing to build 2.7.10 with ICC.  Post-backport, it's just
> `PCbuild\build.bat -e [-p x64] [-d] "/p:PlatformToolset=Intel C++
> Compiler XE 15.0" "/p:BasePlatformToolset=v90"` (the bracketed options
> being optional, for 64-bit and debug builds, respectively).

Sounds good.

BTW: I remember there was some discussion a while ago to
get ICC licenses to core devs. Has there been any progress
on this ?

> There are some benefits for MSVC builds too, though: it's easier to do
> a parallel build, and OpenSSL and Tcl/Tk/Tix are built by the project
> files, so the whole build can be done much quicker.  The new project
> files for Tcl and Tk also take care of copying their DLLs into PCbuild
> so that _tkinter can find them without having to mess around with
> PATH.  My patch also fixes a couple of long-standing bugs with finding
> the right libraries for 64-bit Tcl/Tk and the test suite undoing the
> work of FixTk.py after any test that imports Tkinter, both of which
> were exacerbated by the rest of the patch.
> 
>> If you keep the old VS 2008 build files around, I'd be
>> fine with having an optional upgrade path to newer
>> compilers :-)
> 
> The old files are moved to PC/VS9.0, and they work as expected as far
> as I've tested them. 

So it's still possible to build with "just" VS 2008 installed
or will the VS 2010 (or later) be required for those old
files as well ?

> Btw, the upgrade path to newer compilers is just
> a side-effect which I'm kind of torn about.  On one hand, making it
> easier to build 2.7 with the "wrong" compiler is bound to lead to
> issues somewhere somehow.  On the other hand, people are already doing
> so anyway, with their own upgraded project files.

I guess we will have the discussion about switching compilers
for Python 2.7 again at some point. People writing extensions
will sooner or later want to use features from more recent
compilers for Python 2.7 as well :-)

>> It's rather unlikely that the project files will change much
>> in coming years, so there shouldn't be much maintenance
>> effort.
> 
> Hopefully not :) 

-- 
Marc-Andre Lemburg
eGenix.com


From zachary.ware+pydev at gmail.com  Thu Jun 25 21:08:55 2015
From: zachary.ware+pydev at gmail.com (Zachary Ware)
Date: Thu, 25 Jun 2015 14:08:55 -0500
Subject: [Python-Dev] Backporting the 3.5+ Windows build project files
	to 2.7
In-Reply-To: <558C4D09.5040201@egenix.com>
References: <CAKJDb-PtG=MLH=kq=sJ38fs+BiF7jeSbN3PqHLUVjcKAnQpn8Q@mail.gmail.com>
 <558C0816.9090703@egenix.com>
 <CAKJDb-N3wp2nKXCVSytvOgpGkWAvmiCcntjMXp+NFA_V6a4q5w@mail.gmail.com>
 <558C4D09.5040201@egenix.com>
Message-ID: <CAKJDb-N0pH9-9+_4GHszQ9d0Ju=AwVT6Xn+CN8EZaZJi2=aQZg@mail.gmail.com>

On Thu, Jun 25, 2015 at 1:48 PM, M.-A. Lemburg <mal at egenix.com> wrote:
> On 25.06.2015 17:12, Zachary Ware wrote:
>> The old files are moved to PC/VS9.0, and they work as expected as far
>> as I've tested them.
>
> So it's still possible to build with "just" VS 2008 installed
> or will the VS 2010 (or later) be required for those old
> files as well ?

Only VS 2008 is required for the old files.  Building in PC/VS9.0/
post-backport is exactly the same as building in PCbuild/
pre-backport.

> I guess we will have the discussion about switching compilers
> for Python 2.7 again at some point. People writing extensions
> will sooner or later want to use features from more recent
> compilers for Python 2.7 as well :-)

I expect the same outcome every time it comes up --- too much breakage
for not enough benefit :)

-- 
Zach

From andrew.svetlov at gmail.com  Thu Jun 25 21:11:43 2015
From: andrew.svetlov at gmail.com (Andrew Svetlov)
Date: Thu, 25 Jun 2015 22:11:43 +0300
Subject: [Python-Dev] Importance of "async" keyword
In-Reply-To: <558C2489.2080305@mail.de>
References: <558AF376.30607@mail.de> <558B1034.6080504@gmail.com>
 <558B1F72.9000409@mail.de>
 <20150625021659.GS20701@ando.pearwood.info>
 <558C2489.2080305@mail.de>
Message-ID: <CAL3CFcUywCtz04nkPamznR1vTeAHYiJFn4_j-9sFnAad6ZkBjg@mail.gmail.com>

> Another issue that bothers me, is code reuse. Independent from whether the
> 'async def' makes sense or not, it would not allow us to reuse asyncio
> functions as if they were normal functions and vice versa (if I understood
> that correctly). So, we would have to implement things twice for the asyncio
> world and the classic world. To me, it would be important to use one
> function in either world where it suits me better. I am uncertain if that
> makes sense but right now it does to me.


Yes, you cannot call an async function from synchronous code. There are
two worlds: classic and async.
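A small sketch of what actually happens when a coroutine function is called from plain synchronous code (Python 3.5+ syntax; asyncio.run is from a later release, used here only to drive the coroutine):

```python
import asyncio


async def work():
    return 42


# Calling from synchronous code does NOT run the body; it only
# creates a coroutine object:
obj = work()
assert asyncio.iscoroutine(obj)
obj.close()  # discard it cleanly to avoid a "never awaited" warning

# To actually get the result, an event loop must drive the coroutine:
result = asyncio.run(work())
```

This is the sense in which the two worlds are separate: the synchronous caller never gets 42 back directly from work(), only a coroutine object.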

From zachary.ware+pydev at gmail.com  Thu Jun 25 21:12:23 2015
From: zachary.ware+pydev at gmail.com (Zachary Ware)
Date: Thu, 25 Jun 2015 14:12:23 -0500
Subject: [Python-Dev] Backporting the 3.5+ Windows build project files
	to 2.7
In-Reply-To: <BLUPR0301MB165239E03E427AE513CDA5F8F5AE0@BLUPR0301MB1652.namprd03.prod.outlook.com>
References: <CAKJDb-PtG=MLH=kq=sJ38fs+BiF7jeSbN3PqHLUVjcKAnQpn8Q@mail.gmail.com>
 <558C0816.9090703@egenix.com>
 <BLUPR0301MB165239E03E427AE513CDA5F8F5AE0@BLUPR0301MB1652.namprd03.prod.outlook.com>
Message-ID: <CAKJDb-M7XZgv6uOta2DxwQ9M07Y=og-AJEvNcbHpZVcKzbnzPQ@mail.gmail.com>

On Thu, Jun 25, 2015 at 10:42 AM, Steve Dower <Steve.Dower at microsoft.com> wrote:
> This also makes it more viable to use the Windows SDK compilers. If you install the Windows SDK 7.0 (which includes MSVC9) and Windows SDK 7.1 (which includes the platform toolset files for MSVC9 - toolsets were invented later than the 7.0 release) then you should be able to build from the command line or any version of Visual Studio from 2010 onwards.

I suspected this, but hadn't brought it up because I hadn't looked
into it in depth.  This sounds good though!  The SDKs are supported
quite a bit longer than VS versions, aren't they?

-- 
Zach

From skip.montanaro at gmail.com  Thu Jun 25 21:18:14 2015
From: skip.montanaro at gmail.com (Skip Montanaro)
Date: Thu, 25 Jun 2015 14:18:14 -0500
Subject: [Python-Dev] Is it a Python bug that the main thread of a
 process created in a daemon thread is a daemon itself?
In-Reply-To: <CAB38ZR2k8Hm+BsOwuwwy+tQ9EiqHAN-EPPt0wkKOOCH4NkM-YA@mail.gmail.com>
References: <CAB38ZR2k8Hm+BsOwuwwy+tQ9EiqHAN-EPPt0wkKOOCH4NkM-YA@mail.gmail.com>
Message-ID: <CANc-5UxVm91ygdMHKbxcABP=a83O51rYLT=N5TMMf5oVDjYixg@mail.gmail.com>

On Thu, Jun 25, 2015 at 12:23 PM, Elizabeth Shashkova <
elizabeth.shashkova at gmail.com> wrote:

> When I call fork() inside a daemon thread, the main thread in the child
> process has the "daemon" property set to True.


Didn't this (or a similar) topic come up here recently? For reference:

http://bugs.python.org/issue24512

and

https://docs.python.org/3.4/library/multiprocessing.html#contexts-and-start-methods

from which I quote (emphasis mine):

The parent process uses os.fork() to fork the Python interpreter. The child
process, when it begins, is *effectively identical to the parent process*.
All resources of the parent are inherited by the child process. *Note that
safely forking a multithreaded process is problematic*.


So, even if you get past this particular problem, the conventional wisdom
is, "don't do that."

Skip

From solipsis at pitrou.net  Thu Jun 25 21:34:53 2015
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Thu, 25 Jun 2015 21:34:53 +0200
Subject: [Python-Dev] Is it a Python bug that the main thread of a
 process created in a daemon thread is a daemon itself?
References: <CAB38ZR2k8Hm+BsOwuwwy+tQ9EiqHAN-EPPt0wkKOOCH4NkM-YA@mail.gmail.com>
Message-ID: <20150625213453.6acad425@fsol>


Hello Elizabeth,

On Thu, 25 Jun 2015 20:23:44 +0300
Elizabeth Shashkova <elizabeth.shashkova at gmail.com> wrote:
> Hello everybody!
> 
> When I call fork() inside a daemon thread, the main thread in the child
> process has the "daemon" property set to True. This is very confusing,
> since the program keeps running while the only thread is a daemon.
> According to the docs, if all the threads are daemons the program should
> exit. Here is an example:
> 
[...]
> 
> Is it a bug in the CPython implementation?

Yes, it looks like a bug. You can report it at http://bugs.python.org

> Also let's assume the second example. I have another non-daemon thread
> in the child process and want to detect this situation. Does anybody
> know a way to find such fake daemon threads that are really main
> threads?

There should really be only one fake daemon thread, since there's
only one main thread. And after calling fork(), you know what the main
thread is :-)
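
The flag inheritance itself is documented `threading` behavior: when
`daemon` isn't passed explicitly, a new thread inherits the daemon flag
of the thread that created it. A minimal sketch (no fork involved; the
names are illustrative):

```python
import threading

flags = {}

def outer():
    # Per the threading docs, a Thread created without an explicit
    # daemon argument inherits the daemon flag of the creating thread.
    inner = threading.Thread(target=lambda: None)
    flags["inherited_daemon"] = inner.daemon

t = threading.Thread(target=outer, daemon=True)
t.start()
t.join()
print(flags["inherited_daemon"])  # True: created from a daemon thread
```

After a fork() inside such a daemon thread, that thread becomes the
child's main thread, which is where the surprising daemon=True main
thread comes from.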

Regards

Antoine.



From Steve.Dower at microsoft.com  Thu Jun 25 21:41:18 2015
From: Steve.Dower at microsoft.com (Steve Dower)
Date: Thu, 25 Jun 2015 19:41:18 +0000
Subject: [Python-Dev] Backporting the 3.5+ Windows build project files
 to 2.7
In-Reply-To: <CAKJDb-M7XZgv6uOta2DxwQ9M07Y=og-AJEvNcbHpZVcKzbnzPQ@mail.gmail.com>
References: <CAKJDb-PtG=MLH=kq=sJ38fs+BiF7jeSbN3PqHLUVjcKAnQpn8Q@mail.gmail.com>
 <558C0816.9090703@egenix.com>
 <BLUPR0301MB165239E03E427AE513CDA5F8F5AE0@BLUPR0301MB1652.namprd03.prod.outlook.com>
 <CAKJDb-M7XZgv6uOta2DxwQ9M07Y=og-AJEvNcbHpZVcKzbnzPQ@mail.gmail.com>
Message-ID: <BLUPR0301MB16524C92554F33C66B34376BF5AE0@BLUPR0301MB1652.namprd03.prod.outlook.com>

Zachary Ware wrote:
> On Thu, Jun 25, 2015 at 10:42 AM, Steve Dower <Steve.Dower at microsoft.com> wrote:
>> This also makes it more viable to use the Windows SDK compilers. If you
>> install the Windows SDK 7.0 (which includes MSVC9) and Windows SDK 7.1 (which
>> includes the platform toolset files for MSVC9 - toolsets were invented later
>> than the 7.0 release) then you should be able to build from the command line or
>> any version of Visual Studio from 2010 onwards.
> 
> I suspected this, but hadn't brought it up because I hadn't looked into it in
> depth. This sounds good though! The SDKs are supported quite a bit longer than
> VS versions, aren't they?

The downloads are available for longer than VS, but they are also
unsupported. They also have known installation issues that won't be
fixed, so they aren't really a *good* option, just a possibility.

Cheers,
Steve

From victor.stinner at gmail.com  Thu Jun 25 22:24:46 2015
From: victor.stinner at gmail.com (Victor Stinner)
Date: Thu, 25 Jun 2015 22:24:46 +0200
Subject: [Python-Dev] Adding c-api async protocol support
In-Reply-To: <CAL3CFcXWSJ7hhbpUR6AwJA=cc9unVE-4C=7ubVp9X0+fRTs-HQ@mail.gmail.com>
References: <CAHhipD85E86KJjyCpdnb=sESUm6EgaBjY8RN_HxSojOO8N=UMQ@mail.gmail.com>
 <558C21BA.8040604@gmail.com>
 <CAMpsgwYP19cOp9jvKxBWdKNdQ4QpQ9KVizTLcbf452w2L5LPTQ@mail.gmail.com>
 <CAL3CFcXWSJ7hhbpUR6AwJA=cc9unVE-4C=7ubVp9X0+fRTs-HQ@mail.gmail.com>
Message-ID: <CAMpsgwZQj29DQQFZAJqkr4YEPKorXDMHNNBF2z4gGcmvnWi4Fw@mail.gmail.com>

Hi,

2015-06-25 19:25 GMT+02:00 Andrew Svetlov <andrew.svetlov at gmail.com>:
> P.S.
> Thank you Victor so much for your work on asyncio.
> Your changes on keeping source tracebacks and raising warnings for
> unclosed resources are very helpful.

Ah! It's good to know. You're welcome.

We can still enhance the source traceback by building it using the
traceback of the current task.

Victor

From gmludo at gmail.com  Fri Jun 26 01:07:55 2015
From: gmludo at gmail.com (Ludovic Gasc)
Date: Fri, 26 Jun 2015 01:07:55 +0200
Subject: [Python-Dev] Adding c-api async protocol support
In-Reply-To: <CAMpsgwZQj29DQQFZAJqkr4YEPKorXDMHNNBF2z4gGcmvnWi4Fw@mail.gmail.com>
References: <CAHhipD85E86KJjyCpdnb=sESUm6EgaBjY8RN_HxSojOO8N=UMQ@mail.gmail.com>
 <558C21BA.8040604@gmail.com>
 <CAMpsgwYP19cOp9jvKxBWdKNdQ4QpQ9KVizTLcbf452w2L5LPTQ@mail.gmail.com>
 <CAL3CFcXWSJ7hhbpUR6AwJA=cc9unVE-4C=7ubVp9X0+fRTs-HQ@mail.gmail.com>
 <CAMpsgwZQj29DQQFZAJqkr4YEPKorXDMHNNBF2z4gGcmvnWi4Fw@mail.gmail.com>
Message-ID: <CAON-fpESRS7JLj7dc_fDx2Q1+O2ohAjJGKg7oYKPOZf0bDqOVw@mail.gmail.com>

For once, while we are in a congratulations tunnel, a big thank you to
the AsyncIO core devs:

Over the past several months, we've pushed into production an average of
two AsyncIO-based daemons in my company, speaking several protocols.
Most of the time these are small daemons; some, however, are complex.
So far, the AsyncIO toolbox has been pretty stable and predictable,
especially for debugging.
The maturity is better than I expected, especially given the size of the
AsyncIO ecosystem: so far only a few of us are using it for "concrete"
applications.

At least to me, that's the best proof that the foundations are good.

We should now try to publish more tutorials/examples to attract more
newcomers, but I'm the first to blame: I completely lack the time to do
that. BTW, I hope that EuroPython will be a good event to propagate some
good vibes around AsyncIO.

--
Ludovic Gasc (GMLudo)
http://www.gmludo.eu/

2015-06-25 22:24 GMT+02:00 Victor Stinner <victor.stinner at gmail.com>:

> Hi,
>
> 2015-06-25 19:25 GMT+02:00 Andrew Svetlov <andrew.svetlov at gmail.com>:
> > P.S.
> > Thank you Victor so much for your work on asyncio.
> > Your changes on keeping source tracebacks and raising warnings for
> > unclosed resources are very helpful.
>
> Ah! It's good to know. You're welcome.
>
> We can still enhance the source traceback by building it using the
> traceback of the current task.
>
> Victor
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/gmludo%40gmail.com
>

From greg.ewing at canterbury.ac.nz  Fri Jun 26 00:02:59 2015
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Fri, 26 Jun 2015 10:02:59 +1200
Subject: [Python-Dev] Importance of "async" keyword
In-Reply-To: <558C2489.2080305@mail.de>
References: <558AF376.30607@mail.de> <558B1034.6080504@gmail.com>
 <558B1F72.9000409@mail.de> <20150625021659.GS20701@ando.pearwood.info>
 <558C2489.2080305@mail.de>
Message-ID: <558C7A93.1070304@canterbury.ac.nz>

Sven R. Kunze wrote:

> # Call func synchronously, blocking until the calculation is done:
> x = func()
> 
> # Call func asynchronously, without blocking:
> y = await func()

Using the word "blocking" this way is potentially
confusing. The calling task is *always* blocked until
the operation completes. The difference lies in
whether *other* tasks are blocked or not.

> Whether this function achieves the goal by working asynchronously or 
> blocking, AND if I, as a programmer, allow the function to work 
> asynchronously, is a completely different matter. That in turn is 
> extremely well reflected by the 'await' keyword (IMHO at least).

> 
> 
> 
> Another issue that bothers me, is code reuse. Independent from whether 
> the 'async def' makes sense or not, it would not allow us to reuse 
> asyncio functions as if they were normal functions and vice versa (if I 
> understood that correctly).

You understand correctly. If the function is declared async,
you must call it with await; if not, you must not. That's not
ideal, but it's an unavoidable consequence of the way async/await
are implemented.

> So, we would have to implement things twice 
> for the asyncio world and the classic world.

Not exactly; it's possible to create a wrapper that takes an
async function and runs it to completion, allowing it to be
called from sync code. I can't remember offhand, but there's
likely something like this already in asyncio.
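
A minimal sketch of such a wrapper, driving the coroutine to completion
with run_until_complete (the name make_sync is just illustrative, not
an existing asyncio API):

```python
import asyncio
import functools

def make_sync(coro_func):
    # Illustrative sketch: run an async function to completion on a
    # fresh event loop so it can be called from ordinary sync code.
    # Must not be called from inside an already-running event loop.
    @functools.wraps(coro_func)
    def wrapper(*args, **kwargs):
        loop = asyncio.new_event_loop()
        try:
            return loop.run_until_complete(coro_func(*args, **kwargs))
        finally:
            loop.close()
    return wrapper

async def add(a, b):
    await asyncio.sleep(0)  # a real suspension point
    return a + b

add_sync = make_sync(add)
print(add_sync(1, 2))  # 3
```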

> One last thing regarding 'async def', that came to my mind recently, is 
> that it compares to that necessity of Java to decorate functions with 
> all possible exceptions that could be raised inside.

It's a similar situation, but nowhere near as severe. There
are only two possibilities, async or not. Once you've converted
a function to async, you won't need to change it again on
that score.

-- 
Greg

From steve at pearwood.info  Fri Jun 26 04:22:19 2015
From: steve at pearwood.info (Steven D'Aprano)
Date: Fri, 26 Jun 2015 12:22:19 +1000
Subject: [Python-Dev] Importance of "async" keyword
In-Reply-To: <558C2489.2080305@mail.de>
References: <558AF376.30607@mail.de> <558B1034.6080504@gmail.com>
 <558B1F72.9000409@mail.de> <20150625021659.GS20701@ando.pearwood.info>
 <558C2489.2080305@mail.de>
Message-ID: <20150626022218.GT20701@ando.pearwood.info>

On Thu, Jun 25, 2015 at 05:55:53PM +0200, Sven R. Kunze wrote:
> 
> On 25.06.2015 04:16, Steven D'Aprano wrote:
> >On Wed, Jun 24, 2015 at 11:21:54PM +0200, Sven R. Kunze wrote:
[...]
> >>What is the difference of a function (no awaits) or an awaitable (> 1
> >>awaits) from an end-user's perspective (i.e. the programmer)?
> >
> >The first is synchronous, the second is asynchronous.
>
> Correct. Main questions to me here: do I, as a caller, need to care?

I think so. If you call a synchronous function when you want something 
asynchronous, or vice versa, you're going to be disappointed or confused 
by the result.


[...]
> ># Call func synchronously, blocking until the calculation is done:
> >x = func()
> ># Call func asynchronously, without blocking:
> >y = func()
> >
> >
> >I think that one of us is missing something here. As I said, I haven't
> >followed the whole discussion, so it might be me. But on face value, I
> >don't think what you say is reasonable.
> 
> Ah, wait. That is not what I intended and I completely agree with Guido 
> when he says:
> "I want to be able to *syntactically* tell where the suspension points 
> are in coroutines." http://code.activestate.com/lists/python-dev/135906/
> 
> So, it is either:
> 
> # Call func synchronously, blocking until the calculation is done:
> x = func()
> 
> # Call func asynchronously, without blocking:
> y = await func()
> 
> So, from my perspective, no matter, how many (even zero) suspension 
> points there are in func(), the first variant would still be blocking 
> and the second one not.

Where would the function suspend if there are zero suspension points?

How can you tell what the suspension points *in* the coroutine are from 
"await func()"? 


> Many programmers (when adhering to classical programming) conceive the 
> world as if they call a function and simply do not care about how it 
> works and what it does. The main thing is, it does what it is supposed 
> to do.

Concurrency is not classical single-processing programming.

> Whether this function achieves the goal by working asynchronously or 
> blocking, AND if I, as a programmer, allow the function to work 
> asynchronously, is a completely different matter. That in turn is 
> extremely well reflected by the 'await' keyword (IMHO at least).

It might help if you give a concrete example. Can you show a code 
snippet of your proposal? A single function with zero suspension points 
being called both asynchronously and synchronously, where does it suspend? 
It doesn't need to be a big complex function, something simple will do.


> Another issue that bothers me, is code reuse. Independent from whether 
> the 'async def' makes sense or not, it would not allow us to reuse 
> asyncio functions as if they were normal functions and vice versa (if I 
> understood that correctly).

I expect that it will be simple to write a wrapper that converts an 
asynchronous function to a synchronous one. It's just a matter of waiting 
until it's done.


-- 
Steve

From ncoghlan at gmail.com  Fri Jun 26 12:46:15 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Fri, 26 Jun 2015 20:46:15 +1000
Subject: [Python-Dev] Importance of "async" keyword
In-Reply-To: <CAL3CFcUywCtz04nkPamznR1vTeAHYiJFn4_j-9sFnAad6ZkBjg@mail.gmail.com>
References: <558AF376.30607@mail.de> <558B1034.6080504@gmail.com>
 <558B1F72.9000409@mail.de>
 <20150625021659.GS20701@ando.pearwood.info>
 <558C2489.2080305@mail.de>
 <CAL3CFcUywCtz04nkPamznR1vTeAHYiJFn4_j-9sFnAad6ZkBjg@mail.gmail.com>
Message-ID: <CADiSq7dbwPEU3DWnZdO4W5wyqW5RwjbSK8AxOZ-NCByPPpUqvg@mail.gmail.com>

On 26 Jun 2015 05:15, "Andrew Svetlov" <andrew.svetlov at gmail.com> wrote:
>
> > Another issue that bothers me, is code reuse. Independent from whether
the
> > 'async def' makes sense or not, it would not allow us to reuse asyncio
> > functions as if they were normal functions and vice versa (if I
understood
> > that correctly). So, we would have to implement things twice for the
asyncio
> > world and the classic world. To me, it would be important to use one
> > function in either world where it suits me better. I am uncertain if
that
> > makes sense but right now it does to me.
>
>
> Yes, you cannot call async function from synchronous code. There are
> two worlds: classic and async.

Exactly, the sync/async split is inherent in the problem domain. Nobody
really likes this consequence of explicitly asynchronous programming
models, and this is a good article on why it's annoying:
http://journal.stuffwithstuff.com/2015/02/01/what-color-is-your-function/

However, it's also one of those cases like binary floating point: there are
good underlying reasons things are the way they are, but truly coming to
grips with them can take a long time, so in the meantime it's
necessary to just accept it as "just one of those things" (like learning
negative or complex numbers for the first time). For explicitly
asynchronous programming, Glyph's post at
https://glyph.twistedmatrix.com/2014/02/unyielding.html is a decent
starting point on that journey.

Cheers,
Nick.

From ncoghlan at gmail.com  Fri Jun 26 12:52:43 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Fri, 26 Jun 2015 20:52:43 +1000
Subject: [Python-Dev] Importance of "async" keyword
In-Reply-To: <558C7A93.1070304@canterbury.ac.nz>
References: <558AF376.30607@mail.de> <558B1034.6080504@gmail.com>
 <558B1F72.9000409@mail.de>
 <20150625021659.GS20701@ando.pearwood.info>
 <558C2489.2080305@mail.de> <558C7A93.1070304@canterbury.ac.nz>
Message-ID: <CADiSq7fcvRCy+cJQGX=NyGTFyKhuFarnG4E=d6bg2M_bX_0-1A@mail.gmail.com>

On 26 Jun 2015 10:46, "Greg Ewing" <greg.ewing at canterbury.ac.nz> wrote:
>
> Sven R. Kunze wrote:
>> So, we would have to implement things twice for the asyncio world and
the classic world.
>
>
> Not exactly; it's possible to create a wrapper that takes an
> async function and runs it to completion, allowing it to be
> called from sync code. I can't remember offhand, but there's
> likely something like this already in asyncio.

https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.BaseEventLoop.run_until_complete

It's also possible to use a more comprehensive synchronous-to-asynchronous
adapter like gevent to call asynchronous code from synchronous code.

Going in the other direction (calling sync code from async) uses a thread
or process pool:
https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.BaseEventLoop.run_in_executor
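
A small sketch of that direction, handing a blocking call off to the
loop's default thread pool (blocking_io is a stand-in for real blocking
work, and asyncio.run/get_running_loop are the later entry points, used
here for brevity):

```python
import asyncio
import time

def blocking_io():
    # Stand-in for a blocking call (file I/O, a sync library, etc.).
    time.sleep(0.1)
    return "done"

async def main():
    loop = asyncio.get_running_loop()
    # None selects the loop's default ThreadPoolExecutor; the event
    # loop stays free to run other tasks while the call blocks.
    return await loop.run_in_executor(None, blocking_io)

print(asyncio.run(main()))  # done
```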

Cheers,
Nick.

From srkunze at mail.de  Fri Jun 26 15:48:36 2015
From: srkunze at mail.de (Sven R. Kunze)
Date: Fri, 26 Jun 2015 15:48:36 +0200
Subject: [Python-Dev] Importance of "async" keyword
In-Reply-To: <CAL3CFcUywCtz04nkPamznR1vTeAHYiJFn4_j-9sFnAad6ZkBjg@mail.gmail.com>
References: <558AF376.30607@mail.de>	<558B1034.6080504@gmail.com>	<558B1F72.9000409@mail.de>	<20150625021659.GS20701@ando.pearwood.info>	<558C2489.2080305@mail.de>
 <CAL3CFcUywCtz04nkPamznR1vTeAHYiJFn4_j-9sFnAad6ZkBjg@mail.gmail.com>
Message-ID: <558D5834.2030909@mail.de>

On 25.06.2015 21:11, Andrew Svetlov wrote:
>> Another issue that bothers me, is code reuse. Independent from whether the
>> 'async def' makes sense or not, it would not allow us to reuse asyncio
>> functions as if they were normal functions and vice versa (if I understood
>> that correctly). So, we would have to implement things twice for the asyncio
>> world and the classic world. To me, it would be important to use one
>> function in either world where it suits me better. I am uncertain if that
>> makes sense but right now it does to me.
>
> Yes, you cannot call async function from synchronous code. There are
> two worlds: classic and async.

My point is: why does everyone assume that it has to be like that?


@Nick
Thanks for these links; nice reads that reflect exactly what I think 
about these topics. Btw. complex numbers basically work the same way 
(same API) as integers. I would like to see that for functions and 
awaitables as well.

Still, the general assumption is: "the sync/async split is inherent in 
the problem domain". Why? I do not see that inherent split.


@Greg
I do not like wrappers IF they exist just for the sake of having 
wrappers. We already have 2 types of wrappers: [normal function call] 
and [function call using await]. Both work like wrappers right in the 
place where you need them.


@Steven
 > Where would the function suspend
 > if there are zero suspension points?

It would not.

 > How can you tell what the suspension
 > points *in* the coroutine are from "await func()"?

Let me answer this with a question: How can you tell what the suspension 
points *in* the coroutine are from "async def"?

 > Can you show a code snippet of your proposal?
By analogy to PEP 0492:

def complex_calc(a):
     if a >= 0:
         return await open(unicode(a)).read()
     l = await complex_calc(a - 1)
     r = complex_calc(a - 2)
     return l + '; ' + r

def business():
     return complex_calc(5)

def business_new():
     return await complex_calc(10)


Maybe, I completely missed the point of the proposal, but this is the 
way I would expect it to work. Putting in an 'await' whenever I see fit 
and it just works.


Regards,
Sven


From ethan at stoneleaf.us  Fri Jun 26 16:20:48 2015
From: ethan at stoneleaf.us (Ethan Furman)
Date: Fri, 26 Jun 2015 07:20:48 -0700
Subject: [Python-Dev] Importance of "async" keyword
In-Reply-To: <558D5834.2030909@mail.de>
References: <558AF376.30607@mail.de>	<558B1034.6080504@gmail.com>	<558B1F72.9000409@mail.de>	<20150625021659.GS20701@ando.pearwood.info>	<558C2489.2080305@mail.de>
 <CAL3CFcUywCtz04nkPamznR1vTeAHYiJFn4_j-9sFnAad6ZkBjg@mail.gmail.com>
 <558D5834.2030909@mail.de>
Message-ID: <558D5FC0.90306@stoneleaf.us>

On 06/26/2015 06:48 AM, Sven R. Kunze wrote:

> def business():
>      return complex_calc(5)
>
> def business_new():
>      return await complex_calc(10)

> Maybe, I completely missed the point of the proposal, but this is the way I would expect it to work. Putting in an 'await' whenever I see fit and it just works.

Sadly, I have basically no experience in this area -- perhaps that's why Sven's arguments make sense to me.  ;)

As Nick said earlier: the caller always blocks; by extension (to my mind, at least) putting an `await` in front of something is saying, "it's okay if other tasks run while I'm blocking on this call."

Maybe we can get there someday.

--
~Ethan~

From rosuav at gmail.com  Fri Jun 26 16:31:01 2015
From: rosuav at gmail.com (Chris Angelico)
Date: Sat, 27 Jun 2015 00:31:01 +1000
Subject: [Python-Dev] Importance of "async" keyword
In-Reply-To: <558D5FC0.90306@stoneleaf.us>
References: <558AF376.30607@mail.de> <558B1034.6080504@gmail.com>
 <558B1F72.9000409@mail.de>
 <20150625021659.GS20701@ando.pearwood.info>
 <558C2489.2080305@mail.de>
 <CAL3CFcUywCtz04nkPamznR1vTeAHYiJFn4_j-9sFnAad6ZkBjg@mail.gmail.com>
 <558D5834.2030909@mail.de> <558D5FC0.90306@stoneleaf.us>
Message-ID: <CAPTjJmpBXYzw7_qNDe4RKkcjpNyW9bOH7WTb3Oq-ymwL7tAzdg@mail.gmail.com>

On Sat, Jun 27, 2015 at 12:20 AM, Ethan Furman <ethan at stoneleaf.us> wrote:
> As Nick said earlier: the caller always blocks; by extension (to my mind, at
> least) putting an `await` in front of something is saying, "it's okay if
> other tasks run while I'm blocking on this call."

Apologies if this is a really REALLY dumb question, but... How hard
would it be to then dispense with the await keyword, and simply
_always_ behave that way? Something like:

def data_from_socket():
    # Other tasks may run while we wait for data
    # The socket.read() function has yield points in it
    data = socket.read(1024, 1)
    return transmogrify(data)

def respond_to_socket():
    while True:
        data = data_from_socket()
        # We can pretend that socket writes happen instantly,
        # but if ever we can't write, it'll let other tasks wait while
        # we're blocked on it.
        socket.write("Got it, next please!")

Do these functions really need to be aware that there are yield points
in what they're calling?

I come from a background of thinking with threads, so I'm accustomed
to doing blocking I/O and assuming/expecting that other threads will
carry on while we wait. If asynchronous I/O can be made that
convenient, it'd be awesome.

But since it hasn't already been made that easy in every other
language, I expect there's some fundamental problem with this
approach, something that intrinsically requires every step in the
chain to know what's a (potential) block point.

ChrisA

From pmiscml at gmail.com  Fri Jun 26 16:51:13 2015
From: pmiscml at gmail.com (Paul Sokolovsky)
Date: Fri, 26 Jun 2015 17:51:13 +0300
Subject: [Python-Dev] Importance of "async" keyword
In-Reply-To: <CAPTjJmpBXYzw7_qNDe4RKkcjpNyW9bOH7WTb3Oq-ymwL7tAzdg@mail.gmail.com>
References: <558AF376.30607@mail.de> <558B1034.6080504@gmail.com>
 <558B1F72.9000409@mail.de>
 <20150625021659.GS20701@ando.pearwood.info>
 <558C2489.2080305@mail.de>
 <CAL3CFcUywCtz04nkPamznR1vTeAHYiJFn4_j-9sFnAad6ZkBjg@mail.gmail.com>
 <558D5834.2030909@mail.de> <558D5FC0.90306@stoneleaf.us>
 <CAPTjJmpBXYzw7_qNDe4RKkcjpNyW9bOH7WTb3Oq-ymwL7tAzdg@mail.gmail.com>
Message-ID: <20150626175113.010ef154@x230>

Hello,

On Sat, 27 Jun 2015 00:31:01 +1000
Chris Angelico <rosuav at gmail.com> wrote:

> On Sat, Jun 27, 2015 at 12:20 AM, Ethan Furman <ethan at stoneleaf.us>
> wrote:
> > As Nick said earlier: the caller always blocks; by extension (to my
> > mind, at least) putting an `await` in front of something is saying,
> > "it's okay if other tasks run while I'm blocking on this call."
> 
> Apologies if this is a really REALLY dumb question, but... How hard
> would it be to then dispense with the await keyword, and simply
> _always_ behave that way? Something like:
> 
> def data_from_socket():
>     # Other tasks may run while we wait for data
>     # The socket.read() function has yield points in it
>     data = socket.read(1024, 1)
>     return transmogrify(data)
> 
> def respond_to_socket():
>     while True:
>         data = data_from_socket()
>         # We can pretend that socket writes happen instantly,
>         # but if ever we can't write, it'll let other tasks wait while
>         # we're blocked on it.
>         socket.write("Got it, next please!")
> 
> Do these functions really need to be aware that there are yield points
> in what they're calling?
> 
> I come from a background of thinking with threads, so I'm accustomed
> to doing blocking I/O and assuming/expecting that other threads will
> carry on while we wait. If asynchronous I/O can be made that
> convenient, it'd be awesome.

Some say "convenient", others say "dangerous".

Consider:

buf = bytearray(MULTIMEGABYTE)

In one function you do:

socket.write(buf),

in another function (coroutine), you keep mutating buf.

So, currently in Python, if you write:

socket.write(buf)

then you know it will finish without interruption for the entire buffer.
And if you write:

await socket.write(buf)

then you know there may be interruption points inside socket.write(),
in particular something else may mutate it while it's being written.
With threads, you always have to assume this last scenario, and do
extra effort to protect against it (mutexes and stuff).
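
The hazard can be made concrete with a toy chunked writer that suspends
between chunks, while a sibling coroutine mutates the buffer during the
suspension (slow_write is a stand-in here, not a real socket API):

```python
import asyncio

buf = bytearray(b"aaaa")
chunks = []

async def slow_write(data):
    # Write in 2-byte chunks with a suspension point between them;
    # at each await, any other ready task may run and mutate `data`.
    for i in range(0, len(data), 2):
        chunks.append(bytes(data[i:i + 2]))
        await asyncio.sleep(0)

async def mutator():
    buf[2:4] = b"zz"  # runs while slow_write is suspended

async def main():
    await asyncio.gather(slow_write(buf), mutator())

asyncio.run(main())
print(chunks)  # the second chunk observed the mutation
```

Without the explicit await between chunks, the whole write would run
atomically with respect to other coroutines.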


Note that this was Guido's original thinking, but seeing how it all
combines in asyncio (far from a nice, consistent way), it's not
surprising that some people keep misunderstanding how asyncio
works and how to use it, while others grow disappointed by it. Here's
my latest disappointment wrt asyncio usage in MicroPython:
http://bugs.python.org/issue24449


-- 
Best regards,
 Paul                          mailto:pmiscml at gmail.com

From rosuav at gmail.com  Fri Jun 26 17:10:33 2015
From: rosuav at gmail.com (Chris Angelico)
Date: Sat, 27 Jun 2015 01:10:33 +1000
Subject: [Python-Dev] Importance of "async" keyword
In-Reply-To: <20150626175113.010ef154@x230>
References: <558AF376.30607@mail.de> <558B1034.6080504@gmail.com>
 <558B1F72.9000409@mail.de>
 <20150625021659.GS20701@ando.pearwood.info>
 <558C2489.2080305@mail.de>
 <CAL3CFcUywCtz04nkPamznR1vTeAHYiJFn4_j-9sFnAad6ZkBjg@mail.gmail.com>
 <558D5834.2030909@mail.de> <558D5FC0.90306@stoneleaf.us>
 <CAPTjJmpBXYzw7_qNDe4RKkcjpNyW9bOH7WTb3Oq-ymwL7tAzdg@mail.gmail.com>
 <20150626175113.010ef154@x230>
Message-ID: <CAPTjJmowfe6qqLL_hqcfTiR1MMmFEt4oCJV3bfkUmXLtXYjVYw@mail.gmail.com>

On Sat, Jun 27, 2015 at 12:51 AM, Paul Sokolovsky <pmiscml at gmail.com> wrote:
> Some say "convenient", others say "dangerous".
>
> Consider:
>
> buf = bytearray(MULTIMEGABYTE)
>
> In one function you do:
>
> socket.write(buf),
>
> in another function (coroutine), you keep mutating buf.
>
> So, currently in Python you know if you do:
>
> socket.write(buf)
>
> Then you know it will finish without interruptions for entire buffer.
> And if you write:
>
> await socket.write(buf)
>
> then you know there may be interruption points inside socket.write(),
> in particular something else may mutate it while it's being written.
> With threads, you always have to assume this last scenario, and do
> extra effort to protect against it (mutexes and stuff).

Hmm. I'm not sure how this is a problem; whether you use 'await' or
not, using a mutable object from two separate threads or coroutines is
going to have that effect.

The way I'm seeing it, coroutines are like cooperatively-switched
threads; you don't have to worry about certain operations being
interrupted (particularly low-level ones like refcount changes or list
growth), but any time you hit an 'await', you have to assume a context
switch. That's all very well, but I'm not sure it's that big a problem
to accept that any call could context-switch; atomicity is already a
concern in other cases, which is why we have principles like EAFP
rather than LBYL.

There's clearly some benefit to being able to assume that certain
operations are uninterruptible (because there's no 'await' anywhere in
them), but are they truly so? Signals can already interrupt something
_anywhere_:

>>> import signal
>>> def mutate(sig, frm):
...     print("Mutating the global!")
...     lst[5] = 0
...
>>> signal.signal(signal.SIGALRM, mutate)
<Handlers.SIG_DFL: 0>
>>> lst = [1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> def spin():
...     while True:
...         lst[0] += lst[5]
...         lst[2] += lst[0]
...         lst[4] += lst[6]
...         lst[6] -= lst[2]
...         if not lst[5]:
...             print("Huh???")
...             break
...
>>> signal.alarm(2)
0
>>> spin()
Mutating the global!
Huh???

Any time you're using mutable globals, you have to assume that they
can be mutated out from under you. Coroutines don't change this, so is
there anything gained by knowing where you can't be interrupted by one
specific possible context switch?

ChrisA

From yselivanov.ml at gmail.com  Fri Jun 26 17:48:37 2015
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Fri, 26 Jun 2015 11:48:37 -0400
Subject: [Python-Dev] Importance of "async" keyword
In-Reply-To: <CAPTjJmpBXYzw7_qNDe4RKkcjpNyW9bOH7WTb3Oq-ymwL7tAzdg@mail.gmail.com>
References: <558AF376.30607@mail.de> <558B1034.6080504@gmail.com>
 <558B1F72.9000409@mail.de> <20150625021659.GS20701@ando.pearwood.info>
 <558C2489.2080305@mail.de>
 <CAL3CFcUywCtz04nkPamznR1vTeAHYiJFn4_j-9sFnAad6ZkBjg@mail.gmail.com>
 <558D5834.2030909@mail.de> <558D5FC0.90306@stoneleaf.us>
 <CAPTjJmpBXYzw7_qNDe4RKkcjpNyW9bOH7WTb3Oq-ymwL7tAzdg@mail.gmail.com>
Message-ID: <558D7455.3030507@gmail.com>



On 2015-06-26 10:31 AM, Chris Angelico wrote:
> On Sat, Jun 27, 2015 at 12:20 AM, Ethan Furman<ethan at stoneleaf.us>  wrote:
>> >As Nick said earlier: the caller always blocks; by extension (to my mind, at
>> >least) putting an `await` in front of something is saying, "it's okay if
>> >other tasks run while I'm blocking on this call."
> Apologies if this is a really REALLY dumb question, but... How hard
> would it be to then dispense with the await keyword, and simply
> _always_  behave that way? Something like:

Chris, Sven, if you want this behavior, gevent & Stackless Python
are your friends.  This approach, however, won't ever be merged in
CPython (this was discussed plenty of times both on python-ideas
and python-dev).

There is also a great essay about explicit vs implicit suspension
points: https://glyph.twistedmatrix.com/2014/02/unyielding.html

Yury

From rdmurray at bitdance.com  Fri Jun 26 18:07:03 2015
From: rdmurray at bitdance.com (R. David Murray)
Date: Fri, 26 Jun 2015 12:07:03 -0400
Subject: [Python-Dev] Importance of "async" keyword
In-Reply-To: <CAPTjJmowfe6qqLL_hqcfTiR1MMmFEt4oCJV3bfkUmXLtXYjVYw@mail.gmail.com>
References: <558AF376.30607@mail.de> <558B1034.6080504@gmail.com>
 <558B1F72.9000409@mail.de> <20150625021659.GS20701@ando.pearwood.info>
 <558C2489.2080305@mail.de>
 <CAL3CFcUywCtz04nkPamznR1vTeAHYiJFn4_j-9sFnAad6ZkBjg@mail.gmail.com>
 <558D5834.2030909@mail.de> <558D5FC0.90306@stoneleaf.us>
 <CAPTjJmpBXYzw7_qNDe4RKkcjpNyW9bOH7WTb3Oq-ymwL7tAzdg@mail.gmail.com>
 <20150626175113.010ef154@x230>
 <CAPTjJmowfe6qqLL_hqcfTiR1MMmFEt4oCJV3bfkUmXLtXYjVYw@mail.gmail.com>
Message-ID: <20150626160704.70D8B250E71@webabinitio.net>

On Sat, 27 Jun 2015 01:10:33 +1000, Chris Angelico <rosuav at gmail.com> wrote:
> The way I'm seeing it, coroutines are like cooperatively-switched
> threads; you don't have to worry about certain operations being
> interrupted (particularly low-level ones like refcount changes or list
> growth), but any time you hit an 'await', you have to assume a context
> switch. That's all very well, but I'm not sure it's that big a problem
> to accept that any call could context-switch; atomicity is already a
> concern in other cases, which is why we have principles like EAFP
> rather than LBYL.

Read Glyph's article, it explains why:

https://glyph.twistedmatrix.com/2014/02/unyielding.html

> There's clearly some benefit to being able to assume that certain
> operations are uninterruptible (because there's no 'await' anywhere in
> them), but are they truly so? Signals can already interrupt something
> _anywhere_:

Yes, and you could have an out of memory error anywhere in your program
as well.  (Don't do things in your signal handlers, set a flag.)  But
that doesn't change the stuff Glyph talks about (and Guido talks about)
about *reasoning* about your code.

I did my best to avoid using threads, and never invested the time and
effort in Twisted.  But I love programming with asyncio for highly
concurrent applications.  It fits in my brain :)

--David

From status at bugs.python.org  Fri Jun 26 18:08:25 2015
From: status at bugs.python.org (Python tracker)
Date: Fri, 26 Jun 2015 18:08:25 +0200 (CEST)
Subject: [Python-Dev] Summary of Python tracker Issues
Message-ID: <20150626160825.0DBD85666C@psf.upfronthosting.co.za>


ACTIVITY SUMMARY (2015-06-19 - 2015-06-26)
Python tracker at http://bugs.python.org/

To view or respond to any of the issues listed below, click on the issue.
Do NOT respond to this message.

Issues counts and deltas:
  open    4908 (+16)
  closed 31369 (+27)
  total  36277 (+43)

Open issues with patches: 2254 


Issues opened (29)
==================

#15014: smtplib: add support for arbitrary auth methods
http://bugs.python.org/issue15014  reopened by barry

#24475: The docs never define what a pool "task" is
http://bugs.python.org/issue24475  opened by Zahari.Dim

#24477: In argparse subparser's option goes to parent parser
http://bugs.python.org/issue24477  opened by py.user

#24479: Support LMMS project files in mimetypes.guess_type
http://bugs.python.org/issue24479  opened by Andreas Nilsson

#24481: hotshot pack_string Heap Buffer Overflow
http://bugs.python.org/issue24481  opened by JohnLeitch

#24482: itertools.tee causes segfault in a multithreading environment,
http://bugs.python.org/issue24482  opened by Dmitry Odzerikho

#24483: Avoid repeated hash calculation in C implementation of functoo
http://bugs.python.org/issue24483  opened by serhiy.storchaka

#24484: multiprocessing cleanup occasionally throws exception
http://bugs.python.org/issue24484  opened by Jorge Herskovic

#24485: Function source inspection fails on closures
http://bugs.python.org/issue24485  opened by malthe

#24488: ConfigParser.getboolean fails on boolean options
http://bugs.python.org/issue24488  opened by andreas-h

#24492: using custom objects as modules: AttributeErrors new in 3.5
http://bugs.python.org/issue24492  opened by arigo

#24493: subprocess with env=os.environ doesn't preserve environment va
http://bugs.python.org/issue24493  opened by The Compiler

#24494: Can't specify encoding with fileinput and inplace=True
http://bugs.python.org/issue24494  opened by lilydjwg

#24498: Shoudl ptags and eptags be removed from repo?
http://bugs.python.org/issue24498  opened by r.david.murray

#24499: Python Installer text piles up during installation process
http://bugs.python.org/issue24499  opened by Zach "The Quantum Mechanic" W

#24500: provide context manager to redirect C output
http://bugs.python.org/issue24500  opened by Zahari.Dim

#24501: configure does not find (n)curses in /usr/local/libs
http://bugs.python.org/issue24501  opened by petepdx

#24502: OS X 2.7 package has zeros for version numbers in sub-packages
http://bugs.python.org/issue24502  opened by Jim Zajkowski

#24503: csv.writer fails when within csv.reader
http://bugs.python.org/issue24503  opened by ezzieyguywuf

#24505: shutil.which wrong result on Windows
http://bugs.python.org/issue24505  opened by bobjalex

#24506: make fails with gcc 4.9 due to fatal warning of unused variabl
http://bugs.python.org/issue24506  opened by krichter

#24507: CRLF issues
http://bugs.python.org/issue24507  opened by RusiMody

#24508: Backport 3.5's Windows build project files to 2.7
http://bugs.python.org/issue24508  opened by zach.ware

#24510: Make _PyCoro_GetAwaitableIter a public API
http://bugs.python.org/issue24510  opened by yselivanov

#24511: Add methods for async protocols
http://bugs.python.org/issue24511  opened by yselivanov

#24512: multiprocessing should log a warning when forking multithreade
http://bugs.python.org/issue24512  opened by trcarden

#24514: tarfile fails to extract archive (handled fine by gnu tar and 
http://bugs.python.org/issue24514  opened by pombreda

#24515: docstring of isinstance
http://bugs.python.org/issue24515  opened by Luc Saffre

#24516: SSL create_default_socket purpose insufficiently documented
http://bugs.python.org/issue24516  opened by messa



Most recent 15 issues with no replies (15)
==========================================

#24512: multiprocessing should log a warning when forking multithreade
http://bugs.python.org/issue24512

#24499: Python Installer text piles up during installation process
http://bugs.python.org/issue24499

#24498: Shoudl ptags and eptags be removed from repo?
http://bugs.python.org/issue24498

#24481: hotshot pack_string Heap Buffer Overflow
http://bugs.python.org/issue24481

#24477: In argparse subparser's option goes to parent parser
http://bugs.python.org/issue24477

#24475: The docs never define what a pool "task" is
http://bugs.python.org/issue24475

#24466: extend_path explanation in documentation is ambiguous
http://bugs.python.org/issue24466

#24424: xml.dom.minidom: performance issue with Node.insertBefore()
http://bugs.python.org/issue24424

#24417: Type-specific documentation for __format__ methods
http://bugs.python.org/issue24417

#24415: SIGINT always reset to SIG_DFL by Py_Finalize()
http://bugs.python.org/issue24415

#24414: MACOSX_DEPLOYMENT_TARGET set incorrectly by configure
http://bugs.python.org/issue24414

#24407: Use after free in PyDict_merge
http://bugs.python.org/issue24407

#24381: Got warning when compiling ffi.c on Mac
http://bugs.python.org/issue24381

#24364: Not all defects pass through email policy
http://bugs.python.org/issue24364

#24352: Provide a way for assertLogs to optionally not hide the loggin
http://bugs.python.org/issue24352



Most recent 15 issues waiting for review (15)
=============================================

#24514: tarfile fails to extract archive (handled fine by gnu tar and 
http://bugs.python.org/issue24514

#24508: Backport 3.5's Windows build project files to 2.7
http://bugs.python.org/issue24508

#24506: make fails with gcc 4.9 due to fatal warning of unused variabl
http://bugs.python.org/issue24506

#24483: Avoid repeated hash calculation in C implementation of functoo
http://bugs.python.org/issue24483

#24468: Expose C level compiler flag constants to Python code
http://bugs.python.org/issue24468

#24467: bytearray pop and remove Buffer Over-read
http://bugs.python.org/issue24467

#24465: Make tarfile have deterministic sorting
http://bugs.python.org/issue24465

#24464: Got warning when compiling sqlite3 module on Mac OS X
http://bugs.python.org/issue24464

#24462: bytearray.find Buffer Over-read
http://bugs.python.org/issue24462

#24458: Documentation for PEP 489
http://bugs.python.org/issue24458

#24456: audioop.adpcm2lin Buffer Over-read
http://bugs.python.org/issue24456

#24452: Make webbrowser support Chrome on Mac OS X
http://bugs.python.org/issue24452

#24451: Add metrics to future objects (concurrent or asyncio?)
http://bugs.python.org/issue24451

#24450: Add cr_await calculated property to coroutine object
http://bugs.python.org/issue24450

#24440: Move the buildslave setup information from the wiki to the dev
http://bugs.python.org/issue24440



Top 10 most discussed issues (10)
=================================

#24483: Avoid repeated hash calculation in C implementation of functoo
http://bugs.python.org/issue24483  15 msgs

#24129: Incorrect (misleading) statement in the execution model docume
http://bugs.python.org/issue24129  11 msgs

#24484: multiprocessing cleanup occasionally throws exception
http://bugs.python.org/issue24484  10 msgs

#20387: tokenize/untokenize roundtrip fails with tabs
http://bugs.python.org/issue20387   8 msgs

#24263: unittest cannot load module whose name starts with Unicode
http://bugs.python.org/issue24263   8 msgs

#23883: __all__ lists are incomplete
http://bugs.python.org/issue23883   7 msgs

#15014: smtplib: add support for arbitrary auth methods
http://bugs.python.org/issue15014   6 msgs

#24325: Speedup types.coroutine()
http://bugs.python.org/issue24325   6 msgs

#24514: tarfile fails to extract archive (handled fine by gnu tar and 
http://bugs.python.org/issue24514   6 msgs

#12210: test_smtplib: intermittent failures on FreeBSD
http://bugs.python.org/issue12210   5 msgs



Issues closed (27)
==================

#13213: generator.throw() behavior
http://bugs.python.org/issue13213  closed by vadmium

#23246: distutils fails to locate vcvarsall with Visual C++ Compiler f
http://bugs.python.org/issue23246  closed by steve.dower

#23684: urlparse() documentation does not account for default scheme
http://bugs.python.org/issue23684  closed by berker.peksag

#24244: Python exception on strftime with %f on Python 3 and Python 2 
http://bugs.python.org/issue24244  closed by python-dev

#24309: string.Template should be using str.format and/or deprecated
http://bugs.python.org/issue24309  closed by wau

#24400: Awaitable ABC incompatible with functools.singledispatch
http://bugs.python.org/issue24400  closed by yselivanov

#24426: re.split performance degraded significantly by capturing group
http://bugs.python.org/issue24426  closed by serhiy.storchaka

#24436: _PyTraceback_Add has no const qualifier for its char * argumen
http://bugs.python.org/issue24436  closed by serhiy.storchaka

#24439: Feedback for awaitable coroutine documentation
http://bugs.python.org/issue24439  closed by yselivanov

#24447: tab indentation breaks in tokenize.untokenize
http://bugs.python.org/issue24447  closed by terry.reedy

#24448: Syntax highlighting marks multiline comments as strings
http://bugs.python.org/issue24448  closed by terry.reedy

#24457: audioop.lin2adpcm Buffer Over-read
http://bugs.python.org/issue24457  closed by serhiy.storchaka

#24474: Accidental exception chaining in inspect.Signature.bind()
http://bugs.python.org/issue24474  closed by ncoghlan

#24476: Statically link vcruntime140.dll
http://bugs.python.org/issue24476  closed by steve.dower

#24478: asyncio: segfault in test_env_var_debug() on non-debug Windows
http://bugs.python.org/issue24478  closed by zach.ware

#24480: Python 2.7.10
http://bugs.python.org/issue24480  closed by yliu120

#24486: http/client.py block indefinitely on line 308 in _read_status
http://bugs.python.org/issue24486  closed by Julien.Palard

#24487: Change asyncio.async() → ensure_future()
http://bugs.python.org/issue24487  closed by yselivanov

#24489: cmath.polar() can raise due to pre-existing errno
http://bugs.python.org/issue24489  closed by pitrou

#24490: DeprecationWarning hidden even after warnings.filterwarnings('
http://bugs.python.org/issue24490  closed by ned.deily

#24491: inspect.getsource can't get source code if provided function i
http://bugs.python.org/issue24491  closed by ned.deily

#24495: asyncio.ensure_future() AttributeError with "async def" co
http://bugs.python.org/issue24495  closed by yselivanov

#24496: Non-idiomatic examples in gzip docs
http://bugs.python.org/issue24496  closed by berker.peksag

#24497: test_decimal.py contains a dead link
http://bugs.python.org/issue24497  closed by ned.deily

#24504: os.listdir() error if the last folder starts not with the capi
http://bugs.python.org/issue24504  closed by Djarbore

#24509: Undocumented features of asyncio: call_at, call_later
http://bugs.python.org/issue24509  closed by yselivanov

#24513: decimal test version mismatch
http://bugs.python.org/issue24513  closed by skrah

From rosuav at gmail.com  Fri Jun 26 18:20:17 2015
From: rosuav at gmail.com (Chris Angelico)
Date: Sat, 27 Jun 2015 02:20:17 +1000
Subject: [Python-Dev] Importance of "async" keyword
In-Reply-To: <20150626160704.70D8B250E71@webabinitio.net>
References: <558AF376.30607@mail.de> <558B1034.6080504@gmail.com>
 <558B1F72.9000409@mail.de>
 <20150625021659.GS20701@ando.pearwood.info>
 <558C2489.2080305@mail.de>
 <CAL3CFcUywCtz04nkPamznR1vTeAHYiJFn4_j-9sFnAad6ZkBjg@mail.gmail.com>
 <558D5834.2030909@mail.de> <558D5FC0.90306@stoneleaf.us>
 <CAPTjJmpBXYzw7_qNDe4RKkcjpNyW9bOH7WTb3Oq-ymwL7tAzdg@mail.gmail.com>
 <20150626175113.010ef154@x230>
 <CAPTjJmowfe6qqLL_hqcfTiR1MMmFEt4oCJV3bfkUmXLtXYjVYw@mail.gmail.com>
 <20150626160704.70D8B250E71@webabinitio.net>
Message-ID: <CAPTjJmrwBJCSjeS0inuf8hdsnqCjWMN7UQ7ZhDpw49x-UndXnA@mail.gmail.com>

On Sat, Jun 27, 2015 at 2:07 AM, R. David Murray <rdmurray at bitdance.com> wrote:
> On Sat, 27 Jun 2015 01:10:33 +1000, Chris Angelico <rosuav at gmail.com> wrote:
>> The way I'm seeing it, coroutines are like cooperatively-switched
>> threads; you don't have to worry about certain operations being
>> interrupted (particularly low-level ones like refcount changes or list
>> growth), but any time you hit an 'await', you have to assume a context
>> switch. That's all very well, but I'm not sure it's that big a problem
>> to accept that any call could context-switch; atomicity is already a
>> concern in other cases, which is why we have principles like EAFP
>> rather than LBYL.
>
> Read Glyph's article, it explains why:
>
> https://glyph.twistedmatrix.com/2014/02/unyielding.html

Makes some fair points, but IMO it emphasizes what I'm saying about
atomicity. If you really need it, establish it by something other than
just having code that naively progresses through. That's what
databasing is for, after all. Threading and signals force you to think
about concurrency; other models *may* allow you to pretend it doesn't
exist.

Anyway, that answers my question about why the explicit "this can let
other things run" marker. Thanks all.

ChrisA

From ncoghlan at gmail.com  Fri Jun 26 19:05:47 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sat, 27 Jun 2015 03:05:47 +1000
Subject: [Python-Dev] Importance of "async" keyword
In-Reply-To: <558D5834.2030909@mail.de>
References: <558AF376.30607@mail.de> <558B1034.6080504@gmail.com>
 <558B1F72.9000409@mail.de>
 <20150625021659.GS20701@ando.pearwood.info>
 <558C2489.2080305@mail.de>
 <CAL3CFcUywCtz04nkPamznR1vTeAHYiJFn4_j-9sFnAad6ZkBjg@mail.gmail.com>
 <558D5834.2030909@mail.de>
Message-ID: <CADiSq7fOoB9tRxhzj9Rh+H7bNUmKqe9Bmcmrg9cE876Peiffkg@mail.gmail.com>

On 26 June 2015 at 23:48, Sven R. Kunze <srkunze at mail.de> wrote:
> @Nick
> Thanks for these links; nice reads and reflect exactly what I think about
> these topics. Btw. complex numbers basically works the same way (same API)
> as integers.

The hard part there isn't using complex numbers in Python - it's
coming to grips with what they mean physically.

> I would like to see that for functions and awaitables as well.

This is akin to asking for unification of classes and modules -
they're both namespaces so they have a lot of similarities, but the
ways in which they're different are by design.

Similarly, subroutines and coroutines are not the same thing:
https://en.wikipedia.org/wiki/Coroutine#Comparison_with_subroutines

They're *related*, but they're still not the same thing.

Regards,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From Steve.Dower at microsoft.com  Fri Jun 26 17:47:32 2015
From: Steve.Dower at microsoft.com (Steve Dower)
Date: Fri, 26 Jun 2015 15:47:32 +0000
Subject: [Python-Dev] Importance of "async" keyword
In-Reply-To: <558D5834.2030909@mail.de>
References: <558AF376.30607@mail.de>	<558B1034.6080504@gmail.com>
 <558B1F72.9000409@mail.de>	<20150625021659.GS20701@ando.pearwood.info>
 <558C2489.2080305@mail.de>
 <CAL3CFcUywCtz04nkPamznR1vTeAHYiJFn4_j-9sFnAad6ZkBjg@mail.gmail.com>
 <558D5834.2030909@mail.de>
Message-ID: <BLUPR0301MB1652BA6E0F63A32184C338B5F5AD0@BLUPR0301MB1652.namprd03.prod.outlook.com>

On 06/26/2015 06:48 AM, Sven R. Kunze wrote:

> def business():
>      return complex_calc(5)
>
> def business_new()
>      return await complex_calc(10)

> Maybe, I completely missed the point of the proposal, but this is the way I would expect it to work. Putting in an 'await' whenever I see fit and it just works.

Assuming "async def business_new" (to avoid the syntax error), there's no difference between those functions or the one they're calling:

* "complex_calc" returns an awaitable object that, after you've awaited it, will result in an int.
* "business" returns the return value of "complex_calc", which is an awaitable object that, after you've awaited it, will result in an int.
* "business_new" returns an awaitable object that, after you've awaited it, will result in an int.

In all three of these cases, the result is the same. The fact that the awaitable object returned from any of them is implemented by a coroutine isn't important (in the same way that an iterable object may be implemented by a generator, but it's really irrelevant).

The value of the await syntax is when you're doing something interesting, a.k.a. your function is more than delegating directly to another function:

async def business_async():
    tasks = [complex_calc_async(i) for i in range(10)]
    results = [await t for t in tasks]
    await write_to_disk_async(filename, results)
    return sum(results)

Now it's actually useful to be able to await when we choose. Each call to complex_calc_async() could be starting a thread and then suspending until the thread is complete, so we actually start all 10 threads running here before blocking, and then collect the results in the order we started them (and not the order they complete, though I think asyncio has a function to do that). Without the explicit await this would be impossible.
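
That ordered collection can also be sketched with asyncio.gather(), which returns results in the order the awaitables were passed regardless of completion order (asyncio.as_completed() is the completion-order variant Steve alludes to); complex_calc_async here is a stand-in that sleeps rather than starting threads:

```python
import asyncio

async def complex_calc_async(i):
    # Stand-in for the thread-backed call: later tasks finish sooner,
    # so completion order is the reverse of start order.
    await asyncio.sleep(0.01 * (10 - i))
    return i * i

async def main():
    # gather() collects results in the order the awaitables were passed.
    return await asyncio.gather(*(complex_calc_async(i) for i in range(10)))

loop = asyncio.new_event_loop()
results = loop.run_until_complete(main())
loop.close()
print(results)  # [0, 1, 4, ..., 81], despite reversed completion order
```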

The async def also lets us create coroutines consistently even if they don't await anything:

if has_version:
    async def get_version_async():
        return VERSION
else:
    async def get_version_async():
        return (await get_major_version_async(), await get_minor_version_async())

async def show_version():
    print(await get_version_async())

If, like generators, regular functions became coroutines only in the presence of an await, we'd have to do convoluted code to produce the fast get_version_async(), or else make the caller worry about whether they can skip the await. (Also consider calls that cache their result - a coroutine MUST be awaited, but it doesn't have to await anything if it already has the result).
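
A sketch of that caching case - the second call is still a coroutine the caller must await, but it suspends nowhere (the names and the cached value are illustrative):

```python
import asyncio

_cached_version = None

async def get_version_async():
    # Callers must always await this, but on a cache hit it never suspends.
    global _cached_version
    if _cached_version is None:
        await asyncio.sleep(0.01)  # stand-in for a real asynchronous lookup
        _cached_version = (3, 5)
    return _cached_version

loop = asyncio.new_event_loop()
first = loop.run_until_complete(get_version_async())   # awaits the lookup
second = loop.run_until_complete(get_version_async())  # pure cache hit
loop.close()
print(first, second)  # (3, 5) (3, 5)
```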

(Aside: the "_async" suffix is based on the convention used in C#, and it's certainly one that I'll be using throughout my async Python code and encouraging in any code that I review. It's the most obvious way to let callers know whether they need to await the function result or not.)

Cheers,
Steve


From ncoghlan at gmail.com  Fri Jun 26 19:29:20 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sat, 27 Jun 2015 03:29:20 +1000
Subject: [Python-Dev] Importance of "async" keyword
In-Reply-To: <CAPTjJmpBXYzw7_qNDe4RKkcjpNyW9bOH7WTb3Oq-ymwL7tAzdg@mail.gmail.com>
References: <558AF376.30607@mail.de> <558B1034.6080504@gmail.com>
 <558B1F72.9000409@mail.de>
 <20150625021659.GS20701@ando.pearwood.info>
 <558C2489.2080305@mail.de>
 <CAL3CFcUywCtz04nkPamznR1vTeAHYiJFn4_j-9sFnAad6ZkBjg@mail.gmail.com>
 <558D5834.2030909@mail.de> <558D5FC0.90306@stoneleaf.us>
 <CAPTjJmpBXYzw7_qNDe4RKkcjpNyW9bOH7WTb3Oq-ymwL7tAzdg@mail.gmail.com>
Message-ID: <CADiSq7c0fwh3v53a1O_Trx14-egK_3SuOuTmu=cEh+An4Rp8Tg@mail.gmail.com>

On 27 June 2015 at 00:31, Chris Angelico <rosuav at gmail.com> wrote:
> I come from a background of thinking with threads, so I'm accustomed
> to doing blocking I/O and assuming/expecting that other threads will
> carry on while we wait. If asynchronous I/O can be made that
> convenient, it'd be awesome.

Folks, it's worth reading through both
https://glyph.twistedmatrix.com/2014/02/unyielding.html and
http://python-notes.curiousefficiency.org/en/latest/pep_ideas/async_programming.html

It *is* possible to make a language where *everything* is run as a
coroutine, and get rid of the subroutine/coroutine distinction that
way - a subroutine would *literally* be executed as a coroutine that
never waited for anything. The "doesn't suspend" case could then be
handled as an implicit optimisation, rather than as a distinct type,
and the *only* way to do IO would be asynchronously through an event
loop, rather than through blocking API calls.

But that language wouldn't be Python. Python's core execution model is
a synchronous procedural dynamically typed one, with everything beyond
that, whether object oriented programming, functional programming,
asynchronous programming, gradual typing, etc, being a way of managing
the kinds of complexity that arise when managing more state, or more
complicated algorithms, or more interactivity, or a larger development
team.

> But since it hasn't already been made that easy in every other
> language, I expect there's some fundamental problem with this
> approach, something that intrinsically requires every step in the
> chain to know what's a (potential) block point.

As Glyph explains, implicitly switched coroutines are just shared
memory threading under another name - when any function call can cause
you to lose control of the flow of execution, you have to consider
your locking model in much the same way as you do for preemptive
threading.

For Python 2, this "lightweight threads" model is available as
https://pypi.python.org/pypi/gevent, and there also appears to have
been some good recent progress on Python 3 compatibility:
https://github.com/gevent/gevent/issues/38

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From ethan at stoneleaf.us  Fri Jun 26 19:40:37 2015
From: ethan at stoneleaf.us (Ethan Furman)
Date: Fri, 26 Jun 2015 10:40:37 -0700
Subject: [Python-Dev] Importance of "async" keyword
In-Reply-To: <BLUPR0301MB1652BA6E0F63A32184C338B5F5AD0@BLUPR0301MB1652.namprd03.prod.outlook.com>
References: <558AF376.30607@mail.de>	<558B1034.6080504@gmail.com>
 <558B1F72.9000409@mail.de>	<20150625021659.GS20701@ando.pearwood.info>
 <558C2489.2080305@mail.de>
 <CAL3CFcUywCtz04nkPamznR1vTeAHYiJFn4_j-9sFnAad6ZkBjg@mail.gmail.com>
 <558D5834.2030909@mail.de>
 <BLUPR0301MB1652BA6E0F63A32184C338B5F5AD0@BLUPR0301MB1652.namprd03.prod.outlook.com>
Message-ID: <558D8E95.6020404@stoneleaf.us>

On 06/26/2015 08:47 AM, Steve Dower wrote:
> On 06/26/2015 06:48 AM, Sven R. Kunze wrote:
>
>> def business():
>>       return complex_calc(5)
>>
>> def business_new()
>>       return await complex_calc(10)

> Assuming "async def business_new" (to avoid the syntax error), there's no difference between those functions or the one they're calling:
>
> * "complex_calc" returns an awaitable object that, after you've awaited it, will result in an int.
> * "business" returns the return value of "complex_calc", which is an awaitable object that, after you've awaited it, will result in an int.
> * "business_new" returns an awaitable object that, after you've awaited it, will result in an int.
>
> In all three of these cases, the result is the same. The fact that the awaitable object returned from any of them is implemented by a coroutine isn't important (in the same way that an iterable object may be implemented by a generator, but it's really irrelevant).

What?  Shouldn't 'business_new' return the int?  It did await, after all.

--
~Ethan~

From ron3200 at gmail.com  Fri Jun 26 20:06:04 2015
From: ron3200 at gmail.com (Ron Adam)
Date: Fri, 26 Jun 2015 14:06:04 -0400
Subject: [Python-Dev] Importance of "async" keyword
In-Reply-To: <CAPTjJmpBXYzw7_qNDe4RKkcjpNyW9bOH7WTb3Oq-ymwL7tAzdg@mail.gmail.com>
References: <558AF376.30607@mail.de> <558B1034.6080504@gmail.com>
 <558B1F72.9000409@mail.de> <20150625021659.GS20701@ando.pearwood.info>
 <558C2489.2080305@mail.de>
 <CAL3CFcUywCtz04nkPamznR1vTeAHYiJFn4_j-9sFnAad6ZkBjg@mail.gmail.com>
 <558D5834.2030909@mail.de> <558D5FC0.90306@stoneleaf.us>
 <CAPTjJmpBXYzw7_qNDe4RKkcjpNyW9bOH7WTb3Oq-ymwL7tAzdg@mail.gmail.com>
Message-ID: <mmk4ad$k76$1@ger.gmane.org>



On 06/26/2015 10:31 AM, Chris Angelico wrote:
> Apologies if this is a really REALLY dumb question, but... How hard
> would it be to then dispense with the await keyword, and simply
> _always_  behave that way? Something like:
>
> def data_from_socket():
>      # Other tasks may run while we wait for data
>      # The socket.read() function has yield points in it
>      data = socket.read(1024, 1)
>      return transmogrify(data)
>
> def respond_to_socket():
>      while True:
>          data = data_from_socket()
>          # We can pretend that socket writes happen instantly,
>          # but if ever we can't write, it'll let other tasks wait while
>          # we're blocked on it.
>          socket.write("Got it, next please!")
>
> Do these functions really need to be aware that there are yield points
> in what they're calling?

I think "yield points" is a concept that needs to be spelled out a bit 
more clearly in PEP 492.

It seems that those points are defined by other means outside of a function 
defined with "async def".  From the PEP...

    * It is a SyntaxError to have yield or yield from expressions
      in an async function.

So somewhere in an async function, it needs to "await something" with a 
yield in it that isn't an async function.

This seems to be a bit counter intuitive to me.  Or am I missing something?

Regards,
    Ron

From yselivanov.ml at gmail.com  Fri Jun 26 20:30:25 2015
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Fri, 26 Jun 2015 14:30:25 -0400
Subject: [Python-Dev] Importance of "async" keyword
In-Reply-To: <558D8E95.6020404@stoneleaf.us>
References: <558AF376.30607@mail.de>	<558B1034.6080504@gmail.com>
 <558B1F72.9000409@mail.de>	<20150625021659.GS20701@ando.pearwood.info>
 <558C2489.2080305@mail.de>
 <CAL3CFcUywCtz04nkPamznR1vTeAHYiJFn4_j-9sFnAad6ZkBjg@mail.gmail.com>
 <558D5834.2030909@mail.de>
 <BLUPR0301MB1652BA6E0F63A32184C338B5F5AD0@BLUPR0301MB1652.namprd03.prod.outlook.com>
 <558D8E95.6020404@stoneleaf.us>
Message-ID: <558D9A41.3050309@gmail.com>



On 2015-06-26 1:40 PM, Ethan Furman wrote:
> On 06/26/2015 08:47 AM, Steve Dower wrote:
>> On 06/26/2015 06:48 AM, Sven R. Kunze wrote:
>>
>>> def business():
>>>       return complex_calc(5)
>>>
>>> def business_new()
>>>       return await complex_calc(10)
>
>> Assuming "async def business_new" (to avoid the syntax error), 
>> there's no difference between those functions or the one they're 
>> calling:
>>
>> * "complex_calc" returns an awaitable object that, after you've 
>> awaited it, will result in an int.
>> * "business" returns the return value of "complex_calc", which is an 
>> awaitable object that, after you've awaited it, will result in an int.
>> * "business_new" returns an awaitable object that, after you've 
>> awaited it, will result in an int.
>>
>> In all three of these cases, the result is the same. The fact that 
>> the awaitable object returned from any of them is implemented by a 
>> coroutine isn't important (in the same way that an iterable object 
>> may be implemented by a generator, but it's really irrelevant).
>
> What?  Shouldn't 'business_new' return the int?  It did await, after all. 

"business_new" should be defined with an 'async' keyword, that's where 
all the confusion came from:

   async def business_new():
      return await complex_calc(10)

Now, "business_new()" returns a coroutine (which will resolve to the 
result of "complex_calc" awaitable), "await business_new()" will return 
an int.
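
A minimal runnable version of that distinction, with complex_calc reduced to a trivial coroutine (the doubling is illustrative):

```python
import asyncio
import inspect

async def complex_calc(x):
    # Trivial stand-in for the real awaitable computation.
    return x * 2

async def business_new():
    return await complex_calc(10)

coro = business_new()
# Calling the function only creates the coroutine object...
assert inspect.iscoroutine(coro)

# ...awaiting it (here via the event loop) produces the int.
loop = asyncio.new_event_loop()
result = loop.run_until_complete(coro)
loop.close()
print(result)  # 20
```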


Yury

From Steve.Dower at microsoft.com  Fri Jun 26 21:03:14 2015
From: Steve.Dower at microsoft.com (Steve Dower)
Date: Fri, 26 Jun 2015 19:03:14 +0000
Subject: [Python-Dev] Importance of "async" keyword
In-Reply-To: <558D8E95.6020404@stoneleaf.us>
References: <558AF376.30607@mail.de>	<558B1034.6080504@gmail.com>
 <558B1F72.9000409@mail.de>	<20150625021659.GS20701@ando.pearwood.info>
 <558C2489.2080305@mail.de>
 <CAL3CFcUywCtz04nkPamznR1vTeAHYiJFn4_j-9sFnAad6ZkBjg@mail.gmail.com>
 <558D5834.2030909@mail.de>
 <BLUPR0301MB1652BA6E0F63A32184C338B5F5AD0@BLUPR0301MB1652.namprd03.prod.outlook.com>
 <558D8E95.6020404@stoneleaf.us>
Message-ID: <BLUPR0301MB1652438CA97DD11EA902D7C0F5AD0@BLUPR0301MB1652.namprd03.prod.outlook.com>

Ethan Furman wrote:
> On 06/26/2015 08:47 AM, Steve Dower wrote:
>> On 06/26/2015 06:48 AM, Sven R. Kunze wrote:
>>
>>> def business():
>>> return complex_calc(5)
>>>
>>> def business_new()
>>> return await complex_calc(10)
> 
>> Assuming "async def business_new" (to avoid the syntax error), there's no
>> difference between those functions or the one they're calling:
>>
>> * "complex_calc" returns an awaitable object that, after you've awaited it,
>> will result in an int.
>> * "business" returns the return value of "complex_calc", which is an awaitable
>> object that, after you've awaited it, will result in an int.
>> * "business_new" returns an awaitable object that, after you've awaited it,
>> will result in an int.
>>
>> In all three of these cases, the result is the same. The fact that the
>> awaitable object returned from any of them is implemented by a coroutine isn't
>> important (in the same way that an iterable object may be implemented by a
>> generator, but it's really irrelevant).
> 
> What? Shouldn't 'business_new' return the int? It did await, after all.

I assumed "async def business_new()", rather than some imaginary "await in a non-async function will block because I love to create deadlocks in my code" feature.

Note that blocking prevents *all* coroutines from making progress, unlike threading. When you await all the way to an event loop, it defers the rest of the coroutine until a signal (via a callback) is raised and continues running other coroutines.
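
One standard way to keep a blocking call from stalling every coroutine is to hand it to a worker thread via the loop's run_in_executor() (a sketch; time.sleep stands in for real blocking work, and the ticker coroutine just demonstrates that others keep running):

```python
import asyncio
import time

async def ticker(events):
    # Keeps running while the blocking call sits in a worker thread.
    for _ in range(3):
        await asyncio.sleep(0.02)
        events.append('tick')

async def main():
    loop = asyncio.get_event_loop()
    events = []
    tick_task = loop.create_task(ticker(events))
    # Calling time.sleep() here directly would freeze *all* coroutines;
    # run_in_executor moves it off the event loop's thread instead.
    await loop.run_in_executor(None, time.sleep, 0.1)
    await tick_task
    return events

loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
events = loop.run_until_complete(main())
loop.close()
print(events)  # ['tick', 'tick', 'tick']
```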

Cheers,
Steve


From ncoghlan at gmail.com  Sat Jun 27 03:42:32 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sat, 27 Jun 2015 11:42:32 +1000
Subject: [Python-Dev] Importance of "async" keyword
In-Reply-To: <mmk4ad$k76$1@ger.gmane.org>
References: <558AF376.30607@mail.de> <558B1034.6080504@gmail.com>
 <558B1F72.9000409@mail.de>
 <20150625021659.GS20701@ando.pearwood.info>
 <558C2489.2080305@mail.de>
 <CAL3CFcUywCtz04nkPamznR1vTeAHYiJFn4_j-9sFnAad6ZkBjg@mail.gmail.com>
 <558D5834.2030909@mail.de> <558D5FC0.90306@stoneleaf.us>
 <CAPTjJmpBXYzw7_qNDe4RKkcjpNyW9bOH7WTb3Oq-ymwL7tAzdg@mail.gmail.com>
 <mmk4ad$k76$1@ger.gmane.org>
Message-ID: <CADiSq7f3c1g4FrmHgPn6r4+40uHu408Z4+Zum0CLH4gzNBuTTw@mail.gmail.com>

On 27 June 2015 at 04:06, Ron Adam <ron3200 at gmail.com> wrote:
> It seems that those points are defined by other means outside of a function
> defined with "async def".  From the PEP...
>
>    * It is a SyntaxError to have yield or yield from expressions
>      in an async function.
>
> So somewhere in an async function, it needs to "await something" with a
> yield in it that isn't an async function.

This isn't the case - it can be async functions and C level coroutines
all the way down. Using a generator or other iterable instead requires
adaptation to the awaitable protocol (which makes it possible to tap
into all the work that has been done for the generator-based
coroutines used previously, rather than having to rewrite it to use
native coroutines).
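
That adaptation can be as small as a types.coroutine decorator, which marks a generator as implementing the awaitable protocol so native coroutines can await it (a minimal sketch; the bare yield mimics older generator-based asyncio code):

```python
import asyncio
import types

@types.coroutine
def legacy_step():
    # A pre-PEP-492 generator-based coroutine; the decorator marks it
    # as satisfying the awaitable protocol.
    yield            # a bare yield point, as older asyncio code used
    return 42

async def modern():
    # A native coroutine can now await the adapted generator directly.
    return await legacy_step()

loop = asyncio.new_event_loop()
value = loop.run_until_complete(modern())
loop.close()
print(value)  # 42
```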

This isn't very clear in the currently released beta as we made some
decisions to simplify the original implementation that we thought
would also be OK from an API design perspective, but turned out to
pose significant problems once folks actually started trying to
integrate native coroutines with other async systems beyond asyncio.

Yury fixed those underlying object model limitations as part of
addressing Ben Darnell's report of problems attempting to integrate
native coroutine support into Tornado
(https://hg.python.org/cpython/rev/7a0a1a4ac639).

Yury had already updated the PEP to account for those changes, but
I've now also added a specific note regarding the API design change in
response to beta feedback: https://hg.python.org/peps/rev/0c963fa25db8

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From jimjjewett at gmail.com  Sat Jun 27 03:51:07 2015
From: jimjjewett at gmail.com (Jim J. Jewett)
Date: Fri, 26 Jun 2015 18:51:07 -0700 (PDT)
Subject: [Python-Dev] Importance of "async" keyword
In-Reply-To: <20150626175113.010ef154@x230>
Message-ID: <558e018b.2f2c320a.d0bf.14e6@mx.google.com>



On Fri Jun 26 16:51:13 CEST 2015, Paul Sokolovsky wrote:

> So, currently in Python you know if you do:

>    socket.write(buf)

> Then you know it will finish without interruptions for entire buffer.

How do you know that?

Are you assuming that socket.write is a builtin, rather than a
python method?  (Not even a python wrapper around a builtin?)

Even if that were true, it would only mean that the call itself
is processed within a single bytecode ... there is no guarantee
that the write method won't release the GIL or call back into
python (and thereby allow a thread switch) as part of its own
logic.

> And if you write:

>    await socket.write(buf)

> then you know there may be interruption points inside socket.write(),
> in particular something else may mutate it while it's being written.

I would consider that external mutation to be bad form ... at least
as bad as violating the expectation of an atomic socket.write() up
above.

So either way, nothing bad SHOULD happen, but it might anyhow.  I'm
not seeing what the async-coloring actually bought you...
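For what it's worth, the distinction being debated can be sketched with a
toy round-robin scheduler (not asyncio; everything below is invented for
illustration): with coroutines, a task switch can only happen at an await
expression, whereas a thread can be preempted between any two bytecodes.

```python
import types

@types.coroutine
def switch():
    yield  # a bare suspension point

log = []

async def task(name):
    log.append(name + ":start")
    await switch()  # other tasks may run here, and only here
    log.append(name + ":end")

def run_all(coros):
    # Round-robin scheduler: resume each task in turn until all finish.
    pending = list(coros)
    while pending:
        for coro in list(pending):
            try:
                coro.send(None)
            except StopIteration:
                pending.remove(coro)

run_all([task("a"), task("b")])
print(log)  # ['a:start', 'b:start', 'a:end', 'b:end']
```

The interleaving is fully determined by where the awaits are, which is the
guarantee the "coloring" is supposed to buy; whether that guarantee is worth
the syntax is exactly the question in this thread.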

-jJ

--

If there are still threading problems with my replies, please
email me with details, so that I can try to resolve them.  -jJ

From benjamin at python.org  Sat Jun 27 22:57:35 2015
From: benjamin at python.org (Benjamin Peterson)
Date: Sat, 27 Jun 2015 15:57:35 -0500
Subject: [Python-Dev] Unicode 8.0 and 3.5
In-Reply-To: <mlv2kk$fc6$1@ger.gmane.org>
References: <mlv2kk$fc6$1@ger.gmane.org>
Message-ID: <1435438655.961087.309431641.3EE99A15@webmail.messagingengine.com>



On Thu, Jun 18, 2015, at 13:27, Terry Reedy wrote:
> Unicode 8.0 was just released.  Can we have unicodedata updated to match 
> in 3.5?

3.5 now has Unicode 8.0.0.

From tjreedy at udel.edu  Sun Jun 28 06:22:07 2015
From: tjreedy at udel.edu (Terry Reedy)
Date: Sun, 28 Jun 2015 00:22:07 -0400
Subject: [Python-Dev] Unicode 8.0 and 3.5
In-Reply-To: <1435438655.961087.309431641.3EE99A15@webmail.messagingengine.com>
References: <mlv2kk$fc6$1@ger.gmane.org>
 <1435438655.961087.309431641.3EE99A15@webmail.messagingengine.com>
Message-ID: <mmnspg$i9f$1@ger.gmane.org>

On 6/27/2015 4:57 PM, Benjamin Peterson wrote:
>
>
> On Thu, Jun 18, 2015, at 13:27, Terry Reedy wrote:
>> Unicode 8.0 was just released.  Can we have unicodedata updated to match
>> in 3.5?
>
> 3.5 now has Unicode 8.0.0.

Great.  Does the release PEP or something else have instructions on how 
to do this?

-- 
Terry Jan Reedy


From storchaka at gmail.com  Sun Jun 28 08:03:50 2015
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Sun, 28 Jun 2015 09:03:50 +0300
Subject: [Python-Dev] Unicode 8.0 and 3.5
In-Reply-To: <mm07ep$sgu$1@ger.gmane.org>
References: <mlv2kk$fc6$1@ger.gmane.org> <55830EE9.30509@hastings.org>
 <55831D36.1070108@mrabarnett.plus.com> <mm07ep$sgu$1@ger.gmane.org>
Message-ID: <mmo2o5$kdm$1@ger.gmane.org>

On 19.06.15 07:56, Serhiy Storchaka wrote:
> May be private table for case-insensitive matching in the re module
> should be updated too.

I confirm that the re module doesn't need an update for Unicode 8.0.


From storchaka at gmail.com  Sun Jun 28 09:52:38 2015
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Sun, 28 Jun 2015 10:52:38 +0300
Subject: [Python-Dev] cpython: Minor refactoring. Move reference count
 logic into function that adds entry.
In-Reply-To: <20150628050340.14303.41221@psf.io>
References: <20150628050340.14303.41221@psf.io>
Message-ID: <mmo947$1ll$1@ger.gmane.org>

On 28.06.15 08:03, raymond.hettinger wrote:
> https://hg.python.org/cpython/rev/637e197be547
> changeset:   96697:637e197be547
> user:        Raymond Hettinger <python at rcn.com>
> date:        Sat Jun 27 22:03:35 2015 -0700
> summary:
>    Minor refactoring.  Move reference count logic into function that adds entry.

This doesn't look correct: key should be incref'ed before calling 
PyObject_RichCompareBool(), for the same reason as startkey.
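For context, the hazard behind this can be seen even at the Python level
(a sketch, with an invented class name): an equality comparison during a
hash lookup can run arbitrary Python code, which at the C level could drop
the last reference to a key unless the lookup holds its own reference.

```python
calls = []

class Colliding:
    """Hypothetical key type: every instance lands in the same hash slot."""
    def __init__(self, name):
        self.name = name

    def __hash__(self):
        return 1  # force collisions so lookups must fall back to __eq__

    def __eq__(self, other):
        # Arbitrary Python code runs here, in the middle of the C lookup.
        calls.append("eq ran mid-lookup")
        return isinstance(other, Colliding) and self.name == other.name

d = {Colliding("a"): 1}
d.get(Colliding("b"))  # the collision invokes __eq__ during the probe
print(calls)
```

Since that __eq__ could just as well mutate the container and release the
stored key, the C code has to incref any key it passes to
PyObject_RichCompareBool().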



From robertc at robertcollins.net  Mon Jun 29 06:28:56 2015
From: robertc at robertcollins.net (Robert Collins)
Date: Mon, 29 Jun 2015 16:28:56 +1200
Subject: [Python-Dev] Not getting mail from the issue tracker
Message-ID: <CAJ3HoZ0L74d=s5p5oH40zd3X3VyGJB_XwoxPmc-GCAPGVtdgtg@mail.gmail.com>

Firstly, a big sorry to all those unittest issues I haven't commented on.

Turns out I simply don't get mail from the issue tracker. :(.

Who should I speak to to try and debug this?

In the interim, if you want me to look at an issue please ping me on
IRC (lifeless) or mail me directly.

-Rob

-- 
Robert Collins <rbtcollins at hp.com>
Distinguished Technologist
HP Converged Cloud

From benjamin at python.org  Mon Jun 29 17:05:20 2015
From: benjamin at python.org (Benjamin Peterson)
Date: Mon, 29 Jun 2015 10:05:20 -0500
Subject: [Python-Dev] Not getting mail from the issue tracker
In-Reply-To: <CAJ3HoZ0L74d=s5p5oH40zd3X3VyGJB_XwoxPmc-GCAPGVtdgtg@mail.gmail.com>
References: <CAJ3HoZ0L74d=s5p5oH40zd3X3VyGJB_XwoxPmc-GCAPGVtdgtg@mail.gmail.com>
Message-ID: <1435590320.868899.310590081.2597774D@webmail.messagingengine.com>



On Sun, Jun 28, 2015, at 23:28, Robert Collins wrote:
> Firstly, a big sorry to all those unittest issues I haven't commented on.
> 
> Turns out I simply don't get mail from the issue tracker. :(.

Is it in your spam?

From srkunze at mail.de  Tue Jun 30 21:39:32 2015
From: srkunze at mail.de (Sven R. Kunze)
Date: Tue, 30 Jun 2015 21:39:32 +0200
Subject: [Python-Dev] Importance of "async" keyword
In-Reply-To: <558D9A41.3050309@gmail.com>
References: <558AF376.30607@mail.de>	<558B1034.6080504@gmail.com>
 <558B1F72.9000409@mail.de>	<20150625021659.GS20701@ando.pearwood.info>
 <558C2489.2080305@mail.de>
 <CAL3CFcUywCtz04nkPamznR1vTeAHYiJFn4_j-9sFnAad6ZkBjg@mail.gmail.com>
 <558D5834.2030909@mail.de>
 <BLUPR0301MB1652BA6E0F63A32184C338B5F5AD0@BLUPR0301MB1652.namprd03.prod.outlook.com>
 <558D8E95.6020404@stoneleaf.us> <558D9A41.3050309@gmail.com>
Message-ID: <5592F074.7010904@mail.de>

Hi,

I have never said I wanted implicit asyncio. Explicit is the Python way, 
after all; it has served me well, and I stick to that.

I like the 'await' syntax to mark suspension points. But the 'async' 
coloring makes no sense to me. It is an implementation detail of 
asyncio (IMHO).


From what I can gather, there are 4 distinct cases I would need to take 
care of. However, I do not wish the team to go through all of them. 
Additionally, they would even need to think about whether a function 
returns an 'awaitable that returns an int', an 'int', or both, depending 
on the input parameters (or due to an implementation error). Maybe 
somebody even makes a crazy error like returning an 'awaitable that 
returns an awaitable that returns an int'. If somebody likes to 
create/handle this kind of stuff (for whatever reason), using the 
asyncio library and creating crazy wrapper objects would be the way to 
go; not the default syntax.
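The ambiguity described above can be sketched with a hypothetical API (all
names below are invented for illustration): a function that sometimes
returns a plain int and sometimes an 'awaitable that returns an int'
forces every caller into a runtime check.

```python
import inspect

async def fetch_async():
    return 42

def fetch(mode):
    # Hypothetical API: a plain int for "sync", an awaitable for "async".
    return 42 if mode == "sync" else fetch_async()

async def caller(mode):
    result = fetch(mode)
    if inspect.isawaitable(result):  # every caller needs this check
        result = await result
    return result

def run(coro):
    # Drive a coroutine to completion by hand (no event loop needed here).
    try:
        coro.send(None)
    except StopIteration as exc:
        return exc.value

print(run(caller("sync")), run(caller("async")))  # 42 42
```

The inspect.isawaitable() test in every caller is exactly the kind of
bookkeeping the post argues a team shouldn't have to carry.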


Regards,
Sven

From moller at mollerware.com  Tue Jun 30 21:40:38 2015
From: moller at mollerware.com (Chris Moller)
Date: Tue, 30 Jun 2015 15:40:38 -0400
Subject: [Python-Dev] Capturing PyRun_String stdout
Message-ID: <5592F0B6.1060002@mollerware.com>

I expect this has been asked before, but I can't find out much about it...

I'm trying to embed Python as a scripting language and I need to capture 
the output of PyRun_String(), PyEval_EvalCode(), or whatever as a char * 
(or wchar_t * or whatever) rather than have it go to stdout.

Python 3.3.2 under plain C, not C++

And, while I'm interrupting everyone's afternoon, another question: if 
I  pass Py_single_input to PyRun_String() or 
Py_CompileString()/PyEval_EvalCode(), it accepts statements like "a=10" 
and can then properly do stuff like "print(a)".  If I use Py_eval_input 
instead, I get error messages.  In both cases, I'm using the same 
global_dict and local_dict, if that makes any difference.  What am I 
doing wrong?

Thanks,
Chris Moller

From amauryfa at gmail.com  Tue Jun 30 22:48:44 2015
From: amauryfa at gmail.com (Amaury Forgeot d'Arc)
Date: Tue, 30 Jun 2015 22:48:44 +0200
Subject: [Python-Dev] Capturing PyRun_String stdout
In-Reply-To: <5592F0B6.1060002@mollerware.com>
References: <5592F0B6.1060002@mollerware.com>
Message-ID: <CAGmFidafwLjfCSajTW=3Gv_xoViQyfRiAdsGaT+7pi3_812dDg@mail.gmail.com>

Hi,

2015-06-30 21:40 GMT+02:00 Chris Moller <moller at mollerware.com>:

>  I expect this has been asked before, but I can't find out much about it...
>

Please ask this kind of question on the python-users mailing list
<https://www.python.org/community/lists/> (or comp.lang.python).

There you will find helpful people who will tell you how to modify
sys.stdout, and the differences between exec() and eval().

Best of luck using Python!
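The two suggestions above can be sketched at the Python level (the same
distinctions apply when embedding: Py_eval_input corresponds to eval(),
while Py_single_input/Py_file_input accept statements like exec() does):

```python
import contextlib
import io

# 1. Capture output instead of letting print() write to stdout.
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    print("hello from captured code")
print(repr(buf.getvalue()))  # 'hello from captured code\n'

# 2. eval() only accepts expressions; "a = 10" is a statement,
#    so it needs exec().
ns = {}
exec("a = 10", ns)            # statement: fine with exec
print(eval("a + 1", ns))      # expression: 11
try:
    eval("a = 10", ns)        # assignment is not an expression
except SyntaxError:
    print("SyntaxError, as expected")
```

When embedding, the equivalent of the redirect is to install a StringIO
(or any object with a write method) as sys.stdout before running the code,
then read it back afterwards.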


> I'm trying to embed Python as a scripting language and I need to capture
> the output of PyRun_String(), PyEval_EvalCode(), or whatever as a char *
> (or wchar_t * or whatever) rather than have it go to stdout.
>
> Python 3.3.2 under plain C, not C++
>
> And, while I'm interrupting everyone's afternoon, another question: if I
> pass Py_single_input to PyRun_String() or
> Py_CompileString()/PyEval_EvalCode(), it accepts statements like "a=10" and
> can then properly do stuff like "print(a)".  If I use Py_eval_input
> instead, I get error messages.  In both cases, I'm using the same
> global_dict and local_dict, if that makes any difference.  What am I doing
> wrong?
>
> Thanks,
> Chris Moller
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/amauryfa%40gmail.com
>
>


-- 
Amaury Forgeot d'Arc